May Community Update
In May, the Quadrant team has been growing, receiving industry recognition, and delivering on promises to our customers. Here are a few highlights of what’s been going on behind the scenes at Quadrant since our last update.
New Team Members
Firstly, we’re delighted to welcome two new data engineers to the Quadrant family — Sam Darmali and Aishwarya Bose. They will help our data engineering team to ensure that we deliver what our clients require.
Both Sam and Aishwarya are eager young engineers, ready to jump in with both feet and contribute to solving some of the biggest data challenges in the industry today. We wish them luck in their roles!
Industry Recognition
We’re proud of Barkha Jasani, Quadrant’s Director of Engineering, for being recognised at the prestigious Women in IT Awards Asia 2019. She was shortlisted as a finalist in the Data Leader of the Year award category, and this acknowledgement of Barkha’s commitment to data engineering is a huge honour for her and the whole Quadrant team.
Barkha’s personal journey to date has been inspiring. Born and raised in a small town in Gujarat, India far from the major IT hubs, she started her life as one of many women who would typically go into more administrative jobs or stay home and start a family.
However, Barkha followed her dreams and enrolled in a course to study Computer Engineering. Thanks to her bravery and tenacity, when she finally graduated, she was able to join an IT firm in India before joining the Quadrant family here in Singapore.
We wish her many more awards and happy years ahead as a leading and remarkable engineer!
Data Quality Dashboard Release
As promised in our April update, we have now released the all-new Data Quality Dashboard tool, which lets users quickly assess the quality and suitability of a given location data feed for their business use case.
The dashboard contains a suite of quality metrics that provide a quick overview prior to running a full evaluation analysis. This includes a world map view of monthly and daily active users, a data completeness matrix based on Quadrant’s internal completeness scoring system, and quality distribution charts.
You can read more on the new Data Quality Dashboard here.
Training New Data Scientists (students)
Quadrant was pleased to collaborate with Real Skills Education Australia to hold a data science workshop for students from leading Australian education institutions.
The students visited the Quadrant Singapore office on an educational overseas programme and enjoyed a day-long workshop led by Roger from our Data Science team, who taught them to work with data beyond the limitations of academia.
The students learned new ways to apply data science to its maximum capacity and got a sense of what it’s like to live a day in the life as a data scientist. At Quadrant, we are strong believers in education not being limited to academia and textbooks — get out there and have some practical experiences!
We wish all the students who took time out of their studies to visit the Quadrant office in Singapore the best for their future endeavours as data scientists.
May Events
Turning to events, Quadrant was represented by Barkha at the developer conference Voxxed Days Singapore, where she gave a presentation on the overall state of blockchain in the data economy and shared her vision of how blockchain could be a game-changer for the data economy.
Finally, Quadrant was pleased to host the next edition of its monthly community meetup. This month, the theme was advertising. Roger and Qi Xuan presented, giving attendees insights into the importance of leveraging the right data for the right advertising use cases. You can read further on that event, as well as download the event material, here.
Best Regards,
The Quadrant Team
We want to give you every opportunity to stay involved. Please continue sending your questions and suggestions and check our official channels on Telegram | Twitter | Facebook | Reddit for regular updates.
Source: https://medium.com/quadrantprotocol/may-community-update-e0f7668aa553 | Authors: Nikos, Quadrant Protocol | Published: 2019-06-13 | Tags: Data Science, Technology, Singapore, Blockchain, Big Data
OAuth 2.0 Grant flows and Recommendations
In this article, we would like to provide an overview of what OAuth 2.0 is and of the concepts, such as scopes and grant types, that we must be aware of before working with OAuth 2.0.
What is OAuth 2.0?
OAuth 2.0 is an authorization framework that allows users to grant a third-party website or application access to the user’s protected resources without revealing their credentials or identity. For that purpose, an OAuth 2.0 server issues access tokens that the client applications can use to access protected resources on behalf of the resource owner.
OAuth Scopes
OAuth 2.0 scopes provide a way to limit the amount of access that is granted to an access token. When the app requests permission to access a resource through the Authorization server, it uses a scope parameter to specify what access it needs, and the authorization server uses the scope parameter to respond with the access that was actually granted.
OAuth 2.0 Terminology
Resource Owner: the entity that can grant access to a protected resource. Typically this is the end-user.
Resource Server (API Server): The server that is hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens.
Client: An application making protected resource requests on behalf of the resource owner and with its authorization. The term client does not imply any particular implementation characteristics (e.g. whether the application executes on a server, a desktop, or other devices).
Authorization Server: The server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization.
OAuth Grant Types
OAuth 2.0 provides below-mentioned grant types (“methods”) for a client application to acquire an access token that can be used to authenticate a request to API endpoints / other integrations.
Authorization code grant flow with PKCE (Proof Key for Code Exchange)
Authorization code grant flow
Client credentials grant flow
Implicit grant flow
Resource owner credentials grant flow
Authorization code grant flow with PKCE (Proof Key for Code Exchange)
Please do note that the Authorization code grant flow is considered insecure without PKCE.
Before the introduction of PKCE, the Authorization code grant flow was not recommended for use with SPAs implemented using JavaScript frameworks. Traditionally the Authorization Code flow uses a client secret when exchanging the authorization code for an access token, but there is no way to include a client secret in a JavaScript app and have it remain a secret.
The above-mentioned issue is applicable to native mobile apps as well: by decompiling a mobile app, you can view the client secret. Thankfully, the OAuth working group has solved the issue by extending the Authorization Code flow with the PKCE extension.
The Authorization Code flow with PKCE adds an additional step which allows us to protect the authorization code so that even if it is stolen during the redirect it will be useless by itself.
The key difference between the PKCE flow and the standard Authorization Code flow is that clients aren’t required to provide a client_secret. PKCE reduces security risks for native apps and SPAs, as embedded secrets aren’t required in source code, which limits exposure to reverse engineering. The client_secret is used by the Authorization server to identify the client that is making the request.
In this approach, the client first generates a runtime secret called the code_verifier. The client hashes this secret and sends this value as code_challenge as part of the frontend request. The Authorization server saves this value. The client includes the code_verifier as part of the subsequent code exchange request. The Authorization server compares the hash of the code_verifier with the original code_challenge it received.
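To make this concrete, here is a rough sketch of how a client might generate the code_verifier and code_challenge using the S256 method; the exact encoding rules are defined in RFC 7636, and this is illustrative rather than a full client:

import base64
import hashlib
import secrets

# 1. The client generates a high-entropy, per-request secret: the code_verifier.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")

# 2. It hashes the verifier with SHA-256 and base64url-encodes the digest: the code_challenge.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# 3. code_challenge (plus code_challenge_method=S256) is sent on the authorization request;
#    code_verifier is sent on the later token request, so the authorization server can verify
#    that both requests came from the same client.
print(code_challenge)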
How does It work?
Authorization Code flow with PKCE is recommended for mobile apps and SPAs. If your application can secure the client_secret, you can opt for the Authorization code flow.
Authorization code grant flow
The Authorization code grant flow is used in web apps that are server-side apps where the source code is not publicly exposed. Your application must be server-side because, during this exchange, you must also pass along your application’s Client Secret, which must always be kept secure, and you will have to store it in your client.
How does It work?
Authorization code grant flow is recommended to use with the web apps that are server-side apps where the source code is not publicly exposed.
Client credentials grant flow
Client credentials grant flow allows a web service to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. In this approach, the client (a daemon service or cron job) sends its credentials to the Authorization server; on successful authorization, the authorization server sends back the access token used to access the resources.
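As an illustration, a scheduled job might request a token roughly like this; the token endpoint URL, client id, client secret, and scope below are placeholders, and the exact parameters depend on your authorization server:

import requests

# Hypothetical token endpoint of your authorization server.
TOKEN_URL = "https://auth.example.com/oauth2/token"

response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "scope": "reports.read",  # optional; depends on the API being called
    },
    # The client authenticates with its own credentials; no user is involved.
    auth=("my-client-id", "my-client-secret"),
)
response.raise_for_status()
access_token = response.json()["access_token"]
print(access_token)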
How does It work?
Client credentials grant flow is recommended for running scheduled jobs or for authenticating requests from external systems / remote API calls.
Implicit grant flow
The Implicit grant type is used to request access tokens directly from the authorization server, without the use of an authorization code or client_secret. It has been recommended for use in JavaScript applications like Angular or React applications. This grant type does not include client authentication because the client_secret cannot be stored safely on the public client. This grant type relies on the callback URL given when the client was registered to validate the identity of the client.
How does It work?
Before the introduction of the Authorization code flow with PKCE, the Implicit grant flow was recommended for SPAs and mobile applications where client secrets cannot be kept secret.
Resource Owner Credentials Grant flow
The Resource Owner Password Credentials flow allows exchanging the username and password of a user for an access token. In this grant flow, the client application accepts the user credentials (user id and password) in an interactive form (login page) and sends them over to the authorization server for validation. The authorization server validates the credentials and responds with an access token.
The resource owner password credentials grant type is suitable in cases where the resource owner has a trust relationship with the client, such as the device operating system or a highly privileged application.
It is recommended only for first-party “official” applications released by the API provider.
How does It work?
This is not recommended for third-party clients, as it imposes additional security issues: the user ends up entering their user id and password in an external application’s login page. It is only recommended for first-party products from the Authorization Server’s own provider.
Thank you for reading the article. Please share your thoughts in the comments box. If you like our article, please share it.
References:
https://auth0.com/docs/flows
https://docs.wso2.com/display/IS530/Working+with+OAuth
Source: https://medium.com/techmonks/oauth-2-0-grant-flows-and-recommendations-2ca8e38fe1c8 | Authors: Anji | Published: 2020-07-18 | Tags: Oauth Grant Flows, Oidc, Oauth2, Microservices, Security Token
Sundar Pichai and The Ethics Of Algorithms
Today The Country Is Mocking Politicians, They Should Also Be Criticizing Google’s CEO
Photo by Goran Ivos on Unsplash
The latest congressional technology hearing was as cringeworthy as you would expect.
There were politicians who thought Google was the same company as Apple. There were politicians that wondered why Google was censoring hate-speech. There were politicians that thought Sundar Pichai’s salary and some aggressive alpha-male shouting would enable him to reveal the answer to the age old mystery of “is Google tracking our every step?”
Confused? So am I.
Through all the hardships, Pichai remained calm and collected. He provided insight to a group of politicians who clearly lacked expertise. This is difficult to do and I give him credit. For 99% of the hearing, Sundar Pichai was on fire.
But there’s one crucial question that Pichai botched. It was about the ethics of algorithms.
Listen to this question by Rep. Zoe Lofgren (D-CA),
Right now, if you google the word ‘idiot’ under images, a picture of Donald Trump comes up. I just did that,” she said. “How would that happen?”
This is Pichai’s response,
Any time you type in a keyword, as Google we have gone out and crawled and stored copies of billions of [websites’] pages in our index. And we take the keyword and match it against their pages and rank them based on over 200 signals — things like relevance, freshness, popularity, how other people are using it. And based on that, at any given time, we try to rank and find the best search results for that query. And then we evaluate them with external raters, and they evaluate it to objective guidelines. And that’s how we make sure the process is working.
Representative Zoe Lofgren later concludes that she looks forward to working with Pichai on serious issues and,
It’s pretty obvious that bias against conservative voices is not one of them [google’s priorities].
Pichai’s response was not wrong or nefarious. Pichai did an excellent job at explaining the technical-side of how Google handles queries in Layman’s terms.
However this exchange as a whole may be misleading to the public eye. It lends itself to a common, dangerous misconception that sophisticated algorithms are always unbiased.
Photo by Avi Richards on Unsplash
With this exchange, Rep. Lofgren and Pichai establish a defensive narrative that Google takes hundreds, thousands, even billions of data points into consideration before listing a website at the top. Furthermore, Google’s algorithm takes into account an unfathomable number of ‘objective guidelines’ and ‘external raters’ to evaluate. Lastly but most importantly, algorithms like this are too sophisticated to experience bias.
Of course Pichai knows this narrative is not true. But does Rep. Lofgren know? Do the other congressmen and congresswomen know? Does the public know?
Well the fact remains that algorithms were built by people. People have agendas. When people get to define what is a success and what is a failure, there will always be at least some inherent bias.
Just because a solution was discovered by an algorithm doesn’t necessarily make the solution unbiased. Sometimes, algorithms can make biased decisions, and the amount of ‘data’ and ‘guidelines’ the algorithm has access to does not make the algorithm more credible.
For instance, there are criminal justice algorithms that are prone to label African Americans as ‘high risk’ (and thus ineligible for parole) more often than Caucasians. These algorithms have access to a wide array of ‘data’ and ‘objective guidelines’ yet they still make biased decisions. Why? Because the court system is biased. All of the data the algorithms have access to is biased.
Additionally, there is an infamously biased flight algorithm that chose to remove Dr. Dao from a United Airlines flight and resulted in this traumatic video:
This is another extremely sophisticated algorithm that failed to provide bias-free judgement. So to suggest that Google’s search algorithm is unbiased because it’s a sophisticated algorithm is false. Algorithms can be incredibly prejudiced if we’re not careful.
The fact of the matter is, Google’s search algorithm is very close to being unbiased because of meticulous evaluation and consistent reevaluation by the team.
To my knowledge, the only way to validate an algorithm’s credibility is to consistently reevaluate the results by a third party. But even then, the term ‘bias’ is subjective. So this evaluation process is more like a short-answer question than a true or false question.
Photo by Nathan Dumlao on Unsplash
Pichai’s answer to the question of “how does searching ‘idiot’ reveal a picture of Donald Trump” was technically true but culturally disappointing.
Instead, consider what would’ve happened if Pichai answered Rep. Lofgren’s question with, “we have policies in place so that humans can not directly manipulate search results to make Donald Trump appear on the search of idiot. We’ve proven through independent parties that Google’s search does not show political bias and that this particular query-result could happen to a democratic president under the same conditions. Furthermore we are always reevaluating how the search engine could improve.”
This answer may not instill the same confidence of Pichai’s original answer, but it’s the most honest and complete answer in the context of bias.
Moving into an era where algorithms have more decision-making power, the general public is going to need to learn about what makes an algorithm credible and what makes an algorithm biased.
Source: https://medium.com/hackernoon/sundar-pichai-and-the-ethics-of-algorithms-7948226aa7f6 | Authors: Max Albert | Published: 2018-12-16 | Tags: Sundar Pichai, Ethics, Google, Ethics Of Algorithms, Algorithms
Build a Bot to Communicate With Your Smart Home Over Telegram
The power of Raspberry Pi and Telegram
You’ve got your smart home fully set up. You regularly like to show off with your friends how cool it is to turn on light bulbs, play videos and movies with a hint to your voice assistant, make coffee, and adjust the thermostat with a tap on an app. Congratulations!
But if you’re an automation enthusiast who rarely settles, you’ve probably grown frustrated with the number of apps you’ll have to download and the number of interfaces you’ll have to master to control your gadgets.
You’ll probably have an app for the lights, one for your media center, one for your smart shades, one for your thermostat, and a Google Home app that zealously (and hopelessly) tries to put all of them in the same place.
Probably, most of these apps won’t communicate with the others, and probably many of them won’t work if you aren’t on the same network as your gadget.
Wouldn’t it be cool if we could control everything from the same interface, without cluttering our phones or computers with tons of apps, through an interface that is accessible both from mobile and desktop devices as well as through external scripts/integrations, whether you are on your home network or outdoor? An interface that was both lightweight and straightforward to use?
But wait, hasn’t such an interface been around for a while, under the name of messaging or chat? After all, wouldn’t it be cool to control our house, gadgets, and cloud services through the same interface that we use to send cat pictures to our friends, and through a bot completely tailored to our needs, without all the boilerplate/frustration that usually comes with third-party bots?
Source: https://medium.com/better-programming/communicating-with-your-smart-home-over-telegram-76850522759 | Authors: Fabio Manganiello | Published: 2020-01-07 | Tags: Python, Programming, Telegram, Home Automation, Raspberry Pi
DATA ENGINEERING
JavaScript for Data Engineers
A brief introduction to JavaScript-based open-source SQL, orchestration and ETL tools for data engineers
The latest StackOverflow Developer survey deemed JavaScript the most popular technology, closely followed by SQL as the third most popular. The former was considered to be a client-side scripting/front-end language until a number of years ago, when JavaScript-based servers got widespread attention. Since then, JavaScript projects have been initiated for almost all major areas of work under the umbrella of software development. Data Engineering is one such field where JavaScript is being used more than ever.
There are already a lot of visualisation libraries written in JavaScript such as D3.js, C3.js, Charts.js and so on. Much has been written about them but not about hardcore data engineering tools related to handling databases, data cleansing, ETL, data pipeline orchestration and so on. Let’s take a look at some of the most popular and useful active JavaScript projects that data engineers could learn and use in their current work.
It is a query builder for PostgreSQL, MySQL, MariaDB, MSSQL, Oracle and Amazon Redshift. With over 300 contributors and about 150 releases till date, it is by far the most popular query builder available today. Fun fact — the author of Slonik wrote this about why using Knex.js is bad for dynamic query building, got a lot of flak — he’s made some general good points in the following piece but NOT enough to convince me that Knex.js is bad!
A JavaScript based database for the browser that works for mobile apps, browsers and node.js applications. It’s really good at handling CSVs & Excel files. This project has 5.2K stars on GitHub and has 51K downloads/month from npm. It is being maintained pretty well with the last code push done a week ago.
There are two projects worth looking at from the Koop project — the first one is, not surprisingly, called Koop — it’s an ETL utility for geospatial data. The project is well maintained and is sponsored by ESRI which is the company to talk about when we talk about location intelligence.
Calling it a complete ETL tool would be a mistake. Firstly, it is just meant for geospatial data, as the transforms it supports relate to geospatial data. For example, transforming geospatial data on the fly into GeoJSON and Vector Tiles.
A component based programming environment following the principles of Flow-Based Programming (the logic of a program is defined in a graph). Many people will relate this to Airflow or similar orchestrators. While there are similarities in the sense that NoFlo can be programmed to be used somewhat like an orchestrator, it is a bit vast in scope than the popular orchestrators. You can use noflo-nodejs to execute NoFlo code on Node.js.
This is TaskRabbit’s contribution to the open source data engineering community. Empujar is an ETL tool which can be used to do a lot of moving around of data, including a creating and storing backups. Currently, Empujar has support for MySQL, Amazon Redshift, Elasticsearch and S3. Custom connectors can be created easily to include other databases or data sources. Although this tool has been around for a while, it is worth mentioning that the latest PR was not merged as the build was failing.
Honorary Mentions
GruntJS — This is a simple task runner that helps you automate your grunt work like making sure the code is formatted, linted, minified and so on.
BookshelfJS — An ORM built on Knex.js with transaction support, eager relational loading and support for 1:1, 1:n and n:n relations.
ObjectionJS — An ORM for Node.js based on Knex.js which fully supports MySQL, MariaDB and PostgreSQL.
Slonik — A Node.js based SQL client for PostgreSQL that promotes writing raw SQL and discourages ad-hoc dynamic generation of SQL.
There are many other projects on GitHub but I haven’t picked many of them because most of them are not up to date and haven’t seen a code checkin in a long time.
To conclude, we can say that there are a lot of JavaScript-based, well-maintained, open-source repositories to help with the day-to-day data engineering stuff — generating SQL, interacting with databases, moving data around from one place to another, integrating data and visualising it too. With JavaScript being one of the promising languages of the future, it is worth investing in learning JavaScript as it will be more widely used further down the line.
JavaScript In Plain English
Did you know that we have three publications and a YouTube channel? Find links to everything at plainenglish.io!
Source: https://medium.com/javascript-in-plain-english/javascript-for-data-engineers-ccce214e9aff | Authors: Kovid Rathee | Published: 2020-11-26 | Tags: Data Science, Programming, Software Development, Data Engineering, JavaScript
A Working-Class Perspective on Responding to Pandemic and the Value of Labor: Timothy Sheard’s “One Foot in the Grave,” a Hospital Mystery Novel
The coronavirus pandemic, it is now commonly observed, has drawn into relief, and indeed exacerbated, the various layers of inequality already fragmenting culture and society in the United States.
At the same time, when it comes to class inequality in particular, Americans have tended to find ways to gloss over the brutal realities of US class society by heroicizing workers upon whom, it is at this moment impossible to deny, Americans’ lives depend. Farmworkers, grocery store workers, delivery truck drivers, in addition to doctors, nurses, and other hospital workers, have now been recognized as “essential”. This categorization supposedly acknowledges their social value, but little is done to alter their remuneration, improve their working conditions, or end labor exploitation.
Karleigh Frisbie Brogan, writing in The Atlantic, captures this dynamic in her experience as a grocery worker. She acknowledges some satisfaction from this momentary recognition, writing, “Working in a grocery store has earned me and my co-workers a temporary status. After years of being overlooked, we suddenly feel a sense of responsibility, solidarity, and pride.”
But their feelings about themselves aside, she views this larger social recognition as not just fleeting but as serving the more dangerous psychological purpose of absolving consumers of guilt and enabling the same system of exploitation to continue:
Cashiers and shelf-stockers and delivery-truck drivers aren’t heroes. They’re victims. To call them heroes is to justify their exploitation. By praising the blue-collar worker’s public service, the progressive consumer is assuaged of her cognitive dissonance. When the world isn’t falling apart, we know the view of us is usually as faceless, throwaway citizens. The wealthy CEO telling his thousands of employees that they are vital, brave, and noble is a manipulative strategy to keep them churning out profits.
Brogan’s representation of these workers as victims underscores the fact that while heroicized, workers aren’t turned to for their insights and perspectives for best addressing the social and public health challenges posed by the pandemic, not to mention the ongoing economic inequities in America’s class society.
What would a working-class cultural response to the pandemic look like? Is there a labor perspective that offers special or different insights into how we address public health issues, into how we heal our deep social ills?
If we believe Marx and Engels that “the history of all hitherto existing society is the history of class struggle”, then it stands to reason, of course, that experiences of and responses to this pandemic are inevitably bound up in these class antagonisms — that the pandemic itself is a source of class struggle.
If we’re looking for such a model of a popular working-class cultural response to pandemic, author and activist Timothy Sheard’s 2019 novel, One Foot in the Grave, offers an eerily prescient example. The novel is the eighth installment in Sheard’s Lenny Moss mystery series, all of which are set in the hospital workplace. The series weaves murder mysteries with the intricate and fascinating drama of all the labor that goes into the work of healing, from the custodial work that cleans and sanitizes the hospital, to the engineers that maintain the building operations, to the food service workers, to the doctors and nurses and more.
The Covid-19 pandemic heightens the relevance of Sheard’s work to be sure, and One Foot in the Grave takes on particular relevance as Sheard presciently imagines an outbreak of the Zika virus in Philadelphia. This pandemic challenges the hospital staff in their healing efforts in all the ways we see healthcare workers stressed in these times: dealing with lack of PPE and other equipment, facing capacity issues, working in dangerous conditions that expose them to the virus, and so forth.
The hospital workplace is Sheard’s Conradian sea, full of the drama of work he represents in ways that challenge the US dominant culture’s “degradation of labor”. Harry Braverman develops this term in his 1974 classic study Labor and Monopoly Capital, referring to two phenomena. First, the term describes the way an intensified division of labor has eroded craft and artisanal expertise by taking work and breaking it into smaller and smaller tasks, thus “de-skilling” the worker. Second, the term refers to the way this process then supposedly enables us to de-value the necessary work people do; we come to refer to such types of work as “unskilled”, justifying the low wage at which the work is remunerated and, hence, in social terms, valued and appreciated.
This degradation results in the dismissal and ignorance of the talents and expertise that workers in all positions in the labor force possess and also creates damaging divisions within the labor force between erroneously defined categories of “skilled” and “unskilled”, or “professional” and “worker”, which disarm the possibilities for the cooperative leveraging of all the social power and expertise we have at our disposal to solve problems and address our collective social needs in the service of all lives.
It is through his intricate representations of work and workplace relations, as well as his representation of class conflict in the hospital, that Sheard’s novel provides a narrative of working-class response to pandemic conditions, and of working-class solutions to larger social ills. One Foot in the Grave presents these solutions as involving a combination of overcoming class divisions, developing solidarity and cooperation among all workers, and respecting and availing ourselves of the working-class knowledge typically dismissed within US capitalist culture.
From the first page, Sheard conveys high-strung class tensions in the hospital as Catherine, a pregnant nurse, is in tears, needing her job but fearful of the potentially dangerous workplace endangering her and her unborn child. She is unable to refuse work assignments or not come to work, and she has no union to advocate for the safety of her work conditions or protect her job.
The second chapter opens with Lenny Moss, James Madison Hospital’s custodian and also chief sleuth for whom Sheard names his series, complaining that the hospital executives don’t supply him with enough bleach to do his job while he does work that is essential to providing a healthy environment for patients. Sheard describes the work to make clear the importance and value of Lenny’s job to the overall work of healing: “He ran the mop over the old, cracked marble floor in broad strokes, washing away a night’s worth of spills and stains. It wasn’t the work he had to do this morning that was annoying Lenny … It was the boss’s unwillingness to supply him and his co-workers with bleach that had gotten under his skin”(Sheard 5).
Then, Dr. Auginello, chief of the Infectious Disease Division, checks in with Lenny while instructing his residents. He explains to them that “the most effective preventive strategy is to continually clean horizontal surfaces” (Sheard 6). He elaborates, with particular relation to the Zika virus: “Once droplets fall onto the horizontal surfaces, we touch them with our hands and transfer them to our skin. So continuous cleaning of the environment with a strong antiseptic solution remains the most effective procedure for preventing horizontal transmission, and bleach is the most effective solution for killing virus” (Sheard 6).
Auginello explains that the “business” interests in the hospital try to cut down on the use of bleach because patients don’t like the smell and complain, and it is important to them to secure positive patient satisfaction surveys.
Auginello reveals that he and Lenny share an important technical knowledge, and we see the doctor valorizing custodial work — as well as the knowledge that pertains to it — and its important role in contributing to the overall task of healing patients and keeping staff safe. Throughout the novel, Auginello cooperates with other hospital workers to create the best conditions for serving patients.
For example, when the hospital is running low on negative pressure rooms and HEPA filters are difficult to find Katchi, the maintenance engineer, figures out that they can create the same environment in rooms with fans purchased from Home Depot, where they can also buy paint masks to make up for the shortage of standard hospital issue masks. Through these relations, Sheard challenges the degradation of labor by putting all of this technical knowledge, contained in occupations typically stratified by class, on equally important footing.
Auginello represents solidarity itself. His goal is to serve patients, and the class system presents an obstacle to his ability to fulfill his Hippocratic oath. If he operated according to the values of class society, he would likely dismiss out of hand the knowledge a janitor or maintenance engineer can contribute to optimizing medical care for patients. Solidarity and the dismissal of class distinctions are central to Auginello doing his job most effectively.
Lenny and Auginello, in fact, together represent a cross-class solidarity rooted in a dismissal of class values and status and in a common respect and valuing of one another, offering the cultural basis for exploding the value underpinnings of class society altogether. Lenny, of course, is not only the central intelligence solving crimes novel after novel, but he’s also the go-to authority, in part because of his union activism, that other workers — doctors, nurses, staff — turn to help resolve conflicts and solve problems in the hospital.
In One Foot in the Grave, some nurses come to Lenny, a leader in the Hospital Service Workers Union, asking for help in addressing dangerous and oppressive working conditions. And here we see the fault lines in solidarity, rooted in class divisions and a class value system that forwards the degradation of labor. Some of the nurses are uncomfortable, for example, joining a union with service workers. One of the nurses, Agnes, “was not sure she wanted to be in a union with service workers, she was a professional, they were all non-professionals”(Sheard 21).
This division persists in the novel, posing an obstacle to worker solidarity and hence to the nurses addressing and improving working conditions. The question abiding in One Foot in the Grave is raised by Mimi, a nurse trying to organize the union: “She asked herself: what would it take to win them to the union? How could she break through their stubborn ideas about being ‘professionals’? Lenny had often said, every worker is a professional, every job is a skilled job.Why couldn’t the nurses see that?” (Sheard 169).
And, oh yeah, there is a murder to be solved, involving a former resident who tries to poison the attending physician who had him dismissed for sexually abusing a cadaver. The resident, of course, shows himself unworthy of the work of healing.
I’m focusing minimally on this mystery because I think it’s less consequential to the success of the novel. Sheard provides a broader social mystery to be resolved. Typically, detective novels have been understood as conservative in nature. Scholar Franco Moretti, for example, encapsulates this standard view, arguing that the very task of detective fiction is not to find society guilty but to find the individual criminal guilty and restore innocence to society, validating the dominant social conceptions of law, order, and class hierarchy and thus forestalling social critique. He declares detective fiction “the sui generis totalitarianism of contemporary capitalism” and “a hymn to culture’s coercive abilities,” associating the genre wholesale with “an economic imagination interested only in perpetuating the existing order” (Moretti 155).
But the real mystery for Sheard is that of class struggle and when workers will unite. Sheard leaves this mystery unsolved, leaving the fate of the working class and status of unions to be resolved at a future moment through the march of history. Yet that march is not an abstract process but rather one that people direct. This novel is an attempt to raise the consciousness of workers to direct that march toward a union movement rooted in solidarity, overcoming the cultural class biases and misguided labor aristocracies that fragment the working class.
Part of the meaning of the novel’s title, we can infer, is that the labor movement has one foot in the grave. Unless it can overcome divisions within groups of workers themselves, it cannot engage in the struggle between classes. Overcoming class differences, which can enable truly cooperative work, is the key to being able to heal our world, inside and outside the hospital.
To know how to address and coordinate a response to a pandemic means one understands the many layers and levels of work involved in treating illness, from the way a janitor cleans a floor or disinfects a room, to the workers who operate and maintain communication networks, to orderlies, nurses, and doctors, and more. That means understanding and respecting all of this work — and thus overcoming the degradation of labor that so animates the culture of US political economy.
Fredric Jameson argues, “One hallmark of capitalist culture is that the fact of work and of production is a secret as carefully concealed as anything in our culture. Indeed, this is the very meaning of the commodity as a form, to obliterate the signs of work on the product in order to make it easier for us to forget the class structure which is its organizational framework” (Jameson 327).
In many cases, the literary works valorized in capitalist culture participate in erasing the traces of labor. Sheard is clear in One Foot in the Grave that unless we confront the facts of labor directly and the obstacles class society presents in preventing an organization of work that can optimally serve us and promote healthy workplaces and a healthy society, we cannot address pandemic conditions in a humane way.
Source: https://timlittlebooks.medium.com/a-working-class-perspective-on-responding-to-pandemic-and-the-value-of-labor-timothy-sheards-f7daa4820c79 | Authors: Tim Libretti | Published: 2020-07-08 | Tags: Politics, Literature, Culture, Books, Work
An Introduction to Convolutional Neural Networks
A simple guide to what CNNs are, how they work, and how to build one from scratch in Python.
There’s been a lot of buzz about Convolution Neural Networks (CNNs) in the past few years, especially because of how they’ve revolutionized the field of Computer Vision. In this post, we’ll build on a basic background knowledge of neural networks and explore what CNNs are, understand how they work, and build a real one from scratch (using only numpy) in Python.
This post assumes only a basic knowledge of neural networks. My introduction to Neural Networks covers everything you’ll need to know, so you might want to read that first.
Ready? Let’s jump in.
The formatting in this article looks best in the original post on victorzhou.com.
1. Motivation
A classic use case of CNNs is to perform image classification, e.g. looking at an image of a pet and deciding whether it’s a cat or a dog. It’s a seemingly simple task — why not just use a normal Neural Network?
Good question.
Reason 1: Images are Big
Images used for Computer Vision problems nowadays are often 224x224 or larger. Imagine building a neural network to process 224x224 color images: including the 3 color channels (RGB) in the image, that comes out to 224 x 224 x 3 = 150,528 input features! A typical hidden layer in such a network might have 1024 nodes, so we’d have to train 150,528 x 1024 = 150+ million weights for the first layer alone. Our network would be huge and nearly impossible to train.
It’s not like we need that many weights, either. The nice thing about images is that we know pixels are most useful in the context of their neighbors. Objects in images are made up of small, localized features, like the circular iris of an eye or the square corner of a piece of paper. Doesn’t it seem wasteful for every node in the first hidden layer to look at every pixel?
Reason 2: Positions can change
If you trained a network to detect dogs, you’d want it to be able to a detect a dog regardless of where it appears in the image. Imagine training a network that works well on a certain dog image, but then feeding it a slightly shifted version of the same image. The dog would not activate the same neurons, so the network would react completely differently!
We’ll see soon how a CNN can help us mitigate these problems.
2. Dataset
In this post, we’ll tackle the “Hello, World!” of Computer Vision: the MNIST handwritten digit classification problem. It’s simple: given an image, classify it as a digit.
Each image in the MNIST dataset is 28x28 and contains a centered, grayscale digit.
Truth be told, a normal neural network would actually work just fine for this problem. You could treat each image as a 28 x 28 = 784-dimensional vector, feed that to a 784-dim input layer, stack a few hidden layers, and finish with an output layer of 10 nodes, 1 for each digit.
This would only work because the MNIST dataset contains small images that are centered, so we wouldn’t run into the aforementioned issues of size or shifting. Keep in mind throughout the course of this post, however, that most real-world image classification problems aren’t this easy.
Enough buildup. Let’s get into CNNs!
3. Convolutions
What are Convolutional Neural Networks?
They’re basically just neural networks that use Convolutional layers, a.k.a. Conv layers, which are based on the mathematical operation of convolution. Conv layers consist of a set of filters, which you can think of as just 2d matrices of numbers. Here’s an example 3x3 filter:
A 3x3 filter
We can use an input image and a filter to produce an output image by convolving the filter with the input image. This consists of
1. Overlaying the filter on top of the image at some location.
2. Performing element-wise multiplication between the values in the filter and their corresponding values in the image.
3. Summing up all the element-wise products. This sum is the output value for the destination pixel in the output image.
4. Repeating for all locations.
Side Note: We (along with many CNN implementations) are technically actually using cross-correlation instead of convolution here, but they do almost the same thing. I won’t go into the difference in this post because it’s not that important, but feel free to look this up if you’re curious.
That 4-step description was a little abstract, so let’s do an example. Consider this tiny 4x4 grayscale image and this 3x3 filter:
A 4x4 image (left) and a 3x3 filter (right)
The numbers in the image represent pixel intensities, where 0 is black and 255 is white. We’ll convolve the input image and the filter to produce a 2x2 output image:
A 2x2 output image
To start, lets overlay our filter in the top left corner of the image:
Step 1: Overlay the filter (right) on top of the image (left)
Next, we perform element-wise multiplication between the overlapping image values and filter values. Here are the results, starting from the top left corner and going right, then down:
Step 2: Performing element-wise multiplication.
Next, we sum up all the results. That’s easy enough: 62–33=29.
Finally, we place our result in the destination pixel of our output image. Since our filter is overlayed in the top left corner of the input image, our destination pixel is the top left pixel of the output image:
We do the same thing to generate the rest of the output image:
3.1 How is this useful?
Let’s zoom out for a second and see this at a higher level. What does convolving an image with a filter do? We can start by using the example 3x3 filter we’ve been using, which is commonly known as the vertical Sobel filter:
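One common form of this filter, written here as a numpy array (sign conventions vary between sources), is:

import numpy as np

# Vertical Sobel filter: responds strongly to left-right changes in intensity,
# i.e. vertical edges. The horizontal Sobel filter is simply its transpose.
vertical_sobel = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
])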
Here’s an example of what the vertical Sobel filter does:
An image convolved with the vertical Sobel filter
Similarly, there’s also a horizontal Sobel filter:
An image convolved with the horizontal Sobel filter
See what’s happening? Sobel filters are edge-detectors. The vertical Sobel filter detects vertical edges, and the horizontal Sobel filter detects horizontal edges. The output images are now easily interpreted: a bright pixel (one that has a high value) in the output image indicates that there’s a strong edge around there in the original image.
Can you see why an edge-detected image might be more useful than the raw image? Think back to our MNIST handwritten digit classification problem for a second. A CNN trained on MNIST might look for the digit 1, for example, by using an edge-detection filter and checking for two prominent vertical edges near the center of the image. In general, convolution helps us look for specific localized image features (like edges) that we can use later in the network.
3.2 Padding
Remember convolving a 4x4 input image with a 3x3 filter earlier to produce a 2x2 output image? Often times, we’d prefer to have the output image be the same size as the input image. To do this, we add zeros around the image so we can overlay the filter in more places. A 3x3 filter requires 1 pixel of padding:
A 4x4 input convolved with a 3x3 filter to produce a 4x4 output using same padding
This is called “same” padding, since the input and output have the same dimensions. Not using any padding, which is what we’ve been doing and will continue to do for this post, is sometimes referred to as “valid” padding.
3.3 Conv Layers
Now that we know how image convolution works and why it’s useful, let’s see how it’s actually used in CNNs. As mentioned before, CNNs include conv layers that use a set of filters to turn input images into output images. A conv layer’s primary parameter is the number of filters it has.
For our MNIST CNN, we’ll use a small conv layer with 8 filters as the initial layer in our network. This means it’ll turn the 28x28 input image into a 26x26x8 output volume:
Reminder: The output is 26x26x8 and not 28x28x8 because we’re using valid padding, which decreases the input’s width and height by 2.
Each of the 8 filters in the conv layer produces a 26x26 output, so stacked together they make up a 26x26x8 volume. All of this happens because of 3 x 3 (filter size) x 8 (number of filters) = only 72 weights!
3.4 Implementing Convolution
Time to put what we’ve learned into code! We’ll implement a conv layer’s feedforward portion, which takes care of convolving filters with an input image to produce an output volume. For simplicity, we’ll assume filters are always 3x3 (which is not true — 5x5 and 7x7 filters are also very common).
Let’s start implementing a conv layer class:
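Here’s a minimal sketch of what that class can look like; the exact code may differ, but it matches the description that follows:

import numpy as np

class Conv3x3:
    # A Convolution layer using 3x3 filters.

    def __init__(self, num_filters):
        self.num_filters = num_filters

        # filters is a 3d array with dimensions (num_filters, 3, 3).
        # We divide by 9 to reduce the variance of our initial values.
        self.filters = np.random.randn(num_filters, 3, 3) / 9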
The Conv3x3 class takes only one argument: the number of filters. In the constructor, we store the number of filters and initialize a random filters array using NumPy's randn() method.
Note: Dividing by 9 during the initialization is more important than you may think. If the initial values are too large or too small, training the network will be ineffective. To learn more, read about Xavier Initialization.
Next, the actual convolution:
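A sketch of the two methods, added inside the same Conv3x3 class (again a reconstruction consistent with the description below):

# These methods go inside the Conv3x3 class defined above.

    def iterate_regions(self, image):
        # Generates all valid 3x3 image regions (valid padding).
        # image is a 2d numpy array.
        h, w = image.shape
        for i in range(h - 2):
            for j in range(w - 2):
                im_region = image[i:(i + 3), j:(j + 3)]
                yield im_region, i, j

    def forward(self, input):
        # Performs a forward pass of the conv layer using the given input.
        # Returns a 3d numpy array with dimensions (h - 2, w - 2, num_filters).
        h, w = input.shape
        output = np.zeros((h - 2, w - 2, self.num_filters))
        for im_region, i, j in self.iterate_regions(input):
            output[i, j] = np.sum(im_region * self.filters, axis=(1, 2))
        return output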
iterate_regions() is a helper generator method that yields all valid 3x3 image regions for us. This will be useful for implementing the backwards portion of this class later on.
The line output[i, j] = np.sum(im_region * self.filters, axis=(1, 2)) actually performs the convolutions. Let’s break it down:
We have im_region, a 3x3 array containing the relevant image region.
We have self.filters, a 3d array.
We do im_region * self.filters, which uses numpy’s broadcasting feature to element-wise multiply the two arrays. The result is a 3d array with the same dimensions as self.filters.
We np.sum() the result of the previous step using axis=(1, 2), which produces a 1d array of length num_filters where each element contains the convolution result for the corresponding filter.
The sequence above is performed for each pixel in the output until we obtain our final output volume! Let’s give our code a test run:
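A quick test could look like this, using, for example, the mnist helper package to load the dataset:

import mnist

# Load the MNIST training images (the mnist package is a small helper library).
train_images = mnist.train_images()

conv = Conv3x3(8)
output = conv.forward(train_images[0])
print(output.shape)  # (26, 26, 8)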
Looks good so far.
Note: in our Conv3x3 implementation, we assume the input is a 2d numpy array for simplicity, because that's how our MNIST images are stored. This works for us because we use it as the first layer in our network, but most CNNs have many more Conv layers. If we were building a bigger network that needed to use Conv3x3 multiple times, we'd have to make the input be a 3d numpy array.
4. Pooling
Neighboring pixels in images tend to have similar values, so conv layers will typically also produce similar values for neighboring pixels in outputs. As a result, much of the information contained in a conv layer’s output is redundant. For example, if we use an edge-detecting filter and find a strong edge at a certain location, chances are that we’ll also find relatively strong edges at locations 1 pixel shifted from the original one. However, these are all the same edge! We’re not finding anything new.
Pooling layers solve this problem. All they do is reduce the size of the input it’s given by (you guessed it) pooling values together in the input. The pooling is usually done by a simple operation like max, min, or average. Here’s an example of a Max Pooling layer with a pooling size of 2:
Max Pooling (pool size 2) on a 4x4 image to produce a 2x2 output
To perform max pooling, we traverse the input image in 2x2 blocks (because pool size = 2) and put the max value into the output image at the corresponding pixel. That’s it!
Pooling divides the input’s width and height by the pool size. For our MNIST CNN, we’ll place a Max Pooling layer with a pool size of 2 right after our initial conv layer. The pooling layer will transform a 26x26x8 input into a 13x13x8 output:
4.1 Implementing Pooling
We’ll implement a MaxPool2 class with the same methods as our conv class from the previous section:
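A sketch of the class, consistent with the description below:

import numpy as np

class MaxPool2:
    # A Max Pooling layer using a pool size of 2.

    def iterate_regions(self, image):
        # Generates non-overlapping 2x2 image regions to pool over.
        # image is a 3d numpy array with dimensions (h, w, num_filters).
        h, w, _ = image.shape
        for i in range(h // 2):
            for j in range(w // 2):
                im_region = image[(i * 2):(i * 2 + 2), (j * 2):(j * 2 + 2)]
                yield im_region, i, j

    def forward(self, input):
        # Performs a forward pass of the maxpool layer using the given input.
        # Returns a 3d numpy array with dimensions (h / 2, w / 2, num_filters).
        h, w, num_filters = input.shape
        output = np.zeros((h // 2, w // 2, num_filters))
        for im_region, i, j in self.iterate_regions(input):
            output[i, j] = np.amax(im_region, axis=(0, 1))
        return output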
This class works similarly to the Conv3x3 class we implemented previously. The critical line is output[i, j] = np.amax(im_region, axis=(0, 1)): to find the max from a given image region, we use np.amax(), numpy's array max method. We set axis=(0, 1) because we only want to maximize over the first two dimensions, height and width, and not the third, num_filters.
Let’s test it!
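Chaining it after the conv layer from before, a quick check might be:

conv = Conv3x3(8)
pool = MaxPool2()

output = conv.forward(train_images[0])
output = pool.forward(output)
print(output.shape)  # (13, 13, 8)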
Our MNIST CNN is starting to come together!
5. Softmax
To complete our CNN, we need to give it the ability to actually make predictions. We’ll do that by using the standard final layer for a multiclass classification problem: the Softmax layer, a fully-connected (dense) layer that uses the Softmax function as its activation.
Reminder: fully-connected layers have every node connected to every output from the previous layer. We used fully-connected layers in my intro to Neural Networks if you need a refresher.
If you haven’t heard of Softmax before, read my quick introduction to Softmax before continuing.
5.1 Usage
We’ll use a softmax layer with 10 nodes, one representing each digit, as the final layer in our CNN. Each node in the layer will be connected to every input. After the softmax transformation is applied, the digit represented by the node with the highest probability will be the output of the CNN!
5.2 Cross-Entropy Loss
You might have just thought to yourself, why bother transforming the outputs into probabilities? Won’t the highest output value always have the highest probability? If you did, you’re absolutely right. We don’t actually need to use softmax to predict a digit — we could just pick the digit with the highest output from the network!
What softmax really does is help us quantify how sure we are of our prediction, which is useful when training and evaluating our CNN. More specifically, using softmax lets us use cross-entropy loss, which takes into account how sure we are of each prediction. Here’s how we calculate cross-entropy loss:
L = −ln(p_c), where c is the correct class (in our case, the correct digit), p_c is the predicted probability for class c, and ln is the natural log. As always, a lower loss is better. For example, in the best case, we’d have a predicted probability of 1 for the correct digit, giving a loss of −ln(1) = 0.
In a more realistic case, we might have a predicted probability of, say, 0.8 for the correct digit, giving a small but nonzero loss of −ln(0.8) ≈ 0.22.
We’ll be seeing cross-entropy loss again later on in this post, so keep it in mind!
5.3 Implementing Softmax
You know the drill by now — let’s implement a Softmax layer class:
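A sketch of the class; the constructor takes the input length and the number of nodes, and the forward pass is explained below:

import numpy as np

class Softmax:
    # A standard fully-connected layer with softmax activation.

    def __init__(self, input_len, nodes):
        # We divide by input_len to reduce the variance of our initial values.
        self.weights = np.random.randn(input_len, nodes) / input_len
        self.biases = np.zeros(nodes)

    def forward(self, input):
        # Performs a forward pass of the softmax layer using the given input.
        # Returns a 1d numpy array containing the respective probability values.
        input = input.flatten()
        totals = np.dot(input, self.weights) + self.biases
        exp = np.exp(totals)
        return exp / np.sum(exp, axis=0)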
There’s nothing too complicated here. A few highlights:
We flatten() the input to make it easier to work with, since we no longer need its shape.
np.dot() multiplies input and self.weights element-wise and then sums the results.
and element-wise and then sums the results. np.exp() calculates the exponentials used for Softmax.
We’ve now completed the entire forward pass of our CNN! Putting it together:
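Here is a sketch of a cnn.py script that wires the three layers together and reports the loss and accuracy over the first 1,000 MNIST test images (a reconstruction; the exact script may differ):

import mnist
import numpy as np

# We only use the first 1k testing examples (out of 10k total) to keep this quick.
test_images = mnist.test_images()[:1000]
test_labels = mnist.test_labels()[:1000]

conv = Conv3x3(8)                    # 28x28x1 -> 26x26x8
pool = MaxPool2()                    # 26x26x8 -> 13x13x8
softmax = Softmax(13 * 13 * 8, 10)   # 13x13x8 -> 10

def forward(image, label):
    # Completes a forward pass of the CNN and calculates the cross-entropy
    # loss and accuracy for one image.
    # We transform the image from [0, 255] to [-0.5, 0.5] to make it easier to work with.
    out = conv.forward((image / 255) - 0.5)
    out = pool.forward(out)
    out = softmax.forward(out)

    loss = -np.log(out[label])
    acc = 1 if np.argmax(out) == label else 0
    return out, loss, acc

print('MNIST CNN initialized!')

loss = 0
num_correct = 0
for i, (im, label) in enumerate(zip(test_images, test_labels)):
    # Do a forward pass.
    _, l, acc = forward(im, label)
    loss += l
    num_correct += acc

    # Print stats every 100 steps.
    if i % 100 == 99:
        print(
            '[Step %d] Past 100 steps: Average Loss %.3f | Accuracy: %d%%' %
            (i + 1, loss / 100, num_correct)
        )
        loss = 0
        num_correct = 0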
Running cnn.py gives us output similar to this:
MNIST CNN initialized!
[Step 100] Past 100 steps: Average Loss 2.302 | Accuracy: 11%
[Step 200] Past 100 steps: Average Loss 2.302 | Accuracy: 8%
[Step 300] Past 100 steps: Average Loss 2.302 | Accuracy: 3%
[Step 400] Past 100 steps: Average Loss 2.302 | Accuracy: 12%
This makes sense: with random weight initialization, you’d expect the CNN to be only as good as random guessing. Random guessing would yield 10% accuracy (since there are 10 classes) and a cross-entropy loss of −ln(0.1)=2.302, which is what we get!
Want to try or tinker with this code yourself? Run this CNN in your browser. It’s also available on Github.
6. Conclusion
That’s the end of this introduction to CNNs! In this post, we
Motivated why CNNs might be more useful for certain problems, like image classification.
Introduced the MNIST handwritten digit dataset.
Learned about Conv layers, which convolve filters with images to produce more useful outputs.
Talked about Pooling layers, which can help prune everything but the most useful features.
Implemented a Softmax layer so we could use cross-entropy loss.
There’s still much more that we haven’t covered yet, such as how to actually train a CNN. My next post will do a deep-dive on training a CNN, including deriving gradients and implementing backprop, so stay tuned!
If you’re eager to see a trained CNN in action: this example Keras CNN trained on MNIST achieves 99.25% accuracy. CNNs are powerful!
Source: https://towardsdatascience.com/an-introduction-to-convolutional-neural-networks-bdf692352c7 | Authors: Victor Zhou | Published: 2019-07-22 | Tags: Machine Learning, Python, Towards Data Science, Convolutional Network, Neural Networks
Symbol Table Applications
Sets
They are just a collection of distinct keys. There are no values associated; the keys themselves are the values at the same time.
The operations are: add a key, ask if the set contains a key, or remove a key.
Implementation
Sets are commonly implemented in the same way as associative arrays, either using:
A self-balancing binary search tree for sorted sets (which has O(LogN) for most operations).
A hash table for unsorted sets (which has O(1) average-case, but O(N) worst-case, for most operations).
Therefore, sets can be implemented using associative arrays, but, we either:
Remove all references to “value” field from any symbol table implementation.
Use a dummy value as the values, which is ignored.
Applications
Spell Checker: Store words in a dictionary, identify misspelled words.
Browser: Store visited pages, mark whether the current page has been visited or not.
Spam Filter: Store spam IP addresses, eliminate an IP address if it’s spam.
All of these applications are “Exception filter” applications; meaning, they store either a whitelist or a blacklist of keys, then check whether a key exists or not.
// SET is assumed to be the set data type from the algs4 library.
import edu.princeton.cs.algs4.SET;

public class WhiteList {
    public static void main(String[] args) {
        SET<String> set = new SET<String>();
        String[] whitelist = {/*...*/};
        String[] input = {/*...*/};

        // store whitelist words
        for (String word : whitelist)
            set.add(word);

        // print the input words that are in the whitelist
        for (String word : input)
            if (set.contains(word))
                System.out.println(word);
    }
}
Dictionary Clients
It’s an application of symbol tables, where we find the value (if any) that’s mapped to a given key in a list of key-value pairs.
Applications
DNS Lookup: Store a list of URLs and their associated IP addresses in a symbol table. The key can be the URL, and the value is the IP address, or vice-versa (see the sketch below).
Student Lookup: Store a list of student records, where the key can be, for example, the student id, and the value is the first name.
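A minimal sketch of the DNS lookup case, using the same ST symbol table type as the other examples in this article; the hostnames and addresses below are only illustrative.

public class DnsLookup {
    public static void main(String[] args) {
        ST<String, String> st = new ST<String, String>();
        // key is the hostname (URL), value is the IP address
        st.put("www.princeton.edu", "128.112.128.15");
        st.put("www.cs.princeton.edu", "128.112.136.35");

        System.out.println(st.get("www.princeton.edu"));  // 128.112.128.15
        System.out.println(st.get("www.example.org"));    // null (not in the table)
    }
}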
Indexing Clients
It's the same as Dictionary Clients, but this time the value is a list of values instead of only one value.
Applications
File Indexing: Store words in all files, then tell me which files contain a given query string. The key is the word, and the value is a set of files containing this word.
import java.io.File;
import java.util.Scanner;

public class FileIndex {
    public static void main(String[] args) throws Exception {
        // The value can have type of "SET";
        // a collection of distinct items.
        ST<String, SET<File>> st = new ST<String, SET<File>>();
        String[] files = {/*...*/};
        // for each word in each file, add the file to the word's set
        for (String filename : files) {
            File file = new File(filename);
            Scanner in = new Scanner(file);  // java.io.File has no getWords(); read words with a Scanner
            while (in.hasNext()) {
                String word = in.next();
                if (!st.contains(word))
                    st.put(word, new SET<File>());
                SET<File> set = st.get(word);
                set.add(file);
            }
        }
        // get the set of files for each queried word
        String[] input = {/*...*/};
        for (String word : input)
            System.out.println(st.get(word));
    }
}
Book Index: Store words in an article, then tell me the position(s) where a word appeared in the article. The key is the word, and the value is a set of indexes where the word appeared in the article.
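A book index can be sketched with the same pattern; only the value set now stores word positions instead of files. The words array and the queried word are placeholders.

public class BookIndex {
    public static void main(String[] args) {
        // key is the word, value is the set of positions where it appears
        ST<String, SET<Integer>> st = new ST<String, SET<Integer>>();
        String[] words = {/*... the article's words, in order ...*/};

        for (int i = 0; i < words.length; i++) {
            String word = words[i];
            if (!st.contains(word))
                st.put(word, new SET<Integer>());
            st.get(word).add(i);   // record the position of this occurrence
        }

        // look up all positions of a queried word
        System.out.println(st.get("algorithm"));
    }
}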
Sparse Vectors
Matrix-vector multiplication: multiplying a matrix by a vector using the brute-force solution takes O(N²).
If the matrix (or the vector) is sparse, meaning most of the elements are zero, which is the case in many applications, then using associative arrays can give better performance.
We assume that if the matrix dimension is 10⁴, then there are ~10 non-zeros per row on average.
Vector Representations
Although using a 1d array takes constant time to access an element, it might waste a lot of space if most of the elements are zeros (a sparse vector). In addition, iterating over the elements would take linear time in the worst case.
So, using associative arrays instead can save time and space (both are proportional to the number of non-zero keys).
Vector Representations — algs4.cs.princeton.edu
Thus, we can make a data type called "SparseVector", which uses a symbol table to represent a sparse vector (a vector with a lot of zeros) and provides extra methods for mathematical operations, such as finding the dot product.
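A sketch of such a SparseVector type; the method names are assumptions and not necessarily the exact algs4 API.

public class SparseVector {
    private int n;                   // length of the vector
    private ST<Integer, Double> st;  // the non-zero entries only

    public SparseVector(int n) {
        this.n = n;
        this.st = new ST<Integer, Double>();
    }

    public void put(int i, double value) {
        if (value == 0.0) st.delete(i);   // never store zeros
        else              st.put(i, value);
    }

    public double get(int i) {
        if (st.contains(i)) return st.get(i);
        else                return 0.0;
    }

    // dot product with a dense array; time proportional to the number of non-zeros
    public double dot(double[] that) {
        double sum = 0.0;
        for (int i : st.keys())
            sum += that[i] * this.get(i);
        return sum;
    }
}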
Matrix Representations
A matrix can also have two representations: a 2d array, or an array of associative arrays (SparseVectors).
Using associative arrays saves time and space since most of the elements are zeros (both are proportional to the number of non-zero keys in each row, plus N for space).
Matrix Representations — algs4.cs.princeton.edu
Sparse Matrix-vector Multiplication
The number of iterations equals to non-zero keys in each row (constant for each row). Therefore, using associative arrays can get linear running time for sparse matrix. | https://medium.com/omarelgabrys-blog/symbol-table-applications-bd3793b7f3ec | ['Omar Elgabry'] | 2017-02-17 06:19:00.984000+00:00 | ['Programming', 'Coding', 'Algorithms', 'Data Structures', 'Java'] |
How we redesigned LCL Mes Comptes on Android | Q1: Theme
The first step was to customize the theme, which is the best way to reflect a product's brand.
It was released in version 4.7.0.
Colors
We updated the color palette and removed the custom image background to make the app less visually distracting and improve user productivity.
To learn how to customize colors, check out Color Theming.
Typography
We replaced Roboto and Fjalla One with Montserrat. Since these fonts have different properties, it was necessary to adjust text sizes so we defined and used standard text styles.
Thanks to AndroidX, we also switched from the legacy Calligraphy to use Fonts in XML.
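As an illustration, a Fonts in XML setup looks roughly like this on API 26+ (or with the corresponding app: attributes via AppCompat); the file, font, and theme names below are assumptions, not LCL's actual resources.

<!-- res/font/montserrat.xml : declares the font family and its weights -->
<font-family xmlns:android="http://schemas.android.com/apk/res/android">
    <font
        android:fontStyle="normal"
        android:fontWeight="400"
        android:font="@font/montserrat_regular" />
    <font
        android:fontStyle="normal"
        android:fontWeight="700"
        android:font="@font/montserrat_bold" />
</font-family>

<!-- themes.xml : apply the family app-wide through the theme -->
<style name="Theme.App" parent="Theme.MaterialComponents.Light">
    <item name="fontFamily">@font/montserrat</item>
    <item name="android:fontFamily">@font/montserrat</item>
</style>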
To learn how to customize typography, check out Typography Theming.
Design System
We started to create a design system with a set of components that will be used during the whole graphic migration.
To collaborate between designers and developers, we used zeroheight to share these components and InVision to share prototypes. It incredibly increased the whole team productivity and happiness. | https://medium.com/ideas-by-idean/how-we-redesigned-lcl-mes-comptes-on-android-bc6a52bd87fd | ['Kamal Faraj'] | 2020-10-28 16:55:37.448000+00:00 | ['Android', 'Technology', 'Android App Development', 'Mobile Apps', 'Design'] |
React: Client-Side Routing. Using React Router to Implement Client… | React: Client-Side Routing
Using React Router to Implement Client-Side Routing
Photo by Joey Kyber on Unsplash
What is client-side routing
Client-side routing happens when the route is handled by the front-end code (JavaScript/JSX) that is loaded on the page. When a link is clicked, the route changes in the URL, but the request to the server is prevented. Instead, the URL change is handled in application state, and this change in state results in a different view of the webpage.
Let’s look at a real-world example, let’s say you have an application that allows you to view your blogs, your profile page, and your drafts. Each of those pages is a different view of the same SPA (single page application). With Client-Side routing, you’ll get the data you need to be able to render all of those pages on the first-page load. When a user clicks to view their drafts, the content is already ready to go, and therefore it will render faster than if you were making a request to the server for that page.
Speed is the major benefit of client-side routing. We make only one request to the server and so we don’t have to wait around for the server to get back to us. Everything is stored on the client-side and accessible to us as we need it.
Note: Our whole page will not refresh when using client-side routing. Instead, only the elements on the page will change.
Drawbacks of Client-Side routing
Since we are loading all of our code on the initial GET request it can be pretty slow for the initial page render, especially if you have a really large application. This is one of the biggest drawbacks of client-side routing.
Another drawback is search engine crawling which is less optimized for SPAs. Google is working on improving crawling on SPAs but it is still much less efficient than doing so on server-side applications.
React Router
React Router is a routing library for React. It lets us link to specific URLs then play hide and seek with components depending on which URL is being displayed.
As React Router’s documentation states:
Components are the heart of React’s powerful, declarative programming model. React Router is a collection of navigational components that compose declaratively with your application. Whether you want to have bookmarkable URLs for your web app or a composable way to navigate in React Native, React Router works wherever React is rendering — so take your pick!
The first step in using React-Router is installing that bad boy:
npm install react-router-dom
Next, we will need to import BrowserRouter and Route from react-router-dom. To note, conventionally, BrowserRouter is aliased as Router . Now we can add our first route. We will be setting up a home route. See the example below.
import React from 'react';
import ReactDOM from 'react-dom';
// Step 1. Import react-router functions
import { BrowserRouter as Router, Route } from 'react-router-dom';

const Home = () => {
  return (
    <div>
      <h1>Home!</h1>
    </div>
  );
};

// Step 2. Changed to have router coordinate what is displayed
ReactDOM.render((
  <Router>
    <Route path="/" component={Home} />
  </Router>),
  document.getElementById('root')
);
Let’s break down step 2: The Router (our alias for BrowserRouter) component is the base for our routing. It is the component that we will use to declare how React Router will be used. Inside the Router component, we have the route component. The Route component has two props in our example: path and component . This component decides what is rendered based on whether or not a path matches the URL.
Adding Additional Routes
Before we add in additional code, let's go ahead and extract our home code to its own file. We can then make two more files, one called About.js and one called Login.js. We should have three total: Home.js, About.js, & Login.js. These files should look like this:
Home:
import React from 'react';

class Home extends React.Component {
  render() {
    return <h1>Home!</h1>;
  }
}

export default Home;
About:
import React from 'react';

class About extends React.Component {
  render() {
    return <h1>This is my about component!</h1>;
  }
}

export default About;
Login:
import React from 'react';

class Login extends React.Component {
  render() {
    return (
      <form>
        <h1>Login</h1>
        <div>
          <input type="text" name="username" placeholder="Username" />
          <label htmlFor="username">Username</label>
        </div>
        <div>
          <input type="password" name="password" placeholder="Password" />
          <label htmlFor="password">Password</label>
        </div>
        <input type="submit" value="Login" />
      </form>
    );
  }
}

export default Login;
Be sure you are importing these new components into your index.js file so you have access to the code.
With that, we can start to add our second and third routes. A quick note: a Router can only have one child, so listing all of your routes directly within the Router component will result in an error. We will solve that by wrapping all of our Route components in a <div>. Here is our code so far:
import React from 'react';
import ReactDOM from 'react-dom';
import Home from './Home';
import About from './About';
import Login from './Login';
import { BrowserRouter as Router, Route } from 'react-router-dom';

ReactDOM.render((
  <Router>
    <div>
      <Route exact path="/" component={Home} />
      <Route exact path="/about" component={About} />
      <Route exact path="/login" component={Login} />
    </div>
  </Router>),
  document.getElementById('root')
);
The React Router API provides two components that enable us to trigger our routing: Link and NavLink . They both update the browser and render the correct Route component. Let’s make a Navbar.js and add the following code (be sure to import this into your index.js ):
import React from 'react';
import { NavLink } from 'react-router-dom';

const link = {
  width: '100px',
  padding: '12px',
  margin: '0 6px 6px',
  background: 'blue',
  textDecoration: 'none',
  color: 'white',
};

class Navbar extends React.Component {
  render() {
    return (
      <div>
        <NavLink
          to="/"
          /* set exact so it knows to only set activeStyle when route is deeply equal to link */
          exact
          /* add styling to Navlink */
          style={link}
          /* add prop for activeStyle */
          activeStyle={{
            background: 'darkblue'
          }}
        >Home</NavLink>
        <NavLink
          to="/about"
          exact
          style={link}
          activeStyle={{
            background: 'darkblue'
          }}
        >About</NavLink>
        <NavLink
          to="/login"
          exact
          style={link}
          activeStyle={{
            background: 'darkblue'
          }}
        >Login</NavLink>
      </div>
    );
  }
}

export default Navbar;
At this point, if you were to spin up a browser you should see some lovely blue navlinks and you should be able to go to each component and see changes on the page and in the URL!
Final Code:
import React from 'react';
import ReactDOM from 'react-dom';
import Home from './Home';
import About from './About';
import Login from './Login';
import Navbar from './Navbar';
import { BrowserRouter as Router, Route } from 'react-router-dom';

ReactDOM.render((
  <Router>
    <div>
      <Navbar />
      <Route exact path="/" component={Home} />
      <Route exact path="/about" component={About} />
      <Route exact path="/login" component={Login} />
    </div>
  </Router>),
  document.getElementById('root')
);
Conclusion:
Client-Side Routing is a very cool way for us to make use of the URLs and keep our application running smoothly and quickly. React Router is an awesome library that helps us to do so and is quite easy to master with a little practice. We were able to slap some links on the DOM and get a simple SPA working in no time! Go us!
Resources: | https://medium.com/weekly-webtips/react-client-side-routing-90873b96b429 | ['Jordan T Romero'] | 2020-11-19 18:31:17.336000+00:00 | ['Software Development', 'JavaScript', 'React', 'Frontend'] |
The state of Java [developers] — reflections on Devoxx 2019 | I attended Devoxx Belgium — November 2019. The yearly gathering of over 3000 Java developers (numbers provided by Devoxx website). Maybe not all of them Java and perhaps some not even developers. But by and large … Java and software development are the core themes.
This conference has taken the place of JavaOne as the premier venue for the Java community — to exchange ideas, make announcements, promote open source projects and win the hearts and minds of the community at large. It is a great place to learn, get comforted by the pains that others go through such as much as you are yourself, get answers to burning questions and most of all: be inspired. I leave Devoxx with a head full of plans, ideas, intentions, question and ambitions. It will sustain me for a long time. And if I need more — I will check the online videos at YouTube where almost all talks are available.
In this article — I have tried to “persist” some of the ideas and findings that are spinning around in my head. I am aware — and so should you be, my dear reader — that there is a bit of bias involved. The conference offered curated content: decisions were made by the organizers about what subjects to include in the agenda — and which ones not. Perhaps topics that are very relevant were excluded in that way. I also did not visit all sessions: I chose sessions that fit in with my highly personal frame of mind (even though I try to attend some sessions way out of my comfort zone).
Some of my conclusions are not well founded on objective fact and measurements; they reflect my sense of the buzz and general sentiment at this conference — highly influenced by my own preferences and the people I talked with (and not those I did not talk to). With all these caveats, I believe I did capture something that is relevant — at least to me going forward. At the same time I would like to invite you to add comments to this article, to give me a piece of your mind. What did you learn and conclude? Do you concur with what I deduced or do you have an alternative opinion to share?
Shakers and Movers, Hot and Lukewarm and Cool (or not so cool)
Some technologies, tools, frameworks, standards, themes are hot, others are distinctly not. Devoxx is a great place to get feel for what is happening and what is running out of steam. There are several ways of classifying. In the end, I give this ‘gut feel’ based classification.
Foundational (everyone is using this, no discussion needed): Maven, Java (8), REST (& JSON), containerized, Kubernetes, JUnit, IntelliJ IDEA, Jenkins
Strong contenders (close to widespread or even general adoption, could be a relatively new very promising kid on the block): GraalVM, Kotlin, Quarkus, Micronaut, Visual Studio Code, PostgreSQL, Reactive style, Netty, Microprofile, Go, DevOps, production environment testing (canary, A/B), microservices
To watch: RESTEasy, Knative, Apache Pulsar, Rust
Under pressure (apparently losing ground): Eclipse, Spring, Grails, Scala, Groovy, Reflection & Dynamic Class Loading, Java EE/Jakarta, JBoss/WildFly | WebLogic | WebSphere
Fading into background: Swing
Themes & Messages
Some themes that ran through the entire conference are briefly discussed below.
Java runtime — lean and fast
Prepared for Serverless Functions and Containerized Microservices (dynamic, horizontal scalability on container platforms)
There is a very clear trend of being smarter about building applications to enable being better at running them. By removing stuff at compile time that will not be used at runtime anyway, we can create smaller uberjars and get away with trimmed down Java runtime environments. By inspecting code at pre-compile time, much of the work that is typically done at run time (with reflection and dynamic class loading) can be handled by source code manipulation. This includes weaving in aspect code and manipulating code based on annotations.
Some concrete aspects to this theme:
Smart container image building for Java applications (Jib — for quick, smart image rebuilds)
Smart compile time optimizations (Quarkus, Micronaut — for expanding annotations, chucking out unneeded classes, injecting code)
Native Image (GraalVM, Quarkus, Micronaut — create a native image/platform-specific binary executable with small size, small memory footprint and quick startup)
Do not do expensive runtime stuff such as reflection, dynamic proxies, AOP and dynamic class loading
DevOps — the Developerator
The distinction between development and operations is rapidly becoming meaningless. A separate operations department may be concerned with the platform (Kubernetes and all underlying IaaS). However, application operations are done by the same team that has created and rolled out the software. Testing is increasingly done in Production (with canary workloads), monitoring is being upgraded to provide immediate insight to the Developerators, mean time to repair is reduced through automated build, regression test and (controlled) release. There is no handover to ‘another department’ or even ‘the Ops-people on the team’.
This is not a bane for developers. It is actually a boon. To have a rapid feedback cycle from creating code to having that code being used and seeing the metrics of that usage is exhilarating. Being informed of an issue and being able to analyze the issue, develop a fix and release the solution all in a matter of hours is equally fulfilling. Okay, having to do that at night is not great. So perhaps do not release major changes just before you go home for the day. Or ever: try to break up changes into smaller changes — perhaps using feature toggles or flags to release code that is not necessarily active.
Clean Code
80–90% of IT budget is spent on maintaining and evolving systems, not on building them from scratch. Code is read 10 times more often than it is written. Even the original author of the code will not have any recollection of the how and why of her own code after a weeks’ spent on other topics. Productivity, quality and joy in the lives of developers is increased with clean code — that is easily read and understood. Code whose meaning is clear. What it does and why it does that.
Naming of variables, methods and classes: clear naming is mandatory.
Methods should be short (one page in the IDE — 20 lines of code or preferably less)
Methods should not have more than three parameters
Parameters should not be of type Boolean — at least not used as flags to request alternative behaviors of the method
Methods that return a result should not have side effects; methods that return nothing (void) can have a side effect; make clear in the name of the method what that side effect is
Comments in code should rarely be used — the code should speak for itself. However, comments that reveal workaround for non trivial issues and bugs are valuable. Or that explain a special corner case.
(serious) Peer reviews should ensure that code does speak for itself.
You should only commit code that you are prepared to add to your CV.
Watch: this talk by @VictorRentea on Clean Code: http://youtube.com/watch?v=wY_CUkU1zfw...
Developer — know your IDE! For productivity, refactoring, uniform code [quality], instant testing
Get the most out of your IDE. For productivity and fun. For overcoming fears of refactoring — and to apply refactoring. One of the most important refactoring tools: Extract Method.
For a still increasing number of people, that IDE is IntelliJ. At the same time, there is a meteoric rise in the use of Visual Studio Code. Eclipse is losing ground rapidly. Who even remembers NetBeans or Oracle JDeveloper?
Letting go, Learning and Unlearning, Deprecate
The challenge to get rid of stuff — old ways of doing things, old technologies with unnecessary shortcomings, old fears and long held beliefs — is tremendous. On various levels — from psychological to economic. We have to be prepared to unlearn things — even or maybe especially things we have done and known and believed in for many years. At least be prepared to change and embrace new ways of doing things if they are better. With our experience, we should be able to judge whether they are better — if we can be really honest.
Being able to get rid of technologies that are really of yesteryear is a challenge: there are risks involved, there is no immediate business benefit [so how to find budget, time and priority]. However, continuing on with those technologies is risky and in de long run similarly challenging [ unsupported, vulnerable technology for which no developers and admins can be found, that are unproductive and eventually may not run on the platform, the OS or the hardware].
The Java Platform — OpenJDK, distributions, HotSpot and GraalVM
OpenJDK is an open source project — with sources for the open source implementation of the Java Platform Standard Edition. OpenJDK does not ship binaries. Various companies provide builds or binary distributions based on OpenJDK — a bit like various companies provide their own distributions of the Linux (open source) Kernel. Oracle happens to be one of them — but not necessarily still the leading one. Other builds are available from AWS (Corretto), Azul (Zulu), RedHat, SAP and IBM. Rumours are spreading that Microsoft will soon ship its own build as well. Note: as one source told me, up to 75% or more of the commits on the OpenJDK projects are made by Oracle staff. Some of the negative emotions projected at Oracle may be softened a little if people would be aware of that fact. Oracle is still the by far biggest contributor to the evolution of the Java platform.
The HotSpot Virtual Machine is part of OpenJDK. As such, both the C1 and C2 JIT compilers are there. These compilers have been implemented in C/C++. It has become quite hard to further evolve especially the C2 compiler — although that certainly is going on with for example Java language enhancements such as Valhalla, Panama and Loom. All (?) JVM distributions ship the HotSpot compilers.
Oracle Labs produced GraalVM. Under this umbrella project, several components are worked on. One is a new JIT compiler that can replace the current C2 HotSpot compiler. This compiler is created to better optimize modern Java Byte code patterns that are getting more common for example with Java Byte code originating from Scala code or from modern Java features such as Streams. The GraalVM JIT Compiler can be enabled in existing JVM environments to implement the JIT compiler and as such bring modern optimization patterns (this I believe is the approach taken by Twitter to run their Scala applications).
Also read this quite good Baeldung article.
Note: it seemed at Devoxx that 80% of attendees was on Java 8 and 20% was on later versions already.
The Star of the Show: GraalVM
If I would have to decide what the biggest star of this week of Devoxx was, I would say the prize goes to GraalVM. GraalVM started life as a research project in Oracle Labs — to see if a replacement could be created for C2 — the C++ based HotSpot JIT compiler that had gotten very hard to maintain and optimize for modern code paths. In seven years, GraalVM has expanded quite a bit. It has delivered the JIT compiler that we can now all plug into our JDK environments to improve [in certain cases substantially] the performance of our Java applications. Additionally, GraalVM can run applications written in other — non JVM — languages such as JavaScript/Node, R, Ruby, Python and LLVM languages (C/C++, Rust, Swift, ..) and it can run hybrid or polyglot applications that combine multiple languages.
A feature of GraalVM that played an important role during Devoxx this year is its ability to produce a native image (a stand alone binary executable) for a Java application through Ahead of Time compilation. After normal compilation, GraalVM produces a single binary file that contains everything needed to run the Java application. This binary file starts a small as 10 MB — and it does not need anything else to run. No JRE or any form of Java Run Time. It just runs as native applications — because that is what it is. Startup time is every short and memory footprint is very small — ideal characteristics for Serverless Functions and dynamically scalable containerized applications. Because AOT is applied instead of JIT, there will be no run time optimizations to the application — which for serverless use cases is typically not a big loss at all. Quarkus and Micronaut are frameworks that make great use of GraalVM to produce even faster startup and smaller runtime footprint.
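A minimal native image build looks roughly like this, assuming GraalVM is installed and on the PATH; the class name is only an example.

# install the native-image component with the GraalVM updater
gu install native-image

# compile a class and build a standalone executable from it
javac HelloDevoxx.java
native-image HelloDevoxx

# the result is a native binary with fast startup and a small footprint
./hellodevoxx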
Oracle offers GraalVM in two flavors: a community edition which is based on open source and is offered for free and the enterprise edition which is paid for and offers 24/7 support and enhanced optimizations in the native image as well as improved security. For Oracle Cloud users, GraalVM Enterprise Edition is included in their subscription. Here is the global price list for GraalVM Enterprise Edition — a short inspection suggests a price of $12K to $19K per processor.
The big question around GraalVM is: will Oracle make the Community Edition sufficiently more appealing than OpenJDK with HotSpot to build up real traction and will it not bring too many goodies to the Enterprise Edition? To be fair: GraalVM offers many attractive features and it seems quite reasonable that Oracle stands to make some money for that effort and the value it delivers. Another question: will Oracle be able to follow the evolution of the Java language in GraalVM — such as the language enhancements discussed in the next section. It took until 19th November of 2019 before GraalVM provides full Java 11 support.
Note: GraalVM has been promoted from a research project to a ‘real product’ and the team around GraalVM is growing rapidly. This includes product management in addition to probably more developers and support engineers. GraalVM is serious business for Oracle.
Java Evolution
Java execution is still quite special: the runtime optimization performed by the JVM (C2 JIT compiler) provides many times better performance than static compilers.
Compatibility — old code must still run. Still, the platform managed to absorb Generics, Lambdas and Streams, and a Modular system. What are the driving forces? Changing hardware, changing challenges and changing software [how other languages do things].
Project Amber — productivity oriented language features (Local Variable Type Inference (JDK 10), Switch Expressions (JDK 12), Text Blocks (JDK 13), Concise class declarations (records), Sealed types, Pattern Matching); a short code sample below, after the project descriptions, illustrates a few of these
Java Fibers and Continuations — Project Loom — concurrency, light weight focused at scalability: “A light weight or user mode thread, scheduled by the Java virtual machine, not the operating system. Fibers are intended to have very low footprint and have negligible task-switching overhead. You can have millions of them! Fibers allow developers to write simple synchronous/blocking code that is easy read, maintain, debug and profile, yet scales. Project mantra: Make concurrency simple again” Fibers are built on top of continuations — a low level construct in the HotSpot VM.
(it is to be decided whether continuations themselves in their own right are to be exposed to developers). Associated terms: yield, async, (carrier) thread, executor, Promise, await, park and unpark. Watch: https://www.youtube.com/watch?v=lIq-x_iI-kc
Project Valhalla — “reboot the layout of data in memory” — value types and specialized generics — benefitting from modern hardware — not everything needs to be an object — getting more instructions per CPU cycle by removing the memory [pipeline] bottleneck (first release deep into 2020)
Project Panama — allow easy access to Java developers to native libraries — go beyond JNI (improve on the complexity, lack of JIT optimization, exchanging of native structs and off-heap data structures) and make a native library accessible in Java through JDK generated Interface. This hides away most of the native aspects of what is still a native library (see: https://www.youtube.com/watch?v=cfxBrYud9KM and read Project Panama home page https://openjdk.java.net/projects/panama/) Note: some overlap with GraalVM interoperability. Early access builds are available for Panama.
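To make the Project Amber features mentioned above concrete, here is a small sample that compiles on a recent JDK (switch expressions and text blocks required preview flags around JDK 12/13); the class and values are made up for illustration.

public class AmberSampler {
    enum Day { MON, TUE, WED, THU, FRI, SAT, SUN }

    public static void main(String[] args) {
        // Local variable type inference (JDK 10)
        var day = Day.SAT;

        // Switch expression (preview in JDK 12, standardized later)
        String kind = switch (day) {
            case SAT, SUN -> "weekend";
            default -> "weekday";
        };

        // Text block (preview in JDK 13)
        String json = """
                { "conference": "Devoxx", "dayKind": "weekend" }
                """;

        System.out.println(kind);
        System.out.println(json);
    }
}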
Books
Three special book tips:
Refactoring — edition 2 Martin Fowler, Kent Beck — december 2018 — https://martinfowler.com/articles/refactoring-2nd-ed.html
Apprenticeship Patterns by Adewale Oshineye, Dave Hoover — https://www.oreilly.com/library/view/apprenticeship-patterns/9780596806842/
Java by Comparison Become a Java Craftsman in 70 Examples by Simon Harrer, Jörg Lenhard, Linus Dietz — https://pragprog.com/book/javacomp/java-by-comparison
And a website:
Tools
Below is a fairly random list of tools, sites, services and technologies that came to my attention during this Devoxx 2019 conference. They seem interesting, I would like to try them out. Most of them are as yet unknown. If you can recommend any — do let me know!
Kubernetes Security:
Kube-bench — Checks whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark — https://github.com/aquasecurity/kube-bench
Clair — Clair is an open source project for the static analysis of vulnerabilities in appc and docker containers. — https://coreos.com/clair
Falco (CNCF, find rogue workloads, checks all sys calls — including SSH calls and disk writes), Falco is an open source project for intrusion and abnormality detection for Cloud Native platforms such as Kubernetes, Mesosphere, and Cloud Foundry. Detect abnormal application behavior. — https://falco.org/
Sonobuoy a diagnostic tool that makes it easier to understand the state of a Kubernetes cluster by running a set of Kubernetes conformance tests and other plugins in an accessible and non-destructive manner.- https://github.com/vmware-tanzu/sonobuoy
Harbor (is an open source container image registry that secures images with role-based access control, scans images for vulnerabilities, and signs images as trusted.),
kube-hunter an open-source tool that hunts for security issues in your Kubernetes clusters. It’s designed to increase awareness and visibility of the security controls in Kubernetes environments.- https://kube-hunter.aquasec.com/
open policy agent — www.openpolicyagent.org
These come on top of the more usual suspects such as Helm, Prometheus, Jaeger, Grafana, Maven, JUnit. Java Lambdas and Streams should be every day tools for Java Developers by now. | https://medium.com/oracledevs/the-state-of-java-developers-reflections-on-devoxx-2019-d82b17488301 | ['Lucas Jellema'] | 2020-04-02 09:43:50.872000+00:00 | ['DevOps', 'Programming', 'Devoxx', 'Java', 'Graalvm'] |
Technology and The Evolution of Storytelling | It is such an exciting time to be a filmmaker.
I do not believe the notion that the cinema is dying or dead because it’s amazing what technology can do to the cinematic storytelling.
What’s great about film is it constantly reinvents itself. It started as a sheer novelty, those images moving on the screen.
Then it went and every step of the way a new technology started being added — sound, color.
What happens is the film grammar of storytelling evolves and changes as well. The technology goes directly with the evolution of the storytelling.
The way films look —it started with old 35mm motion picture cameras, to color with the three-strip Technicolor, to cameras that weighed hundreds of pounds and had to be on dollies and cranes — that was the film grammar of the day.
The limitations of the technology being used to shoot the films set up what we’ve learned as film grammar.
Then, we came to lighter cameras, to handheld cameras, steady cams, and on and on, all the way down to now.
There’s a unique thing to a GoPro.
There’s a unique thing to an iPhone — the way things are shot and the way it’s held. It just gives it a vibrancy you’ve never been able to have before.
I believe new film grammar is going to come from these things.
It evolves, it changes, and it’s in great part because of the technology.
Walt Disney in 1939 receiving one Oscar statuette and seven miniature statuettes from Shirley Temple for “Snow White and the Seven Dwarfs”
In my own field, in animation, a seminal film in the history of animation is Snow White and the Seven Dwarfs, Walt Disney’s first feature-length film.
People thought Walt was insane.
“People aren’t going to sit still for a feature-length cartoon. Are you nuts?”
But Walt was a visionary.
Walt saw beyond what people were used to. They were used to the short cartoon.
It’s interesting how people cannot see beyond what they’re used to.
There’s a famous statement by Henry Ford that before the Model T if you asked people what they wanted, they would say, “A faster horse.”
My own partner at Pixar for 25 years, Steve Jobs, never liked market research. Never did market research for anything.
He said, “It’s not the audience’s job to tell us what they want in the future, it’s for us to tell them what they want in the future.”
If you use technology correctly, you can change opinions overnight.
There’s a great statement I love. It’s that you only get one chance to make a first impression.
First impressions are nearly impossible to get people off of if they have the wrong impression.
I remember when I first saw computer animation. It wasn’t being used for much at the time. It was really geometric, sterile and cold, but I was blown away by it. Not by what I was seeing, but the potential I saw in it.
It was true three dimensionality with a control that we had in hand-drawn animation. I saw the potential in computer animation and was like, “This is great. Everybody, can you see this?”
But everybody was saying, “It looks like… It’s too sterile. No, I don’t like it.”
I realized they were judging from exactly what they were seeing.
People always push back saying, “It’s too cold, too sterile.”
In the early days of computer graphics, it found its way into special effects.
There were some people who didn’t understand the medium and thought it could do everything. There was this company that tried when they were making a movie called Something Wicked This Way Comes.
They had worked on Tron, did some effects, and they had a very charismatic effects guy that convinced them they could create this magical circus that would erect itself — this evil circus comes to town.
Disney bought in on it and they worked for a very long time. I had a very dear friend working on it.
It was way beyond what the computer could do at that time. They ended up cutting the entire sequence out of the film.
That set back computer graphics in the effects world years, because everybody remembered that experience.
It was because people didn’t understand what the technology could do.
About six years after that I was working at Lucas Film’s computer division and Dennis Muren, the brilliant Dennis Muren, Effects Director at ILM, came over to me and said, “We have this effect in a film called Young Sherlock Holmes, and we don’t know how to do it. I’m thinking computer graphics.”
It was only six shots. We said, “Let’s try it.”
It was some of the hardest things we ever did, but I’ll never forget when it came out — the effects industry, people from all over the world, had no idea how it was done.
But it worked. It fit in there. It was nominated for an Oscar for best visual effects.
We were so excited. But it was focusing on understanding the technology and pushing it to places that we couldn’t.
The goal was to make the technology invisible.
When we became Pixar in 1986 and we started working towards our first feature film, I remembered all those projects. I was blessed by, number one, loving the medium of computer animation.
I was just so interested in it and working with the people who basically had invented much of computer animation and we were pushing it all along.
We really understood what the computer could and could not do.
Pixar co-founders Ed Catmull, Steve Jobs, and John Lasseter.
At that time when we rendered things, everything kind of looked plastic-y.
So we started thinking about a subject matter that lent itself to the medium at that time.
“Everything looks like plastic, so what if the characters were made of plastic? What if they were…toys?”
That’s one of the reasons why we leaned into toys becoming alive as a subject for our very first feature film, Toy Story.
It was about the toys that lent themselves to the medium at that time. We chose toys that worked for that.
In fact, it was better in CG than any other medium we could have done because we could make Buzz Lightyear feel like he was made of plastic and ball-and-socket joints and we had screws and scratches and decals and all this stuff you could not have done in any other medium.
When it came out, our main focus was not the technology.
What I was scared about was that people would be like, “Oh, it’s the first computer-animated feature film.”
We made sure Disney, and all around the world, didn’t sell it as “The First CG film.”
You sell it as a great motion picture, because that’s how we made it.
We focused on the story and hiding the technology.
It came out and people loved it. You watch it today and it’s just as entertaining as the day it came out.
Woody and Buzz in the original “Toy Story”
Like I said, you’ve only got one chance to make a first impression.
Unlike Something Wicked This Way Comes, Toy Story was the number one film of the year it came out.
It was a huge hit and everybody started looking at this as a viable filmmaking medium.
Overnight, the opinion changed. Because the technology was used in the right way, telling the right story.
Alfred Hitchcock is one of my favorite filmmakers and one of the reasons why I’ve studied and admired his films is that guy used new technology in incredible ways, but it was completely invisible in everything he made.
You study his films and realize there’s no way he could have made that film, that shot, without that technology.
But he didn’t want you to notice it.
We focus on entertaining people in new ways, and if you focus on the technology too much you get caught up.
It’s not the technology that entertains people, it’s what you do with the technology.
It’s important, I believe, to make the technology invisible, but have it push to do something new.
That’s when you make real breakthroughs.
If you love a technology, if you really, really, really, really love a technology, then dig into it.
Learn as much as you can. It’s fun. That’s what I did with CG.
From left: Pete Docter, Andrew Stanton, John Lasseter, and Joe Ranft, nominees, Writing (Screenplay Written Directly for the Screen) (TOY STORY), at the 1995 (68th) Academy Awards Nominees Luncheon.
I was trained by these great Disney artists. I drew. It was all about story, character drawing, all that stuff, but when I got into computer graphics I was like, “Oh, my god, this is so much fun.”
I wanted to learn as much as I could.
The more you dig into the technology and the more you learn it, you are going to get ideas you would never have thought of without knowing your technology.
The kind of shots you can get from an iPhone that you cannot get with any other camera. Use it.
GoPros: use it. Be inspired by it.
Try things. It’s digital. Get another memory card, for God’s sake.
You will start creating ideas that lend themselves to these things and start looking new.
When you start doing something that’s truly new you will hear, “It’s not going to work.”
Walt Disney heard it. I heard it with CG.
“ Computer animation is so cold.”
Really? No, I don’t think so.
You think about it, it’s true for color, sound, feature length animation, CG.
The first feature film shot on an iPhone? “That’s not going to work.”
Yeah, it’s going to work. It’s going to be awesome.
The first feature shot with a GoPro? It’s going to be awesome in the hands of the right people.
The reason why they say this is because it’s not what people are used to.
Before the Model T, you ask people what they want and they’re going to want a faster horse. It’s not what they’re used to.
When I started working with CG, I could not wait for the tools to become commonplace.
In the early days, when SIGGRAPH was the only place you could go and see computer graphics, it was always fun. Everybody would cheer for reflective clear balls floating over a checkerboard and be amazed by it.
It was in a world where all of the art and the CG was being created by the guys who were writing the program.
There was no such thing as off-the-shelf software. There were no tools available.
They were writing their software and then creating it, and they were kind of the artistic guys within the computer world.
They were just showing off the technology. I kept thinking to myself, “Yeah, but they’re really ugly. This is like boring. Let’s entertain people.”
I couldn’t wait because I always viewed the technology as simply a tool.
Can you imagine the guy who invented the pencil and all of the things that that invention has brought the world?
That’s what I was feeling like with CG.
I couldn’t wait to get it in the hands of everybody to see what they would do.
The mediums we use are simply tools for expressing your art.
Your goal as a filmmaker is to entertain. And to entertain people is about story.
It’s about characters.
It’s about connecting with that audience.
It’s making that connection where you really deeply entertain an audience.
But it’s not just an art form that we’re in. It’s a business. Entertaining stuff simply just does better.
If you can make people laugh, cry and feel things with a film you make, you will be successful.
No matter what medium, any way you’ve distributed it — it all comes down to your knowledge skills.
What makes a good story? How can I tell it properly?
People get so excited about new technologies. I’ve had the question so many times from young people, “What software should I use?”
You know what? In your lifetime the software and the technology will change so drastically, it doesn’t matter.
What matters is when you’re young, you get excited about learning the fundamentals.
It sounds so boring to young people when they can make a movie so quickly and release it to the world and get millions of Likes.
“It’s so boring. I know how to do that.”
Trust me, you don’t.
The fundamentals of good storytelling, the fundamentals of film grammar, even though it was made with old Mitchell cameras and stuff like that, learn it.
Learn the fundamentals of animation. Learn the fundamentals of physics and things like that, of basic color, basic design.
This is the foundation of the building of your career.
Then, as you get into new technology, you’ll know exactly what to do.
And your work will not be about the technology. It will be about connecting and entertaining people.
No matter the length of your film — 30 seconds, five minutes, 22 minutes, feature length — it needs a story. It needs a beginning, a middle and an end.
It needs to deeply connect with people.
There are big differences between storytelling at 30 seconds or a feature film. Big differences.
We did a series of short films in the beginning of Pixar and we did television commercials.
We were thinking the next step for us was to do a Christmas special, but Disney threw us in the deep end, and we developed a feature film.
It was amazing what we didn’t know.
But I went back to my traditional training I had learned from my mentors — Frank Thomas, Ollie Johnston, and the great Disney animators that were still working at the studio when I started there — and the fundamentals of animation they kept talking about.
Ollie Johnston would turn to me and I was expecting something about arcs and lines and silhouette value and all that stuff.
He would turn and say, “John, what’s the character thinking?”
It was amazing to me, just that simple statement. It was not about the drawing.
It was never about the drawing to them. It was about that character and what it’s thinking.
Through pure movement they taught me to bring a character to life and give it an emotion, a personality, a uniqueness, and it was done through just pure motion.
So when I started working with a computer, I just brought that technology with me. As we started developing the story, it was always about emotion. It was always about emotion from day one with Toy Story.
It was about emotion, making you feel.
I’ve admired Walt Disney so much my whole life and part of it is because he entertained people like no other person in history has ever done. The way he makes you feel when you watch his movies, the way he makes you feel when go through that tunnel under the train station at Disneyland and you’re transported.
It’s about emotion and that connection.
Walt always said, “For every laugh, there should be a tear.”
It felt like that core emotion. That became the hallmark of what we tried to do at Pixar — to do it with the new technology.
I think the biggest thing for us is we studied films. We watched films religiously.
With Toy Story, it was a buddy picture. We watched every buddy picture we could find and analyzed it. Good ones, and it’s very important to watch bad ones too.
You start understanding what they did. Don’t copy things. It’s about understanding and learning.
Very, very, very important: Do not work in a vacuum.
You have to surround yourself with trusted people. You get so immersed in your work, you will not be able to see the forest from the trees. Frankly, you’ll be studying the pine needles and worrying about them.
You need someone to help you back up and take a look at the forest and see where things are working or not working.
And you need to surround yourself with people whose judgment you trust and they can be brutally honest with you.
As an artist, showing unfinished work to people is really difficult. It’s really hard. It always is hard. It always will be hard. It never gets any easier, but you have to do it.
Andrew Stanton, my creative partner at Pixar, has this fantastic phrase that I use all the time, “Be wrong as fast as you can.”
Trust me, when you go from an outline to a treatment, your first treatment sucks and you do revisions and talk to people and you get something working really great.
Go to your first draft of the script, it sucks. You do it a whole bunch of times.
For us, we go to story reels, the first story reel sucks. But the longer you say, “I’m not ready yet, give me a little more time, give me a little more time,” and like that, it’s not going to help the problem.
You’re just going to be polishing. You’re not going to see where it’s not working.
Get it up there. Throw it up there as fast as you can, talk about it, tear it back down, put it back up there. Keep doing this.
Surround yourself with people you trust.
Be thirsty for knowledge.
It will always make your work better. The market is changing really, really quickly.
Who knows what the business will look like ten years from now?
I know one thing for sure. | https://medium.com/art-science/technology-and-the-evolution-of-storytelling-d641d4a27116 | ['The Academy'] | 2015-06-25 00:58:48.313000+00:00 | ['Storytelling', 'Technology', 'Movies'] |
So I am very proud of myself having accomplished several things today! | My own photo of my new office space
So I am very proud of myself having accomplished several things today!
I managed to switch hosts of my domain all by myself with just some advice from my friend Grayson. I paid for another year of my blog with my own money (from my part-time real job). Found and freed my heater from under my hubby's workbench, so now I’m not freezing to death as I write from my new office space.
My own photo-found my heater!
I have longer to write because I got a reprieve from not just work, but a family obligation.
Now that I’m “cooking with gas,” let’s go! | https://medium.com/100-naked-words/so-i-am-very-proud-of-myself-having-accomplished-several-things-today-b31c3aa381 | ['Kim Smyth'] | 2020-02-04 16:01:01.140000+00:00 | ['Productivity', 'Work Life Balance', '100 Naked Words'] |
Why are so many coders musicians? | I am currently at the beginning of my career as a developer. I came in ‘through the back door’ so to speak. I’ve been a musician my whole life, and it was through my interest in building music tools that I got into software.
Going from music to code was a gradual transition. It took me some time to make the decision because of my fear of alienation. I didn't want to turn into a cubicle zombie, typing mindlessly into a computer all day long, detaching from life and art. I have had a couple of corporate jobs before and couldn't take it. For the creative type, alienated work sucks the marrow out of life. Musicians need creative expression in their work and have low tolerance for soul-crushing jobs.
Even though coding implies staring into a screen for long hours, to my surprise, I found it was not alienating. In it I found a new way of expressing creativity. Just like working in music, either producing, composing, or playing, it didn’t feel like actual work. Why was that? In addition to this, I noticed many developers were musicians. After a few months in the industry, I realized this was not a coincidence.
Finding musicians made me realize my fears were unjustified and it also made me wonder why many coders were musicians. What are the commonalities between the two professions that make this relationship?
In this three part series of posts, I’ll talk about the different qualities that relate these two professions.
Long-term commitment
There seems to be a quality of focus for musicians and for coders. That reserve and focus is needed for people to be able to concentrate and develop skills for the long term. Staying on track and persevering through continuous frustration is a personal trait I find in both disciplines.
Developing musicianship requires long-term commitment and a continuous training of brain plasticity to incorporate fine hand movements in instrumentality, to train the ear to distinguish between notes, chords and timbres, to learn how to read scores, and to transform the theoretical abstraction of harmony, counterpoint and instrumentation into mental representations of sound.
As a beginner coder I’ve found myself in a similar process. Learning the fundamentals and becoming comfortable and creative with them requires a maturity of the concepts that takes a long time. A lot of the concepts in programming are abstractions that you can’t relate to day-to-day experiences, hence, they require a long time to settle in.
This being said, though the nature of the two disciplines requires a similar mindset, it doesn’t mean the skills are the same. I don’t think logic and algorithmic thinking translate directly into music, which requires knowing how to count and having a good ear and coordination. At the same time, I don’t see how these last skills would translate into coding. A lot of musicians can’t code and a lot of coders couldn’t be musicians no matter how hard they tried. Nevertheless, the process by which you gain the skills is similar and rewards the kind of personality that is able to engage in long-term practice and learning.
Thanks for reading. Please continue with the second part of the series, with other relationships I have found between music and coding. | https://medium.com/hackernoon/why-are-so-many-coders-musicians-60389fb8b645 | ['Francisco Rafart'] | 2018-03-08 13:19:10.481000+00:00 | ['Software Development', 'Technology', 'Programming', 'Learning', 'Music'] |
What Is the Planning Fallacy and How to Beat It Down (9 Useful Tips) | The planning fallacy is a prediction phenomenon. It occurs that people underestimate the time it will take them to complete a task.
It’s all too familiar to many of us.
…and it continues despite knowing that previous tasks have taken longer than planned.
The planning fallacy was first proposed by Daniel Kahneman and Amos Tversky. They presented their theory in an influential 1979 paper.
Let me explain.
The study “Exploring the “Planning Fallacy”: Why People Underestimate Their Task Completion Times”:
“37 psychology students were asked to estimate how long it would take to finish their senior theses. The average estimate was 33.9 days. They also estimated how long it would take “if everything went as well as it possibly could” (averaging 27.4 days) and “if everything went as poorly as it possibly could” (averaging 48.6 days). The average actual completion time was 55.5 days, with only about 30% of the students completing their thesis in the amount of time they predicted”.
Turns out students’ actual completion time was a remarkable 21.6 days longer than their best estimate (55.5 days to 33.9 days).
Other scientists, in “ An Economic Model of The Planning Fallacy” (2008), say:
“Faced with an unpleasant task, people tend both to underestimate the time necessary to complete the task and to postpone working on the task. Thus projects often take inordinately long to complete and people struggle to meet, or even miss, deadlines”.
Why Does it Happen and How to Beat it?
Let’s see why we assume we have more time than we actually do.
Then we will fix it in an instant.
The planning fallacy can make it difficult for us to complete tasks. Things like:
being on time for a meeting
meeting the application deadlines for college scholarships
getting ready for your plane departure
filing taxes
doing referee reports
planning for retirement
making investments in your health
…and many others (also referred to as “life”).
The planning fallacy can influence your health and work satisfaction.
Assuming you have more time than you do is the quickest route to:
stress
overwork
a lack of productivity
burnout
Consider that Mr. Average and the Sydney Opera House are in the same boat when it comes to the planning fallacy. The Australian government first commissioned the project in 1958. They set the expected completion date for 1963. Yet, it didn't open until 1973, ten years late.
Happens to the best of us.
Planning Fallacy — 9 Ways to Overcome it
What if you could fix the cognitive bias that causes the planning fallacy to happen?
Let’s break it down.
Below, you’ll find 9 ways to overcome this cognitive bias.
1. Take an Outside View
Kahneman and Tversky believe that people lean towards an “inside view”.
They focus on the specifics of the task at hand, paying special attention to its unique features.
For example:
People imagine and plan out the specific steps they will take to carry out the target project.
Do you know what’s the problem here?
Events usually don’t unfold exactly as we imagine (not to mention — never).
We love to create a thoughtful mental scenario in advance, but we will likely encounter:
unexpected obstacles
delays
interruptions
Try to make more realistic predictions. Take an “outside view”. Be smarter than your cognitive bias.
Overcome your own (incorrect) subjectivity.
Do not base your estimates on your own frame of reference.
Base your predictions on your prior experiences so you don’t fall into the trap of thinking that your previous experiences aren’t relevant to the new task.
Here’s what happens:
People recognize that their past predictions have been over-optimistic.
Yet, they insist that their current predictions are realistic.
We are complicated creatures, aren’t we?
2. Be a Pessimist
What can go wrong, will go wrong — states Murphy’s Law.
Sad as it seems, it’s pretty useful when you have work to do.
Your projects won’t run perfectly, even with your best intentions at heart.
Approaching planning from a “negative”, i.e. risk management standpoint will help curb enthusiasm.
Here is some handy advice to follow:
Set a realistic deadline, then add a little buffer time (for example, 20 percent of your estimate, so a 10-day estimate becomes a 12-day plan). Recalculate as needed
Focus on your lists. Only spend time and energy on what needs to get done
Consider what could go wrong. Think of how you can respond
3. Resist the Autocracy of the Urgent
Kat Boogaard of Trello explains:
“Our brains have the not-so-helpful tendency to conflate real, productive work with those other small, menial, and mindless tasks. By totally pushing those out of your mind (and off your to-do list) for now, you won’t be tempted to color-code your inbox when you should actually be completing that presentation that’s due in two hours.”
We tend to put important tasks aside and deal with urgent tasks.
Why?
Because they provide us with a rapid sense of accomplishment, and our brains love that.
Urgent tasks need your immediate attention. Phone calls, meetings, tasks with tight deadlines — they want you to take quick action.
These tasks don’t help advance long-term goals. Important tasks do.
To break this vicious cycle, understand the difference between urgent and important tasks.
How well you distinguish between urgent and important tasks influences your future success.
4. Make Use of the Pomodoro Technique
Let’s jump right in:
For many people, time is an enemy — says Francesco Cirillo.
He’s an Italian entrepreneur and the creator of the time-management method known as the Pomodoro Technique.
When Cirillo was a student, he created his own simple study habit. He used it to maximize his productivity and reduce a feeling of burnout.
It’s all about tracking your time to get a more realistic handle on your projects. Especially on how long specific projects and tasks take you.
The Pomodoro technique teaches us to work with time, not against it.
How to put it into practice?
This technique focuses on working in short, focused bursts, classically 25 minutes, though 20 to 40 minutes also works. Then you give yourself a brief break to recover and start over.
The technique requires a timer. It allows you to break down your large complex task into manageable intervals.
Once you start a task, you aim to finish it before attending to urgent but unimportant tasks.
5. Declutter Your Day of “Time Bullies”
Your working time is special. It’s important. Care about it.
When you work, your time is for your work. Stick to it.
Say “no” to unwanted cigarette breaks and gossiping with office co-workers.
Let’s say it again: When you work, you work.
Learn to say “no” to those who don’t respect it.
Saying “no” gives you time to focus on your creative efforts.
Don’t get suckered into tasks or leisure you don’t have time for.
It may be hard sometimes, but use your assertiveness. Know how important your working time is for the development of your idea.
Say “yes” or “no” when you mean it.
6. Break Big Tasks Into Smaller Ones | https://medium.com/swlh/what-is-the-planning-fallacy-and-how-to-beat-it-down-9-useful-tips-24d967c8d5eb | ['Dan Silvestre'] | 2020-12-11 03:51:54.291000+00:00 | ['Planning', 'Procrastination', 'Productivity', 'Time Management', 'Self Improvement'] |
Spent $50 to Discover My Strengths — Was It Worth It? | Life-long learning has become a popular concept against unemployment and economic crises. But what are you supposed to learn? It is crucial to understand your strengths and cultivate them properly instead of leaning into your weaknesses.
Stop focusing on your shortcomings!
When it comes to being successful, there is a myth that you need to work on your weaknesses and overcome your shortcomings. That, however, is not true. The authors Marcus Buckingham and Ashley Goodall have dedicated a chapter to this myth in their book “Nine Lies About Work: A Freethinking Leader’s Guide to the Real World”. The lie, they say, is that “the best people are well-rounded”.
In reality, it requires plenty of time and energy to become well-rounded and to cover your weaknesses. Instead, they claim, the most successful people have learned to develop their talents and cultivate them conscientiously. Whether in sports or business, achievers know what they are good at, and they dive into the talents that not only generate good results but also create joy.
In their book, they mention Gallup’s CliftonStrengths 34 test. It’s a sort of personality test with 117 items that tells you which of 34 possible themes are your core strengths. I was intrigued.
Like many people, I’ve come across the question “What are your biggest strengths?” in job interviews. Usually, I’d come up with standard phrases out of lack of understanding and framing. I always considered this question to be just a test to see how you sell yourself and if you are rather a humble or bragging character.
It seems that knowing your strengths specifically can help you. Not only in job interviews, but in focusing on what matters for your individual career path.
Would you prefer talking to a historian or a futurist?
The test looks quite typical. It includes 117 questions, each of which you answer within 20 seconds. You are given a scale with two extremes like “I like to talk to others” and “I like to spend time alone”. Then you need to decide which option on the spectrum describes you best. Sometimes, the statements appear to have little in common or to be of equal quality. To me, the question of whether I’d prefer talking to a futurist or a historian was hard to answer. In practice, I’d love to hear both people.
The strengths finder is based on the work of psychologist Donald Clifton. In his view, psychologists and assessments usually focused on what was wrong with people and not on what makes them excel. Therefore, he dedicated his research to developing a tool that helps teach people what their talents are.
If you struggle to label and reflect on your strengths, it can be a game-changer to finally know what to focus on. So, I gave it a shot. I paid roughly 50 US dollars to access the assessment and answered all the questions in roughly 35 minutes.
Then, the results poured in. You get a strengths report after completing your test and it shows you your 10 talents and highlights the top 5. The talents fall into four categories: Execution (e.g. discipline and consistency), Influencing (e.g. communication and activating), Relationship Building (e.g. empathy and harmony), and Strategic Thinking (e.g. futurist and analyst).
My profile was dominated by strategic thinking with five talents and influencing with four skills. My top 5 reflect this pattern:
Source: Screenshot by author; Gallup.com
What’s fairly useful is what you receive in the end: a report with your 10 main strengths, descriptions of what they entail, and advice on how to strengthen them. Additionally, there are short explanatory videos with some insights on how to progress with each talent.
Was it worth the 50 Dollars?
Were the results surprising? Yes and no. I’ve known that I’m inclined toward strategic thinking and prefer the big picture to completing precise, detailed tasks. However, what I find useful is that the results provide me with the right vocabulary to describe my strengths and generate insights on how to become a better version of myself.
After all, each person has individual strengths and a unique path, and there is no use in trying to adapt and copy somebody else’s ways. For instance, it is no surprise that I’d love to speak with both the historian and the futurist, as apparently my main talent is Input: collecting information. Moreover, I will follow the advice I was given for that talent: I will be more selective and careful regarding the information and media I consume, as I want to focus on fruitful insights.
Additionally, the test results help me to look for a career development that fits me. I will clearly try to avoid jobs with a large chunk of administrative tasks.
In comparison to Jordan Peterson’s self-assessment, which I’ve done in the past, the CliftonStrengths test gives you a specific vocabulary and direction for your professional life. Peterson’s self-assessment delivers a broader picture on the path of understanding yourself in all areas of life.
If you have some spare money, it’s worth investing that in yourself. I’m a firm believer that understanding yourself is a crucial step to carving the life that is right for you.
| https://medium.com/datadriveninvestor/i-spent-50-to-discover-my-strengths-was-it-worth-it-c6dd8de8a2c6 | ['Alice Greschkow'] | 2020-12-03 17:12:26.541000+00:00 | ['Work', 'Self-awareness', 'Careers', 'Professional Development', 'Self Improvement'] |
Honest Thoughts From a Veteran About Gun Control and Mental Health | Since leaving the military, I’ve been surprised by the number of people I’ve encountered who want to show me their AR-15s (the civilian version of an M4 or M16). They show them off as if we suddenly have a bond because they bought a weapon I went to war with and can strip apart blindfolded.
I always ask the same question: “What made you choose the AR-15?”
With my fellow veterans, it’s an easy answer. The AR-15 is the weapon we’re most familiar with. Durable. Lightweight. Personally, it’s what I’m most comfortable hunting with (especially boar).
Some of the responsibilities that come with being a soldier include gun control.
A few of the people I’ve asked have responded with legitimate and responsible answers. A few others were collectors. But after hearing most people’s reasons, my internal response is the same as my fellow sergeant’s outburst. Basically, these people just want to look cool. They want to play “military” without ever enlisting. Some have even shown me that they have the latest tactical gear, leaving me wondering why they want all the bells and whistles that come along with being a soldier, yet none of the responsibilities. And believe it or not, some of the responsibilities that come with being a soldier include gun control.
Why the military does a better job at gun control than anyone
One thing that has baffled me over the years is that I can go to the grocery store and buy a pack of Tic Tacs and then walk across the street and buy a gun. I’m not baffled that I can buy a gun, as I believe it’s an important liberty to have. But what concerns me is the ease and utter lack of training required to buy a tool that has no purpose other than to kill something.
A knife can be used for cooking and a bat for baseball. But a gun? Unless you’re collecting them for a museum, the point of a gun is to kill something.
Let me give you a breakdown of how the military gets gun control right, and society has the process backwards:
When you enlist in the military, you spend several weeks learning weapons safety and training. Before you are ever allowed to fire a weapon, you must be able to disassemble the rifle, clean it, and then reassemble it. You take tests and quizzes asking you questions pertaining to the distance and speed a bullet can travel. Once you pass your exams, you will then be allowed to fire the weapon under the supervision and training of drill sergeants and weapons experts. Finally, you must qualify with your weapon on targets. If you’re unable to do that, they will not allow you to graduate from basic training.
Even overseas, training and practice is vital towards safety.
In the military, every weapon has a serial number. If that weapon gets lost or misplaced, they know “who done it” and there are serious repercussions. If you own a personal weapon, you must register it with the base you’re stationed at.
In combat or on duty, if it’s determined you’re mentally unfit to carry out your duties, your weapon is confiscated. You’ll then go through counseling until you’re deemed fit to once more carry out your duties. I’ve seen it happen on more than one occasion.
Now compare that to the process of any random person buying a gun in the United States, where there’s no required training and no one who determines whether you’re mentally sound.
“See! This is why we need common sense gun laws,” you say
We’re all tired of mass shootings and know something needs to change. Like you, I fear for my child’s safety at school, and I believe there are reforms that need to take place. The current battle cry is “common sense gun laws” but when pressed, most people can’t articulate what the hell that means aside from emotionally vague sentiments.
The most common argument for gun control is to ban certain styles of weapons from the populace or ensure greater safety measures — but you’re more likely to hear polarizing extremes when this is discussed, as opposed to a thoughtful and well-informed debate. Let me show you an example of how most people don’t know what they’re talking about.
Pop quiz, HotShot! (Bonus points if you get the reference.)
Below is a photo of your standard AR-15 with collapsible buttstock, rail sights for optics, and a 30-round magazine. Below that is a Ruger Mini 14.
An AR-15 SAINT | Springfield Armory
A Ruger Mini 14 | Ruger
Can you tell me the difference between the two? Which one of these should be banned (and why) under “common sense” gun laws?
Most people will point to the AR-15 as the weapon that needs a ban because they associate the AR-15 with mass shootings. Aside from that, they know little to nothing about guns. When I ask about the Ruger Mini 14, however, people assume it’s just your standard hunting rifle. But nothing could be further from the truth.
Both weapons are semi-automatic rifles that fire .223 caliber bullets. Both have the option of collapsible buttstocks, advanced optics, and customization. In fact, here’s a Ruger Mini 14 with custom, tactical options:
A Ruger Mini 14 Model 5846 | Ruger
Yet, there is no difference in the lethality between the Ruger mini 14 with tactical options and the version without. It just looks different because one has wood paneling. Thus, when most people say “common sense gun laws” they’re really saying “ban weapons that look scary.”
Another argument is to ban semi-automatic weapons. The only problem with that is that you’d need to ban pistols, most rifles, and some shotguns if you want to go with that category. A semi-automatic weapon is nothing more than a weapon whose firing mechanism you don’t have to re-cock. When the bullet shell ejects, it loads the next bullet into the chamber. The speed at which you can fire the weapon depends solely on how fast you can pull the trigger.
Potential solutions
If we want to impart common sense gun laws and not let our emotions or misinformation dictate the outcome, perhaps we should follow the same lead as the military:
1. You’re required to go through training and orientation first. You must be able to disassemble the weapon and clean it, plus know the difference between bullet calibers and rate of fire. Afterwards, you’ll be supervised by a weapons instructor and pass target practice. Then, and only then, will you be allowed to take the weapon home. (Veterans and law enforcement are exempt.)
2. Your weapon will be registered with the local police and you’re responsible for its whereabouts. (This needs to be true to a greater degree than what we have in place now.) Parents, if your kids get hold of your weapons and do something dumb, you’re responsible in the same way that you would be if they were driving your car and hit someone. There are repercussions on your end too.
3. Required mental health and criminal background checks. (It’s embarrassing that I even have to state this.) This is perhaps the largest part of the issue that doesn’t have any proper solutions.
If we don’t address mental health, we miss a huge part of the problem
In October 2016, one of my good friends killed herself. I was the last person to speak with her before she put a gun to her head and ended her life.
When I found out that she’d committed suicide with a firearm, I was angry. She should have never been allowed to buy a weapon in the first place. I knew firsthand of her long history with mental illness, and her issues were well documented with psychiatrists. She even informed her clinician that due to a recent divorce, she was struggling with suicidal ideation. A few days prior to ending her life, she bought a handgun. That weapon would be how she exited this world.
As someone who works inside the mental health industry, I’ll be the first to tell you that this is a major issue left unaddressed. I’ve been working on a new program (with a mental health expert) to help men and women combat depression. Often when I speak with men and women about their depression, anxiety, or other mental health issues and I ask what they think the reason behind their issues is, they say, “I don’t know.”
However, I have a hunch that the men and women responding do know; they’re just avoiding a deeper or more intimate conversation. So we surveyed over 500 men and women and the results were staggering. In the survey, our respondents could choose as many answers as they wanted that felt relevant to their situation. We asked things like “Does past pain or trauma play a part in your depression?” and “Are emotional or relational issues part of your depression?”
75% of respondents said stress and difficult life situations were the number one reason behind their depression. Relational issues with others and not knowing how to handle their emotions also affected 60% of those who responded. Unresolved past pain and lacking purpose or direction in life accounted for more than half of all respondents. Less than 5% said they “didn’t know” the reason behind their depression, and that number is misleading because almost every person chose another reason in addition to “I don’t know.”
What we’re seeing is an entire generation of people who no longer have the skills necessary to face adversity or learn how to become resilient men and women. Bullying in schools is endemic, and it’s no longer just the kid from the traumatic home who’s lashing out. It’s the “mean girls” and the cool kids asserting dominance and slighting even their closest friends because that’s what they see on social media and from celebrities. More people are more lonely and isolated than ever before and they’re lashing out, too. Men, in particular, have toxic views of masculinity, like: “Real men do everything on their own. Real men don’t cry. Real men express anger through violence.”
I’ve heard people tell me “I got bullied and didn’t shoot anyone.” Yeah, me neither. I even got stitches from a school bully. But there’s a saying in the military’s SERE school (Survival, Evasion, Resistance, and Escape), which is where you learn to experience torture as a prisoner of war: “Everyone breaks.”
You’ve probably heard that phrase in the movies or from political hearings on torture, too. Once you’ve experienced bullying, isolation, or trauma long enough, you too will snap. | https://humanparts.medium.com/honest-thoughts-from-a-veteran-about-gun-control-and-mental-health-c74930488e28 | ['Benjamin Sledge'] | 2019-10-16 16:22:40.704000+00:00 | ['Guns', 'Mental Health', 'Gun Control', 'Military', 'Life'] |
How to land a career in UX design with zero qualifications | What should I keep in mind?
Focus on developing your skills, not your job title.
Your job will change in the future, so learn to adapt to new and emerging trends in the market. UX design was not a thing when I was at university, so don’t bet that it’s here to stay. The skills I acquired on the way from auditor to digital marketer have led me to where I am now, and have in fact equipped me with the skills that inform what I currently do in my UX role.
Don’t stick to a plan; stick to a goal.
Always have a beacon of light that serves a higher purpose. Your plan today is probably not the same as it was five years ago, nor will it ever be stagnant going forward. So find out what it is that you want to do as a lifelong goal; whether that’s to help other people achieve their goals or to create products for a better future. You might not know what it is now, but your experiences over time will shape and inform your understanding.
If you don’t like where you are, then do something about it.
There have been too many times where I’ve heard people complain about how they hate their job or how they wish they could quit sooner. In all honesty, I was once in that position too. Understandably, it’s hard to break out of the mentality that “there will never be a better opportunity after this” or that “it’d be pointless to leave now”. It’s okay to feel that way but don’t let it become an unhealthy blocker that stops you from pursuing what you really want to do. Remember, change won’t happen until you do.
Don’t give up! Learning takes time.
You’re not going to achieve everything in one go, so set yourself some goals that can be achieved within a reasonable time frame. If you’re currently unemployed and feeling disheartened, don’t give up! Use this time to invest in yourself and fill in the gaps of what you currently don’t know. Continue to build up your portfolio with new experiences using the Learning Framework and check in with a mentor or a friend to keep you on track. Good luck! :) | https://uxplanet.org/how-to-land-a-career-in-ux-design-with-zero-qualifications-16ddcb3b3eda | ['Gloria Lo'] | 2019-04-17 12:34:42.273000+00:00 | ['Design', 'Career Change', 'UX Design', 'Design Process', 'Careers'] |
Interspeech 2018 Highlights | This year the Sciforce team travelled as far as India to one of the most important events in the speech processing community, the Interspeech conference. It is a truly scientific conference, where every speech, poster or demo is accompanied by a paper published in the ISCA journal. As usual, it covered most speech-related topics, and even more: automatic speech recognition (ASR) and generation (TTS), voice conversion and denoising, speaker verification and diarization, spoken dialogue systems, language education and healthcare-related topics.
At a glance
● This year’s keynote was “Speech research for emerging markets in multilingual society”. Together with several sessions on providing speech technologies to cover dozens of languages spoken in India, it shows an important shift from focusing on several well-researched languages on the developed market to a broader coverage.
● Quite in line with that, while ASR for endangered languages is still a matter of academic research funded by non-profit organizations, ASR for under-resourced languages with a sufficient number of speakers is attractive to industry.
● End-to-end (attention-based) models are gradually becoming the mainstream in speech recognition. More traditional hybrid HMM+DNN models (mostly based on the Kaldi toolkit) nevertheless remain popular and provide state-of-the-art results in many tasks.
● Speech technologies in education are gaining momentum, and healthcare-related speech technologies have already formed a big domain.
● Though Interspeech is a speech processing conference, there is much overlap with other areas of ML, such as Natural Language Processing (NLP) and video and image processing. Spoken language understanding, multimodal systems and dialogue agents were widely presented.
● The conference covered some fundamental theoretical aspects of machine learning, which can be equally applied to speech as well as to computer vision and other areas.
● More and more researchers share their code, so that their results could be checked and reproduced.
● Ultimately, ready-to-use open-source solutions were presented, e.g. HALEF, S4D.
Our Top
At the conference, we focused on topics related to application of speech technologies to language education and on more general topics such as automatic speech recognition, learning speech signal representations, etc. We also visited two pre-conference tutorials — End-To-End Models for ASR and Information Theory of Deep Learning.
Tutorial 1: End-To-End Models for Automatic Speech Recognition
This tutorial, given by Rohit Prabhavalkar and Tara Sainath from Google Inc., USA, was undeniably one of the most valuable events of the conference, bringing new ideas and uncovering some important details even for quite experienced specialists.
Conventional pipelines involve several separately trained components such as an acoustic model, a pronunciation model, a language model, and 2nd-pass rescoring for ASR. In contrast, end-to-end models are typically sequence-to-sequence models that output words or graphemes directly and simplify the pipeline greatly.
The tutorial presented several end-to-end ASR models, starting with the first such model, Connectionist Temporal Classification (CTC), which receives acoustic data at the input, passes it through an encoder and outputs a softmax representing the distribution over characters or (sub)words, and its successor RNN-T, which incorporates a jointly trained language model component.
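To make the CTC idea more concrete, here is a minimal sketch of computing a CTC loss with PyTorch's built-in implementation. It is not from the tutorial; the shapes and vocabulary size are made up purely for illustration.
# Minimal CTC loss sketch (illustrative only; shapes and class count are assumptions)
import torch
import torch.nn as nn
T, N, C = 50, 4, 30  # encoder time steps, batch size, output classes (index 0 = blank)
log_probs = torch.randn(T, N, C).log_softmax(dim=2)  # stand-in for the encoder's softmax outputs
targets = torch.randint(1, C, (N, 10), dtype=torch.long)  # reference label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)
ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
print(loss.item())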
Yet most state-of-the-art end-to-end solutions use attention-based models. The attention mechanism summarizes the encoder features relevant to predicting the next label. Most modern architectures are improvements on Listen, Attend and Spell (LAS), proposed by Chan and Chorowski in 2015. The LAS model consists of an encoder (similar to an acoustic model), which has a pyramidal structure to reduce the time resolution, an attention (alignment) model, and a decoder, which is analogous to a pronunciation or a language model. LAS offers good results without an additional language model and is able to recognize out-of-vocabulary words. However, to decrease the word error rate (WER) further, special techniques are used, such as shallow fusion, where a separately trained language model is integrated and its scores are combined with the decoder’s scores when choosing the final output.
Tutorial 2: Information theory approach to Deep Learning
One of the most noticeable events of this year’s Interspeech was a tutorial by Naftali Tishby from the Hebrew University of Jerusalem. Although the author first proposed this approach more than a decade ago and it is familiar to the community, and even though the tutorial was held as a Skype teleconference, there were no free seats at the venue.
Naftali Tishby started with an overview of deep learning models and information theory. He covered information-plane-based analysis, described the learning dynamics of neural networks and other models, and, finally, showed the impact of multiple layers on the learning process.
Although the tutorial is highly theoretical and requires a mathematical background to understand, deep learning practitioners can take away the following useful tips:
● Information plane is a useful tool for analyzing behavior of complex DNNs.
● If a model can be presented as a Markov chain, it would likely have predefined learning dynamics in the information plane.
● There are two learning phases: capturing inputs-targets relation and representation compression.
Though his research covers a very small subset of modern neural network architectures, N. Tishby’s theory spawns lots of discussions in the deep learning community.
Speech processing and education
There are two major speech-related tasks for foreign language learners: computer-aided language learning (CALL) and computer-aided pronunciation training (CAPT). The main difference is that CALL applications are focused on vocabulary, grammar, and semantics checking, and CAPT applications do pronunciation assessment.
Most CALL solutions use ASR at their back end. However, a conventional ASR system trained on native speech is not suitable for this task, due to students’ accents, language errors, and lots of incorrect or out-of-vocabulary (OOV) words. Therefore, techniques from Natural Language Processing (NLP) and Natural Language Understanding (NLU) should be applied to determine the meaning of the student’s utterance and detect errors. Most systems are trained on in-house corpora of non-native speech with a fixed native language.
Most CAPT papers use ASR models in a specific way, for forced alignment. A student’s waveform is aligned in time with the textual prompt, and the confidence score for each phone is used to estimate how well the user pronounced that phone. However, some novel approaches were presented where, for example, the relative distance between different phones is used to assess the student’s language proficiency, trained end to end.
Bonus: the CALL shared task is an annual competition based on a real-world task. Participants from both academia and industry presented their solutions, which were benchmarked on an open dataset consisting of two parts: speech processing and text processing. It contains German prompts and English answers given by students. Language (vocabulary, grammar) and meaning of the responses have been assessed independently by human experts. The task is open-ended, i.e. there are multiple ways to say the same thing, and only a few of them are specified in the dataset.
ASR
This year, A. Zeyer and colleagues presented a new ASR model showing the best ever results on LibriSpeech corpus (1000 hours of clean English speech) — the reported WER is 3.82%. This is another example of an end-to-end model, an improvement of LAS. It uses special Byte-Pair-Encoding subword units, having 10K subword targets in total.
For a smaller English corpus, Switchboard (300 hours of telephone-quality speech), the best result is shown by a modification of the Lattice-free MMI (Maximum Mutual Information) approach by H. Hadian et al.: 7.5% WER.
Despite the success of end-to-end neural network approaches, one of their main shortcomings is that they need huge databases for their training. For endangered languages with few native speakers, creating such database is close to impossible. This year, traditionally, there was a session on ASR for such languages. The most popular approach for this task is transfer learning, i. e. training a model on well supported language(s) and retraining on an underresourced one. Unsupervised (sub)word units discovery is another widely used approach.
A slightly different task is ASR for under-resourced languages. In this case, a relatively small dataset (dozens of hours) is usually available. This year, Microsoft organized a challenge on Indian-language ASR, and even shared a dataset containing circa 40 hours of training material and 5 hours of test data in Tamil, Telugu and Gujarati. The winner was a system named “BUT Jilebi”, which uses Kaldi-based ASR with the LF-MMI objective, speaker adaptation using feature-space maximum likelihood linear regression (fMLLR), and data augmentation with speed perturbation.
Other topics
This year we have seen many presentations on voice conversion. For example, trained on VCTK corpus (40 hours of native English speech), a voice conversion tool computes the speaker embedding or i-vector of a new target speaker using only a single target speaker’s utterance. The results sound a bit robotic, yet the target voice is recognizable.
Another interesting approach for word-level speech processing is Speech2Vec. It resembles Word2Vec, widely used in the field of natural language processing, and lets you learn fixed-length embeddings for variable-length spoken word segments. Under the hood, Speech2Vec uses an encoder-decoder model with attention.
Other topics included speech synthesis manners discrimination, unsupervised phone recognition and many more.
Conclusion
With the development of Deep Learning, the Interspeech conference, originally intended for the speech processing and DSP community, is gradually transforming into a broader platform for communication between machine learning scientists irrespective of their field of interest. It has become the place to share common ideas across different areas of machine learning, and to inspire multi-modal solutions where speech processing occurs together (and sometimes in the same pipeline) with video and natural language processing. Sharing ideas between fields undoubtedly speeds up progress, and this year’s Interspeech conference has shown several examples of such sharing.
Further reading for the fellow geeks and crazy scientists
Tutorial 1:
1. A. Graves, S. Fernández, F. Gomez, J. Schmidhuber. Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks. ICML 2006. [pdf]
2. A. Graves. Sequence Transduction with Recurrent Neural Networks. Representation Learning Workshop, ICML 2012. [pdf]
3. W. Chan, N. Jaitly, Q. V. Le, O. Vinyals. Listen, Attend and Spell. 2015. [pdf]
4. J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, Y. Bengio. Attention-Based Models for Speech Recognition. 2015. [pdf]
5. G. Pundak, T. Sainath, R. Prabhavalkar, A. Kannan, D. Zhao. Deep Context: End-to-end Contextual Speech Recognition. 2018. [pdf]
Tutorial 2:
6. N. Tishby, F. Pereira, W. Bialek. The Information Bottleneck Method. Invited paper, in “Proceedings of 37th Annual Allerton Conference on Communication, Control and Computing”, pages 368–377, (1999). [pdf]
Speech processing and education:
7. Evanini, K., Timpe-Laughlin, V., Tsuprun, E., Blood, I., Lee, J., Bruno, J., Ramanarayanan, V., Lange, P., Suendermann-Oeft, D. Game-based Spoken Dialog Language Learning Applications for Young Students. Proc. Interspeech 2018, 548–549. [pdf]
8. Nguyen, H., Chen, L., Prieto, R., Wang, C., Liu, Y. Liulishuo’s System for the Spoken CALL Shared Task 2018. Proc. Interspeech 2018, 2364–2368. [pdf]
9. Tu, M., Grabek, A., Liss, J., Berisha, V. Investigating the Role of L1 in Automatic Pronunciation Evaluation of L2 Speech. Proc. Interspeech 2018, 1636–1640 [pdf]
10. Kyriakopoulos, K., Knill, K., Gales, M. A Deep Learning Approach to Assessing Non-native Pronunciation of English Using Phone Distances. Proc. Interspeech 2018, 1626–1630 [pdf]
ASR:
11. Zeyer, A., Irie, K., Schlüter, R., Ney, H. Improved Training of End-to-end Attention Models for Speech Recognition. Proc. Interspeech 2018, 7–11 [pdf]
12. Hadian, H., Sameti, H., Povey, D., Khudanpur, S. End-to-end Speech Recognition Using Lattice-free MMI. Proc. Interspeech 2018, 12–16 [pdf]
13. He, D., Lim, B.P., Yang, X., Hasegawa-Johnson, M., Chen, D. Improved ASR for Under-resourced Languages through Multi-task Learning with Acoustic Landmarks. Proc. Interspeech 2018, 2618–2622 [pdf]
14. Chen, W., Hasegawa-Johnson, M., Chen, N.F. Topic and Keyword Identification for Low-resourced Speech Using Cross-Language Transfer Learning. Proc. Interspeech 2018, 2047–2051 [pdf]
15. Hermann, E., Goldwater, S. Multilingual Bottleneck Features for Subword Modeling in Zero-resource Languages. Proc. Interspeech 2018 [pdf]
16. Feng, S., Lee, T. Exploiting Speaker and Phonetic Diversity of Mismatched Language Resources for Unsupervised Subword Modeling. Proc. Interspeech 2018, 2673–2677 [pdf]
17. Godard, P., Boito, M.Z., Ondel, L., Berard, A., Yvon, F., Villavicencio, A., Besacier, L. Unsupervised Word Segmentation from Speech with Attention. Proc. Interspeech 2018, 2678–2682 [pdf]
18. Glarner, T., Hanebrink, P., Ebbers, J., Haeb-Umbach, R. Full Bayesian Hidden Markov Model Variational Autoencoder for Acoustic Unit Discovery. Proc. Interspeech 2018, 2688–2692 [pdf]
19. Holzenberger, N., Du, M., Karadayi, J., Riad, R., Dupoux, E. Learning Word Embeddings: Unsupervised Methods for Fixed-size Representations of Variable-length Speech Segments. Proc. Interspeech 2018, 2683–2687 [pdf]
20. Pulugundla, B., Baskar, M.K., Kesiraju, S., Egorova, E., Karafiát, M., Burget, L., Černocký, J. BUT System for Low Resource Indian Language ASR. Proc. Interspeech 2018, 3182–3186 [pdf]
Other topics:
21. Liu, S., Zhong, J., Sun, L., Wu, X., Liu, X., Meng, H. Voice Conversion Across Arbitrary Speakers Based on a Single Target-Speaker Utterance. Proc. Interspeech 2018, 496–500 [pdf]
22. Chung, Y., Glass, J. Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech. Proc. Interspeech 2018, 811–815 [pdf]
23. Lee, J.Y., Cheon, S.J., Choi, B.J., Kim, N.S., Song, E. Acoustic Modeling Using Adversarially Trained Variational Recurrent Neural Network for Speech Synthesis. Proc. Interspeech 2018, 917–921 [pdf]
24. Tjandra, A., Sakti, S., Nakamura, S. Machine Speech Chain with One-shot Speaker Adaptation. Proc. Interspeech 2018, 887–891 [pdf]
25. Renkens, V., van Hamme, H. Capsule Networks for Low Resource Spoken Language Understanding. Proc. Interspeech 2018, 601–605 [pdf]
26. Prasad, R., Yegnanarayana, B. Identification and Classification of Fricatives in Speech Using Zero Time Windowing Method. Proc. Interspeech 2018, 187–191 [pdf]
27. Liu, D., Chen, K., Lee, H., Lee, L. Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings. Proc. Interspeech 2018, 3748–3752. | https://medium.com/sciforce/interspeech-2018-highlights-a81743351715 | [] | 2018-12-05 09:10:02.442000+00:00 | ['Machine Learning', 'NLP', 'Artificial Intelligence', 'Deep Learning', 'Speech'] |
Overwhelmed With Expectations | expectation poisons
my vitality
be it positive or negative;
all I get is the pain of
excitement or nervousness
expectations
be it good or bad;
blocks my vision
and keeps me dulled
expectation prevents me
from being wholehearted
and keeps me stuck
in double-mindedness
negative expectations
sucks up all my energy
in order to prevent
any negative outcomes
positive expectations
drives me crazy
with the impatient itching
towards the positive outcome
and there is no way
I could stop it
by an act of will — an act of foolishness
so I might as well
stop expecting expectations
to go away for a moment
and who knows
perhaps they are expecting
my attention to show me
a surprising insight within
after all there
is no space for surprises
in a life of expectations | https://medium.com/spiritual-secrets/overwhelmed-with-expectations-d05f05d4eea3 | ['Pretheesh Presannan'] | 2020-10-21 08:19:52.328000+00:00 | ['Expectations', 'Mental Health', 'Poetry', 'Spiritual Secrets', 'Anxiety'] |
Defining Tele-consultation for Medlife | Also, we separated the task of booking a consultation from that of providing secondary information like age, sex and supporting documents, to maximise bookings without requiring additional details up front. This forking of the task flow allowed us to halve the time in which a consultation is booked and to ensure a better flow of secondary information through successful priming.
Service Model
On the back end of things, we followed a model that maximised a customer’s chance of finding a doctor. We achieved this by floating the customer request to a pool of 10 doctors for a limited amount of time (5 minutes). The minute a request is accepted, it would expire on all other doctors.
Another advantage of this model was that to the customer, we would always present a time slot of an hour instead of a specific time. This meant that instead of having to find a doctor to make the call at the exact specific time, we had a whole hour to find a doctor and make him/her consult the customer.
With the pool of doctors we had and the number of doctors active at specific times of each day, we got the average doctor response time down to less than 5 minutes. This, coupled with a time slot of an hour, meant that we hit a consultation match rate of 98%. This was a clear case of under-committing and over-delivering on the promise, which always kept the customer satisfaction SLAs high.
This model, as we discovered later also had certain business-level benefits, which shall be discussed in later parts of this case study.
Feedback
After a month of launching consultations, we started to collect and analyse qualitative feedback from the users. We segmented this into two types: users who have successfully booked a consultation and users who had dropped off in the process of booking a consultation.
For the users who had successfully booked a consultation, we accessed the consultation call and listened to the calls. For the users who dropped off in the process of booking a consultation, I called them within 24 hours of dropping off; with help of a customer service agent and probed them on their drop in the middle of booking a consultation.
Most quotes from the probe calls more or less fell into the following categories:
“I could not see doctors anywhere”
“I wanted to consult a gynaecologist, but I did not know how to do it”
“I didn’t understand where to go to the doctor”
“Where are my medicines?”
“I wanted to order medicines, but did not have a prescription”
“I was just checking out the application”
“I did it by mistake”
We observed that a significant proportion of users cited phrases that more or less meant either the first or second phrase from the aforementioned list. These phrases essentially boiled down to represent an expectation of the user to know more about his / her doctor before booking the consultation and to specify or book a doctor of a specific speciality.
Apart from these users, a majority of Medlife’s returning users confused our paid consultations with parallel free consultations that were pushed for users who had a need for medicines, but no legally valid prescriptions to place the order against. A quick solution was prototyped, tested and deployed for this.
A chunk of users were just explorers of the application who had no intention of booking a consultation.
User segmentation
At the same time, we refined the business strategy for the second run of teleconsultations and according to both the calls reviewed and an analysis of user types in the market, we segmented the user base into 4 segments:
Explorers
Health-conscious seekers
Chronic patients
Doctor loyals
Insights
From the feedback obtained from reviewing calls and interviewing final stage dropouts, it was clear that users had an expectation of knowing their doctor or choosing their required speciality before booking a consultation. A part of this also matched the use cases obtained via the initial validation study.
Consultation V2
The service model conundrum
The new set of requirements meant that we had to flip back to a different service, and hence business, model. This model would be more doctor-centric, which meant onboarding the doctors that the majority of customers desired. The commission Medlife made from each transaction would be minimal, as whether a transaction went through boiled down primarily to the demand for a particular doctor. It also led to a clear case of polarisation within our doctor base, with 20% of the doctors getting 80% of the bookings and the remaining 80% eventually dropping off the platform due to unsatisfactory numbers. We had observed this in an earlier pilot we had conducted of providing doctor appointment management in the Medlife consumer application.
Another problem in this model stemmed from the fact that most doctors worked on their own schedule. In the earlier pilot we floated to test in-clinic doctor appointments, we noticed a significant lack of punctuality on the doctors’ side. This happened because a lot of doctors did not have personal assistants and often overbooked their calendars. A lot of doctors also accepted last-minute special cases or surgeries, which meant that our time-compliance SLAs went for a toss. This was one of the main reasons we shifted from handing out appointments at exact times to giving out one-hour-long time slots for customers to visit a doctor. This significantly helped our SLAs and customer satisfaction, but the problem of a doctor going AWOL still remained. In addition to this, there was always the problem of doctor slot non-availability, which we could not solve.
In contrast to that, our current service model ensured higher doctor punctuality as the request was essentially floated to a set of 10 doctors and whoever would accept the request first would get the consultation on a first come first serve basis. If none of the doctors accepted the consultation in 5 minutes, the request would be floated to a different set of 10 doctors. This meant that in an hourly timeslot that we give out to a customer, the request essentially goes to almost 120 doctors and this ensured higher compliance of time from the doctor’s side. This also removed the calendar problem from the equation as requests would only be floated to the doctors who are active at that point in time and are available to take a consultation at that particular point in time; without any scheduling needed.
In addition to being more customer-centric, this model also ensured higher satisfaction of both doctors and customers as doctors who logged in almost always received requests and customer SLAs were fulfilled with a staggering rate of 98%. The customer was always allotted a doctor in the timeslot that was booked.
Another advantage this model had was that of being more financially profitable to both the customer and Medlife. For this kind of a consultation, Medlife charged a standard fee of 150 rupees from the customer; irrespective of the doctor being assigned in the end. The commission we earned out of this would be constant. The new model would completely depend upon the price a doctor wanted to float for his / her consultation and the averages were around 500 rupees, as opposed to 150 rupees. The commission Medlife earned from the doctor centric model would also pale in comparison to the customer-centric one.
After much deliberation, we decided to pitch both the models against each other in the upcoming version of the product and let our users answer the question for us. This meant that the design had to accommodate and solve for the confusion that would occur in the minds of users when presented with more than one way to achieve the same task. After multiple iterations and constant testing with users, we came up with a design that clearly conveyed this fork to the users and helped them make an informed choice.
From the user research done after V1, we noticed that there were clearly some concerns about the credibility of doctors that would be assigned to a consultation from the older model. To work around this, we included a gallery of doctors and their credentials to accelerate user trust in the model. | https://medium.com/abhinav-krishnas-portfolio/defining-tele-consultation-for-medlife-c5d0fe46a5c6 | ['Abhinav Krishna'] | 2020-04-28 19:15:43.945000+00:00 | ['User Research', 'Design', 'Healthcare', 'Product', 'A B Testing'] |
Fight for Your Destiny | Fight for Your Destiny
A Sonnet to Inspire your Ambition
Photo by Juan Jose on Unsplash.
Say it with me: Yes, yes, yes, yes, yes, yes, yes!
Now don’t think, just dive in and try your best.
I know, the water’s cold, the unknown’s scary,
But nothing great ever came from being wary.
You must strive and fight to clear your path,
No backward steps, pin your courage to the mast.
Then shout to Heaven to bless your voyage,
And strut bold feet on this Earthly stage.
Live not in fear and expectation of regret,
If you want the life that you truly deserve,
You can not leave opportunities unmet,
You must overcome that humble reserve.
But remember the rocks and respect the wave,
Much is lost, when reckless takes over from brave.
Photo by isaac sloman on Unsplash.
If you enjoyed this, you may also like: | https://medium.com/sonnetry/fight-for-your-destiny-a23f866d8bef | ['Joseph Brown'] | 2019-11-13 23:09:28.710000+00:00 | ['Destiny', 'Ambition', 'Motivation', 'Poetry', 'Sonnet'] |
The Secret of the React Render Function | The Standard React Component Will Call the Render Function Every Time Its Parent Re-Renders
Let’s have a look at this simple example. This is a simple counter app. Whenever the user clicks the “Up” button, the counter will increase by one and the app component gets re-rendered.
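The original post embeds the code as a gist; here is a minimal sketch of what such a counter might look like (component and variable names are my own, not the author's exact code):
// A minimal sketch of the counter app described above (names are illustrative).
import React, { useState } from "react";

function Child() {
  // Logs every time React calls this component's render function.
  console.log("child component re-render");
  return <p>Hello</p>;
}

export default function App() {
  const [counter, setCounter] = useState(0);
  return (
    <div>
      <p>Counter: {counter}</p>
      <button onClick={() => setCounter(counter + 1)}>Up</button>
      <Child />
    </div>
  );
}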
So the question is: “Does the child component get re-rendered too?”
See, it has logged “child component re-render” five times: we clicked the Up button five times and the counter increased to five.
So, does that mean the child component got re-rendered?
No, it’s not re-rendered. It has only called the render function and hasn’t actually re-rendered. So we need to distinguish between “re-rendering” and “calling the render function”. (In this case, the child component is a functional component, so the render function is the component itself.)
A normal component will call its render function whenever its parent re-renders. In this case, the app component re-rendered because its state changed when we clicked the Up button, and this led to the child component calling its render function.
What React does under the hood when the render function gets called is recalculate the virtual DOM of that component and compare it with the previous virtual DOM.
If they differ, React will actually update the real DOM in the browser. If not, nothing in the browser changes. This is the main reason why React is fast.
I will show you how to test if the real DOM is updated or not.
1. Enable the Rendering tab in the Chrome console
2. Run your app and observe
Every DOM node getting re-rendered will flash in a green background color like this:
As you can see, on first load, the whole DOM got re-rendered, so the background green area shows on the whole page.
But, when we click the Up button, even the console log shows five times, but the child component (Hello text) is not re-rendered. | https://medium.com/better-programming/secret-about-react-render-function-abefcd32f625 | ['Nguyễn Quyết'] | 2019-08-11 04:35:51.481000+00:00 | ['Reactjs', 'React Performance', 'Programming', 'Rendering', 'React'] |
Ladies, Don’t Let Religion Stop You From Dating People You Like! | Ladies, Don’t Let Religion Stop You From Dating People You Like!
Why do so many ladies back-pedal when they hear I’m religious?
“Hey Oren, I decided to be brave and send you a message when I saw your post.”
That message appeared in my message requests on Facebook today. I published I’m looking for my other half in a group on Facebook where they allow dating threads every Friday.
I answered the message, and we started chatting. Then the woman wrote the following:
“Yeah, I realized you’re religious right after I sent the message…”
This encounter is not the first time religion seems to stand between attractive girls and me. I’m Jewish and Orthodox, and I guess some women don’t like the idea of keeping Shabbat. I wrote about the many benefits of keeping Shabbat non-religiously before.
Also, I live in Tel Aviv, Israel. It’s not like I’m dating somewhere where Jewish people are scarce. This country is a Jewish country! Where else am I supposed to look?
What’s funny to me is that some women saw my picture and sent a message without reading much of the text. I wrote I’m religious in the second line of the post.
Last night I also talked with another woman who then said that my being religious doesn’t work for her.
Earlier this week, I took a woman to dinner, where she talked about how her brother is getting married in November to a religious woman, and how he started putting on a Kipa and keeping Shabbat because of her.
By the way, some people keep religious tenets without putting on a Kipa. Some may even be more religious than I am — and I do wear a Kipa.
When I told her I do it too, she said she is not like her brother. Their family is traditionally religious, meaning they don’t keep Shabbat or the holidays but do eat Kosher and follow some of the religious tenets.
We ended the date with a question mark. Both she and I had to decide if we want to give this a chance.
The next morning I decided we could try and see what happens. The long-curly-haired young woman decided religion was a deal-breaker for her.
I don’t know what makes so many women reject the idea of a religious man. Here are three points I want to make that will help you deal with the idea of a religious man in your lives. | https://medium.com/a-geeks-blog/ladies-dont-let-religion-stop-you-from-dating-people-you-like-37e6c25cf05f | ['Oren Cohen'] | 2020-05-20 12:28:04.174000+00:00 | ['Self-awareness', 'Religion', 'Relationships', 'Life Lessons', 'Judaism'] |
Deploying FastAPI application in Google App Engine in Standard Environment | FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints. To learn more about FastAPI, you can visit the docs of FastAPI by clicking here.
FastAPI also provides Swagger UI by default in the {base_url}/docs for testing apis.
Installation
pip install fastapi
You will also need an ASGI server for production, such as Uvicorn or Hypercorn. We will be using Uvicorn for this article.
Uvicorn is a lightning-fast ASGI server implementation, using uvloop and httptools.
Until recently Python has lacked a minimal low-level server/application interface for asyncio frameworks. The ASGI specification fills this gap, and means we’re now able to start building a common set of tooling usable across all asyncio frameworks.
To install uvicorn:
pip install uvicorn
This will install uvicorn with minimal (pure Python) dependencies.
pip install uvicorn[standard]
This will install uvicorn with “Cython-based” dependencies (where possible) and other “optional extras”.
In this context, “Cython-based” means the following:
the event loop uvloop will be installed and used if possible.
will be installed and used if possible. the http protocol will be handled by httptools if possible.
I prefer using uvicorn[standard] as it installs the Cython-based dependencies, which prevents errors related to uvloop and httptools while running in production.
Lastly, you should also install Gunicorn as it is probably the simplest way to run and manage Uvicorn in a production setting. Uvicorn includes a gunicorn worker class that means you can get set up with very little configuration. You do not need to install Gunicorn while running locally.
To install Gunicorn:
pip install gunicorn
Freezing Requirements File
After installing all required dependencies inside the virtualenv, do not forget to freeze the requirements file before deploying, as App Engine installs dependencies from the requirements.txt file.
To freeze the requirements file:
pip freeze > requirements.txt
Configuring app.yaml file
Your Python version should be 3.6 or above for FastAPI to work. Here is the configuration for my project:
runtime: python37
entrypoint: gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app
instance_class: F2
You can use any Python version above 3.6 and any instance_class as per your need. The following will start Gunicorn with four worker processes:
gunicorn -w 4 -k uvicorn.workers.UvicornWorker
main is my main.py file and app is the instance of my FastAPI application. App Engine handles the port number but you can define your desired port number.
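For reference, a minimal main.py could look like the sketch below. This is illustrative only; a real application will define more routes. Locally, you can run it with uvicorn main:app --reload before deploying.
# main.py - a minimal FastAPI app (illustrative sketch, not the full project)
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def health_check():
    # Simple route to confirm the service is up.
    return {"status": "ok"}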
Confirming before deployment
After you have completed all the above steps, confirm the following one last time:
1. Your virtualenv is activated and all the requirements are installed in that environment.
2. Make sure you have installed only the necessary dependencies and included a .gcloudignore file to ignore unnecessary files and folders during deployment (a sample .gcloudignore is sketched below).
3. Freeze the requirements.txt file before deploying so that you don’t miss adding newly installed dependencies to the requirements file.
4. Your app.yaml file is properly configured.
5. Your service account JSON files have the necessary access.
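As an example, a typical .gcloudignore for a project like this might contain the entries below. The exact list depends on your repository; this is only a sketch.
# .gcloudignore - files and folders that should not be uploaded to App Engine
.gcloudignore
.git
.gitignore
venv/
__pycache__/
*.pyc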
Deploying in App Engine
If you have not installed Google Cloud SDK, then you must install and configure the sdk. You can follow this link to properly configure your google cloud sdk.
After installing the sdk, you need to initialize the sdk. To initialize Cloud SDK:
Run gcloud init from the terminal.
After initializing, make sure you select your correct project id. To select the project from google cloud, you have to run gcloud config set project [project_id]
Finally, to deploy your FastAPI application in the selected project-id:
gcloud app deploy
You will get the url to view the application in your terminal. | https://medium.com/analytics-vidhya/deploying-fastapi-application-in-google-app-engine-in-standard-environment-dc061d3277a | ['Pujan Thapa'] | 2020-11-13 03:42:24.323000+00:00 | ['Python', 'Fastapi', 'App Engine', 'Google Cloud Platform', 'Deployment'] |
Synchronous and Asynchronous Servers With Python. | In this article we will go through two types of server-client code.
One is synchronous (built on the multiprocessing package) and the other is asynchronous (built on the asyncore package). They do almost the same thing, but the asynchronous one is more robust, and data does not get lost.
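The author's actual gists are not reproduced in this excerpt, but as a rough illustration of the asynchronous side, an asyncore-based echo server looks roughly like this (note that asyncore has since been deprecated in favour of asyncio and was removed in Python 3.12; the host and port below are arbitrary):
# Illustrative asyncore echo server (not the author's original code)
import asyncore

class EchoHandler(asyncore.dispatcher_with_send):
    def handle_read(self):
        # Read a chunk from the client and send it straight back.
        data = self.recv(8192)
        if data:
            self.send(data)

class EchoServer(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket()
        self.set_reuse_addr()
        self.bind((host, port))
        self.listen(5)

    def handle_accepted(self, sock, addr):
        # Hand each new connection to its own handler.
        EchoHandler(sock)

server = EchoServer("localhost", 8080)
asyncore.loop()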
Try it out on your machine, play with settings a bit and see the synchronous server limits and data loss. | https://medium.com/swlh/synchronous-and-asynchronous-servers-with-python-d5900e215483 | ['Ohad Gazit'] | 2020-12-06 12:39:38.479000+00:00 | ['Python', 'Multiprocessing', 'Asynchronous', 'Client Server', 'Async'] |
Deploying a Machine Learning Model Using Flask and Heroku | Deploying a Machine Learning Model Using Flask and Heroku
She Code Africa Cohort 3 Final project
Photo by Robina Weermeijer on Unsplash
Cardiovascular diseases (which often lead to heart failure) are the number 1 cause of death globally, taking an estimated 17.9 million lives each year, which accounts for 31% of global deaths.
Most cardiovascular diseases can be prevented by addressing behavioral risk factors such as tobacco use, unhealthy diet and obesity, physical inactivity, and harmful use of alcohol using population-wide strategies. However, people with cardiovascular disease or who are at high cardiovascular risk (due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidemia, or already established disease) need early detection and management wherein a machine learning model can be of great help.
This machine learning model could help in estimating the probability of deaths caused by heart failure by taking in important features from the dataset and making predictions based on these features.
The dataset consists of 12 variables/features, and 1 output variable/target variable. Let us examine the role of each feature in determining if a person is likely to have heart failure or not:
Age: the patient's age.
Anemia: a decrease in red blood cells or hemoglobin.
Creatinine_phosphokinase: the level of creatine kinase in the blood. This enzyme is important for muscle function.
Diabetes: a chronic disease that causes high blood sugar.
Ejection fraction: the percentage of blood leaving the heart at each contraction.
High blood pressure: blood pressure that is higher than normal.
Platelets: tiny blood cells that help your body form clots to stop bleeding.
Serum creatinine: the level of serum creatinine in the blood.
Serum sodium: the level of serum sodium in the blood.
Sex: the gender of the patient.
Time: captures the time of the event.
Death event: the target variable to be predicted.
Now that we know the function of each feature, Let's get started
Step 1: Import Libraries
Step 2: Import the Dataset
The Dataset used in building this model was downloaded as a CSV file to my PC from Kaggle.
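The notebook code is not embedded in this text; as a sketch, loading the downloaded CSV with pandas could look like this (the filename is an assumption based on the Kaggle dataset's usual name):
import pandas as pd

df = pd.read_csv("heart_failure_clinical_records_dataset.csv")  # use whatever name your downloaded file has
print(df.shape)    # quick look at the number of rows and columns
print(df.head())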
Step 3: Data Cleaning and EDA
This data was pretty much clean, so I didn’t have to do any more cleaning. However, some important pieces of information can still be explored.
Next, I use Matplotlib to visualize the distribution of the target variable (Death_event). From the visualization, we can see that a greater percentage of the patients had a failed heart.
I also visualized the Distribution of each feature to investigate how they are related to the target variable. Some important features are discussed:
First, I explored the importance of the Age feature in determining if a patient is likely to have heart failure or not. From the above, we can see that as the age increases, the probability of a death event also increases (i.e., the older a patient is, the more likely he is to have heart failure). Also, since the increase in one variable results in an increase in the other variable, we can deduce that these two variables are positively correlated. However, a correlation matrix will still be plotted for confirmation.
In general, the normal creatinine levels range from 0.9–1.3, and from the distribution of serum_creatinine against Death_event visualized above, we can see that the chances of survival are higher within this range.
The reference range for serum sodium is between 135–147 mmol/L. From the visualization above, the survival rate only starts to increase within this range. This feature also has a considerable correlation with Death_Event.
To further evaluate the relationship between each input variable and the target variable, I use a heatmap, which gives a graphical representation of the relationship between the variables.
Step 4: Splitting the Train and Test Data
Step 5: Data Preprocessing
This brings the data to a state that the model can parse easily. For the purpose of this project, the Standard Scaler is used, which standardizes the features by subtracting the mean and then scaling to unit variance.
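As a sketch of steps 4 and 5 combined (the target column name and split ratio are assumptions, since the original notebook is not shown in this text):
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = df.drop("DEATH_EVENT", axis=1)     # input features
y = df["DEATH_EVENT"]                  # target variable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)   # fit the scaler on the training data only
X_test = scaler.transform(X_test)         # reuse the same mean and variance for the test data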
Step 6: Model Selection
The support vector machine (SVM), a supervised machine learning model that uses classification algorithms for two-group classification problems, is used. After being given sets of the preprocessed training data for each category, the SVM model is able to classify new data.
The classification report shows an accuracy of 75%.
Since this model will be deployed, it is saved into a pickle file (model.pkl) created by pickle, and this file will reflect in your project folder.
Pickle is a python module that enables python objects to be written to files on the disk and read back into the python program runtime.
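A minimal sketch of the training, evaluation, and pickling steps described above (the default SVC hyperparameters are an assumption; the article does not list its exact settings):
import pickle
from sklearn.svm import SVC
from sklearn.metrics import classification_report

model = SVC()
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))   # the article reports about 75% accuracy

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)              # saved so the Flask app can load it later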
Step 7: Deploying with Flask and Heroku
Deploying a machine learning model means making the model available for end-users to make use of.
Create the Webpage
Here we will create an HTML webpage that has text boxes to take in input from users. The file was named index.html and can be found here.
Several templates for creating such a webpage can be found online.
Deploy the model on the webpage using Flask
In deploying this heart failure prediction model into production, a web application framework called Flask is used. Flask makes it easy to write applications, and also gives a variety of choices for developing web applications.
To make use of this web application framework in deploying this model, we install Flask by running the following command:
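pip install flask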
Next, a Flask environment is set up with an API endpoint that loads the model, receives input from users, and returns output.
After this, a Python file app.py is created, and the required libraries are imported.
Create the Flask App
Load the pickle
Create an app route to render the HTML template as the home page
Create an API that gets input from the user and computes a predicted value based on the model.
Now, call the run function to start the Flask server.
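Putting these steps together, a minimal app.py could look like the sketch below. The route names, form handling, and output message are illustrative assumptions; in practice the fitted StandardScaler should also be saved and applied to incoming inputs before prediction.
import pickle
import numpy as np
from flask import Flask, render_template, request

app = Flask(__name__)
model = pickle.load(open("model.pkl", "rb"))        # load the trained SVM

@app.route("/")
def home():
    return render_template("index.html")            # render the webpage as the home page

@app.route("/predict", methods=["POST"])
def predict():
    # collect the numeric inputs from the form, in the same order as the training features
    features = [float(x) for x in request.form.values()]
    prediction = model.predict(np.array(features).reshape(1, -1))[0]
    return render_template("index.html", prediction_text=f"Predicted death event: {prediction}")

if __name__ == "__main__":
    app.run(debug=True)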
This should return an output that shows that your app is running. Simply copy the URL and paste it into your browser to test the app.
Deploy the Flask APP to Heroku
Heroku is a multi-language application platform that allows developers to deploy, and manage their applications. It is flexible and easy to use, offering developers the simplest path to getting their apps to market.
The first thing to do in deploying the Flask app to Heroku is to sign up and log in to Heroku. After that, you can create a Procfile and a requirements.txt file, which handle the configuration needed to deploy the model to the Heroku server.
web: gunicorn is the fixed command for the Procfile.
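For example, if the Flask instance is named app inside app.py (an assumption), the Procfile would contain the single line:
web: gunicorn app:app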
The requirements file consists of the project dependencies and can be installed with a single command:
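pip install -r requirements.txt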
Next, you commit your code to Github and connect Github to Heroku.
After you connect, there are 2 ways to deploy your app. You could either choose automatic deploy or manual deploy. The automatic deployment will take place whenever you commit anything into your Github repository.
By selecting the branch and clicking on deploy, build starts.
After a successful deployment, the app will be created. Click on the view and your app should open. A new URL will also be created and can be shared by users.
Check Out my app via ‘https://heart-failure-prediction-app20.herokuapp.com/’
Conclusion
It is one thing to build a Machine learning model, and it's another thing to deploy the model by integrating it into an existing production environment that can take in input from users and return an output. This article covered building and most importantly deploying a heart failure prediction machine learning model that could significantly help reduce the mortality rate amongst patients with cardiovascular diseases.
It is important to note that asides from the Algorithm (SVM), web framework (Flask), and the Application platform (Heroku) used in this project, there are several other options that can be explored.
The link to the Github Repository can be found here
Dataset Authors: Davide Chicco, Giuseppe Jurman
Link to Dataset
This was my first machine learning Deployment project, and I hope someone finds this useful🙂. | https://towardsdatascience.com/deploying-a-heart-failure-prediction-model-using-flask-and-heroku-55fdf51ee18e | ['Osasona Ifeoluwa'] | 2020-12-24 10:50:54.820000+00:00 | ['AI', 'Machine Learning', 'Data Science', 'Data Visualization'] |
June 18 — pricing blackout kit for drop shipping | Alex from HighTech3D gave me a great opportunity! He offered to do drop shipping with my blackout vinyl on his website. The deal is that he’d list the product on his website, and if he gets an order, he’ll send me a shipping label so that I can ship it.
Here’s the pricing breakdown for the goal price of $20:
$4 materials
$1.50 for Alex (exposure on HighTech3D and shipping logistics)
$1 for a 1.5" x 24" mailing tube
$1 Paypal fees (30 cents + 3.4% of price)
$4 shipping (to New Jersey from San Francisco as a test)
$8.50 my time and 10% cut failure rate (the vinyl often slips in the cutter and messes up the cut, also I’ve spent hours debugging the vinyl cutter)
I think this is pretty reasonable as it includes shipping, and if you were just to buy the matte vinyl from TAP plastics, it’s $3 per 1ft of the 2ft wide roll, so it would cost you $15 to buy the vinyl domestically anyways. I was able to get it cheap because I paid $400 for three 1.52x30m rolls (shipped via FedEx from China).
Now that the pricing is sorted out I bought the mailing tubes and we’ll have to make some nice photographs and copy for the website.
Shipping
My vinyl roll weighs 175g rounded up.
I bought this scale for a wagyu beef costco group buy that never happened… did you know costco sells wagyu beef for $1200 for 13 pounds? wagyu steaks are supposed to melt in your mouth like butter. :O
The mailing tubes on Amazon are 12.4lb for 50, so about 115 grams. 290 g to oz is 10.23 oz, so let’s say my package weighs 10.5 oz.
Shipping from San Francisco to New Jersey is $4! I figure shipping across the country is a good estimate.
Packaging test
The biggest piece of vinyl is just over 15" tall, so a 24" length mailing tube should be perfect.
I checked to see if the vinyl would fit inside a 1.5 inch diameter (well actually 1.38 inches taking into account the width of the mailer walls), and so long as I wrap it around a dowel that is the diameter of my marker (1.5cm) or smaller, it should fit fine. This is great news because the next size up is twice as expensive.
I also noticed that painter’s tape sometimes leaves sticky residue on the vinyl, so I made sure that the pieces of painter’s tape holding the rolls together were small, and it looks like it works. This one’s been in storage for about a month now, and no sticky residue left from the tape.
Never perfect
The thing I’m self conscious about with this product is that sometimes when I cut it the backing is not cut through on the whole cut, so there’s a bit of extra backing sticking out past the edge of the sticker.
It doesn’t affect the vinyl sticker, it just looks less professional. I experimented with doing a kiss cut instead of cutting through, but if you have to take the sticker off the backing before application it gets deformed really easily and it’s hard to apply it nicely (without tons of wrinkles). I can also adjust the blade to make a cleaner cut, but with a completely through cut the vinyl is more likely to slip and ruin the cut. I have been convincing myself that it should be fine as it doesn’t affect the function. To have it consistently nice I would either have to trim them off by hand (which would take a really long time), or invest in super expensive die cut tooling, which is how the professionals do it.
Similarly, the other things on the website are 3D printed rather than professionally injection molded with expensive tooling. We’re not rich corporations after all, just people. | https://medium.com/grow-bucket-life-project-kickstarter-diary/june-18-pricing-blackout-kit-for-drop-shipping-923f455a6b2 | ['Ruth Grace Wong'] | 2017-06-19 05:35:44.500000+00:00 | ['Startup'] |
Filecoin v. Sia, Storj & MaidSafe: The Crowded Push for Decentralized Storage | With “exabytes” of digital storage space left unused, according to its founder, Juan Benet, Filecoin is building an end-to-end encrypted decentralized storage network and file hosting platform. The company hopes those with large amounts of unused storage will rent their space in exchange for its tokens.
Doing so would drive the price of storage down “significantly,” Benet said in an interview with Y Combinator in June. It’s already pretty cheap. For its smallest enterprise customers, Amazon Web Services charges about $25 per terabyte.
Benet says a secondary motivator will be that people need their data distributed across many providers and want stronger guarantees that their data will remain secure and won’t be lost. Decentralized storage provides added data security, as files broken up and stored across multiple locations are more difficult to hack.
In Filecoin’s markets, clients will spend tokens for storing and retrieving data while miners earn tokens by storing and serving data. Smart contracts will be deployed in the network to let users write their own conditions for storage as well as design reward strategies for miners, among other things. Filecoin eventually aims to integrate with other blockchains to bring storage and retrieval support to Zcash, Ethereum, and Bitcoin for example.
Established competitors on the blockchain
The blockchain is the “future of cloud storage,” say those in the industry already working in the space. Filecoin is entering a market where Storj, Sia and MaidSafe all have a working product available to the public. A fifth, Cryptyk, also under development, says it will achieve the security benefits of decentralized storage by spreading files across a handful of incumbent cloud storage providers, like Amazon.
In addition to security benefits, decentralized cloud storage networks are generally marketed as cheaper. A terabyte of storage at Sia costs about 2 USD per month. Storj charges by the gigabyte, starting at 0.015 USD per gigabyte per month.
Storj, Sia, MaidSafe and Filecoin are all built with a native storage marketplace where users and hosts can buy and sell storage space. All use mining to provide computing power for the network. In Filecoin, not only are miners given token rewards for hosting files, but they must prove that they are continuously replicating the files for more secure storage. They are also rewarded for distributing content quickly — the miner that can do this the fastest ends up with the tokens. In Maidsafe’s network — dubbed SAFE — Safecoin are paid to the user as data is retrieved; however, this is done in a lottery system where a miner is rewarded at random. The amount of Safecoin someone can earn is directly linked to the resources they provide and how often their computer is turned on.
Filecoin and Sia both support smart contracts on the blockchain that set the rules and requirements for storage, while Storj does not; Storj users pay what they use. This particular payment model means that if a user disappears, the host will no longer be paid for lending their space, a potential problem for those who will be renting their storage space.
MaidSafe aims to do more on its network than trade storage; it markets itself as a “crowdsourced internet,” on which not only data is stored but decentralized applications live. Miners rent out their unused computing resources to the SAFE network, including hard drive space, processing power and data connection — and are paid in the native Safecoin. The SAFE network also supports a marketplace in which Safecoin is used to access, with part of the payment going to the application’s developer. Miners can also sell the coins that they earn for other digital currencies, and these transactions can happen either on the network or directly between individuals. Filecoin also aims to allow the exchange of its tokens with fiat currencies and other tokens via wallets and exchanges.
All four products store data using multi-region redundancy — also known as erasure coding — meaning that files are split apart and distributed across many locations and servers to eliminate the chance of a single point of failure wiping away data. Filecoin’s specifically will use the IPFS distributed web protocol to do this, and doing so means that nodes can “continue talking to each other even if the rest of the network disappears,” says Benet. The development team plans on making Filecoin’s erasure coding a “turnable parameter” so a user can can set the level of erasure code for a particular piece of data.
Market opportunity and competitor traction
Humans are creating data by the quintillions every day. The International Data Corporation (IDC), a telecommunications and IT market research firm, reported that digital data will grow at a compound annual growth rate of 42 percent through 2020, a 50-fold increase from 2010 to 2020.
In Sia’s latest triannual update, posted to its blog in April, it announced it had 100 TB of file contracts on the network and a total capacity of almost 1 petabyte. As of Aug. 1, Sia reports it has doubled the amount of file contracts on the network to 202 TB. It grew its number of hosts from 75 on Dec. 31 to 145 in April, stating that it expected to continue this trend. Siacoin’s total supply is worth about 249.3M USD.
Storj says on their website that they have over 20,000 users and 19,000 “farmers,” or miners. This year it signed its first service agreement with a Fortune 500 company. It has a total supply worth 29.8M USD.
Filecoin’s token sale terms
Filecoin’s token sale begins on Aug. 7. It has not released an end date for its token sale. Filecoin is only allowing accredited investors to take part in its token sale to ensure that it abides by law, according to a blog post announcing the sale. An accredited investor must meet standards defined by the U.S. Securities and Exchange Commission that allow them to invest in private securities offerings. This is defined as any individual with an income of over 200,000 USD over the last two years or with net assets of over 1 million USD, excluding the primary residence.
Filecoin is the first project from Coinlist, a funding platform that helps decentralized projects get off the ground. It was built in conjunction with AngelList and Protocol Labs, the latter founded by Juan Benet. Coinlist is building an open-source framework for token sales — dubbed the Simple Agreement for Future Tokens (SAFT) — that is the default agreement for all Coinlist investments. | https://medium.com/tokenreport/filecoin-v-sia-storj-maidsafe-the-crowded-push-for-decentralized-storage-7157eb5060c9 | ['Seline Jung'] | 2017-08-03 13:58:18.581000+00:00 | ['Cloud Computing', 'Cryptocurrency', 'ICO', 'Blockchain'] |
3 to read: ‘Deep fakes’ are coming | 2d & 3d subscriptions? | Behind the curtain: FB’s big fail | By Matt Carroll <@MattCData>
Nov. 17, 2018: Cool stuff about journalism, once a week. Get notified via email? Subscribe: 3toread (at) gmail. Originally published on 3toread.co
You thought fake news was bad? Deep fakes are where truth goes to die: Fake news is about to get a lot faker. Improving technology means it is getting more and more difficult to tell doctored videos from real life. Heck, even badly edited fake videos are taken for the truth — what happens when people can’t tell the difference? A chilling look at the future of fake news by Oscar Schwartz for The Guardian. Extra: World’s first AI TV news anchor unveiled in China.
How many people will pay for 2d or 3d news #subscription?: Quartz and New York mag just put up paywalls, joining a lengthening list of high-profile, quality news pubs that have done so, such as the NYT and WaPo. But how many people can afford to pay for two, three or more news sites? asks Joshua Benton of Nieman Lab. He’s pessimistic, citing research that says only 16% of Americans will pay for any news. (Myself, I’m more optimistic. As more paywalls go up, people will of necessity read fewer sites. But from the perspective of the newsrooms, they don’t care, as long as they have enough paying customers. We’ll see how it plays out.)
Delay, deny & deflect blame at others — How Facebook’s leaders handled crisis: This story will only reinforce your worst fears, if you’ve had doubts about Facebook’s ability to come up with a successful solution in the wake of the Russian election scandal and the company’s unscrupulous handling of data from millions of users. FB’s top leaders were slow to realize they had a problem, slow to realize the breadth and depth of the issues, including the anger of the public, and seemed mostly interested in wallpapering over concerns. The NYT story paints a picture of a dysfunctional platform. Not pretty, but a great read. | https://medium.com/3-to-read/3-to-read-deep-fakes-are-coming-2d-3d-subscriptions-behind-the-curtain-fb-s-big-fail-59a5090ec3a2 | ['Matt Carroll'] | 2018-11-17 14:01:01.569000+00:00 | ['3 To Read', 'Journalism', 'Matt Carroll', 'Media', 'Media Criticism'] |
How to Follow Your Heart When Your Mind Won’t Listen | How to Follow Your Heart When Your Mind Won’t Listen
Start by paying attention to your persistent inner voice
We make choices every day.
Sometimes opportunities come knocking, enticing you with fancy bells and whistles. You’re intrigued because they’re offering a sweet deal. To add to the appeal, other people you know are signing up and finding success.
But there’s something that doesn’t feel right. You can’t put your finger on it. You just know. So you sign up for the program, add their app, but don’t take further action.
You peruse the site, and they bombard you with an endless scroll of news stories teeming with ads, some flashing in your face. It’s an assault to your senses. But you still wonder, should I open the door a little more, just to make sure?
The relationship isn’t getting off to a good start.
Your values don’t match. But you’re curious, mostly because they’re offering you the money you rarely have these days.
It’s like being on a first date with someone who promises you the world and distracts you from their flaws. It’s tempting for a minute until you notice the crumbling tower of lies.
They tell you they’re looking for a serious long-term relationship, but what they show you proves otherwise. You know they only want to get down your pants. They’re going to use you for sex, then drop you when they get bored. Empty promises are an early warning sign of trouble ahead.
If it sounds too good to be true, it usually is.
Quite a few writers are joining a new site to make some money. Our minds can play tricks on us when money’s involved. Sure, some of you might find success. We all must listen to what feels right for us. But I can’t help but feel we’re being manipulated.
They ensured an initial payment but were vague about continuous compensation. They never explicitly said we own the rights to our work, but they did say they could reproduce it in any way they choose. They’re banking on us reading the boldly printed cash promise.
I don’t trust that it’ll end well for me. And I’m sure it’s going to end. That should already tell you I see our relationship was doomed from the start.
But everyone else is doing it, so why don’t I try it? What do I have to lose? My mind is trying to justify it, but my heart says a big fat no. I’m attuned to my intuition these days, and I know when something feels off.
How can we distinguish our mind chatter from our wise inner voice?
Pay attention to your inner voice
When you can’t shake a feeling, your intuition is commanding your attention. There’s a persistent voice telling you, “Don’t do it!” Your arguments to the contrary fall flat. You can’t help but wonder if maybe this isn’t such a good idea after all.
Pay attention. Your inner voice isn’t make-believe. Intuition is a powerful force that guides our path and helps us live in abundance and comfort. If we’d only listen to it, we might avoid heartbreak and chaos more often.
Our intuition can tell us when to go for it and when to pause. It can help us sidestep these issues every day, in any life situation. When we stop and listen to our hearts, we find answers that lead us to the best action.
Recognize the Bandwagon Effect in action
The cognitive bias called the Bandwagon Effect, as described by Kendra Cherry in verywellmind, refers to “…the tendency people have to adopt a certain behavior, style, or attitude simply because everyone else is doing it.”
When we observe others joining a club, paying for a service, or eating at a particular restaurant, we’re attracted, too. It makes us want more of what they’re having, even if we know nothing about it. We find it more appealing when several people are interested.
Author Walter Veit, of Psychology Today, explains further, “The phrase ‘jump on the bandwagon’ is typically used in a derogatory fashion to indicate that someone is following a trend without actually having made a rational evaluation of the idea or behaviour itself. It is thus merely the success of the trend that leads to its further success.”
This common phenomenon can squelch one’s ability to listen to intuition. Be aware of your tendency to jump on the bandwagon. Companies use precise strategies and enticing tactics to manipulate us into thinking we need or want their product or service. They know all the tricks and prey on our inclinations to follow trends. Your intuition is still in good working order. You just need to listen.
Be aware of your body language
Our bodies expand or shrink, showing us what we intuitively know as truth. Have you ever noticed how you gravitate closer to someone when you feel comfortable around them? Our bodies indicate our level of interest and our trust for another. If I don’t trust you, I’ll instinctively move away.
The same can happen when we see something online, like a job offer or ad. We may cringe or feel inexplicable anxiety arise within us. We click off quickly to avoid the feeling. Our bodies tell us the truth, whether or not we want to admit it.
Final thoughts
You don’t need a complicated formula to follow your heart. You do need to stop the chatter in your mind and drop into your intuition. As I finished this article, I took a peek at that site “everyone” appears to be talking about. I guess it’s sort of ok, once I found some friends to follow. I wish them the best, but I know I’m not supposed to do it.
I knew because I listened to my inner voice telling me to stop. Some of us, myself included, have a history of pursuing relationships doomed to fail. We know they won’t give us what we need or deserve. And we used to keep trying to make them work.
It’s time to practice new behaviors. Your intuitive voice always works when you’re paying attention. Don’t follow the crowd when you know it isn’t a good fit. Be aware of the body language that’s giving you clues. Then face the direction where you hear, “Yes, this is the one for you.” You’ll be amazed when you follow your heart’s desires. Those promises offered by others will be fulfilled, and then some. | https://medium.com/the-partnered-pen/how-to-follow-your-heart-when-your-mind-wont-listen-21b5749f97d6 | ['Michelle Marie Warner'] | 2020-11-29 03:30:27.981000+00:00 | ['Self-awareness', 'Personal Growth', 'Relationships', 'Life Lessons', 'Self Improvement'] |
The art of event advertising or 7 tips on how to make a felicitous flyer | You can predict the success of an upcoming event by analyzing the promotional campaign. It is extremely important to attract as many people from the target audience as possible. You should allocate enough time to learning the interests, needs, and preferences of your potential client. One of the most important steps of preparation is flyers distribution.
Although the purpose of a flyer is to inform the person, it is essential to care about its design. The first impression is the key to a successful promotional campaign. People are fed up with mediocrity and empty moments. Not only should you inform the person about the event details but also you should brighten up their day using an eye-catching flyer design.
What is a flyer, then? It is a leaflet that has to attract a person’s attention, interest them, make them want to read the information about the upcoming event. You should make a design engaging and impressive so that people cannot resist the urge to read the flyer. Keep in mind that the style of the flyer should be relevant to the event and your target audience. Also, it should not be over complicated because it prevents a person from perceiving the info.
How do you make a perfect flyer design? No doubt, it requires a lot of patience and creativity.
However, we will share 7 tips on how to make a felicitous flyer:
1. A new perspective
You should shed new light on old things. Even reviewing the flyers of primitive design, you will get new insights. Do not try to invent something completely new because there is no need to. Pay attention to a simple, minimalistic design that fits every event. Do not let a person find fault in the extra element. If your event does not include anything extraordinary, pick up a basic, straightforward design.
2. Only important information
Do not try to include every possible design element on one leaflet. Avoid using opposite colors, fonts, shapes. It does not even attract children anymore as it looks flat. You should remember that people are overwhelmed with designs of promotion campaigns because they go from everywhere. Therefore, your potential clients might be very picky.
Do not make people look for the most important information among superfluous decoration elements. Make it simple for a client so that they can remember the data and purpose of the event. If you doubt whether to place one more element or not — you would better leave the design the way it is.
3. One-two fonts, that’s it
We understand that there is a wide range of fonts and it is quite difficult to pick up a suitable one. The thing is that people do not care about the font you use. They want to find out what type of event it is, when it is going to happen, and what benefits they get if visiting. Using more than 2 fonts, you do not let a person focus on anything so it might cause irritation. Then, your flyer will end up in the rubbish bin.
We recommend you to use two fonts that do not differ that much. Pick up the main font for the name of the event and its venue, the date, and a special offer for each visitor. Then, choose a smaller font size to add details about the organizers and special guests.
However, using more than 3 fonts can be justified if the event is uncommon. For instance, you promote the show of unusual clothes collection, an underground party, or an extraordinary exhibition. In other cases, limit yourself to 2, maximum 3 fonts.
4. Unique design
Most of us are visuals, so we are quite picky about design and appearance. We have already mentioned that it is best to use a minimalistic design. However, do not neglect the use of unusual images, combining styles, graphic elements.
However, make sure that the design is not oversaturated. If you combine elements of different styles, it is better to use soft, pastel colors.
5. Unexpected solutions
Try to replay a well-known image, make a reference to the classics, use an example from modern art. You should interest a person in a new, unexpected design that they have not met yet. Try to avoid trending decisions that are already abused by a lot of designers.
People are used to saturated colors on restaurant flyers and wedding invitations. Deceive their expectations — make the flyer black and white or use a bright, humorous look. Also, don’t be afraid to add hidden meaning to make the person think a little.
Even if a person does not fall within your target audience, they can save the flyer due to the interesting design. Then they will show it to a friend, family, colleagues. Thus, you might get many more clients than you expected.
6. Content literacy
In addition to a successful design, you should work on the text content of the flyer. Use concise, vibrant headlines that catch your eye. It should be like the name of a book or magazine. If you are not familiar with the author of the book, it may interest you only by its unusual name. Even if a person does not take the flyer, they will be able to find the event on the Internet using your title.
The second point is the use of a hook. People like to get something in return, whether it’s a free cup of coffee or a discount on a second pizza. Sometimes, advertisers use clickbait that is deciphered at the bottom of the flyer. Pick up a beneficial special offer that is relevant to your target audience.
Remember that you cannot be 100% sure that a person will take advantage of this offer. However, word of mouth works fine, and this person’s friends can also find out about your offer.
7. Online Constructors Usage
Every day there is ever-increasing automation and optimization of work processes. Millions of people go online. Hundreds of resources offer to help you with the design and decoration of flyers, and it would be foolish not to use them. However, how to choose a reliable design making service?
From our personal experience, we would recommend you to use Canva, PosterMyWall, Crello, or Elegantflyer. Using online design making services, you can create absolutely any flyer design. At the same time, you can either take ready-made templates and customize them, or create your own design. Keep in mind that each service offers both free flyer templates and premium collections.
Most often, such services allow you to easily edit templates, download, and print them. You can choose a suitable image, style, and adjust the size of the template. Most services are completely free, so we recommend that you take this opportunity.
As you can see, even a designer without experience can create an impressive and memorable flyer. You must remember that the main points are the unique design, competent content, and a proper special offer. Also, consider a suitable flyer format.
Despite the mediocrity of the upcoming event, interesting design and a cool offer will definitely interest a person. Do not be afraid to experiment with style, a combination of elements, or vivid images. Also, remember the special offer, which is a hook for visitors. We all love to get something for free, especially if it’s a cup of aromatic coffee or a glass of champagne at a party. Let the person struggle with the desire to attend your event.
Remember that a unique, direct design will appeal to almost everyone. Follow the result of the promotion and you will understand which design suits your company. | https://elegantflyer.medium.com/the-art-of-event-advertising-or-7-tips-on-how-to-make-a-felicitous-flyer-9504a08cfbcc | [] | 2020-07-06 16:30:48+00:00 | ['Flyers', 'Event Advertising', 'Design', 'Style', 'Online Creators'] |
The Power of Functions Returning Other Functions in JavaScript | The Power of Functions Returning Other Functions in JavaScript
Functions and composability
Photo by Shahadat Rahman on Unsplash
JavaScript is widely known for being extremely flexible by its nature. This article will show some examples of taking advantage of this by working with functions.
Since functions can be passed around anywhere, we can pass them as arguments to other functions.
My first hands-on experience with anything having to do with programming in general was getting started with writing code in JavaScript, and one concept in practice that was confusing to me was passing functions into other functions. I tried to do some of the advanced stuff all the pros were doing, but I kept ending up with something like this:
This was absolutely ridiculous and even made it more difficult to understand why we’d even pass functions into other functions in the real world, when we could’ve just done this and gotten the same behavior back:
const date = new Date()
console.log(`Todays date: ${date}`)
But why isn’t this good enough for more complex situations? What’s the point of creating a custom getDate(callback) function and having to do extra work, besides feeling cool?
I then proceeded to ask more questions about these use cases and asked to be given an example of a good use on a community board, but no one wanted to explain and give an example.
Thinking back, I realize the problem was my mind didn’t know how to think programmatically yet. It takes a while to shift your mind from how it works in everyday life to thinking in a programming language.
Since I understand the frustrations of trying to understand when higher-order functions are useful in JavaScript, I decided to write this article to explain, step by step, a good use case, starting with a very basic function that anyone can write. We’ll work our way up from there into a complex implementation that provides additional benefits. | https://medium.com/better-programming/the-power-of-functions-returning-other-functions-in-javascript-501562a521df | [] | 2020-06-22 20:21:29.786000+00:00 | ['JavaScript', 'Web Development', 'React', 'Nodejs', 'Programming'] |
Most People Want Happiness but I Just Want Peace | Most People Want Happiness but I Just Want Peace
My introduction to spirituality
Photo by Tj Holowaychuk
Lately, I’ve been having so much inner peace. I signed up for a spiritual coaching program about a month ago. I was terrified of what I would get out of this course. Even though the coach was too spiritual for me, I liked her overall message.
I knew nothing about spirituality, but I took that leap and bought the course.
When I thought about the word spirituality, I thought about religion, oracle cards, crystal balls, or just things that I didn’t understand.
I signed up for this online course because I wanted to have a coaching business aligned with my core values and beliefs. The problem was that I didn’t know my values or beliefs.
I fell into the trap of rushing to get to your success, but I also felt like I was getting more and more confused about who I was and about what I wanted out of life.
In short, I wanted to have a business that aligned with my true self.
When I thought about the word spirituality, I thought about religion, oracle cards, crystal balls, or just things that I didn’t understand.
I was wrong about what I thought about spirituality.
The course introduced me to a world of lived experience. It is teaching me how to be the person I want to become.
I am a high achiever, and I always rush to get to my goal, but this has caused more mental burnout than anything. As my best self, I wanted inner peace, happiness, and the trust to get what I want, no matter what.
The course taught me I can be that person now.
I don’t have to believe that I need to be successful to be at peace. It is a trait that is currently available to me right now.
Before the course, I felt that I had so much inner conflict about my success and goals. I’m learning that I don’t have to figure out everything right now. I can go day by day, listen to my inner self, and trust that everything will work out.
It may not be today or tomorrow, but I trust that it will happen.
I know it sounds like ‘woo woo’ and all, but there’s also a science behind it. You and I have been doing it all our lives, but we weren’t just aware of it.
I’m learning that I don’t have to figure out everything right now.
I remember thinking that it would take my partner and me at least five years to save up for our place after graduating. But it happened only a year after without us having to sacrifice traveling or social life.
This is an example of what was once an impossible thing, but it happened anyway. Now, I have even more faith that whatever I decide to do in the future will happen.
One thing that my coach always says, “There’s no other way,” and I believe that too.
The course wasn’t your typical manifestation teaching where you “ask, believe, and you’ll receive.” You have to take action.
I believe in the power of hard work, but sometimes our mind keeps us from doing that.
The more I understand how our mind works, the more I believe that you and I can work with it despite the fears and emotions that we are scared to feel when things don’t work out.
Even though I haven’t fully figured out what I want to do in the future, I know that if I listen to myself every day, at every moment, regardless of what happens, it’ll be enough.
While spirituality can be scary for some people, at least for me, it’s brought me a sense of inner peace. I think that’s how people find the world of spirituality.
It’s when people need some sort of faith that there has to be more in life than our day-to-day work.
I wanted to learn how to truly live, despite what’s everything that is happening around me. I wanted to learn how to make hard decisions.
I wanted to know how to be myself. I am slowly learning to do all of that.
I also believe that every one of us will go through this process, where we will ask ourselves at the point of our time, “Is this what life is? What else is there?”
I feel lucky and grateful enough that I asked this question just before I turned 25. Now, I have the urge to inspire people and make an impact in the world.
While I haven’t figured out how I will do that, I trust that all my decisions will lead to doing that.
I also realized that I am already inspiring people. I just had to look at the evidence.
Last month, I received a card from a family member about how she was so grateful and inspired by how I took care of her dad at the end-of-his life. In my head, I was doing my job. But to her, it meant the world.
Ironically enough, I signed up for the course to have a coaching business, but through this program, I also found that it doesn’t fit me right now, so I write on Medium instead. Life is funny like that. | https://medium.com/mystic-minds/most-people-want-happiness-but-i-just-want-peace-266d9770968e | ['Jerine Nicole'] | 2020-12-05 17:35:39.914000+00:00 | ['Self-awareness', 'Mindfulness', 'Self', 'Spirituality', 'Life'] |
The Switch | The Switch
You can trade your life
Photo by Liv Cashman on Unsplash
After reading several psychological thrillers back to back, I wanted a break from murder, mayhem and madness. Beth O’Leary’s The Switch was exactly what I was looking for. It’s a fast, fun read about two women who decide to trade lives a la The Holiday.
What makes the book different is that one of the women is 79 years old. Eileen Cotton resides in a Yorkshire village and she’s got a problem — she’s lonely and a bit bored. Her husband has run off with his dance teacher and she doesn’t want to spend the rest of her life alone. But the local pickings in the over seventy set are slim, to say the least.
Enter her granddaughter, Leena. Her problem is quite different. She’s a workaholic whose boss has just forced her to take a two-month sabbatical. On the surface, Leena’s got the perfect life.
Gorgeous boyfriend, cool flatmates, great job — and have I mentioned she’s just been given two months paid leave? But free time is not something Leena wants. At all. She hasn’t recovered from her sister’s death and she’s not on great terms with her mother, who seems to have fallen apart.
Leena and Eileen soon decide to try an experiment. Eileen will spend eight weeks in Leena’s London flat and Leena will house-sit for her grandmother. London has got to have more men — and more excitement-than Hamleigh. And Hamleigh will surely be more restful than the city.
So it begins.
Forget tea
Eileen was by far my favorite part of The Switch. I loved that O’Leary chose to feature an elderly woman as one of the main characters in this story and that she is portrayed as a real person, not some sedate matriarch whose primary function is to advise the heroine over tea.
Eileen is her own heroine and she’s interested in the same things we all are: love, sex, adventure, family, community, and friendship. By placing Eileen in this central role, O’Leary also introduces many older characters into the story, who all have their own struggles and desires. I can’t remember the last time I’ve read a book that does this.
I also liked Leena, whose attempts to adapt to country life made me smile. And I thought her struggle to overcome the pain her sister’s death caused was well done, for the most part.
Where the novel fell short for me was the flatness/predictability of the romances, in part because the split narrative made it difficult for either of them to grab my attention. On the flip side, the ending of the book was a little too much for me.
Neither of these issues was a deal breaker, though. I was looking for something upbeat and The Switch delivered. It’s a nice winter read, especially in the time of social distancing.
Much thanks to Macmillan and Netgalley for an ARC in exchange for an honest review.
If you liked this review, you might like these suggestions as well:
Lori Lamothe’s book reviews have appeared in Mostly Fiction, Curled up with a Good Book, Daily Must Books, The Chick Lit Review and elsewhere. | https://medium.com/amateur-book-reviews/the-switch-f419a25d9110 | ['Lori Lamothe'] | 2020-12-27 23:39:33.230000+00:00 | ['Books', 'Fiction', 'Feminism', 'Love', 'Aging'] |
What if Roger Federer, Tiger Woods, a Raccoon and a Koala were UXers? | Introduction — Specialists vs Generalists — I had passed my Masters in Industrial Design from a premier institute in India and had been working in the automotive industry as a User Experience Designer/Industrial Designer for about 3 years in the best automotive companies in India. In spite of all of that quality industry experience, I had an unanswered career-related question in my mind. I had a feeling that if I asked the question to a senior at work, I might get a biased answer due to a conflict of interest, and if I asked it to some senior from university who had just a few more years of experience than me, then I might get a relatively uninformed answer, for how could someone who has not travelled far enough herself/himself show the way to others. So, I thought of taking that question back to my professor from university, my mentor and also one of the stalwarts of Industrial Design in India. The question — “Should I be a generalist or a specialist?”, and the prompt but well-thought-out and confident answer — “be a specialist first and then a generalist”.
It is quite common for any professional, in any field of work to face this dilemma. These days, while participating in discussions on various User Experience R&D forums and 2D and 3D Physical and Digital Design forums, I often find people asking questions of a similar nature — should I learn Figma or should I learn Sketch or should I learn Adobe XD?; should I restrict myself to just being a UX Researcher or should I be both a UX Researcher and a UX Designer?; should I just be a UX R&D professional or should I be a Product Manager?; should I just be a Product Designer or should I spread my wings and attempt to be that cool rare species called the UX-Unicorn who can do it all from research to design to development? These are all very valid questions. Applying the wisdom from the previous paragraph, we can easily deduce that the logical way to proceed would be — Be a Specialist in one particular software in one particular field and in one particular industry. Do not stagnate and keep pushing your boundaries in to being a Specialist in the next software while keeping the field and industry constant, then try to be a Specialist in the next field while keeping the industry constant and then jump to be an expert in another industry and so on. Don’t bite more than you can chew but keep progressing, keep taking that one step at a time forwards.
Generalist = Specialist1 + Specialist2 + Specialist3 …. and so on.
Way to becoming a Generalist
Lessons to learn from the Animal kingdom —
Lessons from Nature: Raccoon (generalist) vs Koala (specialist)
According to the National Geographic Channel — In the field of ecology, classifying a species as a generalist or a specialist is a way to identify what kinds of food and habitat resources it relies on to survive. Generalists can eat a variety of foods and thrive in a range of habitats. Specialists, on the other hand, have a limited diet and stricter habitat requirements. Raccoons are an example of a generalist species. They can live in a wide variety of environments, including forests, mountains, and large cities, which they do throughout North America. Raccoons are omnivores and can feast on everything from fruit and nuts to insects, frogs, eggs, and human trash. The Koala on the other hand is a super specialist. Native to Australia, Koalas are herbivorous marsupials that feed only on the leaves of the eucalyptus tree. Therefore, their range is restricted to habitats that support eucalyptus trees. Within this diet, some koalas specialize even further and eat leaves from only one or two specific trees. But what does specialization or generalization of the Racoons and the Koalas have to do with us humans? And here is the gem. It has been observed by the ecologists that the effects of climate change or habitat loss are far greater in animals that are specialists like the Koala, as compared to that on generalists like the Raccoon. This ranks the generalists much higher in the survival of the fittest scale compared to the specialists. I am not over here saying that there is no scope for professionals in UX who specialize either in research or in design or in coding. There are thousands who have specialized and lead a very rich and fulfilling career. What, I am saying is that, being a generalist increases your chances of doing well in this very uncertain and volatile job market which can be effected now by a virus, some other time by stricter border controls and yet another time by AI! Who knows? So better be safe than sorry.
Lessons from the World of Sports —
Lessons from the world of sports — Roger vs Tiger
A few weeks back, I read a book called — Range by David Epstein ( also the author of the best seller — The Sports Gene). Unlike, popular advise that urges us to specialize deeper and earlier in life, this wonderful book has a different take on life and the way we can shape a safe, secure and flourishing career. It advocates the case how Generalists can Triumph in a Specialized World. The most striking example that I found in the book is a comparison between two of the topmost athletes of recent times, legends of comparable stature, Roger Federer and Tiger Woods. While Tiger is the child prodigy, the guy who gets in to golf at ten months of age accompanying his dad to the golf course and by age two wins an under ten golf tournament, Roger is pretty diversified till a much older age. While Tiger would play nothing but golf, Roger would be playing anything that involved a ball as a little kid, focusing specifically on tennis much later. However, as we all know, later in life both of these wonderful athletes reached the top of their respective sports. So how did this happen? The point that writer is trying to emphasize here is that more skills are transferable than we we think. A good percentage of ace athletes, according to research, go through, what is called a sampling period. In this time, they hone their skills playing different games, the learnings of which they transfer later in life to their chosen game of specialization. This is a very happy news, particularly for career switchers. The skills that you have developed as a journalist or a blogger can be easily transferred in to a UX-content writer’s job, the skills you have as a sales person can be transferred to a researcher, the skills you develop as a logo designer can be transferred to being a UI designer and the skills you have as a 3D digital sculptor in the automotive industry handling top end 3D CAD softwares can easily be transferred into working on 2D softwares like Figma, XD and so on. The research based design skills which some one from any design background learns — be it fashion design, shoe design, furniture design, accessory design or industrial design can easily be upgraded to the more well articulated UX-Research method using Empathy maps, Personas and Journey Maps. If you have been working on physical products there is nothing stopping you from transferring the skills to digital products. This can been further backed up by Indiana University’s Professor Douglas Hofstadter’s contributions in to the study of Analogy as the Core of Cognition, where in he stresses the point that “Analogy making = the perception of common essence between 2 things”.
Finally let us focus on the “T” between Roger and Tiger. The “T” signifies an employee or a professional who has has a “T” shaped knowledge. While Tiger’s skill can be equated to the vertical line of the letter T, Roger derives his skills from a wide range of experiences. According to the Harvard Business review, there is going to be a growing need in future for employees who have both — Depth of knowledge in their field of expertise as well as a Range of knowledge of various other related and not so directly related fields. Any innovation book you pick up will tell you that innovation happens at the intersection of various fields coming together. A “T” shaped employee is much more likely to be innovative than an “I” shaped employee.
Conclusion: Let me end this blog by saying that, while my professor’s wisdom teaches us how to be a generalist, picking up examples from the animal-world and from the sports-world helps us understand why one needs to be a generalist. In a race where only the fittest survive, the generalist certainly has a clear edge over the specialist. However, may I stress the point that the Generalist here is not expected to be the typical Jack of All Trades; rather he is expected to be a King of All Trades… and he achieves that by being a King at one thing at a time … step by step by step ….. | https://medium.com/design-bootcamp/what-if-roger-federer-tiger-woods-a-raccoon-and-a-koala-were-user-experience-professionals-706177f6a034 | ['Arup Roy'] | 2020-12-17 12:44:38.788000+00:00 | ['User Experience', 'Careers', 'Career Change', 'Design', 'UX'] |
Why COVID-19 Data Confuses People | A two-dimensional table like this one can be read two ways: horizontally (where the percentages add up to 100% across the rows); and vertically (where the percentages add up to 100% down the columns).
The horizontal view corresponds to the first headline. It’s the view of politicians and professionals, who are looking across society. Crucially, it’s much easier to collect data in this view — simply count the people in the hospital. The data organized this way is useful for resource and capacity planning.
The vertical view corresponds to the second headline. It’s the view of the general population, including most of the media. It answers the question “What does this mean for me?” Unfortunately, data is harder to collect for the vertical view, and rarely presented this way.
There you have it — same information, two accurate but different views and interpretations. The simple reason that younger adults can make up a large percentage of hospitalizations while having a low hospitalization rate? There are so many more young adults than old people.
We need this critical piece of “prior information” on the overall population to reconcile the two views, yet it is too often omitted — a form of what Daniel Kahneman (and others) call “Base Rate Neglect.” [3]
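A toy example with entirely made-up numbers shows how both headlines can be true at once (none of these figures are real COVID-19 data):
# hypothetical population sizes and per-person hospitalization risks (the vertical view)
population = {"young adults": 100_000, "older adults": 20_000}
hosp_rate  = {"young adults": 0.004,   "older adults": 0.03}

hospitalized = {g: int(population[g] * hosp_rate[g]) for g in population}
total = sum(hospitalized.values())

for g in population:
    share = hospitalized[g] / total   # the horizontal view: share of all hospitalizations
    print(f"{g}: {share:.0%} of all hospitalizations, yet only {hosp_rate[g]:.1%} of the group is hospitalized")
Here young adults account for 40% of hospitalizations simply because there are so many of them, even though any individual young adult's risk is tiny.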
Mixed-up Views, Mixed-up People
Now that we understand that both views are correct and each answers different but important questions, does that mean there isn’t a problem?
No, there’s a BIG problem: Most of the time the data is presented to us in the horizontal view, while most of us are interested in the vertical view question: “What does it mean for me?” [4]
Mixed up views result in mixed up people. Many simply tune out. Others reach wrong conclusions — in this case, the alarming-but-wrong conclusion that over 40% of younger adults with COVID-19 are being hospitalized. “Mixed messages allow people to follow their biases and believe whatever they want.” [5]
Even worse: when the “expert” (horizontal) view and the “popular” (vertical) view are left unreconciled, it can breed mistrust of experts and their motives. Accuracy without clarity may lead to the boomerang effect, “the unintended consequence of an attempt to persuade resulting in the adoption of the opposite position instead.” [6].
In the early stages of COVID-19 pandemic, U.S. Surgeon General Dr. Jerome Adams asked Americans to stop buying masks because “They are NOT effective in preventing the general public from catching #Coronavirus, but if healthcare providers can’t get them to care for sick patients, it puts them and our communities at risk.” He was legitimately worried about the health care system (horizontal view) but blurred the risk to the public (vertical view). [7]
Yet many people were aware of how Asian countries treated masks as essential to personal safety, causing confusion and sowing doubts about America’s experts and their motives. Sadly, some people have responded in the opposite way to what the experts intended — at first by hoarding masks, and later (after the experts reversed course) by refusing to wear them.
Accuracy and Clarity for Data Scientists
Recognizing that there are two views provides an opportunity for the data scientist to lead in the quest for accuracy and clarity. We mentioned earlier that it’s often difficult to compile the population data for the vertical view. This is especially true when we’re dividing the population into sub-groups.
Suppose the COVID-19 hospitalization data was missing much of the data for the vertical view: | https://towardsdatascience.com/the-simple-reason-why-covid-19-data-is-so-confusing-48a2f614391a | ['Matthew Raphaelson'] | 2020-09-14 20:00:05.441000+00:00 | ['Storytelling', 'Data Science', 'Decision Making', 'Behavioral Economics', 'Covid 19'] |
VC: Visualization with Trees and Graphs | Graph
Do you like graphs? I really like them because they look like the real world to me. Don’t you? I hope you feel the same way. A graph can represent anything: a protein, a human, an animal, a molecule, a machine, anything you can imagine. In this post, we will learn some graph-based techniques to visualize data.
Force-directed Graph Layouts
The goal of this technique is to place groups of strongly connected nodes close to each other while preserving a minimum distance between nodes. How can we achieve this? We take the idea from nature and model the graph as a spring system. Edges are modeled as springs, and to avoid overlaps between nodes, we model the nodes as electrically repelling each other. The final positions are selected by simulating the resulting forces and finding the equilibrium between the spring forces and the electrical forces.
Let’s define both forces:
Spring force
The spring force is calculated for connected nodes. pj-pi is the vector from ni to nj, and we normalize this vector to get the direction of the force. s is the natural spring length, that is, the length the spring takes when there are no external forces, and k is the tension of the spring. In the usual Hooke’s-law form, the force on a pair of connected nodes is therefore proportional to k times the deviation of their current distance from s, acting along that normalized direction.
Electrical Repulsion
Electrical repulsion is calculated between every pair of nodes and pushes them apart. r is the repulsion strength, and the force weakens as the distance between the nodes grows.
General Algorithm
The idea is to find the equilibrium point, that is, the node positions at which the distances between nodes make the net forces zero.
Initialize the positions of the nodes randomly or through a heuristic, then iterate:
Sum all attractive and repulsive forces => multiply the overall force by a step size (people call it temperature; you can think of it as the learning rate in deep learning) => impose a maximum displacement => move the nodes => adjust the temperature (this step can be skipped).
Do this until the forces go to zero.
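A minimal sketch of this procedure in Python/NumPy is shown below. The force formulas (a Hooke’s-law spring on the edges and an inverse-square repulsion between all pairs of nodes) and every parameter value are illustrative choices consistent with the description above, not necessarily the exact equations behind the original figures.

import numpy as np

def force_directed_layout(edges, n_nodes, s=1.0, k=0.1, r=0.5,
                          steps=500, temperature=0.1, max_disp=0.05):
    # s: natural spring length, k: spring tension, r: repulsion strength
    pos = np.random.rand(n_nodes, 2)            # random initial positions
    for _ in range(steps):
        forces = np.zeros_like(pos)
        # spring force on connected nodes: pulls their distance towards s
        for i, j in edges:
            delta = pos[j] - pos[i]
            dist = np.linalg.norm(delta) + 1e-9
            f = k * (dist - s) * delta / dist
            forces[i] += f
            forces[j] -= f
        # electrical repulsion between every pair of nodes (~ r / dist^2)
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                delta = pos[j] - pos[i]
                dist = np.linalg.norm(delta) + 1e-9
                f = r * delta / dist**3
                forces[i] -= f
                forces[j] += f
        # move the nodes: scale by the temperature and cap the displacement
        pos += np.clip(temperature * forces, -max_disp, max_disp)
    return pos

# usage: a small triangle plus one pendant node
print(force_directed_layout([(0, 1), (1, 2), (2, 0), (2, 3)], n_nodes=4))

In practice the temperature is usually decreased over the iterations rather than kept fixed, which is where cooling schedules such as quenching and simmering come in.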
Quenching and Simmering | https://medium.com/swlh/vc-trees-and-graphs-ae31b8e842e8 | [] | 2020-09-12 21:56:20.985000+00:00 | ['Machine Learning', 'Data Science', 'Graph', 'Trees', 'Visualization'] |
Lazy Loading Images Made Easy in JavaScript | PERFORMANCE
Lazy Loading Images Made Easy in JavaScript
Improve image loading in Angular, React, or Vue.js apps with 3 lines of code
Photo by Thong Vo on Unsplash
When building a web application, we are always looking for the best performance in order not to impact user experience.
Among all the techniques to improve the web performance of a website, lazy loading of images allows us to defer image retrieval. Indeed, depending on the dimensions, the compression, and the quality of the images, they may impact the size of the bundle downloaded by the user’s device. Therefore, deferring these downloads offers several advantages:
Bundle size reduced
User experience improved (bundle is ready more quickly)
Controlled data consumption (only the visible images are downloaded)
In this article, I will introduce an HTML attribute to lazy load images. To go further, I will show how to use this attribute in a wider application with the three main JavaScript frameworks: Angular, React, and Vue.js | https://medium.com/better-programming/lazy-loading-images-made-easy-in-javascript-37b0ff91974c | ['Adrien Miquel'] | 2020-10-25 22:29:48.786000+00:00 | ['Programming', 'Software Development', 'Nodejs', 'React', 'JavaScript'] |
by Martino Pietropoli | First thing in the morning: a glass of water and a cartoon by The Fluxus.
Follow | https://medium.com/the-fluxus/tuesday-hidden-brain-c5cac6dabe19 | ['Martino Pietropoli'] | 2018-02-27 01:36:00.804000+00:00 | ['Tuesday', 'Comics', 'Psychology', 'Drawing', 'Cartoon'] |
Why Logarithms Are So Important In Machine Learning | Photo by Tim Foster on Unsplash
If you are living on the 10th floor of a building, are you going to take the stairs or use the elevator?
The goal in both cases is the same: You want to go back to your apartment after a long day at work.
Of course, taking the stairs is better if you are a busy person who doesn’t have time to go to the gym and wants to use the stairs as a simplified version of the cardio exercises. But, aside from that, you are more likely to take the elevator.
Let’s take another example.
Let’s say that you are trying to go to your workplace. It takes you 10 minutes by car when there are no traffic jams and 50 minutes walking.
You can choose to either drive or walk. You are still going to reach the same destination, but you want to save time. You go to work every workday and not just once in your lifetime. As a result, you may need to decide about this on a regular basis.
You want to be able to go to your work faster so that you can have more time in your day to stay with your family and friends. You want to start that side project. Read the book that you bought at the local book store. Watch the lectures that you always wanted.
Instead of spending so much of your time to go to the same destination, you want to take a car or a bus that helps you get there. This way, you have more time to do other things.
Examples of the benefits of using the logarithm
Using logarithms is the same: you need to find the parameters that minimize the loss function, which is one of the main problems that you try to solve in Machine Learning.
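As a generic illustration of why this helps (the author’s own worked example follows below), consider a quantity that is a product of many terms, such as a likelihood. Taking the logarithm turns the product into a sum, which is numerically far more stable and much easier to differentiate. The numbers below are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.99, size=1000)  # made-up per-sample probabilities

# The raw likelihood is a product of 1000 small numbers: it underflows to 0.0,
# and differentiating it requires the product rule over 1000 terms.
likelihood = np.prod(p)

# The log-likelihood turns the product into a sum: no underflow, and the
# derivative of a sum is just the sum of the (much simpler) derivatives.
log_likelihood = np.sum(np.log(p))

print(likelihood)       # 0.0 because of numerical underflow
print(log_likelihood)   # an ordinary finite (negative) number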
Let’s say that your function seems like the following:
If we find its first derivative, we will have the following expression in the end: | https://towardsdatascience.com/why-logarithms-are-so-important-in-machine-learning-6d2ff7930c8e | ['Fatos Morina'] | 2020-08-23 12:16:12.292000+00:00 | ['Machine Learning', 'Data Science', 'Technology', 'Artificial Intelligence', 'Programming'] |
3 Reasons why Google Data Studio’s “Extract Data” Feature is a Game Changer | The Google Marketing Platform team quietly made a huge update to Google Data Studio over the past week. Analysts now have the ability to choose “Extract Data” directly from the data source selection screen. Previously, this feature was part of the new Data Blending features that Google rolled out over the summer, but now as a stand-alone feature, Data Studio’s potential as a go-to analytics tool has increased immensely. Here are the 3 reasons why this is such a monumental update:
1. Faster Load Times and Report Refreshes
It’s no secret that Data Studio can be quite slow at times, especially when you are trying to connect massive data sets together. Instead of connecting to your entire data set, you can select a small extract that will be faster to update and load in the interface for your end users.
2. Row-Level Metric Calculations for Google Analytics Reports
This was my biggest pain-point when trying to do any advanced Google Analytics reporting in Data Studio. Typically, your most granular GA data is stored in Event tables, but the out-of-the-box GA connector does not allow you to do row-level metric calculations because metrics are auto-aggregated. For example, the following function to get a metric for just ‘button clicks’ would not work in the past, but now it does:
CASE
WHEN Event Category='button clicks'
THEN Total Events
ELSE 0
END
3. Better Data Management
The Extract Data feature also allows you to easily exclude entire sections of your original data set. If you had sensitive customer data that you didn’t want anywhere near your final report, you can choose to exclude that information in your extract. Pretty neat.
What’s Next?
I can only imagine what the next iteration of this feature will bring, but I hope it involves an even more robust version of Data Blending — similar to Tableau Prep and Power BI Query Editor. | https://medium.com/compassred-data-blog/3-reasons-why-google-data-studios-extract-data-feature-is-a-game-changer-5535c7c0475e | ['Patrick Strickler'] | 2018-09-11 13:58:33.067000+00:00 | ['Google Analytics', 'Analytics', 'Google', 'Data', 'Data Visualization'] |
Cat Food and Crackers | Cat Food and Crackers
Getting Through Life With a Little Humor
Photo by Ramiz Dedaković on Unsplash
After Grandma died, Aunt Sarah sank into a deep depression and hardly ever left her bedroom. That meant Uncle Joe, and I cooked or reheated leftovers.
I listened to a lot of music and watched TV that made me laugh. Chico and The Man, Good Times, Welcome Back Kotter, and What’s Happening!! were a few favorites that got me through the sadness at home.
Uncle Joe still dropped me off and picked me up at school every day. He’d bring the Detroit News or Free Press and read until I came out and jumped into the big blue station wagon.
“You want to stop at the store on the way home?” He asked.
“Sure,” I said.
Uncle Joe never shopped without a list. Between his grocery list and Aunt Sarah’s, shopping was an Olympic event.
“Where are we going?”
“Felice’s — my prescriptions are ready.”
We shopped at the stores with the best deals for what we needed or that doubled coupons. By the time we finished, our cart was overflowing.
Uncle Joe must have sensed Aunt Sarah was in a cycle of staying upstairs because he stocked up on TV dinners. I could gag them down when I needed to. He always bought a couple of the turkey dinners for me — the only ones I could stomach. If you put enough cranberries on the turkey to give it taste and cover up the texture and saved the cobbler until the end, it was edible.
We started shopping mostly at Felice’s because they had a pharmacy and his medication was a little less expensive. Still, he complained the food prices were higher. “So, what are you gonna do?” he liked to say.
Sometimes, we ran out of cat food before our next big shopping trip. If Uncle Joe was in a good mood, he’d also buy a box of Hi-Ho crackers.
When it was our turn at the check-out line, he’d put down several cans of cat food and the box of crackers. With a perfectly straight face, he looked at the cashier, shrugged his shoulders, and said: “This isn’t so bad when you put it on crackers.”
He kept a straight face until we were out of the store. It took everything in me not to laugh as soon as he put down the crackers. The incredulous look on the cashier’s face was worth every second I had to keep my poker face.
It was a small thing, but it was always so good to laugh at something. Thankfully, no one ever called Child Protective Services about us.
Even now, I read or watch something funny to keep me from going too far into the darkness. I love people who have a great sense of humor and can also engage in deep, meaningful conversation. We could all use a little of that, I’m sure. | https://medium.com/illumination/cat-food-and-crackers-a75c9f575168 | ['Denise Garratt'] | 2020-10-28 11:15:21.921000+00:00 | ['Humor', 'Mental Health', 'Pets And Animals', 'Family', 'TV Shows'] |
Making sense of the coronavirus pandemic— what the science is telling us, and what you can do to prevent its spread | I’m an infectious disease epidemiologist with a background in new and emerging infections, just like this new coronavirus which is currently sweeping the world.
Over the past few weeks I’ve watched as a lot of fear and misinformation has swirled round the internet, and it occurred to me that a lot of it is there because people simply don’t really understand how the science of new infections works, and why the messages we are hearing seem to be changing every day.
I orginally wrote this article as a Facebook post for friends, but was urged to make it public and it has since been shared >1000 times. So I’m adapting it here so that people without Facebook can read and share alike. I hope it helps to demystify things for you. Note that I’m writing about the current situation in the UK, which will be slightly different depending on where you are and at what point in time you’re reading this. I hope you find it helpful.
(Just to note — you can find my CV on LinkedIn; I am writing in a personal capacity and not affiliated with any organisation. The modelling paper referenced is “Report 9: Impact of Non-Pharmaceutical Interventions to reduce COVID-19 mortality and healthcare demand” by Ferguson et al at Imperial College, and is available freely online).
THE SCIENCE OF THE SITUATION
More data are in, and the data aren’t very encouraging, sadly. But before I elaborate any further, some background info on what work is being done, both in the UK and internationally, to help stem the tide as much as we can.
In any new and emerging infection, you never know at the start what it’s going to do. You don’t know how good it is at transmitting to other people. You don’t know who’ll be at most risk. You don’t know its fatality rate. You don’t know if people have symptoms before they are infectious, or if they spread it around for a little while before they know they are sick.
Infectious disease epidemiologists are the people who track the emergence in real time, in an effort to work out some rough answers for these questions. We’re also good at looking at what happened in hindsight, which can be used for learning for the future. But in an emerging situation, what you really want to know is not what’s happening right now, nor what happened over the last few weeks: what you really want is some kind of crystal ball to look into the future. And if you want to predict what’s actually going to happen in the future — then that’s for the mathematical modellers.
Mathematical modellers play “Let’s pretend”. They use fancy computers and statistics to input the rough numbers the epidemiologists have come up with so far. They say: “Let’s assume that one infected person is infectious for two days before they stay home feeling sick. Let’s assume that person will come into contact with 20 people in a day. Let’s assume that one case usually leads to two other people becoming sick. Let’s assume that if we implement social distancing, 50% of people will comply with it… Etc etc etc. And then they crunch the numbers and come out with what they think the outcome may be. Remember: these outcomes are based on a *lot* of assumptions (more on that later).
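To make the “let’s pretend” exercise concrete, here is a deliberately crude toy projection in Python. Every input is a made-up assumption of the kind listed above; it bears no relation to the actual Imperial College model, but it shows how the assumptions, once chained together, produce a forecast, and how much a single assumption (here, compliance with social distancing) changes the outcome.

# Toy "what if" projection: every input below is a made-up assumption,
# not real epidemiological data and not the Imperial College model.
days_infectious = 2      # assume: infectious for 2 days before staying home
contacts_per_day = 20    # assume: 20 contacts per day
p_transmit = 0.05        # assume: 5% chance a contact becomes infected
compliance = 0.5         # assume: 50% of people comply with distancing

def project(cases, generations, distancing=False):
    # chain the assumptions together to project case counts forward
    reduction = (1 - compliance) if distancing else 1.0
    contacts = contacts_per_day * reduction
    new_per_case = days_infectious * contacts * p_transmit  # crude "R"
    history = [cases]
    for _ in range(generations):
        cases = cases * new_per_case
        history.append(round(cases))
    return history

print("no intervention :", project(100, 5))        # grows quickly
print("with distancing :", project(100, 5, True))  # held roughly flat

Change any one of the assumed inputs and the projected numbers change dramatically, which is why the predictions keep being revised as better data come in.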
So…using data that the epidemiologists have gathered so far, Imperial College has done some mathematical modelling on coronavirus, and it’s not brilliant news. To start with, the UK’s plan had been to let the virus move slowly through the population, whilst making sure the most vulnerable were protected by self-isolation etc. The strategy was to slow the outbreak down with various control measures, but still let it progress, with the aim that most people would become immune, herd immunity would then protect those more vulnerable to infection, and health service capacity would not be overwhelmed. It was a nice plan and would have minimised the disruption to society that we’re now going to see (more on that later).
However, since deciding on that strategy, more work has been done to quantify exactly what capacity the NHS has, and now this is known and the numbers have been crunched, it looks as though if we were to pursue that strategy, we could end up in the situation where we could potentially exceed the number of ICU beds we need by a factor of eight to one. Or put simply: eight people needing beds, and only one available. And that’s only considering the beds needed for coronavirus. Suffice to say that wasn’t quite the prediction we’d been hoping for.
Given this, you may well be thinking “WHY DID WE EVER PURSUE THIS ROUTE, WHY DIDN’T WE GO ON LOCKDOWN LIKE EVERYONE ELSE FOR GOODNESS SAKE????” And that’s a fair and reasonable question, which I’ll attempt to answer now.
The trouble with lockdown or super strict measures to stop people moving about (leaving aside the catastrophic effects that has on people’s livelihoods and mental health and wellbeing) is that it doesn’t so much get rid of the problem as just put it on hold. We can see the numbers coming down now in China and Italy, but meanwhile the virus is grumbling away under the surface. When those measures are relaxed and people start mixing again, we are likely to see case numbers climbing right on up again. It becomes like a game of cat and mouse. Ultimately, in the absence of a vaccine (a minimum of 12–18 months away), the only way you can bring an end to this game is to get some degree of immunity developing in your population. So it might well end up being to our advantage that we didn’t implement harsh control measures right away. Only time will tell, but it’s good to note that the lead author on this new modelling paper reckons the UK has got the timings about right so far.
So what’s the plan? Well, the modelling has concluded that probably the best way to contain this is a cyclical approach. When numbers start to climb up and hit a certain threshold, we hit pause by implementing strict control measures. Then when numbers start to go down, we relax those measures — but in the knowledge that the cases will again creep up and meet that threshold, and we will need to hit pause again. It suggests that we will need to hit pause for a period of around five months in the first instance, and that we will have to continue to have to implement this cycle of pause-relax-pause-relax-pause-relax for the best part of two years.
In short: this is not all going to go away after a few weeks of shutting schools. Not. Even. Close. Don’t be fooled into thinking that we will have a couple of weeks of lockdown and then it will all go back to normal. This is a new reality, the impact will be huge, and we all need to be looking to the government to support people during this time as businesses will collapse and people will be pushed into poverty. Many of us who have never considered ourselves as “vulnerable” will become vulnerable.
My take-home message is this:
***Coronavirus looks set to fundamentally change the way we live for the medium and probably long term.***
While you let that sink in, here are a couple of positives to hold on to:
First: hitting pause won’t just have the benefit of slowing viral spread — it will also give us a chance to upscale ICU capacity, produce and install more respirators, upscale our testing capacity so it’s easier to find out who’s infected versus who’s just coughing, find treatments that work, develop a “serological” test to help identify people who’ve had the infection and are now immune, etc. It also helps us get a bit further down the road to potentially developing a vaccine.
Second: models are based on a whole host of assumptions. We don’t know how accurate the predictions will turn out to be. Nothing is a definite fact, and the strategy will evolve and become more predictable as more data and information become available. A lot of what is scary and unsettling about this is the uncertainty of it all. Fear of the unknown is not good for our mental health, but the unknowns will reduce as time progresses. All eyes will be on the likes of China and Italy as restrictions begin to be lifted.
Make no mistake that these are unprecedented times, though. How we all react to it will be key to how well we do to contain the virus spread.
So, with that in mind, let’s turn to what YOU can do to help us all get through this.
PRACTICAL TIPS
***LIMIT YOUR SOCIAL CONTACTS***
We need to fundamentally change the way we socialise. It’s going to be very hard because as humans, we are social animals. But we need to seriously distance ourselves from each other now, and especially so if people have underlying health conditions or are older. There is great guidance on how to do this on the government Coronavirus webpages (referenced below). It’s time to start working out ways you can socialise with the people you love and who keep you sane by virtual means (as well as remote working wherever possible). Skype, Microsoft Teams, Zoom, Slack, Facetime, WhatsApp — look into them and find out what works for you.
Note: ***standing in long queues at 8am trying to get hand sanitiser from Superdrug is NOT a good way to reduce your social contacts*** Soap is just as effective (which I’ll come onto in my next point):
***IF YOU’RE OUT, PRETEND YOU’RE ON HOLBY CITY***
You’re still going to have to go out sometimes, e.g. to get food. Channel your inner Holby City surgeon (for non UK-readers, this is British drama series set in a hospital — think Grey’s antomy or ER). Thoroughly wash your hands when you go out, and after that pretend EVERYTHING you touch could have germs on it. You know how they switch off the tap with their elbow? Do that when you’ve washed your hands in public. You know how they bust through the theatre door using their backs rather than pushing a door handle? Do that when you’re entering a shop. Seen how they clasp their hands together as if in prayer when they’re standing by the operating table in order to avoid touching anything? Do that while you’re on the bus. It’s a good way to reduce touching your face, which is really hard to do.
I *cannot emphasise enough* the importance of good hand hygiene. Soap kills coronavirus. So does hand sanitiser, but it’s exceedingly hard to come by at the moment so let’s set it aside for now. All you need is soap and water — doesn’t need to be hot water. Doesn’t need to be fancy soap. Any bar of soap, liquid soap, shampoo, shower gel, washing up liquid — any of those things will kill coronavirus so long as you wash your hands thoroughly with it for 20 seconds.
I encourage you all to set up a bucket of water and a bar of soap next to your front door and make sure anyone who steps across your threshold washes their hands before entry. If you’re out and about, carry a bottle of water and soap with you. Wash your hands like it’s going out of fashion. With soap.
***IF YOU’RE OUT — ALSO PRETEND YOU’RE A SPY***
Inevitably we’re all going to want to get out of the house and see some people, or it’s going to be really hard on our mental health. If you are gagging for some social contact — I am stealing these excellent tips from my fellow infectious disease epidemiologist Dr Naomi Boxall to behave as if you’re in a John Le Carré novel: 1) take circuitous routes to outdoor destinations, avoiding those hidden in the crowd 2) meet up with friends on park benches, sit 1m apart facing the same direction 3) communicate enthusiasm of greetings with brooches/hat angles 4) only meet physically with those from similar isolated cells 5) TOUCHING YOUR FACE IS A CODE YOU HAVEN’T YET LEARNED; DON’T DO IT: YOU MAY INADVERTENTLY CONDEMN INNOCENTS 6) remove outerwear as soon as you enter a dwelling, be silent of foot 7) leave no fingerprints 8) hold private discussions under the cover of running hand washing water.”
Note, if you have a new continuous cough or fever, or have been in contact with someone who has, you don’t get to play the spy game for at least two weeks.
***IF YOU GET A NEW CONTINUOUS COUGH OR A FEVER, OR HAVE BEEN IN CONTACT WITH SOMEONE WHO HAS, STAY AT HOME***
Follow the self-isolation guidelines to the letter. Do not assume after a couple of days that you’re fine and start going out again. Stay home. Consult the NHS webpages (link below). Do not call 111 unless you absolutely can’t find the answer to your query online. Absolutely do not call 999 unless it’s an emergency.
*** CHANGE YOUR SHOPPING HABITS***
Vulnerable people, the elderly and those in self-isolation are going to need those online slots. If you’re young and healthy and you’re a usual online shopper, cancel your slots and instead go in person to the shops (whilst pretending to be a spy and a surgeon). Go at non-peak times and don’t stand in any long queues or crowds. Obviously we know that shelves are running empty, so rather than stocking up, instead just buy a couple of items every day from a different shop each time, including your local corner shops, until the supply chains are a bit more restored. That will mean there’s enough to go round.
Remember that if you hog all the produce and the online slots to yourself, basically what the result will be is a load of people who should be in self-isolation traipsing round town in search of what they need and leaving a big trail of virus everywhere. It is not in your interest.
If you *are* elderly or vulnerable, avoid supermarkets and shops quite literally like the plague — even the special hours that supermarkets have reserved for you, as you need to be avoiding places where lots of people tend to go. Instead, shop online or have a friend or neighbour or Mutual Aid volunteer (see next point) bring you what you need.
***GET IN TOUCH WITH YOUR NEIGHBOURS***
Now’s the time for us all to finally get to know our neighbours, at least virtually. People who are vulnerable and who are self-isolating and are going to need help with getting bits dropped off to them. Find your local Covid Mutual Aid group on Facebook, or join Nextdoor and get directed to your local ward/street group there. There is some fantastic community organisation going on that will do a lot to lift your spirits, but bear in mind that there will sadly always be unscrupulous people who are drawn to these networks to prey on the vulnerable, so keep your wits about you.
***SUPPORT LOCAL BUSINESSES***
It’s quite hard to fathom what our high streets and communities might look like in a year’s time. Be creative in how you can continue to support them in new ways, to help see them through this time. It’s encouraging to see the support beginning to come through from the government, but make no mistake this will be a time when many people’s livelihoods lay on the line, so make every effort to support while you can.
***SUPPORT THE VOLUNTARY SECTOR***
It’s also going to be a really hard time for people who were vulnerable to start with. Food Banks are running low on stocks; domestic violence is predicted to increase as people are forced together for long periods in stressful conditions. Think about how you might be able to contribute to enable the extra efforts the voluntary sector will be making to support these people during this time.
***KEEP UP TO DATE WITH THE GUIDANCE***
This is an evolving situation and the guidance on what we should be doing will be updated every day. Listen out for developments and access the latest guidance at the websites below
https://www.nhs.uk/conditions/coronavirus-covid-19/
https://www.gov.uk/guidance/coronavirus-covid-19-information-for-the-public
That’s all, folks. I’m sorry it’s not better news, but we’re in this for the long haul. Let’s hunker down. | https://georgialadbury.medium.com/making-sense-of-coronavirus-what-the-science-is-telling-us-and-what-you-can-do-2b8063cc9dcc | ['Georgia Ladbury'] | 2020-03-22 18:33:55.933000+00:00 | ['Pandemic', 'Outbreak Response', 'Coronavirus', 'Epidemiology', 'Covid 19'] |
Psychedelics and the Hero’s Journey | “The privilege of a lifetime is being who you are.” — Joseph Campbell
I am watching a single drop of water.
Lights bounce and dance off of dark corners while the musicians onstage pour their heart into their strings, their keys, their drums. The crowd moves as one, sinuous and fluid-like, to the beat of the music pouring from the speakers. I am standing too close to the speakers, as usual, pressed up against the raised platform that forms the stage, my spirit moving in time, a cell in rhythm with its whole, an organism formed of many twisting limbs like some ancient goddess embodied. But I am not listening to the music.
I am watching a single drop of water.
A row of water bottles lines the stage, a convenient place to set a drink down where it won’t be disturbed. Someone has spilled one and a droplet sits on the edge of the stage, a bead of moisture encapsulating a raised bump of black paint. It, too, is caught up in the movement of the music — the bass from the speaker is thumping so loud that the droplet vibrates along. I am fascinated, mind bent from both LSD and MDMA, and that dancing drop of water is the most beautiful thing I’ve ever seen. There is no separation, it tells me. All is connected, even down to the smallest droplet of water, all of us wiggling along in to the music that is life.
My soul opens up, not for the first or the last time. I’ve always been a very closed-off sort of individual, afraid that the smallest thing I say or do will cause the meltdown of my entire life, and that same fear prevented me from experiencing the world in its most simple and divine state. To open myself up was to approach fear head-on. It wasn’t until psychedelics entered my life that I was able to even look that fear in the face, let alone confront it and move past it into the oneness of everything.
I experienced ego death that night, dancing along with that droplet of water. “Death” might be a misnomer, because ego death is not a true death — it is a rebirth of the mind and soul into a new phase of being, a recognition of your true nature as a part of the cosmic whole. But ego death is not just the purview of psychedelics, though they make the path to it a little more direct. If you’ve ever read a particularly thrilling book, you know that the death of the ego isn’t always a terrifying thing — it can instead bring you to other worlds, put you into the shoes of others.
“The cave you fear to enter holds the treasure you seek.” — Joseph Campbell
The Hero’s Journey
Joseph Campbell is one of my all-time favorite people — “The Hero’s Journey” was his seminal work, a book about the single story that is told throughout recorded history. That story is simple: Something upsets the balance of a person’s life and they must go on a journey to right that balance, along the way confronting their deepest and darkest fears and growing into a new iteration of themselves — hence, “hero”.
Ever seen The Matrix or Lord of the Rings or Star Wars? All of these ring true with the echoes of the hero’s journey.
The first time I ever heard of this concept was in a high school Creative Writing class, taught by a teacher I will forever thank. We studied the phases of the journey and then watched The Matrix, writing down each phase as it happened to Neo. But the journey is not just a set of steps, a few plot points to create a well-told story. The journey is had in confronting your fears, in slaying your own metaphorical dragons and becoming something greater than just yourself— a true “hero”, if you will.
When I was a teenager, this idea didn’t really resonate with me. I was socially anxious, overly emotional, egotistical, and a people-pleaser. My entire life was wrapped up in simultaneously trying to appease the wishes of others while remaining at heart a rebellious, angry individual who knew none of this made her happy. But the idea of confronting my fears didn’t seem particularly desirable at that time in my life — I was just trying to make it to the next day without killing myself.
It wasn’t until I was in college that I began to realize how much my own life had common with Joseph Campbell’s ideas. My biggest challenge, the treasure that lies at the heart of my fear, has always been overcoming my traumatic past, and as I began to confront the most hated parts of myself, I experienced growth like no other. LSD helped quite a bit — it enabled me to realize how much I truly detested myself and begin to make changes in my life — but as I became more focused on my own “hero’s journey” I realized that psychedelics were only a small part of it. It is more about attitude and taking what comes at you than pushing yourself towards huge epiphanies.
So I began my work.
By “my work” I mean my work on myself, my work on my own fragmented and broken heart. The work of waking up every single day, meditating, saying my affirmations, writing my gratitude journal, reading personal development books, and pouring my extra energy out into exercise (mostly my hula hoop).
I’ve done this work for years. Not consistently, not every single day, but over time, discipline developed. Over time, my monsters seemed less scary, less intelligent, and more like big stupid dinosaurs than some kind of Lovecraftian horror. Whereas before entering the “cave you fear to enter” (as Campbell puts it) was an intense, terrifying thing — now that cave is my haven, my place to relax my mind and allow my creativity to flow.
My Continuing Journey
For so many years, I’ve put off starting my career as a writer because of my mental health. I’ve pushed back my goal of being published in favor of taking care of my mind — which I do not regret one iota. However, I’m now twenty-seven years old with no real future ahead of me and I’m tired of working the daily grind to achieve what I could do with a cup of coffee and a few hours behind a laptop every day.
The “cave” that I fear to enter will become my haven. These monsters I fight will become my hard-won words of wisdom that I pass onto you all, letter by letter. The thousands of books I’ve read will become my army, generals passing on strategies in whispers as I write.
My hero’s journey is continuing. That is the biggest thing that Joseph Campbell taught me — in this long saga of life, we all have many phases to pass between to become our true selves. It is only through struggle and suffering that we learn who we are at our most deepest level. Once we learn who we are, we can then portray that to the world more accurately and use that to further ourselves as both humans and working cogs in this ever-moving machinery of society.
I intend to honor the lessons taught to me by both Campbell’s work and my own inward journey with psychedelics. There’s an oft-quoted aphorism in the psychedelic community — “When you’ve heard the message, hang up the phone.” It means that once you’ve absorbed what you need to learn, psychedelics become a tool like any other to enact change in those areas of your life. Use the ideas and all of the wisdom you have gained over your years of existence to push yourself to the next level of your soul. Don’t lose yourself so much in the beauty that you become blind to the work you need to do on yourself.
Honor this wisdom, and you will become a hero, too. | https://sameripley.medium.com/psychedelics-and-the-heros-journey-6f1600256b19 | ['Sam Ripples'] | 2019-05-05 15:13:18.859000+00:00 | ['Mental Health', 'Spirituality', 'Psychedelics'] |
6 (more) tips to quickly improve your UIs | Creating beautiful, usable, and efficient UIs takes time, with many design revisions along the way.
Making those constant tweaks to produce something that your clients, users, and yourself are truly happy with. I know. I’ve been there many times before myself.
But what I’ve discovered over the years is that by making some simple adjustments you can quickly improve the designs you’re trying to create.
In this follow up article (You can find Parts 1 & 2 here, and here), I’ve once again put together a small, and easy to put into practice selection of tips that can, with little effort, help improve both your designs (UI), and the user experience (UX).
Let’s dive on in… | https://uxdesign.cc/6-more-tips-to-quickly-improve-your-uis-2130d3e89d59 | ['Marc Andrew'] | 2020-08-28 09:06:58.813000+00:00 | ['UI Design', 'UI', 'Visual Design', 'Design', 'Product Design'] |
Young Sir Jagadish Chandra Bose | Quick Intro
The True Laboratory Is The mind, Where Behind Illusions We Uncover The Deeper Laws Of Truth
Breakthrough progress under a simmering sense of social pressure is quite difficult — a life of breakthrough achievement while facing down overt racism is something else entirely. And yet to remain an empathic, humanity-first soul among it all is what truly makes Sir Jagadish Chandra Bose, the twelfth entry in our Young Polymath series, a master of many worth studying.
From physicist to biologist to author to activist, Sir Jagadish Chandra Bose is tragically overlooked as one of the greatest thinkers of the previous century. Maintaining the same focus as previous submissions, we ask again — what was he like in his twenties?
Note-Worthy Accomplishments
— Evolved botany by inventing the crescograph & proving that plants use electrical impulses to respond to stimuli (similarly to animals)
— Known as the Father of Radio, inventing the Mercury Coherer famously used by Guglielmo Marconi for his transatlantic radio
— Philanthropist who not only self-funded the majority of his research, but additionally refused to patent his inventions
— Famous novelist who authored a classic science fiction novel: Niruddesher Kahini (Story of the Untraceable)
20s To 30s (1878–1888)
Jagadish Chandra Bose was born on November 30th, 1858 to a Bengali family in the district of Bikrampur. His father, Bhagawan Chandra Bose, was a leading member of the local religious sect; a conservative man very tied to his cultural roots, he intended to pass on these local values to young Jagadish. As a result, Jagadish attended school locally, learning the vernacular language before heading off to a more prestigious academy to study English.
A time of quiet & peace marked by Bose’s innate curiosity, he later in life credited this period with his deep appreciation for nature, claiming:
Sending children to English schools was an aristocratic status symbol. In the vernacular school, the son of the Muslim attendant of my father sat on my right side, & the son of a fisherman sat on my left. They were my playmates. I listened spellbound to their stories of birds, animals, & aquatic creatures. Perhaps these stories created in my mind a keen interest in investigating the workings of Nature.
Bose joined the Hare School in 1869 & then St. Xavier’s School at Kolkata. In 1875, he passed the entrance examination for the University of Calcutta & was therefore admitted to St. Xavier’s College, Kolkata.
St.Xavier’s College — Kolkata
In 1878, the year Bose turned twenty, he came in contact with his first mentor: Bose credits Jesuit Father Eugene Lafont with furthering his passion, attention & interest towards the natural sciences. The next year, at twenty-one, Bose received his Bachelor of Arts from St. Xavier’s College. He intended to return home & compete for the Indian Civil Service (following in his father’s footsteps). Senior Bose, however, had other plans — he canceled his son’s examinations & claimed that he wished him to be a scholar, one who would:
Rule Nobody But Himself
In 1880, at twenty-two, Jagadish moved to England to study medicine at London University. However, due to multiple bouts of Malaria, Jagadish quickly found his symptoms exacerbated by the odor in the dissection rooms. This led to him withdrawing by the end of the academic year. Gravely disappointed but not down for the count, he applied for & received a scholarship to study Natural Science at Christ’s College.
Christ’s College — Cambridge
In 1881, Jagadish departed London & arrived at Cambridge, ready to study physics/Natural Sciences. In his first year, he’s tapped into a special Tripos program that’ll grant him degree completion in physics, chemistry & botany. This first year, he meets another significant life-long mentor, fellow polymath Lord Rayleigh.
The next two years (1882–1883) were an academic flurry for young Jagadish. It was likely a period of massive personal & professional growth, one made possible by an extremely talented pool & mentors: Lord Rayleigh, Sir James Dewar, Sir Michael Foster, Francis Balfour & Francis Darwin (Charles Darwin’s son).
At twenty-six, in 1884, Jagadish furiously finished one academic program only to apply & complete an entire additional degree within the same year. This year, he not only completed & received his second Bachelors of Arts degree from Christ’s College, but he also attained Bachelors of Science from the University College London.
A mature immigrant now longing to return to his roots, Jagadish returned home and, at twenty-seven, was appointed as an officiating Professor of Physical Science at Presidency College of Calcutta. In this first job, Bose became a victim of blatant racism as his salary was fixed at a much lower level than that of the British professors. As a protest, Bose refused to accept the salary & taught at the college for free. Eventually taking up research alongside his teaching, he found the first two years a heavy adjustment period, as the academic Bose was now removed from pure learning.
In 1887, the year he turned twenty-nine, Bose married Abala, a prominent activist for education & divorced women. On the professional spectrum, after three years, the college Principal Twany & Director Croft, impressed by his brilliance, jointly recommended full salary for him as well as full payback for the three years since he joined.
Quirks, Rumors & Controversies
This is the section where we turn over stones & search on the dark side of the moon for any hint of darkness within the character of our protagonist. There’s a particular reason why this entry has the word empathy in the subtitle however, not for a lack of trying, just like Mary Somerville, absolutely nothing negative came to light regarding Sir Jagadish Chandra Bose — quite the opposite.
Just within this thirty-year profile, we see two instances of Bose’s marked compassion. First, his quoted recollection of his childhood curiosity when observing nature; second, his silent yet powerful protest against employer discrimination. Past the scope of this mini-bio, his single greatest demonstration of humanity is missing from this article: his intuition & discovery that plants feel. This contribution to the field of botany is mainly credited to his invention of the crescograph — an instrument used to measure the previously-invisible reactions to external stimuli within a plant. Lastly, famously, one of the main reasons Sir Jagadish Chandra Bose has been overlooked in history is his attitude towards taking credit: throughout his life, Bose refused to patent his inventions for monetary gain or recognition. Bose is one of those very rare souls whose pursuit of science & knowledge wasn’t just personal, as it is for most entries in this series; more importantly, it was selfless, not for the history books but for the continued benefit of all.
In Closing
Who Was Jagadish Bose In His 20s? An avid student of science with a heightened sense of compassion & self-awareness.
Was He Accomplished In His 20s? Not particularly, no. Similar to Jefferson (in trajectory, certainly not in topics), Bose spent his twenties conquering the collegiate/academic world & rubbing shoulders with future innovators & scientists.
Less well-known than previous subjects, Bose’s humility is unmatched & admirable in its own right. The man who first demonstrated wireless waves & proved that plants are “alive,” Sir Jagadish Chandra Bose was uniquely tuned in to nature. A stark contrast to previous protagonists Newton & Tesla, Bose reminds us that genius isolation is but one path — warmth for our fellow man & nature can also indeed be a compass for deeper knowledge.
Additional Entries
Part I — Benjamin Franklin
Part II — Bertrand Russell
Part III — Leonardo Da Vinci
Part IV — Thomas Young
Part V — Mary Somerville
Part VI — Richard Feynman
Part VII — Sir Francis Bacon
Part VIII — Jacques Cousteau
Part IX — Nikola Tesla
Part X — Isaac Newton
Part XI — Thomas Jefferson | https://medium.com/young-polymaths/young-sir-jagadish-chandra-bose-e0cdcfca2c81 | ['Jesus Najera'] | 2020-03-01 18:59:24.029000+00:00 | ['History', 'Physics', 'Biography', 'Science', 'Innovation'] |
COVID-19 visualizations with Stata Part 10: Stream graphs | Within the graphs folder, I also create an additional sub-folder called guide10, to store the figures generated here. For details on how to organize your files, please see Guide 1.
In order to make the graphs exactly as they are shown here, several additional items are required:
Install the cleanplots theme for a clean look for your figures (more on themes in Guide 2):
net install cleanplots, from("https://tdmize.github.io/data/cleanplots")
set scheme cleanplots, perm
Install the palettes and colrspace packages (used later for defining the custom colors):

net install palettes, replace from("https://raw.githubusercontent.com/benjann/palettes/master/")
net install colrspace, replace from("https://raw.githubusercontent.com/benjann/colrspace/master/")
Set default graph font to Arial Narrow (see the Font guide on customizing fonts)
graph set window fontface "Arial Narrow"
This guide has been written in version 16.1 and should work with version 14 and onwards. Earlier versions might need some modification for implementing custom colors.
Get the data in order
We pull the data from the Our World in Data’s COVID-19 webpage as follows:
************************
*** COVID 19 data ***
************************
insheet using "https://covid.ourworldindata.org/data/owid-covid-data.csv", clear
save ./raw/full_data_raw.dta, replace

gen date2 = date(date, "YMD")
format date2 %tdDD-Mon-yy
drop date
ren date2 date

ren location country
replace country = "Slovak Republic" if country == "Slovakia"

drop if date < 21915 // 1st Jan 2020

save "./master/OWID_data.dta", replace
All observations before 1st Jan 2020 are dropped in the dataset.
Since OWID uses very broad classifications for continents (five continents in total), we will use the World Bank 2020 classifications for country groupings, which provide a very large set of regions.
The Excel file can be downloaded from the page linked above. I have already cleaned the file and uploaded it on my GitHub page in Stata format. It can be directly pulled into Stata and saved in the master folder as follows:
*********************************
*** Country classifications ***
*********************************

copy "https://github.com/asjadnaqvi/COVID19-Stata-Tutorials/blob/master/master/country_codes.dta?raw=true" "./master/country_codes.dta", replace
Next we merge the two files together and drop anything that does not match:
use "./master/OWID_data.dta", clear
merge m:1 country using "./master/country_codes.dta"
drop if _m!=3

keep country date new_cases new_deaths group*

summ date
drop if date>=r(max)
We also only keep the variables we need and drop the last date observation to avoid missing values for some countries. This might not be necessary and it all depends on when the data is pulled from the OWID website. In the past, not all countries were updated at the same time.
If everything works well, the dataset should look something like this:
Stata 16.1 interface using the Dark theme
Each group variable corresponds to the classification defined by the World Bank.
Setup for stream graphs
Since we now have World Bank country groupings, we can split the data in as many groups as we want. For now, I am defining the following 12 regions:
gen region = .

replace region = 1 if group29==1 & country=="United States"
replace region = 2 if group29==1 & country!="United States"
replace region = 3 if group20==1 & country=="Brazil"
replace region = 4 if group20==1 & country!="Brazil"
replace region = 5 if group10==1
replace region = 6 if group8==1 & group10!=1 & country=="United Kingdom"
replace region = 7 if group8==1 & group10!=1 & country!="United Kingdom"
replace region = 8 if group26==1
replace region = 9 if group37==1
replace region = 10 if group35==1 & country=="India"
replace region = 11 if group35==1 & country!="India"
replace region = 12 if group6==1
This of course can be increased or decreased based on the level of detail required. We can also label the values of this variable:
lab de region 1 "United States" 2 "Rest of North America" 3 "Brazil" 4 "Rest of Latin America" 5 "European Union" 6 "United Kingdom" 7 "Rest of Europe" 8 "Middle East and North Africa" 9 "Sub-Saharan Africa" 10 "India" 11 "Rest of South Asia" 12 "East Asia and Pacific" lab val region region
In the next step, we collapse the data and sum up the daily cases and deaths by date and region combination.
collapse (sum) new_cases new_deaths, by(date region)

format date %tdDD-Mon-yy
format new_cases %9.0fc

*** minor cleaning of negative cases
replace new_cases = 0 if new_cases < 0
replace new_deaths = 0 if new_deaths < 0
Note that all variables that are not defined in the collapse command are automatically dropped from the dataset. We also clean up date format and the variables.
Now we declare the data to be a panel dataset using the xtset command. We can use the panel structure to generate a 7-day moving average to smooth out the series:
xtset region date
tssmooth ma new_cases_ma7 = new_cases , w(6 1 0)
tssmooth ma new_deaths_ma7 = new_deaths , w(6 1 0)
and we can plot the series for the first few regions to see what it looks like:
twoway ///
(line new_cases_ma7 date if region==1) ///
(line new_cases_ma7 date if region==2) ///
(line new_cases_ma7 date if region==3) ///
(line new_cases_ma7 date if region==4) ///
(line new_cases_ma7 date if region==5) ///
(line new_cases_ma7 date if region==6), ///
legend(off)
which gives us this graph:
Next we use the logic introduced in Guide 5 on Stacked area graphs and generate a new set of variables that stack the values on top of each other to give cumulative series:
******** now we stack these up

gen stack_cases = .
gen stack_deaths = .

sort date region

levelsof date, local(dates)

foreach y of local dates {
summ region

*** cases
replace stack_cases = new_cases_ma7 if date==`y' & region==`r(min)'
replace stack_cases = new_cases_ma7 + stack_cases[_n-1] if date==`y' & region!=`r(min)'

*** deaths
replace stack_deaths = new_deaths_ma7 if date==`y' & region==`r(min)'
replace stack_deaths = new_deaths_ma7 + stack_deaths[_n-1] if date==`y' & region!=`r(min)'
}
Essentially, we are taking the first region observation as it is, and for the subsequent regions iteratively adding up the values. We can also plot the new variables:
twoway ///
(line stack_cases date if region==1) ///
(line stack_cases date if region==2) ///
(line stack_cases date if region==3) ///
(line stack_cases date if region==4) ///
(line stack_cases date if region==5) ///
(line stack_cases date if region==6), ///
legend(off)
where we get this graph:
Here the difference between two adjacent lines is the daily cases for that region. This figure is also easier to read than the earlier graph.
If you look at the help of the twoway rarea command:
help twoway_rarea
here you will see that the syntax is:
twoway rarea y1var y2var xvar [if] [in] [, options]
This implies that in Stata, if we need to make area graphs, then each region needs to be its own variable. This essentially means we need to reshape the data and make it wide.
Before we reshape, we keep the variables we need and rename the new stack_* variables for convenience:
keep region date new_cases stack_cases new_deaths stack_deaths

// rename just to keep life easy
ren stack_cases cases
ren stack_deaths deaths
As discussed in Guide 9, reshaping loses the information on value labels. We can use this three-step process to (a) preserve the labels in locals before reshaping, (b) reshape the data, and (c) apply the labels after the reshape:
*** preserve the labels

levelsof region, local(idlabels) // store the id levels
foreach x of local idlabels {
local idlab_`x' : label region `x'
}

*** reshape the data

reshape wide cases new_cases deaths new_deaths, i(date) j(region)
order date cases* new_cases* deaths*

*** and apply the labels back

foreach x of local idlabels {
lab var cases`x' "`idlab_`x''"
lab var new_cases`x' "`idlab_`x''"
lab var deaths`x' "`idlab_`x''"
lab var new_deaths`x' "`idlab_`x''"
}
Since there are locals involved, the above code has to run in one go. If the code runs fine, we should get something like this where each variable is given a label based on the corresponding number in the variable name:
We can now redraw the line graph shown earlier but now we just do it using variable names rather than if conditions:
twoway ///
(line cases1 date) ///
(line cases2 date) ///
(line cases3 date) ///
(line cases4 date) ///
(line cases5 date) ///
(line cases6 date) ///
(line cases12 date) ///
, legend(off)
Note that here I am using the last variable cases12 just to show the extent of the data:
Since we want areas to stack up, we also need to define a dummy 0 variable for the first country:
gen cases0 = 0
gen deaths0 = 0

twoway ///
(line cases0 date) ///
(line cases1 date) ///
(line cases2 date) ///
(line cases3 date) ///
(line cases4 date) ///
(line cases5 date) ///
(line cases6 date) ///
(line cases12 date) ///
, legend(off)
Which just gives us this figure:
The importance of having a zero line will become obvious below.
Now that we have all the data in place, we can re-center the graph around zero on the y-axis. For this we need to take the maximum value for each date, divide it by two, and subtract it from each region's observation.

In order to re-center the graph, we take the highest value, which is simply the value of the last variable, and divide it by two:
ds cases*
local items : word count `r(varlist)'
local items = `items' - 1
display `items'

gen meanval_cases = cases`items' / 2
gen meanval_deaths = deaths`items' / 2

foreach x of varlist cases* {
gen `x'_norm = `x' - meanval_cases
}

foreach x of varlist deaths* {
gen `x'_norm = `x' - meanval_deaths
}

drop meanval*
The first three lines automate the process of counting the number of variables in the dataset. Note that we do local items = `items' - 1 to account for the additional cases0 variable we generated earlier, so that the total number of variables in our example is 12, and not 13, and the loop runs properly.
The automation helps if regions are added or subtracted, or the graph is looped over multiple regions, for example, generating a stream graph for each continent with all the countries.
We can now plot the normalized variables, which carry the suffix *_norm, as follows:
twoway ///
(line cases0_norm date) ///
(line cases1_norm date) ///
(line cases2_norm date) ///
(line cases3_norm date) ///
(line cases4_norm date) ///
(line cases5_norm date) ///
(line cases6_norm date) ///
(line cases12_norm date) ///
, legend(off)
which gives us the core skeleton structure we need for stream graphs:
Here we can see the y=0 line has also been re-centered. We can now convert the above figure into an area graph as follows:
twoway ///
(rarea cases0_norm cases1_norm date) ///
(rarea cases1_norm cases2_norm date) ///
(rarea cases2_norm cases3_norm date) ///
(rarea cases3_norm cases4_norm date) ///
(rarea cases4_norm cases5_norm date) ///
(rarea cases5_norm cases6_norm date) ///
(rarea cases6_norm cases12_norm date) ///
, legend(off)
which gives us this figure:
Notice also the y-axis which is now showing zero in the center and everything is distributed around it. Also note the pattern for generating the rarea graph where two y-variables have a difference of one in the name.
Labels
We can also automate the labels in three steps.
Step 1: generate the mid points of the last data observation:
*** this part is for the mid points

summ date
gen last = 1 if date==r(max)

ds cases*norm
local items : word count `r(varlist)'
local items = `items' - 2
display `items'

forval i = 0/`items' {
local i0 = `i'
local i1 = `i' + 1

gen ycases`i1' = (cases`i0'_norm + cases`i1'_norm) / 2 if last==1
gen ydeaths`i1' = (deaths`i0'_norm + deaths`i1'_norm) / 2 if last==1
}
Note again the use of locals and word count to automate the whole process. The mid point has to be calculated as (starting value + ending value) / 2. Since we are counting from zero, we update the items local by reducing its value by 2.
Step 2: Next we generate a variable for the share of cases and deaths for the last observation. This is just to indicate how much each region is contributing to the total of the last data point.
This is achieved using a fairly straightforward loop:
*** this part is for the shares

egen lastsum_cases = rowtotal(new_cases*) if last==1
egen lastsum_deaths = rowtotal(new_deaths*) if last==1

foreach x of varlist new_cases* {
gen `x'_share = (`x' / lastsum_cases) * 100
}

foreach x of varlist new_deaths* {
gen `x'_share = (`x' / lastsum_deaths) * 100
}

drop lastsum*
Note here that I am not using the smoothed variable but the actual data for the accurate value of the share of cases. This is also the reason we carry this variable forward throughout the collapse and reshape process.
Step 3: generate the variables containing the label for the graphs. Here we again use a mix of several locals to automate the label generation process:
**** here we generate the labels
ds cases*norm
local items : word count `r(varlist)'
local items = `items' - 1

foreach x of numlist 1/`items' {
	local t : var lab cases`x'

	*** cases
	gen label`x'_cases = "`t'" + " (" + string(new_cases`x', "%9.0f") + ", " + string(new_cases`x'_share, "%9.0fc") + "%)" if last==1

	*** deaths
	gen label`x'_deaths = "`t'" + " (" + string(new_deaths`x', "%9.0f") + ", " + string(new_deaths`x'_share, "%9.0fc") + "%)" if last==1
}
In the code above, ds and the word count macro function again define the number of iterations for the loop. The local t picks up the variable label, and the gen command puts all this information together. Since the data is in wide form after the reshape, a new label variable is generated for each region.
We can also plot this manually for the same regions shown earlier:
twoway ///
(rarea cases0_norm cases1_norm date) ///
(rarea cases1_norm cases2_norm date) ///
(rarea cases2_norm cases3_norm date) ///
(rarea cases3_norm cases4_norm date) ///
(rarea cases4_norm cases5_norm date) ///
(rarea cases5_norm cases6_norm date) ///
(rarea cases6_norm cases12_norm date) ///
(scatter ycases1 date if last==1, ms(smcircle) msize(0.2) mlabel(label1_cases) mcolor(black%20) mlabsize(tiny) mlabcolor(black)) ///
(scatter ycases2 date if last==1, ms(smcircle) msize(0.2) mlabel(label2_cases) mcolor(black%20) mlabsize(tiny) mlabcolor(black)) ///
(scatter ycases3 date if last==1, ms(smcircle) msize(0.2) mlabel(label3_cases) mcolor(black%20) mlabsize(tiny) mlabcolor(black)) ///
(scatter ycases4 date if last==1, ms(smcircle) msize(0.2) mlabel(label4_cases) mcolor(black%20) mlabsize(tiny) mlabcolor(black)) ///
(scatter ycases5 date if last==1, ms(smcircle) msize(0.2) mlabel(label5_cases) mcolor(black%20) mlabsize(tiny) mlabcolor(black)) ///
(scatter ycases6 date if last==1, ms(smcircle) msize(0.2) mlabel(label6_cases) mcolor(black%20) mlabsize(tiny) mlabcolor(black)) ///
, legend(off)
Which gives us this graph:
We have now achieved the core structure required to automate the final figure.
Automate
In the code below, we use a combination of loops and locals to also define the colors and labels for each region segment of the graph:
*** automate the areas, colors, labels
ds cases*norm
local items : word count `r(varlist)'
local items = `items' - 2
display `items'

forval x = 0/`items' {

	colorpalette ///
		"253 253 150" ///
		"255 197 1" ///
		"255 152 1" ///
		" 3 125 80" ///
		" 2 75 48" ///
		, n(13) nograph

	local x0 = `x'
	local x1 = `x' + 1

	local areagraph `areagraph' rarea cases`x0'_norm cases`x1'_norm date, fcolor("`r(p`x1')'") lcolor(black) lwidth(*0.15) || (scatter ycases`x1' date if last==1, ms(smcircle) msize(0.2) mlabel(label`x1'_cases) mcolor(black%20) mlabsize(tiny) mlabcolor(black)) ||
}

*** get the date ranges in order
summ date
local x1 = `r(min)'
local x2 = `r(max)' + 50

*** generate the graph
graph twoway `areagraph' ///
, ///
legend(off) ///
ytitle("", size(small)) ///
ylabel(-300000(100000)300000) ///
yscale(noline) ///
ylabel(, nolabels noticks nogrid) ///
xscale(noline) ///
xtitle("") ///
xlabel(`x1'(15)`x2', labsize(*0.6) angle(vertical) glwidth(vvthin) glpattern(solid)) ///
title("{fontface Arial Bold: COVID-19 Daily Cases - The World}") ///
note("Data sources: Our World in Data. World Bank 2020 classifications used for country groups.", size(tiny))
The code above has a lot of parts that fit together. The total number of values that need to be picked is stored in the local items. A custom color palette, which has been discussed in the Color guide and applied in Guide 9 on bar graphs, is also used here; it can be replaced with any other color scheme. The core body of the figure is stored in the local areagraph, which contains two parts: one for the area graph and one for the labels, including all the customizations of lines, colors, widths, sizes, etc. The color information is also dynamically applied from the values stored in locals after the colorpalette command.
The date range is stored in the two locals x1 and x2. The graph command calls the areagraph local and the date locals. The y-axis is turned off completely, including the line which shows the values centered around zero. The title is also customized using the fontface argument (see the Font Guide for details).
From the code above, we get the following final figure: | https://medium.com/the-stata-guide/covid-19-visualizations-with-stata-part-10-stream-graphs-9d55db12318a | ['Asjad Naqvi'] | 2020-12-16 15:05:01.502000+00:00 | ['Stata', 'Automation', 'Stream Graph', 'Visualization', 'Covid 19'] |
September Newsletter: New Submission Guidelines, New Writer Bios + More! | Image: The Lucky Freelancer
Hello! If you’re getting this email, it’s because you’re subscribed to The Lucky Freelancer’s Medium newsletter! Thanks for that, by the way. Because of supportive people like you, we’ve just surpassed 1.6 thousand followers! Not bad growth for a Pandemic year!
But back to the main objective: freelance writing. That's the reason that you're here, right? How to get started as a freelancer. How to find writing gigs. How to fine-tune your writing (or learn another style). And of course, how to make a full-time income!
Our latest pieces cover all that plus more! Check them out (and know that more content is on the way!)
Our Latest Pieces:
The Anatomy Of A News Article — Serenity J.
5 Websites That Pay Women Writers Real Money — Serenity J.
Setting Up Your Home As A New Freelance Writer — Kahli Bree Adams
9 Publications That Are Paying $100 or More During The Pandemic — Serenity J.
This Is The Pitch That Landed Me My First Travel Commission — Beth Seager
Calls For Submissions
Dear writers!
I’d like to thank any and all of you who’ve ever submitted to The Lucky Freelancer. Without your submissions, our publication wouldn’t feel complete.
For September submissions, however, I thought I’d do things differently and focus on three distinct categories. This may or may not be the format, going forward. We’re just trying something new!
Here’s what I’m looking for this month:
Writing Gigs & Opportunities/Scams
Our community has been extremely receptive to posts about various job opportunities. If you’ve found a great gig you want to share, do a write up on the company! Make sure you include who they are, their mission, and how much they pay!
Likewise, there are a bunch of scams out there. Had a bad experience with a company that stole your work, refused to pay you? Let us know!
Money Diaries:
Writing might be the thing that keeps breath in your lungs, but at the end of the day, we're all in this to make money. And when freelancers are transparent about their income — where it comes from, and how it influences their quality of life — it helps us all to secure higher, fairer rates.
Money diaries will explore what the average writer makes, whether you're an occasional, part-time, or full-time writer.
These posts should cover one (or more) of the following subtopics:
How has the pandemic impacted your income streams? Are you doing better, worse, or about the same?
How much do you make from writing in a month? Are you full-time or part-time? Does your writing pay for expenses? Or is it for spending money?
A detailed breakdown of your income for the past month. How many clients do you have? How much does each pay?
My Idiot Clients:
We’ve all had clients that were less than ideal.
Some were overbearing, some were rude, but then there were those (hopefully) well-meaning individuals that just didn’t understand anything about the nature of your job or their project. The client that didn’t know what he wanted but told you to “make it good.” The client who didn’t understand that you’re a writer, not a graphic designer. The client who wanted a month’s worth of work in two days, on a $2 budget.
If you have ever experienced a client who made you seriously reconsider your career path, please tell us all about it!
Note: While we want honest accounts of your frustrations, please refrain from using profanity or calling out clients by name. | https://medium.com/the-lucky-freelancer/september-newsletter-new-submission-guidelines-new-writer-intros-more-18b46f877ad7 | ['Serenity J.'] | 2020-09-04 00:42:54.178000+00:00 | ['Freelancing', 'Freelance Writing', 'Solopreneur', 'Work From Home', 'Writing'] |
Understanding Data Preprocessing taking the Titanic Dataset. | Source : Google — Thanks for existing Google.
What is Data Pre-Processing?
We know from my last blog that data preprocessing is a data mining technique that involves transforming raw data into an understandable format. Real-world data is often incomplete, inconsistent, and/or lacking in certain behaviors or trends, and is likely to contain many errors. Data preprocessing is a proven method of resolving such issues. Data preprocessing prepares raw data for further processing.
So in this blog we will learn about the implementation of data pre-processing on a data set. I have decided to do my implementation using the Titanic data set, which I have downloaded from Kaggle. Here is the link to get this dataset- https://www.kaggle.com/c/titanic-gettingStarted/data
Note- Kaggle gives 2 datasets, the train and the test dataset, so we will use both of them in this process.
What is the expected outcome?
The Titanic shipwreck was a massive disaster, so we will implement data pre-processing on this data set to find the number of survivors and their details.
I will show you how to apply data preprocessing techniques on the Titanic dataset, with a tinge of my own ideas into this.
So let’s get started…
Importing all the important libraries
Firstly, after loading the data sets into our system, we will import the libraries that are needed to perform the functions. In my case I imported the NumPy, Pandas and Matplotlib libraries.
#importing libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
Importing dataset using Pandas
To work on the data, you can either load the CSV in excel software or in pandas. So I will load the CSV data in pandas. Then we will also use a function to view that data in the Jupyter notebook.
#importing dataset using pandas
df = pd.read_csv(r'C:\Users\KIIT\Desktop\Internity Internship\Day 4 task\train.csv')
df.shape
df.head()
#Taking a look at the data format below
df.info()
Let’s take a look at the data output that we get from the above code snippets :
If you carefully observe the above summary from pandas, there are 891 rows in total, but Age shows only 714 (meaning values are missing), Embarked has 2 missing, and Cabin is missing a lot as well. Object data types are non-numeric, so we have to find a way to encode them to numerical values.
Viewing the columns in the particular dataset
We use a function to view all the columns that are being used in this dataset for a better reference of the kind of data that we are working on.
#Taking a look at all the columns in the data set
print(df.columns)
Defining values for independent and dependent data
Here we will declare the values of X and y for our independent and dependent data.
#independet data
X = df.iloc[:, 1:-1].values
#dependent data
y = df.iloc[:, -1].values
Dropping Columns which are not useful
Let's try to drop some of the columns which may not contribute much to our machine learning model, such as Name, Ticket, Cabin, etc.
So we will drop 3 columns and then we will take a look at the newly generated data.
#Dropping columns which are not useful, so we drop 3 of them here according to our convenience
cols = ['Name', 'Ticket', 'Cabin']
df = df.drop(cols, axis=1)

#Taking a look at the newly formed data format below
df.info()
Dropping rows having missing values
Next, if we want, we can drop all rows in the data that have missing values (NaN). You can do it as the code shows:
#Dropping the rows that have missing values
df = df.dropna()
df.info()
Problem with dropping rows having missing values
After dropping rows with missing values we find that the dataset is reduced to 712 rows from 891, which means we are wasting data. Machine learning models need data for training to perform well. So we preserve the data and make use of it as much as we can. We will see it later.
Creating Dummy Variables
Now we convert the Pclass, Sex, Embarked to columns in pandas and drop them after conversion.
#Creating Dummy Variables
dummies = []
cols = ['Pclass', 'Sex', 'Embarked']
for col in cols:
    dummies.append(pd.get_dummies(df[col]))
titanic_dummies = pd.concat(dummies, axis=1)
On inspecting the information, we see that we now have 8 new dummy columns, where 1, 2, 3 represent the passenger class.
And finally we concatenate to the original data frame column wise.
#Combining the original dataset
df = pd.concat((df,titanic_dummies), axis=1)
Now that we have converted the Pclass, Sex, Embarked values into columns, we drop the now-redundant original columns from the data frame and take a look at the new data set.
df = df.drop(['Pclass', 'Sex', 'Embarked'], axis=1)

df.info()
Taking Care of Missing Data
All is good, except Age, which has lots of missing values. Let's compute a median or interpolate() all the ages and fill those missing age values. Pandas has an interpolate() function that will replace all the missing NaNs with interpolated values.
#Taking care of the missing data by interpolate function
df['Age'] = df['Age'].interpolate()

df.info()
Now let's observe the data columns. Notice Age, which is now interpolated with the imputed new values.
Converting the data frame to NumPy
Now that we have converted all the data to numeric, it's time to prepare the data for machine learning models. This is where scikit-learn and numpy come into play:
X = Input set with 14 attributes
y = Small y Output, in this case ‘Survived’
Now we convert our dataframe from pandas to numpy and we assign input and output.
#Using the concept of survived values, we convert and view the dataframe as NumPy
X = df.values
y = df['Survived'].values

X = np.delete(X, 1, axis=1)
Dividing data set into training set and test set
Now that we are ready with X and y, let's split the dataset into a 70% training set and a 30% test set using scikit-learn's model_selection, as in the code below, with four print statements after that to check the result:
#Dividing data set into training set and test set (Most important step)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
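The four print statements mentioned above are not shown in the original snippet; they are most likely simple shape checks along these lines (an assumption, continuing from the split above):

# Check the shapes of the resulting splits (illustrative; continues from the code above)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)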
Feature Scaling
Feature Scaling is an important step of data preprocessing. Feature scaling transforms all the data so that it lies on a similar scale, usually around -3 to +3.
In our data set, some fields have small values and some fields have large values. If we apply our machine learning model without feature scaling, the model's predictions suffer (because the small values are dominated by the large values). So before applying the model, we have to perform feature scaling.
We can perform feature scaling in two ways.
I: Standardization: x = (x - mean(X)) / standard deviation(X)
II: Normalization: x = (x - min(X)) / (max(X) - min(X))
#Using the concept of feature scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train[:,3:] = sc.fit_transform(X_train[:,3:])
X_test[:,3:] = sc.transform(X_test[:,3:])
That’s all for today guys!
This is the final outcome of the whole process. For more of such blogs, stay tuned! | https://medium.com/all-about-machine-learning/understanding-data-preprocessing-taking-the-titanic-dataset-ebb78de162e0 | ['Sanya Raghuwanshi'] | 2020-09-06 14:09:03.742000+00:00 | ['Machine Learning', 'Data Science', 'Data Preprocessing', 'Python', 'Titanic Dataset'] |
A Conversation on Work, Life & Balance With Suzi Dafnis, CEO of HerBusiness | This conversation with part of a weekly interview series by Balance the Grind, where we talk to people from all walks of life about work, life and balance.
From CEOs to musicians, startup founders to freelance journalists, marketing managers to creative directors, we talk to everyone about how they balance the grind.
Suzi Dafnis is the CEO of HerBusiness, a membership community that provides training, resources, mentoring and support for women who want to market and grow their business.
She has been featured in many national publications, including The Australian, The Sydney Morning Herald, Canberra Times, MyBusiness Magazine, Voyeur (Virgin Blue’s in flight magazine), as well as on numerous radio and television programs.
Balance the Grind spoke to Suzi about her extensive entrepreneurial journey, her role as CEO of HerBusiness, working in New York, and more.
1) To kick things off, could you tell us a little about your background and career?
Sure. For over 25 years I’ve run a community that helps women business owners go from being solopreneurs to growing and scaling a sustainable business.
Since I started my first business at age 26 from my spare room, I’ve started and sold online and offline businesses, product and service businesses, offices in Australia and in the US, with teams of 0 to teams of 40.
My main focus right now remains my passion, and that is the HerBusiness community which provides mentoring, training and a business network for women entrepreneurs.
2) What is your current role and what does it entail on a day to day basis?
As the CEO of HerBusiness, my role is to stay in my lane and wear the CEO hat without veering off into other people's roles. This is something business owners can have trouble doing when there is so much to focus on.
Finding (and doing) what is the best and highest use of OUR time, as business owners, in our business — takes discipline.
It also takes discipline to empower and train your team to do THEIR roles and to implement the systems and technology to take care of all aspects of the business without the owner having to be involved.
My main focus each day is creating content (we’ve been at the forefront of online marketing for many years) in the form of podcasts, blog posts, Facebook LIVES, online courses and training for our community of 30,000 followers across a number of platforms.
As the spokesperson and primary face of the business, I’m also very much responsible for the voice of the organisation and its public-facing communications.
I love connecting with our clients and offering business coaching and support in personal conversations across our social networks, on our regular webinars and through our emails.
3) What does a typical day in the life look like for you? Can you take us through a recent workday?
Right now I’m on a 10 week extended stay in New York while my team of 10 is dispersed across Manila, Sydney and Arizona. The systems and structures and clarity of roles is what allows us to work efficiently as a dispersed team.
What this change in my location and timezone (I am usually in Sydney) means is that my days until mid-August start later and finish a little later than usual.
I’m actually loving this new pace — the mornings to myself to take a walk to Central Park and to grab a coffee, then some time to write and create content and plan, before heading to the gym and then grabbing lunch.
The early afternoon is spent replying to emails, posting on social, and catching up on communications until Sydney comes online, when we have our daily team huddle (which the team attends across all time zones) to refocus on what our priorities are.
I try to get offline by about 10 pm local time to then read and get to bed by midnight.
At home I’m an early riser and early to bed, so this schedule is different, but two weeks in I’ve gotten used to it and like it.
4) Do you have any tips, tricks or shortcuts to help you prioritise your workload?
We are big on planning. We have our campaigns, events and promotional schedule planned out at least 12 months in advance. That allows us to get momentum from one project to another.
We also have just a few strategic objectives as a team and a key focus.
For example, we spent a couple of years designing our product line so that we have the right products and services, at the right prices, with the right inclusions for the different types of clients that we serve.
Before that we spent a good chunk of time focused on identifying and getting to know our ideal customer. Right now we have a keen focus on building our audience.
The entire team knows what the focus is — so we don’t get distracted by bright shiny objects that don’t serve our strategic objectives.
When new opportunities or ideas come up — we measure their relevance to what we are trying to achieve as a business… and this means we can funnel our precious time and energy to the right things.
5) In between your job, life and all your other responsibilities, how do you ensure you find some sort of balance in your life?
I don’t really believe in balance as an idea. I think that we go in waves and seasons. When we are gearing up to the launch of a new program, it’s all systems go and there are late nights and early mornings, and weekends when the team and I are ‘flat out’ working.
In between big projects we have time to tidy up, do post-mortems on projects and get a 'breather' in before we ramp up again.
For me, personally, self care and balance means going to the gym (I’ve done Crossfit for over 14 years) even in the busiest of times, maintaining a good diet and getting sleep.
Sleep is probably the first thing that goes out the window when I get busy. I'm getting better at recognising that good sleep gives me a much better outlook on life when I'm super busy.
I travel for work and play a few times a year and this fuels me too.
6) What are some of the things you do to take time out and recharge?
I love trying new restaurants. And so, at least once a week I’m out for dinner with my partner PJ and usually with friends.
And while I have a stop-start relationship with running, it is something I keep going back to because it helps me clear my head.
I also love reading and usually have 2–3 books on the go. Reading just before bed gets me off my electronic devices and into the best state for a restful sleep.
7) What do you think are some of the best habits you’ve developed over the years to help you strive for success and balance?
I love this question.
Because it’s what we do all the time that determines who we are and what we achieve, not the things we do now and then.
Strategic planning is one of our company habits. We strategise, plan and then implement.
I know it sounds boring, but it’s actually very exciting when you see your plans and visions become a reality.
A natural ‘new ideas’ person, I love to start things.
Becoming disciplined about planning my time and resources and that of our teams was a game changer.
It’s something we teach our business owner clients which empowers them to get things over the finish line time after time, rather than having a whole lot of balls in the air and nothing landing.
8) Are there any books you’ve read that have helped you with work-life balance?
I wouldn’t read a book about work-life balance. I’m not interested in that so much as I am in doing what I love, every day.
I love to work. I love to play. I don’t see myself as having a work life and a home life.
It’s one life and I want to live it to the fullest and to serve as many people as possible.
9) What is the number one thing you do to make sure you get the most out of your day?
Stopping. Getting present. Checking a big picture of what I want to feel that I have accomplished at the end of the day.
Some might call it mindfulness. To me, it’s prioritising what my attention and time will be spent on.
I’d love to say I do this every day, but I don’t. I do it on the days that work out the best! | https://medium.com/balance-the-grind/a-conversation-on-work-life-balance-with-suzi-dafnis-ceo-of-herbusiness-7128f2e84b1f | ['Balance The Grind'] | 2019-07-02 00:48:28.613000+00:00 | ['Work', 'Day In The Life', 'Work Life Balance', 'CEO', 'Productivity'] |
A Daughter’s Wisdom | “You shouldn’t blame them. It was all my fault. I knew what I was doing when I got in the car with them.”
“And you knew your mother was gone, and this was all up to me? Do you just hate me or something?”
“Anyway, I knew you’d find me.” | https://medium.com/centina-pentina/a-daughters-wisdom-a13ddc33ad37 | ['Terry Barr'] | 2020-12-11 01:48:28.844000+00:00 | ['Family', 'Writing Prompts', 'Nonfiction', 'Dialogue', 'Pentina'] |
December 2020 Deals Recap | As we approach the winter holidays we have a final monthly deals recap market map for you. With a new year, and a light at the end of the tunnel (vaccine rollouts), we are looking forward to a better, brighter, and healthier 2021. Things are looking bright for New England, as capital floods into biotech, deeptech, and just about all tech in the region. Again, we’re thankful for all you founders and investors continuing to move forward with your plans to make the world a better place! Now, onto the deals. [NOTE: Round info per Crunchbase reporting] | https://medium.com/the-startup-buzz/december-2020-deals-recap-5b36019ab47a | ['Matt Snow'] | 2020-12-22 20:02:37.049000+00:00 | ['Technology', 'Fundraising', 'Startup', 'New England', 'Venture Capital'] |
Living in Austin, Texas; Part 1— Housing Trends with Time Series Analysis | Downtown Austin, Texas
Anyone who has lived in Austin for more than a couple years can tell you that the real estate and rental markets here are, as we say, ¡muy caliente! (if you don’t speak Texas Spanish, that means, “really dang hot!”).
Just how hot are the Austin housing and rental markets though? And perhaps more importantly, do they show any signs of cooling off in the near or intermediate future? Having good and empirically grounded answers to these and other questions can be highly valuable for an individual participant in the housing or rental markets — it can mean the difference between getting into or out of the residential real estate market at the wrong time, or seriously overpaying for rent at an apartment with more competitive options at other proximate geographic locations.
In Part 1 of this Two Part post, I will discuss the past and current trends in the Austin residential real estate and rental markets, and perform some analysis on the available time series data to gain insights that may help a prospective buyer/seller/renter make a more highly informed decision about their plans on when and where to buy/sell/rent.
Part 2 then focuses on constructing a rent pricing model using location-based hedonic regression, which may be used to find an optimal apartment, given an individual person’s location, price, and hedonic preferences/constraints, for example.
Data Sources
To perform the analysis, I used two main sources of data: Zillow Public Economic Data (Parts 1 and 2), and results pulled from Apartment.com Listings (Part 2).
Specifically, the Zillow Data used includes:
the median home value ($) per square foot (ft²) for each US zip code from Apr. 1996 to Jul. 2019 (Zip_MedianValuePerSqft_AllHomes.csv); and
the median rent list price for 1-bedroom apartments for each US zip code from Sept. 2010 to Jul. 2019 (Zip_MedianRentalPrice_1Bedroom.csv).
Reading the Data into Python using Pandas
Let's start with reading in the median home $/ft² data as a DataFrame object using the Pandas Python library:
import numpy as np
import pandas as pd

df = pd.read_csv("Zip_MedianValuePerSqft_AllHomes.csv", encoding='latin-1')
df.head()
The last line of code produces the following output: | https://medium.com/analytics-vidhya/living-in-austin-texas-part-1-housing-trends-with-time-series-analysis-e131250f5c37 | ['Vincent Musgrove'] | 2019-09-11 04:39:38.762000+00:00 | ['Data Science', 'Housing', 'Time Series Analysis', 'Austin', 'Exploratory Data Analysis'] |
One thing to do before signing off for the holiday | ✅ Today’s tip: Ease your return to work by writing your to-do list now for the first day back.
The first workday after a long holiday weekend is like a regular Monday turned all the way up to 11: You’re grumpy, you’re moving slowly, and it takes all the energy you have just to shock your vacation-sleepy brain back into work mode. It’s no wonder that, as Emily Underwood explains, post-vacation burnout is a very real phenomenon.
To lessen the re-entry pain, take some of the mental burden away from your future self: “Write a detailed, not-too-ambitious to-do list for your first day or two back,” Underwood writes. That way, when the time comes, you can just follow the steps you’ve already outlined — no need to think too hard. | https://forge.medium.com/one-thing-to-do-before-signing-off-for-the-holiday-f85e95276740 | ['Cari Nazeer'] | 2020-12-18 12:02:08.217000+00:00 | ['Vacation', 'Self', 'Productivity', 'Advice', 'Work'] |
7 Historical Facts That Sound Too Fake to Believe | Joan Pujol Garcia was a Spanish spy working as a double agent for both the British and Nazis. He first contacted the British and the American intelligence services for work, but he was denied.
After the rejection, he decided to set up a fake identity so that he could approach the Nazis. Soon thereafter, Garcia was accepted to work for the Nazis, and just a little while after that, he was also hired as a double agent for the Allies.
According to the British Security Service MI5, Garcia brought the Nazis powerful information that was useless because it always purposefully arrived a little too late. However, as a result of his “hard” work for the Nazis, he was awarded the Iron Cross by the Germans. For his great service as a spy, he was also made a Member of the Most Excellent Order of the British Empire.
The Bitterest Pill to Swallow if You Want a Relationship | Photo by Kelly Sikkema on Unsplash
In Jungian Analysis, there are two unconscious constructs known as the anima and the animus.
The anima is the unconscious feminine that is in every man and the animus the unconscious masculine that is in every woman. We learn of these constructs or archetypes as we interact with the opposite sex during our lives and in our various stages of development.
This means that my concept of the anima is different to that of other men because my experiences with women are my own. Sure, there will be some overlap in certain traits but one has to face their own anima or animus. It isn’t a one size fits all type of thing.
Some have a largely positive animus or anima. Some do not.
There are a couple methods one can use to uncover this archetype. Marriage is the most common and possibly daunting option but there’s also dream work and therapy. I took up the option of reflection.
I made a list of traits that I cannot help but take note of in various girls and women that I’ve met throughout my life. Relatives, friends, romantic partners, coworkers, colleagues and even strangers were roped into this.
I won’t give the entire list but some items on the list were that they depended on me, they were surreptitious and that they seemed to see me as simultaneously forgettable but irreplaceable.
Then the image of a woman came into my mind. She had every trait I had written down and even gave me some more. She was a hilariously combative woman but there was a level of distrust. I could also see that a romantic relationship with her would expose every issue I had with myself.
So I pretty much interrogated her for about an hour.
The topic of relationships came up. I’m comfortably single at the moment but I still wanted some details about why past flings or love interests didn’t become something more.
She said, “Yeah, I love you but I wouldn’t choose you. But let’s face it. You wouldn’t choose me.”
I thought about it and I had to admit that she was right. I couldn’t accept someone who was so secretive, acted like I didn’t exist half the time and I couldn’t trust. Most women I liked had some of these traits if not all of them. If there wasn’t any romance between us I would still take note of it.
Then she pointed out my male friends who had traits that put a strain on the friendship but I was able to accept them. After realizing that level of hypocrisy, I then saw that I too am not the easiest person to get along with in certain contexts.
But more importantly, I had to admit that being in a relationship is not about trying to find the person who will not hurt you and will love you the most. It’s about accepting that whoever you get with, they will hurt you.
The question is, can you accept how they will hurt you? Not if they will hurt you, but how they will.
What pain are you willing to tolerate? Maybe some can stomach their partner being unfaithful or being unable to express how they feel or that they are bad with money or that they have a dependency on their family or that they are workaholics.
We often hear people tell us what not to tolerate in a relationship. Don’t take back a cheater; don’t tolerate anyone who raises their voice at you; don’t get into a relationship with someone who doesn’t like you as much as you like them.
Technically, that was all good advice. But it doesn’t change the fact that there is something that you won’t like that you are going to have to accept because someone will have to do the same exact thing for you if they are aiming to be with you.
This will actually help you with red flags because now that you are consciously acknowledging that someone will hurt you, when you meet someone you will be perfectly aware of what pain you’ll be getting yourself into.
You won’t be blinded by what you stand to gain which tends to happen when you are solely focused on what is pleasurable about the relationship. You will now be able to gauge whether or not to entertain this person and for how long.
When the time comes to integrate your anima/animus by learning to embrace or to even see yourself in the traits that you do not like about the archetypal feminine/masculine, it is imperative that you are honest with yourself.
It was a bit difficult to accept that something so obvious had eluded me. But because I wanted the answer and I wanted to be better, it was ultimately a no-brainer.
There’s no doubt that my already decent relationships with women (and men oddly enough) will improve because I literally feel different. | https://medium.com/the-life-manual/the-bitterest-pill-to-swallow-if-you-want-a-relationship-996d71d76537 | ['Jason Henry'] | 2020-07-29 23:36:29.987000+00:00 | ['Self', 'Relationships', 'Love', 'Psychology', 'Dating'] |
Numpy Guide for People In a Hurry | Photo by Chris Ried on Unsplash
The NumPy library is an important Python library for Data Scientists and it is one that you should be familiar with. Numpy arrays are like Python lists, but much better! It’s much easier manipulating a Numpy array than manipulating a Python list. You can use one Numpy array in place of having multiple Python lists. Numpy arrays also compute faster than lists and is extremely efficient for performing mathematical and logical operations. It’s a powerful tool to know!
This article serves as a quick cheat sheet that provides an overview of the basics of Numpy as well as useful methods. I will go over how to initialize Numpy arrays in multiple ways, access values within arrays, perform mathematical and matrix operations, and use arrays for masking as well as comparisons. I find Numpy arrays to be super helpful to use in solving Python coding puzzles.
Let’s get started on the fun.
Numpy
First and foremost, you must import Numpy with the following code.
import numpy as np
Multiple Ways to Create Numpy Arrays
Unlike a Python list, you don't typically start with an empty Numpy array and append to it as you go; arrays are usually created with their size and initial values specified up front. Below are multiple ways to initialize a Numpy array depending on your needs.
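The embedded snippets from the original post are not reproduced here, but the usual initializers look roughly like this:

import numpy as np

a = np.zeros((2, 3))          # 2x3 array filled with zeros
b = np.ones(5)                # 1D array of ones
c = np.full((2, 2), 7)        # 2x2 array filled with the value 7
d = np.arange(0, 10, 2)       # evenly spaced values: [0 2 4 6 8]
e = np.linspace(0, 1, 5)      # 5 values evenly spaced between 0 and 1
f = np.random.random((2, 2))  # 2x2 array of random values in [0, 1)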
If you have a list that you would like to convert to a Numpy array, we can easily convert it.
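For example, a minimal sketch:

import numpy as np

my_list = [1, 2, 3, 4, 5]
my_array = np.array(my_list)   # convert the Python list to a Numpy array
print(type(my_array))          # <class 'numpy.ndarray'>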
Accessing Elements In Array
We can access an individual item or a slice of data. Similar to lists, the first element is indexed at 0. For example, array1[0,0] indicates that we are accessing the first row and the first column. The first number in the tuple [0,0] indicates the index of the row and the second number indicates the index of the column.
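A small illustrative example (array1 here is just a hypothetical 2x3 array, since the original figure is not reproduced):

import numpy as np

array1 = np.array([[1, 2, 3],
                   [4, 5, 6]])

print(array1[0, 0])   # first row, first column -> 1
print(array1[1, 2])   # second row, third column -> 6
print(array1[0, :])   # slice: the entire first row -> [1 2 3]
print(array1[:, 1])   # slice: the entire second column -> [2 5]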
Broadcasting
“The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations.” — SciPy.org
One handy use of broadcasting is to compute the outer product of two arrays.
According to the documentation, “When operating on two arrays, NumPy compares their shapes element-wise. Two dimensions are compatible when
they are equal, or one of them is 1
If these conditions are not met, a ValueError: frames are not aligned exception is thrown, indicating that the arrays have incompatible shapes.”
In order to successfully get the outer product, we use reshape. This method changes the shape of the array so that we can make it compatible for Numpy operations.
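As a sketch of the idea (assuming one array of length 3 and one of length 4), reshaping the first array into a column vector makes the shapes compatible, and broadcasting then produces the outer product:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20, 30, 40])

# a.reshape(3, 1) has shape (3, 1); broadcasting against b's shape (4,) gives (3, 4)
outer = a.reshape(3, 1) * b
print(outer.shape)  # (3, 4)
print(outer)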
Mathematical and Matrix Calculations
One of the reasons I love Numpy arrays is that it’s super easy to manipulate. Concatenate, add, multiply, transpose with just one line of code!
Below are some examples of various arithmetic and multiplicative operations with the Numpy arrays. Operations not covered below can be found in the documentation here.
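A few representative operations (an illustrative sketch rather than the exact examples from the original post):

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

print(a + b)       # element-wise addition
print(a - b)       # element-wise subtraction
print(a * b)       # element-wise multiplication
print(a.dot(b))    # matrix multiplication
print(np.sqrt(a))  # element-wise square root
print(a.sum())     # sum of all elements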
Other cool features include concatenating, splitting, transposing (switching items from rows to columns and vice versa), and getting the diagonal elements.
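Here is a short sketch of these features (the arrays are just illustrative):

import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

c = np.concatenate((a, b), axis=0)  # stack by rows -> shape (4, 2)
print(np.split(c, 2))               # split back into two (2, 2) arrays
print(a.T)                          # transpose: rows become columns
print(np.diagonal(a))               # diagonal elements -> [1 4]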
Above, axis = 0 tells the computer that we want to concatenate by rows. If instead we want to concatenate by columns, we use axis = 1.
Comparisons and Masks
A useful thing we can do with Numpy arrays is to compare one array to another. A boolean matrix is returned in the comparison.
We can use this boolean matrix to our advantage. That is, we can do boolean masking. With this boolean matrix as a mask, we can use it to select the particular subset of the data that we are interested in. | https://towardsdatascience.com/numpy-guide-for-people-in-a-hurry-22232699259f | ['Julia Kho'] | 2018-12-31 19:48:56.380000+00:00 | ['Data Science', 'Python', 'Python Programming', 'Numpy', 'Programming'] |
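A minimal sketch of boolean masking:

import numpy as np

ages = np.array([22, 38, 26, 35, 4])
mask = ages > 30     # boolean array: [False  True False  True False]
print(ages[mask])    # select only the matching subset -> [38 35]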
Poppies- Chapter 28- | Fiction Series
Poppies- Chapter 28-
A Novel
Photo owned by Author
Numb, Chad sat on the couch not speaking a word, his face drawn and grisly as Mara-Joy exclaimed with fever how much Chad and she were in love.
Alan sat still, breathing hard as he watched the scene unfold before him. Jobeth was just as stunned. Her mouth dropped open when Mara-Joy announced that she and Chad planned to marry as soon as possible.
“What do you mean as soon as possible?” Alan asked through clenched teeth as he gripped the arm of his chair.
“Well, Pappy, We feel there is no reason to wait. We are in love and want to share our lives as man and wife, now,” Mara-Joy said, coming to kneel beside Alan’s lap.
“What exactly does that mean, Mara-Joy? When do you plan to spend your lives together as man and wife?” Alan asked trying to keep the sarcasm out of his voice. It was difficult for him to look into his daughter’s face.
His heart was breaking and he wanted to kill the little son-of-a-bitch sitting stunned on his couch. He wasn’t a stupid man and he knew what was going on, even if Jobeth would never believe it.
“Well, Pappy, I want to be married next week,” Mara-Joy beamed.
“No!” Jobeth stood up, flinging her hands into the air, “Alan!” she pleaded helplessly, unable to say anything else.
“Yes, Mama!” Mara-Joy announced, standing up to confront her mother. “I love him and he loves me. We need to be together.”
Chad sat like a lump of cement, not speaking a word, as the scene played out in front of him.
Alan watched the boy who had taken his daughter and shook his head, his fears confirmed. Mara-Joy would have to marry this weak, decrepit creature. She would have no choice. Her bed was made and now she would have to lie in it with this boy Alan would never respect. How could he? The boy had robbed his daughter of her innocence? Ruined her.
“Alan, are you going to just sit there, or are you going to talk some sense into your daughter?” Jobeth demanded, her mind in an uproar. Everything was falling apart and she didn’t know how to stop it.
Alan listened to his wife of fourteen years and his heart hurt for her. She was blind when it came to their daughter, her baby. The baby so like the one she had lost long ago. He understood her fixation with Mara-Joy, even her refusal to see what was going on in front of her very eyes. And he didn’t have the heart to break the pedestal Jobeth held Mara-Joy on.
“Okay, Mara-Joy,” Alan said, not looking at the two women gaping at him. “It will happen before the end of next week.”
Mara-Joy squealed and ran to Alan, throwing her arms around his neck like she did when she was a little girl.
“Thank you, Pappy! Thank you! I love you so much for understanding!” She gushed into his ear. He hugged Mara-Joy back with little strength. How could he tell this child of his heart that she had kicked him in the stomach with this news?
“Alan, you are not serious!” Jobeth barked, trembling all over. She couldn’t believe her ears. The words she was hearing couldn’t be from the man she had lived with all these years?
“I am very serious. Mara-Joy will marry next week and we will stand beside her and help her as we always do,” Alan said, not looking at the two women gaping at him. His eyes did not waver from Jobeth’s stunned gaze as Mara-Joy hung off his arm, radiant.
“Alan? You are joking?” Jobeth’s hands flew to her throat. It constricted, making her feel like she was choking. Her hands flew to her neck in an attempt to anchor her to the reality playing out in front of her.
He shook his head.
“Alan?” Jobeth begged, knowing his mind was set and nothing she could say would change it. Why he had agreed to let Mara-Joy marry was beyond her.
“I’ve had enough excitement for today,” Alan declared, releasing Mara-Joy’s grip. He didn’t look at Chad who still sat motionless on the couch. He avoided his wife’s confused expression and left the room, leaving an astonished Jobeth behind. “If you will excuse me, I need to be alone.”
Jobeth went to follow him, but Alan turned and said, “Alone.”
Taken aback, Jobeth watched Alan walk out of the room. Stunned, she didn't understand why on earth he would agree to let Mara-Joy get married. It was plain to see it was the last thing in the world he wanted.
Chad’s parent’s response to their son getting married was pure excitement. They believed in marrying young and thought it was time for Chad to settle down. He was seventeen and marriage was just the medicine needed to tame his wild oats. Mara-Joy was charming and came from a good family. In their opinion, Chad had made a good choice in picking a wife. It didn’t matter that the young couple insisted on marrying right away. All young couples were the same. There was no point in waiting when you have chosen to be together. | https://medium.com/illumination/poppies-chapter-28-e6688f8deec5 | ['Deena Thomson'] | 2020-12-18 07:41:47.475000+00:00 | ['Fiction Writing', 'Fiction', 'Fiction Series', 'Novel', 'Writing'] |
Case Study: 4 Instagram Posts that Got Me 1,000+ Followers Each | Case Study: 4 Instagram Posts that Got Me 1,000+ Followers Each
And 4 things each post have in common
Photo by Tim Gouw from Pexels
I get most of my followers through posts that reach an audience beyond my own. The usual number is in the hundreds per post, but occasionally, I manage to get everything right with the post and gain over 1,000 new followers.
There are four things to master: the photo, the caption, the commentability, and the right hashtags. Get these right, and you’ll get in front of new people. If your profile is in order, that will result in a wave of new followers every time you post.
Let’s look at the four posts — which I’ve included stats for — and the four things to master so that you can all accelerate the growth of your accounts. | https://medium.com/better-marketing/case-study-4-instagram-posts-that-got-me-1000-followers-each-21c8a2b407a9 | ['Sebastian Juhola'] | 2020-10-29 15:32:13.171000+00:00 | ['Instagram', 'Growth', 'Social Media', 'Marketing', 'Growth Hacking'] |
Code Review 101. How to do them well | The Greater Purpose
There is a famous quote by John Woods:
“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”
To understand the sentiment behind the quote, people have to stop taking coding for granted, or doing it for the sake of career demands or to get the pay.
Imagine going to a five-star restaurant and after an hour of waiting, getting a half-cooked mushroom ravioli because the chef was not interested in cooking. I’m guessing most of us would go Anthony Hopkins (Silence of the Lambs) psycho at the chef. Maybe I’m exaggerating, but that’s not the point here. I’m trying to emphasise the chef’s mindset.
This mindset of negligence arises very often among us engineers when we are not being reviewed or judged on our code. It happens when coders are not motivated to seek suggestions or coding is not a source of pride for them, but rather just assigned work. Code review is not just done for the sake of code quality. It also contributes to a greater purpose: exposure, motivation, learning, or gratitude for both reviewers and authors.
It aids in improving the professional relationships among co-workers while they are helping and improving each other, eliminating the fear that people usually have around coding. Coding is an art that is not learned just from self-exploration. No one knows everything. People learn from others, too, and that’s how they grow. Maybe you are reviewing your teammate’s code, and they have used a technique or algorithm that you can learn from.
Code reviews are nonhierarchical. Being the most senior person on the team does not imply that your code does not need a review. It doesn’t matter if the author is an intern or the CTO. It doesn’t matter if the author has three months or 30 years of experience. Coding is a delicate task, and things can slip through the cracks easily, so having another set of eyes is always helpful. Even if in the rare case, the code is flawless, the review provides an opportunity for mentorship and collaboration and minimally diversifies the understanding of code in the codebase.
In hindsight, there are other benefits, too.
Better product quality
Tiki-taka — not everyone has heard of these words, but in the world of football, it’s perhaps one of the greatest tactical revolutions and has helped Barcelona FC and Spain dominate world football in the past two decades. Tiki-taka demonstrated excellence in collaborative team play with the highest standard of football. Barcelona FC didn’t get all those wins just because they had world-class players who knew how to play good football on the team, like Messi and Villa. They won because everyone was playing together and improving each other’s play.
Software engineering is not a one-man show. Engineers have to collaborate and aim to deliver best-grade code together, and code reviews contribute to that to a great extent. Also, it sets expectations for the authors so that they keep code up to the standards if they want to contribute. Coding best practices are contagious, and code reviews and feedback are the best ways to share the wisdom with others.
Fewer bugs in code
According to research conducted by Stripe in partnership with Harris Poll:
“On average developers spend over 17 hours per week dealing with maintenance issues like debugging and refactoring, and about a quarter of that time is spent fixing bad code. That’s nearly $300B in lost productivity every year.”
Although code reviews are not really done to resolve bugs (come on.. we have tests, style guides, and CI for that) sometimes they help — for example, the reviewer might sometimes end up identifying some underlying imported function which they wrote that might act up in production.
Moreover, peer reviews have psychological effects on team members too: a SmartBear study of Cisco Systems found that spot-checking 20% to 33% of code resulted in lower defect density, with minimal time expenditure, for companies who practiced peer code reviews because it prevented people from pushing bad code to their peers.
Improvement in interpersonal skills
We all know coding is a technical skill (or a hard skill). It’s teachable and a measurable ability. We can learn it by reading someone else’s code or just copy it off the internet (most of us do that; pretty easy, right?). Now comes the difficult part, convincing someone that the code they wrote (or copied) isn’t up to the mark or is buggy. It’s arduous to convince someone to change their code for the good, and it happens very often during code reviews.
That’s when soft skills come into play. Leadership, openness to criticism, persuasion, adaptability, and effective communication skills are much more important and difficult to learn for software engineers than coding, and that’s what makes a software engineer different from a coder! Doing code reviews often hones our soft skills while we are trying to communicate and make our peers understand the intentions behind our feedback.
Detecting accidental errors/blind spots/typos
It’s pretty common for people to make typos while texting their friends in their native languages. Writing machine language code shouldn’t be an exception. After all, it’s normal for us humans to make mistakes, and having another set of eyes always helps. Studies have found that even short and informal reads have a significant impact on the mitigation of typos or blind spots.
Smooth onboarding of new team members/interns
There are a lot of onboarding sessions organised by companies, in which new employees are told about the company, how the company works, and the rules and regulations, on a much zoomed-out level. When it comes to engineers who spend more time working on laptops and code than working with people, there are not a lot of processes defined. Most of the time, they are just asked to go and read the code or the documentation.
To be honest, it’s quite frustrating. Imagine getting married to the person of your dreams, but the very next moment after you guys are married, you’re asked to read a blog or a biography of that person rather than talking to them firsthand. Discouraging, right?
Code reviews smooth the process of technical onboarding a lot. Maybe we can kick off the onboarding for the new guys by giving them a small issue that was identified while doing a code review earlier, as a part of future improvements in code. It gives them confidence, as well as opportunity, for creating an impact right away. Also, it’s pretty common for different companies to have different coding standards and styles, so code reviews are also helpful in making new members on the team understand and adapt to the team’s style of coding.
Code ownership divided among the team rather than being a single person’s responsibility
Let’s be honest, how many times has it happened that we wanted to take leave because of some super-urgent work, but because there was a release happening in the coming days, we couldn’t. Pretty common, right? It’s overwhelming sometimes to know that you are not just being asked to code, but you also have to babysit it all the time.
During a review, a reviewer is able to explain the change at a reasonable level of detail to other developers. This ensures that the details of the codebase are known to more than a single person, leading to positive interaction and strengthening of social bonds between team members, which helps in breaking the “my code, my ownership” mindset (which is very toxic!). | https://medium.com/better-programming/code-reviews-really-503e1ea62f45 | ['Aayush Devgan'] | 2020-05-17 11:25:00.055000+00:00 | ['Engineering', 'Coding', 'Code Review', 'Programming', 'Software Development'] |
HOW DATA ANALYTICS IS TRANSFORMING BUSINESS? | Data Analytics and Statistics are rapidly changing the way people used to run their business in the past. For data analytics not just helps drive performance for your business, but also gives you a competitive edge in your industry. Embracing data analytics in your business model helps create new opportunities for your business by increasing customer flow and other revenue streams. In fact, data analytics is something that gives you a chance to reinvent each and every aspect of your business altogether.
Here are some of the main industries that are using data analytics and statistics to transform their entire business model and drive growth at an enormous scale:
Retail — Data analytics has helped the retail industry to a great extent by predicting trends and demands along with optimizing costs and selling strategies for the best possible results. This is why companies and brands that have integrated a data-driven strategy for the buying and selling of their products and services have an edge over their competitors.
Healthcare — Data analytics and statistics has also made a huge impact on the health sector, by transforming the way diseases are identified and treated. This is something that has not only helped in saving more lives, but also made the treatments less expensive and more accessible to patients.
Finance — When it comes to financial services like banking and insurance, data analytics and statistics play a really crucial role in detecting fraudulent transactions and setting fairer policies in the future. In this way, data analytics and statistics are helping in the expansion of financial services by restricting any kinds of fraudulent activities that come their way.
Hospitality — Hospitality is another industry that has a lot to benefit from data analytics. More and more businesses in the hospitality industry are turning to data-driven solutions to understand customer needs to ensure the best in class service all throughout the year.
Looking at all these industries that are using a data-driven approach in their business framework, it is quite clear that data analytics is an asset to every business. Data analytics gives your business a more personalized and effective approach, targeting customers when and where they need your services. So, if you want your company to become a leading name in your field, then it is high time that you integrate a clear data analytics strategy in your overall business model.
At Quark Analytics, we are committed to help you with a fast and easy data analytics platform to get the best results for your business. We offer a secure environment to perform data analysis and get the most useful insights on your data. So what are you waiting for? Give your business a competitive edge by realizing the value of data analytics and optimizing your data to the fullest with the help of our services at Quark Analytics.
Get in touch with us to know more about our services, and we will be right there for you with the best solutions!
Originally published at www.quarkanalytics.com. | https://medium.com/quarkanalytics/how-data-analytics-is-transforming-business-ee147f165dec | ['Ricardo Lourenço'] | 2019-04-26 15:41:35.925000+00:00 | ['Data Science', 'Data Analysis', 'Transformation', 'Business', 'Future'] |
The Depero Bolted Book — a facsimile | The Depero Bolted Book - a facsimile
It's been way too long since I had something to post about, so when this arrived this week I was doubly happy. Some… | https://medium.com/paper-posts/the-depero-bolted-book-a-facsimile-640fd78f3fe5 | [] | 2017-08-14 01:39:12.017000+00:00 | ['Futurism', 'Design'] |
4 Ways To Communicate the Visibility of System Status in UI | Visibility of system status is one of Jakob Nielsen’s ten heuristics for user interface design. By communicating the current state of the system, you make users feel in control with the system, and this sense of control helps you build trust.
Here are four visual feedback methods you can use to communicate the system status:
1. Visual feedback that shows user location or progress
Where I am
No one likes to be lost, but it happens in both the real and digital worlds. Making users aware of where they are in the app is essential for creating a good navigation experience. Both apps and websites should highlight the currently selected navigation option to help users understand their current location.
Google Bottom Bar Navigation Pattern — Mobile UX Design by Aurélien Salomon ➔
How many steps required to complete this
Knowing how many steps are required to complete a certain operation will help the user estimate the time needed to complete the procedure.
Survey knowledge checking app by SELECTO
2. Visual feedback that confirms user action
It’s vital to provide immediate feedback for all interactive events. Immediate visual feedback will acknowledge that the app has received a user’s action, reinforce the sense of direct manipulation, and prevent the user from making errors (such as tapping the same button twice).
In its basic form, it’s vital to show that the system actually caught the tap/click.
Button hover and active states by Ali Ali
But in some cases, it’s also important to change the state of the button itself. In such cases, visual feedback will also communicate the results of interaction, making it both visible and understandable. Here are a few of such cases:
Clicking on the Like button.
Spread love, not viruses by Charles Patterson
Turning something ON/OFF. The change in the color of the button gives users a signal about the current state of the object.
Switcher XLIV by Oleg Frolov
Bookmarking an item.
Bookmark interaction [SVG animation] by Oleg Frolov
Adding object to cart. In this case, visual feedback will prove that the item was added to cart.
Coffee Ordering Animation (Starbucks) by Nhat M. Tran
3. Visual feedback that shows system status
Show the system is busy doing something
When it requires more than a few seconds for the system to load, it should give users immediate feedback. Depending on the wait time, it’s recommended to use either infinite loading indicators (typically, for operations that take less than 10 seconds):
Infinite Loading Loop by renatorena
Or progress bars (for operations that take 10+ seconds):
Pumping Loading Animation by Allen Zhang
These indicators communicate that the system is working and reduce the level of uncertainty.
For mobile apps, it’s also possible to use animated splash screens during the initial loading. A well-designed splash screen will create a positive impression for first-time users and switch their focus from the fact of waiting.
Logo splash screen by Gleb Kuznetsov✈
Content is loading
When it takes some time to load content, it’s recommended to use a special type of container — skeleton screen. This temporary content container is used to mitigate the wait time and should be filled with real data as soon as the data becomes available.
Skeleton Loader by Ginny Wood
This container works equally well for desktop and mobile products.
Skeleton Loading Animation by Shane Doyle
4. Triggered events
Notifications/Indicators
The purpose of effective notification is to direct user attention to the fact of a new event. It’s recommended to use subtle animations for notifications because animated effects naturally capture user attention — the human eye is hardwired to focus on moving objects.
Notifications by Aleksei Kipin
Request for user actions
There are a lot of cases when a system needs to prompt the user to act on invalid data. For example, when filling out a form, a user might create a password that isn’t complex enough, provide an invalid email address, etc. It’s always better to tell users about the problem upfront, using appropriate visual feedback.
Inline Email Validation by Derek Reynolds
More control translates to better user experience
Visual feedback might be easily overlooked in the greater design scheme, but it actually holds the entire experience together. When people interact with UIs, they expect predictability and control, and that’s exactly why UI designers should provide visual feedback.
For more information about user interface design, I recommend checking the course UI Design Patterns for Successful Software. This course contains essential information about UI design patterns as well as how to use them appropriately. | https://uxplanet.org/4-ways-to-communicate-the-visibility-of-system-status-in-ui-14ff2351c8e8 | ['Nick Babich'] | 2020-04-28 18:01:00.803000+00:00 | ['User Experience', 'UI', 'Design', 'UX', 'Product Design'] |
I Feel Like I’m Performing My Life | Because I am not sure how to be great at being a human — humaning, I listen to people who say things with certainty.
“You need a morning routine,” they say with the greatest conviction, “because without it, you are lost.”
Not wanting to be lost, I perform scripted morning routines and pretend they are helpful until I can’t be bothered anymore.
“You need to write morning pages, first thing you wake up.” So I do. “You need to do yoga.” “Take at least 10,000 steps a day.” “Eat plant-based.” “Meditate.” “Do breathing exercises. Download this app. Buy that gadget. Read more books. Watch this documentary. Do everything Tony Robbins and Gary Vaynerchuk say. Don’t listen to gurus, except for Thích Nhât Hạnh and the Buddha.”
And so I do. I do all the things the professional humans acing at life tell me to do. Every day I check boxes and leave other boxes empty. Is that what a good human makes? Checking boxes until we die? | https://medium.com/invisible-illness/i-feel-like-im-performing-my-life-46a478dc9d12 | ['Judith Valentijn'] | 2020-12-11 15:31:29.711000+00:00 | ['Self', 'Mental Health', 'Derealization', 'Depersonalization', 'Life'] |
Non-Probability Distribution | In previous blog we covered probability distribution and its types, now we proceed to Non-Probability distribution and its types.
Non-probability sampling is a sampling technique where the odds of any member being selected for a sample cannot be calculated.
Non-probability sampling is defined as a sampling technique in which the researcher selects samples based on the subjective judgment of the researcher rather than random selection.
Types of Non-Probability Sampling:
a)Convenience Sampling :
Convenience sampling which is also known as availability sampling is a specific type of non-probability sampling method. The sample is taken from a group of people easy to contact or to reach. For example, standing at a mall or a grocery store and asking people to answer questions would be an example of a convenience sample.
The relative cost and time required to carry out a convenience sample are small in comparison to probability sampling techniques. This enables you to achieve the sample size you want in a relatively fast and inexpensive way. Limitations include data bias and generating inaccurate parameters. Perhaps the biggest problem with convenience sampling is dependence. Dependent means that the sample items are all connected to each other in some way.
b)Judgement Sampling:
Judgment sampling is a common non-probability method. It is also called the purposive method. The researcher selects the sample based on judgment. This is usually an extension of convenience sampling.
Judgment sampling may be used for a variety of reasons.
In general, the goal of judgment sampling is to deliberately select units (e.g., individual people, events, objects) that are best suited to enable researchers to address their research questions. This is often done when the population of interest is very small, or desired characteristics of units are very rare, making probabilistic sampling infeasible.
c)Quota Sampling:
A sampling method of gathering representative data from a group. As opposed to random sampling, quota sampling requires that representative individuals are chosen out of a specific subgroup. For example, a researcher might ask for a sample of 50 females, or 50 individuals between the ages of 32–42.
Quota sampling is used when the company is short of time or the budget of the person who is researching on the topic is limited. Quota sampling can also be used at times when detailed accuracy is not important. To create a quota sample, knowledge about the population and the objective should be well understood.
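As a rough illustration, here is a minimal sketch of drawing a quota sample with pandas; the data, column name and quota sizes are made-up placeholders, and selection within each subgroup is deliberately non-random:
import pandas as pd
# hypothetical respondent pool with an 'age_group' attribute
df = pd.DataFrame({
    "respondent_id": range(1, 201),
    "age_group": ["32-42", "43-53"] * 100,
})
quotas = {"32-42": 50, "43-53": 50}  # desired number of respondents per subgroup
# take the first N available respondents from each subgroup (non-probability selection)
quota_sample = pd.concat(df[df["age_group"] == group].head(n) for group, n in quotas.items())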
d) Snowball Sampling :
As described in Leo Goodman’s (2011) comment, snowball sampling was developed by Coleman (1958–1959) and Goodman (1961) as a means for studying the structure of social networks.
Snowball sampling (or chain sampling, chain-referral sampling, referral sampling) is a non-probability sampling technique where existing study subjects recruit future subjects from among their acquaintances. Snowball sampling analysis is conducted once the respondents submit their feedback and opinions. It is used where potential participants are hard to find.
Advantages of Snowball Sampling
The chain referral process allows the researcher to reach populations that are difficult to sample when using other sampling methods. The process is cheap, simple and cost-efficient. This sampling technique needs little planning and a smaller workforce compared to other sampling techniques.
Disadvantages of Snowball Sampling
The researcher has little control over the sampling method.
Representativeness of the sample is not guaranteed.
Sampling bias is also a fear of researchers when using this sampling technique.
THANK YOU KEEP LEARNING :) | https://medium.com/ai-in-plain-english/non-probability-distribution-a15da752a013 | ['Megha Singhal'] | 2020-04-18 16:11:46.349000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Statistics', 'Deep Learning', 'Probability'] |
Are These Fears Your True Feelings? | Keep returning to your natural state of Joy
If your mind is fabricating these fears, then you can observe and let them unfold without needing to chase them away.
If these are your real emotions, then allow them to move through you. Let it go where it needs to go, let it move through the layers of your emotional onion if you need to. There’s no need to add more thinking to the feeling.
Either way, you will still return to your natural state of Joy!
The more you return to feeling and expressing this Joy, the easier it is for you to feel right about being joyful. Keep wondering about feeling joyful for no reason.
You don’t have to justify to anyone why you feel joyful. Allow yourself to get used to feeling joy, even if you are not perfect and that your life is filled with problems you can’t solve yet. Allow that Joy to bubble naturally from your honest, true self.
I hope this is helpful to you. | https://medium.com/bingz-healing-light/are-these-fears-your-true-feelings-b1b776be3100 | ['Bingz Huang'] | 2020-08-27 15:17:15.057000+00:00 | ['Energy', 'Life', 'Mental Health', 'Advice', 'Life Lessons'] |
Understanding Gradient Descent and Adam Optimization | How artificial intelligence has influenced our daily lives in the past decade is something we can only ponder about. From spam filtering to news clustering, computer vision applications like fingerprint sensors to natural language processing problems like handwriting and speech recognition, it is very easy to undermine how big a role AI and data science is playing in our day-to-day lives. However, with an exponential increase in amount of data our algorithms deal with, it is essential to develop algorithms which can keep pace with this rise in complexity. One such algorithm which has caused a notable change to the industry is the Adam Optimization procedure. But before we delve into it, first let us look at gradient descent and where it falls short.
In case if you aren’t aware of what a cost function is, I would recommend you to go through this blog first, which serves as a great introduction to the topic: https://medium.com/@lachlanmiller_52885/understanding-and-calculating-the-cost-function-for-linear-regression-39b8a3519fcb
Gradient Descent
Suppose we have a convex cost function of 2 input variables as shown above and our goal is to minimize its value and find the value of the parameters (x,y) for which f(x,y) is minimum. What the gradient descent algorithm does is, we start at a specific point on the curve and use the negative gradient to find the direction of steepest descent and take a small step in that direction and keep iterating till our value starts converging.
I personally find the above analogy to gradient descent very cool, a person starting from the top of a hill and climbing down by the path which enables him to decrease his altitude quickest.
The formal definition of gradient descent is given alongside; we keep performing the update as required till convergence is reached. We can check convergence easily by checking whether the difference between f(Xi+1) and f(Xi) is less than some number, say 0.0001 (the default value if you implement gradient descent using Python). If so, we say that gradient descent has converged at a local minimum of f.
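As a minimal sketch in Python of the procedure just described (the function, its gradient and the starting point are arbitrary examples; 0.0001 is the convergence tolerance mentioned above):
import numpy as np
def f(p):
    return np.sum(p ** 2)        # example convex function f(x, y) = x^2 + y^2
def grad_f(p):
    return 2 * p                 # its gradient
p = np.array([3.0, 4.0])         # starting point on the curve
learning_rate = 0.1
while True:
    p_new = p - learning_rate * grad_f(p)      # small step against the gradient
    if abs(f(p_new) - f(p)) < 0.0001:          # convergence check
        p = p_new
        break
    p = p_new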
If you cannot quite grasp the gradient concept or are interested in more in-depth knowledge of cost function and gradient descent, I strongly recommend the following video from my favorite YouTube channel 3Blue1Brown -
Where Gradient Descent Falls Short
To perform a single step of gradient descent, we need to iterate over all training examples to find out the gradient at a particular point. This is termed batch gradient descent and was done for many years, but with the advent of the era of deep learning and big data, it has become common to have a training set size of the order of millions, and this becomes computationally expensive; it may take a few minutes to perform a single step of gradient descent. So what is commonly done is something called mini-batch gradient descent, where we divide the training set into batches of small size and perform gradient descent using those batches individually. This often results in faster convergence, but there’s a major problem here — we only look at a fraction of the training set while taking a single step and hence the step may not be towards the steepest decrease of the cost function. This is because we are minimizing the cost based on a subset of the total data, which is not representative of what’s best for the entire training data. Instead of following a straight path towards the minimum, our algorithm now follows a roundabout path, not always even leading to an optimum and, most commonly, overshooting (going past the minimum).
The following figures alongside show the steps of gradient descent in the 3 different batch size cases, and changes in how the cost function minimizes. In both the figures, it is apparent that the cost function is minimizing, but it oscillates even though in general, it is decreasing. The problem is as follows, Can we somehow “smoothen” out these steps of gradient descent so that it can follow a less noisy path and converge faster? The answer, as you might already have guessed, is Adam Optimization.
Adam Optimization Algorithm
There’s a lot going on here. Let’s quickly break it down. First, let’s see the parameters involved.
α — Learning Rate for gradient descent step.
β1 — Parameter for momentum step (also known as first moment in Adam). Generally 0.9.
β2 — Parameter for RMSProp step (also known as second moment in Adam). Generally 0.99.
ϵ — Parameter for numerical stability. Generally 10^-8.
m, v — First and second moment estimates, respectively. Initial values of both set to 0.
t — The timestep parameter for bias correction steps.
g and f — Gradient and function values at θ.
Adam can essentially be broken down as a combination of 2 main algorithms— Momentum and RMSProp. The momentum step is as follows -
m = beta1 * m + (1 - beta1) * g
Suppose beta1=0.9. Then the corresponding step calculates 0.9*current moment + 0.1*current gradient. You can think of this as a weighted average over the last 10 gradient descent steps, which cancels out a lot of noise. However, initially the moment is set to 0, hence the moment at the first step = 0.9*0 + 0.1*gradient = gradient/10 and so on. The moment will fail to keep up with the original gradient, and this is known as a biased estimate. To correct this we do the following, known as bias correction, dividing by 1 - (beta1 raised to the timestep) -
m_corrected = m / (1 - np.power(beta1, t))
Note that 1 - power(beta1,t) approaches 1 as t becomes higher with each step, decreasing the correction effect later and maximizing it at the first few steps.
The graph alongside pictures this perfectly, the yellow line refers to the moment(estimate) obtained with a smaller beta1, say 0.5 while the green line refers to a beta1 value closer to 1, say 0.9
RMSProp does a similar thing, but slightly different -
v = beta2 * v + (1 - beta2) * np.square(g)
v_corrected = v / (1 - np.power(beta2, t))
It also computes a weighted average over the last 1/(1-beta2) examples approximately, which is 100 when beta2=0.99. But it computes the average of the squares of the gradient (a sort of scaled magnitude), and then the same bias correction step.
Now, in the gradient descent step instead of using the gradient we use these moments as follows -
theta = theta - learning_rate * m_corrected / (np.sqrt(v_corrected) + epsilon)
Using m_corrected ensures that our gradient moves in the direction of the general trend and does not oscillate about too much while dividing by the square root of the mean of squared magnitudes ensures that the overall magnitude of the steps is fixed and close to unit value. This also adds in adaptive gradient, which I am not going to talk about in detail, it’s just a procedure of changing the magnitude of the steps as we approach convergence. This helps prevent overshooting. Finally, epsilon is added to the denominator to avoid division by 0 in case the estimate of the gradients encountered are too small and are rounded off to 0 by the compiler. The value is deliberately chosen to be very small so as not to affect the algorithm, generally of the order of 10^-8.
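Putting the snippets above together, a bare-bones Adam loop might look like the sketch below (grad_f stands for any function that returns the gradient at theta; the hyperparameter values follow the defaults quoted earlier in this post):
import numpy as np
def adam(grad_f, theta, steps=1000, learning_rate=0.001,
         beta1=0.9, beta2=0.99, epsilon=1e-8):
    m = np.zeros_like(theta)    # first moment estimate
    v = np.zeros_like(theta)    # second moment estimate
    for t in range(1, steps + 1):
        g = grad_f(theta)
        m = beta1 * m + (1 - beta1) * g               # momentum step
        v = beta2 * v + (1 - beta2) * np.square(g)    # RMSProp step
        m_corrected = m / (1 - np.power(beta1, t))    # bias correction
        v_corrected = v / (1 - np.power(beta2, t))
        theta = theta - learning_rate * m_corrected / (np.sqrt(v_corrected) + epsilon)
    return theta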
Effect on Performance
Adam has been in widespread use in Deep Learning models since 2015. It was presented by Diederik Kingma from OpenAI and Jimmy Ba from the University of Toronto in their 2015 ICLR paper “Adam: A Method for Stochastic Optimization”. Adam, as it may sound, has not been named after someone. It is short for “Adaptive Moment Estimation”. The following figure shows its effectiveness compared to other minimizing algorithms when applied to a neural network model on the MNIST dataset.
Adam has been one of the most remarkable achievements in the grounds of optimization. Several incidents where the training of a large model required days have been reduced to hours since usage of Adam. Since it’s inception it has been made the default optimizer used in almost all deep learning libraries. I myself use Adam frequently — on a handwritten digit classification problem, I found that just by changing my optimizer from mini-batch gradient descent to Adam my training accuracy jumped from 79% to 94%, and number of iterations required reduced to about one-third, a pretty significant change considering that my training data was of size about 10,000, not even close to a million, where the effects would be even more significant! | https://towardsdatascience.com/understanding-gradient-descent-and-adam-optimization-472ae8a78c10 | ['Tamoghno Bhattacharya'] | 2020-06-12 13:30:45.205000+00:00 | ['Artificial Intelligence', 'Data Science', 'Deep Learning'] |
I Built A Jupyter Notebook That Will Analyze Cryptocurrency Portfolios For You | The amount of engagement in the crypto investment space needs no introduction. With market caps, volumes, and public awareness on the rise, I thought I’d put together a simple Jupyter notebook to get a clearer and broader viewpoint into the investment activities within my own crypto portfolio.
TL;DR here’s the code ;)
Why Should We Analyze Our Portfolios?
Because we’re definitely missing important details about our investments by only looking at the total value of our (potentially fat) wallets — even though I enjoy looking at Blockfolio from time to time. Because seeing our Ripple go to the moon and overshadow the rest of our investments is likely increasing our financial risk substantially. Because we all want our money to grow, but achieving this by picking a diverse set of cryptos is easier and safer than picking a moonshot that could end up a dud (and make us broke).
And let’s face it, the market gains are just too big for us to be left in the dark on the true characteristics of our investment portfolios.
Important Portfolio Characteristics
Now there are several characteristics of our portfolio that we should take a good look at, including return and risk. But a lot of the time we’re fixated on one and not the other.
We can look at return in several ways: the amount of money we’ve made from the beginning to the current date, the average rate of money we’ve made over specific time periods (e.g., annual returns), how much better our investments did when compared to several characteristics of a benchmark (e.g., alpha), and even the annual compound rate it would have taken to get to our current investment based on our starting point (i.e., CAGR).
As important, if not more, is how we look at risk and its effect on return. I don’t know about you, but I want to make sure I’m making a good return based on an amount of risk I feel comfortable with. If we take on a huge amount of risk to make one particular return when we could have taken much less risk to make that very same return, the path to take for a more efficient investment is clear.
This is where understanding volatility, correlations, and risk-adjusted returns come into play by computing statistics such as standard deviation of returns (or volatility), beta, the Sharpe ratio, and the Sortino ratio.
And while we can compute all the statistics under the sun to measure our portfolio’s performance, it doesn’t do much good if we don’t include a reference point to see how well we’re doing in comparison. This is called a benchmark, and we’ll be using the golden boy of cryptocurrencies: Bitcoin.
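To make these statistics concrete, here’s a rough sketch of how they can be computed with pandas; the return series below are random stand-ins for the real portfolio and Bitcoin daily returns, and the risk-free rate is simplified to zero:
import numpy as np
import pandas as pd
# stand-ins for the daily return series produced by the backtest
portfolio_returns = pd.Series(np.random.normal(0.002, 0.04, 365))
benchmark_returns = pd.Series(np.random.normal(0.001, 0.05, 365))
annual_return = portfolio_returns.mean() * 365              # crypto trades every day of the year
annual_volatility = portfolio_returns.std() * np.sqrt(365)
sharpe_ratio = annual_return / annual_volatility             # risk-free rate assumed to be zero
beta = portfolio_returns.cov(benchmark_returns) / benchmark_returns.var()
alpha = annual_return - beta * (benchmark_returns.mean() * 365)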
Notebook Walk-Through
So I don’t want to display a bunch of code here because I think you should go through the notebook yourself and get a feel for things. Don’t be afraid, the notebook includes some clear explanations and the code is commented! It’ll also help in better understanding this post. If you want, clone the repo and give it a whirl first. However, I will show you results through some statistics and nice visualizations.
To start, we need to create a tradesheet that emulates how we invested our portfolio. The one below is included in the repo. These are actually the same cryptos I invested in and the times I bought and sold them up until now, but the amount of money and the allocations (i.e., the amount I bought and sold) are not ;)
You can think of the tradesheet as our investment strategy. These are the trades we decided to take based on our wizardry powers or what an algorithm told us.
Along with the tradesheet, we also need historical market data. I chose to go with something simple: download some CSVs from CoinGecko and throw them into a data folder. Pulling data from an API would be better though!
Now we want to run a backtest on our investment strategy. Simply put, running a backtest allows us to go back in time to our first trade, walk forward in time, and simulate the trading activity that occurred in our portfolio up until today. A backtester can be very sophisticated and can be used in a lot of different scenarios (to the finance geeks: pun intended), but in our case it’s rather straightforward.
Based on the statistics above, it’s clear that our portfolio did fairly well when compared to our benchmark. The returns are better, volatility is only slightly worse, and our beta is surprisingly below 100%. And look at that alpha!
OK. Numbers are nice, but I want to see some charts.
Well that’s intimidating. The above chart shows how the USD value of our portfolio evolved over time including all of our cash flows (i.e., deposits and withdrawals). While it’s nice to visualize this, it’s hard to get a clear idea of how our portfolio did in true performance when cash flows are included. For example, if I deposited $1 million (I wish), the portfolio would appear to have a HUGE spike!
Now that’s better. By removing the daily returns when cash flows were witnessed, we have a more accurate representation of the true performance of our portfolio. Fortunately, we have a very small number of cash flows, so this method is acceptable. As you can see, it took us some time to catch up to Bitcoin, but it did and eventually surpassed it (thanks Golem and NEO).
Actually, you can see that after the crazy Bitcoin, Ethereum, and Litecoin boom (aka the Coinbase boom), our portfolio became more diversified. This surely had a lot to do with the dampening of the upcoming Bitcoin drawdowns and the likely larger returns experienced among the newly added assets.
Well there you have it. Clearly, our portfolio experienced much less volatility (i.e., risk) after diversifying. Diversification (and luck) for the win!
For me, this is the most interesting plot. This is a matrix that represents the correlations between all of the assets in our portfolio. While a lot of assets had a medium to high correlation with one another, Bitcoin Cash had a very low correlation to every single asset. You can even see that it was negatively correlated with OmiseGO! Correlations do change over time, but it’s nonetheless interesting to see these types of relationships within our portfolio.
Explanation for the benefits of having diverse, low- and negatively-correlated assets in your portfolio.
Again, go ahead and clone the repo and play around a bit so you can understand in more detail how we went about analyzing our portfolio. You can even add your own tradesheet to get a glimpse into yours. And if you find bugs, let me know!
Summing It All Up
I hope you’ve gained a better appreciation for why it’s important to look at your portfolio through various lenses. It’s hard to get a clear understanding from just visualizing asset price movements, especially with all that’s been going on lately in the crypto space. Also, it’s not always clear how much risk we’re taking on over time, and how those risks will evolve when we invest.
What is clear is that diversification in such a market is important, because none of us knows where this market is going. With that in mind, best to keep an eye on your ship while weathering the storms and HODL.
By the way, none of this should be treated as investment advice and same goes for the code. Whichever investments you pursue are purely at your own discretion.
Full disclosure: At the time of writing this article I was invested in BCH, BTC, ETH, GNT, LTC, NEO, and OMG. | https://medium.com/free-code-camp/i-built-a-jupyter-notebook-that-will-analyze-cryptocurrency-portfolios-for-you-bdaba618aeca | ['Grant Bartel'] | 2019-12-25 09:07:13.690000+00:00 | ['Python', 'Cryptocurrency', 'Investing', 'Data Scientist', 'Bitcoin'] |
Why high-performing millennials leave the ‘fast-paced’ workplace | Millennials crave stimulating work that supports their professional development. Many assume that a fast-paced environment will allow them to explore diverse aspects of work that will support their aims. But many end up disillusioned after trying out this type of workplace.
A favourite cliché in recruitment must be the term ‘fast-paced environment’. The number of job advertisements using this term is simply astonishing. Used mostly to denote a high-energy, challenging and dynamic workplace, the term is very appealing to high-achieving millennials. But many end up disillusioned after trying out this type of workplace. Two assumptions about the fast-paced workplace are the main culprits for this clash between expectation and reality.
Assumption #1: fast = smart
We live in a culture of immediacy where the assumption is that you must be really smart to deliver something fast. How quick you are becomes a convenient measure to assess how capable you are. It’s a simplistic and no-fuss tool to measure someone’s intelligence. Who has time for nuances, anyway? Fast thinkers are said to be sharp and highly intelligent. It also gives people a sense of deep comfort: if you were quick enough today, perhaps, you could go to bed reassured that you are actually smart and worthy.
Fact: smart work is slower
Tasks that can be executed very fast are, paradoxically, less intellectually stimulating. Examples range from building PowerPoints, managing databases, crunching numbers on Excel to contract drafting or doing generally administrative tasks. This is the type of work that can also be easily automated and replaced by technology. Work that demands more brain power has a steadier pace. Some examples include strategy-setting, client meetings, designing a project or building a software. In a corporate setting, junior employees will be tasked with the ‘faster’ tasks first until they prove themselves capable enough to be involved in more complex work.
Assumption #2: fast = exciting
There is a widely held belief that working in a fast-paced environment feels exhilarating and rewarding. Everything moves so quickly, there would not be a minute to get bored. Life becomes a fun roller coaster filled with interesting to-dos that will take your otherwise mundane existence through thrilling tight turns and steep slopes. What a ride! Overcome the challenges successfully and you will climb the corporate ladder in no time.
Fact: fast often means badly managed
We tend to ignore that the need to speed up often comes from bad management and lack of prioritisation. It is all too common that an e-mail sitting in a manager’s inbox for weeks suddenly precipitates a call for action to deliver a huge amount of work at short notice. It does not feel particularly exhilarating. Rather, it makes employees prone to sacrifice quality. Ultimately, it strips the pleasure out of working and creates a sense of frustration. Now, that is not exactly everyone’s definition of fun. The term ‘fast-paced’ can also hide a darker side, that of understaffed workplaces where a culture of overwork and long hours reigns.
When do things go wrong?
Millennials crave stimulating work that supports their professional development. Many assume that a fast-paced environment will allow them to explore diverse aspects of work that will support their aims. Although many are willing to do what is considered ‘grunt work’ in order to demonstrate their commitment to an organisation, this is only to the extent that it is combined with stimulating work. Things go wrong when companies hire highly capable employees and require them to do the ‘fast’ work for prolonged periods of time. The lack of balance between the steadier, more meaningful work, and the faster, more strenuous, work leaves younger employees disillusioned. The effect is that high-performers will leave organisations in search for more fulfilling work. It is an unfortunate outcome for all parties involved.
The solution is simple: wise organisations know that employees need to change gears between the ‘faster’ and ‘slower’ work — and then make this happen.
***
I am a coach and trusted advisor to driven and gifted people who feel there’s an inkling of rebellion in them. I help them create more fulfilment and reduce stress in their work and careers, on their terms. You can sign up to The Sunday Question, my weekly invitation for introspection and action, here: https://www.anisiabucur.com/sign-up/ | https://anisiabucur.medium.com/why-high-performing-millennials-leave-the-fast-paced-workplace-42ec3960d65e | ['Anisia Bucur Frsa'] | 2020-09-17 10:26:31.652000+00:00 | ['Workplace', 'Management', 'Millennials', 'Motivation', 'Human Resources'] |
Agile Marketing in the Age of the Customer | Marketing never sleeps.
At any given moment, marketers have multiple channels and campaigns open alongside the daily grind of deliverables they need to get done within a day. On top of all that, why does everyone else seem to want to sit around and discuss Schitt’s Creek?
But it isn’t just marketers who are always connected — customers are too. They’re switching from screen to screen and device to device, with ever-decreasing attention spans. We can no longer afford to take three weeks to develop an emailing strategy, approve it, test run it, and finally put it out.
Now, businesses have to run with opportunities as they present themselves.
In a world where the hare now beats the tortoise, today’s marketers are agile. They’re reaching their audiences with the right message at the right time, on the most relevant platform.
In the constantly-changing digital scenario, a single Google update can pull the carpet out from under your feet and leave your traditional marketing methodology struggling with its focus on producers and sales cycles. Agile marketing helps businesses consider customers and their buying behaviour, in addition to traditional routes.
Image by Author
Put simply, agile marketing is the use of data gathered to constantly improve your marketing campaigns throughout the process. In 2012, a group of marketers came together to create the Agile Marketing Manifesto, an agreed-upon set of values to guide marketers towards a more “agile” way of working.
It isn’t on-the-fly marketing. It means customer focus, constant change and collaboration, and continuous iteration, and focuses on these values:
Testing + data instead of assumption + opinions
Responding to change rather than just following a plan
Collaboration + transparency over hierarchy
Many smaller experiments rather than standalone bets
In agile marketing, teams apply collective efforts to complete projects under short and definitive time periods. Check out this great article to bust the common agile marketing myths.
Image Source AgileSherpa
Agile marketing as a part of business strategy
In B2B and B2C industries, successful marketing means increased customer satisfaction and sales. When marketers need to move quickly due to customer dissatisfaction, product recall or poor response on a social contest, agile marketing comes to the rescue. This is why you should use it.
Better Internal Communication
Sometimes, a business’s marketing and IT teams feel like oil and water. Adopting agile marketing improves communication not only within the marketing team, but between different departments. With regular, target-focused meetings, any challenge is immediately resolved and everyone knows what the other is doing.
Save On Cost
Companies can effectively save and get long-term results at the same time by reaching out to a larger audience through great work ethics and organisation, all without the extra cost of scrambling to find multiple alternative solutions.
Happier Employees
80.9 percent of agile marketers are satisfied with their work, as compared to 27 percent of ad hoc marketers, and 44.2 percent of traditional marketers. When your employees are better able to prioritise tasks, improve coordination and delivery, it boosts morale. Colleagues’ project visibility is another factor that helps quality of work shoot up.
Transparency
When marketers have a clear insight into the course of a project, sprint review meetings, and better feedback, this brings better results. Transparency isn’t just limited to the team, but means that a company’s marketing team can acknowledge the work of management, or work closely with customers to offer genuine services.
Life as a marketer is a series of sprints, where they need to constantly reinvent themselves and embrace new tactics that allow them to stay up-to-date with trends and what customers really want.
It’s time to sprint, because the time for walking is over.
Originally written for and published on Digital Odyssey. | https://medium.com/swlh/agile-marketing-in-the-age-of-the-customer-58aa9cdca3a4 | ['Anannya Sharma'] | 2020-11-28 19:14:41.170000+00:00 | ['Marketing', 'Social Media Marketing', 'Digital Marketing', 'Customer Experience', 'Business Strategy'] |
IJ4EU awards €130,000 to support 14 cross-border investigations | IJ4EU awards €130,000 to support 14 cross-border investigations
The latest funding under the IJ4EU Publication Support Scheme will help journalists complete ongoing collaborative projects.
The IJ4EU fund has selected 14 additional projects under its Publication Support Scheme, allocating €130,000 to cross-border investigations that are already underway and need a final boost to reach publication.
Altogether, the Publication Support Scheme has assisted 24 projects in various stages of completion this year with a total pot of €204,500. This comes in addition to €864,000 in funding allocated to 25 new cross-border projects under the IJ4EU’s Investigation Support Scheme this year.
The Publication Support Scheme grants of up to €10,000 to offer short-term support for journalists in EU member states and candidate countries to get their collaborative projects over the finish line, while the Investigation Support Scheme provides grants of up to €50,000 to launch new projects.
The latest 14 grantees were selected after the Publication Support Scheme closed on 18 September 2020. They involve freelance and staff journalists as well as news organisations from 16 countries on topics ranging from the environment and public health to organised crime.
In no particular order, the teams and their projects are:
An international team of journalists investigating Russian influence and asset-stashing in the EU — €10,000
Two freelance journalists working on corporate social responsibility in the fisheries sector — €8,000
An international team working on a data-driven investigation on COVID-19 and building a database with sociodemographic variables in Europe — €10,000
Michele Catanzaro (Spain) and Astrid Viciano (Germany) are working on a cross-border investigation into the pharmaceutical sector — €8,670
An international collaboration investigating the recycling industry and new garbage routes in the EU — €9,600
An international collaboration between two organisations investigating the environmental impact of local industrial accidents -€10,000
A team of freelancers examining conflicts of interest in science and research — €10,000
Two freelance journalists investigating the impact of the coronavirus pandemic on real estate ownership in Europe — €9,621
An international team coordinated by AlgorithmWatch investigating the impact coordinated groups on social media have on the European public — €4,250
An international collaboration by Slidstvo.info, Bird.bg and a freelance investigative reporter from Romania working on an investigation into organised crime in Eastern Europe — €10,000
An international team of freelance journalists investigating new offshore energy projects in the Black Sea — €10,000
A collaboration investigating political influence in the energy sector in Central and Eastern Europe — €10,000
A group of freelancers investigating organised crime and corruption in Eastern Europe — €9,970
An international team led by Gergely Nyilas (Telex, Hungary) including Dmytro Tuzhanskyi (Varosh, Ukraine) and Markus Müller-Schinwald (ORF, Austria) is working on an environmental investigation in Central Europe — €8,978.
Read more here. | https://medium.com/we-are-the-european-journalism-centre/ij4eu-awards-130-000-to-bolster-14-cross-border-investigations-99efa0e38e1c | ['Zlatina Siderova'] | 2020-11-12 13:00:50.995000+00:00 | ['Journalism', 'Investigative Journalism', 'Journalists', 'Media', 'Updates'] |
A new model of the design process | In my previous essay, I started thinking about a more holistic view on design. I thought about how design is not just beauty, not just problem solving, but also problem finding, questioning. How design is all of that. The head, the heart and the hands. Today, I wanted to take this train of thought one step further. Today, I wanted to think about how to connect these three functions of design and how design can help bridge the gaps between strategy and operations in most organizations. All in the name of uncovering the way design can bring the most value and have the highest impact.
Three levels
Design, as most things in business, happens on three levels: strategic, tactical and operational. That these three levels seem hierarchical comes from the dominant ideas about how to organize work from the Scientific Management movement. In that school of thought, strategic work is done by managers that have a higher hierarchical position. That only ever worked in factories and other environments that have a clear, linear business process that benefits from dumb operational workers that don’t have to think. In complex environments, this doesn’t work. Managers that are supposed to define strategy don’t have the required overview and necessary operational information to make strategies that work. One of the main problems that arises from defining a strategy anyway is that the strategic, tactical and operational levels are disconnected. The reality of most companies is that people do their work regardless of the strategy. Okay, this might sound a little harsh, but there are clear gaps between these three organizational levels in most organizations.
Three levels of work in organizations
Six areas
To bridge these gaps, we need to let go of the mental model that the levels are hierarchical. Let’s put them all on the same level. Strategy is just different work, not more important or better than other work. And strategy is too important to be left to strategists. We also need to create in-between areas: strategic tactics between strategy and tactics, tactical operations between tactics and operations, and operational strategy between operations and strategy. Between the three levels interesting things happen.
Six areas of design
If we look at it like this, we get to six areas instead of three levels:
1. Strategy: question.
This is where we go out and find the right questions. What is the real/core problem? Why is that a problem? Why is it not working right now? What are the assumptions we have around the problem? What is the question we need to answer in this project?
2. Strategic tactics: problem framing.
In between coming up with solutions (tactics) and finding the question (strategy), there is work to be done to translate the main questions into a description of the problem that opens up the minds of people to come up with solutions that work. The way you frame a problem determines what kind of solutions will be found.
3. Tactics: functional solutions.
This is where the problems are solved by coming up with solutions. What solutions are we going to develop to answer the questions, to solve the problems? What solution fits best to the questions in the project?
4. Tactical operations: engagement.
Between coming up with functional solutions and determining how exactly they will look, there is work to be done to imagine how these functional solutions can be designed so they engage people. This is not just about the functionality and the looks but about the interaction patterns, the flow, the user journey, the way we can make the solution work for the users and the business.
5. Operations: love.
Beauty has a big role to play in how successful a solution will be. People are visual creatures and react to solutions not only with their heads but also with their hearts. Beauty makes people enthusiastic, it opens their hearts.
6. Operational strategy: purpose.
The whole process is a circle. So between operational beauty and strategic questions is work to be done to make sure that the solutions align with the questions that were uncovered. The final solution is input to uncovering new and more fundamental questions. The solution gets purpose and meaning if the solution is lovely but also solves the fundamental question.
Double loop learning
All these areas feed back into each other. Or they should. Creating solutions in a complex environment requires constant learning. There are two types of learning. The first is single loop learning. You do something, there is a certain outcome, then you change the thing you do so the result is different. You learn by comparing the result to the action. Basic trial and error. This can be an effective and simple method to learn things. The downside is that you might end up trying lots of actions to get to the result you want. There is also another way of learning and that is called double loop learning. In double loop learning, you not only reflect back on the action based on the result but you also revisit your assumptions and mental model of the situation. Especially in complex situations, this can be a more efficient way to learn.
Double loop learning
If we look at the six areas in the design process outlined above, we see that it is best to let the different areas loop back into each other. Each area works on its own assumptions that need to be revisited based on the results in the next phases to get to the best results. (You see the same circular, double loop learning in Agile ways of working.)
If we arrange the six areas of design work in a process and apply the double loop learning, we arrive at the following diagram:
A new design process model
I did not draw all the double loop lines. I drew the line back to the questions that are the foundation, the place where the most fundamental assumptions lie. But all phases can loop back into each other. More often than not, you discover that the functional solution doesn’t work the way you thought it up when you design an engaging interaction for it. Insights from the next engagement creation loop back to the solutions design phase. This goes for each phase. The most impact comes from the phases looping back to the fundamental assumptions in the question phase. It could happen that you find that your assumptions and questions have been wrong once the whole solution is live and completely built. That is the most expensive way to find this out. That is why a Lean Startup approach for testing hypotheses, questions, assumptions is advisable in situations with a lot of uncertainty.
SAM: Strategic Asset Manager | This blog is written and maintained by students in the Professional Master’s Program in the School of Computing Science at Simon Fraser University as part of their course credit. To learn more about this unique program, please visit {sfu.ca/computing/pmp}.
1. Motivation & Related Work
Financial markets investment decisions are more than just crunching numbers. It is tough for the majority of us without any formal training to gain the necessary information to make investment decisions. An uninformed investor has various questions on where to put money and how much to risk. Hence, an intelligent system is required that can make use of the hypothesis that stock market prices are a function of information, rational expectations and the newly revealed information through news and financial reports about a company’s prospects. Therefore, we have an opportunity to leverage the power of machines to build intelligence harnessed from a symbiosis of numerical and textual features.
Stock prediction is a famous problem, but it isn’t solved yet; otherwise the richest person in the world would not be Mr. Bezos. Though there is a lot of work focusing on building models that can predict prices for the next day, there are shortcomings in the work on forecasting prices for a longer time period. Being able to forecast future values, and using these to further forecast values ahead, is the strategy we adopted to help investors with their financial decisions. This strategy overcomes the low significance of a next-day prediction for an investor who needs information over a longer time frame to make investment decisions. SAM guides investment strategy by analysing the trends of the market and helping you decide BUY and SELL strategies to maximize profits.
2. Problem Statement
We aim to build a system that can evaluate an investment decision taking into account the stock’s historical performance, global news sentiment and company’s Edgar reports. While doing so, we have a few hypothesis that we aim to confirm. The questions we try to answer are:
Q. How can machine learning suggest investment decisions?
Q. How do changes in a company’s annual reports reflect a change in the company itself?
Q. How do uncertainty, sentiments and emotions help in analysis and prediction?
Q. Do global news and economic indicators play a role?
2.1 Challenges
Processing Edgar reports poses a huge challenge due to the size of the files and the variability in syntax among companies’ reports. [1] helped us gain a formal understanding of these files and how to download & process them. It was tricky to merge our analytics and ML work with an AWS-backed chatbot into a single application to provide a fluid user experience in making stronger investment decisions. There can also be many short term factors that influence a company’s immediate stock price which are not easy for the model to capture accurately. In addition to this, feature performance varies for each stock and there cannot be a single solution to forecast stock prices for all the companies. Attention and effort are required to hyper-tune prediction models for each company to capture insights and make accurate predictions.
3. Data Science Pipeline
Figure 1: Data Science Pipeline
To understand the pipeline defined above, we can break it into 4 components:
1. NLP on Edgar:
Using the quarterly IDX files, we were able to generate a master file for the companies of our interest. This file had the location of Edgar reports filed by the companies, which were programmatically downloaded from Edgar servers. These files were pre-processed to eliminate HTML formatting instructions, among other artifacts, leading to a reduction in their size by up to 50%.
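A minimal sketch of that cleanup step, assuming the raw filing has already been downloaded as one string (real 10-K files also contain embedded exhibits and encoded attachments that need extra handling):
import re
def clean_filing(raw_text):
    # drop the noisiest markup blocks, then strip remaining tags and collapse whitespace
    text = re.sub(r"(?is)<(script|style|table).*?</\1>", " ", raw_text)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()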
Uncertainty reflects a company’s imperfect or unknown market factors whereas sentiment would involve its positive and negative outlook. Both the features were generated to calculate a polarity score and uncertainty score used as features in the model.
Particular sections of the document had to be extracted to run other checks to test our hypothesis. Legal Proceeding section had to be extracted to perform text similarity and find if the changes in this section over the years reflect a change in the company itself. Similarly, Management’s Discussion and Analysis section was extracted to analyse emotions of the management’s outlook.
2. Data and Machine Learning
The stock price data extracted from the Yahoo Finance API consisted of open, close, high and low features. All four were averaged to calculate the mean price for the day. All the NLP features were combined with this price to prepare time series data. After evaluation, an LSTM model was chosen to make the predictions because it resulted in a lower RMSE compared to XGBoost. The model was hyper-tuned for parameters such as lookback days, batch size, optimizer etc. to get a better accuracy of predictions. Features such as economic indicators had to be dropped from the model to get a better output.
3. News and Wikipedia scraper
Global news was accessed using Google Cloud Platform and mined using BigQuery. Sentiment processing was done on 100 daily articles for data over 5 years to generate the global news sentiment feature. Introductory paragraphs and logos were scraped from wikipedia for S&P 100 companies to allow for a comparison tab in our dashboard that can help us contrast the average stock price and top stock holders for the companies.
4. Chatbot
Figure 2: Chatbot Architecture Design
We designed a chatbot using AWS services to help the user gain more information from a company’s Edgar report. We used AWS Comprehend which is a natural language processing service to find insights and relationships in text using machine learning. AWS Lex was used for building conversational interfaces into the application. BERT was hosted on AWS EC2 and files were stored on AWS S3 which were used to answer user questions on Edgar reports. AWS Lambda was used to run code without provisioning or managing servers and acted as the central co-ordinator between all the components to work with Lex and deliver the output. The UI was provided by Kommunicate IO and the javascript was embedded into the application.
4. Methodology
4.1 Data Collection
Stock Data: The Yahoo Finance API was used to extract stock data for each company for the years 2015–2019. It was then stored in a PostgreSQL database and merged with company information scraped from Wikipedia. Data for the top 30 mutual funds was also accessed through the API and stored in the database.
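For example, one way to pull this data (a sketch using the yfinance package as one possible gateway to Yahoo Finance; the ticker is a placeholder) and compute the daily mean price used later as the prediction target:
import yfinance as yf
prices = yf.download("AAPL", start="2015-01-01", end="2019-12-31")
prices["mean_price"] = prices[["Open", "High", "Low", "Close"]].mean(axis=1)  # average of the four price features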
Edgar Reports: The 10-K files from 2014–19 were accessed from Edgar servers. A total of 483 10-K reports were processed and analysed.
Economic Indicators: Data for leading indicators such as BCI(Business Confidence Index), CCI(Consumer Confidence Index) and CLI(Composite Leading Indicator) were downloaded from OECD web portal.
4.2 Text Similarity
We used regular expressions to extract sections of interest from Edgar reports. For finding a change in company’s legal proceedings, a cumulative text similarity was applied over the years using Jaccard Similarity, Cosine Similarity and fasttext’s pre-trained model accessed using Gensim.
Jaccard Similarity has an inherent flaw: as the size of the document increases, the number of common words tends to increase even if the documents talk about different topics. Cosine Similarity, on the other hand, calculates similarity by measuring the cosine of the angle between two vectors. This approach is advantageous because even if two similar documents are far apart by Euclidean distance (due to the size of the documents), chances are they may still be oriented closer together. The smaller the angle, the higher the cosine similarity.
During our evaluation, we found the feature cosine similarity to give more accurate results compared to the similarity obtained by using a pre-trained fasttext model.
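As an illustration, one common way to compute that score is with TF-IDF vectors (a sketch; the two strings stand in for the extracted Legal Proceedings sections of consecutive filings):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
section_2018 = "The company is a defendant in several patent disputes pending in federal court."
section_2019 = "The company settled the patent disputes and is now facing a new class action."
vectors = TfidfVectorizer().fit_transform([section_2018, section_2019])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]   # 1.0 would mean identical wording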
4.3 Sentiment analysis
It is the interpretation and classification of emotions (positive, negative and neutral) within text data using text analysis techniques. Sentiment analysis allows businesses to identify customer sentiment toward products, brands or services. We applied sentiment analysis on 10-K filings to calculate the report’s polarity for the specific company and year.
4.4 Emotion Analysis
We extracted the Management’s Discussion and Analysis section to analyse the management’s outlook. To do so, the Sentiment and Emotion Lexicons developed by the National Research Council of Canada were used to give us word associations with certain categories of interest such as joy, trust, fear etc. This can help us evaluate if the management is happy with, angry at, or fearful of the market positioning and their targets.
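A simplified sketch of that idea (the tiny lexicon below is a hand-made stand-in for the NRC word-emotion associations, which in reality map thousands of words to eight emotions):
from collections import Counter
nrc_lexicon = {"growth": ["joy", "trust"], "litigation": ["fear"], "decline": ["sadness", "fear"]}
def emotion_profile(text):
    counts = Counter()
    for word in text.lower().split():
        for emotion in nrc_lexicon.get(word, []):
            counts[emotion] += 1
    return counts
print(emotion_profile("revenue growth offset the litigation risk despite a decline in margins"))
# Counter({'fear': 2, 'joy': 1, 'trust': 1, 'sadness': 1})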
4.5 Machine Learning & Forecasting
All the features from the Edgar reports, sentiment analysis on news data, the mean price of financial instruments and economic indicators were combined to create data for time series analysis. The data was split into training and testing sets. Before converting it into time series data, the features were normalized using Sklearn’s StandardScaler.
LSTMs are very powerful in sequence prediction problems because they’re able to store past information. This is important in our case because the previous price of a stock is crucial in predicting its future price. We modeled a neural network using Keras with two LSTM layers, two dropout layers and a rmsprop optimizer with a dense layer for the output.
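A sketch of that architecture (layer sizes and dropout rates are illustrative choices, n_features is the number of combined price and NLP features, and lookback is the window length discussed in the next paragraph):
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
lookback, n_features = 30, 6        # illustrative values
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(lookback, n_features)))
model.add(Dropout(0.2))
model.add(LSTM(50))
model.add(Dropout(0.2))
model.add(Dense(1))                 # predicted mean price for the next time step
model.compile(optimizer="rmsprop", loss="mean_squared_error")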
Generally a lookback value ranging between 20–30 days was used depending on the model’s evaluation for a particular company. The approach is to generate a prediction for one future time step using the 30 past values, adding the new prediction to the array and removing the first entry from the same array to predict the next time step with an updated sequence of 30 steps. Predictions are made for 90 days window to evaluate the returns from a financial instrument.
Figure 3: Rolling Window
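A condensed sketch of the model and the rolling one-step-ahead loop described above follows. The layer sizes, epoch count and random stand-in arrays are illustrative only, not the tuned values used for each company, and the price is assumed to be the first feature column.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

lookback, n_features = 30, 6

# stand-ins for the real scaled feature matrices built from Edgar, news and price features
X_train, y_train = np.random.rand(500, lookback, n_features), np.random.rand(500)
X_test, y_test = np.random.rand(90, lookback, n_features), np.random.rand(90)

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(lookback, n_features)),
    Dropout(0.2),
    LSTM(32),
    Dropout(0.2),
    Dense(1)  # next-step (scaled) price
])
model.compile(optimizer='rmsprop', loss='mean_squared_error')
model.fit(X_train, y_train, epochs=5, batch_size=32, validation_data=(X_test, y_test))

# rolling 90-day forecast: predict one step, append it to the window, drop the oldest step
window = X_test[-1].copy()  # last observed sequence of `lookback` steps
forecast = []
for _ in range(90):
    next_value = model.predict(window[np.newaxis, :, :])[0, 0]
    forecast.append(next_value)
    next_step = window[-1].copy()
    next_step[0] = next_value  # assumes the price is feature column 0
    window = np.vstack([window[1:], next_step])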
4.6 Chatbot
BERT (Bidirectional Encoder Representations from Transformers) can be used to perform a wide variety of NLP tasks, including question answering. Here we used a pre-trained model (BERT-Large), fine-tuned on the SQuAD v1.1 dataset, to answer specific questions from a company's Edgar reports. BERT is deployed on an EC2 instance and interacts with an AWS Lambda function to provide the answer to the chatbot via AWS Lex.
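To illustrate the question-answering step, the sketch below uses the Hugging Face transformers pipeline with a BERT-Large model fine-tuned on SQuAD as a stand-in for the model hosted on EC2 (not the project's actual deployment), together with a made-up context passage.

from transformers import pipeline

# BERT-Large fine-tuned on SQuAD, analogous to the model served from EC2
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "The Company is involved in various legal proceedings arising in the "
    "ordinary course of business, including patent disputes and consumer class actions."
)
result = qa(question="What legal proceedings is the company involved in?", context=context)
print(result["answer"], result["score"])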
Entity recognition is the process of identifying particular elements from text such as names, places, quantities, percentages and times/dates. Identifying the general content types can be useful to analyse the data in Edgar reports to compare them over the years and find changes.
Key phrase extraction returns strings containing noun phrases that describe particular things; a key phrase generally consists of a noun and the modifiers that distinguish it. Each key phrase includes a score that indicates AWS Comprehend's level of confidence that the string is a noun phrase. The scores provided by Comprehend are then used to determine whether a detection has high enough confidence, and the top 10 values are returned.
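The key phrase step maps to a single Comprehend request via boto3. A trimmed-down sketch of returning the ten highest-confidence phrases might look like the following; the region and score threshold are assumptions, and Comprehend caps the size of a synchronous request, so long report sections would need to be split up before this call.

import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')

def top_key_phrases(text, limit=10, min_score=0.8):
    # one synchronous Comprehend call per (already chunked) piece of report text
    response = comprehend.detect_key_phrases(Text=text, LanguageCode='en')
    phrases = [p for p in response['KeyPhrases'] if p['Score'] >= min_score]
    phrases.sort(key=lambda p: p['Score'], reverse=True)
    return [(p['Text'], round(p['Score'], 3)) for p in phrases[:limit]]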
Figure 4: Features of chatbot
Figure 5: Sentiment extraction
5. Evaluation
The features from the Edgar reports and the features from sentiment analysis of news data do have an impact on stock prediction. We compared our LSTM model against predictions obtained using the mean price as the only feature. RMSE and the 90-day prediction were used as factors to evaluate the model, and the hyper-parameters were tuned accordingly to improve prediction accuracy.
Figure 6: Prediction over 90 days
Figure 7: Trend Capture for Segments
The model is able to follow the trend of the stock prices, giving us an indication of whether the market will rise or fall in the future so that BUY and SELL strategies can be made. A single new prediction is made using the past 30 days, and a rolling window is applied to get each subsequent prediction, each time using the previous 30 days as input.
6. Data Product
Below is a demo of our user interface where we can run analytics and interact with SAM.
Video 1: Frontend Demo
7. Lessons Learnt & Future Work
This project allowed us to explore the background of financial markets and experiment with factors that may influence price forecasting. We were able to apply NLP techniques to work with text similarity metrics, sentiment analysis and emotion extraction. The machine learning cycle is an iterative process of experimenting with a variety of features and parameters, which helped us build a stronger knowledge base. Using AWS services, we were able to build an intelligent automated system to parse Edgar files and mine relevant information for analysis. We also gained exposure to Dash, which makes it easy to build integrated applications with Python.
In addition, we conclude that more work can be done in this field to generate better features. A company's 10-Q filings could be equally important for more accurate predictions. Company-specific news is also likely to play a larger role in influencing the stock price, and we would look at ways to gather this information for better accuracy.
8. Summary
Our machine learning approach uses NLP features generated from Edgar reports, global news sentiment and historical price data to forecast future values. An LSTM model was used in conjunction with a rolling window approach to forecast 90 days of values. Based on the returns, BUY and SELL strategies are then offered to investors. SAM provides an easy-to-use interface for making investment decisions. It allows us to analyse a company's historical performance as well as compare its uncertainty and emotion results. Live execution of AWS services makes it possible to mine NLP features as well as answer user questions on Edgar reports using a pre-trained BERT model. We have gained a great deal of knowledge from this project, with future scope for building new features and further tuning the models. The problem of stock prediction is far from solved; more features can still be analysed to give stronger results and capture short-term volatility to secure investments.
References
1. Ashraf, Rasha, Scraping EDGAR With Python (June 1, 2017). Journal of Education for Business, 2017, 92:4, 179–185. Available at SSRN: https://ssrn.com/abstract=3230156
2. Time Series Forecasting: A Deep Dive
3. AWS Documentation: https://docs.aws.amazon.com/ | https://medium.com/sfu-cspmp/sam-strategic-asset-manager-dd2d680a52dc | ['Anuj Saboo'] | 2020-04-20 07:33:30.344000+00:00 | ['Data Science', 'Chatbots', 'Stock Market', 'Big Data'] |
« You know nothing Andreas… » - Chronicle of a 27-year-old developer | Experience feedback
The Chronicles of a 27-Year-Old Developer in the Software Engineering World
What I’ve learned (so far) from working at big companies, and some principles that inspired me
“You know nothing Andreas”
I used to be someone who takes shortcuts and learns on the go, reading the documentation only when I need it. Most things in my life work that way: act first, then step back and consolidate.
Did I take the wrong path?
You know what: by doing so, it has worked pretty damn well so far 😆 Until I understood one thing by dint of building applications:
Software complexity naturally increases and we cannot avoid it: we call this software entropy.
The thing is that this curve is not linear as we might think: it’s exponential.
What this means is that you need to take the future of your program into consideration (even if it is abstract). Is it a one-shot program? Is it expected to grow a lot? If you don't think about that, you'll probably end up in the situation described by the following quote.
« Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. » — Alan Kay
Many developers think they master programming because they know how to use some frameworks. Many managers assume they can just add human resources to some projects and the productivity will increase.
But I guess, all of this is a bit more complicated.
My way on the developer career in a nutshell
I have been passionate about computer science since childhood. I fell in love with automation and algorithms. It even helps me represent and understand many concepts of life that are not directly related to IT.
I kept my studies short because I didn't really trust schools, or even private ones with the hidden business around them. I learned almost everything in the field or with experienced peers.
The school illusion
At school, I learned what I might consider today « a good way of building software ». Things like Extreme Programming, design patterns…
But there was an issue: I was not ready to assimilate and fully understand that knowledge at this specific moment.
I was also young and dumb. I didn't have the same cognitive capabilities and abilities to focus, assimilate, and put into practice what I was learning.
I wrongly thought that chatting with my classmates was more fun and relevant at the time than really trying to understand what I was learning and why it would be awesome to apply it throughout my life.
During school, you lack real feedback on your work, everything is pretty abstract. School is good, but it’s not enough.
The sad truth about Digital/Software Service Companies
After school, I joined a company and the labor market.
I quickly started specializing in a specific framework because everybody was doing so… without really asking myself why.
I learned a lot from experienced developers, that was great.
But the truth was that I was just producing code like a worker in a factory chain. Some things felt so irrelevant at some point that I started thinking about how to bring change. However, the inertia in such a system is just overwhelming and I couldn’t make things change even if I wanted to. I was not even sure if what needed to change (according to me) was really the issue or if I was wrong.
One thing that I'm sure of is that in the big agencies, while it's never clearly stated and acknowledged, code quality is sacrificed to the deadline, the market competition, and the budget.
Most of the time, miscommunication and misalignment within teams or with the business cause the majority of project failures or difficult situations.
That's the truth about a great number of big companies: they are soul harvesters that squeeze people like lemons until all the juice is gone, and many developers describe the result as burnout.
Suffering from the Dunning-Kruger effect
Even so, after practicing a lot at work and in my free time, I was improving my skills and I thought I was a good developer, even a really good one.
I was using the latest available technologies like GraphQL, TypeScript (JavaScript's superset), and modular architecture.
The truth is that I was still doing shit code with big transaction scripts where instructions are just chained one after another with no real dependency management. My code was tightly coupled to one or two technologies or frameworks and I had no way to change that layer.
What happened to me is perfectly described by a cognitive bias named « Dunning-Kruger Effect »
In the field of psychology, the Dunning–Kruger effect is a cognitive bias in which people with low ability at a task overestimate their ability. It is related to the cognitive bias of illusory superiority and comes from the inability of people to recognize their lack of ability. Without the self-awareness of metacognition, people cannot objectively evaluate their competence or incompetence. — Wikipedia
TL;DR: a beginner usually does not know enough about a field, which leaves him or her totally biased when assessing his or her own level of skill.
The truth was that I was still not able to deliver proper software, even with cutting-edge tools.
Most of the time I delivered, but you know: with every delivery, I wasn't really confident.
Back to basics
The more you learn things, the more you understand that things have to be simple.
With no cutting-edge tools bringing solutions to my problems, I started thinking about « first principles », which is a way to reason about the most basic building blocks of something.
Then using that method, I built an inverted dependencies tree around that problem.
Elon Musk in the following video explains how he used first principles to build cheaper batteries.
Then I started reasoning: in the IT sphere, the most basic things in my opinion are:
Making a product that fills a business need
Being able to maintain and adapt this software to that business need
Until that moment, I was thinking about how the product could fit my technology requirements (and hype), but in fact it should have been the opposite: what are the tools and the code required to fill my business need?
After searching around that, I found back my old lessons at school and also much content about something called Domain-Driven Design or Clean Architecture.
Digging deeper, I rediscovered software architecture and new ways to build my applications. These ways of thinking teach how beneficial it can be to separate the core value of my software from the implementation details (like the framework). When we can define clear boundaries between domains and build everything from the business need, our application becomes highly modular and aligned with future requests for change.
Today I'm here, and after more than 10 years of development, I'm still learning a lot of things and I probably always will be.
5 Years of React Native: Things I Wish I Had Known When I Started | 5 Years of React Native: Things I Wish I Had Known When I Started
TL;DR: React Native is great and I intend to keep using it. However, terms and conditions apply
When I started with React Native back in 2016, I was quite skeptical. Frameworks such as PhoneGap and Cordova had already existed for a while, yet nobody seemed to take hybrid development seriously. Everything felt like a workaround and native expertise was constantly required to do anything beyond the basics.
It took a few projects — some successful and some less so — to fully take in the advantages, caveats, and pitfalls of React Native. In this article, I will summarise these experiences and how they reflect on hybrid app development in general.
To make it more objective, I will use these system quality attributes as guiding principles: | https://medium.com/better-programming/5-years-of-react-native-things-i-wish-i-knew-when-i-started-a3205490e72c | ['Stanislav Sopov'] | 2020-12-04 17:04:09.487000+00:00 | ['React Native', 'Android', 'Programming', 'iOS', 'React'] |
“STQ product”: Big Platform Updates | Dear supporters!
We are sure you will be excited after reading this news to the end. We are continuously developing our marketplace, adding new functions and improving those already implemented, so here we go — meet the big platform updates! You can check most of them right now by exploring storiqa.com, but we still decided to prepare this post to tell you about all the existing updates at once!
Payment System
Crypto and fiat payments
We have finished implementing our own payment system: you can now sell and buy unique goods with both crypto and fiat currency. Sellers select STQ, ETH, BTC or EUR as the payment currency for their goods. This became possible thanks to an API for processing crypto payments to sellers (for orders paid with cryptocurrencies) and integration with the Stripe payment system.
Filtering System and Pricing
Buyers can filter goods by payment currency on the search page. All prices are shown in the selected currency; if it's a cryptocurrency, you will also see the equivalent cost in fiat.
Crypto payment page
For crypto payments, the payment page now has an "amountCaptured" field, which lets users see exactly how much currency Storiqa has received and make a payment in several transactions.
Instant conversion
For crypto goods, you choose the currency that is most convenient for you, no matter what cryptocurrency the seller sells in. To fully automate this, we have further developed and tested the billing for crypto payments. The platform will automatically convert the currency to the one you have chosen, at the current exchange rate.
The same goes for fiat payments: if you buy goods with fiat, you can pay in any currency (using a bank card), and the system will convert the amount to EUR automatically.
Note that crypto-to-fiat and fiat-to-crypto conversions are not supported at the current stage, so you can't buy goods with fiat if the seller sells in crypto (and vice versa).
Finances
In the shop settings, there is a new "Finances" tab that allows sellers to add bank cards (sellers' fees will be charged to the indicated card in the future) and specify bank account details (for receiving funds after successful sales).
Improving management
We have improved order management both for buyers and sellers. Also, we have implemented the possibility to change the category for your goods in shop settings.
New Starting Page
We have changed the main page design. Now storiqa.com is more informative and stylish and completely shows our mission.
Cart sections
If you add fiat and crypto goods to the cart, you will see that it has two different sections for fiat and crypto goods to avoid possible mix-ups.
Similar goods
We have added a new section on the product page called "Similar goods". To create this feature, we developed extra requests for the integration with Rocket Retail. The "Similar goods" section shows products from the same categories with approximately the same characteristics that might also suit you.
We are happy to know that you support us and follow our updates and we promise that we will keep up the good work and come back with new features. Stay tuned! | https://medium.com/storiqa/stq-product-big-platform-updates-699922950d9c | [] | 2019-02-22 08:00:29.321000+00:00 | ['Development', 'Cryptocurrency', 'Storiqa', 'Stqmarket', 'Updates'] |
If You Do These 7 Things, You’ll Be Able To Achieve Any Goal You Set | Mindset
The most important part.
Your mindset ensures you have the skills to think correctly.
That includes attitude, resilience and capacity to be the person you need to be to achieve the goal you set your mind to.
You need to be able to command your ego and recognize the difference between your ego self and your observer self.
This includes but isn’t limited to believing in yourself, having confidence, managing fears we all have — and an attitude that knows how to win.
It also requires believing you’re entitled to getting what you want, which is one of the biggest blockers of goal achievement.
Skill Set
Critical to have the skills to achieve the goal.
You can’t become a master writer if you don’t have the capacity to write well (practice every day) or don’t have anything interesting to write about (either first hand experiences or masterful creativity to connect dots that no one else has connected before).
Invest in your skill development and learning — the more the better.
The more you learn and experience, the more you master your skill.
A skill should be targeted toward a particular niche (toolset) and an apprenticeship is one of the most effective way to learn and develop your skill-set.
To further hone those skills, you need to test your work in situations that matter. Ie. the market.
How often is you’re work being ‘battle tested’ or shipped? What’s the market saying?
You need a lot of that feedback. I’m currently launching a course on getting people to treat you how you want to be treated.
I’ve relentlessly written on the topic, faced the problem myself exhaustively, solved and overcame it for myself with coaching and the exact prescriptions I prescribe.
Most importantly, have validated the market watching my articles go viral (strong signal) and email sign ups.
I know I’m the perfect person to dominate this niche.
In a nutshell, skill-set requires a lot of practice and experience — and it needs to get seen by the field and market itself.
Tool Set
The right tool set often means the right technology, capital or other help to deeply master your craft, understand the people you’re working with or selling to and reaching them (the market) in the best way you can.
Depending on how you look at it, tool set could be the least important though training is the most important.
In the context of personal goal setting to eventually acquire the toolset — those things requires creating the conditions and environment for what I call absolute resourcefulness.
Absolute resourcefulness ensures you find everything you need at the right time to accomplish the goal.
Absolute resourcefulness is the result of the right mental state and situation to achieve what you want.
For example, many amateurs think “having connections” is the key to success.
“If only Ashton Kutcher would share my article I’d go viral” or if so and so would introduce me to so and so who would make me successful by signing me or buying my product etc.”
The fact of the matter is, mentors will help anyone when they see promise and they feel like they look good by making an introduction for you.
In fact, they benefit by introducing the “up and coming start up” to the investor or the “up and coming film maker” to the producer.
So the question to ask yourself is:
“What can I do to make this person look like a star while helping me out?”
And that starts with the work.
Your job is to make them look cool by helping you and you’ll be able to get any introduction you want.
You connect with influencers by being valuable to them.
Ashton Kutcher shared this article of mine, and yes, one of my best friends was his co-founder at his media company, which at the time, was using his FB account to share viral news.
Though my friend got my foot in the door, by no means, and I repeat, by no means did he just send my article to Ashton to share it as a favor.
You should assume that doesn’t exist — and though nepotism does of course exist in various situations, it’s often way too costly.
Had I “forced” my best friend to shove the article down Ashtons throat to share it, he never would have helped me out again — and you always want to play the long game.
From the outset my friend told me given he has thirty people and needs to lead by example, he can’t play nepotistic favors and said my article would get tested like every other article.
If it tested well, they’d elevate it to the influencer pages which included Ashton’s FB with 18 million followers.
My article happened to be a very powerful piece that was trending on Medium, and was republished in the Ny Observer.
It was my entrepreneurial journey, and I had invested in exclusive art for it (which I still use for my writing today), so the piece was adding value to their site for sure.
Furthermore, when the piece ran prior to it elevating to Ashton’s page, I literally spent the whole day harassing every friend I had calling in every chip for them to share it.
It cost me a whole day of hustle.
Finally, I was sitting in a coffee shop where I was working from, and my email began getting flooded.
I looked up and I saw my article had been shared by Ashton Kutcher and looked at the follower number next to his name. It was over 18 million followers at the time.
This was the quote and screenshot shared.
Screenshot of my article being shared by Ashton Kutcher
I was so happy. I worked so hard for it and we had just finished building an app we were driving downloads to, so watching 300 downloads come in was nice (even then, only 300! Makes you realize how influential 'influencers' actually are?)
The point is that even with my dear friend being his actual business partner, I still had to earn it. You always have to earn it.
Always assume you’re going to have to earn it always and the only way to have someone help you is by helping them.
Absolute resourcefulness is the ultimate tool in the tool set.
Here are the principles for the right mindset, skillset and tool set to ensure every goal you put your mind to gets achieved.
2. Go All In With Crazy Uncertainty
Burn your boats and take the fucking island. — Tony Robbins
Get scared, real scared. I used to be scared shitless when I’d spend money on myself whether it was a conference or a course or training for with limited funds in the bank.
You’d probably guess from my writing that I’m financially much more well off than I actually am, but my bank account isn’t flooded by any means and I’m still living on a lean budget.
I’m kicking off a new company and still investing heavily in myself but now I’m ready to accept the abundance the world has in store.
I’m ready to breakout.
This was my personal breakthrough whether it’s for my start up or for my new found fire with writing (because I invested in an expensive writing course).
This is because I’ve passed what Benjamin P. Hardy calls my point of no return — the point where you’re absolutely committed, and there’s no way of retreat.
It took me two years to finally appreciate and understand that this is an absolute must to get what you want.
If you have three months of runway in the bank and you fail, will you actually be homeless? Can you imagine the state of intensity that brings out of you?
Of course, some people actually would be homeless (disclaimer don’t go risking your family fortune because I said too) and I’m not trying to discount facts of poverty or anything else.
For the average fearful professional (used to be me), you need to get comfortable being uncomfortable.
Trust me, it’s never as scary in the moment as you project it will be whether it’s public speaking or cold approaching prospective mates.
I used to be scared talking to women. That fear of rejection was unfathomable for me. Literally after finally being forced to do it only twice, I realize it’s the easiest thing in the world and that rejection has nothing to do with me.
Even if it does, it’s likely not a good match either way but it took purging the fear to overcome it.
3. Invest In Yourself Like Your Life Depends On It
One of the biggest lessons I’ve learned is you don’t value what you don’t pay for.
When you invest in something, you make it a part of who you are and identify with it “as me”.
So the more you invest "in me", the more you value yourself and therefore the more focused you become.
I just spent money on two courses and they’re bringing out the fire in me and I’m hitting my writing stride and breaking through in my business as a result (both courses were on both topics respectively).
Since the writing course began, I’ve published a long form piece every day which never happened until I invested in the course. Before that, I was slogging a long slowly but surely but never the intensity and fire I have now.
My work got picked up in the Mission, Medium’s #2 publication, and it was because I wrote a viral article that did extremely well.
The best connections I ever made were from a conference I spent $5,000 on. My dad told me to save the money. Had I listened, I wouldn’t have met the contacts who I ended up becoming roommates with, and who had my article shared by Ashton Kutcher.
My friend just invested $100k in a mastermind group and I’m positive it will raise his profile and business to the world stage it deserves to be on.
Money always ends up becoming electronic numbers in a bank account.
Using it wisely by investing in yourself is tangible and priceless and returns itself in droves.
4. Possess The Confidence To Put Yourself Out There
Whether it’s talking to potential customers, public speaking or hitting publish, you need to be in a confident state of sharing yourself through your work.
If you’re worried about what people think or your brand and reputation, you don’t possess the confidence required to ensure your goal gets achieved.
This was a huge learning for me as I used to be obsessive over every single detail.
During my first start up, I badgered my co-founder about how we appeared in the press (it never even happened because I didn’t let it).
The fact is, no one cares or has time to care and people forget.
You’ll find the bolder your headline and more comfortable you are saying or doing something, the less people actually care.
I sat on my ‘boldest’ piece for a year because I was calling marriage a violent institution. I published it finally with the help of my coach and no one cared. Crickets. Literally have gotten ten likes since publishing July 1.
People like what’s familiar and comfortable and safe.
Meanwhile, this piece went viral immediately and has been liked by almost 1,000 people.
People generally don’t share the deep dark things they don’t talk about but agree with.
Part of why no one thought Donald Trump could win, but the silent majority quietly voted for him.
5. Declare Goals Publicly and Use Deadlines To Force Creativity and Resourcefulness
I have a goal to acquire 1,000 subscribers prior to my next coaching call in my writing course. I’ve only hit 265 so far.
With the call being five days away, I am getting resourceful. I'm going to reach far and wide to hit the goal. I'll call in chips that I otherwise wouldn't have.
Now, I didn’t set negative consequences if I missed the goal from the outset so perhaps I would have worked with more intensity from day one now that we’re approaching the goal.
But had I said “if I don’t hit this goal, I’ll give a charity I despise $1,000”, I would have worked with more fire. Imagine if I would have upped the ante and made it $5,000.
Now imagine if I made it $10,000 to the American Nazi party or ISIS or something horrible like that (thanks Tim Ferris for those recommendations).
Not that I’d do it but you get the point.
Yes, subscribers are important to me but they apparently aren’t important to “burn my boats and take the fucking island”.
I will make them so on the next one!
So the question is, how bad do you want your goal? What price are you willing to pay for it?
6. Measure and Report Goals And Have Someone Unbiased Hold You Accountable
“What you measure, you improve. What you report, you accelerate.” — Legendary Management Expert Peter Drucker
Having to report to someone else who doesn’t care for you personally creates a healthy fear that spurs action. This is the exact purpose of board meetings and goals set at each one.
Others holding you accountable don’t see all your hard work that give you moral high ground to miss, they just look at the results. So it forces you to be results oriented.
At the beginning of our start up, our seed investor who put up our first $500k had us do calls with him every two weeks.
I’d have to wake up for them every other Friday at 6am because I was based in San Francisco and he was on the East Coast.
We’d send a deck with activity updates and goals — and every one of us would show up for a meeting to talk about our activity.
This did two things:
When you’re live with an authority, you tend to make bolder goals and when you know you have to report them to everyone, you can bet you go out of your way to follow through.
Now that we’ve stopped those meetings as we’ve progressed through that incubation period with that investor, I’ve noticed a significant drop in urgency.
We haven’t written goals down and we don’t go to sleep every night knowing we’re reporting them in front of the tribe every other Friday morning.
Real force functions work and having an objective, results only oriented person hold you accountable is critical to achieve results and grow quickly.
7. Get Outside Help With Someone Who Has Achieved What You Want To Achieve
“Never take advice from someone you wouldn’t trade places with.” — Darren Hardy
To that end, get coached by someone who has achieved your goal. Only take advice from someone who you’d trade places with.
That ensures the decisions you make as a result of their advice come from the right place. There are too many teachers who think they know what to do because of study, yet don't know how to teach it when it matters, because they only know the knowledge or material and don't have the deep experience required to transfer their mastery over to you.
That’s why I think the best psychologists are entrepreneurs or winning marketers, not often psychologists themselves.
Entrepreneurs and marketers have to deeply understand what makes a human being tick to act, and what motivates them to spend hard earned money, have employees work for them for years, rely on them and more.
They have led people when it matters and got them to buy things with a lot on the line.
The average practitioner whether it’s a psychologist or medical doctor or lawyer has generally been risk averse which is what led them down that traditional path in the first place.
They often haven’t put themselves out there in a way that exposes themselves to people and situations to deeply understand the trials, tribulations and challenges people go through and the complexity of the self that comes with it.
This is of course not all black and white, and it's not to say doctors and psychologists can't be entrepreneurs; I'm thinking about the traditional path from college to practitioner to make a point (one which most of them would agree with me on).
Some nuance here of course but you get the point.
Conclusion
Make plans to create the mindset, skill set and tool set to achieve your goals.
Create the conditions and context for absolute resourcefulness and accountability to bring the deep creativity and commitment required to achieve whatever goal it is you want.
Add a master coach and you’re guaranteed to hit every goal you set your mind to. | https://medium.com/swlh/if-you-do-these-7-things-youll-be-able-to-achieve-any-goal-you-set-ebe50250647b | ['Aram Rasa Taghavi'] | 2019-01-24 16:10:40.406000+00:00 | ['Resolutions', 'Life Lessons', 'Goals', 'Entrepreneurship', 'Life'] |
How to build a simple time series dashboard in Python with Panel, Altair and a Jupyter Notebook | How to build a simple time series dashboard in Python with Panel, Altair and a Jupyter Notebook
Two filters + one interactive area chart in roughly 25 lines of code.
I’ve been using Altair for over a year now, and it has quickly become my go-to charting library in Python. I love the built-in interactivity of the plots and the fact that the syntax is built on the Grammar of Graphics.
Altair even has some built-in interactivity through using Vega widgets. However, I have found this to be limiting at times and it doesn’t really allow me to create layouts the way I would want to for a dashboard.
Then I found Panel. Panel calls itself a “high-level app and dashboarding solution for Python” and it’s part of the HoloViz ecosystem managed by Anaconda. I’d heard of HoloViz before (and it’s relative overview site, PyViz), but never really spent the time to dive into the landscape. So here we go!
At first glance, what I love about Panel is that it is plotting-library-agnostic — it supports nearly all visualization libraries. So you don’t have to be a loyal Altair user to learn a bit about making dashboards in this post. That being said, compared to the other code samples in the Panel example gallery, I think the integration with Altair feels really intuitive.
Here are a few other really nice things about Panel:
It’s reactive (updates automatically!)
(updates automatically!) It’s declarative (readable code)
(readable code) Supports different layouts (flexible)
(flexible) Fully deployable to a server (shareable)
to a server (shareable) Jupyter Notebook compatible (but not dependent…Altair, however, is dependent on Jupyter. So I don’t advise trying this tutorial in something else.)
Let’s make a dashboard using Panel
Here’s what we are going to build: the simplest of little dashboards, composed of an area chart and two filters. We’ll also add a title and subtitle for good measure. All within a Jupyter Notebook.
The dashboard!
This tutorial will break the code into chunks and walk through it bit-by-bit, but if you just want dive into the full code (with comments), the Github repo is here.
Now for the code!
First, as always, import those dependent libraries. Here’s what you need:
import panel as pn
import altair as alt
from altair import datum
import pandas as pd
from vega_datasets import data
import datetime as dt
Updated August 27: please make sure that your Altair package is version 3.2 or above, otherwise you’ll get some data formatting errors.
Then we need to add two special lines of code, one for Altair and one for Panel. The first tells Altair to set the Vega-Lite rendered to Jupyter Notebook (if you’re using Jupyter Lab, check the Altair docs for alternative). The second line tells Panel to accept Vega (which powers Altair) as an extension. You can learn more about how extensions work in the components section of the Panel docs.
alt.renderers.enable('default')
pn.extension('vega')
Since we’re using some sample data from the vega_datasets package, let’s preview our dataframe.
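(The line that loads this dataframe isn't shown in this extract; it is presumably something like the following, using the built-in stocks dataset from vega_datasets:)

source = data.stocks()  # columns: symbol, date, price
source.head()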
the “stocks” dataframe from vega_datasets
Now the fun part: let’s make some widgets! We’ll be making a dropdown and a date range slider to filter our data.
The dropdown widget takes two parameters: a title for your widget and the “options”.
# create list of company names (tickers) to use as options
tickers = ['AAPL', 'GOOG', 'IBM', 'MSFT']

# this creates the dropdown widget
ticker = pn.widgets.Select(name='Company', options=tickers)
Then we’ll create the date range slider. You can access this using the same pn.widgets method. The range slider takes four parameters: start date, end date, default starting date and default ending date.
# this creates the date range slider
date_range_slider = pn.widgets.DateRangeSlider(
    name='Date Range Slider',
    start=dt.datetime(2001, 1, 1), end=dt.datetime(2010, 1, 1),
    value=(dt.datetime(2001, 1, 1), dt.datetime(2010, 1, 1))
)
Widgets done! Now let’s add a title and subtitle, so it’s clear to someone else what this dashboard is about. Panel uses Markdown so it’s easy to specify headings.
title = '### Stock Price Dashboard'
subtitle = 'This dashboard allows you to select a company and date range to see stock prices.'
Note that we are just declaring variables at this point. Nothing has been built. But now, we start to get into the dashboard-building stuff.
To create a reactive dashboard, we need to tell our Panel object what to “depend” on. This effectively tells Panel to listen for changes in our widgets, and then reload the chart. This line will act as a decorator for the function: the ticker.param.value and date_range_slider.param.value will be used within our get_plot() function, specifically the Altair bit to manipulate the chart.
@pn.depends(ticker.param.value, date_range_slider.param.value)
We're reactive. Now it's time to create the plot directly below this line. Let's write a function that does all our plotting dirty work. This will contain all the data shaping/manipulating as well as the code that creates our Altair chart. We will split this code into three parts using comments: 1) format data, 2) create pandas filters and 3) create the Altair object.
def get_plot(ticker, date_range):
    # Load and format the data
    df = source  # define df
    df['date'] = pd.to_datetime(df['date'])

    # create date filter using values from the range slider
    # store the first and last date range slider value in a var
    start_date = date_range_slider.value[0]
    end_date = date_range_slider.value[1]

    # create filter mask for the dataframe
    mask = (df['date'] > start_date) & (df['date'] <= end_date)
    df = df.loc[mask]  # filter the dataframe

    # create the Altair chart object
    chart = alt.Chart(df).mark_line().encode(x='date', y='price', tooltip=alt.Tooltip(['date', 'price'])).transform_filter(
        (datum.symbol == ticker)  # this ties in the filter
    )

    return chart
Almost there! Now we need to create our final Panel object. Panel objects can consist of rows and columns. Since this is a simple little dashboard, we will just use two columns.
First we create our single row. Then, we fill it with the contents of two columns. Our first column will contain our 1) title, 2) subtitle, 3) dropdown and 4) date slider. The second column will display the chart.
dashboard = pn.Row(pn.Column(title, subtitle, ticker, date_range_slider),
get_plot # our draw chart function!
)
And we’re done! Simply call your dashboard variable and see your tiny little app in all of its beauty.
Deploying your dashboard
Another cool thing about Panel is it’s ability to deploy apps through a Bokeh server. For now, we’ll simply take our dashboard and add it as a “servable” local app for Bokeh so we can test our dashboard functionality. For all the details on deploying, check out the extensive deploy and export Panel page and the Bokeh docs on running a Bokeh server.
Adding this line of code:
dashboard.servable()
Will make your dashboard discoverable to Bokeh server. Now we’ll need to pop over to the command line to start our server. Run the below code to start your server as a localhost. The “ — show” command just tells Bokeh to pop open a new tab in the browser with your app displayed as soon as the server is ready. You can copy/paste this line into the terminal:
panel serve --show panel-altair-demo.ipynb
And there it is! Our little stock price app. Of course, this is using a standard, demo dataset. But hopefully you can start to see how you can plug in your dataset and create a more useful application. Think of the three columns of the dataframe simply as placeholders:
symbol → <your categorical variable>
date → <your dates and/or times>
price → <your values>
Even better, you can create a dashboard with multiple charts included. All without leaving the comfort of you cozy Jupyter Notebook.
I’m pumped to dive into Panel in more detail to start making more complex dashboards, quick prototypes, internal tools, etc. Happy coding!
Dig it? There’s more. Data Curious is a weekly newsletter that shares cool tutorials like this one, plus interesting data articles to read, datasets to analyze and data viz to get inspired. Sign up here. | https://towardsdatascience.com/how-to-build-a-time-series-dashboard-in-python-with-panel-altair-and-a-jupyter-notebook-c0ed40f02289 | ['Benjamin Cooley'] | 2019-08-27 11:22:59.479000+00:00 | ['Python', 'Data Analysis', 'Data Science', 'Data', 'Data Visualization'] |
How haiku makes you a better writer | photo credit: (mike912mueller- creative commons)
Poetry is the rhythmical creation of beauty in words.
— Edgar Allan Poe, nineteenth-century American author and poet
Writing poetry is hard enough. Now add rules, impose structure, and leave off the page as much as you put on. That’s haiku. It’s been said haiku is the very soul of poetry.
Haiku poetry requires structure and demands the poet follow certain rules. Contemporary “Western” haiku has exactly one stanza comprised of three distinct lines; the whole poem uses just 17 syllables, no more, no less.
The first line has five syllables, the second has seven. The third (and final) line has five more syllables. This 5–7–5 pattern is widely interpreted, and many of the traditional haiku poems didn’t always follow it.
Purists include a kigo in the poem. This is a single word, or short phrase, that symbolizes the season of the poem and sometimes includes a reference to nature or natural phenomena. Modern-day poets don’t always include this reference, but it’s always nice when it’s there.
Like this:
the snow falls briskly (5)
in winter I am riding (7) (kigo)
bareback on a horse (5)
By Chuck Douros
In the haiku above, “the snow falls briskly” is a slightly more subtle seasonal depiction of the season of winter, and “in winter I am riding” is more direct; both are examples of kigo.
I started writing haiku as a strategy to become a better non-fiction writer. Huh? That sounds counter-intuitive and a little crazy. How can a non-fiction writer become better at his craft by learning to write haiku? I’ll explain — it won’t take long. In a word: brevity. Haiku forces the writer to carefully select only the most meaningful words. It requires intense discipline to place not only the right words for the story, but the right order as well. Finally, well-written haiku is provocative and leaves your imagination running wild. The poem leaves an indelible picture in your mind’s eye. These are all very valuable traits of a good non-fiction writer as well. It’s too easy to regurgitate facts, figures, research, and data on pages and pages of ordinary drivel. Better non-fiction material incorporates all the style and brevity of a great poem.
In haiku the half is greater than the whole: the haiku’s achievement is in what it omits.
— Robert Spiess, American haiku poet
In 2010, I entered a national haiku poetry contest with a distinctly organic theme: truffle mushrooms. The elusive subterranean mushroom is prized in the culinary world and very hard to find in nature. My poem, which received the grand prize across the nation, was inspired by a friend who found her true love during a truffle hunt.
it was our first time
you and I unearthed much more —
now we search as one
All forms of writing benefit from thoughtful, careful word selection. Take a page from the masterful American poet, Robert Spiess, and dare to leave off the page more than you put on it. It's not as easy as it sounds.
NF Shows How To Be a Christian Rapper Without Being a Christian Rapper | NF realizes that what our society needs is not more little rap songs about Christianity, but more little rap songs by Christians—with their Christianity latent.
Photo by Yvette de Wit on Unsplash
Several weeks ago, hip-hop artist NF shocked the music world by shooting to the top of the Billboard Artist 100 charts thanks to the unexpected success of his latest album, “The Search.” As news of NF’s climb reverberated around the internet, numerous media outlets ran profiles on the 28-year-old Michigan native and Christian Nathan Feuerstein, who is better known as NF.
Those profiles compared NF to another white rapper from Michigan, Eminem. Indeed, in an interview several years ago, NF admitted no one has influenced his music more than the Detroit rap legend. As one critic recently noted, NF models himself after his predecessor in both substance and style, also drawing from his traumatic childhood and coming from “the technical school of rap, where the height of artistry is cramming as many syllables and as much internal rhyme into each bar as possible.”
A prime example of NF’s gut-wrenching subject matter is his previously released “How Could You Leave Us?” in which NF directs righteous anger at his mother who died of a drug overdose when the rapper was only 18:
“I don’t get it mom, don’t you want to watch your babies grow? I guess that pills are more important, all you have to say is no But you won’t do it will you?/ You gon’ keep popping ’til those pills kill you I know you gone but I can still feel you.”
“The Search” has thus far been similarly praised for its authenticity and NF’s willingness to humbly admit his mental health struggles, especially his diagnosis with obsessive-compulsive disorder. In the track “Leave Me Alone,” he raps:
“Diagnosed with OCD, what does that mean? Well, gather ’round That means I obsessively obsess on things I think about That means I might take a normal thought and think it’s so profound (leave me alone)/ Ruminating, fill balloons up full of doubt/ Do the same things, if I don’t, I’m overwhelmed Thoughts are pacing, they go ’round and ’round and ’round It’s so draining, let’s move onto something else, fine.”
‘Are You a Christian Plumber?’
While NF isn’t a carbon copy of Eminem — the former doesn’t swear in his lyrics, whereas the latter hasn’t met a four-letter word he doesn’t like — his best line doesn’t come from his fast-flowing clean lyrics but from an interview in 2016 in which he was asked if he classified himself as a Christian rapper. NF responded:
Not at all. I mean, I’m a Christian, but I’m just an artist. I’m a musician. You know what I mean? To me, it’s like if you’re a Christian and you’re a plumber, are you a Christian plumber? That’s the easiest way for me to explain it. I just make music.
NF’s attempt to distance himself from the label of Christian rapper is reminiscent of Lecrae doing the same several years ago when admitting he wanted to “transcend the genre.” At first glance, both Lecrae and NF’s desire to shed the C-word from their personae seems like a calculated move to maintain a larger, more mainstream audience in the hopes of making more money — a renunciation of faith for riches.
But that assessment doesn’t give credit to NF’s wise and deep understanding of the gospel ethic of work, as evidenced by his reference to Christian plumbers. C.S. Lewis had a similarly captivating line about this very subject in his essay “Christian Apologetics.”
“What we want is not more little books about Christianity,” wrote Lewis, “but more little books by Christians on other subjects — with their Christianity latent.”
NF grasps this principle well. He realizes that what our society wants and needs is not more little rap songs about Christianity, but more little rap songs by Christians — with their Christianity latent.
We Need More Latent Christianity
By laying down the mantel of Christian rapper and instead making music that is authentic, full of pain, yet performed with humility and with lyrics that don’t burn listeners’ ears, NF has become the living embodiment of Lewis’ words. It would be easy for NF to drape himself in the garb of American Christianity, proclaim himself a Christian rapper, and sing about the Father, Son, and Holy Spirit.
That is not meant to disrespect those who do so, but to show there is room for men and women to genuinely maintain their faith while not feeling obligated to commercialize it. NF understands he can serve others and glorify God by rapping with excellence and humility and by addressing the pain he has endured.
More recently, Timothy Keller, who some consider a modern-day Lewis, has written about this topic as well. In his 2012 book about the intersection of faith and work, “Every Good Endeavor,” the New York City pastor wrote:
Some people think of the gospel as something we are principally to ‘look at’ in our work. This would mean that Christian musicians should play Christian music, Christian writers should write stories about conversion, and Christian businessmen and -women should work for companies that make Christian-themed products and services for Christian customers. Yes, some Christians in those fields would sometimes do well to do those things, but it is a mistake to think that the Christian worldview is operating only when we are doing such overtly Christian activities. Instead, think of the gospel as a set of glasses through which you ‘look’ at everything else in the world. … The Christian writer can constantly be showing the destructiveness of making something besides God into the central thing, even without mentioning God directly.
By appealing to the “Christian plumber,” NF shows a profound understanding of faith and work that we would all do well to learn from and imitate. To be sure, as Keller mentions, undoubtedly many men and women should be doing work in which they explicitly articulate their faith.
But NF’s words serve as a reminder that those who are not pastors, worship leaders, or Christian businessmen or businesswomen are not second-class citizens in the Kingdom of God. Christians have almost unlimited potential to live out their faith as plumbers, accountants, politicians, or any number of thing, as NF lives out his, with their Christianity latent.
John Thomas is a freelance writer. His writing has appeared at The Public Discourse, The American Conservative, and Christianity Today. He writes regularly at medium.com/soli-deo-gloria.
A version of this article first appeared at The Federalist. | https://medium.com/soli-deo-gloria/nf-shows-how-to-be-a-christian-rapper-without-being-a-christian-rapper-75389efd7b1e | ['John Thomas'] | 2019-09-02 10:57:16.060000+00:00 | ['Work', 'Mental Health', 'Religion', 'Spirituality', 'Christianity'] |
2018's leading mobile trends to look out for | 2018's leading mobile trends to look out for
As we look ahead to 2018, mobile trends and challenges are once again hot topics at the forefront of digital marketing.
Since working in the digital industry, I can’t recall a single year where mobile was not the frontrunner for trend of the following year — 2018 is no exception.
While mobile remains one of the most utilised cross-generation technical devices, it also remains one of the big technical mysteries. For such an advanced technological device, reaching people at their most digitally engaged, it still falls short for me. For research and branding, it ticks all the boxes. As a sales end point, we have a way to go. So, what does the year ahead hold for mobile?
Driving advertisers to new customers
Today, consumers are concentration poor. They expect messages to be succinct, informative, and relevant — or you lose them before you’ve even started. So, what concepts can 2018 deliver that will aid interaction and engagement? In turn, how can these concepts build relationships between advertisers and audiences that will ultimately drive new customers?
According to Google, 96% of users reach for their mobile phone in order to conduct research. With this in mind, advertisers need to ensure their mobile sites are just as informative as their desktop versions. They must create an excellent user friendly environment too, considering the smaller screen size.
What should advertisers and agencies consider for the year ahead to adapt to this user behaviour?
Speed is everything
Firstly, it sounds simple but speed really is everything.
There should be a distinction between your mobile and desktop SEO strategies as search intent varies across device. Understanding the opportunities, and threats, should make you rethink how to approach mobile SEO. With mobile, the key focus should be speed and navigation. Nichola Stott, MD of theMediaFlow, highlights that sites taking more than 3 seconds to load on mobile had an incredible 53% abandonment rate!
Be Dynamic
Our mobiles are such personal things, so our content should be too.
Think of a new customer versus a returning customer. Each user should be treated separately and strong consideration of the message you wish to communicate to them should be taken. Thinking about dynamically rich content that will suit each user’s needs will provide more relevant messaging. Thus, the user should be more engaged in your communication. Strive to create breakthrough.
Artificial Intelligence
A certainty in media conversations now and next year, especially with the recent resurgence of Blade runner, artificial intelligence most definitely extends to mobile.
Mobile app data can provide powerful insights which should be digested and plugged into all marketing plans. Using this data in the right way allows advertisers to deliver highly relevant content that is customised based on audience attributes.
The downside is that this approach takes time and investment. However, some publishers are already offering AI driven marketing solutions. A great first step to utilising mobile data effectively is plugging your activity into these AI solutions. In turn, the data harvested can be implemented in your advertising.
Video is expected to continue to dominate
Video streaming has reportedly accounted for 75% of internet traffic in 2017. With mobile, and tablet, providing easy viewing platforms to users, we should expect mobile video ad spend to increase next year. The Huffington Post reports that the average viewer engages in 36 minutes of online video via mobile compared to 19 minutes on desktop.
Players such as YouTube, Facebook, Snapchat, and Twitter are expected to add further interactive elements in the year ahead to take full advantage of this ever-increasing form of user interaction.
The mobile app is 10 years old
In the summer of 2018, the mobile app celebrates its 10th birthday!
Statista are forecasting that 197 billion apps will be downloaded in 2017, jumping to 352 billion in 3 years’ time. ComScore has reported that the average user interacts with just five apps — led by Facebook. So, if your app is lucky enough to be one of the few that are regularly used by a mobile user, then you need to ensure your app is the perfect extension of your website offering.
Make your app a priority. It provides on-the-spot access to making a sale with an advertiser. It should be seen as equally important as a brand search campaign due to its place in the consideration phase of your audience. You can find out more about launching the perfect app download strategy here.
Next year, mobile advertisers will continue to face challenges such as security and ad blocking. But, there will be a plethora of exciting new projects in the pipeline. Predicted advancement in consumer interaction via voice tech and developments in augmented reality make for a thrilling 2018 for mobile.
Whatever route you take, ensure you’ve covered the basics. The user needs to remain at the forefront of any activity. For existing customers, make it simple for them to find what they need. In finding new audiences, utilise data to limit wastage and improve advertising performance.
Be clear, be engaging, be disruptive. | https://medium.com/syzygy-london/2018s-leading-mobile-trends-to-look-out-for-b42645ee6f6a | ['Sophie M'] | 2017-10-18 10:09:15.671000+00:00 | ['Mobile', 'Advertising', 'Marketing', 'Digital Marketing', 'Mobile Marketing'] |
Off with their heads — the rise of the modern CMS | Wasn’t this supposed to be easy?
You paid a glamorous agency to design your new website. You spent thousands of hours and a gazillion dollars laboring over every aspect of the design to get it just right. After rounds and rounds of meetings and reviews and arguments and sweat and tweaks and nudges… finally you have a design you are happy with. Even the board is happy with it. And they want it. They want it now.
So let’s build this sucker. And let’s launch it. Then you can finally say thank you and goodbye to those expensive design and development contractors and get on with managing the content in your gleaming new website yourself. For you, in your wisdom, chose to invest in the best CMS! And now the web is your oyster.
No?
No. Usually not.
An expensive legacy
Content Management Systems, or CMS, often require incredible levels of financial, technical, and even political investment. For years they have promised clients the opportunity to take control of their websites without the need to write code or understand infrastructure. But more often than not, they leave their aspirations of being liberated and creative in tatters.
Back in the late nineties and early 2000s, CMS were expensive. Very expensive. No, even more than the number you are probably thinking of now.
The industry was dominated by exotic software solutions, sold by serious sales executives with silver business card holders. Comfort in open source software or a product that wasn’t delivered in sturdy, cling-wrapped presentation cases was yet to take hold.
I undertook my first project to select a CMS vendor back in 2001 when there were far fewer options available. I worked for a software company who had built and were hosting a website for an insurance firm. They wanted to be able to update their news page, and perhaps one day, edit the phone number in their footer without getting any of that messy HTML stuff all over their fingers.
They asked us for a CMS, so we went shopping.
The first two products we found were from companies with offices in London. Quite locally to us as luck would have it, because in order to purchase a license for the CMS from either of these companies we would first need to sit in their office across a large board room table. They were keen to talk about how we would have to pay them just under £1M for the licenses. (I’m unsure of the exchange rate back in 2001, but can we just agree that whatever it was, this is, in layman’s terms, a crap ton of money?)
I wore my best suit to that meeting. I didn’t want to look like a fool while we negotiated a £1M deal to make it easier for my client to update the contact details in their footer.
Of course, the use case for a CMS usually goes further than this. But back then, any flexibility came at an alarming price.
Paying this sort of money for the license (not the hosting infrastructure, not the consultancy, not the training for the developers and authors, not the design nor the build, just the license) surely demonstrates impressive levels of eagerness to remove developers from the equation. Or, perhaps, an over-optimistic view of just how much ease and freedom this kind of tool will provide.
Luckily though, times have changed. Yes, you can still pay a lot of money for a CMS, and the challenges in using them effectively are well documented. But over the years more people put their weight behind trying to solve this problem. New players have entered the market making the cost and the scarcity of the skills come down. The days of giant CMS vendors dominating the entire market may be numbered. And now we seem to be on the verge of something of a revolution. A CMS is now attainable on any size of budget and without a team of specialized consultants to roll it out.
And I no longer need my suit for when I’m searching for the right CMS.
Why was this so difficult?
Before we look at the new approach to CMS which is rising in popularity, it’s worth understanding some of the history, and some of the challenges which have led legacy models to be challenged.
It once seemed that we had just a small number of complex CMS products available to us. Our choices were extremely limited and things were expensive as a result. But time brought greater acceptance of open source software, and with it, more and more attempts to deliver affordable and approachable CMS solutions.
For example, Drupal grew from a humble message board platform in 2001 to an open source CMS supported by a large community of active developers. Around the same time WordPress started to bring CMS-like features to a growing community of bloggers. Both projects were empowered by the availability and relatively low cost of infrastructure for hosting PHP and MySQL.
Other projects began to emerge as more people tackled the challenge of competing with the larger, established CMS vendors. As an industry, we were discovering that we could meet some of the technical challenges inherent in a CMS, and that was empowering. But our approach to usability and also safeguarding front-end code left quite a bit to be desired.
The market started filling up with products which were trying to compete on the basis of how many features they had. The more features and flexibility a product could list, the more desirable, future proof, and valuable it was deemed to be. This was a dangerous path.
We came to expect a CMS to do far more than manage content. It’s a huge misnomer. The most popular and expensive CMS often have a laundry list of tempting features. We expect them to allow us to do everything from customizing our page layouts, to powering e-commerce, to handling user-generated content — all while generating front-end code and being a core part of the hosting infrastructure serving a site to a global audience.
Bundling all of these capabilities into one, monolithic piece of software is incredibly challenging. Each area is full of craft and nuance and subtlety. Yet vendors have tried to package them up with an admin interface for anyone to manage through the click of some buttons.
Managing your site with a CMS became insufferably difficult. You needed experts to help you use it. And what it delivered fell short of the standard people wanted.
When we try to design a product capable of doing everything for everyone, we often find ourselves with a product so complex that nobody can use it for anything.
Focus
It sounds like I’m stating the obvious, but so many of the features done poorly by a legacy CMS relate to managing the presentation, rather than just the content. After your huge investment to establish the design of your site (remember how excited the board were?), you run the risk of undermining the design with every single content edit.
A good CMS should protect your design investment. Not help you to destroy it.
Happily, the Headless CMS approach has been gaining momentum, and it’s all about focussing on doing one thing well. That thing is managing content.
How do headless CMS work?
Headless, or decoupled, CMS provide the ability to define the structure of your content, offer an interface to populate and manage the content in that defined structure, and then serve the content via content APIs.
In this way, the management of content is not wrapped up with the ability to modify the design. The presentation of the content through the carefully crafted interface is handled away from your CMS, protecting it from that overzealous content author who thinks that the headings in their article really need “a little extra pop”.
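To make that separation concrete, here is a minimal sketch of what the consuming side might look like. The endpoint, field names, and types here are hypothetical, my own illustration rather than any particular product’s API; the point is simply that the CMS hands back structured content over HTTP and the presentation layer decides how to render it.

// Hypothetical shape of an article, as defined in the CMS content model
interface Article {
  title: string;
  slug: string;
  body: string;
  publishedAt: string;
}

// Fetch structured content from the (hypothetical) content API endpoint
const getArticles = async (): Promise<Article[]> => {
  const res = await fetch("https://api.example-cms.com/spaces/my-site/articles");
  if (!res.ok) throw new Error(`Content API error: ${res.status}`);
  return res.json();
};

// Render it with markup the front-end team fully controls
const renderArticleList = (articles: Article[]): string =>
  articles
    .map((a) => `<li><a href="/blog/${a.slug}">${a.title}</a></li>`)
    .join("\n");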
The admin interface a headless CMS provides is not a part of the infrastructure you host to serve your site. Putting distance between the mechanics of managing your content (along with various user management and publishing workflow) and the mechanics of hosting your site is extremely attractive.
When the CMS was part of the hosting infrastructure it would typically compile the page a visitor requested at the time they requested it. That involved activities like asking a database what content should go into which template and cooking it up to serve à la minute. This means that both your site and your CMS would need to be able to scale to handle the load of whatever you throw at them.
Having a level of abstraction bodes well for scaling and security. The more distance we can place between traffic to our site, and the moving parts which drive the CMS, the better our chances of protecting it from those with malicious intent.
Added peace of mind.
Performance and craft
The benefits of decoupling the management of the content from the control of the design go beyond the aesthetic we discussed earlier. They impact performance, too.
When a traditional CMS allowed authors to manipulate the design of your site, it needed to generate some of the code for the site automatically. Features like drag and drop and WYSIWYG editors bring with them code generated automatically for you by the system.
Most front-end developers will start fidgeting at that thought. And I’m right there with them.
This generated code was devised long before your site was ever being discussed. It was not made for you. It was made to serve a generic purpose. It has been designed to be massively flexible so that it can be used time and time again. That’s hard to do and so we often pay a penalty for it as it introduces a variety of code smells into our sites. I’ve grumbled about this before. You never want visitors to your site to be able to smell your CMS.
Developers responsible for the performance and browser support of a site need control over its code if they are to do a good job of delivering on the promise of the design. A headless CMS gives them back this control by being agnostic to the tools which consume it. In this age of responsive web design and broadening contexts for where and when our visitor use our sites, keeping control over how the code is crafted, in the hands of the experts could not be more important.
Trends in web development continue to advance. As browsers and devices evolve, we need the ability to employ the best techniques possible when rendering the site into its various templates. Abstracting the content via a headless CMS creates a clean separation which allows us to render it with whatever build tools, templating language, or static site generator we might choose.
Content portability
With a headless CMS, you can break out of the monolithic model where all of your eggs are in one basket and your content can reach only as far as your CMS could support. Not only can you select what tools generate and present the content on your site, you can also publish your content in new formats and into other channels.
For example, if a new RSS-like format was to be defined, or presenting content in AMP format were to become attractive, that would be no problem for a Headless CMS. The content is not coupled to any one presentation format. You can expose it with new templates to support emerging formats as they come along.
As another example, structured content served through APIs can more readily be consumed by email campaign tools or social media campaign tools. It allows you to lean on the expertise of specialists in each of these fields, and in areas that we have not even considered yet.
Our content is liberated to go anywhere.
Momentum and adoption
There is growing enthusiasm for this approach. Not only from the developers who build on top of such architectures, or from content authors who have become more empowered and productive than before, but also from businesses looking to avoid the kind of investment and lock-in I was subject to. (I’m still not sure that we ever managed to update the phone number in that footer!)
The market for headless CMS products appears to be thriving.
Contentful, Prismic and Siteleaf are just a few of the players in a rapidly-growing space that’s receiving lots of validation. (Contentful secured a $28M series C funding round at the end of 2017) These companies already have impressive client lists adding weight to the argument that this approach is suitable for big brands with high-traffic and richly-featured sites.
It seems that the positive results of using this type of CMS are becoming increasingly apparent to CMS customers, and even products such as WordPress are evolving to also support a headless mode.
Where next?
Momentum towards a headless CMS model is helping to demystify and democratize what was once an exclusive and stuffy market. When they are not seen as the domain of only the big enterprise-grade vendors, all kinds of innovations spring forth.
The shift is not limited to the headless model alone.
We’ve seen CMS products which pursue simplicity by building atop file-based systems instead of databases. We’ve seen CMS implementing GraphQL to allow even more expressive and simplified querying of our content. We’re even starting to see CMS like Netlify CMS which solves common version control and integration challenges by delivering a natural authoring experience on top of Git.
Whatever happens next, we should not expect that the only solutions to managing content on a site have to be overwhelmingly complex or prohibitively expensive.
Labelling something as “reassuringly expensive” needs to be a habit that we put behind us as we explore modern approaches to meeting old challenges and assess each one on its merits and not just on its price tag.
I reserve my suit mostly for weddings now. Although it’s getting a little snug around the waist. | https://medium.com/netlify/off-with-their-heads-the-rise-of-the-modern-cms-e0089538aed6 | ['Phil Hawksworth'] | 2018-06-14 14:26:58.924000+00:00 | ['CMS', 'Development', 'Web Development'] |
How to End a Newsletter Email Using the Two Basic Closings | The Newsletter Email With No Closing
These emails might include a brief message and are followed by links to resources, articles, or stories the newsletter is promoting.
Sometimes the email goes straight to a list of the articles recommended. I really like how The Good Men Project does this. The links are in a large blue font right under the featured image.
You can quickly scroll and find what you might be interested in reading. This is important when 10-12 articles are listed. By the way, some newsletters contain way too many links. I’d say 12 is too many already.
The Good Men Project
Brain Pickings by Maria Popova simply drops the few blog posts she’s offering (three at most) in the body of the email. This gets you to read the first sentence or two and, sometimes, through to the end.
Brain Pickings
In short
If the email contains more than three links, make the links stand out with a large font and a different color so the reader can easily scroll and click.
Consider dropping the whole story in the body of the email if you’re promoting one or two posts. The reader will only need to decide whether or not to open the email to know if they want to read your content. The fewer decisions and clicks, the better. | https://medium.com/better-marketing/how-to-end-a-newsletter-email-using-the-two-basic-closings-27be9438aa47 | ['Daniella Mini'] | 2020-12-18 14:43:08.363000+00:00 | ['Newsletter', 'Email Marketing', 'Ideas', 'Email', 'Writing'] |
Cracking The JavaScript Coding Interview
Part 1: Strings
This is the first part of a series of articles that will help you to prepare for coding challenges. In this article, I will focus on the problems related to JavaScript strings.
Prerequisite: Before going forward, I am assuming you have basic knowledge of JavaScript. At least you should know how it works.
Note: This is not a copy of Gayle Laakmann’s book. The book is awesome. I love reading it whenever I get time. However, some samples/questions are taken from the book. While the book solves problems using Java, I have solved the same questions with JavaScript (TypeScript).
Strings: Strings are ubiquitous and are used in almost every data set. JavaScript strings are simple and less complex than in many other languages. However, that is also a problem with JavaScript: if you don't pay attention, it can lead to memory issues. Here, I have picked some of the easy and common string problems.
1. Is Unique
For a given string, determine whether all the characters in the string are unique.
There are many ways to solve this problem. The simplest uses a Map of the chars: iterate over all the characters of the string and set each one in the map. If the map already has that character, return false from the function.
export const isUniqueChars = (txt = "") => {
  const chars = new Map<string, true>();
  for (let i = 0; i < txt.length; i++) {
    if (chars.has(txt.charAt(i))) return false;
    chars.set(txt.charAt(i), true);
  }
  return true;
};

console.log(isUniqueChars("background")); // true
console.log(isUniqueChars("bawdyhouse")); // true
console.log(isUniqueChars("rhythm")); // false
You can also solve this problem using the ES6/ES2015 spread operator.
export const isUniqueChars2 = (txt = "") => {
  const chars = new Map<string, true>();
  // [...txt] is equivalent to Array.from(txt)
  return ![...txt].some((char) => {
    if (chars.has(char)) return true;
    chars.set(char, true);
  });
};

console.log(isUniqueChars2("background")); // true
console.log(isUniqueChars2("bawdyhouse")); // true
console.log(isUniqueChars2("rhythm")); // false
2. Check Permutation
Given two strings, find out whether one string is a permutation of the other, e.g. abc and acb are permutations of each other.
The simplest solution: you can sort both strings and compare them.
const sort = (str: string) => [...str].sort().join("");

export const isPermuted = (str1: string, str2: string) => {
  if (str1.length !== str2.length) return false;
  return sort(str1) === sort(str2);
};

console.log(isPermuted("abc", "acb")); // true
The above solution is simple but not performant. The sorting algorithm has a complexity of n log(n). We can solve this problem by counting characters instead. Keep the counts of all characters in one string and match them against the counts of chars in the other string. If any count mismatches, the given strings are not permutations of each other.
export const isPermuted2 = (str1: string, str2: string) => {
  if (str1.length !== str2.length) return false;

  // map to keep count
  const chars: { [key: string]: number } = {};

  for (let i = 0; i < str1.length; i++) {
    if (!chars[str1.charAt(i)]) chars[str1.charAt(i)] = 0;
    chars[str1.charAt(i)]++;
  }

  for (let i = 0; i < str2.length; i++) {
    if (!chars[str2.charAt(i)]) chars[str2.charAt(i)] = 0;
    chars[str2.charAt(i)]--;
    if (chars[str2.charAt(i)] < 0) return false;
  }
  return true;
};

console.log(isPermuted2("abc", "acb")); // true
console.log(isPermuted2("abc", "acd")); // false
3. URLify/encodeURI
Given a string, replace all special chars with their encoded values.
Yes, you can use encodeURIComponent to encode special chars. However, this example can be very useful for building a utility to encode any kind of string.
// map to keep track of all special chars
const specialChars: { [k: string]: string } = {
  "@": "%40",
  " ": "%20",
  "#": "%23",
  "%": "%25",
  "^": "%5E",
  "&": "%26",
  ":": "%3A",
  "<": "%3C",
  ">": "%3E",
};

export const encodeString = (txt = ""): string => {
  // /\W/g is a regex that matches non-word (non-alphanumeric) chars
  return txt.replace(/\W/g, (m) => specialChars[m] || "");
};

console.log(encodeString("<name:deepak>")); // %3Cname%3Adeepak%3E
You can use the same technique to build an emoji text builder.
const emojiChars: { [k: string]: string } = {
  love: "💚",
  india: "🇮🇳",
  i: "ℹ️",
};

const WORD_REG = /(\w+)/g;

export const encodeString2 = (txt = ""): string => {
  return txt.replace(WORD_REG, (_, m) => emojiChars[m.toLowerCase()] || "");
};

console.log(encodeString2("I love India")); // ℹ️ 💚 🇮🇳
Here in the above solution, I am using regex to tokenize words and replace them with their emoji.
4. String Compression
Write a program to compress a string by replacing repeated chars with their count, i.e. aaabbbb => a3b4.
The solution to this problem can be either tricky or simple. We can use a match-and-repeat regex to tokenize the string, together with the string's replace method and a higher-order callback function.
export const compress = (txt = "") => {
  return txt.replace(/(\w)(\1+)/g, (_, m1, m2) => `${m1}${m2.length + 1}`);
};

console.log(compress("aaabbbb")); // a3b4
Another solution could be to iterate through the chars and count consecutive repetitions. /(\w)(\1+)/g is the regex that finds a char and matches its repetition (\1). `\1` is a backreference to the previously matched char.
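For completeness, here is a sketch of that iterative alternative. It is my own illustration rather than code from the original article, and it mirrors the regex version's behavior by only appending a count for runs of two or more characters.

export const compress2 = (txt = "") => {
  let result = "";
  let count = 1;
  for (let i = 0; i < txt.length; i++) {
    // charAt returns "" past the end, so the final run always closes
    if (txt.charAt(i) === txt.charAt(i + 1)) {
      count++;
    } else {
      // close the current run: append the count only when the char repeats
      result += count > 1 ? `${txt.charAt(i)}${count}` : txt.charAt(i);
      count = 1;
    }
  }
  return result;
};

console.log(compress2("aaabbbb")); // a3b4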
5. Abbreviation
Create an abbreviation function which takes a string and returns a string, keeping the first and last chars and replacing the remaining chars with the count of chars in between. For example, internationalization will become i18n.
const abbrev = (text: string = "") => {
  if (text.length < 3) return text;
  const first = text.charAt(0);
  const last = text.slice(-1);
  const remLen = text.length - 2;
  return `${first}${remLen}${last}`;
};

console.log(abbrev("internationalization")); // i18n
In the above solution, I have used the methods String.charAt and String.slice to get the first and last chars respectively. We could use charAt instead of slice, but in that case we would need to find the last index and do some calculation. I am also using an ES2015 template string to build the result.
Let’s make this a little more complex by adding more cases. Let’s convert the function so it can take a group of words separated by special chars, i.e. I love javascript! will become I l2e j8t!
const word = /\w+/g;

const abbrevPlus = (text: string = "") => {
  return text.replace(word, (matched: string) => abbrev(matched));
};

console.log(abbrevPlus("I love javascript!")); // I l2e j8t!
Here in the above solution, we are using the String.replace method, which accepts a RegExp and a callback/higher-order function, to find all the valid words and replace each with its abbreviation. Just nice! It looks simpler than splitting and joining back.
Hope you like the article. Please let me know if there are other common problems you would like to see solved as part of these samples. | https://medium.com/javascript-in-plain-english/cracking-coding-interview-javascript-strings-8c26fb043cd8 | ['Deepak Vishwakarma'] | 2020-12-17 05:05:11.765000+00:00 | ['JavaScript', 'Algorithms', 'Web Development', 'Coding', 'Software Engineering'] |
7 Tips for How to Be Successful on News Break | If you’re a frequent writer on Medium, you’ve likely already heard about News Break, the news app which is now catering to content creators.
Their incentives are attractive:
a guaranteed minimum monthly income,
ad revenue share,
referral bonuses for getting readers to download the News Break app and bringing on new content creators,
and other cash awards.
There are, of course, requirements for reaching that guaranteed minimum monthly income.
The number of articles you publish, when you publish those articles, and your page views and follower counts may all factor into whether you reach that guaranteed minimum or not.
I can’t share the specific details of my contract (confidentiality agreement), but I can say that once you apply and receive a contract, it’ll be clear what you need to do to reach those minimums.
Screenshot by the author
I reached all of my contract requirements in order to make the guaranteed monthly minimum payment in four days.
Here’s how:
1. Clickbait is your friend.
Unlike Medium, you can go nuts with your titles on News Break. It actually was difficult for me at first to come up with clickbait headlines for articles because I was so used to avoiding them altogether.
While clickbait is often about overpromising and underdelivering, you can get around this by working on the emotional value of your headline. Pack your headlines with power words. Be specific (A list of tips? HOW MANY tips? HOW was that incident life-changing?). Use “This/These.” Make them think, “Whaaa?” so they click click click.
Here are some of my articles that have done well:
“It Was Only a Matter of Time Before I Cheated on My Husband” (91k page views)
“4 Ways to Make a Girl Crazy for You” (37k page views)
“5 Things Science Says Guys Don’t Find Attractive in Girls” (29k page views)
Here are some articles from other writers that have done incredibly well:
“How a Teenage Girl Became the First Survivor of the Deadliest Virus Known to Man”
“What It’s Like to Date As a Demisexual”
“How I Got Proof of Life After Death as My Husband’s Final Gift”
If clickbait headlines are tough for you to come up with, try these helpful generators: Sumo headline generator, Phrase Generator, or Title Generator.
Pro tip: If an article doesn’t do well at first, try changing its title! One writer told me she changed one SIX times before it finally started gaining traction.
2. Add the follow widget to the bottom of all of your articles.
News Break isn’t as intuitive about getting your fans to follow you.
When you are drafting an article to publish, the button on the top right under the Title bar says “widget” when you hover over it and then “follow” when you click it.
Make sure to add that to ALL of your articles, so it’s easy for your fans to click to follow you.
Here’s what it looks like:
Screenshot by the author.
3. Know your audience.
While I’d consider Medium’s audience rather educated, liberal, and worldly, News Break’s readers are American, politically conservative, and primarily male.
I quickly discovered this just by reading the comments on some of the first articles I published. Be aware that the comments can be very unfiltered and…mean. It’s recommended that you don’t read them unless you’re wanting a better sense of your audience.
In my talk with the creators of the platform, it also seems like many live in the American south.
Articles that do well cater to and/or upset that particular audience. This audience particularly responds to emotional hooks and stories.
Therefore, the following article topics do well:
personal essays on a range of topics (family, parenting, friendships, death, etc.)
being for or against masks, President Trump/President Elect Biden, or the Republican/Democratic Party or its platform
relationship/dating advice for men
gender/sexual identities
relationship/dating problems
Tips:
Keep your paragraphs short.
Make your writing easy to skim (lists, subheadings, etc.).
Check your grammar and spelling (using something like Grammarly).
Consider keeping your writing level to a 7th to 8th grade level, which is the reading level that newspapers in the U.S. cater to. You can use this analyzer. Shoot for a Fry Readability Grade Level (shown on the right) of 7 or 8.
4. Pay attention to your keywords.
Unlike Medium where you can tag your own articles, News Break automatically pulls keywords from your article.
Here are the keywords that appear on the bottom of this article:
Screenshot by the author.
Some of these can be very relevant (like “relationship advice” and “infidelity”), but some of these can also be very irrelevant (like “Europe” and “circumstance.”).
If you want an article to appear under a certain tag, make sure you’re using valid and similar keywords. Instead of, for example, using the word “cat” multiple times in an article, also use the words, “feline,” “kitten,” “pet,” etc.
5. Publish.
You’ll never learn if you don’t try, so publish maybe four articles on four different topics.
Pay attention to what does and doesn’t do well.
If none of them do, try another four, or try writing an article that is in the vein of something on the list I mentioned above.
The important thing is not to give up. You may be surprised by what takes off.
Note: Profanity and ANY sexual references are not allowed (even using the word “sex” or “sexy” can get a story flagged, so you’re better off just not using them). It can be annoying (I literally have to ctrl+F each of my stories for naughty words because I’m a potty-mouth), but the more you get used to writing “cleanly,” the better.
6. Stick with the winners.
Like any platform, even one in its early stages like News Break, you have to find the “winners” and learn from them.
Follow as many fellow creators as you can, and pay attention particularly to those that already have over 1k followers and 200k+ views. Those are writers who are doing something right.
You’re welcome to follow me. Also check out the pages of the following even more successful writers:
Shannon Ashley
Matt Lillywhite
Tracey Folly
Kerry Kerr McAvoy
Elle Silver
Joe Donan
When you find a “winner,” scroll through all of their articles. Since you won’t be able to see how many page views a particular article has, look for the ones with the highest number of likes/comments.
Read those “winning” articles. Pay attention to their headlines. The topics. How the writer structured or formatted the story. LEARN why it did well, and see if it inspires you to write your own.
7. Learn from your successes, not your failures.
Just like on Medium, an article can totally flop. Even one you think might do really well.
When it does, don’t try publishing another article in that same style/type/subject. Don’t try to repeat your failure.
I published a parenting article, for example, that has so far gotten all of three page views. THREE.
Am I going to publish another 20 parenting articles? God no. I’m going to mess with the title to see if that’ll improve things and then look back at my successful articles and see if there’s another way I can write a post on the same topic, from a different angle, etc.
While News Break is still in its infancy, it’s a great place to try out new things, introduce your work to new readers, and get some extra money too. These tips aren’t guarantees, but these are the tactics I and the other successful writers on the platform have employed. We got followers quickly once we began putting out content News Break’s audience wanted to read. | https://medium.com/inspired-writer/the-real-truth-about-how-to-do-well-on-news-break-b1e785711827 | ['Tara Blair Ball'] | 2020-12-16 13:03:49.233000+00:00 | ['Freelancing', 'Writing Tips', 'Blogging Tips', 'Blogging', 'Writing'] |
Create the Chrome Extension to Improve Productivity for bloggers and Release to the Market
Introduction
People who work as developers or bloggers usually open a lot of Chrome tabs to collect information or references and turn them into markdown syntax. This kind of job is so messy that I believe it’s worth solving.
There are two reasons why I decided to create a utility as a Chrome extension.
We can copy content, turn it into markdown syntax, and paste it without switching tabs or using an extra monitor
We don’t have to leave tabs open for content that we haven’t read yet
I will only record a rough overview of this article. It will introduce the following sections. | https://medium.com/a-layman/create-the-chrome-extension-to-improve-productivity-for-bloggers-4dba5fadd516 | ['Sean Hs'] | 2020-12-06 02:21:40.201000+00:00 | ['JavaScript', 'Software Development', 'Google', 'Project Management', 'Chrome Extension'] |
VR Storytelling: A Guide for Marketers
By Heike Young
The internet is no longer flat. Virtual reality (VR) and augmented reality (AR) experiences have become incredibly engaging, entertaining, and educational for consumers. But how will VR/AR change the game for marketers — especially those who don’t yet see the effects in their company’s industry?
To talk us through the lay of the land, we talked to one of the prime experts in VR storytelling: Sarah Hill. Sarah is the CEO and chief storyteller at StoryUP.
In their own words, “StoryUP is a tribe of immersive journalists, developers, game designers, filmmakers, graphic artists and other digital creatives who use virtual reality storytelling to create a sense of empathy that affects change.”
Sarah has a background in journalism as a news anchor and is a twelve-time mid-America Emmy award winner. Her message is all about how marketers can use this new form of content to give people new experiences, create empathy, and inspire.
For our full conversation, download this week’s episode of the Marketing Cloudcast — the marketing podcast from Salesforce. If you’re not yet a subscriber, check out the Marketing Cloudcast on iTunes, Google Play Music, or Stitcher.
Take a listen here:
You should subscribe for the full episode, but here are a few takeaways from our conversation with Sarah Hill.
What is VR?
VR stands for virtual reality, experienced through 360° 3D video. It uses computer-generated environments that enable the viewer to observe everything around themselves, touch, and even pick up 3D objects inside a space.
Everything in the VR world can be viewed via headset, which can be as simple as the Google Cardboard device. “It’s just a cardboard box, costs about $5, and has some velcro on it. These experiences are just deployed on your smartphone,” Sarah explains.
From a marketing perspective, VR has proven to be a uniquely successful storytelling format, as consumers don’t mind spending minutes or hours immersing themselves in it. Sarah says that videos done in 360 also tend to get more views. “You don’t even need a cardboard device or headset. As we know from Youtube 360° and Facebook 360°, you can view those experiences in the browser.”
Think of VR as an alternative solution to a physical experience.
Sarah first became involved with VR after learning about a volunteer program called The Honor Flight Network, which provides physical flights for veterans to visit Washington D.C. to see their memorial in person. However, about 90% of World War II veterans aren’t physically able to take that journey.
StoryUP wanted to find a solution that would enable these veterans to experience their memorial through virtual reality, and the program they came up with is called honoreverywhere.com.
“Virtual reality allows veterans the ability to travel without leaving their hospital beds or their room, and a lot of the content out on our apps right now is for people looking for that escape,” she says.
So as you’re thinking about marketing-plus-VR experiences for your own company, think about the in-person experiences you want customers and prospects to have. Is it important for them to do that physically, or might VR be a less expensive (but still effective) option?
Make video more immersive with VR.
You probably already use some form of video in your marketing strategy. When comparing a 360° video with a fixed-frame video, StoryUP determined that “VR video is a stickier kind of video.” It had thousands more views, with higher rates of total length of video watched and more shares. And remember that these videos don’t have to be viewed through a VR headset. Sarah is confident that “even outside the headset, this is a better way to do video.”
If you’re developing a marketing strategy with a heavy VR component, Sarah recommends that you reach out to StoryUP or a similar company. But for those who are just curious or want to test the limits of what VR can do for your brand, she shares some really great, affordable equipment you can use in the full episode. Either way, Sarah points out, “It’s a great way for brands and marketers to stick their toe in the water of immersive content.”
VR is an empathy machine.
For marketers working with charities, VR is an excellent way to reach a specific group of people with your message. This type of media, when viewed in a headset, has been proven to light up empathy centers in the brain.
“VR has the ability to place your audience inside that story and to have a greater sense of empathy for what that charity or foundation is experiencing,” Sarah says. Plus, VR is actually increasing donations for charities and nonprofits who’ve tried it.
VR is about depth, not reaching the biggest audience.
Sarah explains that “VR isn’t about reach — it’s about depth.” If you are looking to reach millions of consumers with a quick tweet, VR might not be the right tool for you. But if you have a group of people that you would like to reach on a deeper level, VR may be the perfect tool.
Provide the ultimate personalized shopping experience through VR.
Many companies and industries are starting to integrate mixed reality in their marketing with interactive websites. This gives them the ability to personalize the whole shopping experience for the customer.
Some furniture companies even allow the customer the option to add a piece of furniture to their living room. “You can be in your living room and decide where you want that couch to go and you can superimpose that couch that you saw in a store to see if it will fit,” Sarah says.
“When people talk to you in VR, when people are looking at you in the camera, it’s really like they’re talking to you and it feels like you’re having this solitary experience with somebody on the other side of the screen. It does something to your brain to trick you into thinking it’s real,” Sarah says.
Tons of possibilities for marketers here. And that’s just scratching the surface of our conversation with Sarah Hill (@SarahMidMO). Get the complete scoop on the latest immersive content experiences you can create through virtual reality in this episode of the Marketing Cloudcast.
Join the thousands of smart marketers who already subscribe on iTunes, Google Play Music, and Stitcher.
New to podcast subscriptions in iTunes? Search for “Marketing Cloudcast” in the iTunes Store and hit Subscribe, as shown below.
Tweet @youngheike with marketing questions or topics you’d like to see covered next on the Marketing Cloudcast. | https://medium.com/marketing-cloudcast/vr-storytelling-a-guide-for-marketers-b04f26575a25 | [] | 2017-04-19 16:15:09.768000+00:00 | ['Marketing', 'Social Media Marketing'] |
The Compulogical Fallacy | In their classic work, The Philosophical Foundations of Neuroscience M.R. Bennet and P.M.S. Hacker (BH) gave the name mereological fallacy to the logical disorder at the heart of much neuroscientific thought at the time. Then, and sadly still to this day, neuroscientists commonly assigned various cognitive attributes to the brain that can only logically be attributed to a whole human being. Examples include things like having memories, desiring things, seeing, tasting, judging, evaluating, etc. Their intent was to show the logical contradictions that arise as a result. In my view they were quite successful in that endeavor.
Today other fields, such as technology and computer science, are falling into the same trap that befell and continues to befall neuroscientists. In an analogous fashion to the mereological fallacy, the computer sciences are assigning various cognitive attributes to computers that can only logically be assigned to human persons and some (non-human) animals. I have dubbed this the compulogical fallacy in honor of BH’s work. Table 1 shows a comparison of the two fallacies.
Table 1: Mereological fallacy vs. Compulogical fallacy
In essence the compulogical fallacy describes the logical contradictions that arise when we apply characteristics/behaviors/attributes/skills/abilities to machines and computers that can only rightly be applied to human beings and some (non-human) animals. The term machine learning is one of the most oft-cited (by me) examples of this fallacy. The two words (each by their very definitions), when combined in that order, result in a term that is a logical contradiction and the creation of something that is logically impossible: a learning machine. A machine cannot learn, for if it did, it would no longer be a machine. The same could be said for any computer (machine) and intelligence. A truly intelligent computer/machine, were it someday possible to create, or were it to be “born” or to “emerge”, would no longer be a computer/machine but something else entirely, something not human or machine.
No one approach to this problem works best, but there are at least three viable solutions. One could redefine the words in the terms, or one could argue that the act of creation of the term somehow changes the meanings of the words of which it is composed. A much easier solution would be to drop the use of the term machine learning and replace it with something that is actually descriptive and logically coherent. Any of these solutions could be acceptable, though the first two come with a host of problems. The first would be the most difficult, as each word’s meaning has been fixed in the English lexicon with its standard/accepted definition for over 100 years. The second has similar problems and arguably another, which is that word/term mutations of the sort described are rarely successful and typically fail to catch hold with the general public. The last would be the most appropriate and easiest, though it seems there is very little chance of it ever happening as the natural-law-offending term has been in use for so long now. Instead the proponents of machine learning have selected none of the above and continue to insist on using an absurd term (without any acknowledgement of its absurdity) to describe something they believe is a foundational field and critically important to many aspects of modern computing. | https://everydayjunglist.medium.com/the-compulogical-fallacy-d368546e65d8 | ['Daniel Demarco'] | 2018-03-20 02:21:19.230000+00:00 | ['Artificial Intelligence', 'Philosophy', 'Technology', 'Machine Learning'] |
A Look Into TikTok’s Origins, Controversy & Future | Within the past few years there hasn’t been any other app that has gone from being controversial, loved, to controversial again as much as TikTok has. In fact, it wasn’t always known as TikTok to begin with. Let’s look at how TikTok evolved from being a different app, becoming one of the most utilized apps, to grabbing the attention of the President & the world.
From Musical.ly to TikTok
TikTok wasn’t always known as it is today. Founded in 2014, it actually started as an app by the name of Musical.ly that allowed users to lip sync multiple different songs in 15-second to 1-minute intervals. Several speed options along with filters and effects could also be applied- which isn’t so different from TikTok today. While Musical.ly got nowhere near as popular as TikTok, the app managed to get around 90 million active users in June of 2016. A little over a year later a company known as ByteDance acquired the app for $1 Billion USD and ended up merging it with an app of their own (TikTok).
All users of Musical.ly were brought over into TikTok and from that point on Musical.ly was no more. A lot of the aspects and features of Musical.ly lived on, however, and were integrated into TikTok. These included being able to film a video alongside a different user’s video, filter effects, etc. While many old features remained, there were a bunch of new, contemporary additions that eventually led to its huge success as well. These included the “For You” page, which suggests content based on your activity within the app, and features such as video replies that allowed you to react to comments from one’s previous videos.
Exiled to Accepted
“TikTok went from having a community of dancing stars to memes, hilarious original content, and more which just kept expanding.”
While the hype around TikTok has gotten to the point it’s at today, it wasn’t actually always loved as much as you’d think. In the beginning TikTok was looked down upon by many and “cringey” as much as its predecessor Musical.ly was. It wasn’t until other content entered the app which expanded the user base in a way that allowed others to be part of a different sub-community. TikTok went from having a community of dancing stars to memes, hilarious original content, and more which just kept expanding. By utilizing hashtag features users were able to create pretty much anything you could think of. Those videos could then be searched by their respective hashtag, allowing users who wanted to watch certain content an easy way to find, follow, and keep coming back to it.
The app was gaining traction ever since acquired but it wasn’t until late 2019 where there was a huge uptick in active users. Today, TikTok stands at ~800 million monthly active users according to Oberlo. That’s a 788% increase from 2016 and quite an accomplishment considering it’s only been a few years.
Power of Social
As TikTok grew we also saw a rise of in- app celebrities such as Charli & Dixie D’Amelio, Addison Rae, Isabella Avila, Loren Grey and more. Mansions were even dedicated to certain groups of TikTokers where they could collectively create content such as the Hype House & Sway House. Multiple other users also began growing immense followings and ended up landing huge brand deals with companies. One in particular, Hollister Co., teamed up with the D’Amelio sisters to launch a back-to-school campaign. Their competitor, American Eagle, also took advantage of this and teamed up with Addison Rae to launch their AExME Back to School 2020 campaign.
Other than advertisers benefiting from TikTok’s large base — seemingly ordinary users essentially launched their own careers from within the app. TikTok changed the lives of many and it goes to show how powerful growing a following is and the role that social apps play. If users could create such a huge impact for themselves- how much of an impact could those behind these apps make? What’s the scale of those powers and should we be concerned of their own foreign governments intervening?
The Controversies
“I look at that app as so fundamentally parasitic, that it’s always listening, the fingerprinting technology they use is truly terrifying.”
Security & privacy threat concerns started in late 2019, back when the app was gaining a lot of traction. As more people were using the app, more security concerns grew with it. This led the US government to launch a national security review into ByteDance. The Committee on Foreign Investment in the United States (CFIUS), which reviews deals by foreign acquirers for possible security risks at a national level, started an investigation into the ByteDance x Musical.ly deal. What did they find? Unfortunately their reviews are confidential, and no final conclusions were disclosed to the public. However, we do know that ByteDance is a Chinese-founded company headquartered in Beijing and has been backed by multiple other Chinese investors. While that in itself isn't something that should be looked at as a red flag, what the app does that you don't notice should be.
Users began taking a closer look at the app a few months after the CFIUS conducted their review. In early April of this year a user on Reddit reverse-engineered the app and found multiple security threats, which included intrusive tracking. Reddit’s CEO and co-founder, Steve Huffman, even commented on the app’s dodgy work during a panel discussion. “I look at that app as so fundamentally parasitic, that it’s always listening, the fingerprinting technology they use is truly terrifying.”
All of this investigating and communist China conspiracy building eventually caught the attention of the President of the United States which has put us exactly where we are today. On August 6th, Trump signed an executive order which would effectively ban the app from the United States starting September 20th. The catch is if an American company acquires the app TikTok could be spared. Microsoft and reportedly Twitter are in talks with ByteDance to make a deal- Will it happen? We’ll find out soon enough.
Path Ahead
What the future of TikTok here in the United States will be remains a mystery, however there are several rivals that are here to stay — Byte, Triller, Clash Video and Instagrams new Reels feature are all reminiscent to the fundamentals of TikTok. It’s going to be interesting seeing how each of these evolve and compete for the success TikTok has seen. Are we going to see more surges of influencers? Will there be new ways for Brands to advertise on these platforms? It’s always exciting when competition forces innovation which in turn benefits the marketing and strategy efforts of brands.
TikTok is an app the world will always remember no matter if these are its final days in the US or not. It has created such an impact and tension that we haven’t seen with other apps that it truly begs for us to step back and take in account the power these social apps have. Should users and brands be more concerned? Only time will tell. | https://medium.com/swlh/a-look-into-tiktoks-origins-controversy-future-6cca94c1d2ff | ['Matt Houser'] | 2020-08-12 23:46:55.490000+00:00 | ['Bytedance', 'Social Media', 'Marketing', 'Influencer Marketing', 'Tiktok App'] |
50 Free Books on LGBTQ History and Politics | LGBTQ folks comprise a unique minority group. Unlike race or ethnicity, where one is born into a family that often teaches them their culture, native tongue, and history, LGBTQ people don’t usually have relatives to learn from. We have to search for this knowledge ourselves.
The internet and the subsequent information age made this process far easier — unfortunately, it also made it less common and less effective. With the influx of information, much of it becomes shallow. The nature of online articles dictates quick, easy digestion, thus they focus more on compactness than accuracy and correctness. A concerning number of LGBTQ youth, especially, seem under the impression that our history and politics began with the Stonewall Riots (which in itself is surrounded by myths — according to Marsha P. Johnson, Miss Major, and LGBTQ historian David Carter, Sylvia Rivera wasn’t actually at the riots, and Marsha didn’t get to the bar until well after they started) and ended with same-sex marriage.
Gaining knowledge about the world before we entered it is a crucial pleasure, whether it’s done via timelines and descriptions of events or first-person accounts from certain time periods. When we learn about those who came before us, we often learn about ourselves and can better envision our future. The following is a chronologically-organized list of literature concerning the legacy of LGBTQ communities, movements, politics, and identity — not just in the United States! — that you can download or read online free of charge.
(Keep in mind that due to the publication years, some information will be dated compared to our contemporary understandings of things, but still informative as a product of their time.)
Books
Honorable Mentions
While the list below doesn’t consist of books per se, its contents remain valuable resources for uncovering the past of LGBTQ communities: | https://medium.com/an-injustice/50-free-books-on-lgbtq-history-and-politics-10fb364382c0 | ['Kravitz M.'] | 2020-12-16 01:37:48.030000+00:00 | ['Kravitz M', 'Books', 'Queer', 'LGBTQ', 'History'] |
How to Conquer Cohort Analysis With a Powerful Clinical Research Tool
Why your doctor understands customer retention better than you do
In SaaS or consumer subscription settings, small changes in churn can radically impact revenue growth.
Product managers, growth hackers, marketers, data scientists, and investors all need to understand how business decisions impact user retention.
With so many recurring revenue businesses going public, Silicon Valley should get the picture by now.
Believe it or not, however, medical researchers measure customer retention better than you do.
What?
Sounds bold, but it’s not. Over decades, clinical researchers have refined precise and rigorous ways of measuring retention, except instead of customer retention, they measure patient survival.
The gravity of life and death means researchers take great care in measuring treatment efficacy.
To do this, clinical researchers use a statistical method called the Kaplan-Meier estimator. The formula elegantly solves a frequent issue that pops up in cohort retention analysis: making valid comparisons within and across groups of cohorts of different lifespans:
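In standard notation (reproduced here because the original formula image does not survive in this text), the estimator is

\hat{S}(t) = \prod_{i : t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)

where the t_i are the observed churn (death) times up to t, d_i is the number of customers who churn at t_i, and n_i is the number still at risk, i.e. retained and still under observation, just before t_i.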
Despite the fancy formula, survival analysis using Kaplan-Meier (KM) is actually quite simple and delivers much better results than other methods.
In this post I’ll explain these results, break down the KM estimator in simple terms, and convince you to use it for retention analysis.
The bottom-line: if you are an operator or investor who wants to properly measure customer cohort retention, Kaplan-Meier is the way to do it.
Two inevitabilities: Death and Churn
The core problem the KM estimator helps us deal with is missing data.
Cohort data is inherently flawed in that more recent cohorts have fewer data points to compare against older cohorts. For example, a five-month-old cohort can only be compared with the first five months of a ten-month-old cohort. The retention rates of a cohort of customers acquired seven months ago can only reasonably be compared to the first seven month retention of older cohorts.
Imagine you had the full retention history of the previous 12 monthly cohorts and you wanted to predict the 12-month retention curve of a newly acquired customer. It’s not at all obvious how to do this.
To understand this better, let’s visualize a simpler example with only five cohorts:
You might first try to calculate average retention across cohorts. This is problematic for two reasons:
The simple average will not be representative if our cohorts differ in size
For any given month we can only average over cohorts that have been alive at least that long, so we effectively average over fewer and fewer cohorts over time
We can see the second issue below. With both the simple and weighted average, we get strange results when performance oscillates across cohorts:
Assuming we don’t re-add returning users who previously churned into their original cohort, retention cannot possibly tick up after declining; it’s a one-way street. This is an artifact of our flawed method, as 5-month retention cannot exceed 4-month retention by definition.
A third, related problem arises when comparing groups of cohorts to other groups, for example, comparing 2016’s group of monthly cohorts to 2017’s. As we’ve just shown, using averages to estimate retention curves for each group doesn’t work, which means we also cannot compare one group to another.
Questions? Ask your doctor
Believe it or not, clinical researchers deal with this same issue all the time.
Customer cohorts are analogous to groups of patients starting treatment at different times. Here the “treatment” is the time of customer acquisition and “death” is simply churn.
Or, imagine if the “2016 cohorts” and “2017 cohorts”, rather than being year-grouped cohorts, were groups receiving different treatments in a clinical trial. We want to quantify differences in patient survival rates (customer retention) between the two groups.
Pharmaceutical companies and other research outfits regularly contend with this. Patients start treatment at different times. Patients drop out of studies, by dying, but also by moving locations or deciding to stop taking the medication.
This creates a host of missing data issues at the beginning, middle, and end of any patient’s clinical test record, complicating analysis of effectiveness and safety.
To solve this problem, in 1958, a mathematician, Edward Kaplan, and statistician, Paul Meier, jointly created the Kaplan-Meier estimator. Also called the product-limit estimator, the method effectively deals with the missing data issue, providing a more precise estimate of the probability of survival up to any point.
The core idea behind Kaplan-Meier:
The estimated probability of surviving up to any point is the cumulative probability of surviving each preceding time interval, calculated as the product of the preceding survival probabilities
That strange formula above is simply multiplying a bunch of probabilities against one another to find the cumulative probability of survival at a certain point.
Where do these probabilities come from? Directly from the data.
KM says our best estimate of the probability of survival from one month to the next is exactly the weighted average retention rate for that month in our dataset (also called the maximum likelihood estimator in statistics parlance). So if in a group of cohorts we have 1000 customers from month one, of which 600 survive until month two, our best guess of the “true” probability of survival from month 1 to 2 is 60%.
We do the same for the next month. Divide the number of customers that survived through month 3 by the number of customers who survived through month 2 to get the estimated probability of survival from month 2 to 3. If we don’t have month 3 data for a cohort because it’s only two months old, we exclude those customers from our calculations for month 3 survival.
Repeat for as many cohorts / months as you have, excluding in each calculation any cohorts missing data for the current period. Then, to calculate the probability of survival through any given month, multiply the individual monthly (conditional) probabilities up through that month.
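To make the procedure concrete, here is a minimal sketch of the calculation in TypeScript. It is my own illustration rather than code from the original post, and the data shape and numbers are hypothetical: each cohort is simply an array of monthly headcounts, starting with the month-0 count.

// Each cohort is an array of headcounts: [month0, month1, month2, ...]
type Cohort = number[];

// Kaplan-Meier estimate of cumulative retention for months 1..maxMonth
const kaplanMeier = (cohorts: Cohort[]): number[] => {
  const maxMonth = Math.max(...cohorts.map((c) => c.length - 1));
  const curve: number[] = [];
  let cumulative = 1;

  for (let month = 1; month <= maxMonth; month++) {
    // Only cohorts old enough to have data for this month are included
    const observable = cohorts.filter((c) => c.length > month);
    const atRisk = observable.reduce((sum, c) => sum + c[month - 1], 0);
    const survived = observable.reduce((sum, c) => sum + c[month], 0);

    // Conditional probability of surviving this month, given survival so far
    const monthlySurvival = atRisk > 0 ? survived / atRisk : 0;

    // The cumulative probability is the product of the monthly probabilities
    cumulative *= monthlySurvival;
    curve.push(cumulative);
  }
  return curve;
};

// Hypothetical example: three cohorts of different ages
const cohorts: Cohort[] = [
  [1000, 600, 420, 330], // oldest cohort, three months of retention data
  [800, 520, 390],       // two months of data
  [1200, 840],           // youngest cohort, one month of data
];

console.log(kaplanMeier(cohorts)); // roughly [0.653, 0.472, 0.371]: cumulative retention through months 1, 2, 3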
Though a morbid thought, measuring patient survival is functionally equivalent to measuring customer retention, so we can easily transfer KM to customer cohort analysis!
Putting Kaplan-Meier to the test
Let’s make this clearer by applying the Kaplan-Meier estimator to our previous example.
The probability of surviving month 1 is 69% (total customers alive in month 1 divided by total in month 0). The probability of surviving month 2, given a customer survived month 1, is 72% (total customers alive in month 2 divided by total in month 1, excluding the last cohort which is missing month 2 data). So the cumulative probability of surviving at least two months is 69% x 72% = 50%. Rinse, wash, and repeat for each subsequent month.
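As a rough sketch of this procedure, here is a small JavaScript function (the helper name and the cohort numbers are illustrative, not the article’s actual dataset):

// Each cohort is an array of monthly headcounts, starting at month 0.
// Younger cohorts simply have shorter arrays (their later months are unknown).
function kaplanMeierRetention(cohorts) {
  const maxMonths = Math.max(...cohorts.map(c => c.length));
  const curve = [1]; // month 0 retention is 100% by definition
  let cumulative = 1;

  for (let m = 1; m < maxMonths; m++) {
    let atRisk = 0;
    let survived = 0;
    for (const cohort of cohorts) {
      // Only cohorts with data for both month m-1 and month m are included
      if (cohort.length > m) {
        atRisk += cohort[m - 1];
        survived += cohort[m];
      }
    }
    cumulative *= survived / atRisk; // conditional survival for month m
    curve.push(cumulative);
  }
  return curve;
}

// Illustrative cohorts (headcounts by month):
const cohorts = [
  [1000, 700, 520, 400],
  [1200, 820, 600],
  [1500, 1050],
];
console.log(kaplanMeierRetention(cohorts)); // cumulative retention curve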
Side-by-side comparison reveals the superiority of KM:
What’s great about KM is it leverages all the data we have, even the younger cohorts for whom we have fewer observations. For example, while the average of all the available cohorts at month 3 only uses the data for cohorts 1–3, due to its cumulative nature, the KM estimator effectively incorporates the improved early retention of the newer cohorts. This yields a 3-month retention estimate of 38%, which is higher than any of the cohorts we can actually measure at month 3.
This is exactly what we want: cohorts 4 and 5 are both larger and better-retaining than cohorts 1–3. Hence, the 3-month retention rate for a random customer picked from these cohorts will likely exceed the historical average, since that customer will most likely belong to cohort 4 or 5.
Using all the data is also nice because it makes our estimates of the tail probabilities much more precise than if we could only rely on the data of customers who we retained that long.
Kaplan-Meier curves also fix the wonky behavior in the right tail of the retention curve by respecting a fundamental law of probability: a cumulative probability can only decline as you multiply in more terms, since each conditional survival probability is at most 1.
Recommended by 95% of doctors
This analysis could easily be extended. Let’s go back to the 2016 vs. 2017 example: we could run the Kaplan-Meier calculation on each respective group of cohorts and then compare the resulting survival curves, highlighting differences in expected retention between the two groups.
While I won’t cover it here, you can also calculate p-values, confidence intervals, and statistical significance tests for Kaplan-Meier curves. This lets you make rigorous statements like “the improvement of cohort retention in 2018 relative to 2017 was statistically significant (at the 5% level)”. Cool stuff.
Kaplan-Meier is a powerful tool for anyone who spends time analyzing customer cohort data. KM has been battle-tested in rigorous clinical trials; if anything, it’s surprising it hasn’t caught on more among technology operators and investors.
If you’re a product manager, growth hacker, marketer, data scientist, investor, or anyone else who understands the deep importance of customer retention analysis, the Kaplan-Meier estimator should be a valuable weapon in your analytics arsenal. | https://towardsdatascience.com/how-to-conquer-cohort-analysis-674a2dea3472 | ['Nnamdi Iregbulem'] | 2019-06-04 13:58:40.515000+00:00 | ['Technology', 'Data Science', 'Venture Capital', 'Startup', 'SaaS'] |
Coronavirus Is The Smokescreen For Trump’s Agenda | Coronavirus Is The Smokescreen For Trump’s Agenda
The President’s contradictory policies and statements suggest that, for Trump, this is all about winning the election.
The President is scared, no doubt about it.
Not scared in the way other world leaders, Boris Johnson in particular, are scared of the threat the coronavirus poses to their people. Rather, Trump is frightened by the threat of consignment to the list of one-term Presidents, without even a primary challenge to blame for it.
After all, the polls have Biden 4 points, 6 points, 8 points, even 11 points above Trump, a margin which would be hard to overcome even with the distorting effect of an electoral college that is unfit for purpose. What’s more, the former vice-President is on the rise in the polls, and leads the President in many battleground and red states, like Florida and Arizona.
All that, with Biden still $187 million behind Trump in donations.
But even in this scenario, you might expect the President of the United States to be focused on dealing with the crisis that he says makes him a “war President”, rather than on his re-election campaign.
No such luck.
The President, in the hopes of re-building the economy quickly and entering the final months of the campaign on the bounce, is doing everything he can to ensure that states re-open and get America going again.
His action is mostly, as described by Gov. Jay Inslee of Washington, “background noise” compared to the actual work going on to establish the scientific realities and how and when states should be reopened.
This background noise nevertheless cuts through the real work going on and makes it into the news cycle, and thus into the social media feeds and television screens of millions of Americans. Trump knows that he can look like he’s doing something without actually doing anything.
The biggest attempt his administration has made to really put meat on the bones of this policy was laughable. The Opening Up America Again document, which deploys the fourth word of Trump’s favourite slogan to appeal to the nostalgic tendencies of his 17th-century-orientated base, sets out, very briefly, three phases of reopening.
It gives no detail around these three phases, and I think most members of the public could have come up with something comparable to it. “Re-open a bit, then a bit more, then fully” is not really a complex three-stage plan.
However, while the President is seemingly keen to get the economy going again and reopen the country — even if he doesn’t have “total authority” as claimed repeatedly at one of his curséd press briefings — Trump has also issued an executive order halting most immigration, in a bid to further mitigate the virus.
These two ideas are clearly contradictory.
How can it be that we need to simultaneously re-open and double down?
Indeed, we do not need to take this course of action. These two actions are simply different parts of Trump’s agenda: the first a vote-winner (by re-building the economy), the second a base-pleaser which is sure to rally his 40%.
Coronavirus, for 45, has always been an immigration issue, an “America First” issue, not a scientific and natural issue.
This has never been about following expert advice (such as his public dismissal of the recommendation to wear masks when outside, again at one of the damned briefings). It has always been about ‘solving’ the ‘virus of immigration’.
Trump’s cited reason for attacking immigration is protecting American workers both from the virus and from replacement by cheap immigrant labour. Clearly, he doesn’t understand that any issues in this regard might be resolved more easily and humanely by raising the minimum wage.
This sort of policy mess is exactly why I would vote for Biden if I were living in America. | https://medium.com/the-national-discussion/coronavirus-is-the-smokescreen-for-trumps-agenda-335b31dc5c4c | ['Dave Olsen'] | 2020-04-24 17:31:42.303000+00:00 | ['Trump', 'Immigration', 'Covid 19', 'Politics', 'Coronavirus'] |
What is Your Worry Telling You? | Why We Worry
Worrying is just like fear, curiosity, and our ability to form judgments. It’s hard-wired in us. Worrying makes us human. It played its role in helping mankind survive by contributing to our decision-making process.
Planning ahead allows us to recognize danger headed our way. Worrying then spurs us into action, either to prevent or soften the blow if/when the threat becomes imminent.
Let’s say, for example, you’re worried about losing your job. You’ll probably step up your game — or at the very least, update your resume and check out who’s hiring. Worrying in this regard is totally natural. The problem arises if you let your worries linger without taking any action.
Worrying isn’t Bad, We Make it Bad
Worry itself is a messenger sent from your body and mind to warn you of potential threats. That’s it. It’s not psychic. It doesn’t know what’s actually going to happen, it’s showing you what could happen if things continue in the same direction.
Your job is to consider what Worry is telling you. Is the threat viable? Is it a threat or insecurity? If it is a threat, can you change the outcome?
When you don’t do your job, Anxiety shows up. Anxiety is a whole other messenger, with a whole different attitude. But this article is about Worry. Anxiety deserves an article of its own.
Worries are Thoughts
The most important aspect is noticing the difference between thoughts and facts. Our minds are compelling, and sometimes it can be tricky to differentiate between the two.
Dr. Marina Harris came up with a brilliant analogy: think of your worried and anxious thoughts like internet pop-up ads in your brain. Just as a pop-up interrupts our scrolling, worry and anxiety interrupt our days. The best thing you can do is acknowledge what it is: a thought. Worries aren’t facts, but that doesn’t mean you should ignore them or shove them aside.
The whole point is to determine whether something is actually worth worrying about.
When Worrying Becomes a Problem
On the one hand, allowing Worry to run your life doesn’t lead to a happy ending. Too much of anything is a bad thing, even happiness. On the other hand, you can’t bury it either.
Ignoring any of your emotions is not a solution — it’s postponement. Burying your worry doesn’t help anyone, it only increases its severity when it resurfaces later.
Above I said Worry is a messenger sent by your subconscious to convey a message. Well, shutting the door in its face isn’t going to make it go away. It’ll just start yelling louder and bang on the door until you finally let it in. By that point, the entire situation has escalated.
So What Can You Do?
Your job — ask questions.
Is the threat viable?
The act of worrying creates a physical response to prepare you against an oncoming threat. Technically it can happen for anything from what you’ll have for dinner to potential armageddon. How imminent is your worry?
Is it a threat or an Insecurity?
A threat is when something has the potential to negatively impact your survival or well being. For example, being in a toxic relationship is a threat to your mental, and possibly physical, health.
Insecurity revolves more around your social status and internal beliefs. For example, if you think you’re being funny and someone says you’re not, you may feel like your sense of self is being threatened. These are great moments for self-reflection.
Can you change the outcome?
My dad used to always say, worrying is like a rocking chair — it gives you something to do but you aren’t going anywhere. When there’s nothing you can do to help or prevent your worry then what do you have control over? Find a different outlet for your energy, or make peace with the fact there’s nothing you can do.
Final Thoughts
Question yourself constantly. Is there anything you can do to prevent your worries from becoming reality? If so, go for it. But if not, then you need to trust yourself that you can overcome whatever is thrown your way.
Worrying about everything all the time doesn’t actually fix anything. You’re training your mind to focus only on the negatives and by doing so, you’re rewiring your brain to live in fear. Take back your control. You deserve to breathe again. | https://katrinapaulson.medium.com/what-is-your-worry-telling-you-287d45ebc2da | ['Katrina Paulson'] | 2020-11-22 17:44:14.768000+00:00 | ['Mental Health', 'Self', 'Self Aware', 'Personal Growth', 'Emotional Intelligence'] |
What I Learned from NaNoWriMo | I learned a few things from participating in the National Novel Writing Month, a challenge to write 50,000 words in November.
I first learned that the trick to writing 50,000 words in a month is to write your thoughts as they form without critiquing them. You do have to find time to do this, and you have to be willing to use this time to spew crap. You have to learn this quickly, or you’ll fall too far behind.
You’re going to spew a lot of crap. Buckets of crap. I wrote tons of sentences that were basically, “She did [this thing], then she did [that thing], then she did [that first thing] some more, then she said ‘Wow, look at this great [thing] I just did!’”
I like using the word “suddenly” quite a bit. This is one of the uglier truths about myself that I discovered. Adverbs are like a drug.
You have to make time to write. I fell into the habit of taking chunks of my day and scribbling hundreds of words at a time. I woke up half an hour early and wrote while drinking coffee. I wrote during my lunch break. If I didn’t meet my word count by the end of the day, I stayed up until I did.
You have to give stuff up to find the time. Before I started, I’d guess I was skimming 300 articles a day from my RSS feed. I solved that by deleting Feedly from my phone. Also watched less TV and YouTube.
You have to have the support of your loved ones. You’re going to be seeing less of them. You need to be sure they’ll still be there come December. I completed NaNoWriMo in 2015 on my first try. When I told my wife I wanted to do it again in 2016, she grumbled something about being single again.
It helps if you give yourself occasional rewards. Once, I was falling too far behind, so I set a goal of three days of 3,000 words per day. As a reward, I bought myself a pen.
Writing is easier than I thought. Editing is really hard. At the end of one month, you have a 50,000-word bucket of crap. It takes months to make that into a bucket of not-crap. I ended up failing to do that second part. It’s like climbing a flight of stairs, only to find that you have ten more flights to go.
Writing by hand was a stupid idea. I did it this way because fountain pens are fun and typing is not. Then I was stuck with the task of typing in 50,000 illegible words. Neal Stephenson writes his books by hand. He probably has beautiful handwriting.
Please don’t try to read it.
I wanted to get a discounted copy of Scrivener. That’s why I took the challenge. I completely surprised myself by winning in 2015. I wanted to do it again in 2016, but this time I wanted to get something amazing out of the deal.
I planned and plotted and outlined. In October, I did NaNoPrepMo. And I started on November 1. By November 10 I’d only typed 4,670 words, and I quit. My heart wasn’t in it. I was too distracted.
2016 was a horrible year, and things don’t seem to be getting much better. I’m not too interested now in writing some “fantasy novel in a world with different kinds of magic and a YA female protagonist”.
Instead I want to run around screaming, flailing my arms about. I also want to hide under the covers.
Which brings me to the last thing I learned:
I started writing a book about an old, weary knight, setting out on a last quest before retiring. 10,000 words in, I found a character travelling with him who I liked better. I turned her into an adult, gave her a conflict and a goal and a love interest, then I wrote 40,000 more words about her. She became the protagonist. Sometimes, you end up in a very different place from where you expected to be.
And suddenly, this article comes to a totally separate ending than I’d planned. | https://medium.com/nanowrimo/what-i-learned-from-nanowrimo-ee7495de3ac1 | ['Russell Jelks'] | 2017-03-26 06:46:12.093000+00:00 | ['52 Week Writing Challenge', 'NaNoWriMo', 'Writing'] |
Solving for Publishing Dilemmas | Having my cake while feeding all of you
Solving for Publishing Dilemmas
Short-form Solution
Photo by Igor Rodrigues on Unsplash
I love KTHT; KTHT is my tabernacle and its denizens are my family. Yet our numbers are just enough for two companies, and some of my battles require an army. Communication SNAFUs abound and I fear my family gets lost to me. I think messages like this, broadcast within KTHT, can bridge the gap, and I hope my Muse, our fearless 𝘋𝘪𝘢𝘯𝘢 𝘊., agrees.
Yesterday I was accepted as a writer for ILLUMINATION-Curated and two of my pieces were published there.
This story completes the tetralogy that is the dMan (father) saga and ties it together with the story of my Mark of Cain and the very happy ending that such foretells.
This 5-minute read is a profound story of the meaning of past lives and soul healing. Please read and let me know whether you believe it’s a parable with a message that I have standing to deliver, a parable that I have no standing to write, or a true story, in which case my standing is unquestionable regardless of how one feels about the message.
Please and thank you.
YG
N.B.: The term SNAFU originated in WWII among the troops on the ground: “Situation Normal; All Fucked Up.” | https://medium.com/know-thyself-heal-thyself/solving-for-publishing-dilemmas-53c9613fd1c1 | ['Yohanan Gregorius'] | 2020-12-27 16:33:20.613000+00:00 | ['Life Lessons', 'Storytelling', 'Spirituality', 'Healing', 'Racism'] |
Running Hot & Cold | Written by
Daily comic by Lisa Burdige and John Hazard about balancing life, love and kids in the gig economy. | https://backgroundnoisecomic.medium.com/running-hot-cold-2d2464b950bb | ['Background Noise Comics'] | 2020-01-14 01:20:18.195000+00:00 | ['Humor', 'Global Warming', 'Comics', 'Climate Change', 'Weather'] |
Hyperreal materials in packaging design | Hyperreal materials in packaging design
The fast evolution from local agriculture to an industrialised food system created a mismatch between the production behind most supermarket products and the food narratives embedded in Western cultures. Access to remote or non-seasonal products lengthened the supply chain to the point where production and consumption are almost disconnected. This gap created an opportunity for companies to build narratives around their products, reinforcing branding as a competitive tool, while consumer demand for authenticity urged design to counterbalance the void with the hyperreal material narratives that today’s advanced printing technologies make possible. This article identifies four categories of material simulation and examines how they are used as a communication tool. The relationship between brand narratives and product attributes is analysed using Jean Baudrillard’s four stages of simulacrum, and lastly, the relationship between actual and simulated materials is explored using Boris Groys’s concept of media sincerity.
Shopping at the supermarket I am struck by a soup broth. The packaging features the iconic tablecloth pattern as a background covering every side of the Tetra Brik. Inevitably, the pattern reminds me of my grandparents and all the times I had lunch at their home as a student. The product does not have a creative name, just a plain description including a vernacular Spanish word for home soup (“puchero”) and “natural”, the only word that appears twice on the front. The logotype looks clunky, the illustration looks like a cookbook from the 70s and the only font used is the popular and controversial Comic Sans. The layout is messy and fails to follow any basic criteria of what good design should be.
Material simulation can be found across many product categories, from furniture to user interface design, where it has been widely discussed in recent years. Why is it so common to find simulated materials in the supermarket? Wood and linen, silver and gold, rugged cardboard and old engravings are used extensively. What kind of messages are they conveying and what is their connection with the products they are attached to? This article focuses on food packaging design and aims to clarify the communication role of these simulations, looking at the social context where they flourish and using the concepts of hyperreality and media sincerity from the philosophers Jean Baudrillard and Boris Groys respectively.
1. Materials and brand narratives
During the 20th century there was a radical evolution in the way we produce, buy, cook, and eat food. From local agriculture to large multinational companies, from rooted tradition to flashy culinary innovation, from homemade to home delivery, from a duty to a hobby, and from slow oven cooking to a quick microwave ping and to ready-to-eat products.
These changes occurred in such a short period of time that our habits and our notions about food no longer quite match. Alongside these social changes, food production turned into a regular industry: global, complex and competitive. Brands must therefore meet demanding requirements about price, food conservation, transportation and legal obligations. The same requirements apply both to products and to packaging. This scenario leaves a narrow space for packaging designers, who also need to follow marketing briefs with specific brand values and product descriptions. Every new product launch is an investment, and those that don’t achieve success within a few weeks quickly disappear.
This pragmatic and economic vision clashes completely with the traditional notion of food we inherited as part of our culture. The supermarket is filled with references to tradition, nature and history. As a society we got used to the comfort and convenience of industrial food products, but we are reluctant to accept explicitly the system that makes this happen. We want products that last but don’t like preservatives; we like fruit from the other side of the globe but don’t like to see the food processing needed for such a long trip. We want the lowest prices but are not willing to accept the poor labour conditions they imply. The more steel tanks in the factory, the more rustic wood on the pack. Instead of low-wage workers we like to see a charming, smiley grandfather.
The complexities of food industry logistics created a deep divide between producing and consuming. Products now pass through a complex chain of suppliers, which makes it very difficult for the consumer to remain connected with the origin of the food, something that happened spontaneously in rural life. Living in the city makes this task even more difficult. This divide between producer and consumer creates a huge space for brands to make beautiful promises about nature and quality. And it is at this point that material simulations become handy: they are used as a tool to cover the industry with fairytales. Stories we are told in advertising, stories we like and got used to. They fill our pantry. Maybe neither the brands nor the citizens were very interested in unveiling the fairytales wrapping industrial products. What kind of narrative is told, and through which materials?
To analyse the myriad simulations we can find in the supermarket, we identified four common material simulations. Each of them helps us to illustrate one communication concept.
1.1 Wood and cardboard
The origins of wood and cardboard are not the same, since wood is a material present in nature and cardboard is manufactured. But their connotations and the way they are used are similar. Both materials evoke a rustic atmosphere and emphasise a natural, raw feeling, and they are treated not as an object but as a background texture, usually covering large parts of the pack. Because of its fine grain and light colour, the cardboard texture allows good legibility of text with no need to modify its original colour. Considering that some packs are actually made of this material, simulated cardboard printed over real cardboard can be extremely realistic, because the appearance and the flexible touch match. This texture is common in eco and healthy products, where the concepts of natural and raw are more relevant, and in fruits and vegetables in general. The narrative is based on purity, the lack of human intervention and ultimately the lack of commercial effort. If glossy paper is the material of brochures and commerce, cardboard becomes, by opposition, the icon of authenticity. Looking at the packs that illustrate this group, the idea of a non-designed pack may seem unconvincing; the packaging is obviously carefully designed, but the story makes its way into the supermarket, at least until it becomes a standard.
1.2 Tablecloth
As with wood, the tablecloth is used as a texture. This motif relates to powerful concepts like home, tradition or family, and their derivatives like picnics, homemade food or preserves. The four examples belong to four different products: bread, confiture, broth and yogurts. The yogurt is a peculiar case: the company La Fageda was born to employ people with a mental condition, is small and is clearly local (the name comes from a region of Catalonia). While the story behind large-scale factories is not particularly appealing, in this case the brand does have a nice story; even so, transparency seems not to be an option. Since all these products are human made, the narrative here cannot be the purity of the wood but the care with which they were produced. If the factory is not the icon of care, then the grandma-meal tablecloth and the homey kitchen are seen as the refuge of caring production, as the ancient artisan workshop of food.
1.3 Metals
Metals are another simulated material used to transfer their aesthetic values to the product. The shine inherent to metallic packaging materials (like aluminium cans or metal-coated paper) makes it easier to simulate gold. Carton packs need to use printed glare effects that lack the metallic touch. The most realistic technology on carton is gold stamping, which actually contains metallic particles (usually bronze, aluminium, copper and zinc), but that implies increasing the cost and the environmental impact.
Gold is, above all the other materials mentioned, the most symbolic. We can appreciate that the name of the material becomes part of the product name, something difficult to imagine with any other material. Significantly, gold is not present in any basic nutritional food (not even the top-quality bread would use gold) but in expensive or pleasure products, food that needs to persuade with reasons other than health or energy.
The values carried by the material refer to quality (as a certification) and to social prestige and status (referring to hierarchy and marking the product as the top choice). Silver is sometimes used as a sign of a high-tech product, but gold sits at the top not because of newness but because of unquestionable, classic quality: a safe and reaffirming choice.
1.4 Ink and engravings
As the last group we include an element that is both a material and a typographic style: all the forms of writing and typesetting that use a historic tool or material in a very expressive manner. Here the material sits just underneath the word, but, as the examples prove, the visual representation of the material is celebrated as much as the meaning of the text, if not more. Which visual element is more efficient at bringing history to the viewer, the text or the font-material? Its historic connotations make it appropriate for products with ancient recipes, where the older the food the better. That is the case for wines and liquors, but also cheese or pizza. If with gold quality was a question of price and status, the narrative here is that the product is good because it has remained intact; age acts as an argument for quality. It is the narrative of essence: the pizza is good because it is the original pizza, cooked in the old way, the way that identifies us. It is really about the notion of a pure identity.
2. Relationships between material narratives and product attributes
Underlying the values mentioned (tradition, nature, prestige, history, etc.) we can identify the search for authenticity. The abuse of this concept is a tacit confession that, after so many simulations in our lives, we are no longer sure we can speak of reality with a capital R. The messages delivered by the packaging examples analysed can be interpreted as a lack of collective self-esteem, or at least a certain discomfort with the age we are living in.
The sense that, when it comes to food, old times used to be better has pushed packaging design to project our ideals onto the past and recreate a mythical tradition that may never have actually existed. This idea of false recreation was formulated by the French philosopher Jean Baudrillard, who coined the term hyperreality, defined as the simulation of something that never existed. In his book Simulacra and Simulation (1981) Baudrillard establishes four stages of simulacrum:
1. The simulacrum is a “reflection of a profound reality”
2. The simulacrum “masks and denatures” an obscure reality
3. The simulacrum masks the lack of a deeper reality
4. The simulacrum has no relationship to any reality; it becomes its own simulation
If we apply these four stages to the simulacrum materials and the narratives they enact, we can rewrite the stages to evaluate the relation of the material to the product, regarding not only the food it contains but also aspects like geographical origin, production process or nutritional benefits. Stage one would be packs where the material narrative is a faithful presentation of the product, where the material openly reveals the production process of the food and the production materials used in the packaging.
Hyperreality does not rely only on which design elements are used (e.g., materials, images or fonts) but also on the way they are used and the role they play in the narrative. Overacting, for example, is a very common sign of hyperreality in design. Pushing the visual signs too far reveals the effort put into conveying certain aesthetic values and the communication strategy behind them. The hyperreality may be effective and appreciated by some easy-going consumers, but it may also be a flaw for a more visually literate consumer, who might then see the backstage of the packaging.
To illustrate each stage we use a product as an example, but the analysis model could be applied in multiple ways; a single brand can be at different stages depending on the product analysed. The lines between stages can be fuzzy and could be seen as degrees of fidelity between the material narrative and the product: the higher the stage, the more the narrative departs from the product attributes, and therefore the more autonomous it becomes.
Fig 2.1. Examples of simulacrum in packaging design
Stage 2. The material narrative enhances some qualities of the product and hides some of its flaws.
An example could be the Don Simon wine Tetra Brik printed with wood (image A). Wine really does use natural ingredients (even a low-range product like this one), so the natural origin is not masking the product. But using a plastic container in the wine sector sets the product far from wine culture, and the brand therefore needs to actively repair the mismatch between the plastic and the wine narrative, even at the price of appearing ostensibly fake. Regarding the illustration, the naive village featured would be more suitable to the third stage; the narrative moves one step further away from the production of the product.
Stage 3. The material narrative masks fundamental aspects of the product; the communication values compete side by side with the product values.
The cardboard packaging used in many frozen pizzas (image B) does not carry any printed simulacrum material; the actual material is available to the touch and can be seen unprinted on the interior while interacting with the packaging. Nevertheless, because the exterior is covered with a printed photograph, the pack does not reveal the material upfront; it is not enhanced and definitely not part of the narrative, which is led by the ink effect on the product name.
Dr. Oetker is a well-known brand that started selling pizzas in the 1970s. The product is competitive, but it has a major flaw: it is a German product with a very non-Italian brand name. Instead of proudly adapting the pizza to German culture, as Pizza Hut did in the USA, the brand uses the original Italian narrative and imagery. The chalk material goes hand in hand with a handwriting font that is not actually handwritten but digital. Significantly, all the text on the front is in Italian, while Buitoni also uses local languages. By pursuing the Italian essence, Dr. Oetker’s communication became even more Italian than the original, which is exactly what hyperreality is about.
Stage 4. The material narrative is not based on any product attribute; the communication is emancipated from the product. The brand narrative is the goal and uses the product as a medium.
A powerful and popular example of this stage is the Spanish beer brand Estrella Damm and its big annual campaign promoting a so-called Mediterranean lifestyle (“Mediterràniament”) of unforgettable summer party nights. The spot (images C) pretends to be the trailer of a movie, with celebrity actors, prestigious directors and title sequences, but there is no actual movie. Again, the trailer is a great packaging for a non-existing movie. Significantly, many media outlets announce the campaign launch, not as advertising but in the news section, as the beginning of the summer. Advertising dressed as popular culture. After the launch the brand sponsors numerous concerts (similar to those featured in the ad), serves the product and invites people to publish summer images on social media with the campaign hashtag (#mediterraniament). That is, consumers include themselves in the narrative and act as ad media for their followers. According to some critics, consuming the product surrounded by all these messages and sponsored events is the consumerist equivalent of eating the Eucharistic wafer: a meaningful act, one of identity and belonging.
Fig 2.2 Pasta and bread comparison from bio and regular brands
* * *
Beneath all these layers of meaning remains a product that could easily be confused with other brands in a blind test. But bars don’t serve blind beers. As product differentiation has become a harsh battleground for brands, the focus has shifted to communication and brand narratives. Marketing budgets are bigger than product innovation budgets (10 times bigger according to Gartner 2014), and the product never competes alone on the shelf; big brands have developed an army of communication tools to wrap the food with narratives that come to be inextricable from it. Or as Baudrillard might suggest, they become the product.
According to the philosopher, brands need to create narratives and a sense of belonging in order to survive:
Basically, what goes for commodities also goes for meaning. For a long time capital only had to produce goods; consumption ran by itself. Today it is necessary to produce consumers, to produce demand, and this production is infinitely more costly than that of goods […]. For a long time it was enough for power to produce meaning (political, ideological, cultural, sexual), and the demand followed; it absorbed supply and still surpassed it. Meaning was in short supply, and all the revolutionaries offered themselves to produce still more. Today, everything has changed: no longer is meaning in short supply, it is produced everywhere, in ever increasing quantities — it is demand which is weakening. And it is the production of this demand for meaning which has become crucial for the system. Without this demand for, without this susceptibility to, without this minimal par- ticipation in meaning, power is nothing but an empty simulacrum and an isolated effect of perspective. (Baudrillard, 1983:27)
Packaging prioritises messages over product, and thus printed media over transparent packaging that yields to the food. To describe this interface between the product and the consumer, the design scholar Guy Julier (2014) uses the term mediation, a powerful tool to control the view of the consumer.
Another reason to cover the food is that some industrial products might not have a pleasant appearance by themselves, especially compared to fresh or less processed food. Moving packaged food to a transparent container leaves the food alone, disables the mediation and causes important changes in its perception. In contrast to supermarkets, organic food stores provide more examples of minimal transparent packaging (image 6), along with more detailed product information and fewer brand narratives attached to the product. The different packaging approach could be related to the product: when the origin and production are good news, being transparent is a competitive advantage. And vice versa: when the story behind the product is bad or boring news (complexity and cold efficiency are not charming), the communication strategy moves the consumer’s attention to a brand-new narrative, created not by the product but over the product; narratives that are not organic extensions of the product but overlaid on it. The lack of a factual link between the product attributes and the brand narrative increases the demand for authenticity and urges design to counterbalance that void with hyperreal material narratives.
Why do these narratives survive rational thinking? Returning to the tablecloth texture mentioned earlier, it is clear that the packaging uses the material simulacrum only as a communication sign, not to cheat anyone. The consumer can notice the actual material: no matter how realistically the tablecloth is printed, the plastic touch of the Tetra Brik or the shining metal cap leaves no room for confusion. Nevertheless the tablecloth effectively communicates the values and brand narratives, because the simulacra found in packaging do not aim to convince rationally but to work at a subconscious level. The same applies to all the examples shown: the consumer knows that the supermarket confiture is not homemade, but senses the homemade aura anyway.
This normalisation of the absurd is also present in Baudrillard (1983:10), where the author argues that the masses are reluctant to engage in rational thinking; they prefer the meaningless play of signs to meaning and sense:
[…] the masses scandalously resist this imperative of rational communication. They are given meaning: they want spectacle. No effort has been able to convert them to the seriousness of the content, nor even to the seriousness of the code. Messages are given to them, they only want some sign, they idolise the play of signs and stereotypes, they idolise any content so long as it resolves itself into a spectacular sequence. What they reject is the “dialectic” of meaning. Nor is anything served by alleging that they are mystified.
According to the French philosopher, hyperreality ends up blurring the boundaries between reality and simulation, deleting opposing concepts and the possibility of a logical discourse, resulting in a confusing mass of signs that allows no intelligibility or critical perspective (1978:21):
Everything changes with the device of simulation. In the couple “silent majority / survey” for example, there is no longer any pole nor any dif- ferential term, hence no electricity of the social either: it is short-circuited by the confusing of poles, in a total circularity of signalling (exactly as is the case with molecular communication and with the substance it informs in DNA and the genetic code). This is the ideal form of simulation: collapse of poles, orbital circulation of models.
This state of confusion, a sort of “meaning nihilism”, is the ground on which hyperreal packaging materials grow. When we get used to seeing traditional and handmade narratives in industrial products, these concepts no longer mean anything other than the simulation of themselves. Misusing and abusing a word is a way of deactivating its power. The ultimate consequence of this scenario is that it becomes harder to actually say something, harder to take visual style seriously, to keep the ethics within the aesthetics. Everything becomes a simulation game.
A speechless mass for every hollow spokesman without a past. Admirable conjunction, between those who have nothing to say, and the masses, who do not speak. Ominous emptiness of all discourse. No hysteria or potential fascism, but simulation by precipitation of every lost referential. Black box of every referential, of every uncaptured meaning, of impossible history, of untraceable systems of representation, the mass is what remains when the social has been completely removed. (Baudrillard 1983:6)
3. Material sincerity
Just a few years before Baudrillard wrote about simulation, Victor Papanek (1972) criticised material simulacra and stood up for a strictly honest use of materials, a faithful position that can also be dismissed as naive in the current supermarket age:
An honest use of materials, never making the material seem that which it is not, is good method. Materials and tools must be used optimally, never using one material where another can do the job less expensively and/or more efficiently. The steel beam in a house, painted a fake wood grain; the moulded plastic bottle designed to look like expensive blown glass; the 1967 New England cobbler’s bench reproduction (‘worm holes $1 extra’) dragged into a twentieth-century living room to provide dubious footing for Martini glass and ash tray: these are all perversions of materials, tools, and processes. (Papanek 1972:27)
He was critical of design being used to make products appear more expensive than they are, whether through excessive packaging or simulated materials:
[…] packaging of perfumes, whisky decanters, games, toys, sporting goods, and the like. Designers develop these trivia professionally and are proud of the equally professional awards they receive for the fruits of such dedicated labour. Industry uses such ‘creative packaging’ […] in order to sell goods that may be shabby, worthless, or just low in cost, at grossly inflated prices to consumers. (1972:187)
What Papanek is asking for is for the material, be it the product or the packaging, to present itself as it is: a sort of material sincerity. The power of materials to communicate efficiently promoted the use of insincere materials, where the actual materials remain silent, serving as a medium to display the simulated material.
In the iconic book Understanding Media, Marshall McLuhan (1964) coined the famous phrase “the medium is the message”. McLuhan thought of the medium as a secondary message attached to the first message, the content. But while the content message was a conscious one, the medium message was unconscious, and thus a sincere one. McLuhan’s book aimed to move the focus to the media themselves, and not only the content, which he thought received all the attention at that time.
Decades later, the German contemporary philosopher Boris Groys (2000) suggests that innocent media sincerity is a thing of the past. According to Groys, media is no longer spontaneous and truthful but rather something to be suspicious about, as it may mask the actual truth underneath.
The packs mentioned earlier show the utterly conscious use of actual and simulated materials in brand communication, along with the carefully planned communication strategy behind the product. As food companies became bigger and global competition grew, brand narratives and packaging design became increasingly planned; design proposals went through more corporate meetings, and more approvals were needed to finally launch a new packaging. Adding steps and actors to the process results in more engineered and controlled communication, a more politically correct tone of voice and, according to Groys (2000:52), a severe loss of freshness and credibility:
The impression of sincerity is weaker still if representatives of an institution or a culture keep singing the same old song, which is perceived as a fixed and well-known part of their identity. The same is true of texts, images, or films that are produced according to well-known conventions, because they merely confirm the expectations we already have of such cultural products. […] In our culture, sincerity does not stand in opposition to lying, but in opposition to automatism and routine. […] If we can detect no movement, no displacement, no disturbance on the medial surface, then the submedial subject appears to be completely still.
Groys argues that the only way of accessing the submedial truth is by an accident that reveals the interior: “a spontaneous deviation from the program, an interruption, a mistake, or, to put it differently, he hopes for the emergence of a different, strange, uncommon sign amid the usual routine. Precisely such a sign is then judged to offer an insight into the interior of the other.” (2000:53). Groys used the term media sincerity to describe this accident.
Over the past decades the technology in the packaging industry has evolved and diversified the range of solutions and materials available, and advanced printing machinery has reduced the space for media sincerity to a minimum; nevertheless, looking closer it is possible to find cracks in the wall. The examples (Fig. 3) show media accidents, like mandatory numeric codes and printing errors. Both reveal the materials, technologies and industrial processes, and the commercial and legal frame they work within.
The codes visualise the industrial manufacture brands usually try to hide, and precisely because they work against the planned narrative we give them credibility, according to Groys (2000:50–53):
The exchange of the alien for the proper creates the impression of sincerity only when it incorporates those signs that are commonly associated with the menial, the vulgar, the poor, the disagreeable, or the depressing. […] this insight into the interior of things also generates a feeling of trust within the observer. It creates the feeling that he finally knows what things really look like on the inside. | https://pauderiba.medium.com/hyperreal-materials-in-packaging-design-6e4b9887c02b | ['Pau De Riba'] | 2020-10-31 16:06:03.349000+00:00 | ['Materials', 'Hyperreality', 'Packaging Design', 'Design', 'Simulation'] |
New Beginnings | New Beginnings
I just published my very first book!
Source: Moonlight Confessions
Dear Medium readers,
I am proud to announce that I am officially a published author! My book “Moonlight Confessions” is now available on Amazon.
Moonlight Confessions is a collection of heart wrenching poems that I have tirelessly been working on for the last few months. Each page is filled with raw emotion and honesty as I transform my life struggles into pieces of art with poems, extended prose, and short stories about love, loss, and heartbreak.
I want to thank Medium as a platform and its amazing community for allowing me to earn the confidence and drive to publish my work for the whole world to see. It’s truly been a beautiful journey and I’m eternally grateful for all my followers.
Writing this collection was not easy. In fact, there was not a single day that went by without tears rolling down my face. I had to revisit some of the most terrifying and heartbreaking moments of my life in order to be in tune with my emotions. Afterwards, I would channel those intense emotions and feelings into writing some of my favorite pieces I have ever written.
For those contemplating writing your own book, go for it! I wish you the very best and hope you end up loving the process just as much as I did.
Sincerely,
Karin Cho | https://medium.com/follow-your-heart/new-beginnings-25ff063cbfcb | ['Karin Cho'] | 2020-09-14 06:37:21.784000+00:00 | ['Book Recommendations', 'Books', 'Publishing', 'Books And Authors', 'Poetry'] |
clickOutside using useRef hooks | clickOutside using useRef hooks
Hooks are a new addition in React 16.8. They let you use state and other React features without writing a class.
One of these hooks is useRef. We will look at how to apply useRef to detect a click outside an element in React.
Let’s create a new React application
npx create-react-app react-clickoutside
Styling with Tailwind CSS
Follow this guide to install Tailwind CSS in React.
Let’s create a simple dropdown menu
Now let’s create a simple dropdown menu, basically a toggle button.
Replace App.js with the following code.
import React from 'react';
import './styles/main.css';
import Layout from './components/Layout';

const App = () => {
  return (
    <Layout>
      <h1 className="text-3xl text-black pb-6">Outside</h1>
    </Layout>
  );
}

export default App;
As you can see, we now need Layout.js. Let’s create a subfolder components in the src directory and add Layout.js with the following code. I am assuming you are familiar with Tailwind CSS styling.
import React from 'react';
import Header from '../components/Header';

const Layout = ({children}) => {
  return (
    <div className="bg-gray-100 font-family-karla flex">
      <div className="relative w-full flex flex-col h-screen overflow-y-hidden">
        <Header />
        <div className="w-full h-screen overflow-x-hidden border-t flex flex-col">
          <main className="w-full flex-grow p-6">
            {children}
          </main>
        </div>
      </div>
    </div>
  );
}

export default Layout;
Next, we need to add Header.js, which contains the dropdown menu. Don’t forget to add the image file sign-in.png to the same folder.
import React, {useState, useRef} from "react";
import myImg from './sign-in.png'; const Header = () => {
const [isOpen, setIsOpen] = useState(false);
const handleClick = () => {
if (!isOpen) {
setIsOpen(true)
}
if (isOpen) {
setIsOpen(false);
}
} return (
<header className="w-full flex items-center bg-white py-2 px-6">
<div className="w-1/2"></div>
<div className="relative w-1/2 flex justify-end">
<button onClick ={handleClick} className="realtive z-10 w-12 h-12 rounded-full overflow-hidden border-4 border-gray-400 hover:border-gray-300 focus:border-gray-300 focus:outline-none">
<img src={myImg} alt="sign-in" />
</button> {isOpen && (
<div className="absolute w-32 bg-white rounded-lg shadow-lg py-2 mt-16">
<a href="#" className="block px-4 py-2">Sign In</a>
</div>
)}
</div>
</header>
);
} export default Header;
Now start the application by executing npm start or yarn start.
You can now open the dropdown menu by clicking the button, and close it only by clicking the button again.
Close the dropdown menu by clicking outside
Coming back to the purpose of this article, which is to demonstrate an application of the useRef hook: to detect a click outside, we use useRef in Header.js.
import React, {useState, useRef} from "react";
import myImg from './sign-in.png';
import useClickOutside from "../lib/clickOutside"; const Header = () => {
const [isOpen, setIsOpen] = useState(false);
const handleClick = () => {
if (!isOpen) {
setIsOpen(true)
}
if (isOpen) {
setIsOpen(false);
}
} const pullDown = useRef();
useClickOutside(() => setIsOpen(false), pullDown); .......
We will create a subfolder lib in the src directory. Add a file clickOutside.js with the following code.
import { useEffect } from "react"; const useClickOutside = ( closeModal, ref ) => {
const handleClickOutside = (e) => {
if (!ref || !ref.current.contains(e.target)) {
closeModal();
}
}; useEffect(() => {
// add when mounted
document.addEventListener("click", handleClickOutside, true);
// return function to be called when unmounted
return () => {
document.removeEventListener("click", handleClickOutside, true);
};
}, []); // eslint-disable-line react-hooks/exhaustive-deps
}; export default useClickOutside;
clickOutside.js attaches a ‘click’ event listener that handles clicks outside the element; it receives the closeModal callback function and the ref from its parent. The third argument true registers the listener in the capture phase, so it runs before other click handlers.
Essentially, useRef is like a “box” that can hold a mutable value in its .current property. You might be familiar with refs primarily as a way to access the DOM. If you pass a ref object to React with <div ref={myRef} /> , React will set its .current property to the corresponding DOM node whenever that node changes. However, useRef() is useful for more than the ref attribute. It’s handy for keeping any mutable value around similar to how you’d use instance fields in classes.
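To make the “box” idea concrete, here is a small illustrative component (not part of this tutorial’s dropdown; the component name is made up). Mutating ref.current persists across renders, but unlike a state update it does not trigger a re-render:

import React, { useRef, useState } from "react";

const RenderCounter = () => {
  const [count, setCount] = useState(0);
  const renders = useRef(0);

  // Writing to the "box" does not cause a re-render by itself
  renders.current += 1;

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times, rendered {renders.current} times
    </button>
  );
};

export default RenderCounter;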
Let’s finish up Header.js by adding ref in <button> tag.
<button onClick={handleClick} ref={pullDown} className="relative z-10 w-12 h-12 rounded-full overflow-hidden border-4 border-gray-400 hover:border-gray-300 focus:border-gray-300 focus:outline-none">
  <img src={myImg} alt="sign-in" />
</button>
The final code in Header.js:
import React, {useState, useRef} from "react";
import myImg from './sign-in.png';
import useClickOutside from "../lib/clickOutside"; const Header = () => {
const [isOpen, setIsOpen] = useState(false);
const handleClick = () => {
if (!isOpen) {
setIsOpen(true)
}
if (isOpen) {
setIsOpen(false);
}
} const pullDown = useRef();
useClickOutside(() => setIsOpen(false), pullDown); return (
<header className="w-full flex items-center bg-white py-2 px-6">
<div className="w-1/2"></div>
<div className="relative w-1/2 flex justify-end">
<button onClick ={handleClick} className="realtive z-10 w-12 h-12 rounded-full overflow-hidden border-4 border-gray-400 hover:border-gray-300 focus:border-gray-300 focus:outline-none">
<img src={myImg} alt="sign-in" />
</button> {isOpen && (
<div className="absolute w-32 bg-white rounded-lg shadow-lg py-2 mt-16">
<a href="#" className="block px-4 py-2">Sign In</a>
</div>
)}
</div>
</header>
);
} export default Header;
Start the app again.
Click outside works! You can find the code in the GitHub repo.
See you in next post. | https://paulho1973.medium.com/clickoutside-using-useref-hooks-77f5dcd6c29e | ['Paul Ho'] | 2020-11-03 03:02:57.978000+00:00 | ['React', 'Hooks', 'Useref'] |