title: string (1–200 chars, nullable) | text: string (10–100k chars) | url: string (32–885 chars) | authors: string (2–392 chars) | timestamp: string (19–32 chars, nullable) | tags: string (6–263 chars)
---|---|---|---|---|---|
I am | I am
A poem about progress
@julian unsplash.com
I am one more step
forward today for
recognizing that
my life has shifted.
The weight that long
lifted, allowed me
to gain immense
heart strength
to carry on,
despite the
uncertainties.
I no longer need to
analyze heart
situations.
I am simply the heart
that beats forward
everyday.
The memories of the
past, though tempting
to savor, no longer define
who I am today.
I am who I am today
not because of my past
but because of my future.
That future is determined
by the strong core within.
It is determined by
incremental progress
everyday.
During heavy hurricane
induced currents that
can topple ships,
I have faith in finding
my way to shore.
To savor and share these
dark experiences is as
sweet as sharing the ones
that light up the sky.
There are no shortcuts,
no compromises
for a life well-lived. | https://medium.com/jun-wu-blog/i-am-323f0e9b0b28 | ['Jun Wu'] | 2020-10-05 23:56:06.014000+00:00 | ['Poetry', 'Mental Health', 'Self', 'Self Improvement', 'Poem'] |
What Is J.K. Rowling Trying to Prove? | In what appears to be a shockingly tone-deaf attempt to get the world to agree with her ideas about transgender people, J.K. Rowling has done something really strange.
She’s turned herself into a villain.
In her latest book under the pen name Robert Galbraith, she’s written a cross-dressing male murder suspect inspired by the real-life serial killers Jerry Brudos and Russell Williams. Brudos had a fetish for women’s shoes and murdered four women in Oregon in the late 1960s. Williams broke into more than 80 women’s homes to steal their underwear and committed at least two counts of rape and murder.
On the new book’s website, Rowling writes that “The suspects in Dr Bamborough’s disappearance include a womanising patient who seems to have developed feelings for her, a passive-aggressive husband who wanted her to quit her job to become a full-time mother, and a sadistic serial killer active in the 60s and 70s, who was loosely based on real life killers Jerry Brudos and Russell Williams — both master manipulators who took trophies from their victims.”
Within the novel itself, detective Cormoran Strike fears the missing GP Margot Bamborough fell victim to Dennis Creed, a man called a 'transvestite serial killer' for murdering his victims while wearing female clothing.
So, here’s the thing. It doesn’t really matter whether or not a “transvestite serial killer” is the perpetrator by the end of the book. The damage has already been done because we have an author who was once lauded for her inclusiveness now going out of her way to tell readers that trans women aren’t real women and that men are routinely dressing up as women, just to victimize “biological females.”
In a statement to CNN, a British charity called Mermaids, which supports transgender children and their families, expressed concern about Rowling’s new book:
“This is a long-standing and somewhat tired trope, responsible for the demonization of a small group of people, simply hoping to live their lives with dignity.”
Mermaids also pointed out that this isn’t the first time Rowling has used a transgender character for a murder suspect:
“We are disappointed to hear that the author might be propagating the same, long-standing and hurtful presentation of trans women as a threat.”
Frankly, I’m having a hard time seeing the end game here. The Harry Potter author seems dead set upon proving her theory to the world that trans women can’t be trusted. It is, apparently, imperative in her mind that people admit that sometimes men who hurt women also crossdress as women or take their clothes as trophies. I mean, okay, but more often than not, the men who hurt women don’t crossdress at all. Seriously. The vast, vast majority of men who assault women are cisgender and not wearing any sort of female clothing.
So, why is she so hellbent on pretending that a rare occurrence is actually the norm? Worse yet, why is she so determined to conflate “crossdress killers” with trans women at all?
Honestly, I don’t understand where she’s coming from. I don’t understand if the comments she’s made about transitioning are coming from her genuine thoughts, or if her thoughts on the matter have been muddled by some sort of fear, shame, self-loathing, or basic misogyny.
Yes, I understand that J.K. Rowling repeatedly brings up women’s rights and appears to paint her views as staunchly feminist. But in reading through her very abrasive comments and that now-infamous blog post, I can’t help but feel a certain level of… internal conflict.
“The writings of young trans men reveal a group of notably sensitive and clever people. The more of their accounts of gender dysphoria I’ve read, with their insightful descriptions of anxiety, dissociation, eating disorders, self-harm and self-hatred, the more I’ve wondered whether, if I’d been born 30 years later, I too might have tried to transition. The allure of escaping womanhood would have been huge. I struggled with severe OCD as a teenager. If I’d found community and sympathy online that I couldn’t find in my immediate environment, I believe I could have been persuaded to turn myself into the son my father had openly said he’d have preferred. When I read about the theory of gender identity, I remember how mentally sexless I felt in youth. I remember Colette’s description of herself as a ‘mental hermaphrodite’ and Simone de Beauvoir’s words: ‘It is perfectly natural for the future woman to feel indignant at the limitations posed upon her by her sex. The real question is not why she should reject them: the problem is rather to understand why she accepts them.’ As I didn’t have a realistic possibility of becoming a man back in the 1980s, it had to be books and music that got me through both my mental health issues and the sexualised scrutiny and judgement that sets so many girls to war against their bodies in their teens. Fortunately for me, I found my own sense of otherness, and my ambivalence about being a woman, reflected in the work of female writers and musicians who reassured me that, in spite of everything a sexist world tries to throw at the female-bodied, it’s fine not to feel pink, frilly and compliant inside your own head; it’s OK to feel confused, dark, both sexual and non-sexual, unsure of what or who you are.”
It’s hard to read that section of her blog post and not feel a great deal of pain and longing in her voice. Without a doubt, it’s offensive how she’s painted being transgender as a choice kids pick up because they’re depressed, unhappy, or easily persuaded by the media or peer pressure. It’s undoubtedly gross that she’s hurling the same accusation that’s long been used on queer folk — as if their identity isn’t real, and there’s some “cult” that seeks out new recruits and “converts.”
“They’re going to turn you gay,” is another tired trope, but I suppose, “They’re going to turn you trans,” is just the next stage of discrimination. So, I certainly don’t want to downplay the pain and hurt J.K. Rowling has caused (and continues to cause) by such barbed remarks.
Initially, her arguments reminded me a lot of the “All lives matter,” or “Blue lives matter” crowds. She’s so fixated on being right that she’s completely missed the point. Nobody is saying that safe spaces for women don’t matter. But statistically speaking, she’s siding with those who would make all of us women much less safe.
Even so, I keep coming back to that section of the blog post where she posits that she would have liked to be a man when she was young and struggling with her place as a woman. Tone-deafness aside, it’s interesting, right?
Transfeminist writer Julia Serano says in her book, Whipping Girl, that transphobia is fueled by the insecurities people have about gender and gender norms. One thing J.K. Rowling has made clear in all of this drama is that she’s got a lot of her own baggage about gender and being a woman.
She sounds pretty damn insecure, actually.
What sucks is that in all of her own baggage, she’s so busy bringing others down. But given her apparent commitment to a transphobic “logic” that clearly makes no sense, I can’t help but think that the one person she’s trying the hardest to convince is really herself. | https://medium.com/honestly-yours/what-is-j-k-rowling-trying-to-prove-d5bcd2fd8bde | ['Shannon Ashley'] | 2020-09-21 22:26:31.277000+00:00 | ['Women', 'Books', 'LGBTQ', 'Film', 'Culture'] |
The Only Of My Profession | Let’s talk shop.
People will define you based upon what you do, by what puts bread upon your table. And that’s ok. Because for what I do, I am the definition of my profession. I define the profession of my choosing in mind, body, and soul. When the title of my profession is cast into the air as spoken word, it is my face that illuminates behind the eyes of all breathing beings’ imaginations.
I take my profession seriously. There is no room for deviation, delay, or error. Any misstep I may take or hesitation I may have will cause undue, unrepairable, unreconcilable damage to the very fabric that makes up our past, present, and our future.
My life is my profession and my profession is my life. I am the one and only, for there are no others like me, and there shall be no others like me, for I am one and I am my profession.
To some my profession is cruel and consuming, leaving only a trail of pain, sorrow, and misery. Though those who see without the veil of ignorance and humanity see that my profession is necessary. Necessary, is my profession, as is the sun rising in the morning, the moon high in the sky at night, or the blood that pulses and flows through your body.
My profession sustains life by taking life. For an excess of life shall unduly lead to an infinite absence of life.
I am the darkness and I am the light.
I am beginning and I am end.
I am joy and I am sorrow.
I am the only of my profession.
For I am Death. | https://medium.com/wonderarium/the-only-of-my-profession-cd44613d492 | ['Patrick Goldman'] | 2019-12-03 03:20:32.430000+00:00 | ['Poetry', 'Writing', 'Short Fiction', 'Creative Writing', 'Short Story'] |
Senior Technical Expert Allen Liu Joins Vite Labs as Engineering Director of Vite International | Allen Liu
Allen Liu
Senior technical specialist with 10 years of high-concurrency experience in multinational firms; Scrum expert, OCJP, PMP.
He graduated from the University of Science and Technology of China with a bachelor's degree in engineering. Afterwards, he obtained his master's degree in computer science at Fudan University. He joined HP as a software development lead after graduating in 2005.
In 2008, he joined IBM China lab to lead the development of IBM FileNet content engine. He served as a senior architect and a product manager.
IBM FileNet is an industry-leading enterprise content management solution which covers the entire content ecosystem of process management, user collaboration, data protection, and content transformation.
During this period, he accumulated abundant experience in high-concurrency system development and project management. He was dispatched to the United States and Japan several times and is proficient in managing distributed teams.
He joined Vite Labs in July 2018 as the Engineering Director of Vite International, responsible for development of the engineering team overseas.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Official: https://www.vite.org/
Telegram:
Twitter:https://twitter.com/vitelabs
Discord:https://discordapp.com/invite/CsVY76q | https://medium.com/vitelabs/senior-technical-expert-allen-liu-joins-vite-labs-as-engineering-director-of-vite-international-a2480107ad9d | ['Vite Editor'] | 2018-10-04 18:26:05.278000+00:00 | ['Startup', 'Blockchain', 'Cryptocurrency', 'Ethereum', 'Bitcoin'] |
10 Reasons To Write On Medium During the Pandemic | If you are fortunate to not have too many disruptions during the pandemic, then you are lucky. Between working remotely, taking care of your family, if you have spare time, it’s a good opportunity to share your thoughts on Medium.
You can earn extra money by taking part in the Medium Partnership Program.
You can also write to sharpen your writing skills. In any job today, communication is the key. Writing well often means you’ve set the foundations of speaking well in public and meetings.
You can use the Medium app to speak your articles. It will record and transcribe instantly into the Medium editor.
Here are 10 reasons why I consistently write on Medium during this pandemic.
Social Distancing Can Mean Virtual Closeness
Writing on Medium for a year taught me that you can have a kind of virtual closeness with your readers. You may be social distancing but because you read certain types of articles or you write about certain subject matters, you establish new friendships with publication editors, fellow writers in certain topics, and your readers.
Using your comments to get to know people and their work can allow you to make discoveries. This type of interaction has brought me closer to my audience.
Bring Issues To The Forefront
In emergencies, some people might react well and others may not. When you spot injustice, inequality, and problems in our society, the impulse is for all of us to say something. If enough of us make noise about what our values are, then naturally, our democracy is reaffirmed.
But, if injustice, inequality, and other issues are amplified continuously without people who care about them, then they can become cancers in our society.
This is a good time to think about things that are bothering you and write about them.
Air Your Feelings
When you are with your immediate family 24 hours a day and 7 days a week, you might feel that you are spending too much time with them. Yet, you can’t exactly air out your feelings with them in the room. Then, write them down on Medium.
You can remain anonymous on Medium. Other readers on Medium might be able to give you emotional support. Think about a virtual hug from that mom who also wrestled with tending to a husband and kids all day.
Chances are you may find that your feelings are widely shared by others.
Distraction
A new hobby always distracts me from the current turmoil. Reading or writing on Medium is an easy hobby to pick up. You don't need to buy equipment. You just have to sign up for a membership and you are good to go.
You can write a few times to see if it suits you. If you have projects that you are working on in your spare time, then write about those.
For me, writing about my projects makes me feel more motivated to complete my projects.
Stay In Tune With Creativity
During the pandemic, all of us are scrambling at work. A lot of us are dealing with small projects that are repetitive but have to get done. There's no creativity used there. We want to be working on long-term projects that we have ample time to be creative in.
Writing on Medium every day can help you air that creativity. Have you tried writing a poem about your day? Have you created satire about remote work?
Sharpen Your Muscles For the Long Distance
Most often, when I write on Medium, I am sharpening my mental muscles for the long-distance journey through my projects. Last year, I was only able to write one good article a day. Now, I write at least 3 or 4 a day in different outlets.
Now, when I sit in front of my computer to work on taxes, programming, or reading academic papers, my concentration kicks in immediately. It’s because I’m now used to the routine.
Set aside one hour every day to sharpen those mental muscles, you won’t regret it.
Discover Yourself
After a whole year of writing about my journey through the last 30 years, I realized that I’m more sure of all the decisions that I’ve made. At each stage, when I evaluated why I made the decisions I made, there were reasons. I found these reasons while reflecting on them in my writing.
After writing on Medium for a year, I realized that I have a lot more passions in life and in my career than what I was limited to. We often pigeon-hole ourselves to our immediate interests because we are not meeting new people. But, on Medium, while reading about other people’s interests, you may discover that you wanted to go on adventures that you didn’t know about.
Build Habits During Uncertainty
I’ve weathered many storms in my life. During each storm, it’s best to put your head down and adjust your life toward the new normal. I take this time to instill some new habits into my routine so that no matter what happens from today onwards, I have these scheduled activities to lean on.
These activities that I defer to every single day provide a sense of safety for me during uncertain times.
If anything, I can count on my morning yoga and writing session. I can always count on my afternoon coffee.
Practice The Art of Sharing
Sharing does not come naturally for people. There’s an inherent compassion and selflessness that you have to tap into. In our capitalistic society, people often make others feel “stupid” for sharing. Knowledge sharing when you are a knowledge worker seems almost absurd to some.
But, I’ve learned in this past year that if you assume your learning is dynamic, then you will have no problems sharing. In the end, I’ve received more from the interactions when I shared than what I have shared.
In about a week of releasing an article that contains the knowledge that I share, I've moved on to gaining other types of knowledge. If anything, sharing is a validation of your knowledge and your experiences.
That validation gave me confidence.
Gain Mindful Space
I credit writing on Medium for me to gain a kind of mental space or clarity about my life. Every week, when I write on Medium, not only do I make sense of my thoughts, I’m also inspired by other people’s thoughts on Medium as well.
Once I put down my thoughts, I gain the space in my heart and mind to take in more experiences.
I often do yoga before and after my writing session. This is because these two activities are so similar. Putting words down often seems like a long exhale where you arrive at the destination that is yourself. | https://medium.com/jun-wu-blog/10-reasons-to-write-on-medium-during-the-pandemic-fd5cc27d58fd | ['Jun Wu'] | 2020-04-03 09:56:54.137000+00:00 | ['Writing', 'Writing On Medium', 'Pandemic', 'Writing Tips', 'Freelancing'] |
Configuring the Webserver and Setting up Python environment on the Docker | Designed by Sachin Kumar
Configuring the Webserver and Setting up Python environment on the Docker
Let’s see how we can configure a web server and set up a Python environment on top of a Docker container.
Before starting the configuration, I think you should know a little bit about Docker. So let’s start with an introduction to Docker without any delay.
Introduction of Docker
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
Installation of Docker
Hopefully, you now have a clear idea of why we use Docker. So let’s see how to install it.
Note: Here I am installing Docker on top of an AWS RedHat instance, but you can use any operating system with the same approach.
Log in with root privileges using the sudo su command and go inside the /etc/yum.repos.d/ directory using the below-mentioned command.
cd /etc/yum.repos.d/
After going inside the yum.repos.d directory, create a repo file for the Docker software. Use any editor, such as vim or vi, to create the file.
vi docker.repo
Now write the below-mentioned repo definition inside the docker.repo file so that yum can find the Docker software.
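The repo file itself appeared as a screenshot in the original. A minimal sketch of a Docker CE repo definition (this follows the layout of Docker's standard docker-ce.repo; the exact file in the screenshot may differ):
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg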
After saving the docker.repo file, run the below-mentioned command to install the Docker software.
yum install docker-ce --nobest -y
To check whether the Docker software has been installed, use the command below.
rpm -q docker-ce
Now you can see that Docker has been installed on top of the AWS RedHat instance. So, start the Docker service using the below command.
systemctl start docker
After starting the service, check whether Docker is running using the below-mentioned command.
systemctl status docker
Now you can see in the above screenshot that Docker is in a running state. But if you shut down your system and restart it, you will have to start the Docker service again using systemctl start docker. If you don't want to do the same thing again and again, run the below command to enable the Docker service permanently.
systemctl enable docker
Now the Docker installation is done, so you can use all the Docker services as you want.
Configuration of the Webserver
Before setting up a server, you must know what a server really is. A server is a program that provides the client with some kind of service. For example, a web server serves our websites, and a database server provides us with data. This means every server has a job to do, and for each different job we want done, we have to choose a different server. So let's see which steps we have to follow to configure the web server. The web server architecture we will build is shown below.
The architecture of WebServer on the Docker Container
3 Steps to Configure a WebServer
Install the Server Program
Configuration of Server
Start the server
So let’s see step by step how to configure the server on Docker. But before configuring, pull a Docker image from Docker Hub using the below-mentioned command.
Note: You can pull any image you want from Docker Hub using the same command below.
docker pull imagename:imageversion
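For example, to pull the CentOS image (an assumption, chosen because the later steps use yum and dnf; the screenshots in the original may show a different image):
docker pull centos:latest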
Note: The docker images command is used to list all the images your Docker setup has.
So launch the docker container using the below command.
docker run -it --name containername imagename:imageversion
The container has now been launched. You can also see in the above screenshot that before launching the container the prompt showed root@ip-172-31-41-26, but after running the above command it changed to root@3de7caacc4d0.
Now the container is ready for the configuration of the web server, so install the web server using the below-mentioned command.
yum install httpd -y
After installing the web server, go inside the /var/www/html directory and make a webpage as mentioned below.
cd /var/www/html
Make a webpage with whatever content you like.
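The page content itself was shown as a screenshot in the original. A minimal stand-in (the file name webpage.html is assumed so that it matches the curl command used later) could be:
<!-- /var/www/html/webpage.html -->
<h1>Hello from the Apache web server inside a Docker container!</h1>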
After making the webpage, you have to start the web server. But if you try to start it the regular way (with systemctl), you will face a problem, typically a systemd-related error, because systemd is not running inside a standard container.
To resolve this problem, start the httpd binary directly by running the below command.
/usr/sbin/httpd
Copy your container IP and try to access the webpage through your local server using the below-mentioned command.
curl http://containerip/webpage.html
We are able to access the webpage now, so the web server has been configured.
Setting up the Python Environment
Let’s see how we can set up the Python environment and how Python works in a Docker container. But before setting up the Python environment, you have to launch a Docker container first. So let’s start.
First of all, you have to pull the image from the docker hub using the below-mentioned command.
docker pull imagename:imageversion
You can also check all the images you have inside your Docker setup using the below command.
docker images
Now launch the docker container using the below command.
Note: You can give the container any name you like.
docker run -it --name containername imagename:imageversion
The container has now launched. You can also see in the above screenshot that before launching the container the prompt showed root@ip-172-31-41-26, but after running the above command it changed to root@1a90da6ce750.
So now install python in the container using the below command.
dnf install python3 -y
Check whether Python has been installed using python3 -V
Make a workspace for Python as mentioned in the below screenshot and write any Python code.
So I have written a small piece of code inside my test.py file, as mentioned below.
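The code itself was shown as a screenshot in the original; any small script will do. A minimal stand-in for test.py:
# test.py - a tiny placeholder script (the original screenshot may contain different code)
print("Hello from Python running inside a Docker container!")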
Now run the code using python3 filename.py
So you can see the test.py file has run successfully.
Conclusion
In this article, we have seen how to configure a web server and set up a Python environment on top of Docker, as well as how to install the Docker software on a local server or an AWS instance. So hopefully you have really enjoyed this article.
I tried to explain as much as possible. Hope you learned something from here. Feel free to check out my LinkedIn profile mentioned below and, of course, feel free to comment. I write blogs on Cloud Computing, Machine Learning, Big Data, DevOps, the Web, etc., so feel free to follow me on Medium.
Thanks, Everyone for reading. That’s all… Signing Off… 😊 | https://medium.com/hackcoderr/configuring-the-webserver-and-setting-up-python-environment-on-the-docker-91f173272511 | ['Sachin Kumar'] | 2020-11-01 08:14:09.139000+00:00 | ['Web Server', 'Docker', 'Python', 'Containers', 'Apache'] |
[tw]²: Nov 4–10, 2019 | #45 | [tw]²: Nov 4–10, 2019 | #45
The week leading up to Excel 2019 | TWTW #60
Monday (04/11/2019)
The pressure to work for iBeTo is on. I am half-minded to quit, but I know that if not now, I will probably never find the time to work on that idea. Leon is prepared to make sacrifices as well to ensure we roll out a decent project by Thursday morning. Need to decide how to proceed.
Academics is going rather poorly for me as I am not currently investing time in learning the curriculum material — for the last couple of weeks I was engaged with the Devsprints planning work, so this was missed out. Yet, as a result of my months of procrastination in getting the team together to work, we now have to rush and do what is possible within 48 hours.
I really wish I worked in an organic manner. Too bad even a year after starting TWTW, that aspect of my nature (of not keeping things for the last minute) has not changed.
If we proceed, we plan to create just 3 basic, yet separate, units:
1. An application to collect 3D data of portions from the real world.
2. A system to generate 3D data of portions in an AR application.
3. An interface to display analytics and insights to reduce food wastage.
Part #1 would take quite a while to complete (but definitely possible), while Part #2 is quite straight-forward thanks to prior experience. Part #3, on the other hand, is a whole other dimension we are yet to venture — but the crux.
Tuesday (05/11/2019)
Spent quite some time in the evening working with Leon, attempting to complete the iBeTo project. Took a lot of pictures from different angles in the hopes of achieving Photogrammetry.
Finalized the research paper I wished to teach in class. Here are the relevant links.
Wednesday (06/11/2019)
Completed the presentation for iBeTo. We do not have a good prototype to show though.
Thursday (07/11/2019)
Presentation was mediocre. We figured out 3D scanning using a Photogrammetry app from the Play Store. Did not win (no surprise there), but it made me realize how dependent I was on others to create what I wished for — this dependence proved to be a bottleneck. Trying solo projects moving forward.
Friday (08/11/2019)
Went to my hometown for my cousin's engagement planning. Spent quite some time thinking about the Teaching Assistant Club activities that I wish to carry out in the next semester in our college.
Saturday (09/11/2019)
Today is the engagement. Good to meet family members.
Sunday (10/11/2019)
Back at home, taking rest and bracing for the multitude of work that remains at college. | https://medium.com/life-documented/tw-%C2%B2-nov-4-10-2019-45-ac8a18fed23c | ['Joel V Zachariah'] | 2019-11-10 09:30:06.171000+00:00 | ['Techfest', 'Artificial Intelligence', 'Education', 'Twtw', 'Writeup'] |
Hall-of-Famer Michael Loeb: Take the Plunge or Lose the Opportunity | Many millennials dream of being their own boss. It’s easy for some of us to make excuses for why “now isn’t the right time” to start your business — the economy is heading toward recession (it has to be, right?), your corporate job offers great and stable health benefits, and even a 401K. But succumbing to these easy outs can rob you of the freedom to explore the endless realm of potential your next great idea could have for you and the world around you.
You’re Going to Be Told No, So Deal With It
You may never live on a yacht enjoying the seven-figure life, but you know what? There's lots of opportunity out there. The average consumer's needs and wants evolve daily. And whatever you create just might be the next thing your peers desire and expect to see in the marketplace. Today's fast-changing world provides unlimited opportunity for self-made entrepreneurs who want to capitalize on the latest consumer trends, many of which are fueled by technology.
There is one reality you need to grow accustomed to in the business world, regardless of your experience, and that’s recognizing that at some point, you will be told “no.” And, that’s okay. Accept it and move on.
Even Michael Loeb, the founder and creator of magazine subscription giant, The Synapse Group — also known for volunteering his home for CNBC’s hit show, Billions — is no stranger to hearing the word “no.” But he hasn’t let it impede his forward momentum. In fact, in 2002, Loeb was inducted into the Direct Marketing Association’s Hall of Fame because of his experience and affiliation with over 50 startups; among them Priceline.com. And he hasn’t stopped yet.
While delivering his keynote address to the attendees of Advertising Week in New York last year, Loeb went into a rare inside glimpse of the challenges he faced when he first created Synapse.
Getting fired from Time, Inc. in the early '90s was the impetus for Loeb to start The Synapse Group, the world's largest magazine retailer. Through his audacity and his unwillingness to limit Synapse's growth, he outlined how he was able to expand his startup to partner with all major credit cards and a variety of publishers, which together represented more than 700 magazine titles.
At the end of the day, he ultimately sold the business back to his former employer for $800 million. As a side note, incubated inside of Synapse was Priceline, which was founded by Synapse’s co-founder, Jay Walker, and co-funded by Loeb.
When It Comes to Your Idea, Take the Plunge Before Someone Else Does
As I always say, good isn’t good enough, good is for the other guy.
For Loeb, having a great idea today could mean it’s gone tomorrow. As the CEO ended his keynote, he told attendees that “[he] wished he had started [his] entrepreneurial journey sooner.”
Taking into account the opportunity, the available resources, willpower, and, of course, timing can make all the difference between success and failure, especially in today’s digital age.
“We are living in a time of major disruption and I encourage everyone here to not hesitate. Seize your passion, energy, and conviction and funnel it into making your startup a company that can truly create positive change for a specific industry or community.”
Had Loeb not built Synapse at the precise moment he did, he wouldn’t have been able to disrupt the subscription market. And he likely wouldn’t have been able to help build Priceline and earn the knowledge and experience to start Loeb Enterprises.
Today, Loeb is fast establishing Loeb.nyc, something he calls a ‘venture collective,’ comprising roughly two dozen startups and early-stage investments, including digital platforms, direct-to-consumer products, and enterprise solutions.
The ever-growing portfolio of Loeb.nyc companies includes All The Rooms, Butler Hospitality, Mercato, Payoneer, SummitSync, and Thnks — among other top names. With holdings around the country, Loeb.NYC is headquartered in a Midtown Manhattan tower where it occupies three floors.
In a recent Forbes interview, Loeb identified five other entrepreneurs — Steve Jobs, Thomas Edison, Tom Brady, Roger Federer, and his very own father, Marshall Loeb –who have each in their own way inspired him to continue on his journey of building, backing, and/or funding successful companies. | https://medium.com/hackernoon/hall-of-famer-michael-loeb-take-the-plunge-or-lose-the-opportunity-44572b9f3a85 | ['Drew Rossow'] | 2019-04-10 06:20:07.753000+00:00 | ['Synapse Group', 'Entrepreneurship', 'Michael Loeb', 'Finance', 'Inspiration'] |
Exploring the Barcode Universe | Exploring the Barcode Universe
A Computational Thinking Story With the Wolfram Language
Photo by Johannes Plenio on Unsplash
Incomprehensible to humans, but child's play for computers and phones: barcodes are everywhere. Every product, every package, and every store shelf has them in copious amounts. Black and white patterns, often lines and sometimes dots, provide our silicon friends with a little number that encapsulates what the object is all about.
Online services like barcodelookup.com provide databases with millions of items to turn those little numbers into a wealth of information, like product name, product category, and vendor-specific information.
In the Wolfram Language, you can read barcodes and also create barcode images. It does not come with a built-in service to interpret barcodes but in this story, I will show you how to connect to the API from the BarcodeLookup service.
Let’s start with generating barcodes ourselves. The function to use here is called BarcodeImage. It can generate the most common types of barcodes. For example, this generates a UPC barcode image:
BarcodeImage["123456789999", "UPC"]
(image by author)
Here is an example of a QR code. Most smartphones will offer to open the web page when you point the camera at it:
BarcodeImage["https://www.wolfram.com", "QR"]
(image by author)
Automatically recognizing a barcode is done with the BarcodeRecognize function. It works even when the barcode is only part of the image, although it picks up an occasional “stray” barcode. In this case, the result is correct:
(image by author)
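The recognition call itself appeared as a screenshot in the original; it looks roughly like this (img stands for the photo containing the barcode and is an assumed variable name):
BarcodeRecognize[img] (* returns the decoded string, e.g. "041508300414" for the product looked up below *)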
Next, to interpret the code “041508300414”, I wrote a simple Wolfram Language function that binds to the API from the BarcodeLookup service.
It can be accessed by its name:
BarcodeLookup = ResourceFunction[
"user:arnoudb/DeployedResources/Function/BarcodeLookup"
]
Now we can look up the product information. The “key” is a small alphanumerical string required and provided by the Barcode Lookup service API.
product = BarcodeLookup["041508300414",key]
And you can drill down programmatically to get, for example, the product name:
In[]:= product["products", 1, "product_name"] Out[]= "San Pellegrino Pompelmo Grapefruit Sparkling Fruit Beverage, 11.15 Fl. Oz., 6 Count"
The application possibilities are almost endless here. With these functions, it becomes easy to build applications for tracking inventory, generating reports, and automate restocking procedures. | https://towardsdatascience.com/exploring-the-barcode-universe-6c80dbebb356 | ['Arnoud Buzing'] | 2020-09-24 14:04:35.869000+00:00 | ['Wolfram', 'Computational Thinking', 'Image Processing', 'Data Science', 'Barcode'] |
Implementing Aspect Based Sentiment Analysis using Python | Aspect Based Sentiment Analysis also known as Feature Based Sentiment Analysis is a technique to find out various features, attributes, or aspects from a given text and their respective sentiments.
In this article, we will look at how we can implement ABSA using Python and various NLP tools such as StanfordNLP and NLTK.
The paper that I have referenced for the implementation of the code was published by Nachiappan Chockalingam, and in it he explains ABSA in great detail. I highly recommend that you first read this amazing paper and then jump into the code, as it will make what's happening in the code much clearer.
Paper: Simple and Effective Feature Based Sentiment Analysis on Product Reviews using Domain-Specific Sentiment Scores
Let’s Get Started
Installing Essential Libraries
Open up your terminal and install the following libraries:
pip install pandas
pip install numpy
pip install nltk
pip install stanfordnlp
For the PyTorch installation, go to the PyTorch website.
Importing Libraries
Open up your Jupyter Notebook and import the following libraries:
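The import cell was embedded as a gist in the original; a minimal sketch based on the libraries installed above:
import pandas as pd
import numpy as np
import nltk
import stanfordnlp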
Now download the Stanford English model and some NLTK resources that will later be used for extracting the dependency relations in the text and for other text preprocessing tasks.
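A sketch of the download step (run once; these are the standard stanfordnlp and NLTK download calls):
stanfordnlp.download('en')                   # Stanford English models (tokenizer, POS tagger, dependency parser)
nltk.download('punkt')                       # sentence/word tokenizer data
nltk.download('averaged_perceptron_tagger')  # POS tagger data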
Create a sample text review on which we will perform ABSA.
txt = "The Sound Quality is great but the battery life is very bad."
Lowercase the text and tokenize it into sentences.
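A sketch of this step (the variable name sentList follows the article's own reference below):
txt = txt.lower()
sentList = nltk.sent_tokenize(txt)  # list of sentences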
Now, for each sentence in the <sentList>, tokenize it, perform POS tagging, and store the result in a tagged list.
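A sketch of this step (taggedList is an assumed name matching the description above):
taggedList = []
for sentence in sentList:
    tokens = nltk.word_tokenize(sentence)    # split the sentence into words
    taggedList.append(nltk.pos_tag(tokens))  # POS-tag the words and store them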
There are many instances where a feature is represented by multiple words, so we need to handle that first by joining multi-word features into a single one-word feature.
Tokenize and POS Tag the new sentence.
Now we will use the StanfordNLP dependency parser to get the relations between the words.
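A sketch of this step using the stanfordnlp pipeline (newTxt stands for the recombined sentence from the previous step and is an assumed name; the article's exact dep_node construction was in an embedded gist):
nlp = stanfordnlp.Pipeline()  # loads the English models downloaded earlier
doc = nlp(newTxt)  # runs tokenization, POS tagging, and dependency parsing
doc.sentences[0].print_dependencies()  # prints (word, governor index, relation) triples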
Now we will select only those sublists from the <dep_node> that could probably contain the features.
Now, using the <dep_node> list and the <featureList>, we will determine which words each feature in the feature list is related to.
So as you can see we have got the feature words and for each word a list of words it is related to.
Now select only the feature Nouns List from the <fcluster>.
So with this, we have got the list of features and their respective sentiment words within a sentence. Now all you have to do is check whether each sentiment word is positive, negative, or neutral.
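One possible way to score each sentiment word (not necessarily the author's approach) is NLTK's VADER lexicon:
from nltk.sentiment.vader import SentimentIntensityAnalyzer
nltk.download('vader_lexicon')
sia = SentimentIntensityAnalyzer()
score = sia.polarity_scores("great")["compound"]  # > 0 positive, < 0 negative, around 0 neutral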
Full Code
Conclusion
So with this, we have seen a basic implementation of Aspect Based Sentiment Analysis using various NLP tools and techniques.
The code explained here can be very much improved by adding various NLP preprocessing techniques like Coreference Resolution, Slang Words Removal, Negation Handling, Sarcasm Detection, etc.
If you found this article useful, do Clap and Share, and feel free to ask any doubts regarding the article. Always open to any suggestions or ways to improve the code :)
Connect with me on LinkedIn or GitHub.
References
Nachiappan Chockalingam, "Simple and Effective Feature Based Sentiment Analysis on Product Reviews using Domain-Specific Sentiment Scores."
Updates and Fixes
If anyone is facing the errors with StanfordNLP module, try using this alternative module called “stanza” as mentioned in the comments. | https://medium.com/analytics-vidhya/aspect-based-sentiment-analysis-a-practical-approach-8f51029bbc4a | ['Rohan Goel'] | 2020-12-29 07:10:17.299000+00:00 | ['Stanfordnlp', 'Python', 'NLP'] |
Memorise 75 Ayat from the Qur’an in Just 15–20 Minutes a Day | By Shamsiya Noorul Quloob
The Prophet ﷺ said, “The one who was devoted to the Qur’an will be told on the Day of Resurrection: ‘Recite and ascend (in ranks) as you used to recite when you were in the world. Your rank will be at the last Ayah you recite”. [Abu Dawud and At-Tirmidhi]
With the precious month of Ramadan upon us, it benefits us to try our best to memorise the Qur’an. During Ramadan when everyone is trying to maximise their good deeds, we can benefit immensely from the repetitive nature of Qur’an. Memorising one verse is equal to the memorisation of 31 or 5 verses in some cases.
During Ramadan when the reward of every good deed is with Allah alone, we can dedicate just 15 to 25 minutes, and end up memorising more than 70 verses of the Qur’an. However, we can be one step ahead of the race to win Allah’s pleasure by starting early and making a conscious effort to start memorising now. But first, let’s understand the nature of the Qur’an.
So, why does the Qur’an repeat itself?
Before we understand the repetitive nature of Qur’an, let us understand ourselves. Qur’an calls us ‘insaan’. Linguistically speaking, “Insaan” comes from the root word ‘nisyân’ which means ‘to forget’. The second one is ‘unsiyah’, which means ‘to relate, to love-be loved, to become close to’. Ibn Abbas (r) said that al-insân is called so because of his forgetfulness. Our very name tells us about our nature, we are a forgetful lot. The things that are hardest to internalise are the things that are mentioned most in the Qur’an. Taqwa (righteousness, piety, goodness) of Allah is repeatedly mentioned because it is something we tend to forget the most in this world that constantly bombards us with materialism.
Below are 10 verses from the Qur’an that we will often encounter. Let us memorise and extract guidance from them [all transliterations are from Quran.com]. In the last 2 days of Ramadan, ensure that your ranks in jannah are raised 71 times, In sha Allah!
VERSE #1: 2 Verses [Surah 2, 31: Verse 5]
أُولَٰئِكَ عَلَىٰ هُدًى مِنْ رَبِّهِمْ ۖ وَأُولَٰئِكَ هُمُ الْمُفْلِحُونَ Ola-ika AAala hudan minrabbihim waola-ika humu almuflihoon
Those are on [right] guidance from their Lord, and it is those who are the successful. [Click here for audio]
Reflection:
Successful refers to all bounties that come from Allah. They may be:
Physical gifts e.g. food, clothing, houses, gardens, wealth etc. or
Intangible gifts e.g. influence, power, birth and the opportunity flowing from it, health and talents etc. or
Spiritual gifts e.g. insight into good and evil, understanding of men, the captivity of love, etc.
We are to use all bounties in humility and moderation. We need to share them for the well-being of others. We are to be neither ascetics nor luxurious sybarites, neither selfish misers nor thoughtless prodigals. The right use of bounties will, In sha Allah, lead to an increase in future bounties. Believers receive blessings because they submit to Allah’s will. They will do well in this life (from the highest standpoint) and they will reach their true goal in the future, In sha Allah.
VERSE #2: 6 Verses [Surah 2, 3, 29, 30, 31, 32: Verse 1]
الم Alif, Laam, Meem Alif, Laam, Meem [Click here for audio]
Reflection: Huruf-e-muqatta’at — the disjointed letters.
A large number of scholarly books have been written over the centuries on the possible meanings and the probable significance of these disjointed letters — the muqatta’at. Opinions have been numerous but without a final conclusion.
There is no reliable report of Prophet Muhammad ﷺ having used such expressions in his ordinary speech, or his having thrown light on its usage in the Qur’an. More importantly, none of his companions seemed to have asked him about it.
These letters are usually used in the Qur’an when one needs to pay attention. The Arabs knew these letters but didn’t know them in this fashion and it caught their attention. Only Allah knows what their true meaning is.
VERSE #3: 2 Verses [Surah Al Baqarah 2: 42] & [Surah Al-Imran 3: Verse 71]
وَلَا تَلْبِسُوا الْحَقَّ بِالْبَاطِلِ وَتَكْتُمُوا الْحَقَّ وَأَنْتُمْ تَعْلَمُونَ Wala talbisoo alhaqqa bilbatiliwataktumoo alhaqqa waantum taAAlamoon [Surah Al Baqarah 2: 42]
And do not mix the truth with falsehood or conceal the truth while you know (it). [Click here for audio]
With a slight difference:
يَا أَهْلَ الْكِتَابِ لِمَ تَلْبِسُونَ الْحَقَّ بِالْبَاطِلِ وَتَكْتُمُونَ الْحَقَّ وَأَنْتُمْ تَعْلَمُونَ Ya ahla alkitabi limatalbisoona alhaqqa bilbatili wataktumoonaalhaqqa waantum taAAlamoon [Surah Al-Imran 3: Verse 71]
O People of the Scripture, why do you confuse the truth with falsehood and conceal the truth while you know [it]? [Click here for audio]
Reflection: There are many ways of preventing access to the truth. One is to tamper with it or dress it up with colours of falsehood; half-truths are often more dangerous than obvious falsehoods. Another is to conceal it altogether. Those who are jealous of a believer will not allow his credentials or virtues to be known, or might seek to vilify him or conceal facts that would draw people to him. When people deprive themselves of the light (of which we are ourselves witnesses), they are descending to the lowest depths of degradation. They are doing more harm to themselves than to anyone else.
VERSE #4: 4 Verses [Surah Al-Qamar 54: Verses 17, 22, 32, 40]
وَلَقَدْ يَسَّرْنَا الْقُرْآنَ لِلذِّكْرِ فَهَلْ مِنْ مُدَّكِرٍ Walaqad yassarna alqur-ana liththikrifahal min muddakir
And We have certainly made the Qur’an easy for remembrance, so is there any who will remember? [Click here for audio]
Reflection: The Qur’an’s guidance for man’s conduct is plain and easy to understand and act upon. Is this not in itself an act of grace from Allah? What excuse is there for anyone who fails?
VERSE #5: 31 Verses [Surah Ar-Rahman 55: Verses 13, 16, 18, 21, 23, 25, 28, 30, 32, 34, 36, 38, 40, 42, 45, 47, 49, 51, 53, 55, 57, 59, 61, 63, 65, 67, 69, 71, 73, 75, 76]
فَبِأَيِّ آلَاءِ رَبِّكُمَا تُكَذِّبَانِ Fabi-ayyi ala-i rabbikumatukaththiban
So which of the favors of your Lord would you deny? [Click here for audio]
Reflection: Both the pronoun ‘your’ and the verb ‘will ye deny’ are in the Arabic in the dual number. The whole surah is a symphony of duality which leads to unity. All creation is in pairs. Justice is the conciliation of two opposites to unity, the settlement of the unending feud between right and wrong. The things and concepts mentioned in the surah are in pairs; man and outer nature; sun and moon; herbs and trees; heaven and earth; fruit and corn; human food and fodder for cattle; and so on throughout the surah. Then there is man and jinn.
‘Will ye deny?’ That is, fail to acknowledge either in word or thought or in your conduct. If you misuse Allah’s gifts or ignore them, that is equivalent to ingratitude or denial or refusal to profit by His infinite grace.
VERSE #6: 12 Verses [Surah Mursalat 77: Verses 15, 19, 24, 28, 34, 37, 40, 45, 47, 49] [Surah Mutaffifin 83:10] & [Surah At-Tur 52: Verse 11]
[Surah Mursalat 77: Verses 15, 19, 24, 28, 34, 37, 40, 45, 47, 49] [Surah Mutaffifin 83:10]
وَيْلٌ يَوْمَئِذٍ لِلْمُكَذِّبِينَ Waylun yawma-ithin lilmukaththibeen
Woe, that Day, to the deniers.
[Click here for audio]
With a slight difference [Surah At-Tur 52: Verse 11]
فَوَيْلٌ يَوْمَئِذٍ لِلْمُكَذِّبِينَ Fawaylun yawma-ithin lilmukaththibeen
Then woe, that Day, to the deniers,
[Click here for audio]
Reflection: That day will be the day of woe to the wrongdoers or rebels against Allah and His Truth, described in two aspects, and a day of joy and thanksgiving to the righteous, who are described in three aspects in verses 17 and 28. The rebels are described as those who openly defy truth and plunge into wrongdoing, or who jest with serious matters. They were also those who did not have the courage to plunge openly into wrongdoing but who secretly indulged in it. They were also those who wasted their life in doubt and petty quibbles. It is difficult to say which attitude did more harm to themselves and to others. However, the mercy of Allah is open to all if they repent.
VERSE #7: 8 Verses [Surah Ash-Shu-ara 26: Verses 9, 68, 104, 122, 140, 159, 175, 191]
وَإِنَّ رَبَّكَ لَهُوَ الْعَزِيزُ الرَّحِيمُ Wa-inna rabbaka lahuwa alAAazeezu arraheem
And indeed, your Lord — He is the Exalted in Might, the Merciful.
[Click here for audio]
Reflection: One who is able to carry out his will and plans. And verily your Lord, He is truly The All-Mighty means, the One Who has power over all things, to subdue and control them. ‘The Most Merciful’ means, towards His creation, for He does not hasten to punish the one who sins, but He gives him time to repent, and if he does not, then He seizes him with a mighty punishment. Abu Al-Aliyah, Qatadah, Ar-Rabi` bin Anas and Ibn Ishaq said: ‘He is Almighty in His punishment of those who went against His commands and worshipped others besides Him.’ Sa`id bin Jubayr said: ‘He is Most Merciful towards those who repent to Him and turn to Him.’
VERSE #8: 2 Verses [Surah Waqi’ah 56: Verses 9, 41]
وَأَصْحَابُ الْمَشْأَمَةِ مَا أَصْحَابُ الْمَشْأَمَةِ Waas-habu almash-amati maas-habu almash-amat
[Click here for audio]
وَأَصْحَابُ الشِّمَالِ مَا أَصْحَابُ الشِّمَالِ Waas-habu ashshimalima as-habu ashshimal
[Click here for audio]
Meaning of both verses: And the companions of the left — what are the companions of the left?
Note: There is a slight difference due to grammatical rules, but the meaning of the two verses are the same.
VERSE #9: 2 Verses [Surah Waqi’ah 56: Verses 8, 27]
فَأَصْحَابُ الْمَيْمَنَةِ مَا أَصْحَابُ الْمَيْمَنَةِ Faas-habu almaymanati maas-habu almaymanat
[Click here for audio]
وَأَصْحَابُ الْيَمِينِ مَا أَصْحَابُ الْيَمِينِ Waas-habu alyameeni maas-habu alyameen
[Click here for audio]
Meaning of both verses: The companions of the right — what are the companions of the right?
Note: There is a slight difference due to grammatical rules, but the meaning of the two verses are the same.
VERSE #10 : 2 Verses [Surah Al-Kafiroon 109: Verses 3, 5]
وَلَا أَنْتُمْ عَابِدُونَ مَا أَعْبُدُ Wala antum AAabidoona maaAAbud
Nor are you worshippers of what I worship.
[Click here for audio]
VERSE #11: 2 Verses [Surah Ash-Shuara 26: 183 and Surah Al-Hud 11:85]
وَلَا تَبْخَسُوا النَّاسَ أَشْيَاءَهُمْ وَلَا تَعْثَوْا فِي الْأَرْضِ مُفْسِدِينَ
Wala tabkhasoo annasaashyaahum wala taAAthaw fee al-ardimufsideen [Surah Ash-Shu’ara 26: Verse 183]
وَيَا قَوْمِ أَوْفُوا الْمِكْيَالَ وَالْمِيزَانَ بِالْقِسْطِ ۖ وَلَا تَبْخَسُوا النَّاسَ أَشْيَاءَهُمْ وَلَا تَعْثَوْا فِي الْأَرْضِ مُفْسِدِينَ
Waya qawmi awfoo almikyala walmeezanabilqisti wala tabkhasoo annasaashyaahum wala taAAthaw fee al-ardimufsideen [Surah Al-Hud 11: Verse 85]
[Click here for audio]
Meaning of phrase in both verses: And do not deprive people of their due and do not commit abuse on earth, spreading corruption.
Note: There is an extra line in 11:85 but it is followed by the same verse found in 26:183. Here is an excellent video for further discussion on this verse.
Bi’ithnillah this can be the beginning of Qur’an memorisation for all those who want to be future huffadh; a jump start towards a life-long commitment to understanding and living Qur’an. Always remember the best form of remembrance is the word of Allah and the best means to get closer to Him is through his own speech. Masha Allah, how easy Allah has made it for us to memorise the Qur’an and increase our rewards. So let’s maximise this opportunity and make sincere efforts to learn these ayah, and benefit from the rewards of memorising 75 ayah from the Qur’an this Ramadan, In sha Allah. | https://medium.com/how-to-memorise-the-quran/memorise-75-ayat-from-the-qur-an-in-just-15-20-minutes-a-day-572288005eb1 | ['Qāri Mubashir Anwar'] | 2015-11-21 08:50:23.011000+00:00 | ['Quran', 'Islam', 'Productivity'] |
Setup Vue.js Hello World In Visual Studio Code | Using Git Bash in VS Code
Download Git Bash from https://git-scm.com/downloads for your specific operating system.
As you are clicking through the Git installer, I suggest using all of the default installation settings unless you really know what you are doing.
We are now going to add Git Bash as an integrated terminal inside of VSCode.
Open a new terminal in VS Code (Ctrl + Shift + `) or Terminal → New Terminal.
Open the command palette (Ctrl + Shift + P) or View → Command Palette.
Type “Terminal: Select Default Shell”.
You should see the options below:
Select Git Bash.
Select the + button in the terminal window.
You should see something like this.
Checkpoint: Type in the following command to ensure you have correctly installed Git.
git --version
Depending on what version you have installed, this should appear. | https://towardsdatascience.com/setup-vue-js-hello-world-in-visual-studio-code-15d4edccd6e2 | ['Ryan Gleason'] | 2020-06-28 05:47:03.170000+00:00 | ['Programming', 'JavaScript', 'Software Development', 'Vuejs', 'Web Development'] |
Leading-Trim: The Future of Digital Typesetting | Small change, big implications
Beyond craftsmanship and making handoff more efficient, we hope leading-trim will turn a new page for digital typesetting, eventually motivating improvements to other standards and platforms, starting with OpenType.
Leading-trim works by browsers accessing the font metrics to find, for example, the cap height and baseline. As the standard font format, OpenType specifies what metrics to include in the font file. OpenType has been jointly developed by Microsoft and Adobe as an extension of Apple’s TrueType font format since 1997. While today OpenType has robust support for Latin scripts and CJK languages, it still lacks key metrics for other less commonly used writing systems such as Hebrew or Thai. As people adopt leading-trim, we hope this leads the way for us to add more font metrics of other writing systems to OpenType.
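For reference, the proposed syntax (as it stood in the CSS Inline Layout draft around the time of writing; the property names have since evolved in later drafts) looks roughly like this:
h1 {
  text-edge: cap alphabetic; /* measure the text box from the cap height to the alphabetic baseline */
  leading-trim: both;        /* trim the extra space above and below those metrics */
}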
Ultimately, we hope leading-trim helps improve OpenType and its internationalization by ensuring equal typographical capabilities across the world. That’s just the start of the ecosystem. Once leading-trim becomes available in all the modern browsers, desktop applications that are built using web technologies, such as Figma, Teams, and VS Code, will also be able to utilize it.
The impact can also go beyond the web. Sketch has already added snap targets for cap height and baseline. Instead of holding down the Option key to show bounding box to bounding box spacing, you can hold down Control + Option to see baseline to cap height spacing. It makes measuring optical spacing a lot easier. More importantly, this shows the slow shift in the way people approach digital typesetting. We are hoping leading-trim can further encourage this change. And through this mindset change, beyond just snap targets, leading-trim might just become a new text rendering standard in our design tools and extend to our operating systems. | https://medium.com/microsoft-design/leading-trim-the-future-of-digital-typesetting-d082d84b202 | ['Ethan Wang'] | 2020-08-18 15:54:58.051000+00:00 | ['CSS', 'UI Design', 'Typography', 'Design', 'Microsoft'] |
How Messaging Has Changed Human Interaction | In the early ’90s, five Israeli developers realized that most non-Unix users had no easy way to send instant messages to one another. The terminal was reserved for power users, and well-designed software applications with a user-friendly GUI were still rare. They got together and started working on a cross-platform messaging client for Windows and Mac and gave it the catchy name ICQ (“I seek you”).
It didn’t take long before early versions of ICQ had most of the features we take for granted in today’s instant messaging apps:
ICQ Version 99a
With ICQ 99a, the platform featured conversation history, user search, contact list grouping, and the iconic “uh-uh” sound that played whenever you received a message. Within a very short time, ICQ amassed millions of users during a time when global internet traffic was a fraction of what it is today.
One of the critical challenges during this period was that users weren’t online at all times. During the age of 56K dial-in modems, chat rooms could feel like hanging out at an empty bar. The team came up with an ingenious and deceptively simple concept for users to let others know when they were available to chat: the online status.
Rise of the online status
The online status was the first widespread instance in digital communication of users giving up a tiny bit of privacy to make a service more engaging and useful. It all started as a seemingly win-win situation: By turning your online status into something that’s shared and visible to everyone in your contacts, it made your computer a less lonely place.
When you signed on to the service, your friends would immediately get notified. As a result, most users found themselves chatting to someone within minutes. The product’s engagement increased, and the issue of lonely chat rooms soon became a thing of the past.
While ICQ was taking the internet by storm, others quickly took notice and an array of messaging platforms started popping up.
MSN Messenger on Windows XP
The most infamous alternative to ICQ was MSN Messenger. Microsoft Messenger included all the features that defined ICQ’s success. The press release even emphasized the online status as one of its key features: “MSN Messenger Service tells consumers when their friends, family, and colleagues are online and enables them to exchange online messages and email with more than 40 million users.”
In 2001, Messenger became the single most used online messaging service in the world. With over 230 million unique users, the platform’s quick rise soon led to new challenges.
How transparent do we want to be?
As the MSN user base increased, more users lamented that they didn’t feel like they were in control. Upon logging on to the service, they immediately got pinged by people they didn’t necessarily want to talk to. The problem of lonely chat rooms was effectively replaced with a new problem: How can users be in control of who they want to talk to?
For many, not replying wasn’t a viable option as they felt guilty about ignoring incoming texts. It soon became clear that the automatic sign-in and public online status wasn’t without its flaws.
Microsoft’s response was to introduce a new feature that enabled users to “appear” offline. With this small change, users gained back some level of control over how openly they shared their online activity. It wasn’t all perfect, though.
Every change involving micro-privacy has a counter-reaction that can go from barely noticeable, to harmful, to downright problematic.
In its wake, the offline status left behind a trail of paranoia that gave rise to tools that allowed users to screen whether friends had blocked them. These third-party tools encouraged anyone to become a cyberspace Sherlock Holmes and check in on their contacts’ statuses.
As we will see, this is a common chain of events in the realm of messaging. Every change involving micro-privacy has a counterreaction that can go from barely noticeable, to harmful, to downright problematic. So what is micro-privacy?
Micro-privacy in everyday products
When I say micro-privacy, I’m referring to the small nuggets of information that reveal something about a user’s online activity.
What characterizes micro-privacy is that a minimal amount of information can have huge repercussions on product engagement, user behavior, and well-being.
In simple terms, design teams can build more engaging products by reducing privacy on two ends: either between the provider and its users or among the users themselves. We spend a lot of time worrying about the former, but almost completely neglect the latter.
Let’s have a closer look through another example that might feel strangely familiar.
Are you still there?
Microsoft was in trouble. Their platform gained a lot of traction but one of the things that kept plaguing the early versions of MSN was flaky internet connections. When two users talked to one another, you could never tell whether the person you were talking to was still there, whether they went away, or whether their connection had simply timed out. Sometimes sending a message felt like sending it into a vortex. You never knew whether you were going to get something back.
In order to better set expectations, the chat community developed a linguistic toolbox to let others know when they might not respond immediately. As a result, chat rooms of the early 2000s were full of acronyms like AFK (away from keyboard) and BRB (be right back).
Then a team of engineers at Microsoft came up with a genius micro-interaction that would redefine the psychology of messaging as we know it forever.
In order to set expectations and make conversations feel more engaging, the team introduced what they called the typing indicator. Every time users started writing a message, it sent a signal to the server that would in turn inform the person on the other end that the user was typing. This was a massive technical bet considering the cost of server space. Around 95% of all MSN traffic was not the content of the messages itself, but simple bits of data that would trigger the iconic dots to show up and disappear!
Karen is typing…
From an engagement model perspective, the typing indicator flipped all the right behavioral switches that got people hooked. Every time someone started typing, it created anticipation followed by a variable reward. Today, this is a well-researched area in psychology that serves as a foundation for anyone attempting to build addictive products.
The typing indicator elegantly solved what the team had set out to solve. But it also did a bit more than that. Apart from increased engagement, it also single-handedly introduced a whole new level of emotional nuance to online communication. This seemingly small detail inadvertently conveyed things no message by itself ever could. Picture this scenario:
Bob: “Hey Anna! It was so great to meet you. Would you like to go out for a drink tonight?”
Anna: Starts typing…
Anna: Stops typing…
Anna: Starts typing again…
Anna: “Sure!”
How convinced is Anna really? You might have experienced it yourself: The angst of prolonged typing indicators followed by a short response or even worse—nothing! Bob might have been happier if he hadn’t observed Anna’s typing pattern. But he did. And now he wonders how such a tiny animation can have such a profound impact on how he feels.
It turns out, Bob isn’t alone. It didn’t take long before users started coming up with strategies and hacks to regain control over their micro-privacy and online activity, from typing their message into a document and then copy/pasting it over, to first thinking hard before even attempting to write something.
This problem gets further exacerbated in modern applications that involve group chat, always-on messaging services, and dating apps. But this was still before the iPhone came along to change the internet as we know it.
Today, typing indicators are ubiquitous. And while we can’t argue that it made messaging more useful, it also made it more addictive by playing an innocent but powerful sleight of hand: We were handed an exciting pair of cards, at the cost of someone observing us from the other side.
Of course, this wasn’t the last time we happily played along.
Where have you been?
Divorce lawyers in Italy know something that you and I don’t. But it first took a shift in technology for them to get to that insight. That shift kicked off in late 2007, when we went from a type of internet we used at home and at the office to the type of internet that was with us at all times.
The introduction of the iPhone marked a technical leap that affected every aspect imaginable in computing and with it, every aspect of society.
When former Yahoo! engineers Brian Acton and Jan Koum tried the iPhone for the first time, they immediately saw huge potential in the device and its App Store model. They started working on a new type of messaging app that included an online status as part of the core messaging experience. They gave it a catchy and memorable name — WhatsApp — to sound like the colloquial “what’s up?” everyone is familiar with.
Growth was relatively slow and the two almost decided to give up on their venture. That changed when Apple introduced a new service that almost instantly catapulted their brainchild to the top of the App Store: the push notification system. With that, their user base shot up to 250,000 in no time.
There were a couple of things that made WhatsApp different and attractive. First, it sent messages over the internet so users no longer had to pay for every single SMS. Second, it reintroduced the online status that had originally been developed during a time of chat rooms and flaky internet connections over a decade earlier. And third, it featured the infamous typing indicator we’ve all come to love. All these things combined made WhatsApp feel lightyears ahead of any traditional SMS application of its time.
Today, WhatsApp has more than a billion users and it’s the preferred way of sending messages in many countries all around the world. One of those countries is — you guessed it — Italy!
According to Gian Ettore Gassani — president of the Italian Association of Matrimonial Lawyers — WhatsApp messages sent by cheating spouses play an integral role in 40% of Italian divorce cases citing adultery, writes Rachel Thompson from Mashable.
The thing that often led to those deeply troublesome insights? The “last seen online” indicator. Unlike the traditional online status of the early 2000s, “last seen” added a new level of insight to written chat: The exact time someone last used WhatsApp.
Last seen online indicator (WhatsApp illustration)
Like any service that turned the knob on micro-privacy, the outcome was predictable—high user engagement at the cost of reduced user-to-user privacy.
What does it mean when your spouse was last seen online at 4:30 in the morning? Why would someone not pick up the phone minutes after they had just been seen online? How come your secret crush and your best friend always seem to be online at the same time—coincidence?
Coincidence or not, users decided to start doing something about it to get their micro-privacy back. In very little time, the internet lit up with tons of articles and tutorials both through written and step-by-step video instructions. These tutorials ranged from creating a fake last-seen status, to freezing the time display, to disabling it altogether.
The last seen “feature” had such strong psychological impact on users that some started referring to it as Last Seen Syndrome (LSS). In her research about how WhatsApp impacts youth, Anshu Bhatt notes “This app has been found to be highly addictive, which leaves a trace that becomes difficult to control.” The myriad of articles offering advice on how to control privacy, limit time spent in the app, and outsmart the last seen indicator further offers a glimpse into the challenges many users are facing today.
And just when it seemed there wasn’t any more micro-privacy we would willingly disclose, there was still one tiny area that went largely overlooked…
Now you see me!
Replying late to incoming texts or emails used to be simple: A short “only saw this now” was good enough to get back to someone without any feeling of guilt or fear of retaliation. Today, we’re all in need of a better alibi.
It was again a seemingly small “detail” that deeply reshaped our experience and expectations toward one another. Like many of the ideas we’ve discussed so far, this one too can be understood as loosely inspired by technology that was invented decades earlier. In this case, it was email.
Manually entering an email address was (and still is) an error-prone process. The idea of sending messages digitally was both novel and hard to grasp. Upon hitting the send button, users had very little information as to whether their message was delivered, pending, or aborted. To offer more transparency and make email more understandable, Delivery Status Notifications (DSN) were introduced. Through DSN, users gained more insight into what happened to their message after hitting the send button.
Fast forward 30 years and the industry keeps solving similar problems, but in a slightly different context and a slightly different moment in computing history.
In 2011, Apple introduced iMessage. What made iMessage different from its predecessor was that it seamlessly migrated users from sending messages through the traditional SMS protocol, to sending them over the web. This set the foundation needed for iMessage to evolve beyond a simple text messaging app.
Among the many newly introduced changes was an inconspicuous “feature” that quickly became known as one of the most contentious and controversial moves in the messaging space: read receipts. | https://medium.com/swlh/the-loss-of-micro-privacy-baa088f0660d | ['Adrian Zumbrunnen'] | 2020-02-14 21:33:34.228000+00:00 | ['Ideas', 'Technology', 'Design', 'Privacy', 'Social Media'] |
Understanding Clustering in Unsupervised Learning | There are three cases in Unsupervised Learning
Clustering, Dimensionality Reduction, and Association Rule
Clustering: grouping data based on similarity patterns
There are several methods or algorithms that can be used for clustering: K-Means Clustering, Affinity Propagation, Mean Shift, Spectral Clustering, Hierarchical Clustering, DBSCAN, etc.
In this section, we only explain the intuition of Clustering in Unsupervised Learning
Clustering : Intuition
Clustering data based on similarity patterns into 1 group
Clustering data based on similarity patterns into 2 groups
Clustering data based on similarity patterns into 3 groups
Clustering data based on similarity patterns into 4 groups
How do we know that a point belongs to the same group as another point?
As mentioned above: based on similarity patterns.
How do we measure the similarity of one point to another?
The answer is: based on distance.
How do we measure the distance from one point to another?
There are several ways to measure distance:
Euclidean Distance
Manhattan Distance
Minkowski Distance
Hamming Distance
Euclidean Distance
Euclidean Distance represents the shortest distance between two points.
source : Role of Distance Metrics in Machine Learning
Mathematically, for two points a = (a1, a2, …, an) and b = (b1, b2, …, bn), we can write this formula as
d(a, b) = sqrt((a1 - b1)² + (a2 - b2)² + … + (an - bn)²)
Example case :
In this case, the Euclidean Distance between the points is 6.3
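As a quick illustration in Python (the two points here are only assumed for this sketch, not taken from the original example):
import numpy as np
a = np.array([1, 2]) # assumed illustrative points
b = np.array([3, 8])
euclidean = np.sqrt(np.sum((a - b) ** 2))
print(round(euclidean, 1)) # 6.3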
Manhattan Distance
Manhattan Distance is the sum of absolute differences between points across all the dimensions.
Mathematically, we can write this formula as
d(a, b) = |a1 - b1| + |a2 - b2| + … + |an - bn|
Example case :
In this case, the Manhattan Distance between the points is 8
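A minimal Python check, using the same assumed points as in the Euclidean sketch above:
import numpy as np
a = np.array([1, 2])
b = np.array([3, 8])
manhattan = np.sum(np.abs(a - b))
print(manhattan) # 8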
Minkowski Distance
Minkowski Distance is the generalized form of Euclidean and Manhattan Distance.
Mathematically, for order q, we can write this formula as
d(a, b) = (|a1 - b1|^q + |a2 - b2|^q + … + |an - bn|^q)^(1/q)
Minkowski distance can work like Manhattan or Euclidean distance. The selected q value determines how the Minkowski distance behaves (see the small sketch after this list):
q = 1: Manhattan distance
q = 2: Euclidean distance
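Here is a small sketch of that behaviour in Python, again with the assumed points from the earlier sketches:
import numpy as np
def minkowski(a, b, q):
    # q = 1 gives Manhattan distance, q = 2 gives Euclidean distance
    return np.sum(np.abs(a - b) ** q) ** (1 / q)
a = np.array([1, 2])
b = np.array([3, 8])
print(minkowski(a, b, 1)) # 8.0 (Manhattan)
print(minkowski(a, b, 2)) # ~6.32 (Euclidean)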
Hamming Distance
Hamming Distance measures the similarity between two strings of the same length: it is the number of positions at which the corresponding characters are different.
Mathematically, for two equal-length strings a and b, we can write this formula as
d(a, b) = the number of positions i at which a[i] ≠ b[i]
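A tiny Python sketch of this definition (the two example strings are just an assumption for illustration):
def hamming(s1, s2):
    # assumes both strings have the same length
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))
print(hamming("karolin", "kathrin")) # 3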
Example case : | https://arifromadhan19.medium.com/understanding-clustering-in-unsupervised-learning-b0d7a5f61f03 | ['Arif R'] | 2020-11-01 16:31:25.812000+00:00 | ['Machine Learning', 'Euclidean Distance', 'Unsupervised Learning', 'Manhattan Distance', 'Clustering'] |
How we build products at Everoad. Part tres: Cadence. | Part tres. We ship & support. Relentlessly.
Another former boss of mine — the great James Cox, now building the future of subscription-based mobility (check out Canoo) — nagged me for months to improve my ‘operating cadence’. I remember walking up and down Market street thinking “so what exactly does he mean there”. Was I supposed to attend more meetings? Was I not communicating enough? Or did I just need to toss in more stuff in my weekly schedule?
It took me a bit of time — and, true story, some dancing classes too — to understand what an operating cadence meant. Building a product, a company or the coolest restaurant in Europe (you’re welcome, The Barbary), requires groove. That’s right, groove. Having a plan is good. Translating that plan into a roadmap, even better. But operating a roadmap is what truly matters. In other words, it’s all about execution. So today, we’ll show you how we execute at Everoad, to ship products and support them, relentlessly.
Turning problems into projects.
If you remember from our previous chapter, we plan every quarter, narrowing our priorities to a list of around 30 items. These items are problems, meaning that we have a clear view of what’s not working but we have no idea — at this stage — of how we’ll solve them.
This is where we turn into project mode. There is a lot of literature on how Product Management is not Project Management (here, there and there). I tend to agree with them but I also believe that a good Product Manager is an amazing Project manager meaning that they can leverage a set of working methodologies to build products and features that address user needs while providing return on investment for the company (just to be clear, ROI is not just a financial metric).
The very first thing we do when kicking off work on one of our priorities is to create a Product Requirements Document. A PRD is a very common tool among Silicon Valley companies, but it hasn’t really made its way across the Atlantic yet. For us, PRDs are the very backbone of each project. They help us document and store information in a single spot, they foster collaboration for projects that are at least ten-hand-made, and most importantly they provide transparency to the entire organization about what we plan to build.
Breaking down projects into smaller ones.
Now, even with the help of a well-written PRD, the task would be immense for our team to execute alone. They wouldn’t know what to do or where to start given the number of things required for an idea to become live. To help drive clarity, we have chopped projects into 5 distinct phases. Just like breaking down problems into smaller ones help better address them, breaking projects into smaller phases help team-members focus on specific tasks, thus increasing their efficiency and our overall velocity. | https://medium.com/everoad/how-we-build-products-at-everoad-part-tres-cadence-e1935c3b054c | ['Benjamin Chino'] | 2020-01-08 10:11:52.924000+00:00 | ['Startup', 'Cadence', 'Product', 'Product Management', 'Execution'] |
How to Make Sense of “I Don’t Like It” | Designers! We’ve all had the experience of showing our work to someone — usually someone with more power and clout than we have— who looks at our work, shrugs, and says “I don’t like it.”
It could be a client, or a manager, or a colleague. But it’s happened to every single one of us. It’s so frustrating. There’s no acknowledgement of our hard work, no understanding of the decisions we’ve made. And worse, there’s no way of knowing what you should change to make them like it.
It’s easy to blame the stakeholder, of course. They should be giving better feedback! They don’t understand my value! They aren’t listening when I explain my choices! How can I create something they do like if I don’t have clear direction?
The unspoken question in your mind is probably something like: well, how do I make you like it?
Sometimes, this question even starts to cramp our design work. It becomes easy to fall into the trap of saying “I’m not going to design this way, because my product manager doesn’t like it.”
No, no, no. “My PM doesn’t like it” is not useful on its own.
I’ve got some news for you: this lack of useful feedback is not our stakeholders’ problem. The ways we talk about our work in design-only critique aren’t things they have insight into. The language we use to explain, rationalize, and defend our work isn’t language they use day to day. So we can’t blame them for not knowing how to talk about design intelligently. As designers, this is our problem to solve.
We can’t blame our stakeholders for not knowing how to talk about design intelligently. As designers, this is our problem to solve.
First, we can avoid this situation in the first place by talking about our work with more clarity and purpose; next, we can learn to solicit better feedback from colleagues; and finally, we can spread a culture of better design feedback throughout our organizations.
Here’s how.
How to avoid getting this feedback
Start with goals
I’ve seen so many feedback sessions where designers open by showing their work, and asking “what do you think?” Or, “I’m open to any feedback you have.”
This doesn't set the session up for success—it practically invites "I don't like it."
Remember Design Critique 101: start with the goals of the project. Mention those goals at the start of every formal design review. Even if you’re meeting with your product manager who created the goals of the project, remind them before you show them your next iteration. If you need a structure for this, check out Jared Spool’s Short Form Creative Brief.
Why do this? To make sure everyone has the same goals in mind. Or as Jared Spool puts it, to make sure everyone is actually working on the same project.
Sometimes the goals are obvious, but sometimes there are goals the PM has only in her head. Or maybe the PM had a recent stakeholder meeting where the project goals had to shift, and she hasn’t shared that with you yet. Or maybe there are people in the room who aren’t aware of the goals at all.
Regardless, it’s a smart idea to start every design review by stating the goals and making sure everyone’s in agreement. Even if everyone is already on board, it sets a tone and frames the conversation.
To put it another way, it becomes harder for your PM to say she doesn’t like that color blue when you’ve just reminded her that you’re focused on increasing user acquisition by 10%.
Structure the conversation
I’ve witnessed design reviews that were run by a product manager, even though the designer was in the room. This isn’t good —it disempowers the designer to feel ownership over their work, and is a missed opportunity to showcase the thought process that went into the design.
Additionally, when the PM presents the work, it removes an opportunity for the designer to practice articulating the goals of the project. Everyone on the team should be able to do this, not just the product manager. Alignment on goals is everyone’s responsibility, and everyone should be able to articulate them.
As the designer on the project, design review is your meeting, regardless of who created the calendar invitation. You should be on the hook not only to present and discuss your own work, but also to structure the session in a way that solicits the most effective feedback.
In other words: If your work is being discussed, you should be presenting your work, and it’s your meeting.
If your work is being discussed, you should be presenting your work, and it’s your meeting.
(Caveat: if it’s an executive-level review, the product manager, or someone above you in the food chain, may choose to discuss the work whether or not you’re in the room. This can be all right, depending on the type of review it is.)
Here are some ideas for structuring the conversation for the best possible outcome.
Open by restating the project goals, and asking for the feedback you want. If you want visual design feedback, ask for it; if you're specifically not asking for visual design feedback because that's still in flux, say that out loud. If content isn't final, say that too.
Give yourself a time limit, and stick to it. This is a feedback session, not just a presentation, so make sure you give your stakeholders more time to discuss your work than you spend presenting.
Don't present directly from Sketch. This might be okay for designer-only critique depending on your team, but up your game for stakeholder meetings. All that panning and zooming in a formal meeting is distracting for many stakeholders.
Present from a tool that has inline comments. I once managed a designer who presented from Zeplin, and recorded the feedback in Zeplin live during the meeting. This is smart—it allows direct commenting on certain areas of the design, and also demonstrates to the stakeholders that their feedback was heard and recorded. It also allows other stakeholders, ones who may not have been in the review, the opportunity to see those comments. And bonus: the designer gets to track all of the feedback in the same place.
Ask stakeholders to hold their thoughts until the end of your presentation. Provide paper (I like post-its) and pens so your colleagues can write their thoughts down — this relieves them of the anxiety that they might forget some critical piece of feedback. And it helps get everyone off their laptops.
Request that it be a laptops-down meeting. You can't give thoughtful design feedback when you're looking at your own screen. This can be a delicate conversation to have; I often indicate that it's a laptops-down meeting in the invitation, and remind people at the beginning of the meeting, so the conversation is less awkward. (Suggested language to use: "Good morning, thanks for coming. Reminder that I'd prefer this be a laptops-down meeting if possible, so I can get the most effective feedback.")
What to do when you hear it anyway
Sometimes, even when you do everything to avoid hearing “I don’t like it,” you still get it anyway.
What to do? Ask them why. The trick is to come across as receptive to the feedback, and curious to learn more. Here are some phrases to use.
“Ok, thanks for letting me know. What don’t you like about it?
“Ok, thanks. Tell me more about that.”
Use your “five whys.” Go deep until you understand the root of the problem. Prompt them if you need to. The conversation might go a little bit like this.
PM: "I don't like it."
You: "Ok, thanks. What don't you like about it?"
PM: "It feels too… busy."
You: "Ah, I understand. The project goals mean that we have to have a lot of information on the page at once, and it's a challenge to figure out how to get it all on there."
PM: "Yeah — I guess I just don't know where to look first."
Boom. You’ve just identified that you have a visual hierarchy problem. That’s a problem you can address!
Use frameworks to spread a culture of good critique
Many years ago, I was working on some enterprise software, partnering with engineers who had some very strong opinions. They had built the software from scratch, without a lot of input from design, and weren’t willing to implement design decisions that they didn’t pick apart and explicitly approve.
I wasn’t experienced enough to know how to solicit better feedback from these engineering stakeholders, and the only feedback I got from them was “I don’t like it.” And yes — I tried to dig deeper, but I was iterating in circles.
Six weeks into the job, I was so frustrated that I started polishing up my resume.
Thankfully, my manager at the time stepped in. She ran a workshop with the designers and engineers to create a framework and structure for giving and receiving better design critique. She calls it Embodied Critique; you should go check it out.
Ever since then, I’ve been a huge fan of creating and socializing critique frameworks. Frameworks give you hooks to hang your thinking on. And better, if all people in an organization understand the framework, it becomes an effective, meaningful shortcut in your design reviews.
Personas are a type of critique framework. If you can center the conversation around the human being using your software, and if you know enough about them to understand what they care about, you can often have a really great discussion around a given design approach. Personas let you answer questions like:
What does your target user’s day look like? Is their workflow interrupted a lot? Design for an interruptible workflow.
What’s the median age of your target user? A lot of people over 40 have eyesight issues — make sure your color and font size choices take this into consideration.
What’s your user’s understanding of industry jargon? Stakeholders love to iterate on content. Frame your content choices in terms of your persona’s fluency with the subject area.
The most recent framework I’m enjoying is Tenets and Traps. Developed by folks at Microsoft and elsewhere, this approach tries to put a cohesive framework around questions like:
What are the well-established tenets that make a design “good?”
How do we recognize good design when we see it?
What are the common UI traps that can degrade these tenets?
The best thing about critique frameworks: they communicate that design operates with rigor. We aren’t just making our decisions out of nowhere, or from our “gut.” Frameworks create a process that everyone can participate in, and assist in understanding how we make decisions.
The best thing about critique frameworks: they communicate that design operates with rigor.
Pick a framework. Practice it within the design team, see how it goes, and then try rolling it out to the entire department or company. | https://medium.com/swlh/how-to-make-sense-of-i-dont-like-it-5859fd1f60cb | ['Janet Taylor'] | 2019-09-29 16:09:15.889000+00:00 | ['Feedback', 'Critique', 'UX', 'Design'] |
Home | Every sad song
All the best songs
Start with home
Walking away
Running towards
Or just missing home
But every song
That's ever been written
Is about home
It is true
And every piece I write
Every lyric, or poem
All the prose I compose
They all tip toe
Gently and ever so quietly
Around about home
Home I miss
Home I never had
Or the home I am making
When we first met
You had this kind of sparkle
A grumpy glow
It shimmered in the whites of your eyes
It left trails wherever you walked
You took coffee
And the piano music
And you made it a fairy tale
You grasped at stories
And wove them in your hair
In your clothes
And in your soul
Then you dared me
To follow the thread
Of copper blue magic
To wherever it led
I stayed up all night
Unraveling every tale
Until I was drunk in a basement
With a story and a headache
I didn’t stop till you told me
That in all the good tales
There is a home hidden away
I read the stories
Wrote the stories
Till your arms became
Home
Home
Home
Till your story and my story
Tangled and twisted
And your magic became my magic
And I became a blue bronze strand
Of the nest you wove from the world
And you became
Home
Home
My Home | https://medium.com/the-pom/home-df9c41df0d7f | ['Luisito Gavara'] | 2020-07-02 03:53:21.298000+00:00 | ['Poetry', 'Pomprompt', 'Poetry On Medium', 'Storytelling', 'Home'] |
How It Felt to Live Surrounded by AIDS Before Treatment | I don't want to answer your question, because it's very painful to take myself back there. But our stories need to be told, I suppose, so I'm going to try.
I was outside the US and mostly isolated from the epidemic during much of the 1980s. In my deployed military world, AIDS happened to other people, far far away. I worried about HIV in a detached sense, but not in a personal one. Then I left the Air Force and landed in Manhattan in 1990.
When the wheels of that DC-10 set down at JFK, they were delivering me into the very peak of plague I barely comprehended. It’s so hard to bring it to life with mere words, to describe. I felt so much in the next few weeks.
Overwhelming fear.
Stubborn optimism.
Fierce anger.
The more I integrated into my new world, the more tragedy I experienced, the more my soul seemed to catch fire. I lived and worked in Chelsea and Greenwich Village. Almost all my friends and coworkers were gay men.
Nobody knew who was next.
You have to understand that given the long incubation period and the even longer progression to illness, that many gay men were at risk because of sexual contact they’d had before they knew they needed to be safe.
The Sword of Damocles dangled everywhere.
We argued about whether we should be tested. Did we even want to know if the virus was slowly and relentlessly multiplying in our blood? What was the point of knowing? No medicine could help us.
Funerals for young men happened every damn day.
Purple blotches stained faces on every corner. Refugees from the Bible Belt were terrified of going “home” to die. They wanted to stay with their friends as they drew their last breaths.
The living tended the dying.
Cooke was my friend.
He had been a fashion model. From somewhere inside that gaunt face peered the beautiful man he had once been. He was cheerful and friendly as he died. He didn’t want to be a burden.
He didn’t make it.
Neither did Allen. Or Phillip. Or Antonio. Or Charles. I have tears streaming down my cheeks as I remember them all.
I think of one sweet boy in particular. He was little more than a child when he gave up precautions and sero-converted on purpose. He couldn’t take it. He didn’t know how to live surrounded by all that death. He gave up and he died. The virus was merciful.
It took him fast.
We fought back, though.
We educated ourselves and we organized. We cared for our friends first. We fed them, we nursed them, we bathed them, and we buried them.
We fought for money for research, and we fought for affordable treatment. We fought the apathy of our government and we fought the homophobic evil of the Roman Catholic Church, which at least in New York poured so much energy and money into fighting safer sex education.
We lived all along.
What else is there? We danced and sang. We partied in the streets. We roared our defiance into the dark night. We lived and we loved as we died.
Then one day it ended.
Just like that.
Effective treatment came out toward the end of the nineties. It was like a miracle. People who were almost dead recovered overnight.
We looked up and the sun burnt our eyes as it rose to end a long, dark night. We didn’t know how to feel. We didn’t know what to think next. We didn’t know what to do next. We blinked and asked ourselves an important question.
How do you recover from that? | https://jfinn6511.medium.com/how-it-felt-to-live-surrounded-by-aids-before-treatment-9cf780acc4d2 | ['James Finn'] | 2019-10-19 12:23:36.035000+00:00 | ['This Happened To Me', 'HIV', 'Mental Health', 'LGBTQ', 'Activism'] |
Numpy Crash Course — Building Powerful n-Dimensional Arrays with NumPy | Introduction
NumPy is a Python library used to perform numerical computations with large datasets. Numpy stands for Numerical Python and it is a popular library used by data scientists, especially for machine learning problems. NumPy is useful during pre-processing the data before you train it using a machine learning algorithm.
Working with n-dimensional arrays is easier in Numpy compared to Python lists. Numpy arrays are also faster than Python lists since unlike lists, NumPy arrays are stored at one continuous place in memory. This enables the processor to perform computations efficiently with NumPy arrays.
In this article, we will look at the basics of working with Numpy including array operations, matrix transformations, generating random values, and so on.
Installation
Clear installation instructions are provided at the official website of NumPy, so I am not going to repeat it here again. Please find the instructions here.
Working with NumPy
Importing NumPy
To start using NumPy in your script, you have to import it.
import numpy as np
Converting Arrays to NumPy Arrays
You can convert your existing Python lists into NumPy arrays using the np.array() method.
arr = [1,2,3]
np.array(arr)
This also applies to multi-dimensional arrays. Numpy will keep track of the shape (dimensions) of the array.
nested_arr = [[1,2],[3,4],[5,6]]
np.array(nested_arr)
NumPy Arange Function
When working with data, you will often come across use cases where you need to generate data.
Numpy has an "arange()" method with which you can generate a range of values between two numbers. The arange function takes the start, end, and an optional distance parameter.
print(np.arange(0,10)) # without distance parameter
OUTPUT:[0 1 2 3 4 5 6 7 8 9]
print(np.arange(0,10,2)) # with distance parameter
OUTPUT: [0 2 4 6 8]
Zeroes and Ones
You can also generate an array or matrix of zeroes or ones using NumPy (trust me, you will need it). Here's how.
print(np.zeros(3))
OUTPUT: [0. 0. 0.]
print(np.ones(3))
OUTPUT: [1. 1. 1.]
Both these functions support n-dimensional arrays as well. You can add the shape as a tuple with rows and columns.
print(np.zeros((4,5)))
OUTPUT:
[
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
]
print(np.ones((4,5)))
OUTPUT:
[
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
]
Identity Matrix
You can also generate an identity matrix using a built-in Numpy function called “eye”.
np.eye(5)
OUTPUT:
[[1., 0., 0., 0., 0.]
[0., 1., 0., 0., 0.]
[0., 0., 1., 0., 0.]
[0., 0., 0., 1., 0.]
[0., 0., 0., 0., 1.]]
NumPy Linspace Function
NumPy has a linspace method that generates evenly spaced points between two numbers.
print(np.linspace(0,10,3))
OUTPUT:[ 0. 5. 10.]
In the above example, the first and second params are the start and the end points, while the third param is the number of points you need between the start and the end.
Here is the same range with 20 points.
print(np.linspace(0,10,20))
OUTPUT:[ 0. 0.52631579 1.05263158 1.57894737 2.10526316 2.63157895 3.15789474 3.68421053 4.21052632 4.73684211 5.26315789 5.78947368 6.31578947 6.84210526 7.36842105 7.89473684 8.42105263 8.94736842 9.47368421 10.]
Random Number Generation
When you are working on machine learning problems, you will often come across the need to generate random numbers. Numpy has in-built functions for that as well.
But before we start generating random numbers, let's look at two major types of distributions.
Normal and Uniform Distribution
Normal Distribution
In a standard normal distribution, the values peak in the middle. The normal distribution is a very important concept in statistics since it is seen in many natural phenomena. It is also called the "bell curve".
Uniform Distribution
If all the values in the distribution have a constant probability, it is called a uniform distribution. E.g., a coin toss has a uniform distribution since the probability of getting either heads or tails is the same.
Now that you know how the two main distributions work, let's generate some random numbers.
To generate random numbers in a uniform distribution, use the rand() function from np.random.
print(np.random.rand(10)) # array
OUTPUT: [0.46015141 0.89326339 0.22589334 0.29874476 0.5664353 0.39257603 0.77672998 0.35768031 0.95087408 0.34418542]
print(np.random.rand(3,4)) # 3x4 matrix
OUTPUT:[[0.63775985 0.91746663 0.41667645 0.28272243] [0.14919547 0.72895922 0.87147748 0.94037953] [0.5545835 0.30870297 0.49341904 0.27852723]]
To generate random numbers in a normal distribution, use the randn() function from np.random.
print(np.random.randn(10))
OUTPUT:[-1.02087155 -0.75207769 -0.22696798 0.86739858 0.07367362 -0.41932541 0.86303979 0.13739312 0.13214285 1.23089936]
print(np.random.randn(3,4))
OUTPUT: [[ 1.61013773 1.37400445 0.55494053 0.23133522] [ 0.31290971 -0.30866402 0.33093618 0.34868954] [-0.11659865 -1.22311073 0.36676476 0.40819545]]
To generate random integers between a low and high value, use the randint() function from np.random
print(np.random.randint(1,100,10))
OUTPUT:[64 37 62 27 4 33 23 52 70 7]
print(np.random.randint(1,100,(2,3)))
OUTPUT:[[92 42 38] [87 69 38]]
A seed value is used if you want your random numbers to be the same during each computation. Here is how you set a seed value in NumPy.
To set a seed value in NumPy
np.random.seed(42)
print(np.random.rand(4))
OUTPUT:[0.37454012, 0.95071431, 0.73199394, 0.59865848]
Whenever you use a seed number, you will always get the same array generated without any change.
Reshaping Arrays
As a data scientist, you will work with re-shaping the data sets for different types of computations. In this section, we will look at how to work with the shapes of the arrays.
To get the shape of an array, use the shape property.
arr = np.random.rand(2,2)
print(arr)
print(arr.shape)
OUTPUT:[
[0.19890857 0.00806693]
[0.48199837 0.55373954]
]
(2, 2)
To reshape an array, use the reshape() function.
print(arr.reshape(1,4))
OUTPUT: [[0.19890857 0.00806693 0.48199837 0.55373954]]
print(arr.reshape(4,1))
OUTPUT:[
[0.19890857]
[0.00806693]
[0.48199837]
[0.55373954]
]
In order to permanently reshape an array, you have to assign the reshaped array to the ‘arr’ variable. Also, reshape only works if the existing structure makes sense. You cannot reshape a 2x2 array into a 3x1 array.
Slicing Data
Let's look at fetching data from NumPy arrays. NumPy arrays work similarly to Python lists during fetch operations.
To slice an array
myarr = np.arange(0,11)
print(myarr)
OUTPUT:[ 0 1 2 3 4 5 6 7 8 9 10]
sliced = myarr[0:5]
print(sliced)
OUTPUT: [0 1 2 3 4]
sliced[:] = 99
print(sliced)
OUTPUT: [99 99 99 99 99]
print(myarr)
OUTPUT:[99 99 99 99 99 5 6 7 8 9 10]
If you look at the above example, even though we assigned the slice of “myarr” to the variable “sliced”, changing the value of “sliced” affects the original array. This is because the “slice” was just pointing to the original array.
To make an independent section of an array, use the copy() function.
sliced = myarr.copy()[0:5]
Slicing multi-dimensional arrays works similarly to slicing one-dimensional arrays.
my_matrix = np.random.randint(1,30,(3,3))
print(my_matrix)
OUTPUT: [
[21 1 20]
[22 16 27]
[24 14 22]
]
print(my_matrix[0]) # print a single row
OUTPUT: [21 1 20]
print(my_matrix[0][0]) # print a single value or row 0, column 0
OUTPUT: 21
print(my_matrix[0,0]) #alternate way to print value from row0,col0
OUTPUT: 21
Array Computations
Now let's look at array computations. Numpy is known for its speed when performing complex computations on large multi-dimensional arrays.
Let’s try a few basic operations.
new_arr = np.arange(1,11)
print(new_arr)
OUTPUT: [ 1 2 3 4 5 6 7 8 9 10]
Addition
print(new_arr + 5)
OUTPUT: [ 6 7 8 9 10 11 12 13 14 15]
Subtraction
print(new_arr - 5)
OUTPUT: [-4 -3 -2 -1 0 1 2 3 4 5]
Array Addition
print(new_arr + new_arr)
OUTPUT: [ 2 4 6 8 10 12 14 16 18 20]
Array Division
print(new_arr / new_arr)
OUTPUT:[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
For division by zero, NumPy will not raise an error. Instead, it issues a RuntimeWarning and converts the value to nan (Not a Number) for 0/0, or to inf for a non-zero value divided by zero.
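For example (NumPy also prints a RuntimeWarning here):
print(np.array([0, 1, 2]) / 0)
OUTPUT: [nan inf inf]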
There are also a few in-built computation methods available in NumPy to calculate values like mean,standard deviation, variance, etc.
Sum — np.sum()
Square Root — np.sqrt()
Mean — np.mean()
Variance — np.var()
Standard Deviation — np.std()
While working with 2d arrays, you will often need to calculate row wise or column-wise sum, mean, variance, etc. You can use the optional axis parameter to specify if you want to choose a row or a column.
arr2d = np.arange(25).reshape(5,5)
print(arr2d)
OUTPUT: [
[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]
[20 21 22 23 24]
]
print(arr2d.sum())
OUTPUT: 300
print(arr2d.sum(axis=0)) # sum of columns
OUTPUT: [50 55 60 65 70]
print(arr2d.sum(axis=1)) #sum of rows
OUTPUT: [ 10 35 60 85 110]
Conditional Operations
You can also do conditional filtering using NumPy using the square bracket notation. Here is an example. | https://medium.com/manishmshiva/numpy-crash-course-building-powerful-n-dimensional-arrays-810edc87dcc7 | ['Manish Shivanandhan'] | 2020-09-22 18:25:55.786000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Numpy', 'Data Science', 'Deep Learning'] |
GDPR Vs blockchain. The ‘right to be forgotten’ versus the technology that never forgets | The General Data Protection Regulation, or GDPR for short, will come into force across Europe from 25 May 2018. GDPR gives individuals more control over data held about them by organisations. Individuals will be able to order an organisation to carry out a range of actions with their data including exercising their ‘right to be forgotten’; to have the organisation delete all the data it holds about them.
At Studio Block we work on blockchain technology; digital data-stores with high data integrity where information can't be deleted even if you want to do so. GDPR and blockchain seem to be at loggerheads with one another.
Anonymous data
The key to the puzzle is anonymous data. Truly anonymous data does not count as personal data in the eyes of the GDPR. The new law applies to ‘information relating to an identified or identifiable natural person’. The key part is that term; ‘identifiable’. It does not apply to data that ‘does not relate to an identified or identifiable natural person or to data rendered anonymous in such a way that the data subject is no longer identifiable.’
Interestingly GDPR talks about the data owner being a ‘natural person’. The definition of this is a ‘human being; a real and living person, possessing the power of thought and choice’. So, if you die, no one else has GDPR rights over your data and companies can continue holding the data of the dead. In contrast, the exact meaning of ‘anonymous’ is not well defined. Just removing a name from some data does not necessarily make it anonymous (the combination of age, gender and postcode is enough to exactly identify 87% of the USA population). In the terms of GDPR, data is anonymous if you can’t process it and link it to someone by legal means, for example by cross referencing it with other data that you have a legal right to use.
There was a ruling in Germany recently where dynamic IP addresses were classed as personal data because a website operator has the legal means (via the ISP) to identify the visitor whose IP address it was.
Pseudonymous data
Pseudonymisation is a sort of half-way house to anonymous data. It means storing the identity items of the personal data separately from the main body of the personal data and then linking them together in the system with some sort of association. Any request for deletion of data can be fulfilled by deleting just the identity items, leaving the rest of the data present in the system as non-personal, anonymous data (it actually still is personal data in a way, but you just don’t know whose it is).
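As a very rough sketch of that idea (the names and structures below are invented purely for illustration, not a real implementation), the pattern could look something like this in Python:
import uuid
identities = {} # mutable, off-chain store that can be deleted from
ledger = [] # append-only store, standing in for a blockchain
def record(name, payload):
    pid = str(uuid.uuid4()) # random pseudonym linking the two stores
    identities[pid] = name # identity items kept separately
    ledger.append({"subject": pid, "data": payload})
    return pid
def forget(pid):
    identities.pop(pid, None) # deleting the link leaves the ledger entries anonymous
pid = record("Alice", "order history: 3 purchases")
forget(pid)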
Pseudonymisation reduces risks with data storage but it doesn’t automatically make data exempt from GDPR. The data complies with GDPR only if it is pseudonymised correctly, i.e. when you get rid of the key record then the remaining data becomes truly anonymous. Once again the GDPR is clear that if you can cross reference with other data sources and get the person’s identity then pseudonymisation isn’t really pseudonymisation:
“Personal data which have undergone pseudonymisation, which could be attributed to a natural person by the use of additional information, should be considered to be information on an identifiable natural person” (or in other words ‘If you’ve pseudonymised it but you can still work out who it’s about then it hasn’t really been pseudonymised!’).
For example, if you pseudonymised some information simply by taking the name details out of it but left the address in it, then that data would still count as personal data because someone could legally get hold of voter records and link that address to the person.
Tread carefully
The conclusion seems to be that blockchain can be used in a post GDPR world but that you have to tread carefully and plan all your data structures and processes with the restrictions of GDPR in mind at all times. This mindful approach to developing data systems is itself enshrined as part of the GDPR legislation.
A final thought to ponder on is that if appropriate care is not taken and personal data ends up on the blockchain, then the very nature of the blockchain itself makes conventional prosecutions difficult. As a blockchain is not really owned by anyone and as it is stored in multiple copies on different computers then who should get prosecuted under GDPR? The person with a copy on their machine? The creator? And even though fines can be levied, the blockchain can’t be undone, the data on the blockchain will always be there.
If you’d like to chat about your tech project, get in touch with Simpleweb today.
Want to read more?
Find out about scaling trust using blockchain: An Interview with Hugh Karp of Nexus Mutual | https://medium.com/simpleweb/gdpr-vs-blockchain-the-right-to-be-forgotten-versus-the-technology-that-never-forgets-8f5b484aa7e1 | [] | 2018-04-13 15:32:20.597000+00:00 | ['Technology', 'Startup', 'Blockchain', 'Gdpr', 'Privacy'] |
10 Essential GitHub Repos For Software Developers | 10 Essential GitHub Repos For Software Developers
Amazing coding resources, free courses, interview preparation, programming best practices, and more
GitHub can also be a tool for learning and growth
Besides being a great tool for maintaining code, GitHub can also be a tool for learning and growth. As a Software Developer, I am always on the lookout for useful GitHub repos that I can learn and find inspiration from. Here are 10 of my favourite.
GitHub stars: 80.2k
This is a fantastic resource for anyone who is looking to build something and is after some guidance on exactly how to approach it. You can also just find lots of really interesting stuff by browsing through the list.
GitHub stars 79.8k
One of the differences between a Software Engineer and a Software Developer is that the Engineer is more likely to have a good grasp of algorithms and data structures. But whatever your background, this repo provides a thorough list of many different algorithms, data structures and answers to many typical questions you might expect to come across in a Software Engineering interview.
GitHub stars: 64.6k
Whether you’re a person looking to get into coding, or a self-taught developer who is already in the industry, the OSSU curriculum provides loads of free study for anyone who is looking to study Computer Science.
GitHub stars: 59.2k
Well over 100 snippets of code covering all sorts of things in JavaScript, from typical algorithms, to common tasks you might find yourself needing to do. Well worth a look!
GitHub stars: 46.1k
Ever wanted to learn how to build a proper app in a given language/technology? This is the repo for you! Going way beyond the typical “todo” app, RealWorld examples go ahead and flesh out an entire ‘Medium-style’ app, with all the bells, whistles, and best practices included!
GitHub stars: 158k
It’s exactly what it sounds like. Loads and loads of free programming books to help take your knowledge and understanding to the next level.
GitHub stars: 105k
Having the ability to design a large-scale system is highly valuable and something that many of the big tech companies will expect from you if you’re looking at any Senior Software Engineering (and higher) roles. It’s also a critical skill if you plan to build any large scale system for anything you’re working on. This guide provides loads of information that will help to prepare you.
GitHub stars: 86k
A curated list of lots of different libraries, frameworks, and technologies built in Python. An excellent guide for anyone looking to learn a new programming language or simply looking to level up their existing knowledge of Python.
GitHub stars 51.5k
I cannot get enough of best practice guides. So when I found this one, it felt like something I definitely had to include. One of the curses of being self-taught is that you don’t always begin with best practices. So having detailed guides such as this help to quickly level up your skillset.
GitHub stars: 46.2k
Similar to the curated Python list we saw earlier, this repo includes loads of valuable resources related to the field of Machine Learning.
And there we have it! 🎉
I hope you found this useful and will learn something new from any of the 10 repos we have covered today. If there are any other great GitHub repos that you feel I should know about, leave a comment with your suggestion and I’ll be sure to check it out.
And if you enjoyed this article, you can get more similar content by subscribing to Decoded, our YouTube channel! | https://medium.com/javascript-in-plain-english/10-essential-github-repos-for-software-developers-6a42ebba279 | ['Sunil Sandhu'] | 2020-11-05 14:00:57.356000+00:00 | ['JavaScript', 'Web Development', 'Software Development', 'Software Engineering', 'Programming'] |
Sending Email with Java Mail. Email stands for Electronic Mail that… | Email stands for Electronic Mail, and it makes our lives easier and more reliable: we can send a message to anyone in the world within a minute. Email providers like Gmail, Outlook, and Yahoo offer free email services for personal users, which is a great thing because we rely on email to make our daily work easier. For example, universities use email to send information to their students, such as class schedules and job offers, departments use it to exchange data, and companies use it to share documents. We all use email in our daily lives, but what about when we need a robot that sends an email at a specific time when we are not available? Let's take an example of why we might need an email bot: suppose you made a scraping bot and you want it to email you the output data when the scraping is done. In that case, we need an Email Bot.
In this article, we will walk through how to develop an email bot in Java using the Java Mail module. Without wasting any time, let's jump straight into it.
Installing Java:
We need the Java JDK, which stands for Java Development Kit and is needed for writing Java programs. You can download the latest version of the JDK from the Oracle website here: Site_URL. Follow these steps to install the Java JDK properly on your operating system.
Step 1: Download JDK from the site
Step 2: Install the JDK
Step 3: Add the Java JDK bin folder to the Environment Variables
Step 4: Verify the JDK by typing Java in the Command prompt
Step 5: Write your first Hello world test program
Step 6: Compile and Run your java program
For macOS and other operating system guides, please visit that site
JDK or JRE?
Many new Java programmers are also confused about whether to choose the JDK or the JRE. The JRE stands for Java Runtime Environment and is needed for running Java programs, while the JDK is a development kit that includes the JRE along with tools such as the compiler and debugger, so you don't need to download the JRE separately. With the JDK, you get a JRE of the same version installed.
Download Module:
We need the Java Mail module for coding our Java email bot. Download the latest version of Java Mail from the following site.
URL: https://www.oracle.com/java/technologies/javamail-releases.html
Coding Part:
First things first, we load the Java Mail modules and the built-in Java modules. In lines 1 to 2 we load the built-in Java modules, in lines 3 to 5 we load the activation packages, and at last we load the Java Mail modules. We will walk through the functionality of each function that we loaded from these modules in the next code part.
We create a public class with the same name as the Java file we are working in. In the first part of the code we loaded the required modules, and in the next part we create a class named EmailUtill, which will be our main class. Within it, I made another static method named Email, to which I pass 4 parameters: one of Session type and the other 3 of String type, which are the body, the subject, and the receiver's email. From line 21 to 25 we set up the new message using MimeMessage msg = new MimeMessage(session); and then, using the message header functions, we populate the headers with information about what type of email we are sending. I set it to the text/html format, and the encoding will be utf-8, which covers the character sets of almost all languages.
In line 27 I used the msg.setFrom() function, in which we create a new InternetAddress that holds the sender's email address, and on the next line the msg.setReplyTo() function holds the email of the receiver. From line 31 to 35 we set up the email body, subject, and date. In the next line, I use the msg.setRecipients() function, to which we add the address information we stored in the InternetAddress, and at last, with Transport.send(msg), we deliver our email to the receiver's address. If you noticed, we used a try and catch block, which is useful because if we hit any bug during execution we will get an error message printed via the printStackTrace() function.
Send Email in Java with Attachment:
We have seen how to send a simple email with Java; now we will learn how to send an email with an attachment, which can be a file or an image.
If you look at the code, we set up the body, subject, login details, etc. like we did in the previous code. From line 14 to 17 we create a new message body part using the MimeBodyPart() constructor and pass the body to it, and in the next step we create a MimeMultipart() to which we add the MimeBodyPart() as a parameter; this is done so we can send our message body together with an attachment. From line 20 to 27 we set up the filename that we need to send, storing the filename in a String variable. I attach a text file, but you can attach any file format you want to send to the receiver's email.
The next step is passing the filename to the setFileName() function, then adding the body part we made to the MimeMultipart() via the addBodyPart() function. On the next line, we pass the multipart variable, which includes all the parts of the message including the body and the attached content/file, to the msg.setContent() function, and at last we send the email with Transport.send().
Send Email with Image Attachment:
In this part, we will walk through how to make a Java Mail program that can send mail with an image attachment. In simple words, the image can be viewed in the email body. Take a look at the code below.
If you look at the code, we already covered half of it in the previous section. From line 23 to 38 we set up the image as an attachment in the email. I again created a new MimeBodyPart() and passed the body into it; next, I declared a String variable and stored the name of the image with its extension. On the next lines, we set the headers for the image using setHeader(), so that when the email headers are read by the receiving backend, it knows that the email has an inline image attachment.
On the next line, I created a new MimeBodyPart() and, using its setContent() method, I passed the name of the image that will be shown as a title. If you notice, I used <h1></h1> tags; that is because the email is opened in an HTML view, so the <h1> tag will render the text at heading size. In the final lines, we again use Transport.send() to deliver our email.
So in the end, we learned how to send email using Java Mail. I hope you learned something from this article, and feel free to share your opinion. | https://medium.com/javarevisited/send-email-with-java-mail-3379285f109 | ['Haider Imtiaz'] | 2020-12-07 07:34:45.423000+00:00 | ['Software Development', 'Coding', 'Java', 'Programming', 'Email']
Why I Decided to Stop Using a Pen Name | At the beginning of April, I decided to take a leap and start writing on Medium.
I hadn’t been on the site very long. Like, probably twenty-four hours.
But from the moment I made my account and started reading some of the articles, I absolutely knew I had to start writing too. This was the perfect place to start publishing, the perfect place to start a blog. I was so hooked I didn’t wait to figure anything out. I just picked a pen name, made a new email, and started writing.
Turns out, I still had quite a few things to learn (I still do!). But by using my pen name, I felt completely safe to publish anything I wanted. I could bare my soul and it would be fine because no one I knew would ever find it. I didn’t have to risk any embarrassment while I tried something new. And, if I quit, no one had to know about that, either.
The perks of a pen name were numerous:
Got to go by a much cooler name than my real one.
People could easily pronounce my pen name.
The pen name embodied a certain feel I wanted to portray as a writer (think J.K. Rowling or J.R.R. Tolkien).
I got to write without fear of judgment while I figured out a new platform.
At the same time, I was also on a job hunt. It’s my senior year of college and I was starting to panic about what I was going to do when summer finally hit and I still hadn’t landed a full-time job. But then, through Medium, I discovered freelance writing.
It was like I had been trying really hard to focus on an image, squinting and blinking furiously, but the shapes wouldn’t form together. When I learned about freelance writing, it was like I blinked once and suddenly — there it was. All the images aligned and came into focus with dazzling detail.
Why was I desperately hoping someone would hire me? I know I’m qualified! I should hire myself!
Well, after watching as many YouTube playlists on the topic as I could, reading a dozen articles, and downloading several ebooks, I soon realized I was going to need a website and a portfolio.
People were going to have to read my writing.
And attach my name to it.
Marketing yourself is the name of the game.
I now had a choice to make. Was I going to perpetuate this pen name deal into my actual line of work? Would I be using this name for clients? Having people call me by it in person?
As I thought more about the situation, I realized a few key things:
It’s going to be difficult to live my entire life as a different person. Bruce Wayne and Clark Kent know what the heck I’m talking about.
People love to see faces. They don’t want to just think your writing is great, they want to fall in love with the person too.
And finally:
I can’t hide.
The truth is, if you want to be successful at something, anything, you’re going to have to start somewhere (usually at the bottom). And you’re also going to have to tell people about it.
As much as I may want to find a tiny corner of the internet, set up shop, and then never let anyone know what I’m doing, that isn’t a very honest way to live my life.
It’s much more important to let people know who I am, what I’m doing, and yes, that I’m going to make blunders along the way.
As much as possible, I believe that we should tell people the truth about ourselves for two reasons:
Because what kind of a person are you if you always hide your interests/goals/pursuits? and You’ll find so many like-minded individuals out there if you just let them know you’re out there too.
People need people. We can’t be islands and we can’t pretend to be something we’re not. Taking the leap to let other people know about something in your life is scary, but so often, once we’ve taken that leap we discover they either do the same thing too, or they want to help!
While I will be abandoning my old persona, I don’t regret using the pen name for a while. It helped me get started and hit the ground running without reservations.
But now I’ve decided to start telling everyone the truth about myself, and I’m never going back. I may find some people who judge me, yes. I’m also going to find ten more people for each of those judgy ones who are supportive and excited about my new endeavors.
Here’s to all the future blunders,
G.C. | https://medium.com/a-life-of-words/why-i-decided-to-stop-using-a-pen-name-88592ae4009f | ['Grace Claman'] | 2020-05-18 18:48:21.752000+00:00 | ['Pen Names', 'Writing Life', 'Freelance Writing', 'Writing', 'Personal Growth'] |
3 Simple Outlier/Anomaly Detection Algorithms every Data Scientist needs | 3 Simple Outlier/Anomaly Detection Algorithms every Data Scientist needs
Get an in-depth understanding about outlier detection and how you can implement 3 simple, intuitive and powerful outlier detection algorithms in Python
Photo By Scott.T on Flickr
I’m sure you have come across a few of the following scenarios:
Your model is not performing as you wanted it to. You can’t help but notice that some points seem to differ greatly from the rest.
Well congratulations, because you might have outliers in your data!
What are Outliers?
Photo can be found in StackExchange
In statistics, an outlier is a data point that differs significantly from other observations. From the figure above, we can clearly see that while most points lie in and around the linear hyperplane, a single point can be seen diverge from the rest. This point is an outlier.
For example, take a look at the list below:
[1,35,20,32,40,46,45,4500]
Here, it is clearly easy to see that 1 and 4500 are outliers in the dataset.
Why are there outliers in my data?
Usually, outliers can occur from one of the following scenarios:
Sometimes they can occur by chance, possibly because of a measurement error. Sometimes they can occur in the data as data will rarely be 100% clean without any outliers present.
Why are outliers a problem?
Here are several reasons:
Linear models
Let’s suppose you have some data, and you want to predict house prices from it using Linear Regression. A possible hypothesis could look like this:
Photo By Author
In this case, we are actually fitting the data too well (overfitting). However, note how all the points are situated in roughly the same range.
Now, let’s see what happens when we add an outlier.
Photo By Author
Clearly, we see how our hypothesis has shifted, and therefore inference will be much worse than it would be without the outlier. Linear models include:
Perceptron
Linear + Logistic Regression
Neural Networks
KNN
2. Data imputing
Photo by Ehimetalor Akhere Unuabona on Unsplash
A common scenario is to have missing data, and one of two approaches can be taken:
Remove instances with missing rows Impute data using a statistical method
If we were to go with the second option, we could end up with problematic imputations, as outliers can greatly change the values produced by statistical methods. For example, going back to our fictional data with no outliers:
# Data with no outliers
np.array([35,20,32,40,46,45]).mean() = 36.333333333333336

# Data with 2 outliers
np.array([1,35,20,32,40,46,45,4500]).mean() = 589.875
Clearly this analogy is quite extreme, but the idea remains the same; outliers in our data is usually a problem, as outliers can cause serious problems in statistical analysis and modelling. However, in this article, we will be looking at a few ways into how we can detect and combat them.
Solution 1: DBSCAN
Photo By Wikipedia
Density-based spatial clustering of applications with noise(or, more simply, DBSCAN) is actually an unsupervised clustering algorithm, just like KMeans. However, one of its uses is also being able to detect outliers in data.
DBSCAN is popular because it can find non-linearly separable clusters, which can’t be done with KMeans and Gaussian Mixtures. It works well when clusters are dense enough and are separated by low-density regions.
A high-level overview of how DBSCAN works
The algorithm defines clusters as continuous regions of high density. The algorithm is quite simple:
1. For each instance, it counts how many instances are located within a small distance ε (epsilon) from it. This region is called the instance’s ε-neighbourhood.
2. If the instance has more than min_samples instances located in its ε-neighbourhood, then it is considered a core instance. This means that the instance is located in a high-density region (a region with many instances inside it).
3. All instances inside a core instance’s ε-neighbourhood are assigned to the same cluster. This may include other core instances, therefore a single long sequence of neighbouring core instances forms a single cluster.
4. Any instances that are not a core instance, or are not located in any core instance’s ε-neighbourhood, are outliers.
DBSCAN in Action
The DBSCAN algorithm is very easy to use thanks to Scikit-Learn’s intuitive API. Let’s see an example of the algorithm in action:
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=1000, noise=0.05)
dbscan = DBSCAN(eps=0.2, min_samples=5)
dbscan.fit(X)
Here we instantiate DBSCAN with an ε-neighbourhood length of 0.2, and 5 as the minimum number of samples required for an instance to be considered a core instance.
Remember, we do not pass in our labels as it is an unsupervised algorithm. We can see the labels that the algorithm produced using the following command:
dbscan.labels_

OUT:
array([ 0, 2, -1, -1, 1, 0, 0, 0, ..., 3, 2, 3, 3, 4, 2, 6, 3])
Note how some labels have values equal to -1: these are the outliers.
DBSCAN does not have a predict method, only a fit_predict method, meaning that it can’t cluster new instances. Instead, we can train a different classifier to predict on new data. For this example, let’s use a KNN:
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=50)
knn.fit(dbscan.components_, dbscan.labels_[dbscan.core_sample_indices_])

X_new = np.array([[-0.5, 0], [0, 0.5], [1, -0.1], [2, 1]])
knn.predict(X_new)

OUT:
array([1, 0, 1, 0])
Here, we fit the KNN classifier on the core samples and their respective neighbours.
However, we run into one problem; we have given the KNN data without any outliers. This is problematic, as it will force KNN to choose a cluster for new instances, even if the new instance is indeed an outlier.
To combat this, we leverage the kneighbors method of the KNN classifier, which, given a set of instances, returns the distances and indices of the k nearest neighbours from the training set. We can then set a maximum distance, and if an instance exceeds that distance, we qualify it as an outlier:
y_dist, y_pred_idx = knn.kneighbors(X_new, n_neighbors=1)
y_pred = dbscan.labels_[dbscan.core_sample_indices_][y_pred_idx]
y_pred[y_dist > 0.2] = -1
y_pred.ravel()

OUT:
array([-1, 0, 1, -1])
Here we have discussed and implemented DBSCAN for anomaly detection. DBSCAN is great because it’s quick, has only two hyperparameters and is robust to outliers.
Solution 2: IsolationForest
Photo By Author
An IsolationForest is an ensemble learning anomaly detection algorithm, that is especially useful at detecting outliers in high dimensional datasets. The algorithm basically does the following:
1. It creates a Random Forest in which Decision Trees are grown randomly: at each node, features are picked randomly, and a random threshold value is chosen to split the dataset into two.
2. It continues to chop away at the dataset until all instances end up isolated from each other.
3. An anomaly is usually far away from other instances, so, on average (across all Decision Trees), it becomes isolated in fewer steps than normal instances.
IsolationForest in Action
Again, thanks to Scikit-Learn’s intuitive API, we can easily implement the IsolationForest class. Let’s see an example of the algorithm in action:
from sklearn.ensemble import IsolationForest
from sklearn.metrics import mean_absolute_error
import pandas as pd
We will also import mean_absolute_error to measure our error. For the data, we will use a dataset that can be obtained from Jason Brownlee’s GitHub:
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
df = pd.read_csv(url, header=None)
data = df.values

# split into input and output elements
X, y = data[:, :-1], data[:, -1]
Before we fit an Isolation Forest, let’s try fit a vanilla Linear Regression model on the data and get our MAE:
from sklearn.linear_model import LinearRegression

lr = LinearRegression()
lr.fit(X, y)
mean_absolute_error(lr.predict(X), y)

OUT:
3.2708628109003177
A relatively good score. Now, let’s see if the Isolation Forest can improve the score by removing anomalies!
First, we will instantiate our IsolationForest:
iso = IsolationForest(contamination='auto',random_state=42)
The most important hyperparameter in the algorithm is probably the contamination parameter, which is used to help estimate the number of outliers in the dataset. This is a value between 0.0 and 0.5 and by default is set to 0.1
However, it is essentially a randomised Random Forest, so all the hyperparameters of a random forest can also be used in the algorithm.
Next, we will fit the data to the algorithm:
y_pred = iso.fit_predict(X,y)
mask = y_pred != -1
Note how we also filter out predictions with a value of -1; just as in DBSCAN, these are considered outliers.
Now, we will reassign the X and Y with the outlier-filtered data:
X,y = X[mask,:],y[mask]
And now let’s try fit our Linear Regression model to the data and measure the MAE:
lr.fit(X, y)
mean_absolute_error(lr.predict(X), y)

OUT:
2.643367450077622
Wow, a good decrease in error. This clearly demonstrates the power of the Isolation Forest.
Solution 3: Boxplots + The Tukey Method
While boxplots are quite a common way to identify outliers, I really find that the latter is probably the most underrated method of identifying outliers. But before we get into the Tukey method, let’s talk about boxplots:
Boxplots
Photo By Wikipedia
Boxplots essentially provide a graphical way to display numerical data through quantiles. They are a very simple yet effective way to visualise outliers.
The upper and lower whiskers show the boundaries of the distribution, and anything above or below is considered to be an outlier. In the figure above, anything above ~80 and below ~62 is considered to be an outlier.
How Boxplots work
Essentially, the box plot works by splitting the dataset into 5 parts:
Photo from StackOverflow
Min: the lowest data point in the distribution excluding any outliers.
Max: the highest data point in the distribution excluding any outliers.
Median (Q2 / 50th percentile): the middle value of the dataset.
First quartile (Q1 / 25th percentile): the median of the lower half of the dataset.
Third quartile (Q3 / 75th percentile): the median of the upper half of the dataset.
The Interquartile Range (IQR) is important as it is what defines outliers. Essentially, it is the following:
IQR = Q3 - Q1

Q3: third quartile
Q1: first quartile
In a boxplot, a distance of 1.5 * IQR is measured out and encompasses the higher observed points of the dataset. Similarly, a distance of 1.5 * IQR is measured out on the lower observed points of the dataset. Anything outside these distances is an outlier. More specifically:
If observed points are below (Q1 − 1.5 * IQR) or the boxplot lower whisker, then they are considered outliers.
Similarly, if observed points are above (Q3 + 1.5 * IQR) or the boxplot upper whisker, then they are also considered outliers.
Photo By Wikipedia
Boxplots in Action
Let’s see how we can detect outliers using Boxplots in Python!
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

X = np.array([45,56,78,34,1,2,67,68,87,203,-200,-150])
y = np.array([1,1,0,0,1,0,1,1,0,0,1,1])
Let’s plot a boxplot of our data:
sns.boxplot(X)
plt.show()
Photo By Author
So we see that we have a median value of 50 and 3 outliers in our data, according to our boxplot. Let’s get rid of these points:
X = X[(X < 150) & (X > -50)]

sns.boxplot(X)
plt.show()
Photo By Author
Here, I have basically set a threshold so that all points less than -50 and greater than 150 are excluded. The result: an even, outlier-free distribution!
The Tukey Method for Outlier Detection
The Tukey method for outlier detection is essentially a non-visual version of the boxplot; the method is the same, except that there is no visualisation.
The reason I sometimes prefer this method, as opposed to the boxplot, is that sometimes looking at a visualisation and making a rough estimate of where the threshold should be set is not really effective.
Instead, we can code an algorithm that can actually return the instances that it defined as outliers.
The code for the implementation is the following:
import numpy as np
from collections import Counter


def detect_outliers(df, n, features):
    # list to store outlier indices
    outlier_indices = []

    # iterate over features (columns)
    for col in features:
        # Get the 1st quartile (25%)
        Q1 = np.percentile(df[col], 25)
        # Get the 3rd quartile (75%)
        Q3 = np.percentile(df[col], 75)
        # Get the Interquartile range (IQR)
        IQR = Q3 - Q1

        # Define our outlier step
        outlier_step = 1.5 * IQR

        # Determine a list of indices of outliers
        outlier_list_col = df[(df[col] < Q1 - outlier_step) | (df[col] > Q3 + outlier_step)].index

        # append outlier indices for column to the list of outlier indices
        outlier_indices.extend(outlier_list_col)

    # select observations containing more than n outliers
    outlier_indices = Counter(outlier_indices)
    multiple_outliers = list(k for k, v in outlier_indices.items() if v > n)

    return multiple_outliers


# detect outliers from list of features
list_of_features = ['x1', 'x2']
# params: dataset, number of outliers for rejection, list of features
Outliers_to_drop = detect_outliers(dataset, 2, list_of_features)
Basically, this code does the following:
For every feature, it obtains:
The 1st Quartile
The 3rd Quartile
The IQR
2. Next, it defines the outlier step, which, just like in boxplots, is 1.5 * IQR
3. It detects outliers by:
Seeing if the observed point is < Q1 - outlier step
Seeing if the observed point is > Q3 + outlier step
4. It then selects observations that have more than k outliers (in this case, k = 2). | https://towardsdatascience.com/3-simple-outlier-anomaly-detection-algorithms-every-data-scientist-needs-e71b1304a932 | ['Vagif Aliyev'] | 2020-10-25 09:16:54.151000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Data Science', 'Data', 'Data Visualization']
Which One Is the Best Node.js Framework: Choosing Among 10 Tools | Development frameworks are used to organize the development process. Developers get a ready-made structure for their code base, can apply reusable elements, and can increase product speed. Using web frameworks for the front end is common: developers routinely use frameworks to work with JavaScript.
Using frameworks for the backend is slightly less common, but it is just as helpful. When we work on web development projects, we apply frameworks to the client and server side alike, because it is highly efficient and convenient.
How to choose the best Node.js framework?
When we choose the best Node JS framework for our projects, we always pay attention to determining factors. Normally, it takes experience to select the fitting Node.js framework for a particular task, but there are some rules that work well for all products.
Scalability
Node.js web frameworks provide a defined structure for a codebase. In the long run, they decide what characteristics your product will have and how the app will process data and handle computing. You want a framework that isn’t too opinionated: it shouldn’t limit the possible ways of executing the project. If the framework boxes you into one method, it definitely is not good enough.
On the other hand, you want to be able to use packages, Node.js libraries, and reusable code frameworks. This is where the ecosystem comes in. You want a framework with an actively contributing community, educational materials, and the one used across many industries.
Functionality
If you define quality standards for your Node JS frameworks selection early on, you’ll have an easier time narrowing down the options. The list of optimal functionality is certainly subjective for each development project — still, we have selected our favorites to give you an idea of the big picture.
Support of declarative programming : such programming describes the platform saved by a feature and its solution. We prefer the frameworks that support declarative metadata describing the parameters and middleware of Node.js handlers.
: such programming describes the platform saved by a feature and its solution. We prefer the frameworks that support declarative metadata describing the parameters and middleware of Node.js handlers. Cluster management : it’s nice when a framework allows organizing, previewing, editing, and managing clusters, as well as sorting them by their characteristics.
: it’s nice when a framework allows organizing, previewing, editing, and managing clusters, as well as sorting them by their characteristics. Middleware support : middleware is software that helps developers improve their application’s performance, security, etc. It’s not a framework, but it can be integrated. Middleware helps to optimize your application functionality and to deliver a better experience.
: middleware is software that helps developers improve their application’s performance, security, etc. It’s not a framework, but it can be integrated. Middleware helps to optimize your application functionality and to deliver a better experience. Batch support: not all frameworks are equally good at handling multiple background processes simultaneously. We prefer frameworks that let us access the same databases and APIs caches, regardless if they are currently running. This allows us to get many things done as soon as possible.
Best Node.js API frameworks
We will review the best Nodejs frameworks, describing their respective advantages and use cases. To make this list, we have analyzed statistics, functionality, use cases, and advantages.
Express.js
Express.js is the most popular framework for Node.js. It allows reusing code to process data in web applications, storing user sessions, managing cookie files, and handling the payload. Without a framework, Node.js requires you to rewrite a lot of repetitive processes from scratch.
Express combines Node.js API frameworks with browser-specific APIs that aren’t normally supported by the runtime environment. You can connect the backend code to the browser and store certain files directly in the browser. This is why Express is a top choice for dynamic content. It quickly responds to users’ requests, uploads text, images, videos, and other content on the page.
Advantages
Easy route handling with URLs and HTTP protocol;
Supports middleware, allowing developers to install helpful tools for speed, response, and performance improvements;
Supports multiple template engines;
Works well with both static and dynamic content;
Easily integrates with SQL and NoSQL databases;
As the most popular Node.js framework, it has a rich open-source ecosystem.
Companies that use Express.js
Twitter
BlaBlaCar
Accenture
Uber
Meteor.js
Meteor.js has a reputation for being a go-to Node.js framework for fast development. Its packages and libraries have a lot of reusable components. You will be presented with a very clear structure that will allow you to get your project off the ground in several weeks.
The framework can be integrated with Cordova, which means developers are able to build native mobile apps while reusing Meteor’s code. That said, Meteor is hardly the best option for complex long-term projects; the ecosystem of the framework has the reputation of being a ghost town — the framework has been getting increasingly less popular over the years. Still, it’s a good choice for time- and cost-efficient solutions.
Advantages of Meteor.js:
A fast Node.js framework for MVP development and prototype;
Smart packages: a single command can be used to connect multiple features;
Small codebase: in Meteor, developers can get quite a lot of functionality with only a few lines of code.
Companies that use Meteor.js
Mazda
Accenture
Deloitte
Koa.js
Koa is one of the simplest Node.js frameworks out there. It’s elegant and lightweight. On top of that, the installer file doesn’t feature a built-in templating tool or router. Many Express plugins and libraries have been adapted to Koa. It’s similar to Express, only simpler and slightly less universal.
Koa is not hugely popular among enterprises, and it’s considered a startup solution that offers a lightweight approach to web development. The framework mainly gained traction when Express wasn’t updated for some time, but now, after Express’ team resumed active support, the framework is losing its initial traction.
Advantages of Koa.js
Koa.js doesn’t use callbacks, unlike Express and other frameworks, which allows avoiding callback chaos;
Easier error handling: developers can spot technical issues with a try/catch command;
Koa doesn’t have built-in middleware that could make the application heavier and slow down the software;
Many Express packages and libraries are translated to Koa, so Koa developers get to benefit from Express’ rich ecosystem;
Koa doesn’t spam middleware; in the situations where other frameworks require another layer of software, Koa allows simply writing functions instead.
Companies that use Koa
Bulb
GAPO
Sails.js
Sails.js is a popular framework for small projects. Similarly to Meteor, it doesn’t have the most active community which was probably affected by the absence of any recent updates. However, that aside, Sails is an elegant framework that integrates well with JS frameworks and databases.
Its model-view-controller is similar to what Ruby on Rails offers but supports more data requests. It’s a popular choice for real-time services and data-based applications. The framework is famous for its ability to generate REST JSON automatically and default HTTP support.
Advantages of Sails.js
Sails is one of the most popular Node.js MVC frameworks;
The framework supports multiple databases simultaneously;
Sails has multiple integrations and plugins, and has a solid support of Angular;
The framework automatically generates REST APIs;
Companies that use Sails.js
Lithium Technologies
Greendeck
Nest.js
Nest.js is a framework that uses JavaScript, TypeScript, Functional Programming, Object-Oriented Programming, and Reactive Programming. It can be integrated with other Node.js frameworks, notably Express.
Nest.js’ architecture is hugely inspired by Angular, to the point where both teams participate in events together. The logic behind code providers, controllers, pipes, and interceptors is similar to Angular’s structural solutions. Developers who have worked with Angular before will find Nest easy to master.
Advantages of Nest.js
Integration with Express: add-ons and packages for Express can be easily reused for Nest;
Smooth integration with Angular: teams that use Angular for front-end development, will have no issues connecting Nest to their codebase;
A quickly-growing community: many Node.js frameworks had their growth peak back in 2013, but Nest.js keeps the momentum and can now easily compete with Express;
Ready-to-use patterns: Nest.js is a highly predefined framework that does a lot of basic development activities for developers;
Fast performance;
Easy-to-learn: if you have used Angular or Express before, you would easily switch to Nest.
Companies that use Nest.js
Adidas
Autodesk
Neoteric
Sanofi
LoopBack.js
An open-source Node.js framework that uses ready modules to connect Node.js to APIs of HTML5, iOS, and Android. With LoopBack, applications can record data, upload files, create emails, create push notifications, register users, and request information from databases.
The application supports the SQL and NoSQL databases, allowing conversion of web applications to mobile apps, running the software locally, or in the Cloud, and changing the settings of push-notifications.
Advantages of LoopBack.js
Rich built-in modules: developers can use CRUD APIs, ready code-fragments for tasks like defining user management and access permissions, or interacting with a server- and client-side APIs.
Organized code: LoopBack is an opinionated framework that limits users in their coding freedom. While such an approach limits creative possibilities, it also offers a well-defined structure and increases development speed.
A lot of built-in functionality: predefined functionality is already enough to build a simple project;
Accessible official support: the company that developed the platform has been acquired by IBM and is now supported by its official team;
A code that defines relations and models is stored in a separate JSON file.
Companies that use LoopBack.js
GoDaddy;
Department of Energy of the United States;
Symantec
Hapi
Hapi.js has the reputation of being a safety-focused Node.js framework, and we can confirm that it is the case indeed. The team clearly prioritized code quality control and verification over extensive functionality, but ultimately, came up with a great mix.
First and foremost, Hapi checks each installed NPM package. We already discussed how problematic Node Package Manager content could be — Hapi solves this issue by running a security test. Hapi focuses on enabling advanced cookie functionality, secure HTTPs, and authorization settings.
All security updates in Hapi’s development have a clearly defined immutable ownership. If something goes wrong, developers know whom to hold accountable. Hapi also offers a collection of plugins, although not as many as in Express or Nest.
Hapi’s advantages
Doesn’t require middleware to perform many essential and advanced tasks;
An in-built algorithm for creating and processing cookie files;
High-performance stability and safety: the framework was designed for a Walmart team to support increased traffic during Black Fridays;
Flexible limits for RSS memory, event loop delay, and V8 heap;
Suitable for microservice development: Hapi itself is built as a microservice and uses an HTTP protocol for communication.
Companies that use Hapi
Gozova
Boodle
Adonis.js
Adonis.js is a relatively new Node.js framework for microservices development. The tool itself consists of multiple packages that run the server-side of the application. The framework provides quite a lot of abstraction over Node.js — you won’t be working with typical runtime’s syntax.
The framework was hugely inspired by Laravel, so developers who worked with this PHP framework will perhaps have the easiest time transferring to Adonis.js.
Advantages of Adonis.js
Adonis has native support of Express, MEAN, and Koa;
Adonis.js is not an opinionated framework — it can be compared to Vue.js by the degree of freedom;
Developers can customize WebSocket, ORM, routing, JWT, and other features;
Adonis.js is the first Node.js framework to support JSON APIs natively;
Flexible template engine for dynamic content delivery.
Companies that use Adonis:
Caravelo
R5Stats
Synergy Global Network
Keystone.js
It’s a framework for static and dynamic content management that is commonly used for content-heavy web projects. Information portals, content management systems, online editorials, forums, social media, and e-commerce platforms are the most common applications of Keystone.
Keystone consists of a range of modules, supporting core functionality for backend, UI tools, web protocols, and Mongoose 4 — a database for object modeling in Node.js.
Advantages
A flexible tool for dynamic Node.js development that can handle large masses of content quite well and easily;
Intuitive UI that consists of decentralized modules;
Smart data models enabled by native Mongoose support;
A real-time framework for managing, tracking, and publishing updates;
A native add-on for image editing, storage, and management;
Location tracking with Google Places;
Intuitive embedding with Embedly.
Companies that use Keystone.js
Sony
Vodafone
Macmillan
Total.js
Total.js started as a Node.js framework for real-time applications, REST servers, e-commerce platforms, and Internet of Things projects, but has now turned into an entire platform. It combines the functionality of a platform and a library, providing, on the one hand, an established structure for a web project, and on the other, reusable codebases for different types of projects.
The platform offers more than 100 services for JavaScript development, Cloud computing, code sharing, cooperation, UI development.
Advantages
Computing services and support of Cloud deployment;
An active open-source community;
Code editor;
Thousands of custom Node.js libraries;
A framework precisely describes the structure of content management systems, forums, information portals — developers can start building solutions right away;
Support of classical and dynamic routes; developers can use the existing ones or edit WebSocket on their own;
A built-in mechanism for automatic compression of HTML, CSS, and JavaScript.
Companies that use Total.js
Medium
CapitalOne
Trello
Conclusions
Even though Node.js frameworks technically aren’t essential for backend development, they make a huge difference in development efficiency, product performance, and code quality.
Having access to predefined templates, libraries, and middleware allows developers to save time on writing repetitive code, getting through thousands of callbacks, and struggling to integrate Node.js with front-end frameworks.
We can’t say for sure which one is the best Node.js framework in 2020, although we do have our favorites. Still, when a Node.js project comes along, we always base our choice on the product’s needs and characteristics.
Some frameworks are better equipped to handle dynamic content, while others fit best for MVP development.
A professional team is able to pinpoint the key product characteristics and take them into account while choosing a framework. You are always welcome to talk to our team to get a third opinion on the project’s tech stack. | https://medium.com/dailyjs/which-one-is-the-best-node-js-framework-choosing-among-10-tools-87a0e191eefd | ['Sasha Andrieiev'] | 2020-12-16 16:50:54.581000+00:00 | ['Nodejs', 'Js', 'JavaScript', 'Development', 'Framework'] |
Why You “Get What You Pay For” in SEO | Photo by Kaleidico on Unsplash
“You get what you pay for.” It’s an adage most of us have become familiar with, though its practical usage varies. For most consumer products, there is some degree of correlation between price and quality, but of course there are exceptions to every rule — there are cheap products that work perfectly well, and expensive products that are overpriced garbage.
With search engine optimization (SEO), you’ll be paying for a service — not a product — and because commodities aren’t involved, pricing has much more room for variance and fluctuation. But in my experience, the adage definitely holds true; you get what you pay for in SEO.
But why is this the case? Why do cheap SEO agencies and providers consistently underperform the top professionals in the industry?
Experience and Strategy
In SEO, along with any other area of professional service or work, you’re going to pay more for an experienced candidate. Dedicated professionals have spent years honing their skills and refining their approaches, and the strategies they bring to the table are the product of those trials and lessons. An amateur may be able to bluff their way through the basics, but won’t be able to build you a custom-fitted strategy the way someone with years of experience can.
If you go with the amateur, you’ll spend countless hours running back to the drawing board, ultimately compromising your results and making your amateur costlier than an experienced professional would have been.
Staffing and Customer Care
Though not always the case, more expensive agencies tend to hire a more robust, experienced, and diverse staff than their less expensive counterparts. With a more expensive agency, you’ll likely get your own dedicated account representative, and you’ll have more individual specialists working on your account, rather than one or two generalists. This can help you get better results — and better service if and when something goes wrong.
Quality Content
From the time you come up with new topic ideas to the final publication of content to your site, the higher your quality is, the better it will be for your campaign. You’ll need to find the best topics to write about, optimize the headlines both for click-throughs and for search engines, spend time researching and writing a well-thought-out and detailed original piece, optimize it for search engines, find multimedia content to go along with it (at least occasionally), then publish and distribute it.
If you couldn’t tell, that takes a ton of work — anybody can throw a “decent” article together, but it takes a true professional several hours to complete a landmark piece — and those hours cost money.
Quality Links
Link building isn’t nearly as simple as it used to be. Google looks at the quantity and quality of links pointing back to your domain to loosely evaluate how authoritative — or trustworthy — your site is, then rank it accordingly. The problem is, you can’t just go around posting links wherever you feel like it. Your links must be natural, which usually means you have to establish relationships with external publishers, produce truly amazing content to be published on those publications, and work hard to increase the visibility and value of those pieces (while constantly scouting for new opportunities).
If someone offers to build a link for $5, you know they’re taking shortcuts. This is a lengthy, intensive process, and there are no “tricks” to make it cheaper — at least, not without the prospect of a penalty.
Troubleshooting
Even seasoned SEO professionals are going to run into problems — disappearing links, drops in rankings, visibility issues, and so on. The question is, will they know what to do about it? One of the most important lessons experience teaches you in SEO is how to troubleshoot problems, and only a veteran — or a team of professionals working together — will be able to do so effectively. Troubleshooting isn’t a service you can expect when you’re paying a trivial monthly rate for ongoing work.
Survival of the Fittest
All of these factors are evidence that cheap SEO solutions are usually gimmicks that don’t work — or worse, leave you facing penalties. But what about inexperienced SEO practitioners masquerading as seasoned experts? Wouldn’t they be able to charge exorbitant amounts for their services and fool people into believing they’re better than they actually are?
This is a legitimate possibility, but don’t worry too much — when one person falls for a scheme like this, they usually report it, or leave a negative review. All it takes is a quick search of the company name, and you should be able to get a reasonable idea about their history and performance potential. The “bad” services tend to weed themselves out eventually.
How Much Should You Really Be Paying?
You’re going to hate me for this, but… it depends. No two businesses are going to have the same needs or the same goals. A small business will have a much lower budget than a big business, and businesses in different niches will need SEO for different things (e.g., needing local SEO visibility vs. promoting specific eCommerce product pages). If you have in-house workers with SEO experience, that will also factor into your decision.
Still, no matter whether you seek an agency, a freelancer, or a full-time worker, you should expect to pay at least a thousand dollars a month — any less than that, and I’d be dubious of the potential results. For mid- to large-sized businesses, several thousand dollars a month is a minimum. A few years ago, I published this article at Search Engine Watch which goes into more specifics.
No matter what, you’ll need to do your research in advance. Ask for references. Ask for proof of results. Ask about methodologies, and educate yourself about what SEO truly entails. The more you know, the better decision you’ll be able to make — regardless of the specific price points involved.
For more content like this, be sure to check out my podcast, The Entrepreneur Cast! | https://jaysondemers.medium.com/why-you-get-what-you-pay-for-in-seo-88de92ebc574 | ['Jayson Demers'] | 2020-07-27 18:47:51.996000+00:00 | ['Content Marketing', 'SEO', 'Search Engine Optimizati', 'Entrepreneurship', 'Online Marketing'] |
Why My Best Ideas Come While Doing the Dishes Each Night | When I was a kid, one of the worst things I could hear my parents say to me after a meal was “Chris, do the dishes.” This was a mindlessly boring daily chore which needed to be done, and it seemed to last forever — especially on a nice summer night when all my friends in the neighborhood were outside playing games and I was the one stuck inside.
Growing up, the house I lived in didn’t have a dishwasher, we didn’t really need one — with three boys, we were the dishwashers. Today, our house still doesn’t have a dishwasher, and with four kids…well, I am still the dishwasher (wait, how does that happen?). But, honestly, I kind of like it this way. As I’ve grown into this stage of life called adulthood, I’ve come to realize the many benefits of completing this once loathsome chore.
One such benefit is coming up with topics for upcoming articles…
The many benefits of doing the dishes
Doing the dishes each evening allows me some time for peace and quiet — for some much-needed silence. With working full-time, combined with being a husband and a father of four, things can get pretty hectic. Doing the dishes allows me some time to relax, to reflect on the day, to look out the window over the backyard as the sun sets and appreciate all that I have. It allows me some much-needed time to think and ponder. It’s peaceful, mindless, repetitive, and oddly meditative.
Plus, the kids aren’t anywhere in sight. Don’t get me wrong, I love my kids to death, but the endless barrage of questions a 4-year old can lob at you can be unrelenting sometimes. When I’m doing the dishes they don’t dare come around — lest they get too close and get stuck drying or putting the dishes away.
When I’m doing the dishes I can’t really do anything else. I can’t check my phone (water and smartphones don’t usually mix well), I can’t watch tv, I can’t read, and I can’t be outside with the kids. I’m basically stuck. And this is actually enormously beneficial because it gives my brain a chance to slow down and unwind from the break-neck speed of daily life.
My mind is thus free to wander and think of all sorts of things. I reflect on the day, I think about the days ahead, I daydream, I think of books to read, I contemplate current events, I come up with topics to write about (in fact, the idea for this very article occurred to me while doing the dishes, go figure), the list is endless.
What the research says
I’m not alone in thinking there are great benefits to mundane tasks such as doing the dishes, either. Two fellas you may have heard of, Bill Gates and Jeff Bezos, have admitted they do the dishes every night at home too. When someone asked Bill Gates at a Reddit Ask Me Anything Event in 2014, “What is something you enjoy doing that you think no one would expect from you?” He replied, “I do the dishes every night — other people volunteer, but I like the way I do it.”
Research backs this up as well, finding that mundane repetitive tasks such as doing the dishes can increase your creativity, reduce stress, and improve your mindfulness (it also uses less water).
From Real Simple.com: “A new study conducted by researchers at Florida State University suggests the simple act of washing the dishes could actually encourage a state of mindfulness, which has been linked to improved well-being, reduced levels of stress, and even immune system boosts.”
Additional benefits
I get the same benefits from other mundane household chores as I do from doing the dishes, such as ironing my clothes. Many times, I’ll spend a Sunday evening ironing all the work shirts I plan to wear throughout the week. Not only does this afford me the previously mentioned benefits (less stress, creativity, mindfulness, improved well-being), it also saves me time.
By ironing my five work shirts in one evening I have more time during my mornings. This culminates in one less decision I need to make each day, staving off decision fatigue for a little bit longer. Deep Patel says that by bedtime the average person has made 35,000 decisions. Every decision requires time and energy, and depletes our willpower. And let’s face it, with four kids, I need all the willpower reserves I can get — otherwise it’s ice cream in front of the tv come dinnertime.
Conclusion
So, there you have it. One simple daily task that not only contains health benefits, but boosts your creativity, and reduces your stress.
What could be better?
If we re-frame the seemingly boring or mundane tasks in life, such as washing the dishes, they can actually take on a new meaning and provide tangible benefits — it all depends on how we look at it.
Stuck thinking of an idea for your next blog post? Do the dishes!
Stressed out from work? Do the dishes!
Want the chance to zone out, detach from the day, and just daydream?
Do the dishes.
Maybe this is what Mom and Dad had in mind all those years. Maybe they were on to something… | https://medium.com/swlh/why-my-best-ideas-come-while-doing-the-dishes-each-night-5986cfadf3e7 | ['Chris Schatz Ed.D'] | 2019-07-21 18:01:01.226000+00:00 | ['Self Improvement', 'Life Lessons', 'Self', 'Productivity', 'Mindfulness'] |
Are you Smart-working or Dumb-working? | Are you really working ‘smart’? Or just ‘remote’?
Here comes the catch. By investigating these data a bit more, I found the confirmation of an extremely important concept:
Remote working is just executing your tasks somewhere else; smart working requires a whole new approach to work and different capabilities.
During the lockdowns, organizations have necessarily adapted to go on collaborating and to ensure that the most important processes could be carried on remotely. Most have simply transplanted existing processes to remote work contexts, imitating what had been done before the pandemic. This has worked well for some organizations and processes, but not for others.
Organizations should also reflect on their values and culture and on the interactions, practices, and rituals that promote that culture. A company that focuses on developing talent, for example, should ask whether the small moments of mentorship that happen in an office can continue spontaneously in a digital world.
Balancing Time & Space
Struggle to find where to start? PwC says that executives and employees agree on the top-two requirements remote workers need to increase their productivity — better equipment and greater flexibility in work hours.
Let’s talk about the latter. Some of the biggest changes that we expect are:
Shifts : working 9to5 makes no sense anymore, the same goes for “1 day a week in the office”. Fluid time windows will accommodate workers’ needs. If you have a working spouse/partner and a couple of kids going to school, you know exactly what I’m talking about.
: working 9to5 makes no sense anymore, the same goes for “1 day a week in the office”. Fluid time windows will accommodate workers’ needs. If you have a working spouse/partner and a couple of kids going to school, you know exactly what I’m talking about. Commuting & vacation : Flexible times to “clock in & out” of the office to minimize traffic pollution, together with the ability to take PTO anytime during the year. That means you won’t have to schedule your holiday in August, especially for people living in Europe.
: Flexible times to “clock in & out” of the office to minimize traffic pollution, together with the ability to take PTO anytime during the year. That means you won’t have to schedule your holiday in August, especially for people living in Europe. Location: Focus time, production time, execution time… doesn’t matter how we call it, but it will be mostly done at your place. The office will shift into a collaboration place, where (social-distant if needed) meetings or co-creative sprints will take place. And very likely, those offices evolve from giant offices or campus into smaller spaces, better distributed across urban and sub-urban areas.
Smart or Dumb? Take this quick assessment
If you’re an employer, put yourself in your employees’ shoes. If you are an employee, just answer these few questions:
Am I responsible to perform the same tasks in the same way I was before the pandemic?
Do I have to go through the same internal processes to get things done?
Have I been provided with any new tools that made my work-life easier?
Has my company responded to my concerns about health and safety matters?
Is my company checking my well-being, physical and mental?
Is there a strategy and procedure in place to go back to the office?
If most of your answers are NOs… I’m pretty sure you’re not in the smart club. Sorry.
Conclusion
Smart organizations will boldly question long-held assumptions about how work should be done and the role of the office. There is no one-size-fits-all solution.
The answer, different for every organization, will be based on what talent is needed, which roles are most important, how much collaboration is necessary for excellence, and where offices are located today, among other factors. Even within an organization, the answer could look different across geographies, businesses, and functions, so the exercise of determining what will be needed in the future must be a team sport across real estate, human resources, technology, and the business.
Yet, less than half of executives plan to take steps to help manage workloads or set clear rules on when people must be available. It’s time to change this.
Dedicated service designers, change managers, process specialists, content strategists, data analysts… and a passionate sponsor to lead them. This is just a preview of a team that can really change your post-pandemic employee experience. Before your productivity plummets.
Additional Resources | https://fedino82.medium.com/are-you-smart-working-or-dumb-working-8052e928867e | ['Federico Francioni'] | 2020-11-11 16:49:20.678000+00:00 | ['Employee Engagement', 'Remote Working', 'Innovation', 'Productivity', 'Covid 19'] |
The Four Principles of Good Writing | The Four Principles of Good Writing
Use these powerful tactics to become a better writer.
Image by Carolyn V on Unsplash
Writing is beautiful, but it can also be intimidating. I often feel anxious when I write, and I am sure I am not alone: apprehension about writing is very common, especially among amateur writers.
Having some level of anxiety associated with writing is normal and means that you care about writing well. In excessive quantities, however, stress can be an obstacle and may cause writer’s block.
The writing process doesn’t have to be frightening. If you feel unsure about the quality of your work, try new tactics to improve your writing skills.
According to Shani Raja — a former editor for the Wall Street Journal — there are a few practical techniques writers can use to write better. The secret to becoming an exceptional writer, he says, boils down to four principles: Simplicity, clarity, elegance, and evocativeness.
Simplicity
Simplicity is a key ingredient to make your prose impactful, and there are many things you can do to simplify your language and create writing that is tight — in other words, writing in which all points are expressed efficiently.
First, avoid using fancy language, at least without a purpose. If your word choice is purposeful and clear, using colorful language is not a problem. However, pointlessly using big words just to impress your readers makes your writing slow and heavy. Try to simplify your sentences and to express yourself plainly. If your writing is plain, you will get your message across in the shortest time possible.
In order to achieve maximum efficiency (aka “tight” writing), you also have to cut excess words. To explain this point, Raja uses an example he found in a book on journalism. Imagine a sign above a fishmonger’s that says:
“Fresh Fish Sold Here”
Can you think of which of those words could be deleted? “Here” is redundant because if you are reading the sign, you probably are in front of it. “Fresh” is also redundant because nobody would buy fish that is not fresh. “Sold” is also unnecessary, and even “Fish”, because you can smell it. Now, this is just a humorous example, but the takeaway message is: there is always room for improvement, so keep on cutting until you find the optimal way to express your idea.
To simplify your writing you should avoid redundant words (e.g. “unexpected surprise”), implied words (i.e. words that imply what’s being said, such as “I usually tend”), and long words: short words are faster to read, therefore they are preferable, but you can choose a long word if it makes your meaning more clear.
Don’t create unnecessary ceremony in your paragraphs, like capitalizing words for no reason — I’ve seen a lot of this, and I may even do it myself: putting a capital letter at the front of a generic word, like “government”. A capitalized word is a distraction because it draws attention to itself, and we need to remove all visual distractions if we want to make our writing simple and clear.
Finally, a tip that is particularly important for non-native speakers like me: avoid double negatives (“That attitude won’t get you nowhere”) and be careful to not overuse punctuation, especially commas, as this may compromise clarity and elegance.
Clarity
Clarity is probably the most important ingredient. As a writer you cannot get away with it: clarity must come first.
Through simplicity, you can remove the heaviness; through clarity, you make sure that your message gets across beautifully. If the reader doesn’t understand what you are saying it’s your fault, not theirs.
How do you achieve clarity? First, you need to ask yourself: What am I trying to say? Get your idea down on paper and polish it until you have a meaningful point, otherwise, you may leave a fuzzy idea in your writing.
Second, make sure the causal elements in your sentences line up properly: the logical connection between your ideas should be as clear as possible, enabling the reader to get the meaning of the sentence instantly, without having to make inferences.
Raja also introduces the concept of comma splicing, which is a kind of mistake that happens when you use a comma to join two independent clauses, like: “I love you, I am angry with you now”. You can fix a comma splice by adding conjunctions (“I love you, but I am angry with you now”), by changing punctuation, or by making separate sentences. Comma splicing makes the sentences out of focus and needs to be corrected.
To eliminate any shade of doubt, you should get good at recognizing when ambiguity is present in your prose. Misplaced modifiers, for example, can create confusion and misunderstanding: a misplaced modifier is a word, phrase, or clause that is improperly separated from the word it modifies (e.g. “The pedestrian was hit by a car, sitting on the curb”, should be instead: “A car hit the pedestrian who was sitting on the curb”).
You don’t want your readers to go back and re-read a sentence because they were confused by it, do you? This happens when writing is not straightforward — the reader needs to make a huge effort to understand the meaning of a sentence or paragraph. Try to express your ideas in the clearest way possible, removing any formal or redundant words, as well as any buzzwords and specialized terms (jargon), unless you are addressing a very specific audience.
If you don’t want to harm the clarity of your writing, be highly conscious of your tenses: don’t switch between past, present, and future in your sentences. Finally, remove all the useless information to clear up space for the main idea. Remember that you don’t have to say everything at once, you can save some details for later.
Image by Markus Winkler on Unsplash
Elegance
Elegance is what makes your writing flow. It’s the ingredient that gives your prose order and grace.
Compared to simplicity and clarity, elegance is a more difficult concept to understand, but I will try to explain what I have learned about this powerful tool so you can experiment with it.
Having a house style is critical if you want to write with elegance. A house style is a set of style principles that give unity and consistency to your writing, making it sound more authoritative. Style guides provide rules about capital letters, spelling, date formats, headings, etc. You too can give your articles a look of orderliness using the concept of house style. Whether you choose to italicize your titles or use the Oxford comma, try to avoid inconsistency: choose your own recognizable style rules and stick to them.
Another important quality of exceptional writing is narrative elegance. Everything that you write has a narrative, meaning a certain organization. When your content is well organized and there is a structured progression of ideas, your narrative flows nicely. To give your writing this elegance, you have to create order in your ideas: divide your narrative into meaningful sections and look at which one flows better into the other. By arranging your ideas in a beautiful way, you can achieve narrative elegance.
You also want to consider elegance as a key principle in your decisions about paragraphs. Organize your material in an easy-to-follow, elegant, and logical way. Don’t let your paragraphs get too long: huge blocks are unappealing for readers. Switch paragraphs every time there is a meaningful shift in emphasis.
Another way to bring more elegance to your writing is to make sure paragraphs and sentences transition well. Your prose should move along with gracefulness, with a certain rhythm or musicality.
Finally, try to balance the weights and structure of ideas in your writing, avoiding mistakes in parallel constructions and word echoes (i.e. repetitions of the same word in proximity).
Evocativeness
Evocativeness is the piece that completes the puzzle: evocative writing is writing that has the power to move you emotionally and to fire up your imagination. As a writer, you don’t always need evocativeness, but this ingredient can add something to your prose if you know how to use it properly.
There are a few ways to brings elements of evocativeness to your writing. One way is to create variety: avoid sameness. Make your writing stimulating, don’t use the same kind of words.
Add freshness: ready-made stock phrases like “Needless to say”, “At the end of the day”, “In all likelihood”, etc. as well as hedging words such us “Somehow”, “Somewhat”, “It seems”, “It appears” just make your writing dull. If a word doesn’t add value to your sentence, remove it. Make conscious choices.
If you want to make your writing more stimulating for your readers, focus on people and actions because these are things they can visualize. When you’re trying to create visual imagery carefully consider the structure of your sentences: put the cause first and the effect second to make it easier for your readers to see a picture. In terms of evocativeness, using the passive voice can also make your writing less clear as the picture you are trying to create forms more slowly than if a sentence is phrased actively.
The takeaway
I know, this is a lot to take in, but I hope you found this information interesting and perhaps useful as I did.
It takes a lot of time to perfect one’s writing skills. You will get better with effort and experience.
These principles that I just illustrated are powerful tools you can add to your writing. Keep perfecting these techniques as you write, applying them to your prose for the best result each time. And practice, practice, practice until you find your own voice. | https://medium.com/curious/the-four-principles-of-good-writing-bcf90d421c85 | ['Giulia Penni'] | 2020-12-18 09:52:18.901000+00:00 | ['Writing Tips', 'Writing', 'Writers Block', 'Writer', 'Writers On Writing'] |
Kotlin vs. Groovy: Which Language to Choose | Application of Computer Science to a growing range of fields and general advancements in technologies push programming languages to constant improvement and adaptation to the present-day needs. Now we have a bunch of languages serving different purposes: some of them emerged as an independent project, while others bud off from established and well-known languages.
The colossus of Java, for example, has a number of offspring; some of them have proved to be a success. One of them, Kotlin, was backed by Google as the official language for Android development in 2017 and was reported to be the second most loved and wanted programming language in 2018 Stack OverFlow survey and remains in Top 5 in this year’s survey. Another successful member of Java-based languages is Groovy that is gaining popularity among developers. At the same time, the 2018 Stack OverFlow survey listed Groovy among the most dreaded languages. In this setting, it seems unfair to compare the languages, but let’s see whether Groovy is so dreadful compared to Kotlin and, generally, which of them to choose as another addition to your bag of skills.
Overview
Kotlin
Kotlin is developed by Jetbrains — a company well-known in the Java world for its IDE named IntelliJ IDEA — and open sourced in 2012. It is a high-level, statically typed programming language that runs on Java Virtual Machine (JVM) and can be compiled to JavaScript source code or handle the LLVM compiler infrastructure.
Though internally Kotlin is reliant on the present Java Class library, its syntax may not be specifically compatible with Java. Kotlin has aggressive type inference to decide the type of values and expressions for which type has been left unstated. This makes it less verbose comparing to Java.
Kotlin has a practical mix of features from Java, C# and other new languages. It generally shows many improvements over Java such as null safety or operator overloading, though lacking certain convenient Java properties such as ternary operator and checked exceptions. However, both languages are completely interoperable, so they can co-exist in the same application.
Besides, since Android Studio 3.0 (published in October 2017), Kotlin is also part of the Android SDK and is involved in the IDE’s installation package as an option to the standard Java compiler. The Android Kotlin compiler allows user to target Java 6, Java 7, or Java 8-compatible bytecode.
Kotlin is much appreciated by developers for its interoperability, code security, and accuracy.
Groovy
Groovy is an object-oriented programming language for Java platform that is distributed through the Apache License v 2.0.
The key feature of Groovy is that it aspires to combine the best of two worlds: it supports both static typing typical for Java and more relaxed dynamic typing similar to that of Python.
Moreover, it can be used as both a programming language and a scripting language for the Java Platform. Like Kotlin, Groovy is compiled to Java Virtual Machine (JVM) bytecode and interoperates seamlessly by different Java code and libraries.
Generally speaking, Groovy has a Java-like syntax, but it takes on the ease of more moldable languages, such as Python and Ruby. Groovy can be called a modern Java enhancer, since it provides greater flexibility and introduces special features to applications, such as safe navigation operator (?.), the concept of Closures, Traits, runtime dispatching of methods, Groovy String, Array initialization and many others.
Groovy is a testing-oriented development language with syntax that supports running tests in IDEs, and Java build tools like Ant or Maven. Besides, it provides native support for markup languages like XML and HTML and domain specific languages.
What can also make it attractive to developers is its short learning curve: for Java developers, it is just a step away from the usual syntax, for new learners — it is relatively easy and modern.
Applications
Kotlin
Given the fact that Kotlin as an official Android development language at Google I/O, its most obvious application is Android development.
Speaking more generally, Kotlin is great for developing server-side applications, allowing developers to write concise and expressive code while maintaining full compatibility with existing Java-based technology stacks.
The most prominent applications of Kotlin are rather impressive and include the following giants:
Pinterest moved away from Java to Kotlin for their Android Application Development Gradle built developing android files (APK files) for both IDEA and Eclipse Evernote integrated Kotlin in their android client Coursera built an online courses application for a range of courses Uber made internal tooling processes on Kotlin Atlassian and Trello did a full code conversion of the old codebase. Kickstarter helps find resources for people showcasing creativity.
Groovy
Since Groovy is so similar to Java, it is sometimes difficult to find a distinguishing application for it. One thing that is a definite benefit is that Groovy enables to write scripts besides classes, so you can write applications and scripting with the same language. Groovy scripts are a perfect fit for tasks that change often. Since Groovy is a part of JMeter distribution, it is a good idea to use it for scripting and possibly to migrate the scripting logic developed in other scripting languages, to Groovy.
Examples of using Groovy as a scripting language are rather numerous:
1. Netflix uses Groovy for server-side scripting to offer various levels of filtering Besides, Netflix Spinnaker is implemented in Groovy.
2. Oracle’s fusion middleware is using Groovy scripts in its business component suite.
3. LinkedIn use Groovy also in their “Glu” open source deployment & monitoring automation platform.
The other direction of practical applications for Groovy is to use it as an embedded business language (a Domain-Specific Language):
1. National Cancer Institute uses it to do scientific simulations
2. JPMorgan, MasterCard and other financial institutions use Groovy for its nice DSL capabilities. | https://medium.com/sciforce/kotlin-vs-groovy-which-language-to-choose-47e4369fb905 | [] | 2019-07-30 10:32:31.101000+00:00 | ['Kotlin', 'Software Development', 'Java', 'Programming', 'Programming Languages'] |
The ugly face of Feminist Consumerism | Airtel, one of the leading telecom providers from India aired an advertisement that was supposed to break gender barriers at work, but the ad also reinstated another long standing gender stereotype in the Indian society.There were discussions on social media and it became a news hour debate on TV (probably sponsored by Airtel). I questioned but also defended the ad with few people.
Feminist ideas were used in advertising to make a false promise of empowerment and sell products based on that. Craven A was the first brand to use women’s empowerment message as a way to market their brand. Owing to these ads and the socio cultural norms it purported, women smokers in the US increased from 5 percent in 1923 to 18 percent in 1935 (Penny,2014). The most famous campaign was from Virginia Slims in the 1960s. The extremely disturbing aspect of these advertisements was the fact that they were selling a product that will potentially kill women in the name of women empowerment.
In the recent years, we especially see a surge in the ads targeting women in developing countries that show faux feministic ideologies to sell products that indeed reinstate the same stereotypes. A case in point is the Dove’s real beauty ads that camouflage women’s empowerment message to sell their skin care products.
Recently, FCKH8, an online retailer of clothes, made children from 6–13 speak out supporting gender, race and feminism. But, there were two major issues, one they were eschewing many F-words and the brand was trying to sell anti-sexism t-shirts. This was the same company that tried to sell Ferguson t-shirts when the case was widely publicised in the US.
Johnston and Taylor (2008) call this as ‘feminist consumerism’. The problem with campaigns like Dove’s “Real beauty” or Pantene’s “Sorry, but not sorry” or FCKH8’s campaign is that they expect women to buy a product to feel empowered. These products reestablish the same societal constructs that feminism is against. Grassroots feminist activists make more impact in bringing in change to the society, but the problem is they don’t have the same budgets as these multi-billion dollar companies.
When we voice against objectification of women in advertising, don’t you think even faux idealisms that promote consumerism are also equally dangerous?
List of references: | https://medium.com/sylvianism/the-ugly-face-of-feminist-consumerism-307469f09252 | ['Sylvian Patrick'] | 2018-06-23 07:42:01.741000+00:00 | ['Marketing', 'Feminism', 'Digital Marketing', 'Consumerism', 'Faux Feminism'] |
AutoML with Prevision.io | Overview of Prevision.io auto-ml platform
What is Automated Machine Learning?
Each data scientist, whatever his level of expertise, would tell you that applying traditional end to end machine learning process to real-world business problems is very tedious, time and resource consuming and challenging.
Automated machine learning addresses these issues by applying a systematic process of iterative and time consuming tasks required to develop a machine learning model. All repetitive steps such as model building / training/ selection / tuning .. are fully automated and parallelized.
What about Prevision.io ?
Prevision.io provides an automated machine learning platform to generate and deploy highly accurate predictive models on cloud or on-premise. We propose a friendly interface that can be used without any prior technical knowledge or infrastructure and build standalone models.
At the implementation level, the platform incorporates machine learning best practices from top-ranked kagglers / data scientists to ensure highly efficient models while keeping all the complexity aspects opaque to users.
Prevision.io activity is basically focused in France and is the only French vendor in the AI Cloud Provider. In January 2020, Prevision has been Named a Visionary in the Gartner Magic Quadrant.
How to use Prevision Auto-ML Solution?
Data scientists and machine learning developers who want to keep hold of their ML workflow code source can take advantage of the great power of Prevision AutoML services via Prevision python package without our front-end application. We developed a Software Development Toolkit that allows you to build and launch machine learning use cases within Prevision services. Hence, You can interact with the service in any Python environment (Jupyter Notebooks, Pycharm, VS Code, …). An R package is also provided.
In this post I will show you how to use automated machine learning via Prevision python package, to create a regression model to predict median housing prices. Then I will expose the traditional method and see how it would be much easier to use AutoML comparing to self-made programming machine learning model.
Pre-requisistes
MASTER_TOKEN:
In order to initialize client workspace via the SDK and interact with its corresponding Prevision.io plateform instance, an API Token is required for authentication . It is obtained by going to the user menu and clicking on the API key item:
To copy the KEY clipboard go to the right of the screen and click on copy as shown below:
Install:
git clone https://github.com/previsionio/prevision-python.git
python ./prevision-python/setup.py install
Setting configuration and connection
In the code block below we will import the Python package to initialize the client instance to interact with its Prevision.io workspace. This needs to be done once per session.
import previsionio as pio
import pandas as pd
URL = 'https://XXXX.prevision.io'
TOKEN = '''YOUR_MASTER_TOKEN'''
# initialize client workspace
pio.prevision_client.client.init_client(URL, TOKEN)
Do not forget to change the values of the TOKEN with the generated key and the URL endpoint with the name of your instance in order to continue running this notebook. | https://medium.com/prevision-io/automl-with-prevision-io-9477a50869c0 | ['Zeineb Ghrib'] | 2020-06-12 10:06:23.375000+00:00 | ['Machine Learning', 'Automl', 'Python', 'Regression', 'Data Science'] |
How to make your website clean and maintainable with GraphQL | REST API services, SQL databases, markdown files, text files, SOAP services… can you think of yet another way to store and exchange data and content? Production websites usually work with several different services and ways to store data, so how can you keep the implementation clean and maintainable?
Every Node.js website, regardless if it is a single page application or a regular site, needs to connect to a third-party service or system. At the very least it needs to get content from markdown files or a headless CMS. But the need for other services quickly surfaces. First, it’s a contact form — you need to store its submissions. Then it’s a full-text search — you need to find a service that enables you to create indexes and search through them. And the list goes on and on depending on the size of your project.
What is the problem with that? Well, nothing at first. When you are motivated to finish a project you create a component for each of these functionalities. Communication is encapsulated within the respective components, and after a few quick tests, you are happy it all works. The customer is happy the project was delivered before the deadline, and as a side effect, you also became an expert on a Content as a Service API, form submission services, and automatic search index rebuilding.
You got the website up and running so quickly that you got promoted! And the knowledge of the project and its details with you.
In a few weeks, your colleagues are asked to do some changes to the project. The customer wants to use a different search provider as the original one is too expensive. The developers are also working on another project that needs a contact form, so they thought about using the same component, but store the submissions in a different service. So they come to you asking about the specifics of your implementation.
When you finally give up searching your memory, they will need to do the same research as you did originally to figure out the implementation. The UI is so tightly coupled with the functionality, that when they want to reuse the components, they will probably end up implementing them again from scratch (and maybe copy-pasting bits and pieces of the old code).
Decoupled infrastructure showing GraphQL communication and specific GraphQL resolvers
The Right Level of Abstraction
So how can we avoid these issues to keep our code maintainable and clean? Take a look at the graphic above where I divided the communication with third-party services and the UI. The specifics of each external service API are implemented in the middleware on the back-end of the website. The components on the front-end all use a single way to fetch and submit data — GraphQL.
GraphQL
So what is GraphQL and why use it to communicate between front-end and back-end? GraphQL is a query language, a protocol, that was founded exactly for this purpose — to decouple the data the website front-end needs from the queries required to fetch them. It is similar to a REST API from a functionality point of view as it enables you to query for data. For more information check out the GraphQL homepage.
The main difference is in the way you ask for the data. Let’s say a new developer on the project is tasked with creating a blog page. The page should display blog posts that are stored within a headless CMS. I am using Kentico Cloud, which is a Content as a Service (CaaS) platform allowing you to store various types of content in clear hierarchical structures and obtain the content via a REST API. Therefore the GET request for data using a REST API could look like this:
https://deliver.kenticocloud.com/{projectID}/items?system.type=blog_post
Sample response would be:
{
"items":[
{
"system":{
"id":"0282e86e-8c72–47f3–9d3d-2acf93a8986b",
...
"last_modified":"2018–09–18T10:38:19.8406343Z"
},
"elements":{
"title":{
"type":"text",
"name":"Title",
"value":"Hello from new Developer Evangelist"
},
"content":{
...
}
...
}
}
]
}
The response contains data of all blog posts in JSON form. As the page displays only a list of blog posts, a lot of returned data (starting with content field) are redundant as we do not need to display them. To save bandwidth (which you usually pay for), the developer would need to use additional columns filter:
https://deliver.kenticocloud.com/{projectID}/items?system.type=blog_post&elements=title,image,teaser
They need to know the specifics of the API and probably have its reference open in another browser window while building the query.
Getting the same data with GraphQL is much easier. Its schema is natively describing what the front-end is capable of rendering. The developer needs to specify what data to fetch in graph notation:
query BlogPosts {
getBlogPosts {
elements {
title
image
teaser
}
}
}
(Find more examples of GraphQL queries in this Why GraphQL? article by Shankar Raju.)
Now when you decide to switch the content storage from headless CMS to markdown files or SQL database, the implementation of the blog page will not change. The GraphQL query will still look the same.
How is that possible? Let’s look under the hood for a moment. The separation of the front-end implementation from external services is achieved using the following parts:
GraphQL schema
GraphQL resolvers
Apollo server
GraphQL Schema | https://medium.com/free-code-camp/how-to-make-your-website-clean-and-maintainable-with-graphql-13fe06098656 | ['Ondřej Polesný'] | 2019-02-22 06:30:12.797000+00:00 | ['GraphQL', 'JavaScript', 'Microservices', 'Web Development', 'Nodejs'] |
You Can’t Earn High Self Esteem | We often make the mistake of thinking that we need to acquire certain things to be respected by others. This is a damaging thought because one assumes that one is valued only for what they can get.
A worse thought, however, is the thought that we need to acquire certain things so that we can respect ourselves.
The truth is, a healthy level of regard for yourself cannot and should not be dependent on what you do, what you acquire or what position you have. That’s conditional love and we can’t afford to only be loving to ourselves when we get something valuable.
You have to know that you are inherently valuable. Anything less and the things and people who you consider valuable will not be able to rendezvous with you. If you don’t think you’re appreciated just as you are, people will believe you.
“It took me a long time not to judge myself through someone else’s eyes.” — Sally Fields
Think of the story of the wise man and his ring. When a man who had been criticized his whole life came to the wise man for help, the wise man gave him his ring and told him to sell it in the market, with the asking price being one gold coin.
When the man tried to sell the coin, no one would give him anything close to what he was asking. Most laughed him to scorn.
He returned to the wise man and explained the predicament, to which the wise man simply suggested that he go to a jeweller to appraise the ring.
At the jeweller, the man was offered 58 gold coins but if given more time, he could pay as much as 70 gold coins.
The point of the story is to highlight that not everyone is going to value you, because they simply value other things. That’s okay, but make sure you don’t make your worth dependent on the opinions of others who don’t understand you.
However, I advise you to take it one step further — value yourself. If you are someone who struggles with self-esteem, it’s because you’re waiting on people to validate you. But because you don’t value yourself, you avoid the people who could actually value you. You think to yourself that they couldn’t possibly like you and that certain opportunities are too good for you. This is the bedrock for a negative self-fulfilling prophecy.
“Self-worth comes from one thing — thinking that you are worthy.” — Wayne Dyer
If you decide to be valuable, regardless of what you have or don’t have, you’ll be amazed at the doors that open for you. They were there all along. You just had your back turned to them and instead faced all the dead ends.
Working ourselves to death to earn respect from others or to earn self-respect is putting the cart before the horse. Even if you achieve some great thing, people aren’t going to respect you. They’ll simply look at the person who almost killed themselves just to be liked. | https://alchemisjah.medium.com/you-cant-earn-high-self-esteem-f52b7608311b | ['Jason Henry'] | 2019-04-09 04:33:26.332000+00:00 | ['Self Improvement', 'Personal Development', 'Self', 'Personal Growth', 'Psychology'] |
What’s Next for Mobile Innovation in News? Spurring a New Wave of Collaboration and User-Centered Design. | A big part of the Mobile Innovation Lab’s mission has been to be open about its processes and share what it has learned in the course of its experimentation. As we wrap up two years of work, we still have a lot to share about the results of our final experiments, about where we see big opportunities to better serve mobile audiences and the importance of user-centered design, effective measurement and multidisciplinary work in our innovation projects.
We shared much of what we’ve learned during a day of discussion in New York on March 26. Over the course of the day, members of the lab team and representatives from a broad cross-section of newsrooms discussed the many opportunities — often unknown or unexplored — that mobile offers to create highly relevant news experiences that encourage loyalty among users and open up new possibilities for growth. We also discussed the challenges often associated with innovation work and collaborating across disciplines.
We are now sharing slides and transcripts from the day, below, in the hopes they can be useful to a broader constituency of editors, reporters, product managers, technologists, designers, and others working to bring news into the future. But first…
Why do we think innovation on mobile is still important?
Relationships with readers are essential to the future of media, yet getting the news to them has never been more challenging. Platforms exert outsized influence, making it critical for news outlets to find innovative ways to engage audiences, without intermediaries.
The Lab has found that audiences are eager to consume news in new formats that speak to them in the language of mobile — with notifications, creative use of mobile signals like location data, alternative audio formats, new live coverage layouts and alternative formats for evolving stories. While we have done a lot of work in these areas, we’ve also barely scratched the surface. The opportunities for organizations to drive loyalty and even monetization with useful and relevant news experiences still feels under explored, and under discussed. We hope our event and these notes will help spur on those working within their organizations to take better advantage of these opportunities. | https://medium.com/the-guardian-mobile-innovation-lab/whats-next-for-mobile-innovation-in-news-628780754171 | ['Sarah Schmalbach'] | 2018-03-30 03:00:18.988000+00:00 | ['Innovation', 'User Experience', 'News', 'Mobile', 'Journalism'] |
Top Web Application Development Tools to Build a Website | The world of technology is changing at a rapid pace. Gone are those days when web developers used to burn midnight oil amidst hard drives of a stack of servers. While that may have been true almost a decade ago, these days it is hardly the case. Fast forward to today, there are a plethora of tools to streamline the web development process.
Web development in 2020 is generally done with lots of planning and collaboration between teams, and it has come out of the basement and into the light of day.
To facilitate all activity, there are plenty of web development tools to do the heavy lifting of planning, optimizing and coding a website.
Businesses are staying ahead in the league to provide customers and clients with robust web solutions. With more and more companies creating an active online virtual presence, it becomes necessary to use tools to fasten the development work. While there are several options for site owners to design and develop sites, there is an alternate option to hire a web developer.
Let’s go through the best choices to get your site up and running, and growing traffic in a jiffy.
Collaboration Tools
Slack: Team Messaging app with a mission to make your working life simpler, more pleasant, and more productive.
Trello: Flexible and way to organize everything visually with anything, as well as use as KeyCDN.
Glip: Messaging real-time with integrated task management, video conferencing, shared calendars and more.
Asana: Team collaboration tool to track work and results.
Jira: Built for every member of the software team to plan, track, and release great software or web applications.
Web Application Frameworks
Web Application Framework aka “web framework” is a software framework specifically designed to support web applications development such as web services, web resources and web APIs. To put in simple terms libraries that help you develop your application faster and smarter!
Ruby: A web-application framework to create database-backed web applications according to the Model-View-Controller (MVC).
AngularJS: JavaScript-based open-source front-end web framework to address several challenges faced in developing single-page applications
Ember.js: Open-source JavaScript web framework, based on the Model–view–ViewModel pattern.
Express: Free and open source, web application framework for Node.js, released under the MIT License for building web applications and APIs.
Meteor: Open-source isomorphic JavaScript web framework written with Node.js. Meteor gives options for rapid prototyping and produces cross-platform code.
ASP.net: Open-source server-side web application framework designed for web development to produce dynamic web pages developed and backed by Microsoft to build dynamic web sites, applications and services.
Laravel: Free, open-source PHP web framework for the development of web applications following the model–view–controller (MVC) architectural pattern.
Zend: Open source framework for developing web applications and services using PHP.
Symfony: Reusable PHP components and a web application framework.
CakePHP: PHP framework highly popular to build web applications simpler, faster and require less code.
Package Managers
Package managers track the packages you use and make sure they are up to date and the specific version that you need.
npm: Node package manager for JavaScript helps to create public packages, publish updates, audit your dependencies, and more.
Grunt: JavaScript task runner, a tool used to automatically perform frequent tasks such as minification, compilation, unit testing, and linting. Use Grunt to automate just about anything.
Gulp: Platform-agnostic and simple, Gulp toolkit automates mundane and time-consuming tasks in the development workflow. Integrations are built into all IDEs, and people are using gulp with PHP, .NET, Node.js, Java, and other platforms.
Bower: A web package manager to effectively manage components that contain HTML, CSS, JavaScript, fonts or even image files.
Programming Languages
Programming language is a formal constructed language specifically built to communicate with a computer and create programs aimed at controlling the behaviour.
PHP: Server scripting language and a powerful tool for making dynamic and interactive Web pages.
NodeJS: Based on V8, NodeJS is an Event-driven I/O server-side JavaScript environment. Open-source, cross-platform, JavaScript runtime environment to execute JavaScript code outside of a browser.
JavaScript: Language of HTML and the web.
Python: Programming language to allow working quickly and integrate systems more effectively.
Ruby: Dynamic, open-source programming language with a focus on simplicity and productivity.
Golang: Open source programming language to build simple, reliable, and efficient software.
TypeScript: Open source programming language which is a superset of JavaScript which compiles to plain JavaScript
Database
MySQL: It is considered one of the most popular open-source databases across the globe. An open-source relational database management system (RDBMS), MySQL is free and open-source software under the terms of the GNU General Public License. It is also available under a variety of proprietary licenses.
MongoDB: It is a cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON documents with the schema.
Redis: Open source, in-memory data structure store functioning as a database, cache and message broker.
PostgreSQL: Robust and high power open source object-relational database system
Prototyping Tools
SKETCH (MACOS ONLY)
Highly popular among designers, Sketch is a digital design platform. This platform includes Precision, the Inspector, Tools, Exporting, Native, and Mirror. If we take into account numbers, there are millions of designers using Sketch to realize their ideas into products. This platform gives designers the power to share and collaborate on designs, as well as designing with Inspector. This is also an option to seamlessly connect all devices for real-time sharing or collaboration.
INVISION
Ranked among one of the most preferred tools, Invision is adept for drawing and prototyping, providing the advance features to store all the projects in the cloud. It ensures your data is safe and nothing will be lost in transit due to the hardware breakdown. InVision is compatible with a wide range of tools like Trello, Basecamp, Jira, Dropbox, and Slack.
FIGMA
As a Browser-based tool, Figma allows real-time collaboration; with vector tools with the capability of illustration, comprehensive prototyping, and code generation.
Multiplayer co-editing
Style libraries to simplify the creation and update design system
Integration with Principle (Mac-only)
Web API connecting Figma to several tools, scripts, and web apps.
MOCKFLOW
MockFlow is a suite of applications that assists in tasks associated with a typical project process. WireframePro app is a perfect option for prototyping, especially if you’re exploring and testing new ideas.
Create new wireframes easily with MockFlow.
Work on initial ideas to build basic layouts quickly to get thoughts into a presentable form.
Plan better UI by rapidly sketching interface layouts in a short time without any complexity.
Finally
We hope these tools help you speed up the web application development process. If you want to include any device, do post a comment below. | https://medium.com/dev-genius/top-web-application-development-tools-to-build-a-website-850bb9ae20a7 | ['A Smith'] | 2020-07-13 08:21:41.492000+00:00 | ['Tools For Design', 'Development', 'Website Development', 'Web Design', 'Web Development'] |
Granny vs Sexy — Top 3 who Care about their Underwear | I was on a blogging zoom call, and someone complained that no one was commenting on the posts they had spilled their heart, soul, and guts into. You know, topics like divorce, abuse, women in politics, were not getting the back and forth community dialogue she craved.
I suggested she write about granny panties. Why?
Because I’ve noticed that whenever someone does so on this platform, it generates a lot of comments.
Those of us who wear underwear feel strongly about our choices, which boil down to comfort vs. sexy in most cases.
Not exactly red vs. blue or vegan vs. paleo, but an issue with camps who defend their choices eloquently and explicitly.
So in lifting up three of my favorite pieces on this foundational topic, and their honest and thought-provoking authors, I also want to acknowledge the community that cares about what it wears underneath it all.
So here goes:
First up is Yael Wolfe for her post called, What do Your Panties say about You?
Yael takes a historical perspective, starting with her mother’s worrying about the accident she might get into where someone sees her unmentionables. How many of us grew up with mothers like hers who feel that the state of the panty is a direct reflection on the quality of their parenting? I suspect many if not most of us.
But if anyone had a mom or dad that said, I don’t care what you’re wearing, I just want to know you’re alright, hooray!
Ms. Wolfe, along with the other two, shares her clear preference for the covering comfort of cotton. If they got the name Granny Panties, perhaps that’s cause grandmothers are wise and sensible women we should listen to and head.
The upshot seems to be that the sexier undergarments are less comfortable and made with fabrics that don’t breathe. And even worse, some have a string going up your crack that at best, take some getting used to.
One thing I noticed is the skimpier they are, the more they cost. And if you want a pair where the manufacturers forgot to sew the crotch seam together, that costs even more.
In her frank and intimate story of her relationship to intimate apparel, Yael includes the relationship of the man with whom she is in a relationship’s relationship to her intimate apparel.
Other writers mention that they wear what they wear with the men in mind, but not necessarily what the man actually thinks of the garments. Nice touch. (Her post has 21 comments.)
Creating a smooth foundation
Shaunta Grimes takes the foundational approach in her story, I’ve Started Wearing Granny Panties and I’m Never Going Back.
Foundational here has two meanings, as I understand it. One is that her choice of underwear is considered foundational because it provides some compression and support.
Given corsets and girdles are largely things of the past, she distinguishes her newfound garments from Spanx and other super tight to hold you in, but far from comfortable shapewear garments.
In other words, these panties provide some shaping and yet are still very, very comfortable. How cool is that! And being for sale at Target as well as online, they are affordable. Not like the $40 I paid for a shaping garment I have to squeeze into.
The other meaning of the term foundational is about providing a smooth foundation for a smooth look under her new wardrobe. Shaunta is all about discovering her style and finding her very own look, as well as a leading blogger and writing teacher.
You can follow her journey on her new publication, A Style of my Own, Shaunta shares with us, via words and photos, how color lights us up or dulls us down.
She’s blazing a new trail many of us hope to follow. So thank you, Shaunta. And makes an excellent case for why what’s underneath the outerwear is crucial for a smooth look. (Her piece has ten comments.)
Every “war” has two sides
To round out this discussion, we have Tracey Folly’s article, My Sensible Undergarments Are Destroying My Sexuality. Tracey had actually converted to Granny panties and then rethought her decision.
It was based on confidence. Sexual confidence.
She previously wrote about her ability to feel sexually confident even in a sensible undergarment. Then things changed, and she re-visited her decision. It wasn’t working out the way she hoped it would.
But rather than go all the way back to scratchy lace and the thongs that drove her crazy, she’s looking for a both/and solution. Underwear that’s both sexy and comfortable. Most likely cotton, but with a little something extra.
I appreciate how she made space for this possibility. She hasn’t written a third post, as far as I know, so we’re not sure how the story ended. But I applaud her willingness to share the changes she went through on her search.
What I couldn’t find and miss very much are all the comments I remember her first article having. There was one in particular by a man which I found deeply moving. He confessed that if he was lucky enough to have a woman take her panties down for him, he was beyond caring what kind they are or how they look.
So maybe it comes down to pleasing ourselves first and foremost.
I honor these writers for their willingness to engage with an intimate subject in public. And while my current vote is for comfort, that wasn’t always the case.
Unlike politics, there won’t be a vote after the debates. Unless you consider our shopping dollars as our ballots.
Hopefully, I’ve gotten to the bottom of the issue. So in the interest of keeping this as brief as possible, I’ll end things here. | https://medium.com/top-3/granny-vs-sexy-top-3-who-care-about-their-underwear-54cce1c883f5 | ['Marilyn Flower'] | 2020-03-12 15:33:27.652000+00:00 | ['Fashion', 'Humor', 'Writing', 'Self', 'Sexuality'] |
Is Technology Outpacing Organizations? | Is Technology Outpacing Organizations?
AI, Capital Markets, and the Disruption of Labor Examined at MIT Event
By Paula Klein
We live in extremely paradoxical and contradictory times. Economics and technology are no exceptions. “Digital progress makes the economic pie bigger, but there’s no economic law that everyone, or even most people, will benefit,” said Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy (IDE), at a summit on April 27.
Moreover, while the pace of AI and machine learning–including robotics, image, and speech recognition — accelerates exponentially, skills, education and organizations are lagging. How do we unravel these contradictions and bridge the gaps?
Many economists, executives, and technologists see prediction models, mobile apps, and data analytics algorithms poised to improve healthcare, banking services, and even agriculture. For others, the tools create more problems than solutions, breaching privacy and displacing workers. All of these considerations were examined during a day of absorbing conversations, panels, and debates at The Future of Work: Capital Markets, Digital Assets, and the Disruption of Labor conference, held in New York. The event attracted more than 350 attendees, and offered diverse perspectives among experts from the financial, tech, academic, and business worlds. (See full agenda and list of speakers here.)
Hilary Mason, General Manager, Machine Learning, at Cloudera, and Claudia Perlich, a Senior Data Scientist at Two Sigma, spoke of correcting biases –both human and machine — infiltrating technology tools, as did author and data scientist, Cathy O’Neil. One solution, the human resource platform, Blendoor, was created to eliminate algorithm-bias in hiring, said CEO Stephanie Lampkin.
MIT’s Erik Brynjolfsson (far right) with panelists (from left) Hilary Mason, Claudia Perlich, and Michael Chui, of McKinsey. Photos by Sam Stuart
In the financial sector, Adena Friedman, President and CEO of Nasdaq, and Brian Moynihan, Chairman and CEO, Bank of America, addressed the need for better privacy protections, the promise and perils of cryptocurrencies, and AI-driven applications that can ensure accessible consumer services. Their firms are racing ahead to keep pace. Bank of America has developed its own speech recognition app and Nasdaq is closely watching Blockchain.
Nasdaq CEO, Adena Friedman, listens to Bank of America Chairman, Brian Moynihan.
One common theme was clear: Disruption is apparent everywhere. Workers in all fields must be more like rock climbers than ladder-climbers as they navigate today’s circuitous career paths, according to Lavea Brachman, VP at Ralph C. Wilson Jr. Foundation. She is optimistic that some regions of the country, like Detroit, will find new opportunities for growth and technology jobs as old labor models are upended. Social media, meanwhile, is caught up in a web of false news, as Professor Sinan Aral explained, and just about everyone blames education for failing to train tomorrow’s workers.
Summarizing the mixed messages of the day, IDE Co-director, Andrew McAfee, noted that despite many anxieties, global innovators — like those in the MIT Inclusive Innovation Challenge — are rising to the challenge: Using technology to provide greater benefits and access to the digital economy. Events like this one, he said, remind us that while tech advances will continue, it’s up to society to determine how they’re harnessed and used.
MIT’s Andrew McAfee
Two media accounts of the conference can be found here and here.
Watch full videos here and read more reports from the conference. | https://medium.com/mit-initiative-on-the-digital-economy/ai-capital-markets-and-the-disruption-of-labor-examined-at-mit-event-36c07f3df6b3 | ['Mit Ide'] | 2018-05-31 19:46:48.538000+00:00 | ['Fintech', 'Algorithms', 'AI', 'MIT', 'Machine Learning'] |
Kotlin — Unit Testing Classes Without Leaking Public API! | Introduction
Kotlin is an amazing new up-and-coming language. It’s very actively developed and has a ton of features that make it very appealing.
It’s been steadily gaining market share ever since Google added Android Development support for it (back in 2017) and made it the preferred language for such development exactly one year ago (May 2019)
For any Java developer coming into the language, spoiler alert, there is one big surprise that awaits you — the package-private visibility modifier is missing. It doesn’t exist.
Access Modifiers
An access modifier is the way to set the accessibility of a class/method/variable in object-oriented languages. It’s instrumental in facilitating proper encapsulation of components.
Java
In Java, we have four access modifiers for any methods inside a class:
void doSomething() - the default modifier, we call this package-private . Any such declarations are only visible within the same package (hence the name, private for the package)
- the default modifier, we call this . Any such declarations are only visible within the same (hence the name, private for the package) private void doSomething() — the private modifier, declarations of which are visible only within the same class
— the private modifier, declarations of which are visible only within the same class protected void doSomething() — declarations only visible within the package or all subclasses.
— declarations only visible within the package or all subclasses. public void doSomething() — declarations are visible everywhere.
This wide choice of options allows us to tightly-control what our class exposes. It also gives us greater control over what constitutes a unit of testable code and what isn’t. See the following crude example:
JSomethingDoer, doing things since 2020. Source: https://github.com/stanislavkozlovski/unit-test-kt/blob/master/src/main/java/doer/something/JSomethingDoer.java
We can only directly test three of the five methods above. Because doSecondLittleThing() and feelingAmbitious() are private, our unit tests literally cannot call those methods, hence we cannot test them directly.
The only way to test them is through the package-private method maybeDoSecondLittleThing() , which can be called in our unit tests because they’re in the same package as the class they’re testing.
The package-private modifier comes very useful in this example, because it allows us to test the maybeDoSecondLittleThing() method directly!
Testing would be more cumbersome if we had to test all code paths of the maybeDoSecondLittleThing() method by calling the public doSomething() method.
The package-private modifier also ensures that we don’t leak any such internal methods to packages outside of this one.
Notice the different package.
Kotlin
Kotlin is a bit weirder in this regard. It also has four modifiers:
private fun doSomething() — vanilla private modifier
— vanilla private modifier protected fun doSomething() — the same as Java’s protected — only visible within this class and subclasses.
— the same as Java’s protected — only visible within this class and subclasses. public fun doSomething() — visible everywhere
These three are pretty standard. I want to focus on the fourth one:
internal fun doSomething() — Internal declarations are visible anywhere inside the same module.
This is as good as public inside the same module, in that sense. Many small projects have no need for multiple modules, but they’re likely to have multiple packages.
There is basically no way to limit access to a Kotlin method to only be within the same package.
What does this mean? It means that we either need to make our methods private, thus making our testing life harder, or make them public, thus exposing unnecessary API to components in the same module.
Our same example from above, again in the doer.something package:
But this time, from the doer package, we can call every method:
That’s unfortunate. We don’t want to have a leaky API just because we want to have better tests.
How do we solve it?
Solution 1 — Refactor!
Us refactoring our code. Photo by Oleksandr Baiev on Unsplash
It seems to be commonly believed that we shouldn’t test methods that are/should be private. The idea being that:
all private methods are reachable by the public methods
the public methods expose the interface/contract of a class
private methods are implementation details, and we want to test the class’ functionality, not implementation
While that makes sense in theory, reality is not as black-and-white.
Regardless, you should treat any need to test non-public methods as a code smell.
Re-evaluate your design and rethink whether it makes sense to refactor the logic around such that you get the same test coverage by testing public interface contracts, rather than implementation details.
In some cases, though, it is cleaner and more maintainable to get full test coverage by testing each tiny private method code thoroughly, rather than testing each conditional branch via public methods or creating unnecessary boilerplate (e.g additional classes).
Solution 2 — Reflection
Photo by Simon Zhu on Unsplash
An alternative solution is to keep the methods private and use the language’s reflection features to call into said private methods. | https://stanislavkozlovski.medium.com/kotlin-unit-testing-classes-without-leaking-public-api-871468695447 | ['Stanislav Kozlovski'] | 2020-12-07 06:53:27.296000+00:00 | ['JVM', 'Java', 'Kotlin', 'Unit Testing', 'Git'] |
So Many People Think They Are Immune From Covid | So Many People Think They Are Immune From Covid
But the virus can get them
Measures to stay safe have been given
Most people will probably not get affected with the coronavirus, and many do not know anyone personally who has gotten the virus. People are not careful if they think they are safe from the ravages of the pandemic. They fight against the use of masks and do not practice physical distancing. They protest against being told what to do to stay safe during the current health crisis.
With so many people not willing to adhere to the recommendations from health experts, the COVID-19 pandemic is getting worse instead of better.
Donald Trump said back in the spring of 2020 that the virus would magically disappear when the warm weather came. The summer came and left. The warm weather has gone. Now the cold weather is returning with the pandemic still in full force. It has been almost a year since the pandemic began.
At the beginning of the coronavirus scare, people were told to wash their hands often, cough into their sleeve, and avoid crowds. There was advice to social distance, sanitize everything, and wear masks. Maybe people were not against wearing masks as much as they were against being told that they should. They did not want any mandates. They rebelled.
Now the pandemic has gotten worse instead of better. It is time for us to listen to the medical experts who are telling us what we need to do to curb the spread of the virus.
The White House events and the large political rallies have taken their course. Politics have been involved. Some people who thought they were safe and immune have been affected. The COVID-19 virus is not safe, and people should be careful to stop the spread. | https://medium.com/illumination/so-many-people-think-they-are-immune-from-covid-a5847143e0c0 | ['Floyd Mori'] | 2020-11-16 07:53:02.342000+00:00 | ['Politics', 'Pandemic', 'Covid 19', 'Safety', 'Coronavirus'] |
The Problem with Stoicism | It’s a pet peeve of mine that Stoicism is constantly referred to as a “practical philosophy”. The implied message in this label, sometimes more explicitly expressed by its contemporary adherents, is that it’s opposed to the supposedly inconsequential and nonsense parlour games of other kinds of philosophy.
Stoicism has seen a marked rise in popularity in recent years thanks largely to its promotion within popular psychology and business literature. In a blog post called “Stoicism 101: A Beginner’s Guide for Entrepreneurs”, the self-help author Timothy Ferriss wrote:
“Fortunately, there are a few philosophical systems designed to produce dramatic real-world effects without the nonsense. Unfortunately, they get punished because they lack the ambiguity required for weeks of lectures and expensive textbooks.”*
His guest blogger was the former CMO of American Apparel, Ryan Holiday. Holiday has been the greatest populariser of Stoicism as a practical philosophy “for entrepreneurs” since the publication of his Daily Stoic, a compendium of Stoic quotations, in 2016. Holiday wrote:
“[Stoicism] doesn’t concern itself with complicated theories about the world, but with helping us overcome destructive emotions and act on what can be acted upon. Just like an entrepreneur, it’s built for action, not endless debate.”*
It’s great that Holiday and Ferriss are giving the Stoics so much publicity in the modern age, but their perception of its utility is based on a major oversight. Holiday overlooks the fact that Stoic ethics are based on a Stoic cosmology, that is, a - er - complicated theory about the world.
The particular Roman brand of Stoicism that contemporary popularisers love so much — the philosophies of Seneca (the Roman aristocrat and tutor/speechwriter of Nero), Marcus Aurelius (a later Roman emperor, who is among the five “good emperors”) and Epictetus (the Greek slave turned philosophical teacher), form an ethical system that simply would not exist without the cosmological foundations of earlier Greek Stoicism.
The very basic tenet of Stoic ethics follows the idea that you cannot control external circumstances, but you can control your own thoughts and emotions. In the stoical sense, this isn’t simply that “shit happens”. The Stoics truly believed in determinism — that all events in the universe are predetermined, and human beings have no power over the course of events. The ethics of Stoicism follow logically from that premise.
Chrysippus, the head of the Athenian Stoic school at around 230 BC, who laid out the system of Stoicism as we know it, wrote:
“…all effects owe their existence to prior causes. And if this is so, all things happen by fate. It follows therefore that whatever happens, happens by fate.”
Events simply happen in an unchanging chain of causes. Since he believed everything to be fated, Chrysippus also believed in divination. The future can be read in, say, tarot cards or the palm of a hand, because the order of the tarot cards or the lines in the palm are inextricably linked to events to come.
Seneca, being a later stoic, shared Chrysippus’ cosmology to the extent that he (like Chrysippus) believed fate, nature and God to be the same thing. Even joy and sorrow, Seneca wrote, “is predetermined”.
So all those Seneca quotations that become pithy motivational statements on social media and in self-help books were written from the basis of a cosmology - a complex theory of everything - that was handed down by the earlier Stoics like Chrysippus.
How Should I Live?
Stoicism’s cosmological foundations are simply providing the answer to the question “how should I live?” because as a “complicated theory about the world” it forms the basis on which decisions can be made and attitudes formed.
Practically all philosophies are “practical” in this respect. Let’s take two much-maligned “difficult” philosophers: Jean-Paul Sartre and Immanuel Kant. Jean-Paul Sartre’s Existentialism also attempts to answer the question “how should I live?”, as does Immanuel Kant’s ultra-complex thesis laid out in his dense “critiques”.
Before you attempt to answer the question “how should I live?” you must first take a stab at forming an understanding of the world in as much depth and detail as you can stomach. Most people rest on common assumptions, some seek understanding from philosophical history.
Having developed systems, much as the Stoics did in ancient Greece, Kant and Sartre went on to develop ethical treatises: Sartre found virtue in authenticity of action.
Kant gave us the “Categorical Imperative” of virtue. This is a “deontological” (rule-based) ethical position often confused with, but quite distinct from, the age-old “Golden Rule”. Kant’s Categorical Imperative was that we should act in a way that ought to be a law for everybody.
Both Kant and Sartre made a case for human freedom. Both saw consciousness as the source of our freedom (Sartre saw consciousness as a “nothingness”, Kant saw consciousness as “transcendental idealism”. Yes — I know — both sound complicated, but these ideas can be explained for those who can be bothered to learn them).
Both Sartre and Kant saw human consciousness as something exceptional in nature, the very source of our free will. Stoicism sees consciousness as a spectator as fate blindly unfolds around and through us; freedom lies only in our take on things (the question the Stoics wrestled with was to what extent we are determined).
What would be disturbing to most people — even those who admire Seneca’s maxims — is the passivity of Seneca’s Stoicism: the idea that you can do nothing to effect real change in the world, only to your circumstances as you perceive them.
Wild Beasts and Great Leaps
This is perhaps why (and I’m being speculative now) Stoicism was so popular in the late Roman world. It was an empire that became increasingly cruel to those it ruled over while disparity of wealth widened at an increasing rate.
While Roman aristocrats philosophised in their perfumed and covered sedan chairs, pregnant women were thrown to wild beasts in amphitheatres to the roar of cheering crowds.
Perhaps Stoicism is the right kind of “quiet” philosophy for times just like now, when the rich are getting exponentially richer and the institutions of democracy are attacked from all sides. Stoicism feels right when the world seems to be going wrong.
Kant’s and Sartre’s particular philosophies caught the public imagination at times of “great leaps” in human well-being.
Existentialism flourished in post-war Europe when the old Western empires crumbled. We saw great housing and healthcare projects rolled out while science took great leaps: in genetics, quantum physics and computational science.
Kant’s philosophy took hold as the enlightenment lit up in Western Europe. The fields of law, parliamentary rule, science and economics as we know them sprang up from the intellectual hotbed of protestant Europe. Kant, who was a late starter, wrote of “waking from my dogmatic slumber” before he wrote his Critique of Pure Reason.
Digital technology and individualist capitalism have made our intellectual institutions more positivist and instrumentalist than ever. It seems even Stoicism itself can fall victim to a mechanical appraisal of its merits that can distort or ignore its core concepts.
Maybe the modern proponents of stoicism will say, “well, we’re taking the best part of stoicism — the part about being resilient to misfortune and so on. There’s no need for the other stuff.”
“Stoic” is of course both an adjective and a noun. The small “s” adjective is something like what the English call the “stiff upper lip”, that is, taking what fate throws at you with equanimity.
Well, who could disagree with the small “s” adjective? What these modern writers really mean is that they are taking the colloquial meaning of “stoic”, an attitude to life, to stand for “Stoicism”, a school of philosophy that attempts to explain life.
Well, fine. But let’s do away with this notion of “practical philosophy” then. Because all philosophy is practical and to perpetuate a mere attitude as a philosophy is a dangerous thing.
Thank you for reading. | https://medium.com/the-sophist/the-problem-with-stoicism-cd4183f7a24 | ['Steven Gambardella'] | 2020-07-20 05:50:18.478000+00:00 | ['Tim Ferriss', 'Entrepreneurship', 'Ryan Holiday', 'Stoicism', 'Philosophy'] |
Defining a problem statement — Design Thinking | Define is the second stage of the design thinking process, it is preceded by the empathy phase. This phase is about synthesizing observations about users from the empathy phase and defining an actionable problem statement. Defining the problem statement requires the articulation of the problem to establish a detailed problem statement.
“If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.” — Albert Einstein
Who are we empathizing with?
The purpose of a problem statement is to capture what we want to achieve with our design. It is generated through a variety of questions, through different options, opinions, perspectives, and different ways of thinking about the problem. What difficulties are the users facing? What are the patterns of their everyday behavior? What needs to be changed and what should remain the same? These many questions frame the problem statement in a user-centered way.
A defined problem statement gives stakeholders, project managers, and team members clear, focused direction about the final achievements and deliverables. The goal is to frame a meaningful, detailed, and actionable problem statement that opens up many possible solutions to explore in Ideate, the third phase of design thinking.
The Steps to Write a Problem Statement in a Human-Centered way.
All phases of design thinking are a way of solving problems, strategizing, and designing in a human-centered way. A good way to define a problem is to frame a problem statement and write down the observations from analysis to synthesis.
Analysis
To analyze the problem, break the complex concepts down into smaller, easier-to-understand parts, working from the information collected in the empathy map during the empathy phase.
The empathy map can be studied and observed further with the below categorization which helps in creating an actionable and tangible problem statement.
Who are we empathizing with? Who is experiencing a problem that needs to be solved? Can these people be further categorized as potential users according to their situations, demographic details, persona, motivation, and behavioral patterns?
What are the pain points? What is the real problem? What needs to be accomplished by solving that problem? What pain point needs to be relieved? What are the struggles of the people?
Where is the problem happening? What environment does the problem take place in? How many people are experiencing the same problem in the same environment? Are the people getting used to the problem? Do they need a solution at all?
Why does the problem need to be solved? Is this problem really worth solving? Does it bring value to the user's life? Does it also bring value to the business?
What are the gain points? What will the problem statement solve? How many people will benefit from it? And what will those benefits be, for both the users and the business?
Synthesis
This involves putting these pieces back together to form the whole problem statement.
For example, the problem could be — The changing world with social distance norms has affected the mental well being of people.
Now we have to consider: who is being affected by the problem? Who are we empathizing with? People who are staying away from home, working remotely, and unable to meet their friends and family or go out to make new connections.
A defined problem statement after considering these details would be: "We need to provide an easy way to connect for extroverts, extroverted introverts who miss going out, and the ones who are used to going to work every day."
Now, does more information need to be observed, such as the environment of the problem and where the users are located?
A more defined problem statement would be: "We need to build an app for people in urban and suburban cities who feel alone and need to feel connected, while also creating new connections, as they stay at home and work remotely."
There are other points that could be factored in while defining the problem further. Why does it matter? What will this problem solve? A more defined problem statement after factoring all aspects would be, “People who are used to being social and active on a daily basis, need an app where they can interact, indulge in group discussions, play games, join groups and go live to stay connected to people with similar interests and ideals because shared experiences can bring people closer and create a sense of belongingness all over again”.
Not all points are required to be implemented in a single problem statement, but it's important to ask questions and consider as many factors as possible. By the end of the Define phase, the observations from the empathy phase have been turned into a workable, user-centered problem statement.
What next?
The well thought, actionable, and tangible problem statement backs up the next phase of design thinking. In this third phase of design thinking, the team constantly brainstorms to come up with many potential solutions.
Finally, out of that quantity-first approach of creating as many solutions as possible, the most compelling solution comes into being through cognitive thinking and the fun, experimental methods of the third phase, Ideate. | https://uxdesign.cc/defining-a-problem-statement-design-thinking-ca4d54edf559 | ['Priyanka J'] | 2020-09-04 17:18:52.292000+00:00 | ['Design Thinking', 'Design', 'Define', 'Innovation', 'Problem Statement'] |
Climate Change is a Class Issue | Climate Change is a Class Issue
We’re not all in this together
On October 8, 2018, the Intergovernmental Panel on Climate Change put out its latest report, outlining the climate effects of 1.5ºC and 2ºC warming, the efforts that will have to be taken for us to hit those targets, and how current emissions-reduction pledges get us nowhere near where we need to be. But it also woke the world up to the need to act — at least for a week or two.
The report outlined how our transportation, land-use, building, energy, food, and other systems need to be redesigned from the ground up to reduce emissions and prepare for a warmer world. We have just 12 years to slash emissions by 45 percent below 2010 levels and hit net zero by 2050 if we’re to have a chance at keeping warming below 1.5ºC. An infographic from the World Resources Institute effectively outlines the difference between 1.5ºC and 2ºC, and, quite honestly, the 1.5ºC scenario looks scary enough — I don’t know why we’d want to risk hitting 2ºC or higher. Yet, the reaction to the report makes it seem like that’s exactly where we’re headed.
Infographic via WRI
The media, which usually does a terrible job at covering climate change, at least put some climate stories on its front page in response to the report before turning its attention back to the Trump reality show that so effectively brings in ad money and eyeballs. For a few days, people at least pretended to care, but what really changed? Not a whole lot.
No governments made significant policy shifts in response to the report, which said that current pledges have us heading for upwards of 3.2ºC of warming, and that’s if countries hold to them — the United States has already pulled out of the Paris Agreement and Brazil may be about to do the same. I’m tempted to descend into despair at this point, but let’s try to remain hopeful. There’s a reason not enough is being done, and identifying the root of the problem could help us to address it.
Western Countries Trying to Shirk Responsibility
We’re often told that climate change is an issue that will affect all of humanity; that our collective future is under threat the more the world warms. But is that really true? Are we all going to be affected in the same way as the mercury soars? Of course not.
If the effects of climate change were going to be equally distributed, we’d be doing a lot more about it. The truth is that geography and income make a huge difference to how severe the impacts of climate change will be, and that explains why so little is being done.
Even though there will undoubtedly be impacts across high-income Western countries, the brunt of the pain will be felt by those living in poorer nations across the world that don’t have the infrastructure or the means to adapt to climate change, and may be either low-lying or at risk of greater desertification as sea levels rise and temperatures increase.
Yet, outside parts of Europe, Western countries aren’t doing nearly enough to reduce emissions. It’s not uncommon to hear conservative politicians or people associated with resources industries in Canada or Australia argue that they contribute a small percentage of global emissions so it’s okay if they don’t cut their emissions as much as the United States and China — with emphasis on the latter.
But that argument fails to account for the true responsibility for climate change — both from historical and per-capita perspectives. Those in the West who don’t want to cut emissions, or at least not quickly, love to point the finger at China — now the world’s largest emitter — to argue the People’s Republic isn’t doing nearly enough. However, instead of looking at China — which is installing record amounts of renewable energy and whose investments in solar panels and electric buses are making them cheaper for the rest of the world — they should be accepting their own responsibility and leading the way.
Carbon emissions are not new; countries have been increasing their use of fossil fuels since the Industrial Revolution, and as a result some countries have emitted a lot more of the carbon budget than others. China may be emitting a lot today, but historically they’re responsible for a much smaller relative percentage of those emissions, while the West has emitted a much larger chunk. | https://medium.com/radical-urbanist/climate-change-is-a-class-issue-cd6c143d38f6 | ['Paris Marx'] | 2018-10-25 16:47:06.063000+00:00 | ['Climate Change', 'Equality', 'Future', 'Politics', 'World'] |
Gantt charts work too! How designers can use the Notion note-taking software at work
https://medium.com/rar-design/%E7%94%98%E7%89%B9%E5%9C%96%E4%B9%9F%E5%8F%AF%E7%94%A8-%E8%A8%AD%E8%A8%88%E5%B8%AB%E5%9C%A8%E5%B7%A5%E4%BD%9C%E4%B8%8A%E5%8F%AF%E4%BB%A5%E5%A6%82%E4%BD%95%E4%BD%BF%E7%94%A8-notion-%E7%AD%86%E8%A8%98%E8%BB%9F%E9%AB%94-d2d5cfe48d1e | ['林育正 Riven'] | 2020-11-16 06:13:24.335000+00:00 | ['Notion', 'Product Hunt', 'Design', 'Evernote'] |
Rebounding From The Pandemic… with AI | Rebounding From The Pandemic… with AI
What Our Past Teaches Us About The Future
Image courtesy of Wikimedia Commons
While 2020 might be remembered as the year the world stopped, there are reasons to be optimistic about what’s in store for us, and for artificial intelligence (AI), in 2021.
A glance at the past might reassure us. In the mid-1300s, my native Italy was devastated by the bubonic plague. Of course, there were many negative effects, but it is also believed to have seeded the Renaissance. Now recognized as one of the most prolific periods of our modern society, the Renaissance is credited with having birthed a new, innovative way of thinking, liberating Italy from its Middle-Aged rigidity. And to think it all took place in Tuscany, one of the regions most affected by the plague in Europe.
Fast forward 700 years to today, where we are seeing the seed of historical changes in the way people relate to AI, clearing the mind from equally rigid preconceptions about this technology.
Just a few months ago, AI was both marveled at and feared. But today, the dialogue around AI has changed dramatically. The global pandemic exposed the need for AI and automation in industries such as manufacturing, where operations were halted due to social distancing requirements and other restrictions. With factories shutting down worldwide, the demand for new technologies — and AI, in particular — skyrocketed to keep things up and running.
So, what is in store for us and AI in 2021?
1. Manufacturers will look to make sense of their IIoT data.
Manufacturers will continue on the path of embracing Industry 4.0 initiatives, but now with greater urgency and more concrete objectives in mind. Today's factory floors will become more complex than ever before. As the Industrial Internet of Things (IIoT) becomes the norm, manufacturers need tools and technology that can extract actionable insights from the data their machines (i.e., sensors and cameras) are collecting. I believe that AI will be the solution here, helping manufacturers make sense of all this data. They'll then be able to use these insights to stay competitive in the new normal and prepare for any possible future economic disruptions.
2. AI will augment the human workforce when it comes to tasks such as quality inspections.
Automation is nowhere near the level it needs to be for work to progress without human supervision. In the new year, I believe we'll see manufacturers, logistics companies and other human-workforce-heavy industries turn to AI to speed the adoption of technologies aimed at lessening the load and assisting humans in their tasks. I predict that quality inspection — already in the top five applications for AI in manufacturing — will see even greater adoption and widespread deployment in the year to come.
3. Manufacturers will prioritize investing in inexpensive, lightweight solutions to accelerate their Industry 4.0 initiatives.
We'll see AI deployed in the form of inexpensive and lightweight hardware. It's no secret that 2020 was a tumultuous year, and the economic outlook is such that capital-intensive, complex solutions will be sidestepped for lighter-weight, perhaps software-only, less expensive solutions. This will allow manufacturers to realize ROIs in the short term without massive upfront investments. It will also give them the flexibility needed to respond to fluctuations in the supply chain and customer demands — something we've seen play out on a larger scale throughout the pandemic.
4. Edge AI will beat out the cloud.
In the fight between cloud and Edge AI, the latter will prevail for economic (cheaper), latency (faster), and security (on-premises = safer) reasons, placing more and more AI right at the sensor level (e.g., on cameras), where it will immediately be put to use.
5. Manufacturers will rely on AI with explainability to learn how and why AI makes decisions.
Humans will turn their attention to why AI makes the decisions it makes. When we think about the explainability of AI, it has often been talked about in the context of bias and other ethical challenges. But as AI comes of age and gets more precise, reliable, and finds more applications in real-world scenarios, we’ll see people start to question the why behind it.
The reason? Trust: Humans are reluctant to give power to automatic systems they do not fully understand. For instance, in manufacturing settings, AI will need to be accurate and “explain” why a product was classified as “normal” or “defective” so human operators can develop confidence and trust in the system and let it do its job.
Like in 14th-century Italy, harsh periods, such as what we are enduring now, unleash epochal changes that force us to rethink our priorities. 2021 will be the year in which we recognize AI for what it is: a powerful, reliable and transparent tool our society and the manufacturing industry need to augment human work and never be caught off-guard again.
Originally published at https://www.forbes.com. | https://medium.com/towards-artificial-intelligence/rebounding-from-the-pandemic-with-ai-7b25655f130f | ['Massimiliano Versace'] | 2020-12-27 19:40:28.058000+00:00 | ['News', 'Pandemic', 'Artificial Intelligence', 'Manufacturing', 'Deep Learning'] |
Top 3 — On Writing and Mental Health, Accepting Rejection, & Expounding on Four Writing Adages | HELPFUL WRITING ARTICLES
Top 3 — On Writing and Mental Health, Accepting Rejection, & Expounding on Four Writing Adages
My Top 3 for November is a compilation of three articles about writing I found beneficial.
Photo by Hannah Grace on Unsplash
What Is Top 3?
Top 3 is a publication where Medium writers support other Medium writers by promoting each other’s work. Medium members are encouraged to post three stories from other writers that they enjoyed reading.
If you want to join, please read the Write for Us Guidelines.
My Top 3 for November on Writing
These are not in order of which I found more helpful or by using a subjective analysis to say which is “best.” I am merely presenting to Medium writers three articles I found constructive and beneficial that they may have missed.
Missy Crystal wrote Being a Full-Time Writer Could Harm Your Mental Health
“We don’t talk about this enough”
I know personally, in 2018, I made $92,000, mostly from writing the content for 66 websites, though I did have several other regular clients and a few other sources of income from writing.
I got so busy, even with outsourcing a lot and a virtual assistant, I was working nearly non-stop. I don’t think I took a full day off. Some workdays lasted 16 or 18 hours, only stopping for meals.
I wound up having a heart attack and spending nearly all that money on hospitalization. I did get most of it back from my health insurance. However, I had to cut way back, and I lost quite a few clients.
Some clients were absolute gems about my illness and let me go right back to writing for them after being cleared medically.
Two CA traffic lawyers I write for said they wouldn’t think of hiring anyone else, so that was comforting. Plus, my septic tank cleaners kept me on, which I had mixed feelings about. Only because it can get tough to think of a new way to say these guys are the best shit-suckers on the planet twice a month.
Anyway, enough about my personal story, this is about Missy Crystal’s article. I just felt the need to share my personal experience about writing full-time and possible health issues.
Missy echoed my point by saying, “I write for a living so I can spend more time with my family, but sometimes I’m only physically present. Like today, for example. My brain shut down approximately 5,000 words into my workday, and now I have nothing to offer.”
One more crucial point Missy makes is,
But burnout and mental health concerns are common in the writing community, despite numerous studies that rave about the benefits of creating written work. Studies that describe writing as therapeutic generally have one major flaw: they’re geared toward casual writers.
So, here it is. Believe me; your health will appreciate you for reading and heeding her advice.
The next subject is a well-written article by my battle-buddy, Tom Handy. Not an actual battle-buddy. We did not go into combat together, but I call everyone I know served “Battle” out of respect.
He chose another extremely essential topic for writers, and that is handling rejection. If you have thin skin or cannot handle rejection, you really need to read this.
The inability to cope with rejection as a writer could cause anguish and even cause you to quit before becoming a well-known writer or columnist if you prefer.
In The Writer’s Guide to Getting Rejected 100 Times, Tom advises, “Don’t let the rejections get you down. Keep writing. Be patient. Stay focused and write your next masterpiece. There are worse things that could happen.”
This is true of positive critiques and corrections, too. Learn to take it for what it is, another writer's assist. I like to think of those who send me a note to make my work better (including publication editors) as my writing team's Point Guard (PG). Even if you are the only one on your team, you need a PG!
So, here is Tom’s fantastically precise piece.
In this piece by Karen Banes, who takes the advice we as writers have been getting for centuries and expounds on it, she says,
After over two decades working as a freelance writer, I can assure you they don’t go far enough. Here’s my take on how you can use each snippet of advice to really up your game, when it comes to producing a quality piece of writing.
What I particularly liked about this article is that it packs so much into a 4-minute read. She doesn’t drone on or repeat points. Her writing is clear and concise. I suppose that’s why it was “curated or distributed in topics,” as Medium is fond of calling it now. Or is it us that did that?
Her advice, though we’ve all likely heard it before, “Write Like Nobody’s Watching,” is spot on. She says, “Here’s the thing. People are going to be reading, eventually. And they may also be responding, commenting, and analysing what you’ve written. So write your first draft like nobody’s watching, then edit as though the piece will go viral.”
"Write like the best, wittiest, most articulate version of yourself."
I’ll leave the rest for you to read here.
Takeaways
We all need help with our writing from time to time. Don't get bogged down with too much work, rejection, or not going far enough with the four pieces of advice most readers need to consider, which are read, write like nobody's watching, write like you talk, and let it sit.
Hopefully, you can take a little bit from these Top 3 about writing to add to your quality of life as a writer. Contrary to what a lot of internet writers want you to believe, writing full-time is not “a walk in the park.”
About the Author Photo by Jean Springs from Pexels
Stephen Dalton is a retired US Army First Sergeant with a degree in journalism from the University of Maryland and a Certified US English Chicago Manual of Style Editor. Top Writer in Fiction, Short Story, VR, Design, & Creativity. Editor of Pop Off, Top Dalton’s Blog, 100WordStory, B.O.S.S., and SportsShorts100WordsOnly
You can see his portfolio here. Email [email protected]
Website | Facebook | Twitter | Instagram | Reddit | Ko-fi | https://medium.com/top-3/top-3-on-writing-and-mental-health-accepting-rejection-expounding-on-the-four-writing-adages-4c2a59dbaead | ['Stephen Dalton'] | 2020-12-06 16:54:06.352000+00:00 | ['Writing Tips', 'Short Story', 'Top 3', 'Self Improvement', 'Writing'] |
Making Potential Clients More Comfortable: What Works For Financial Behaviorist Jacquette Timmons | The Nitty Gritty
How Jacquette uses different themes every month to inform the activities and questions she poses during her monthly dinner series, The Comfort Circle™
Why Jacquette increased the event pricing from $75 to $150 per person and what went into the decision to host the dinner in the same restaurant month after month
How the dinner series works into her larger business model — and why she’ll be offering self-hosted dinners as well as firm-hosted ones
What’s the future of The Comfort Circle™? Jacquette says that it might include retreats
Financial behaviorist Jacquette Timmons helps people talk about uncomfortable things. One of the ways she does that is through The Comfort Circle™, an intimate gathering where she walks her guests through curated topics about money and life over a three-course meal.
In this episode of What Works, Jacquette shares her perspective on discomfort and why it’s crucial to success, how she uses different topics to curate The Comfort Circle™ experience, and where this in-person event fits into her business model.
If you’re a coach or consultant and you’re looking for new ways to approach your business model, this episode is for you. And even if you aren’t, Jacquette’s stories and experience provide insights into pricing a service and leaning into discomfort: two essential skills that every entrepreneur needs.
We release new episodes of What Works every week. Subscribe on iTunes so you never miss an episode.
Why an in-person dinner on money and life?
“My clients can talk about sex with their friends. But they can’t talk about money because there isn’t an environment where people feel that they can be vulnerable.” — Jacquette Timmons
It was obvious to Jacquette: people need safe spaces to have difficult conversations. Initially, Jacquette considered a traveling conference that would pop-up in several U.S. cities. But she realized that it required too many resources.
Instead, she decided to think smaller and more intimate… and that’s where the idea for The Comfort Circle™ came. The dinner series, which started in January 2017 in New York, provided a space for real talk around money and fit within her business vision while suiting Jacquette’s natural inclination to connect in person. “I know live events are powerful,” she says. “I know I show up powerfully and I connect well with people that way.”
Jacquette consistently hosts The Comfort Circle in the same restaurant for a small group. She says the max is twelve guests because her intent was always to keep the dinners intimate.
How The Comfort Circle™ fits into her larger business model
“The dinner is a lead generation vehicle. It’s an opportunity to get to know me and my approach. Some of them convert into coaching clients. Not every dinner results in immediate conversion or a conversion to a four figure coaching engagement but that does happen.” — Jacquette Timmons
Jacquette uses the monthly dinner series as a way to connect with potential clients — both personal and corporate. “I already do financial workshops for corporations,” she explains, “but there are times when they don’t want to do the same old thing. This is a way of doing something different.”
Different indeed! For example, a law firm hired her to host a customized dinner for 40 people. And instead of charging the typical $150/per person, the firm paid her as a speaker. “I now think of this as there’s a self-hosted version of the dinners and there’s the firm-hosted dinners, which I’m hoping will gain some traction,” Jacquette adds.
Besides strategically using the dinners as a lead generation experience, they also inform her content strategy. Because every dinner has a theme, Jacquette uses that as her marketing focus. For example, she’ll make an announcement for an upcoming dinner in her email, blog posts, and social media focusing on that same theme.
Hear more from Jacquette Timmons on how she uses The Comfort Circle™ as a lead generation strategy and what the future looks like for these intimate dining experiences. | https://medium.com/help-yourself/making-potential-clients-more-comfortable-what-works-for-financial-behaviorist-jacquette-timmons-b0a8f71dae21 | ['Tara Mcmullin'] | 2018-08-10 19:26:01.537000+00:00 | ['Money', 'Marketing', 'Events', 'Podcast', 'Small Business'] |
Choosing the right metrics for your SEO dashboard with Google Analytics (Part 2 of 2) | Choosing the right metrics for your SEO dashboard with Google Analytics (Part 2 of 2)
How to Build a Weekly Workflow for Tracking SEO Analytics
In last week’s tutorial, we explained how to find your organic search traffic in Google Analytics, and more importantly, how to measure the value of your SEO.
But now that you know how to make the case for the value of investing in SEO to your boss or client, what metrics should you actually look at?
And beyond knowing the right metrics, how do you create a weekly workflow for making data-driven business decisions based on your SEO data? After all, if tracking your analytics is not a weekly habit for your team and leadership, then your business isn’t really a data-driven business.
In this week’s post (Part 2), we’ll show you how to use Google Analytics to monitor how your SEO and organic traffic are improving over time. We’ll also show you what SEO metrics to track with a SEO dashboard.
In this article, we’ll cover:
Which SEO-related metrics and reports should you track on Google Analytics? How to build a SEO dashboard to see your key SEO metrics at a glance
Let’s begin!
Which SEO-related metrics and reports should you track on Google Analytics?
Consultants like to say that an analyst shouldn’t “boil the ocean,” meaning you shouldn’t (and can’t) analyze every possible combination of metrics. Well, when it comes to SEO, you also can’t optimize the ocean.
Don’t be this guy. Image via Memegenerator.
So what should you track and optimize? Like with all digital analytics, you should track metrics related to your business goals.
Interestingly, the business goal that many SEO marketers judge themselves by is their ability to rank for certain keywords.
Many business owners and managers, on the other hand, see the goal and value of SEO in terms of increasing traffic, engagement, and/or conversion.
But both of these kinds of SEO goals are important to track. That’s why we recommend measuring both types of goals.
Let’s start by looking at tracking SEO’s impact on traffic, engagement, and conversion (the business leader goals). Then we’ll explore keyword reporting, and how Google Analytics can help you track how your business ranks for the keywords you care about (the SEO marketer goals).
Metrics to Measure SEO’s Impact on Website Performance
Overall organic sessions growth over time (Acquisition > All Traffic > Channels > Sessions column for Organic Search) — this measures how the quantity of your total organic traffic is changing from day to day. If you see a sharp decline, try to diagnose the cause. If you see a significant increase, try to identify what events or marketing activities contributed to the increase, and see if you can replicate these results.
Quality of organic traffic by conversion (Acquisition > All Traffic > Channels > Conversion rate column for Organic Search) — Whereas total organic visits captures the quantity of your organic search traffic, you also want to track the quality (i.e. conversion rate) of your organic traffic.
If your engagement is low, you may be targeting the wrong audience or you may have a problem with your website’s messaging. If your conversion rate for organic search is low (and your engagement is steady), you may have a problem with the user experience of your website. Try adjusting these factors to see how it affects the engagement and conversion of your organic search traffic.
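If you'd rather pull these two numbers into a script (say, for a weekly email) instead of clicking through the UI each time, the Google Analytics Reporting API can return the same data. Here is a minimal sketch in Python; the key file path and view ID are placeholders, and the metric names assume a standard Universal Analytics property:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholders -- swap in your own service-account key and GA view ID.
KEY_FILE = "service-account.json"
VIEW_ID = "123456789"

credentials = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=["https://www.googleapis.com/auth/analytics.readonly"]
)
analytics = build("analyticsreporting", "v4", credentials=credentials)

# One request: organic-search sessions and goal conversion rate, week by week.
response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "90daysAgo", "endDate": "today"}],
        "metrics": [
            {"expression": "ga:sessions"},
            {"expression": "ga:goalConversionRateAll"},
        ],
        "dimensions": [{"name": "ga:yearWeek"}],
        "dimensionFilterClauses": [{
            "filters": [{
                "dimensionName": "ga:medium",
                "operator": "EXACT",
                "expressions": ["organic"],
            }]
        }],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    week = row["dimensions"][0]
    sessions, conversion_rate = row["metrics"][0]["values"]
    print(f"{week}: {sessions} organic sessions, {conversion_rate}% goal conversion rate")
```

Charting those two columns side by side gives you the quantity and quality view described above without anyone having to log in to Google Analytics.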
Bounce rate of landing pages (Behavior > Site Content > Landing Pages > Bounce Rate Column) — The bounce rate of landing pages indirectly influences SEO. If people look at your page and immediately bounce (i.e. exit without any further actions), then Google interprets this to mean that your visitor did not find what they were searching for on your landing page. In other words, your page wasn't relevant enough to what that visitor was looking for.
To identify the pages with the highest bounce rate, go to the Landing Pages report and click the "Comparison" view. Then choose bounce rate in the last column. You can then compare each page's bounce rate with the average bounce rate of your site. You should further investigate any Top 10 landing page whose bounce rate is more than 20% above your site average to see what is driving the high bounce rate (e.g. site error, lack of relevant information, messaging, etc.).
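One way to make that "20% above site average" check repeatable is to export the Landing Pages report to CSV and let pandas do the comparison. This sketch assumes the export has "Landing Page", "Entrances" and "Bounce Rate" columns; adjust the names to match your file:

```python
import pandas as pd

# Assumes a CSV export of Behavior > Site Content > Landing Pages (filtered to
# organic traffic) with "Landing Page", "Entrances" and "Bounce Rate" columns.
df = pd.read_csv("landing_pages_organic.csv")

# GA exports bounce rate as a string like "54.32%"; convert it to a float.
df["Bounce Rate"] = df["Bounce Rate"].astype(str).str.rstrip("%").astype(float)

# Rough site average (an unweighted mean across pages; the report's own
# site-average row is weighted by sessions, so treat this as a first pass).
site_avg = df["Bounce Rate"].mean()

# Flag the top-10 landing pages by entrances whose bounce rate is more than
# 20% above the site average.
top10 = df.nlargest(10, "Entrances").copy()
top10["needs_review"] = top10["Bounce Rate"] > site_avg * 1.2

print(f"Site average bounce rate: {site_avg:.1f}%")
print(top10[["Landing Page", "Entrances", "Bounce Rate", "needs_review"]])
```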
Other SEO-related metrics you may want to explore include:
Top Organic Keywords by % of New Visits
Top SEO Landing Pages by Entrances
Top SEO Landing Pages by Goal Completions
Site Speed (Behavior >> Site Speed >> Average Page Load Time)
How Google Analytics Can Show You What Keywords to Rank For
A key goal of SEO is to rank well in search engine results for certain keywords. Here are 3 Google Analytics reports to identify which keywords to target and track how your SEO is performing for these keywords.
1) Queries report (Acquisition > Search Console > Queries)
This report displays the search terms that people type into Google to get to your site. This requires you to set up an integration with your Google Search Console account. To learn how to set up the integration, check out this help page.
2) Internal site search (Behavior > Site Search > Search Terms)
Many websites have internal site search (e.g. "search on this site"). If that's the case, your internal site search terms can give you a sense of what people want or expect to find on your site. If there are topics that people are searching for (e.g. return product) that you don't have a page on your website for, then you may want to design new pages based on these queries.
By increasing the chances that visitors find and engage with a page relevant to what they’re looking for, these new pages may help improve your bounce rate and SEO. These search terms may also give you ideas for what keywords you should aspire to rank for.
To track internal site search queries, you need to enable Site Search in Admin. It will ask you to type in the query string parameter your site uses for searches. To learn more about how to set up Site Search, check out this help page from Google Analytics.
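If you want a quick look at search terms before Site Search is configured (or as a sanity check afterwards), you can also parse them straight out of the page URLs in your All Pages report. The sketch below assumes your site uses q as its search query parameter; swap in whatever parameter your site actually uses:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Assumes a list of page paths (e.g. exported from the All Pages report) and a
# site whose internal search uses "q" as its query-string parameter.
page_paths = [
    "/search?q=return+product",
    "/search?q=shipping+times",
    "/search?q=return+product",
]

term_counts = Counter()
for path in page_paths:
    query = parse_qs(urlparse(path).query)
    for term in query.get("q", []):  # parse_qs decodes "+" into spaces
        term_counts[term.lower()] += 1

for term, count in term_counts.most_common(10):
    print(f"{count:>4}  {term}")
```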
3) Paid and Organic Keyword report in Google AdWords (Pre-defined Reports > Basic > Paid and organic)
The Paid and Organic Keyword report is the best report for making sure search ads and your SEO work together, rather than detracting from each other (i.e. “cannibalizing”).
It does this by helping you identify keywords that are performing well organically, which you may want to reduce your ad spend for, since you don’t want to waste money on Adwords ads if you’re already ranking on the first search results page organically. Conversely if you don’t rank on the first search results page for a keyword, you may want to put more ad spend behind that keyword to display an Adwords ad on the first page.
This report will show you search terms side-by-side with your Adwords metrics and organic search metrics.
This requires that you link your Adwords account with Google Search Console. You can learn more about using this report with this tutorial from Lunametrics.
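Once you export that report, a short script can surface the two lists you care about: keywords you may be over-paying for because they already rank well organically, and keywords that only convert through ads. The column names below are hypothetical, so rename them to match whatever your export actually uses:

```python
import pandas as pd

# Assumes a CSV export of the "Paid & organic" report with hypothetical
# column names -- adjust them to match your file.
df = pd.read_csv("paid_and_organic.csv")

# Keywords already on page one organically (average position <= 10) where you
# are still paying for clicks: candidates for reducing ad spend.
overspend = df[(df["Organic avg. position"] <= 10) & (df["Ad clicks"] > 0)]

# Keywords that convert through ads but have no page-one organic ranking:
# candidates for more budget, or for new SEO content.
gaps = df[(df["Organic avg. position"] > 10) & (df["Ad conversions"] > 0)]

print("Possible over-spend (already ranking organically):")
print(overspend[["Search term", "Organic avg. position", "Ad clicks"]])

print("\nPaid-only winners worth more budget or new content:")
print(gaps[["Search term", "Ad conversions"]])
```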
You may also want to supplement these keywords reports with Google Trends to understand what keywords are trending. Check out our tutorial on using free Google tools for keywords research here.
How to build a SEO dashboard
So far, you’ve learned how to show the value of SEO to your manager or client, and which SEO-related metrics to track. But you probably don’t want this to be a one-off event. To make this sustainable and repeatable over time, you need to turn the SEO metrics we’ve covered today into a custom “SEO Dashboard” on Google Analytics, so that you can look at all your metrics related to SEO on one page.
Step 1: Create a list of the SEO metrics you want to track with Google Analytics
Start by asking yourself a couple of questions:
What business questions do I want to answer with my data? For example, what landing pages are driving the most high-quality organic search traffic to my site?
What business decisions do I need to make based on analyzing my data? For example, what organic keywords should I put more ad spend behind in my next Adwords campaign to increase conversions?
With your team, brainstorm all the business questions and decisions that can be addressed (at least in part) by your Google Analytics data, and then write down the metrics that would help you answer these questions or make these decisions.
Choose 3–4 of the most important business questions/decisions, and then choose 2–3 metrics to support each question/decision.
Based on our experience working with many small and medium-sized businesses, here are some of the SEO metrics I would recommend adding to a dashboard for most SMBs:
Acquisition Metrics for Organic Traffic
Total Organic Visits
All Organic Visits Over Time (Timeline)
Engagement Metrics for Organic Traffic
Top SEO Landing Pages by Entrances and Bounce Rate
Top SEO Landing Pages by Goal Completions
Top SEO Landing Pages by Average Page Load Time
Keyword Metrics for Organic Traffic
Top Organic Keywords by % of New Visits
Pages per Visit by Organic Keyword
Keyword phrases sorted by goal completions
Most Successful Keywords by goal completions
These basic SEO metrics are a great place to start, and you can always add or subtract metrics going forward as you learn which specific metrics are most impactful for your business.
Step 2: Create a Custom Dashboard in Google Analytics with your SEO Metrics List
To create a custom dashboard with these metrics, go to Customization >> Dashboards >> Create.
Configuring the custom dashboard is fairly straightforward. After you create a title for your SEO dashboard, add a name for your first widget. For a metric like Total Organic Visits, click “Metric” under Standard, and find “Sessions” in the “Add a metric” dropdown menu.
Then, as with all your SEO dashboard visits, you’ll want to filter for only organic search traffic. Click “Add a filter” and pick “Only show >> Medium >> Exactly matching >> Organic.”
For a widget like “All Organic Visits Over Time,” you’ll want to select the Standard “Timeline” option. For a widget like “Top SEO Landing Pages by Entrances & Bounce Rate,” select Standard “Table.” Choose Landing Page, Entrances, and Bounce Rate as the metrics, and 10 rows. Repeat this process for your other SEO metrics.
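If you also want these numbers delivered on a schedule, say for the weekly check-in recommended later in this post, the same widget list can double as the config for a short script against the Reporting API. A sketch, again with a placeholder key file and view ID, and using standard Universal Analytics metric and dimension names:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholders -- swap in your own service-account key and GA view ID.
KEY_FILE = "service-account.json"
VIEW_ID = "123456789"

# The dashboard widgets, expressed as data: one report request per widget.
WIDGETS = {
    "Total organic visits": {
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [],
    },
    "Top SEO landing pages by entrances and bounce rate": {
        "metrics": [{"expression": "ga:entrances"}, {"expression": "ga:bounceRate"}],
        "dimensions": [{"name": "ga:landingPagePath"}],
    },
    "Top organic keywords by % new sessions": {
        "metrics": [{"expression": "ga:percentNewSessions"}],
        "dimensions": [{"name": "ga:keyword"}],
    },
}

# Only look at organic-search traffic, mirroring the dashboard filter.
ORGANIC_FILTER = [{
    "filters": [{
        "dimensionName": "ga:medium",
        "operator": "EXACT",
        "expressions": ["organic"],
    }]
}]


def weekly_seo_report():
    credentials = service_account.Credentials.from_service_account_file(
        KEY_FILE, scopes=["https://www.googleapis.com/auth/analytics.readonly"]
    )
    analytics = build("analyticsreporting", "v4", credentials=credentials)

    for name, spec in WIDGETS.items():
        response = analytics.reports().batchGet(body={
            "reportRequests": [{
                "viewId": VIEW_ID,
                "dateRanges": [{"startDate": "7daysAgo", "endDate": "today"}],
                "metrics": spec["metrics"],
                "dimensions": spec["dimensions"],
                "dimensionFilterClauses": ORGANIC_FILTER,
                "pageSize": 10,
            }]
        }).execute()

        print(f"\n== {name} ==")
        for row in response["reports"][0]["data"].get("rows", []):
            print(row.get("dimensions", []), row["metrics"][0]["values"])


if __name__ == "__main__":
    weekly_seo_report()
```

Dropping that into a weekly cron job (or piping its output into an email) keeps the check-in habit alive even when nobody remembers to open the dashboard.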
Optional: Import SEO Dashboard Templates as a Starting Point
If you’re running low on time, I would recommend importing these two SEO dashboard templates from the Google Analytics Solutions Gallery:
1) To import a custom dashboard from Kissmetrics with similar metrics to the ones I recommended above, you can use this link.
2) You can also use this more simple “SEO Performance” template.
This template tracks:
Total visits from SEO
Non-branded Visits from SEO
Branded Visits from SEO
Most viewed pages from SEO
Search engines used
Traffic sources
Cities finding website through SEO
You can then adjust these SEO dashboard templates based on your business needs.
At Humanlytics, we recommend checking your dashboard at least weekly. Check out our piece on why SMBs should track their metrics on a weekly basis:
Next Steps
As you can tell, learning how to analyze SEO with Google Analytics is not a trivial task. It takes a serious amount of investment in time and learning.
Many of the businesses we talk to are led by very smart and technical cofounders. But even these entrepreneurs who are trained in digital marketing and data analytics often don’t have the bandwidth or resources to distill actionable insights from their SEO data.
That’s why at Humanlytics, we’ve been helping a few dozen businesses optimize their digital channels, including their SEO and organic search traffic.
This is the reason the next feature we’re building in our digital analytics platform is an AI-based tool to recommend the right digital channels to focus on. This AI tool will tell you whether SEO is the right channel for your business based on your Google Analytics data, so you won’t have to waste any money on the wrong marketing activities.
Our AI-based marketing analytics tool delivers recommendations for the right micro and macro conversions for your business. PC: The Daily Dot
In other words, the tool automates everything we’ve explained in this tutorial so you can spend less time learning this stuff through trial-and-error, and more time doing what you do best — running your business.
If you’re interested in beta testing this feature for free (or need help setting up your conversion goals), shoot me an email at [email protected]. Thanks! | https://medium.com/analytics-for-humans/choosing-the-right-metrics-for-your-seo-dashboard-with-google-analytics-part-2-of-2-7167f7deae2b | ['Patrick Han'] | 2018-06-08 19:35:04.354000+00:00 | ['Startup', 'Digital Marketing', 'SEO', 'Google Analytics', 'Analytics'] |
The Pressure to Perform Is Wearing Us Down | The Pressure to Perform Is Wearing Us Down
Here’s what you can do when you’re tasked with 50-leven responsibilities and have to act like everything is okay (when you’re not)
This story is a part of The Burnout Effect, ZORA’s look at the pressures to perform and produce in an already chaotic world.
Hustle culture is as American as apple pie, baseball, and oppressing people of color. The illusion of glory found in grinding is codified in our literature, films, and, probably most effectively, in hip-hop. Nas once quipped that “sleep is the cousin of death.” Nipsey Hussle declared, “I been grindin’ all my life.” And rap’s first billionaire, Jay-Z, rhymed, “I’m not afraid of dying, I’m afraid of not trying.” But our society’s need to be in constant motion to accomplish goals has left many of us burned out, depressed, and hungry for something different.
That was the case for Danielle Young, a writer, influencer, and self-described “internet person” in Brooklyn, New York City. Though Young has built a successful career by working for The Root and Essence and interviewing celebs like Oprah, Idris Elba, and Lena Waithe, she’s also been battling depression for years.
“B.C. — before corona — it would have been very easy to just lay in the bed,” Young tells ZORA. “When I was working at Essence, I was getting to the point where I was having a hard time getting moving. I would be extremely late just because I couldn’t pull it together. I used to qualify it as laziness and would beat myself up about it, especially living in a city like New York.”
As an on-camera personality for a storied brand, Young was doing something she absolutely loved. Still, she felt guilty about not being able to be “on” and working at all times because of her depression.
Between being told we have the same amount of hours in the day as Beyoncé and the barrage of memes that claim we should be starting a business, healing traumas, and completing 50-leven projects, there is a pressure to perform at the highest level possible. That pressure is seemingly tied to almost every part of our existence: our work, personal lives, and side hustles. And our performance isn’t just about what we manage to execute. It’s also how we act with our family and friends, bosses and colleagues, and the audiences some of us have built with our personal brands. It’s the brave faces we put on to look okay, especially when everything is not okay.
The pressure is a lot, and it’s wearing many of us down.
As Young, 35, notes, “It’s very hard to be lazy in a city like this. You see the nurses in the scrubs, you see teachers grading papers on the train, you’re on the train with all types of people who are just working themselves to the bone. So experiencing that and being like, ‘Girl, you’re just going in to write some stories and make some cool videos that you have a good time doing. And you can’t do that? What’s wrong with you?’ I would beat myself up about it really bad, and I didn’t understand why I couldn’t do something that I seemingly loved.”
Though she says she doesn’t “have the capacity to live more than one life” and is not putting on a show for others — even when she’s working on her new interview series, Real Quick, or posting comedy sketches on Instagram — Young admits she felt a sense of urgency to produce content after losing her job in March. She didn’t want to look like she was slowing down.
“There was a moment, a week or two into this [self-quarantine], that I did assert pressure on myself. I felt like, ‘You don’t want people to see that you’re not with this big entity anymore so you ain’t got shit going on,’” she says. “Because I was betting on myself, I felt there was no time to waste because I didn’t have a job.”
“We have to make sure that we are being well, and not just doing well.”
According to Farah Harris, a Chicago-based licensed clinical professional counselor, the pressures we feel may be tied to how we see ourselves.
“When we feel like we have to perform all the time or be ‘on,’ sometimes we get so lost in doing that, we lose the essence of just being,” she says. “Instead, you may think that your worth is dependent on what you’re able to produce or how you’re able to appear to everybody else.”
While there are some benefits to performing — in the workplace, in romantic relationships, and in everyday life — if you’re starting to feel burned out from having to be on at all times, Harris says, “You need to ask yourself: Why do you need to do all this?”
Harris also argues that performance burnout may stem from America's frenetic work ethic, which tells us we shouldn't have any excuses for not doing things. "I think we feel bad when our bodies want to rest and we say no — we really think we should be doing something. But it's not healthy."
Instead, Harris says, “we have to make sure that we are being well and not just doing well.”
Reflecting on this chapter in her life, Young is now focusing on what is most important to her in her professional life — creating content she loves, building her own brand on her own terms, and giving herself grace. She says things are beginning to fall into place.
“A sis was definitely burned out before the corona,” Young says. “But now I feel like I’m igniting again.”
Regina Ossey, a 38-year-old married mother of two in Irvine, California, has her own story of performance burnout. After losing her second pregnancy, Ossey felt like she needed a change. “Although there are so many things that can be involved with [having a miscarriage], and sometimes they have nothing to do with health, I attributed it to how much I was working at the time,” says Ossey, who was then a general manager and regional trainer for Victoria’s Secret. “I was the breadwinner, so I couldn’t really afford to have this come-to-Jesus type of time off from work.”
Still, it wasn’t until Ossey got pregnant with her son in 2015 that she began to experience a shift. “For the first time, I took six months of maternity leave, and it hit me that I don’t want to go back to a job that I hated.” As the top income-earner for her family, Ossey’s desire didn’t match with her reality. She returned to her demanding position but admits, “My heart wasn’t in it.”
After a brief stint as a stay-at-home mom, Ossey was lured back into corporate America. She took a position with Uber and “got right back into my old habits,” she says, noting she put her game face on and worked “balls to the wall” in order to make a good impression. “I felt the need to prove myself due to imposter syndrome,” she admits. Ossey worked late hours, took on side projects, and volunteered for task forces all while nursing a toddler and caring for her mother part-time while she underwent chemotherapy.
According to Ossey, the whole thing left her “depressed and stressed, but I hid it from myself by staying ‘busy.’”
Ossey longed for something different but says she continued to go hard at the office because her family needed her. “At the time, my husband was switching careers and starting at the bottom of his industry, so I had to be at the top,” she explains. “For so long, I had the pressure of having to be the one that continued to climb the ladder because my family had to eat.”
“[At home], I’ve got to be mommy all day, and at night, it’s like, drop the panties, I’ve got to be a wife.”
After getting laid off, Ossey decided to launch her own consulting firm, which led her to take on a leadership role with Project Scientist, an organization that introduces girls to STEM. Despite this, balancing her home and work lives still felt impossible. Ossey made the difficult decision to switch to part-time in her career. Taking her foot off the gas, however, didn’t result in a more balanced life. Instead of putting on what she calls her “performative face” at work, she poured that energy into her role as a wife and a mother.
“I took a backseat in my outside-of-the-home career and thought that everything would fall into place. But that wasn’t the case,” she says. “[At home], I’ve got to be mommy all day, and at night, it’s like, drop the panties, I’ve got to be a wife.”
The performance burnout Ossey felt in the workplace bled over to her personal life, too. “I’m a mother, a wife, a sister, a daughter, as well as a friend. And for me, there’s always that dynamic of wanting to perform well in every role,” she says. “But what I have learned over time is that if I’m really good in one area, it’s okay for me to not be [as good] in another.”
Though it’s been a process, Ossey says sharing her challenges with her sister-friends helped her to realize she wasn’t the only one struggling with the idea that she should do it all. She also learned to make peace with the fact that she couldn’t perfectly execute every facet of her life like she previously attempted to do. “What makes me okay with it now is taking the time to decide what I value most for the day or month,” she says. “And if I’m intentional about that decision and if I’m faltering in other areas, I feel less guilty about [not being perfect] because I’ve already mentally prepared myself for what’s at the top of my priority list.”
Another thing that has helped Ossey and Young let go of the pressure to perform is focusing on self-care. The women both prioritize taking time for themselves while they still work toward their goals. To quiet her mind and kick off her day, Young has started participating in Devi Brown’s Divine Time-Out Digital Challenge while Ossey finds solace in taking daily bike rides through her Orange County, California, neighborhood.
As a therapist, wife, and mother of three, Harris isn’t immune to the pressures to perform either. But she avoids feeling too stressed about wearing multiple hats by reminding herself that “just being is enough.”
“If you’re showing up for your husband, praise God. If you’re showing up for your kids, praise God. If you’re able to turn in an assignment on time or meet a deadline, that’s enough. If you don’t do anything else [after you reach your limit], that’s okay, too,” she says. “It’s a bad habit that we’ve picked up from living in this country that we have to constantly be doing things to prove something — but what?”
“I have learned over time that if I’m really good in one area, it’s okay for me to not be [as good] in another.”
For women struggling with performance burnout, Harris suggests they get to the root of why you feel the need to be on at all times. “There’s some story behind the reason why you feel you have to perform. And when you can answer the question ‘Who’s pushing me to do this?’ you’ll realize you have more control than you thought.”
She also advises “going into your self-care bag” to protect your energy in various ways, like taking time to reflect and altering the way you’re engaging with social media, which can trigger feelings like jealousy, anxiousness, or competitiveness.
“You may have to limit how often you’re on social media, or you may have to adjust your notifications,” she says. “You may not feel as ‘on’ as you did before, but my advice is to protect your well-being by putting in some self-care tools that you may not have used before.”
Harris also says creating daily rituals can help you “feel the feelings” and work through them in a healthy way. Whether that’s journaling or finding someone to talk to or having an accountability partner, Harris says finding something that works for you will go a long way to alleviate burnout and “feed your soul in a positive way.” | https://zora.medium.com/the-pressure-to-perform-is-wearing-us-down-5696e685180c | ['Britni Danielle'] | 2020-05-13 21:11:40.167000+00:00 | ['Women', 'Burnout', 'Work', 'Mental Health', 'The Burnout Effect'] |
Kaggle Session (Machine Learning Intro) | The Data Science Student Society (DS3) is an interdisciplinary academic organization designed to immerse students in the diverse and growing facets of Data Science: Machine Learning, Statistics, Data Mining, Predictive Analytics and any emerging relevant fields and applications.
https://medium.com/ds3ucsd/kaggle-session-machine-learning-87a9e4a76397 | ['Emily Zhao'] | 2019-10-31 07:26:08.878000+00:00 | ['Machine Learning', 'Data Science', 'Artificial Intelligence'] |
What if advertisers were willing to pay people to view their ads? | I’ll start with a hard truth. People. Do. Not. Like. Ads.
If you’re like me and have built a career in advertising, you may be in denial about this. That’s normal. If so, please read Mark Ritson’s research-backed analysis, or reference Deloitte TMT predictions 2018.
People avoid ads whenever and wherever they can. As a marketer, you’ll find coming to terms with this quite freeing. You can stop throwing fits, turn down the Linkin Park on your headphones, wash off the excess mascara and get on with life as an adult (I grew up in the 90s if that wasn’t obvious).
Now that we’re all on the same page, let’s debate some really interesting questions.
What if people got paid to see our ads?
This isn’t the first time the question has been raised, but more recently it’s front and centre. Bunz and Loblaw are both exploring payment — or more accurately rewards — for ad consumption. What would happen to our industry if that were the norm?
Is it good for the consumer?
Yes. The old economies of advertising no longer hold true, yet (as often happens), the outdated processes remain. Historically, advertising was designed to fund, or at least subsidize, content distribution. Free TV! But with ads. Cheap newspapers! But with ads. And so on. As the cost to distribute content has dropped, advertising isn’t subsidizing distribution nearly as much: It’s funding content curation using technology.
In other words: “Yes, theoretically you have instant access to effectively limitless information on the web. But if you want to find what you need, or have us find it for you, you’ll have to use our service, which — of course — runs on advertising.” Today’s tech giants are built on this premise, and the glue that holds it all together is consumer data. Data that technology companies have collected with or without permission (that’s a debate for another time). Data they are using to fuel their businesses.
And, more pertinent to our argument, data that you the consumer should own, control, and benefit from. Within that context, the idea that you are rewarded if your data is used — a so-called data dividend — is fair and commendable. I won’t argue with that. No sir. Kudos to those companies.
Is it good for the media?
Probably yes. I’d argue that it will fundamentally change the relationship between media and consumers. It’s certainly a new business model. And what interests me the most, and the topic I want to broach now, is how media will incentivize you for your personal data.
For starters, those running diversified businesses are at a clear advantage. Case in point: Loblaw, which has grocery stores, drugstores, brands and loyalty programs. All this makes it easier to ask for people’s information in return for value. But you can’t argue that Loblaw is not a media company. They weren’t when you were a kid, but they are now.
Another example would be if you are in the search engine business but you also sell hardware. You can connect those dots — say, free phone in exchange for your search history? It wouldn’t be surprising if hardware and connectivity prices were lower if consumers were paid to see ads.
But what about those who don’t have diversified revenue streams? Would businesses introduce the idea of a reverse subscription? We, the publisher, pay you a share of ad revenue if you read our magazine (and thereby, be exposed to our ads). Can notable publications run as co-operatives?
Whatever the model, at the end of the day, one can only imagine that if consumers know why and how their data is being used, their relationship with media will be more transparent and trusting. That is bound to be good for the media.
Is it good for advertisers?
Define “good.”
When I heard about Bunz offering digital currency as a reward for ads, I sent it to some industry friends. A few of them, notably brand managers, wondered how it would drive value for brands. And that is when it dawned on me: Whether actually paying people to watch your ads is a good thing or not, thinking as if you have to pay them certainly can be.
This is a world where media is fragmented like never before, and targeting options are maddening — even for industry insiders. Perhaps this would be the best question to ask ourselves: what is the real value of getting our message in front of an audience?
Put another way… Are you confident that your message is relevant enough and your ad is compelling enough that you, personally, would pay me — the viewer — to see the ad?
For advertisers, could this become the new yardstick — a willingness to pay to see? Because if people see our work and think: “Man, you couldn’t pay me to look at this,” aren’t we just wasting our time? And our money? | https://medium.com/empathyinc/what-if-advertisers-were-willing-to-pay-people-to-view-their-ads-6276049d53a7 | ['Mo Dezyanian'] | 2019-05-31 18:28:20.156000+00:00 | ['Marketing', 'Digital', 'Advertising'] |
Productivity Software in 2020 | Productivity Software in 2020
Challenging the “One Size Fits All” Productivity Suite
The first half of 2020 has been anything but the expected, with long-term impacts for society that may take years to fully solidify and understand. Widespread lockdowns have caused more of us to rely on technology and the Internet; that in turn is driving a (somewhat awkward, unasked-for, but nonetheless) nosebleed pace of growth in usage and valuations for business software companies.
Those factors, in part, encouraged our team at Bain Capital Ventures to lead the Series B in Clockwise, which we announced this month. In early April, Notion disclosed that it was the latest in a string of business applications to close funding on the back of eye-popping growth, securing financing at a $2B valuation, joining (among many others) Airtable at $2.5B+. As we enter the second half of 2020, it’s clear not only that our work technology is more critical than ever, but also that we are well into a new wave of young companies successfully challenging the traditional dominance of bundled productivity suites from companies like Google, Microsoft, Adobe and Cisco.
Having learned from several years of conversations with founders and builders in this world, I wanted to share a review of the landscape: what are the key tailwinds? How are innovators differentiating? For founders excited about helping millions of workers be more efficient and productive, what opportunities may be under-explored today?
What are the tailwinds supporting these innovators?
Several transformations enable software providers to transact directly with their end user — and importantly, sell based on the users’ personal, role-specific preferences and needs. These factors include:
IT Flexibility: A decade ago, business users started bringing their personal devices to work at such a rate that IT organizations had to adapt. Over the past few years, companies have bent to the reality that employees will also “bring their own tools” to enable their most efficient work.
Internet Distribution: Just as consumer upstarts like Warby Parker and Dollar Shave Club leverage digital channels to directly acquire customers, business applications can benefit both from organic social reach, and from launching hyper-targeted paid marketing on accessible minimum budgets.
Decline of Gatekeepers: One of the advantages of bundling was that a vendor could be judged on breadth of its productivity suite. But bundles therefore have to serve everyone, and when customers are free to choose the individual pieces that best address their needs, they’ll often select a deep specialist, even if it costs more overall.
Influencers sharing software on social media can now drive early adoption
How are they differentiating?
When given their own choice and freedom, business users are gravitating to alternatives that compete on:
User Experience: Many productivity challengers have an obsession with eliminating unnecessary friction in usage. When you spend most of your day in email, calendar, and similar applications, even a second faster is meaningful. Clean interfaces, friendly colors, intuitive interactions, thoughtful automation, instant loading, and familiar hotkey layouts all contribute to saving time and frustration.
Collaboration: With better web infrastructure and less legacy cruft, developers can build better experiences out of the box. It’s become standard to expect multiple users working real-time on the same file, like design boards in Figma. As a bonus, in the natural course of users inviting their colleagues to cooperate on projects, they’re organically evangelizing the product.
Brand: Office and G Suite tend towards the conservative in their messaging and marketing, reflecting the need to please a lot of audiences, and also address their target buyer (IT). G Suite’s April Fools’ jokes have been more timid since Gmail was lightly roasted for the “mic-drop” button that led to workplace misunderstandings. Young startups are unencumbered in adopting a more irreverent, friendly, and approachable tone that makes their users feel invested emotionally — and with their likes, hearts and retweets.
Community: Productivity challengers invest early in nurturing their developer and creator communities, often thinking from the very early days about integrating to other tools in their user’s workspace, enabling developers to build extension capabilities, and promoting common templates and designs.
Where might more opportunities exist?
We believe that the enterprise software trends that enable user-led sales, and the productivity challengers that are emerging to better address user needs, will touch every tool in the workplace. For tinkerers and product people exploring opportunities, here are two rules of thumb that we use to map out potential whitespace.
The Rest of the Bundle
One analysis is simply to look at the major software bundles, and see where web-native challengers have yet to emerge, or are yet building momentum. For example, modern presentation products are starting to eat away at the dominance of PowerPoint. Their approaches recall the tactics we’ve seen among unicorn challengers in other categories:
Simplicity: Companies like Prezi and Beautiful simplify the tool, apply AI-driven automation, and otherwise find ways to reduce the effort it takes to put together a good presentation.
Web-Native Experience: Pitch is built with commenting and real-time editing enabled, and touts its ability to integrate data from other work tools, to keep your presentations up to date. No wonder they have users at Notion!
Buried in Existing Applications
Not only is Office a bundle, but it may turn out that each application is itself a bundle of a multitude of use cases. Excel occupies one icon on our desktops, but it actually serves a range of functionality, like planning expenses for a road trip (budgeting), or keeping track of grocery needs (to do lists), let alone tracking supply chains (SCM) and scraping data with VBA (data automation).
Within a bundle, Excel needs extreme flexibility to cater to a very broad cross-section of users. In a world where software makers can better target their ideal users, why not optimize the user interface, interactions, integrations and more for a narrow but powerful use case? It seems very possible that individual buttons in Excel, translated into dedicated productivity products, could become meaningful businesses in their own right.
What are you building?
This modern productivity software wave has been an enormously thrilling ride so far. As we look to the frontier of knowledge work that remains to be improved, we can thankfully benefit from all these better tools to ideate, design, and build.
Do you have a vision for how we can become more efficient, creative and satisfied at work? Are you harnessing state-of-the-art infrastructure, an intense focus on product perfection, and/or a passionate user community? I would love to hear from you, and to continue supporting this transformation.
I originally published this article to Efficiency Frontier. I wrote this with my colleagues Zeeza Cole and Ajay Agarwal, thanks to feedback on drafts from Jessica Retrum, Steven Lee, Tom Uebel and Jessica Ko. | https://medium.com/ideas-from-bain-capital-ventures/productivity-software-in-2020-b953346f4e1d | ['Kevin Zhang'] | 2020-07-01 18:44:37.156000+00:00 | ['Insights', 'Productivity', 'SaaS', 'Product'] |
Learn How to Create a Video Card using SwiftUI | All of the code for the VideoCard is posted on my GitHub. Feel free to download or star for later use, and if you’d like to support my further development efforts, please consider subscribing using this link, and if you aren’t reading this on TrailingClosure.com, come check us out sometime.
Getting Started
Go ahead and create a blank project in Xcode, and make sure SwiftUI is selected.
Creating The Custom PlayerView
Go ahead and create a new class PlayerView which subclasses UIView . This will give a basic boilerplate as shown below. Go ahead and delete the code for the draw function. We won't need it this time.
import UIKit
class PlayerView: UIView {
}
At the top go ahead and import AVFoundation and AVKit . We'll need these modules in order to add the player layer in a second.
Next, define 3 class variables. The first will be the playerLayer , as an AVPlayerLayer() . This is the layer that will actually be showing the video on the card. The next variable is the previewTimer , a Timer optional which will control the looping behavior of the video. Finally define the previewLength , a Double which defines how long the preview loop is. This is what you should have so far:
import UIKit
class PlayerView: UIView {
private let playerLayer = AVPlayerLayer()
private var previewTimer:Timer?
var previewLength:Double
}
Define a custom init function which takes in a CGRect (frame), URL (video URL), and Double (loop duration). Within the initializer, set the preview length and call UIView 's initializer with the frame. Also, go ahead and define the failable initializer and define a default loop length of 15 .
import UIKit
class PlayerView: UIView {
private let playerLayer = AVPlayerLayer()
private var previewTimer:Timer?
var previewLength:Double
init(frame: CGRect, url: URL, previewLength:Double) {
self.previewLength = previewLength
super.init(frame: frame)
}
required init?(coder: NSCoder) {
self.previewLength = 15
super.init(coder: coder)
}
}
We’re almost done with this class. Hang in there. It only gets easier from here. Next, we’re going to set up the player within the PlayerView initializer function.
init(frame: CGRect, url: URL, previewLength:Double) {
self.previewLength = previewLength
super.init(frame: frame)
// Create the video player using the URL passed in.
let player = AVPlayer(url: url)
player.volume = 0 // Will play audio if you don't set to zero
player.play() // Set to play once created.
// Add the player to our Player Layer
playerLayer.player = player
playerLayer.videoGravity = .resizeAspectFill // Resizes content to fill whole video layer.
playerLayer.backgroundColor = UIColor.black.cgColor
previewTimer = Timer.scheduledTimer(withTimeInterval: previewLength, repeats: true, block: { (timer) in
player.seek(to: CMTime(seconds: 0, preferredTimescale: CMTimeScale(1)))
})
}
Most of the new code explains itself. However, let’s explain the timer. The scheduleTimer function takes in our previewLength variable to run a closure after its specified amount of time. That's when we set the player's current time to zero. Remember the player was already playing (We started it earlier in the initializer).
The final touch on this class is to override the layoutSubviews() function. This will make sure the playerLayer fills out the entire view when it's resized within its SwiftUI View . The final class is shown below.
import UIKit
import AVFoundation
import AVKit
class PlayerView: UIView {
private let playerLayer = AVPlayerLayer()
private var previewTimer:Timer?
var previewLength:Double
init(frame: CGRect, url: URL, previewLength:Double) {
self.previewLength = previewLength
super.init(frame: frame)
// Create the video player using the URL passed in.
let player = AVPlayer(url: url)
player.volume = 0 // Will play audio if you don't set to zero
player.play() // Set to play once created
// Add the player to our Player Layer
playerLayer.player = player
playerLayer.videoGravity = .resizeAspectFill // Resizes content to fill whole video layer.
playerLayer.backgroundColor = UIColor.black.cgColor
previewTimer = Timer.scheduledTimer(withTimeInterval: previewLength, repeats: true, block: { (timer) in
player.seek(to: CMTime(seconds: 0, preferredTimescale: CMTimeScale(1)))
})
layer.addSublayer(playerLayer)
}
required init?(coder: NSCoder) {
self.previewLength = 15
super.init(coder: coder)
}
override func layoutSubviews() {
super.layoutSubviews()
playerLayer.frame = bounds
}
}
Setting Up The UIViewRepresentable
Next up we will create a wrapper for our PlayerView that you use to integrate into your SwiftUI view hierarchy.
Go ahead and create the class, VideoView , and subclass UIViewRepresentable . Import the AVFoundation and AVKit modules like last time. Create 3 class variables: the videoURL:URL , showPreview:Bool , and previewLength:Double Then UIViewRepresentable requires us to provide implementations of makeUIView and updateUIView functions. I've shown below how to implement makeUIView . updateUIView` will be empty.
import SwiftUI
import AVFoundation
import AVKit
struct VideoView: UIViewRepresentable {
var videoURL:URL
var previewLength:Double?
func makeUIView(context: Context) -> UIView {
return PlayerView(frame: .zero, url: videoURL, previewLength: previewLength ?? 15)
}
func updateUIView(_ uiView: UIView, context: Context) {
}
}
That’s it!
This will create your basic Video View that you can drop into your SWiftUI project. Below is a basic implementation using SwiftUI.
However, we’re going to take it a step further if you keep reading
import SwiftUI
struct VideoCardTestView: View {
@State var maxHeight:CGFloat = 200
var body: some View {
VStack{
VideoView(videoURL: URL(string: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/TearsOfSteel.mp4")!, previewLength: 60)
.cornerRadius(15)
.frame(width: nil, height: maxHeight, alignment: .center)
.shadow(color: Color.black.opacity(0.7), radius: 30, x: 0, y: 2)
.padding(.horizontal, 20)
.padding(.top, 20)
Spacer()
}
}
}
Extra Credit!
For those sticking with us, I’ll show you how we can add a ‘play’ icon on top of the video, as well as add support for tap gestures.
Create a new SwiftUI View named VideoCard .
Add a @State variable for showing the play icon. Then one for the video URL, and finally the preview length.
@State var videoURL:URL
@State var showPlayIcon:Bool
var previewLength:Double
In the body create a ZStack as the root element and inside place an instance of the VideoView we created earlier.
Now for the play icon. We can create it by instantiating an Image object and provide the string for the system image play icon. place this below your VideoView
Image(systemName: "play.circle.fill")
Add an if statement around the image to conditionally show the play image, by utilizing the showPlayIcon variable. Optionally, you may add some style to the play icon. Here's what your class should generally look ike now:
import SwiftUI
struct VideoCard: View {
@State var videoURL:URL
@State var showPlayIcon:Bool
var previewLength:Double
var body: some View {
ZStack {
VideoView(videoURL: videoURL, previewLength: previewLength)
if showPlayIcon {
Image(systemName: "play.circle.fill")
.resizable()
.scaledToFit()
.frame(minWidth: 20, idealWidth: 40, maxWidth: 40, minHeight: 20, idealHeight: 40, maxHeight: 40, alignment: .center)
.foregroundColor(Color.white)
}
}
}
}
…and for the cherry on top. To add tap gestures to the card, update your view to include this at the bottom of the ZStack
.onTapGesture {
// You Tapped the Video Card!
}
Extra Credit Complete!
Here’s an example mockup of how I used my VideoCard in testing! | https://jboullianne.medium.com/learn-how-to-create-a-video-card-using-swiftui-fb0d7a3fe7da | ['Jean-Marc Boullianne'] | 2020-04-06 10:12:32.469000+00:00 | ['Mobile App Development', 'iOS', 'Swiftui', 'Swift', 'iOS App Development'] |
Contemporary Chinese Artists, Still on the Rise | Contemporary Chinese Artists, Still on the Rise MutualArt Follow Dec 2 · 5 min read
Wang Guangyi, Yue Minjun, and Zhang Xiaogang reflect on the era when Maoist propaganda dominated the visual landscape, as they construct fresh visions of their own.
Pamela Kanner /MutualArt
Yue Minjun, Red Boat — Planche №6, 2009. Lithograph available for sale at MutualArt. Contact [email protected]
Ai Weiwei routinely makes headlines with his record-breaking sales and political activism. Far from an anomaly, his success reflects the steady emergence of contemporary Chinese artists onto the global market during recent decades.
Exploring these artists’ growing market appeal reveals the distinctive sociopolitical climate in which they came of age and how it shaped their identities. Works by Beijing-based artists Wang Guangyi, Yue Minjun and Zhang Xiaogang embody a synthesis between Western modernity and the aftershocks of Mao Tse Tung’s regime which restricted free artistic expression.
Mao’s Cultural Revolution initiative, launched in 1966, sought to preserve Chinese communism and establish extremist Maoism as China’s leading ideology. The movement prohibited art deemed anti-communist or anti-Mao as well as traditional techniques such as ink painting. Artists who resisted were subject to harassment and sometimes even torture. Propagandist art dominated, mainly oil paintings disseminated as posters, used to glorify the regime and provide the public with visual fodder of a flourishing socialist state.
Wang Guangyi, Time. Lithograph available for sale at MutualArt. Contact [email protected]
Bathed in communist red, also a conspicuous color in the Chinese tradition, the posters were often filled with young, lively people delighting in their great ruler’s magnificence. Their imagery’s prevalence persisted beyond Mao’s regime, even as artists regained freedom of expression under Deng Xiaoping, whose rise to power in 1978 drove the country toward industrialization and globalization.
Wang Guangyi’s style draws directly on Maoist propaganda. Born in 1957 in Harbin, Heilongjiang Province, he witnessed the political and cultural upheaval of the ’60s and ’70s, laboring as a railway worker in a nearby village before attending the Zhejiang Academy of Fine Arts.
Wang dispels the Cultural Revolution’s visceral brainwashing power in his compelling Great Criticism series. He blends heroic Maoist imagery with well-known consumerist icons and a Warhol-esque treatment, a style often described as Political Pop. Wang’s juxtaposition between socialist symbolism and Western advertising highlights the subliminal power both ideologies hold, depicting a narrative of how one replaced the other in China.
Wang Guangyi, Mao Zedong : №2 of Red Box, 1989. Lithograph available for sale at MutualArt. Contact [email protected]
Wang based his Mao Zedong : №2 of Red Box on the standard portrait of Mao, once an omnipresent feature of every Chinese citizen’s life, which served to elevate Mao to iconic status. Wang’s bold red lines bring to mind the grid system used by Renaissance masters to accurately render models’ proportions, simultaneously distancing artists from potentially seductive, often nude models. By applying the grid to Mao, Wang similarly dissociates himself from Mao’s cult of personality. At the same time, he breaks up the venerated figure into smaller chunks, each insignificant on its own.
Yue Minjun was born in 1962 in Daqing, Heilongjiang, where his family worked in the nearby oil fields. He is responsible for one of the most recognizable images of contemporary Chinese art — ‘that laughing face.’ It is maniacally jubilant, gleefully oblivious, caught in a moment of laughter yet eternally, perhaps obediently, frozen in time. Its closed eyes are blind to reality, mouth agape in helpless abandon. Yue’s laughing face, based on his own self-portrait, functions as an enigmatic symbol and common thread in his work.
Yue Minjun, Mushroom cloud, 2008. Lithograph available for sale at MutualArt. Contact [email protected]
Yue’s Mushroom Cloud features a row of identical laughing faces, a unified bubblegum pink-hued mass oblivious to the ominous, swelling mushroom cloud forming above. The cloud’s shape closely resembles China’s first atomic bomb test explosion in 1964, which inaugurated the country as a global nuclear power. In invoking the nuclear test, Yue underscores his laughing subjects’ blindness to the political climate ‘exploding’ in their midst. On the other hand, Yue’s subjects implore, perhaps ignorance is bliss?
Critics often associate Yue with the Cynical Realism movement, a genre characterized by a humorous, ironic approach to China’s radical transition to industrialization and modernization. Yue’s style indeed treads a fine line between cynical and light-hearted, between morbid and playful.
Zhang Xiaogang, Bloodline: Big Family, 2006. Lithograph available for sale at MutualArt. Contact [email protected]
A breakthrough star at auction, Zhang Xiaogang was born in 1958 in Kunming, Yunnan Province. At one point, his parents were taken away by Maoist authorities to be “re-educated” for three years. Zhang earned broad acclaim for his triptych The Dark Trilogy: Fear, Meditation, Sorrow, which in 2011 set a record for the highest price paid for a contemporary Chinese artwork, selling for $7 million at auction.
Zhang set another record in 2014 with the $12.1 million sale of Bloodline: Big Family №3. The artist’s belated discovery of old family photographs led him to create the Bloodline series, a surrealist reimagining of the type of family portrait common in China during the ’50s and ’60s. Zhang masterfully transforms the medium of photography into painting and exposes the era’s forced uniformity. The figures’ polished faces and empty gazes peer eerily out at us, calm yet unsettling, hiding a mysterious past.
Zhang Xiaogang, Tiananmen, 2007. Lithograph available for sale at MutualArt. Contact [email protected]
Zhang and his contemporaries grew up in a world where art and politics were closely bound. The era’s ramifications were difficult to entangle, as art schools continued to teach socialist realist style for several years afterward. After years of restriction and isolation, beginning in the ’90s, contemporary Chinese artists have emerged dramatically onto the international stage. The recent market boom, at first mainly driven by international collectors, is most recently led by Chinese buyers who command the top lots at auction.
In an open, globalized China, artists meditate on the reverberations of Mao’s Cultural Revolution. While these artists were once prescribed a strict visual iconography, today they reflect on that era to form powerful narratives of their own. | https://medium.com/mutualart/contemporary-chinese-artists-still-on-the-rise-a009b19b4bb | [] | 2020-12-02 08:02:56.281000+00:00 | ['Communism', 'Creativity', 'Freedom', 'Art', 'China'] |
5 Italian Bestsellers Unknown To the International Audience | 1. A Fortune-Teller Told Me — Tiziano Terzani
The real story of Tiziano Terzani, Italian journalist for Der Spiegel, one of the main German weekly news magazines. Terzani had a particular interest in East Asian affairs and travelled all his life through these regions. One day, during one of his travels, Terzani met a fortune teller who advised him to avoid all plane travels in 1993, as according to him, in that year he would die in a plane crash. Terzani decided to trust him and travelled for an entire year avoiding airplanes. Trips that used to take a few hours suddenly became week-long adventures. Thanks to this radical decision, Terzani will discover the real Asia, which only a few European have been able to witness. Among fortune-tellers, gurus and fakirs, he will slowly embrace his hidden spirituality without never fully abandoning his sceptical attitude. Not even when, in 1993, the colleague who replaced him survived a tragic airplane crash.
What did I learn?
A Fortune-Teller Told Me was my first approach to spirituality. If, like me, your education is grounded in science, this is the perfect book to approach the subject without suddenly starting to believe in everything paranormal. Terzani’s sceptical attitude will convince even the most convinced scientist that there are things that science cannot and should not explain. | https://medium.com/books-are-our-superpower/my-5-favourite-italian-books-824965dce0ad | [] | 2020-12-06 23:52:43.152000+00:00 | ['Book Recommendations', 'Books', 'Reading', 'Book Review', 'Books And Authors'] |
How to Repurpose a Single Piece of Content in to a $997 Course | Profitable Content Repurposing
How to Repurpose a Single Piece of Content in to a $997 Course
And get paid to do it over and over again…
PresenterMedia with Permission
[This column is adapted from a recent webinar show. If you’d like to view the webinar show too, click here]
One of the key strategies I see left out of the world of Content Repurposing is turning your content into course and other products. Since in my Content Marketing Model, you create your products when you create your content, this makes no sense to me.
Here’s what you’re going to get in this column:
The content to course mindset — Just a side note, I had a particularly good time creating the slide for that one.
Secret number one — How to create content you can repurpose into a course.
Secret number two — How to know with certainty which content to repurpose into a course.
Secret number three, How to repurpose a single piece of content into a 997 course.
Now those three steps may sound simple. One of the ridiculous myths online is if it’s simple. it can’t be serious. The exact opposite is true. Simple is the most serious, because it’s the most successful.
PresenterMedia with Permission
So let’s get going. The course to content mindset. Here’s the graphic I had fun creating. This is where most people are stuck.
I remember this all the way back when I went full time in ’07. Being on a teleseminar one weekend, and they were telling us that it was going to take a year to create a course, and would cost $2,000 to do it. I was laughing, because I’d created a course and a product that weekend.
But most people get stuck in believing that it’s hard, takes forever, and you never want to do it again. Anybody ever feel that way? As they were looking at creating a course, or creating a product. It’s hard, it takes forever, and I don’t know that I even want to do it the first time. Much less again.
Don’t worry, I’m sure you’re not alone. It’s not your fault. It’s the myth that’s been the told out there.
From now on, this will be easy for you. The way to make it even easier, is to do one this way. Once you do that, boy do your eyes open up? Because the reality, the mindset I want you to have, the content to course mindset, is as follows. The mindset I want you to adopt is its easy, fast, and almost most importantly, repeatable. It’s scalable. You can do it over, and over again, if you want.
Now, the reason I picked this picture of somebody cruising along with a suitcase like this is, the next time you catch yourself thinking,
“Well that’s already been done, or there’s no more good ideas out there, or even better. No more good domain names out there.”
I just bought repurposingrocks.com today. Please remember this:
We put people on the moon before anybody thought about putting wheels on suitcases.
If you’re old enough to have carried a suitcase, a heavy suitcase in an airport, or from the car to the hotel, you know what I’m talking about.
Isn’t that amazing? We put people on the moon before we ever put wheels on suitcases.
So there’s plenty of good ideas out there. There’s plenty of good products to make. There’s plenty of people that want them, and even plenty of good domain names out there.
So it’s not hard. It doesn’t take forever, and you will want to do it again. When you do it, the easy, fast, repeatable way, which we’re going to do right now.
So here are 3 secrets to repurposing a single piece of content into a $997 course:
PresenterMedia with Permission
How to create content you can repurpose into a course, right? Now, one of the things we’re teaching in RepurposewithPurpose.com, as you might imagine from that name, is how to create content from the get-go, with the intention of repurposing it. Repurposing it to build your audience, build your traffic, build your visibility, and repurposing it into products.
That’s what we’re focused on today.
The best way to do that is with something that I call Chunk Templates.
What the heck are Chunk Templates?
Well, Chunk Templates are when you’re doing content in numbers, like three mistakes, five tips, seven benefits, three myths, five things to avoid, seven things to always do. Because then with each one of those three, five, or seven chunks, you can turn those into a module, or into part of a course.
So now does it have to be three, five, or seven? No, it can be one, two, four, six, whatever.
Once you get past seven, you’re getting a little bit too much. You probably ought to have a part one, and part two.
But this is how I want you to change the way you create your content. Whether it’s in text, audio, video, meme, graphics, infographics, whatever. Chunk Templates, three, five, and seven. This is secret one.
PresenterMedia with Permission
You got all this content, how do I pick one to build a course around that you know your audience wants, that you know your audience is going to pay for?
Well, there’s some simple ways to do it. Basically your audience tells you. Let me give you a good example here.
Every single time I’ve sat down to create a product, and wasn’t sure which one to create, I listed five. Some of them are ones I was thinking of creating, some of them I was thinking, “No, they’re not going to want this. They’ve already had something like this.”
Here’s what happened. Here’s the results. Never once, not once, have you all picked the one I was thinking about, and thinking you would want, and thinking of doing for you.
Not once.
Oftentimes, you all have picked the one that I thought, “No, that’s not it. No.” It just amazes me what happens when I pay attention to my audience.
Because when you pay attention to your audience, and your audience tells you what they want, and then you deliver it, how much more predisposed to buy do you think they are? Enormously predisposed to buy it.
Now a t story is from a few years back when Mike Stewart and I created Live Video Secrets, the first course ever on Facebook Live.
We were recording video for this upcoming course. We discovered through our research, that not everybody, even most people didn’t have Facebook live yet.
And we both realized we can’t put this out there, if most people can’t access the main tool. That would annoy the crap out of me. So we have to wait on this. And they said it’ll probably be a few months.
“Okay. Great.”
So I drove away from Mike’s later that day thinking:
“Crap, this was the product that was going to offer, and I need to put a product out there.”
I used a very important strategy that you may follow. I drove to one of my satellite offices, also known as hiking trails, the lake, or restaurants. Chose Rice, my favorite sushi restaurant. I read something the other day that said, “for every sushi roll you eat, you take a month off your life.” I know for a fact that’s not true, because I’d have been dead in 1983.
So it was mid-afternoon, I’d been there enough know the staff. I get my table, and I pull out my laptop, and I’m thinking:
“What am I going to make? What am I going to create?”
I wrote five things down. The one I did not think y’all would pick, because I’d done a course before on it, that’s the one y’all pick. That’s where Repurposing Content Secrets was born.
Pay attention to your audience, and ask them. Ask them. Some of you were saying,
“But I don’t have an audience yet. Who do I ask?”
One of the quickest ways to get an audience is over on Medium right now. Go out, and ask people that you know that fit your ideal client, that could be on your list. Ask friends, ask family, ask people online, ask colleagues, asking on your Facebook. Ask.
PresenterMedia with Permission
Here comes secret number three: How to repurpose a single piece of content into a 997 course. Now that sounds just too good to be true, does’t it?
But it’s not, because I’m going to show you. Now remember at the beginning of this show, I said one of the myths online, if it’s simple, it can’t be serious. I’m about to prove that myth very, very wrong.
All right. The column we’re going to use as an example here came from an article I put up a few days ago, about seven great ways to repurpose your content. All I’ve done is taken each of those seven ways, and turned it into a training module. So seven great ways, we got seven modules.
PresenterMedia with Permission
What are those empty boxes for? I’ll show you in a little bit. As I remember the first way that was suggested was articles. What a surprise, right? Articles.
That’s the first module. Now I’d come up with a fancier name than article module, but I just want you to see the topic.
Here is how I would build out that module. A lot of it would be done live. I would schedule this, invite the members of the course, and have them there live. So they can get their questions answered. So they can help me build this thing, you can make better courses that way. I have a whole backstage pass system that gets people in early, where you basically… Folks, how many of you would like to get paid to create your first course, or to create your next course? I’ve been paid to create every course I’ve created the last 10 years.
First thing I would do is in a Webinar training or a video training, I would do a video. Now you don’t have to do it live. You can do it recorded. Then I’d have that video transcribed at Rev.com or Otter.io. Then from the things we said to do in that video I would create a checklist; a high value item. People love it because it makes life easier. If it fit I would then create a template for doing it faster and easier. Then I would have a Q&A, either as part of that live training, or what you can do in some courses, is do the live training one day, and then do the Q&A after people have had some time to play with it.
So let’s go to one of the audio ones. I did two audios and three videos. Let’s go to this one, and it would be a podcast.
Now, you know I like to keep things simple. But this is where this gets a little complicated. Please hang in there with me. It’ll make sense in just a moment, right?
Some of you that know me well caught the sarcasm in my voice. All you would do to make this module is the same five things. A video, a webinar about repurposing into a podcast, the transcript, the checklist, the template, and the Q&A. Folks that’s it.
I could have made this harder for you. I’ve got a master’s, and half a PhD, but why? I don’t want to be a gooroo, G-O-O-R O-O. I want to see people do well, and thrive. That’s my whole thing in everything I do. From this stuff to the volunteer work to when I was a counselor psychologist. Make it simple. Help people thrive.
Now, for the other five modules. Anybody got an idea what you do? TWhat would we do for the other five modules? You would just do the same. Now, rinse and repeat.
This makes it very simple for you to create a product, the course, and do it over and over again. Follow the same formula, because the content for each of the five pieces is going to be different for everyone.
Now, do you have to follow this? No. What I want you to do, is take this formula. Take this template, use it, master it, and then make it your own. And add your own pieces to it.
Now, you notice there’s two blank spaces here, and a lot of people leave these out. I don’t.
This first pre-module, or pre-training one is the Get Started Here module. This is very important, if you have a course that’s not going to start right away. It’s called a Stick Strategy. You want people to stick around, and you give them so much good stuff at the beginning. It works.
Then you do your seven modules, and you come down here to this one, and I call this the Putting it All Together module. That’s when you review, answer more questions, give examples, case studies, maybe an example where you build something of your own, or someone else’s. So people can see that they can do it.
And that ladies and gentlemen, boys and girls, is a darn good course, right there. Could it be three modules? Yep. Could it be five months? Yep. Could it be one module? Sure. It could be one simple course with one training module. | https://medium.com/illumination/how-to-repurpose-a-single-piece-of-content-in-to-a-997-course-b9de5edc3fd4 | ['Jeff Herring'] | 2020-12-26 02:26:36.649000+00:00 | ['Content Marketing', 'Writing', 'Content Strategy', 'Writing Tips', 'Content Repurposing'] |
WOAH! You look just like this one MAJOR CELEBRITY. | Well, I’m an idiot. At least I was. Let’s just be clear on that before we begin.
Ok, so — roller disco, birthday party, Los Angeles, lots of people, lots of people I don’t know, slightly dim lighting, and me.
We’re skating, we’re skating. The 80's are blaring, people are loving life. If you’re not having a good time you’re not here. And I was there, making laps and collecting helium balloons whose strings hung from the ceiling, tying them all to my back belt-loop and making a big show of myself. And I wasn’t going to stop until I started flying. But then I saw a girl.
She zipped past me like a bird in the night, wings and all—an angel on skates with the most truthful hips I’ve ever yet seen. I dropped the excess balloons and took just one in my hand. Well, I shouldn’t say I dropped the excess balloons because I didn’t, but I tried. I had tied them very tightly to my back belt loop and I couldn’t get them off by myself.
So, off I went, grabbing a single green balloon on my way. I glided up next to her and offered the gift. She laughed.
We began talking and skating around the rink together. She told me her first name, but it’s a common one. We talked about each other’s 80's outfits and the cloud of balloons trailing me. We bantered about skating skills—she’s incredible. She skates backwards faster than I skate forwards, probably because she grew up in the 80's and I grew up in the 90's, but I didn’t know that at the time—you see I thought she was 22 or so. I was 23. Roller skating is, after all, for youthful spirits.
“So, where are you from?” I ask.
“Toronto,” she replies.
“When did you move here?”
“Oh, no, I still live in Toronto. I’m just here visiting my sister and she took me to this birthday party.”
“Oh. Oh then. That’s cool. I don’t really know anyone here either, my friends Austin and Ryan brought me. I don’t even really know the birthday girl.”
“Neither do I,” she said. We laughed. “What do you do?” She asks.
“Well,” I say, and tell her what I hope are a few mildly impressive things, thinking a 22-year-old from Toronto visiting her sister in Hollywood might be entertained by my completely trivial attempts in the film industy.
And we’re having a nice time, we really are. It’s simple, she’s just a cute girl with extraordinary skating skills, cruising through the night as the outfits and the music whisper wonderful lies about what decade we live in, and I’m confused but I know for certain we’re living in the USA.
I grab her hand. It’s not such a big thing on a roller-rink, but I like it, and I have the impression that she may be enjoying it as well. Who knows. “So what do you do?” I ask.
“I’m an actress,” she says, like it’s nothing at all.
“Oh, awesome,” I say, and in my head I’m putting 2 and 2 together, but in exactly the wrong way, thinking, ‘A young actress who lives in Toronto but hasn’t moved to L.A. yet? She must not be that serious, just starting out.’ And gently I ask, “How’s it going so far?”
“It’s going really well actually,” she smiles.
“Nice,” I say, thinking, ‘Nice, she’s an optimist,’ then ask, “Think you’ll ever move to LA?”
“I like Toronto.”
“Cool.” I said this earnestly, and it meant, ‘you really don’t care about acting all that much, but that’s totally ok, I think you’re swell and I like you.’
We roll onward. Hypnotic jams populate the bubbling silence between us, and overwhelmingly I know—we are back in the USA.
And in the glorious flash of a disco-light or the glowing reflection of someone’s neon jumpsuit, I look over at her and for a moment catch a glimpse of reality.
“Woah,” I say, “you look just like this one really famous actress, um, shit—totally blanking on her name. Ah, I know this. She’s in…she’s in The Notebook, um…Sherlocke Holmes, tons of stuff. Oh, Wedding Crashers! Literally one of my favorite movies, can’t think of her name though. Can you?”
“Oh,” she coos, “I know who you’re talking about too, but man, I can’t think of her name either.”
“Well, you look just like her. Do you get that a lot?”
“All the time,” she confirms. We chuckle, for different reasons, and keep gliding. We’re five or ten minutes in. The small-talk stage is all gone; conversation flows naturally. That’s when I make a huge mistake.
HUGE.
I don’t remember why the conversation moved this way or why it slipped from my mouth so naturally, but it did: “Oh, so then how old are you?”
Fucking huge.
“34,” she replies naturally.
“What!?” I’m shocked. She looks so young. She’s funny, she’s very charming—she must be pulling my leg. “No way, you’re like…22.”
“Ok ok…” she relents, “I’m 23.”
“Hey, so am I.” Yet another commonality, I think. This is fantastic. I even spotted her lie/joke/sarcasm thing about being 34. Everything is going great. I really like this girl so far. I’m gonna get her digits for sure.
It couldn’t have been longer than 30 seconds after revealing my 23-hood, an age forever toddlerized by Blink 182, that she skated swiftly and unapologetically away, melding into the blurry lights and the music and the crowd and the excitement of a decade beyond my backward reach. And she’s still just crushing it, I mean owning the rink, as graceful on 8 small wheels as a ballerina in a bubble-bath. Thought about it for a while — that’s the best metaphor.
The moment she left I realized, ‘Ah…she’s really 34.’ And so I mourned the let-down, but for only a moment, because I’m thinking, ‘In all honesty, I’m 23, I should be going for girls in my own decade.’ All good. And all might have been lost too. I might have never known I even met her, but for an hour later.
We’re cutting the cake outside the rink. Everyone circles around a table and there she is again in that glittering silver-scaled skirt. She lets down her bright red hair and it’s curling and bouncing all over the room. The resemblance becomes even more striking.
I turn to my friend, “See that girl? What’s the name of the actress that she looks just like?”
“You mean Rachel McAdams?”
“Ah, finally. Yes—that girl looks exactly like her.”
“Sam. That is Rachel McAdams.”
“Fuck!”
I believe I had the grounds to exclaim such a thing, but I soon realized what to do. After we all sang happy birthday I approached, pretending I still didn’t know who she was. “Hey,” I said, “I think I figured out who you look like. I mean, I remembered the name. You look just like Rachel McAdams.” Her face turned as red as her hair and she started cracking up. “But I think you kind of, pretty much are…Rachel McAdams.” We died of laughter.
Let’s keep in mind the overall image here—I still have about 20 helium balloons tied to my back belt loop, floating up above and behind me like a colorful cloud or a big sneaky clown. I am ridiculous in every way, so absurd I’m not even embarrassed in this moment at all. It’s too far gone.
She’s very sweet. I tell her I can’t get these balloons off of me—she begins to untie the balloons. It’s difficult. It’s just not working. Why did I tie them so tight? How did I even manage to tie them so tight? Why did I tie them on at all? What’s going on? Where am I? Where are scissors?
This takes an awkward amount of time. She has to enlist help in the effort. My friend comes over, and with more time and elbow grease, the balloons come off.
I didn’t talk to her again until the party was wrapping up. Everyone’s saying goodbye, and I’m getting ready to make my last ditch effort. I have to, I tell myself. I know full well that I’m no longer cloaked in the alluring ambiguity of a dim disco dance floor, or empowered by the loop of pretty lights drawing us round and round against the linear grain of time. I know full well that I am a childish 23–year-old hustler and she is an ageless movie star. I know full well it won’t work, but I just have to, for me, for everyone.
We hug goodbye. “You know,” I say, “I actually wish I had never found out you were Rachel McAdams, and that you had never found out I was 23.”
“Why?” she wonders.
“Because then I might have asked you for your number.”
“Well, I might have given it to you,” she smirks, “but it’s too late now. We already know. It’s ruined.”
That was it. I haven’t seen her since. In a moment she swiftly moved into the realm of memories and dreams. Have I remained in the McAdams dream zone as well? The answer is no. No, I haven’t. I can only hope. I’ve tried and tried, but no matter what I do, she remains 11 years offshore, on that boat up there.
But in a dimly lit skating rink, in a dreamlike place where ages are not real and the only thing that matters is roller boogie, she held my hand for a minute, and that is the only thing that really matters. That —that is love on wheels. | https://medium.com/this-happened-to-me/woah-you-look-just-like-this-one-major-celebrity-8f5c9a905517 | ['Sam Hayes'] | 2016-05-03 00:17:06.140000+00:00 | ['Writing', 'Short Story', 'Media'] |
Blockchain in Financial Services — Hype or Reality? | Blockchain in Financial Services — Hype or Reality?
I recently attended the MarketTech conference in NYC, organized by the TABB Group. In the spirit of full disclosure, I was thinking blockchain is still in the science experiment phase, especially in an industry that’s so heavily regulated.
To my surprise, there are some really down-to-earth projects going on. For example, Credit Suisse is running a syndicated loan project meant to remove manual intervention from the process. Participants in the project include R3 consortium members BBVA, Danske Bank, Royal Bank of Scotland, Scotiabank, Société Générale, State Street, US Bank and Wells Fargo. Buy-side firms AllianceBernstein (AB), Eaton Vance Management, KKR and Oak Hill Advisors are also involved in the initiative.
“This project demonstrates the potential for blockchain technology to fundamentally reshape the syndicated loan market and the capital markets more broadly,” says Emmanuel Aidoo, head of the distributed ledger and blockchain effort at Credit Suisse. “This demonstration sets us on a path to increase efficiency and reduce costs, which will benefit banks and clients alike. By connecting a network of agent banks through blockchain, we can achieve faster and more certain settlements in the loan market.”
Why are Financial Services firms interested in Blockchain? First they see the potential for cost reduction in processes like trade settlement and reconciliation. 70% of the costs are due to the complexity of internal processes. Autonomous Research says blockchain could cut settlement costs by a third, or $16 billion a year, and cut capital requirements by $120 billion.
Second, they see revenue generation in the near future.
What are some of the challenges? Security, scalability and privacy. Most blockchains that banks are working on are private or pre-approved. Intel’s Distributed Ledger Technology group is working on improving security, scalability and privacy through the use of Intel hardware features such as Software Guard Extensions (SGX).
Just a quick search for news uncovered a host of projects announced in the last 3–4 months:
Nasdaq partners with Chain to bring blockchain to private market
Allianz bets on blockchain for catastrophe bond trading
Firms led by JPMorgan test blockchain-powered equity swaps post-trade
R3 Consortium CEO David Rutter wrote in a recent post on TABBForum:
“By enabling the industry to move from duplicated and inconsistent isolated systems of record held at each firm and to cloud-based systems with shared data, business logic and processing, blockchain-inspired technology will facilitate mutualized and consistent middle- and back-office systems that assure that one firm’s view is identical to its counterparts’ view”.
Since I work for Cloudera, I’m trying to figure out how big data fits into the blockchain revolution, so I asked one of our experts to be a guest blogger.
Joao Salcedo has been supporting, designing and implementing big data solutions with Hadoop since 2010 and with Cloudera for the last 3 years as a Systems Engineer. Joao is an early adopter of blockchain technology using it with cryptocurrencies and developing applications on top of it.
Joao’s blog:
Blockchain is a disrupting technology which allows to decentralize and distribute transactions on a public and encrypted ledger. Blockchain is essentially an organizational structure that allows transactions to be verified and recorded upon the agreement of all impacted parties. As big data allows the predictive modeling of more and more processes of reality, blockchain technology could help turn prediction into action. Blockchain technology could be joined with big data, layered onto the reactive-to-predictive transformation, which means analysis can help developers quantify the risks and benefits of modifications to the blockchain protocol, as well as monitor for troublesome behavior.
Financial institutions are willing to build their own private blockchains or are investigating the unspecified blockchain solutions:
• Three large banks in the Netherlands — ABN Amro, ING and Rabobank — investigate the use of blockchain for payment systems.
• Citigroup has built three private blockchains and an internal currency with a prime focus on
payments and eliminating counter party risks when dealing with smaller local banks. Additionally,
Citigroup has partnered with Safaricom, a mobile operator in Kenya, to enable transfer
services to the unbanked.
• Santander, one of the largest banks in the world, has identified 20 to 25 possible applications of blockchain technology in banking, including international remittance, syndicated lending and collateral management.
• Similarly, Deutsche Bank has stated that distributed ledgers and particularly blockchains have possible applications in both fiat currency and securities management, creating transparency and facilitating Know Your Customer / Anti-Money Laundering surveillance.
• Monetary Authority of Singapore (MAS) has named blockchains as one of the big trends in technologies affecting financial services, citing lower cost of operation, faster processing and failure resilience as their main benefits compared to the traditional approach.
Big data’s predictive analysis could dovetail perfectly with the automatic execution of smart contracts and analyze every transaction. We could accomplish this specifically by adding blockchain technology as the embedded economic payments layer and the tool for the administration of quanta, implemented through automated smart contracts, distributed applications (DAPPS), decentralized autonomous organizations (DAOs), and decentralized autonomous corporations (DACs).
The automated operation of huge classes of tasks could off-load humans because the tasks would instead be handled by a universal, decentralized, globally distributed computing system.
We thought big data was big, but the potential quantization, tracking and administration of all classes of activity and reality via blockchain technology at both lower and higher resolutions hints at the next orders-of-magnitude progression up from the current big-data era that is itself still developing.
A basic diagram of how big data can export blockchain data for further processing and analysis is following:
As you can see from Joao’s blog, big data and Hadoop specifically, act as a complementary processing and analysis engine to blockchain.
With this blog, I’m starting a series on blockchain, with a number of guest bloggers. Stay tuned and we look forward to your comments. | https://medium.com/cloudera-inc/blockchain-in-financial-services-hype-or-reality-ce58c436959a | ['Mihaela Risca'] | 2016-11-29 21:39:10.547000+00:00 | ['Financial Services', 'Cloudera', 'Big Data', 'Hadoop', 'Bitcoin'] |
7 Strategies to Avoid Retraumatization While Working with Psychosis | By Tim Dreby
Stories related to psychosis can be intense and can lead to traumatic recall when a sufferer retells them and does not feel contained or believed within the relationship. Perhaps this is the reason many therapists, family members, and psychiatric wards learn to shut down the telling of the story.
Shutting down stories can be seen as protecting the psychosis survivor from unnecessarily reliving the experience and going through the distress again. Perhaps this is done to avoid a fight or yet another power struggle over reality. Activating trauma that you cannot stand to consider is a bad idea, right?
Imagine being a person who has experienced psychosis and having the entire mental health system agree not to let you tell your story as a boundary. This strategy is employed over and over again despite the fact that recipients of this kind of care often become progressively more isolated and distressed over time.
Perhaps no one in the system can imagine what it is like to experience systemic indifference to traumatic material. Indeed, is it really so impossible to believe that these experiences are real and there for a purpose? Is it really so hard to believe that the person in psychosis may have some perceptions that are spot-on accurate? Not acknowledging them can be cause for further social withdrawal, instill a sense of hopelessness, and do further damage to an already ailing self-esteem.
Trying to stay on the same page as everybody else may teach a person to suppress their experiences. While symptom suppression may decrease social attacks and ridicule, I also believe it is the wrong tack for many. Too many people suppress, isolate and withdraw from social functioning. Is it not possible to create spaces and relationships in which experiences of psychosis can be dealt with in mindful manners? If survivors can be believed by supporters, if their experiences can be credited with having profound meaning, then perhaps outcomes could be better.
A New Strategy with Survivor-Led Groups
I have come to strongly believe that shutting down stories related to psychosis is the wrong thing to do. I believe this so strongly that I have come out as a therapist with lived experience with madness. I regularly share my experiences in group therapy to facilitate group reflection and the telling of stories.
I credit the Hearing Voices Network for prompting me to take this plunge. Word of survivor-led groups achieving remarkably different results prompted me to start a curriculum for professional groups. In the curriculum, which I have turned into a training and a group therapy guide, I deconstruct what psychosis is into solvable components.
It’s true that there are times when I wonder if coming out mad was the best career decision. I have had to bravely admit my vulnerabilities, which sometimes seems to hurt my credibility. And yet I find that being an artfully unreliable narrator helps guide people to their own truth more effectively. I feel I get better results having taken the plunge.
Being out has helped me exponentially in creating specialized care for psychosis survivors. As a result, I have a number of suggestions for how to encourage the telling of stories without retraumatizing survivors in group settings and in individual encounters. Many of these suggestions are based on replicating realities that happen in survivor-led groups.
1. Eradicating Stigma and Grounding Participants
Many supporters actually believe that people who experience psychosis are fragile. It is one of the three most dominant stigmas about mental health challenges, according to Patrick Corrigan’s research. As a professional, I have heard this said so many times and I am convinced that my colleagues say this because they don’t know what “psychosis” feels like. At times, simply reversing this stigma can help ground someone who is in psychosis and remind them about how tough they are to be handling such real trauma.
There are other grounding techniques that I have utilized when I sense the group is starting to feel traumatized. Often, acknowledging the trauma in the room and allowing the groups to socialize and focus on related movies, music, or art can help. If group members initiate this process, it is good to compliment and acknowledge what they are doing as being helpful. Instead of controlling the group and staying on course, collaborating and enhancing these efforts is advisable.
2. Believing that Psychosis is Happening for a Reason and Holds Truths
I already said this, but it stands to be further emphasized.
I believe that if classifying experiences that trigger psychosis as an ‘illness’ can retraumatize many, finding value in those experiences will help ground many psychosis survivors who are in distress. In other words, when the helper meets the content of the survivor’s experience with curiosity and interest, the psychosis survivor is less likely to be traumatized. In contrast, if the supporter exudes the belief that the psychosis survivor will be traumatized, this outcome will be more likely to come true.
Often the survivor leader is excited to learn that others relate to them, and has a high level of hope that others can achieve wellness in spite of disturbing material. Thus, getting naturally excited when a person is sharing details and having strong beliefs about recovery being possible helps deepen the threshold for what others can bear.
Additionally, studying different causation frameworks that psychosis survivors hold gives participants a basis for understanding how experiences that trigger psychosis are possible.
In therapy groups I have often suggested there are six styles of causation frameworks that operate in different ways at different times. Sometimes the experiences may be caused by or related to political, psychological, traumatic, scientific, spiritual or artistic factors.
Knowing which framework explains a given trigger is often impossible! However, I believe that the more types of frameworks the psychosis survivor uses to explain the triggers, the more likely that they will be able to navigate the trigger in a functional manner. Positive knowledge about all explanations helps one find the value of each experience.
The more explanations the supporter learns, the better they can help make valuable meaning of these disturbing experiences. Giving up and calling the experiences meaningless does not help.
When there is a purpose for suffering, it is far more helpful.
3. Sharing Your Own Experiences with Psychosis
One of the huge benefits of survivor-led groups is that the leader also shares their own experience with psychosis. This opens people up to telling their story because it defies the dysfunctional boundary that exists between clinicians and patients: the presumption that the clinician is 'well' and the patient needs to learn wellness from them because they know better.
Additionally, when a survivor leads the group and discloses their own experience it sets the stage for more sharing.
One reason I believe this works is that if group members are free to judge the leader as being delusional, they get the chance to do some projective identification testing. If they do judge the leader as being delusional and see that it doesn’t bother the leader, they will become more emboldened to take the same risks and withstand others who may try to reality-check them.
Another reason self-disclosure in survivor-led groups works is because many in the group will believe the leader’s story and support them, as that is the way they want to be treated if they tell of their own experiences. Therefore, a leader who is prepared to believe some pretty outrageous stuff in a reciprocal manner is generally appreciated by many in the group.
Whatever place the group participant may be in, the tendency is to become compelled to share. I believe that sharing breaks down defenses and helps the participant let go of the traumatically reinforced material.
4. Spotting and Sharing Related Experiences to Achieve Cultural Competence
Many workers in the mental health system might say they can’t share their experiences with psychosis because they haven’t had them. Though I agree that it can be harder to relate to psychosis material if you haven’t had those experiences of being in a crisis, I think most workers likely have had some related experiences; if they learned to identify these and articulate them it would be helpful for psychosis survivors.
If a mental health worker sits in group and understands the experiences that trigger psychosis, they will probably learn to be able to relate. Additionally, being able to relate normalizes psychosis experiences and makes it safer to disclose without feeling like others don’t believe you and don’t care. In the definition of psychosis that I have created, things like dreams, interpersonal interactions, and intuitions can trigger alternative realities. I think workers can learn to relate using those common experiences and learn to join the conversation.
I think this is a measure of cultural competence. If you can see serendipitous events and imagine thoughts that may come up from them, why not share those with the psychosis survivor? Why not think about how you might explain those experiences in creative manners? Doing so isn’t going to hurt you. It is a sign of wellness and empathy.
5. Knowing When the Story Is Really There to Test You
It is important to know when a psychosis survivor is simply trying to establish her or his right to tell the story. In the past, survivors may have been interrupted or challenged when they tried to tell their story. Some will tell fragmented stories to see if they can get away with it and keep your interest and concern. I have been known to get in there and fish for special message experiences to demonstrate that I am there with them. However, it can be important to notice when this isn’t wanted and just let the person tell their story without being judged for doing so.
In many cases, the traumatic response may happen when the test has failed yet again. Indeed, I think it is important not to be concerned about whether the psychosis survivor’s comments are accurate or fit into your reality. Perhaps it is possible for the leader to make a few inaccurate-sounding comments themselves. This helps normalize and permit those experiences and paradoxically challenges the psychosis survivor to question themselves.
This is not to say that there is not a time to challenge an inaccurate comment that is made about you; there is a point where this can be effective. But first you have to repeatedly pass the tests. And acknowledging that you don’t understand everything about yourself and that they may be seeing something you are not aware of can help put off the challenge until the test is passed.
6. Bringing Other People or Situations Into the Discussion
If I am afraid that a person is going to get triggered by sharing their psychosis story because the group is inattentive or emotionally absent, I may try interrupting and identifying a triggering experience the participant has referenced and ask other group members if they can relate to the experience. If I am not in group, I may think of a similar experience I have heard before and share that experience to prove that the person is not alone. Usually, at least, I can relate to the triggering experience and share a story. This not only prevents the participant from feeling quite so alienated, it reminds them that others can relate and deepens the support in the room.
Likewise, if I am able to listen and discern some conspiracy ideas that might explain some of the triggering experiences and I fear retraumatization, I may propose that the group talk about that particular brand of conspiracy and how it really is possible. Again, this may help the participant feel like they are not alone. Group conspiracy talk is another way to deepen the threshold of what the group can tolerate and invite stories.
With other people relating and participating, the person telling the story is less likely to be retraumatized and may feel more supported. Then, it is a great idea to return to the story and hear it out intensely without having need for reality tests.
7. Addressing the Fact That You May Be Recording What Is Said
In many countries where the Hearing Voices Network has flourished, such as England, the Netherlands, and New Zealand, socialized medicine enables support groups to be funded outside the system where there is no need for clinical notes. This also helps create a sense of safety that invites disclosure.
Indeed, if group records are going to be taken by the facilitator for reimbursement purposes, that needs to be addressed in the room, identifying the potential for conspiracy.
Letting the participants know what I believe about the notes and the potential for them to be used in an abusive manner without my knowledge is a strategy I often employ. I point to computer screens and light fixtures and suggest that if they can put cameras down people's colons, they can certainly bug the room without my ability to protect the group participants. I believe it is a disservice to promise a psychosis survivor that their material is safe. We are not in control of their ideas of reference that may be confirming unsafe realities. At least when the helper acknowledges the limits of their power it validates the concern.
When I document what takes place in a group, I also note that I have used my own lived experience to crack open stories. I tell participants that I do that. I think doing so demonstrates integrity and clarifies that the note is not written with the intent to do them harm. I also think doing so reduces stigma of the chart reviewers and takes away the perception that the helper will turn on the group participant and abuse power.
It is ideal when these issues can be avoided, but I also think it is possible to address them if you have to take notes in order to bill in the health care system.
Specialized Care Is Necessary
I believe that utilizing these strategies and other well-documented efforts of the hearing voices movement can help clinicians grow and come to a point where they can listen to stories of psychosis and contain them just like survivors can. I think that people who choose to specialize in this type of care need opportunities to grow and learn to contain such stories, and that survivors need opportunities to become specialists and lead groups themselves. Specialized care is most certainly needed. | https://medium.com/mad-in-america/7-strategies-to-avoid-retraumatization-while-working-with-psychosis-244a96112fef | ['Mad In America'] | 2019-12-29 18:06:21.137000+00:00 | ['Peer Support', 'Trauma', 'Mental Health', 'Psychosis'] |
The Song of the Written Word: What is Melodic Writing? | Experienced writers use a subtle art of creating subliminal harmony in their writing. You have felt it, even if you’ve missed it. And you can do it too.
First of all: hail to all editors. Editing is an art I have undervalued for a long time and which I never really had a lot of luck with. I always edited and proofread my own work even though I knew I had 'a fool for a client'. I'm not good at editing. I'm not good at proofreading. But I did not have any good options. Editing is not given much importance in the Portuguese publishing world, as far as I can tell. Only now do we see people actually asking for that kind of professional care. At least that's my experience. So it took me all these years to finally have some experts editing my work. And the first couple of experiences weren't that good. But now they are: I have a couple of editors working on my texts, both in Portuguese and English. And I am particularly happy with the editing of my new novel LAURA AND THE SHADOW KING.
But there are a few things I do that editors are not particularly keen to accept or understand — so I had a few struggles about this with the ones I work with. Let me speak to you today about what I call Melodic Writing. It’s something I picked up long ago from Virginia Woolf and then Jack Kerouac, and which I noticed many brilliant writers practice, even if maybe they are not aware of.
Virginia Woolf is one of my favorite authors. Even though I could never get into some of her work, such as ORLANDO, other pieces still blow my mind. TO THE LIGHTHOUSE is one of my favorite novels. But the first book of hers I read, back in my twenties, was THE WAVES. And I was completely struck by the opening of the book. Let me transcribe the first two paragraphs:
The sun had not yet risen. The sea was indistinguishable from the sky, except that the sea was slightly creased as if a cloth had wrinkles in it. Gradually as the sky whitened a dark line lay on the horizon dividing the sea from the sky and the grey cloth became barred with thick strokes moving, one after another, beneath the surface, following each other, pursuing each other, perpetually.
As they neared the shore each bar rose, heaped itself, broke and swept a thin veil of white water across the sand. The wave paused, and then drew out again.
This is a beautiful piece, by any standard. But there is one thing that absolutely blows my mind each time I read it and completely changed the way I write: even though you can develop an image in your head of the waves reaching the shore… you can also feel them. Please read the opening again, but now notice one thing: the sentences feel as waves themselves. They have a cadence so well structured that it is as if the sentences themselves are waves. Look particularly at the last wave:
As they neared the shore each bar rose, (the wave swells)
heaped itself, (it peaks)
broke and swept a thin veil of white water across the sand. (it stretches across the sand)
The wave paused, (the wave stops, as stretched as it can be)
and then drew out again. (and then goes back.)
This is brilliant writing. Every comma, every stop, every word is there for the specific rhythm you need. There’s no rule to that. If you edit this text with the idea that commas have a specific function and should be put here or there because of some rule, you might just ruin the whole thing. It would be easy to take out the last one, for instance, and just have: «The wave paused and drew out again.» But that is not what happened: «The wave paused — comma — and drew out again.» And suddenly the sentence pauses as well and mimics the action of the wave.
I don’t know if anyone ever studied this in Literature classes. I never did. But as far as Creative Writing is concerned, this melodic writing is extremely powerful. Subtle intelligent writing lives on the subtext, on what’s implicit, but Melodic Writing takes this to a whole new level. It works on your unconscious. It taps into your emotions at another level. And it’s not just Woolf that does it. I recognized it later in Jack Kerouac, for instance, in LONESOME TRAVELER, as he describes in several pages how a railroad worker runs to catch a train. And in Hemingway. And in Whitman. Here’s a verse from SONG OF MYSELF:
The smoke of my own breath,
Echoes, ripples, buzz’d whispers, love-root, silk-thread, crotch and vine
I don’t know about you, but the second sentence seems to me as a cloud of smoke moving through the air out of one’s mouth.
I believe Melodic Writing is a thing. I don’t know if writers do it on purpose or by instinct, but they do it, and some in a really brilliant way. So I’m very careful with people that tell me that one sentence is unnecessary or a particular word is excessive. Rhythm takes precedent. Editors have a difficult time understanding me on this. I think I’m finally working with one that is starting to understand. Let me give you an example from my own work (far inferior to the other examples, but bear with me).
The soldiers were trying to convince the injured nurse to let herself be carried on a blanket and she was trying to convince them she could walk. Then, a gunshot sounded in the next room and everyone shut up.
In this text, the first thing that my editor tried to get rid of was the word ‘Then’. Unnecessary to editors, of course. It just seems not to convey any information. The impact, she said, is conveyed by the dry sentence: A gunshot sounded in the next room and everyone shut up. But the thing is: ‘Then’ mimics the shot. It is there for a reason. I could just have said: Bang, a gunshot sounded in the next room. It just manipulates the rhythm and the attention span of the reader towards a specific event.
This may not seem a lot to you, but in the overall writing, it has an impact. And as it gets more and more refined, it has a higher and higher impact, just as the opening of THE WAVES had on me. I hope one day I’ll be able to write that well. Kudos to Woolf. I’ll keep training and applying and admiring from afar. | https://medium.com/swlh/the-song-of-the-written-word-what-is-melodic-writing-d7712aa6f800 | ['Bruno Martins Soares'] | 2020-03-05 03:10:05.699000+00:00 | ['Writing Life', 'Writing', 'Melodic Writing', 'Writing Tips', 'Creative Writing'] |
“Most People Say They Are Concerned With The Volatility Of Cryptocurrency, I’m Not Worried About It” With Sarah Austin | I had the pleasure to interview Sarah Austin. Sarah is a marketing and communications consultant who helps companies like Intel, Virgin America, Ford Motor Company and SAP.
Thank you so much for doing this with us! What is your “backstory”?
As a former video journalist at Forbes.com, early technology influencer, and television presenter, I’ve learned how to process hundreds of thousands of messages across blogs and social media to extract meaning. I spent 5 years codifying a decade worth of experience in marketing and mass human interaction into communication software.
I started a company that uses an artificial emotional intelligence system as a human resources tool. The company is called Broad Listening, a psychographic modeling software that analyzes written text to help human resource offices strike the right tone when advertising positions and communicating with existing and potential employees. I sold the company a couple years ago to an investor, but the experience taught me the value in paying attention to fine details in written communication and the micro-decisions that go into word choice.
Today, I’m passionate about what makes someone successful at their job and how companies tailor internal messaging. When I enter a company, I bring my own automation tools with me that make me 10x more productive. The future of work is automated with machine learning. People who learn how to automate as much of their work as possible are the ones most likely to survive at an organization. Today, I’m a marketing and communications consultant helping companies like Intel, Virgin America, Ford Motor Company and SAP.
Can you tell me about the most interesting projects you are working on now?
I’m finishing my book on influencer marketing that will be released later this year. The book teaches anyone how to become an influencer through the use of social media and automation. I want to share my tips and tricks on successful social media marketing and personal branding so anyone can become an influencer marketing professional from the comfort of their home.
For example, I am an influencer for SAP Leonardo, the digital innovation software that delivers blockchain and artificial intelligence technologies to enterprise businesses. I get to attend conferences with access to executives and news announcements. Attending these conferences gives me an inside look at technology and keeps me on the cutting edge. Today, I’m in Orlando for SAPPHIRE NOW, a conference with 30,000 attendees of SAP customers and partners, where I’m interviewing blockchain energy executives and SAP customers as part of a live broadcast from the show floor about protecting the environment. It’s more than just content marketing, it’s authentic audience engagement around how blockchain technology can drive purpose and profit. This is just one example of how being an influencer can open doors to build meaningful relationships and drive business.
None of us are able to achieve success without some help along the way. Is there a particular person whom you are grateful towards who helped get you to where you are? Can you share a story about that?
When I was in high school and applying for college, I was blessed to have Apple co-founder, Steve Wozniak, as my mentor — he even wrote my college letter of recommendation. I was fascinated with computer programming from an early age and as soon as I was old enough to drive I started attending hackathons and coding meetups. At one such event, called Super Happy Dev House, I met Steve Wozniak who was there coaching and encouraging young developers. He’s very supportive of young people and likes to give back to his community.
What are the things that most excite you about blockchain and crypto? Why?
I’m excited about how blockchain technology supports sustainability.
What are the 5 things worry you about blockchain and crypto? Why?
Most people say they are concerned with the volatility of cryptocurrency. I’m not worried about it. Bitcoin is still early and while it has its ups and downs that’s part of the appeal. Sell high, buy low, that’s how I look at it. Also, if you wanted to have a stable cryptocurrency then buy a stablecoin.
People complain that Bitcoin has no inherent value. I disagree. Bitcoin is a scarce commodity and deflationary. The US Dollar is inflationary. Though the US Dollar is strong, and remains above fair value, inflation puts the dollar at risk of decline. Investing in Bitcoin has a similar appeal to investing in gold, in my opinion.
Privacy is a huge concern for blockchain because it’s transparent. You don’t want your competition to know who you are doing business with or who you are paying. Unfortunately Bitcoin is not private money. Many have turned to Monero, a digital coin that promises a higher degree of anonymity and untraceability baked into its design. I don’t own any Monero, but I can certainly see the appeal.
How have you used your success to bring goodness to the world? Can you share a story?
Nonprofit organizations are becoming the management leaders of America. Every other person in America volunteers. On average, a volunteer gives five hours a week to one or more nonprofit organizations. Volunteers are becoming unpaid staff and even taking over management tasks and serving on the board. The reason people spend 5 hours a week as unpaid staff is that they are not getting purpose or value from their paid jobs.
What happens is employees spend less time contributing to their company and more time volunteering because they aren’t getting the corporate social responsibility opportunities or sharing in the value of purpose from their company.
I started a nonprofit organization, Coding FTW, to solve the problems faced in the technology sector around girls education. Teaching girls to code and simple work hacks shows them how to automate work. They can apply these work hacks immediately to their school work and eventually to their jobs. The next generation entering the workforce will come equipped with their own automation tools, and teaching girls these skills will help to bring more diversity to the workforce.
This year at SAPPHIRE NOW, SAP CEO Bill McDermott announced the company mission of purpose to give customers something meaningful they can connect with. To lead the competition and engage employees, businesses need to integrate their purpose story authentically. It's not enough to promote software products. SAP engages customers with a deeply human connection rather than the quality of the product or service alone. SAP's purpose is to improve people's lives with technology.
What 3 things would you advise to someone who wanted to emulate your career? Can you share an example for each idea?
Enterprise products are difficult to build and even more difficult to scale and manage. If you are working in consumer product marketing and you want to make the switch to enterprise there are a few things you should know.
In consumer tech you build products for the end user. In the enterprise world, you serve two people: the buyers and the end-users. As a product marketer, you need to understand both the buyer needs and the user needs. Your product experience needs to solve both buyer problems and user problems while also providing a great experience to the end-users.
Often in enterprise technology, product requirements are bespoke to the needs of the clients. Some requirements are not launch critical and can be prioritized accordingly. A good product marketer understands how to prioritize a long laundry list of requirements while also keeping the client updated with the value of maintaining the product roadmap.
The client’s timeline is equally as important as your own release timeline. In enterprise technology, timelines are impacted by factors outside the products organization. When planning your release cycle be sure to set dates that are realistic to the business world. For example, while setting a product launch date close to the Christmas holiday may sound like a great idea for a consumer technology, most enterprises will not agree to a launch before a holiday break.
If someone wanted to emulate my career they would likely be working in consumer marketing and would want to make the switch to enterprise.
Some of the biggest names in Business, VC funding, Sports, and Entertainment read this column. Is there a person in the world, or in the US whom you would love to have a private breakfast or lunch with, and why? He or she might just see this :-)
I would love to sit down with Google CEO Sundar Pichai who said AI is “one of the most important things that humanity is working on. It’s more profound than, I don’t know, electricity or fire,” adding that people learned to harness fire for the benefits of humanity, but also needed to overcome its downsides. Pichai emphasises that AI could be used to help solve climate change issues, or to cure cancer.
I’m an advocate of raising awareness to both the dangers and benefits of artificial intelligence and automation. I am tirelessly working to educate the private and public sectors on the need to regulate and integrate artificial intelligence. As the workforce becomes automated, people are worried about how technology impacts job security. While parts of jobs will be lost, new jobs will be created and will expand the economy despite the change in work. | https://medium.com/authority-magazine/most-people-say-they-are-concerned-with-the-volatility-of-cryptocurrency-im-not-worried-about-a6cb4dda7187 | ['Authority Magazine'] | 2018-07-08 04:07:23.682000+00:00 | ['Artificial Intelligence', 'Blockchain', 'Founders', 'Cryptocurrency', 'Bitcoin'] |
What I Think When I Go to Bed Late | Later on, having learned the internet and its power, I’d spend my nights over PC to explore the e-world. Of course, by pretending beforehand that I fall asleep, so my parents feel relaxed and safe over my future.
There were no Lo-Fi playlists back then, well, there was no even YouTube, but the air was spiritual even without those chill beats. There was also no social network, no push notifications, nothing else besides websites and talk apps like MSN.
I don’t want to make it sound like paying tribute to the past that in stories are always associated with “earlier was better”, but what was missing in my childhood made my life better than what I’ve obtained when I grew up.
Sometime later, I found myself sitting at a school desk and realizing that my life split into two parts — at one, I pursued the best skills to keep doing classes to stay a good kid. At another one, I was browsing the internet late at night by writing blog posts and searching for breathing out my real passion.
Having been raised as an obedient child, I had a disturbing feeling that I was doing something wrong. I'd sit through school classes with glassy eyes as my brain churned out a delicious mix of completely lost and uncertain thoughts.
I didn't realize it back then, but now, at the age of 27, I do. That was the turning point, the moment I started doing the things I love: my hobbies, my interests, my passion.
That was a contract with a “real life” headline that I’ve signed. | https://medium.com/the-masterpiece/what-i-think-when-i-go-to-bed-late-e8a4f1348dc3 | ['Murad Zamanov'] | 2020-12-22 12:48:28.950000+00:00 | ['Self-awareness', 'The Masterpiece', 'Consumerism', 'Brainstorming', 'Life'] |
The 101 Best Podcasts for 2019. Discover the Best Podcasts in the World | Themes: Storytelling, Crime, Mystery
This podcast is a bit harder to stomach but is no less important. There are nearly 100 episodes, all focused on the stories of missing persons. This stuff is real, and we can help.
Start Here:
Mahfuza Rahman -
A quiet and kind young woman from New York went missing from her hospital job one day.
Themes: Storytelling, Sleep, Relax
Who says bedtime stories are just for kids? Got insomnia or just can’t sleep one night? Then pop Sleep With Me on the ol’ podcast app. This is the podcast you are supposed to fall asleep to.
Start Here:
I Am The Llama That Knocks -
Talking to sheep about Marx and Engels…. Did we mention that these episodes get super odd? 😂
Themes: Life, Health, Business
“Why live an ordinary life, when you can live an extraordinary one?” -Tony Robbins
Start Here:
Airbnb & The Art of Resilience: How a flexible strategy and strong perseverance built a $30B company
Themes: Stories, Crime
Criminal tells the stories of people who have done wrong, been wronged, or been stuck somewhere in the middle. Unlike their prison cells, these inmates’ stories aren’t black and white.
Start Here:
The Editor -
“In November of 1988, Robin Woods was sentenced to sixteen years in the notoriously harsh Maryland Correctional Institution. In prison, Robin found himself using a dictionary to work his way through a book for the first time in his life. It was a Mario Puzo novel. While many inmates become highly educated during their incarceration, Robin became such a voracious and careful reader he was able to locate a factual error in Merriam Webster’s Collegiate Encyclopedia. He wrote a letter to the encyclopedia’s editor, beginning an intricate friendship that changed the lives of both men.”
Themes: Sports, News, Comedy
From the folks at Barstool Sports (if you don’t follow them on Insta, then you’re missing out) comes this fun sports podcast.
Start Here:
Jump right into whatever is their most recent episode.
Themes: Tech, News
According to The Guardian, this is “‘a podcast about the internet’ that is actually an unfailingly original exploration of modern life and how to survive it.”
We agree.
Start Here:
Underdog -
How did a dog go from near death at a shelter to becoming ‘Marnie the Dog’, one of the most well-known dogs on the internet? In this episode, the Reply All team dissects the formula for achieving internet dog fame.
Themes: Entrepreneurship and Business
Hosted by 17-year-old Casey Adam, Rise of the Young showcases some of the most creative minds of this era and provides you with ideas for how you can transform your life and business.
Start Here:
Zack Wenger: How To Get One Million Views On Snapchat
Themes: Love, Society
On the Modern Love podcast, listeners share their personal stories of love, loss, and betrayal. It’s a soap opera for the ears.
Start Here:
Kept Together By The Bars Between Us -
“Cherry Jones, the Tony, Emmy, and Obie-Award winning actor, reads a story about a woman who falls in love with a convicted murderer.”
Themes: Business, Economics
This is a podcast by the Planet Money folks, and just from that descriptor, you already know it’s going to be good. :) The Indicator distills and then quickly delivers the business and economic stories that are important.
Start Here:
Craft Beer Hops the Shark -
Learn about the boom and bust of the craft beer industry.
Themes: Technology, History, Inventions
This is a fun mash-up between two great knowledge and news-focused brands. The first, TED, made world famous for their highly intelligent and mind-stretching TED Talks. The other, a national resource that’s been giving the news straight as an arrow for well over a century. Together, you’re getting a little magic. Let the wisdom seep in and make you a tiny bit smarter, one hour at a time.
Start Here:
Big Data Revolution -
Every detail of our lives can be tracked… Will this make life easier or more complicated?
Themes: Comedy, Life
Comedian Marc Maron sits down with his guests to discuss the hardest question of life: WTF?
Start Here:
Greta Gerwig -
Marc sits down with Greta Gerwig, writer and director of Lady Bird, to discuss life, work, and her story.
Themes: Life, Business and Investing, Radical Self-Discovery
Tim Ferriss is a self-proclaimed “human guinea-pig.” Through captivating storytelling, he shares his and others’ mind-boggling experiences in life, work, business, creation, and innovation. With nearly 300 episodes available, Tim has created a massive amount of valuable content, just waiting to satisfy your “earballs.”
Start here:
Lessons from Steve Jobs, Leonardo da Vinci, and Ben Franklin -
Two master learners talking about other master learners. Prepare to feel like a slacker… ;) In this episode, Tim interviews Walter Isaacson, president and CEO of The Aspen Institute and author of many biographies — his most recent being about Leonardo da Vinci. Walter shares little-known information about Leonardo’s personal life and explains his secrets to mastery. They also discuss some of history’s other famous innovators and the tools they used to learn so much so quickly.
The main takeaway from the episode: Don’t limit yourself through specialization. Embrace curiosity, seek the answers to many questions, and learn from everything that the world has to offer. Creativity happens at the intersection between different disciplines.
Themes: Discipline, Leadership, Life
Jocko Willink is a retired Navy SEAL, so who better to teach you about discipline, leadership, and self-respect?
Start Here:
With Tim Ferriss. Musashi, and How Warrior Way Relates to life -
I first heard of Jocko while he was guest hosting Tim Ferriss’s podcast, so I found it only fitting to first listen to Jocko’s podcast when he invited on Tim. Learn about Tim’s silent retreat and the life of The Samurai.
Themes: Life, Q&A
Cheryl Strayed and Steve Almond are ‘The Sugars’. Together they will take on any of your questions, no matter how deep. Learn valuable life lessons from the struggles of others, or send your own questions in and let The Sugars guide you to a solution.
Start Here:
Cutting The Financial Cord With Dr. Kate Gale -
When should you cut the financial cord and let your ‘adult child’ stand on their own? In this episode, The Sugars answer two letters; one from a woman whose parents refuse to let her be financially independent and another from a father who is ready to cut off his son.
Themes: Science, History, and The Future
The creators of this podcast may blow your mind…. Yup, it’s another podcast from the HowStuffWorks team! From the description:
“Deep in the back of your mind, you’ve always had the feeling that there’s something strange about reality. There is. Join Robert and Joe as they examine neurological quandaries, cosmic mysteries, evolutionary marvels and our transhuman future…”
Start Here:
Tree of Life -
The ‘sacred tree’ is a recurring symbol in religion and mythology. Learn what it means and why it’s so popular.
Themes: Health, Nutrition, Ketogenic Diet
This is the only keto podcast actually hosted by a doctor. Dr. Anthony Gustin is founder and CEO of PerfectKeto.com, and in this podcast, he provides you with everything you need to know about going keto.
Start Here:
Superhuman Performance and Living Longer with the Low Carb Ketogenic Diet with Ryan Lowery -
How does the keto diet affect longevity? Join Ryan Lowery, author of The Ketogenic Bible, and Dr. Gustin as they dive deep into the massive health benefits of the ketogenic diet.
Theme: Design and Architecture
There’s design in everything around us; we just don’t see 99% of it. This is a podcast about the thought that goes into the things we don’t think about.
Start Here:
Ways of Hearing -
Digital has not only changed the way we experience sound… it’s also changed how we perceive time itself.
Themes: Sports, Fitness, News
This is the most downloaded sports podcast of all time… and for good reason. Join Bill Simmons, from The Ringer and HBO, as he interviews a slew of celebs and athletes about sports.
Start Here:
Since they cover recent news, we recommend starting with the most recent episode.
Themes: Life, Storytelling
This is a very new podcast, but 100% worth being on your radar. Find out what happens when women break the unwritten rules.
Start Here:
How to Ride a Bicycle -
Cristen and Caroline investigate the complicated history of bicycles and catcalling.
Themes: Storytelling, Writing
What does it take to go from idea to published book? Launch follows John August as he tackles the complicated process of publishing his very first novel. If you’re ready to put your idea baby out into the world but aren’t sure what to expect, then this is the podcast for you.
Start Here:
A Boy in The Woods —
To really get the full experience of John’s struggles and solutions, we recommend you start from the beginning.
Themes: History, Business, Entrepreneurship
Hosted by renowned NPR journalist Guy Raz, How I Built This dives deep into the stories behind how some of the most well-known companies were built.
Start Here:
LinkedIn: Reid Hoffman -
LinkedIn is by far the most well-known job and career networking site. Ever wonder how this monster of a company was built?
Learn how Reid Hoffman’s vision for the future of the Internet turned him into one the wealthiest figures in Silicon Valley and LinkedIn into one of the most useful career tools around.
Themes: History, Personal Stories
Learn the history of WWII as told through the personal letters exchanged between the Eyde brothers.
Start Here:
An Introduction —
This is a very powerful story expressed through the love of brothers. You won’t want to miss a second of it. Start with the introduction.
Themes: Business, Tech, Future
How will future technology transform business and culture? In this podcast, Wall Street Journal reporters evaluate current affairs and try their hand at predicting… well… the future of everything.
Start Here:
Can Food Waste Save the World? -
Does the food we throw out every day have the potential to do more?
Themes: Business and the Brain
Nir Eyal is the best-selling author of Hooked: How to Build Habit Forming Products. (Sidenote: If you haven’t given his a book a read yet, we highly recommend it! :D) In Nir and Far, Nir discusses how design influences customer behavior and analyzes how specific companies build habits.
Start Here:
Infinite Scroll: The Web’s Slot Machine -
We used to “click and flick,” and now, we just scroll… and scroll… and scroll. Why did companies make the switch, and what is it about scrolling that keeps us hooked longer?
Themes: Life, Learning, Health, Fitness
From the iTunes description:
“Bulletproof Executive Radio was born out of a fifteen-year single-minded crusade to upgrade the human being using every available technology. It distills the knowledge of world-class MDs, biochemists, Olympic nutritionists, meditation experts, and more than $1M spent on personal self-experiments. From private brain EEG facilities hidden in a Canadian forest to remote monasteries in Tibet, from Silicon Valley to the Andes, high tech entrepreneur Dave Asprey used hacking techniques and tried everything himself, obsessively focused on discovering: What are the simplest things you can do to be better at everything?”
Start Here:
Vegetables, Coffee, & The Recipe To Make Alzheimer’s Optional: Dr. Steven Masley -
What better place to start with this podcast than an episode about defying disease naturally?
Themes: Society, Culture
Wesley Morris and Jenna Wortham are culture writers for The New York Times. In their podcast, Still Processing, they dissect the latest in pop culture — movies, art, music, TV — and give you their take on what it means to be kickin’ in 2019.
Start Here:
We Sink Our Claws Into “Black Panther” with Ta-Nehisi Coates -
Black Panther is more than just a box office smash. In this episode, Wesley and Jenna discuss the cultural significance of the new hit with Ta-Nehisi Coates.
Themes: Personal Stories, Storytelling
Following a similar pattern to Strangers (mentioned above), Neighbors unveils the lives of those that you pass by every day. You have so much more in common with other people than you may think — and this podcast proves just that.
Start Here:
The Genius Improviser -
Michael Kearney is the youngest person to have graduated from college… at just age ten. He is, by all definitions of the word, a genius. Yet, at age 32, he’s not off curing cancer or anything crazy. Instead, he runs an improv comedy club. Learn his story and why he’s made the choices he has.
Theme: Music
Where podcasting meets music. Discover new and diverse “trap” music with Andre Benz, host of Trap Nation.
Start Here:
Undiscovered Hits -
They were undiscovered… then Andre introduced you.
Themes: Fitness, Health
20 Minute Fitness presents you with the latest news in health and fitness. In just twenty minutes, learn all you need to know about nutrition, developing an exercise routine, and keeping your body in tip top shape.
Start Here:
How to be Mentally Tough Like an Olympian -
A major part of meeting your health and fitness goals is being able to keep a level head and focus on what it is you wish to accomplish. How do professional athletes — folks with a lot riding on their athletic success — develop this mental toughness?
Themes: Comedy, Storytelling, Fiction
Turn your lights out, tuck yourself in, and press play. Pretend you are citizen of the town of Night Vale and this is your local radio station…
Start Here:
Do as they recommend and start from whatever is the most current episode. You’ll catch on.
Themes: Culture, Entertainment
Channel 33 is a collection of shows put together by The Ringer. They discuss a range of topics, but tend to focus on entertainment and pop culture.
Start Here:
Why Stop-Motion Animation Takes Forever to Make, With Nick Park -
Stop-motion animation is one of the most time consuming storytelling methods. Learn how it’s made and why some filmmakers are still so committed to using it despite how tedious the work is.
Themes: Leadership, Business, and Customer Service
Lee Cockerell is the former Executive Vice President of Operations for Walt Disney World, so to say the least, he’s got a lot of valuable insights to share. Lucky for us, he started the Creating Disney Magic podcast to do just that. In this weekly podcast, Lee discusses lessons in leadership, management, and customer service that you can use to create ‘magic’ in your own organization.
Start Here:
Building a StoryBrand -
From the description:
“Nobody really cares about your story. Rather than hear your story, they want to be invited into a story where they can play the hero. Disney has excelled by inviting people into a story. But do you have to be Disney to create a story people want to be a part of? Of course not.”
Themes: Success, Life, Business
Most legends started as losers. Learn how awesome people like 4-Star General Stanley McChrystal, businesswoman Ann Muira-Ko, or angel investor Jason Calacanis dug themselves out of defeat to become well-known successes in their respective spaces.
Start Here:
Chris Voss: How To Negotiate With Terrorists, Businesspeople & Children -
Chris Voss is a former FBI negotiator and author of the book Never Split the Difference: Negotiating As If Your Life Depended On It. In this episode, he teaches us how to negotiate with anyone and always win.
Themes: Selling, Business, Marketing
Bill Caskey and Bryan Neal are B2B sales trainers with 20 years of experience in the biz. In The Advanced Selling Podcast, they share their tricks, tips, and strategies that can grow your personal skills, build long term relationships with clients, improve your sales team, and much more!
Start Here:
Does Your Story Really Compel Anybody? -
Bill and Bryan discuss a three part framework that can help you build or rebuild your company’s story so that it is more compelling.
Themes: Entertainment and Culture
In Binge Mode, The Ringer's Mallory Rubin and Jason Concepcion dive deep into the topics they are currently obsessed with. This is a weekly podcast that oftentimes introduces you to, or gives you a better understanding of, a range of topics.
Start Here:
Philip K. Dick’s Electric Dreams’ -
We love sci-fi, so of course we just had to mention this episode of Binge Mode. Join Mallory and Jason as they take on Philip K. Dick’s crazy cool work and the new TV show, Electric Dreams.
Themes: Culture and Impact
Akimbo is a symbol of strength and possibility. Very fittingly, it is also the name of Seth Godin’s new podcast. Learn how culture can be changed and how you can find the strength to make a difference.
Start Here:
This is a super new podcast, so why not just start with the first episode?
Themes: History, Scary Stories
Lore is a podcast about the scary stories that fuel our everyday superstitions. If you like history and a little thrill, then we recommend giving this one a listen.
Start Here:
A Grave Mistake -
The fear of death can drive people to do the unthinkable…
Themes: Storytelling, Thriller
This 'experimental-fiction' podcast follows the life of a caseworker as she attends to a mysterious military veteran. Quickly this story becomes a suspense-filled thriller that's truly worth the listen.
Start Here:
Since this follows a story format, start at the very beginning.
Themes: Tech, Cryptocurrency, Bitcoin
Epicenter is a weekly podcast that will keep you up to date on the latest news and developments in the world of cryptocurrency, bitcoin, and blockchain.
Start Here:
Andrew Trask: OpenMined — A Decentralised Artificial Intelligence Platform -
What do you get when you mix AI with blockchain with machine learning and then sprinkle in private user data? OpenMinded’s new platform.
Themes: Storytelling and history
Paul Harvey, host of The Rest of the Story, is a storytelling whiz. In each episode of this well-known radio show, he takes just a few minutes to share unbelievable, yet true, stories.
Start Here:
Actor on Death Row —
Find out which famous actor was recruited while on death row.
Our team at The Mission launched our new podcast, The Story. Season 1 celebrates Women’s History Month and features the unknown backstories of twelve women who changed the world. You can subscribe and get details about our launch giveaway here!
Any we missed?
Let us know in the comments below! | https://medium.com/the-mission/the-101-best-podcasts-for-2018-755162ee64e5 | [] | 2018-11-29 23:04:37.692000+00:00 | ['Learning', 'Life', 'Storytelling', 'Podcast', 'Entertainment'] |
The Fightback | The Fightback
The Battleground is the Fightback
Journalism is under attack. From vested interests, on the right as well as the left. In social media, as well as from traditional media. The digital revolution may not replace newspapers with Facebook. But it is challenging the essence of the news media and may very well up the profile of power-friendly tabloids.
The situation is compounded by years of declining circulation of old brands as a consequence of the Internet, and the increasing compromise of journalism through commercial pressures, click bait and influence exerted by media moguls on behalf of their political interests. This has allowed a rise of populism in politics to go largely unchallenged, and propaganda largely unchecked.
Europe is under attack. For over 60 years, European integration has reinforced respect for freedom, democracy, human rights, equality and the rule of law on the continent of Europe. But these values are not acquis, not in Europe, nor in the world. 100 years after the First World War, populism and nationalism are again threatening our way of life. More than ever, there is a need for quality journalism to scrutinise developments, shed light on events and speak truth to power.
What’s needed is a new kind of news media, one that shines a light behind the headlines, inspires trust, and promotes ethical journalism. Not as a niche practice, but as a common standard readers can identify with and turn to in times of change. A media, in short, for the public good, which can counter populist narratives and challenge prevailing orthodoxy.
ENTER THE BATTLEGROUND. A news service of big picture analysis and reflection on the stories of our times, TheBattleground.eu will arm readers with relevant background and critical insights to navigate the news flow and interpret events in our changing world. Tackling important topics at critical moments in the news cycle, TheBattleground.eu will publish the salient facts of a story, expose people or parties who are behind false narratives and explain their motivation.
Context is key. No story exists in a vacuum, but few are presented beyond their immediate, national news appeal. Focusing on European news through a global lens, TheBattleground.eu will highlight the geo-political context and connections which illuminate current events in ways today’s news media, with its diminishing word count and dependence on wires, has neither the depth nor space to provide.
Providing perspective — returning to the story again and again — The Battleground will bring continuity and focus, allowing audiences to deepen their understanding of news topics and debates, freed from depending on individual and isolated articles. Perspective, after all, is not just a long read.
The approach is one Team Battleground holds dear. Led by longtime news and current affairs veterans, with expertise covering Europe, as well as the US and the Middle East, TheBattleground.eu will deliver the cosmopolitan, illuminating stories absent from media coverage of the EU and Europe for far too long.
The Battleground may not break populism’s blockade of the press. But it will erode faith in lies, and set a new standard for what journalism ought to do in Europe, if not the world.
The Battleground, in short, is the fightback. | https://medium.com/thebattleground/https-medium-com-thebattleground-thefightback-42b8a9881c0f | ['The Battleground'] | 2019-01-28 11:02:39.778000+00:00 | ['Europe', 'Media', 'The Fightback', 'Media Criticism', 'Journalism'] |
Build A Movie Recommender Using C# and ML.NET Machine Learning | I will build a machine learning model that reads in each user ID, movie ID, and rating, and then predicts the ratings each user would give for every movie in the dataset.
So that gives me a list of movies and ratings for every user. To recommend a movie, all I need to do is sort the list by rating and report the top 5.
Let's get started. Here's how to set up a new console project in .NET Core:
$ dotnet new console -o Recommender
$ cd Recommender
Next, I need to install the ML.NET base package and the recommender extensions:
$ dotnet add package Microsoft.ML
$ dotnet add package Microsoft.ML.Recommender
Now I’m ready to add some classes. I’ll need one to hold a movie rating, and one to hold my model’s predictions.
I will modify the Program.cs file like this:
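A minimal sketch of those classes could look like the following; the exact field names and column indexes are assumptions based on the standard MovieLens CSV layout (userId, movieId, rating), and the using directives at the top also cover the later snippets:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Trainers;

// one record from the ratings CSV file
public class MovieRating
{
    [LoadColumn(0)] public float userId;
    [LoadColumn(1)] public float movieId;
    [LoadColumn(2)] public float Label;    // the rating the user gave
}

// one rating prediction produced by the trained model
public class MovieRatingPrediction
{
    public float Label;
    public float Score;    // the predicted rating
}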
The MovieRating class holds one single movie rating. Note how each field is adorned with a Column attribute that tell the CSV data loading code which column to import data from.
I’m also declaring an MovieRatingPrediction class which will hold a single movie rating prediction.
Now I’m going to load the training data in memory:
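Here is a sketch of that step; the training file name is an assumption, so substitute the path to your own copy of the ratings data:

// set up the ML.NET machine learning context
var mlContext = new MLContext();

// load the training data straight from the CSV file
var trainingData = mlContext.Data.LoadFromTextFile<MovieRating>(
    "recommendation-ratings-train.csv",
    hasHeader: true,
    separatorChar: ',');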
This code uses the method LoadFromTextFile to load the CSV data directly into memory. The class field annotations tell the method how to store the loaded data in the MovieRating class.
Now I’m ready to start building the machine learning model:
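The pipeline could be assembled along these lines; the trainer options shown (iteration count, approximation rank) are illustrative defaults rather than tuned values:

// options for the matrix factorization trainer
var options = new MatrixFactorizationTrainer.Options
{
    MatrixColumnIndexColumnName = "userIdEncoded",
    MatrixRowIndexColumnName = "movieIdEncoded",
    LabelColumnName = "Label",
    NumberOfIterations = 20,
    ApproximationRank = 100
};

// encode both ID columns, then factorize the user/movie rating matrix
var pipeline = mlContext.Transforms.Conversion.MapValueToKey(
        outputColumnName: "userIdEncoded",
        inputColumnName: "userId")
    .Append(mlContext.Transforms.Conversion.MapValueToKey(
        outputColumnName: "movieIdEncoded",
        inputColumnName: "movieId"))
    .Append(mlContext.Recommendation().Trainers.MatrixFactorization(options));

// train the model on the training data
var model = pipeline.Fit(trainingData);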
Machine learning models in ML.NET are built with pipelines, which are sequences of data-loading, transformation, and learning components.
My pipeline has the following components:
MapValueToKey which reads the userId column and builds a dictionary of unique ID values. It then produces an output column called userIdEncoded containing an encoding for each ID. This step converts the IDs to numbers that the model can work with.
which reads the userId column and builds a dictionary of unique ID values. It then produces an output column called userIdEncoded containing an encoding for each ID. This step converts the IDs to numbers that the model can work with. Another MapValueToKey which reads the movieId column, encodes it, and stores the encodings in output column called movieIdEncoded.
which reads the movieId column, encodes it, and stores the encodings in output column called movieIdEncoded. A MatrixFactorization component that performs matrix factorization on the encoded ID columns and the ratings. This step calculates the movie rating predictions for every user and movie.
With the pipeline fully assembled, I can train the model with a call to Fit(…).
I now have a fully trained model. So now I need to load some validation data, predict the rating for each user and movie, and calculate the accuracy metrics of my model:
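A sketch of that evaluation step follows; the test file name is an assumption, and the metric properties are shown with the names used in recent ML.NET releases (older releases expose the same values as Rms, L1 and L2, which is the naming used below):

// load the validation data
var testData = mlContext.Data.LoadFromTextFile<MovieRating>(
    "recommendation-ratings-test.csv",
    hasHeader: true,
    separatorChar: ',');

// predict a rating for every user and movie in the test set
var predictions = model.Transform(testData);

// compare the predictions to the actual ratings
var metrics = mlContext.Regression.Evaluate(predictions);

Console.WriteLine($"  RMSE: {metrics.RootMeanSquaredError}");
Console.WriteLine($"  MAE (L1): {metrics.MeanAbsoluteError}");
Console.WriteLine($"  MSE (L2): {metrics.MeanSquaredError}");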
This code uses the Transform(…) method to make predictions for every user and movie in the test dataset.
The Evaluate(…) method compares these predictions to the actual rating values and automatically calculates three metrics for me:
Rms: this is the root mean square error or RMSE value. It’s the go-to metric in the field of machine learning to evaluate models and rate their accuracy. RMSE represents the length of a vector in n-dimensional space, made up of the error in each individual prediction.
L1: this is the mean absolute prediction error, expressed as a rating.
L2: this is the mean square prediction error, or MSE value. Note that RMSE and MSE are related: RMSE is just the square root of MSE.
To wrap up, let’s use the model to make a prediction.
I’m going to focus on a specific user, let’s say user number 6, and check if he or she likes the James Bond movie ‘GoldenEye’.
Here’s how to make the prediction:
I use the CreatePredictionEngine method to set up a prediction engine. The two type arguments are the input data class and the class to hold the prediction. And once my prediction engine is set up, I can simply call Predict(…) to make a single prediction on a MovieRating instance.
Let’s do one more thing and predict the top-5 favorite movies for this user:
This code uses a static helper class Movies to enumerate over every movie ID. It creates predictions for user 6 and every possible movie, sorts them by score in descending order, and takes the top 5 results.
Here’s the partial source of the helper class:
There’s a Movie class that represents a single movie. The static helper class Movies has an All property with a list of all movies, and a Get method to lookup a single movie by ID value.
With the code all done, it’s time to check the predictions. Here’s the code running in the Visual Studio Code debugger on my Mac:
Understanding Python Bytecode | Understanding Python Bytecode
Learn about disassembling Python bytecode
The source code of a programming language can be executed using an interpreter or a compiler. In a compiled language, a compiler will translate the source code directly into binary machine code. This machine code is specific to that target machine since each machine can have a different operating system and hardware. After compilation, the target machine will directly run the machine code.
In an interpreted language, the source code is not directly run by the target machine. There is another program called the interpreter that reads and executes the source code directly. The interpreter, which is specific to the target machine, translates each statement of the source code into machine code and runs it.
Python is usually called an interpreted language, however, it combines compiling and interpreting. When we execute a source code (a file with a .py extension), Python first compiles it into a bytecode. The bytecode is a low-level platform-independent representation of your source code, however, it is not the binary machine code and cannot be run by the target machine directly. In fact, it is a set of instructions for a virtual machine which is called the Python Virtual Machine (PVM).
After compilation, the bytecode is sent for execution to the PVM. The PVM is an interpreter that runs the bytecode and is part of the Python system. The bytecode is platform-independent, but PVM is specific to the target machine. The default implementation of the Python programming language is CPython which is written in the C programming language. CPython compiles the python source code into the bytecode, and this bytecode is then executed by the CPython virtual machine.
Generating bytecode files
In Python, the bytecode is stored in a .pyc file. In Python 3, the bytecode files are stored in a folder named __pycache__ . This folder is automatically created when you try to import another file that you created:
import file_name
However, it will not be created if we don’t import another file in the source code. In that case, we can still manually create it. To compile the individual files file_1.py to file_n.py from the command line, we can write:
python -m compileall file_1.py ... file_n.py
All the generated pyc files will be stored in the __pycache__ folder. If you provide no file names after compileall, it will compile all the python source code files in the current folder.
We can also use the compile() function to compile a string that contains the Python source code. The syntax of this function is:
compile(source, filename, mode, flag, dont_inherit, optimize)
We only focus on the first three arguments which are required (the others are optional). source is the source code to compile which can be a String, a Bytes object, or an AST object. filename is the name of the file that the source code comes from. If the source code does not come from a file, you can write whatever you like or leave an empty string. mode can be:
'exec' : accepts Python source code in any form (any number of statements or blocks). It compiles them into a bytecode that finally returns None
'eval' : accepts a single expression and compiles it into a bytecode that finally returns the value of that expression
'single' : only accepts a single statement (or multiple statements separated by ; ). If the last statement is an expression, then the resulting bytecode prints the repr() of the value of that expression to the standard output.
For example, to compile some Python statements we can write:
s='''
a=5
a+=1
print(a)
'''
compile(s, "", "exec")
or equivalently write:
compile("a=5
a+=1
print(a)", "", "exec")
To evaluate an expression we can write:
compile("a+7", "", "eval")
This mode gives an error if you don’t have an expression:
# This does not work:
compile("a=a+1", "", "eval")
Here a=a+1 is not an expression and does not return anything, so we cannot use the eval mode. However, we can use the single mode to compile it:
compile("a=a+1", "", "single")
But what is returned by compile ? When you run the compile function, Python returns:
<code object <module> at 0x000001A1DED95540, file "", line 1>
So what the compile function is returning is a code object (the address after at can be different on your machine).
Code object
The compile() function returns a Python code object. Everything in Python is an object. For example, when you define an integer variable, its value is stored in an int object and you can easily check its type using the type() function:
a = 5
type(a) # Output is: int
In a similar way, the bytecode generated by the compile function is stored in the code object.
c = compile("a=a+1", "", "single")
type(c) # Output is: code
The code object contains not only the bytecode but also some other information necessary for the CPython to run the bytecode (they will be discussed later). A code object can be executed or evaluated by passing it to the exec() or eval() function. So we can write:
exec(compile("print(5)", "", "single")) # Output is: 5
When you define a function in Python, it creates a code object for it and you can access it using the __code__ attribute. For example, we can write:
def f(n):
    return n
f.__code__
And the output will be:
<code object f at 0x000001A1E093E660, file "<ipython-input-61-88c7683062d9>", line 1>
Like any other objects the code object has some attributes, and to get the bytecode stored in a code object, you can use its co_code attribute:
c = compile("print(5)", "", "single")
c.co_code
The output is:
b'e\x00d\x00\x83\x01F\x00d\x01S\x00'
The result is a bytes literal which is prefixed with b'. It is an immutable sequence of bytes and has a type of bytes . Each byte can have a decimal value of 0 to 255. So a bytes literal is an immutable sequence of integers between 0 to 255. Each byte can be shown by an ASCII character whose character code is the same as the byte value or it can be shown by a leading \x followed by two characters. The leading \x escape means that the next two characters are interpreted as hex digits for the character code. For example:
print(c.co_code[0])
chr(c.co_code[0])
gives:
101
'e'
since the first element has the decimal value of 101 and can be shown with the character e whose ASCII character code is 101. Or:
print(c.co_code[4])
chr(c.co_code[4])
gives:
131
'\x83'
since the 4th element has the decimal value of 131. The hexadecimal value of 131 is 83. So this byte can be shown with a character whose character code is \x83 .
These sequences of bytes can be interpreted by CPython, but they are not human-friendly. So we need to understand how these bytes are mapped to the actual instructions that will be executed by CPython. In the next section, we are going to disassemble the byte code into some human-friendly instruction to see how the bytecode is executed by CPython.
Bytecode details
Before going into further details, it is important to note that the implementation detail of Bytecode usually changes between versions of Python. So what you see in this article may not be valid for all versions of Python. In fact, it includes the changes that happened in version 3.6, and some of the details may not be valid for older versions. The code in this article has been tested with Python 3.7.
The bytecode can be thought of as a series of instructions or a low-level program for the Python interpreter. After version 3.6, Python uses 2 bytes for each instruction. One byte is for the code of that instruction which is called an opcode, and one byte is reserved for its argument which is called the oparg. Each opcode has a human-friendly name which is called the opname. The bytecode instructions have a general format like this:
opcode oparg
opcode oparg
.
.
.
We already have the opcodes in our bytecode, and we just need to map them to their corresponding opname. There is a module called dis which can help with that. In this module, there is a list called opname which stores all the opnames. The i-th element of this list gives the opname for an instruction whose opcode is equal to i.
Some instructions do not need an argument, so they ignore the byte after the opcode. The opcodes which have a value below a certain number ignore their argument. This value is stored in dis.HAVE_ARGUMENT and is currently equal to 90. So the opcodes >= dis.HAVE_ARGUMENT have an argument, and the opcodes < dis.HAVE_ARGUMENT ignore it.
For example, suppose that we have a short bytecode b'd\x00Z\x00d\x01S\x00' and we want to disassemble it. This bytecode represents a sequence of four bytes. We can easily show their decimal value:
bytecode = b'd\x00Z\x00d\x01S\x00'
for byte in bytecode:
    print(byte, end=' ')
The output will be:
100 0 90 0 100 1 83 0
The first two bytes of the bytecode is 100 0 . The first byte is the opcode. To get its opname we can write ( dis should be imported first):
dis.opname[100]
and the result is LOAD_CONST . Since the opcode is bigger than dis.HAVE_ARGUMENT , it has an oparg which is the second byte 0 . So 100 0 translates into:
LOAD_CONST 0
The last two bytes in the bytecode are 83 0 . Again we write dis.opname[83] and the result is RETURN_VALUE . 83 is lower than 90 ( dis.HAVE_ARGUMENT ), so this opcode ignores the oparg, and 83 0 is disassembled into:
RETURN_VALUE
In addition, some of the instructions can have an argument too big to fit into the default one byte. There is a special opcode 144 to handle these instructions. Its opname is EXTENDED_ARG , and it is also stored in dis.EXTENDED_ARG . This opcode prefixes any opcode which has an argument bigger than one byte. For example, suppose that we have the opcode 131 (its opname is CALL_FUNCTION ) and its oparg needs to be 260. So it should be:
CALL_FUNCTION 260
However, the maximum number that a byte can store is 255, and 260 does not fit into a byte. So this opcode is prefixed with EXTENDED_ARG :
EXTENDED_ARG 1
CALL_FUNCTION 4
When the interpreter executes EXTENDED_ARG , its oparg (which is 1) is left-shifted by eight bits and stored in a temporary variable. Let’s call it extended_arg (do not confuse it with the opname EXTENDED_ARG ):
extended_arg = 1 << 8 # same as 1 * 256
So the binary value 0b1 (the binary value of 1) is converted to 0b100000000 . This is like multiplying 1 by 256 in the decimal system, and extended_arg will be equal to 256. Now we have two bytes in extended_arg . When the interpreter reaches the next instruction, this two-byte value is added to its oparg (which is 4 here) using a bitwise or .
extended_arg = extended_arg | 4
# Same as extended_arg += 4
This is like adding the value of the oparg to extended_arg . So now we have:
extended_arg = 256 + 4 = 260
and this value will be used as the actual oparg of CALL_FUNCTION . So, in fact,
EXTENDED_ARG 1
CALL_FUNCTION 4
is interpreted as:
EXTENDED_ARG 1
CALL_FUNCTION 260
For each opcode, at most three EXTENDED_ARG prefixes are allowed, which lets an argument grow from one byte up to four bytes.
Now we can focus on the oparg itself. What does it mean? Actually the meaning of each oparg depends on its opcode. As mentioned before, the code object stores some information other than the bytecode. This information can be accessed using the different attributes of the code object, and we need some of these attributes to decipher the meaning of each oparg. These attributes are: co_consts , co_names , co_varnames , co_cellvars and co_freevars .
Code object attributes
I am going to explain the meaning of these attributes using an example. Suppose that you have the code object of this source code:
# Listing 1
s = '''
a = 5
b = 'text'
def f(x):
    return x
f(5)
'''
c=compile(s, "", "exec")
Now we can check what is stored in each of these attributes:
1- co_consts : A tuple containing the literals used by the bytecode. Here c.co_consts returns:
(5, 'text', <code object f at 0x00000218C297EF60, file "", line 4>, 'f', None)
So the literals 5 and 'text' and the name of the function 'f' are all stored in this tuple. In addition, the body of the function f is stored in a separate code object and is treated like a literal which is also stored in this tuple. Remember that the exec mode in compile() generates a bytecode that finally returns None . This None value is also stored as a literal. In fact, if you compile an expression in eval mode like this:
s = "3 * a"
c1 = compile(s, "", "eval")
c1.co_consts # Output is (3,)
None won’t be included in the co_consts tuple anymore. The reason is that this expression returns its final value not None .
If you try to get the co_consts for the code object of a function like:
def f(x):
    a = x * 2
    return a
f.__code__.co_consts
The result will be (None, 2) . In fact, the default return value for a function is None , and it is always added as a literal. As I explain later, for the sake of efficiency, Python does not check if you are always going to reach a return statement or not, so None is always added as the default return value.
2- co_names : A tuple containing the names used by the bytecode which can be global variables, functions, and classes or also attributes loaded from objects. For example for the object code in Listing 1, c.co_names gives:
('a', 'b', 'f')
3- co_varnames : A tuple containing the local names used by the bytecode (arguments first, then the local variables). If we try it for the object code of Listing 1, it gives an empty tuple. The reason is that the local names are defined inside functions, and the function inside Listing 1 is stored as a separate code object, so its local variables will not be included in this tuple. To access the local variables of a function, we should use this attribute for the code object of that function. So we first write this source code:
def f(x):
    z = 3
    t = 5
    def g(y):
        return t*x + y
    return g
a = 5
b = 1
h = f(a)
Now f.__code__ gives the code object of f , and f.__code__.co_varnames gives:
('x', 'z', 'g')
Why is t not included? The reason is that t is not a local variable of f . It is a nonlocal variable since it is accessed by the closure g inside f . In fact, x is also a nonlocal variable, but since it is the function’s argument, it is always included in this tuple. To learn more about closures and nonlocal variables you can refer to this article.
4- co_cellvars : A tuple containing the names of nonlocal variables. These are the local variables of a function accessed by its inner functions. So f.__code__.co_cellvars gives:
('t', 'x')
5- co_freevars : A tuple containing the names of free variables. Free variables are the local variables of an outer function which are accessed by its inner function. So this attribute should be used with the code object of the closure h . Now h.__code__.co_freevars gives the same result:
('t', 'x')
Now that we are familiar with these attributes, we can go back to the opargs. The meaning of each oparg depends on its opcode. We have different categories of opcodes, and for each category, the oparg has a different meaning. In the dis module, there are some lists that give the opcodes for each category:
1- dis.hasconst : This list is equal to [100]. So only the opcode 100 (its opname is LOAD_CONST) is in the category of hasconst . The oparg of this opcode gives the index of an element in the co_consts tuple. For example in the bytecode of Listing 1, if we have:
LOAD_CONST 1
then the oparg is the element of co_consts whose index is 1. So we should replace 1 with co_consts[1] which is equal to 'text' . So the instruction will be interpreted as:
LOAD_CONST 'text'
Similarly, there are some other lists in the dis module that define the other categories for the opcodes:
2- dis.hasname : The oparg for the opcodes in this list is the index of an element in co_names
3- dis.haslocal : The oparg for the opcodes in this list is the index of an element in co_varnames
4- dis.hasfree : The oparg for the opcodes in this list is the index of an element in co_cellvars + co_freevars
5- dis.hascompare : The oparg for the opcodes in this list is the index of an element of the tuple dis.cmp_op . This tuple contains the comparison and membership operators like < or ==
6- dis.hasjrel : The oparg for the opcodes in this list should be replaced with offset + 2 + oparg where offset is the index of the byte in the bytecode sequence which represents the opcode.
The code object has one more important attribute that should be discussed here. It is called co_lnotab which stores the line number information of the bytecode. This is an array of signed bytes stored in a bytes literal and is used to map the bytecode offsets to the source code line numbers. Let me explain it by an example. Suppose that your source code has only three lines and it has been compiled into a bytecode which has 24 bytes:
1 0 LOAD_CONST 0
2 STORE_NAME 0
2 4 LOAD_NAME 0
6 LOAD_CONST 1
8 INPLACE_ADD
10 STORE_NAME 0
3 12 LOAD_NAME 1
14 LOAD_NAME 0
16 CALL_FUNCTION 1
18 POP_TOP
20 LOAD_CONST 2
22 RETURN_VALUE
Now we have a mapping from bytecode offsets to line numbers like this table:
The bytecode offset always starts at 0. The code object has an attribute named co_firstlineno which gives the line number for the offset zero. For this example co_firstlineno is equal to 1. Instead of storing the offset and line numbers literally, Python stores only the increments from one row to the next (excluding the first row). So the previous table turns into:
These two increment columns are zipped together in a sequence like this:
4 1 8 1
Each number is stored in a byte and the whole sequence is stored as a bytes literal in the co_lnotab of the code object. So if you check the value of co_lnotab you get:
b'\x04\x01\x08\x01'
which is the bytes literal for the previous sequence. So by having the attributes co_lnotab and co_firstlineno you can retrieve the mapping from the bytecode offsets to the source code line numbers. co_lnotab is a sequence of signed bytes. So each signed byte in it can take a value from -128 to 127 (These values are still stored in a byte which takes 0 to 255. But a value between 128 and 255 is considered a negative number). A negative increment means that the line number is decreasing (this feature is used in optimizers). But what happens if the line increment is bigger than 127? In that case, the line increment will be split into 127 and some extra bytes and those extra bytes will be stored with a zero offset increment (if it is smaller than -128, it will be split into -128 and some extra bytes with a zero offset increment). For example, suppose that the bytecode offset versus the line number is like this:
Then the offset increment versus the line number increment should be:
139 is equal to 127 + 12. So the previous row should be written as:
and should be stored as 8 127 0 12 . So the value of co_lnotab will be: b'\x08\x7f\x00\x0c' .
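To double-check this encoding, we can split the bytes literal back into its two interleaved sequences. This short snippet is not part of the original listing; it is just a quick sanity check of the scheme described above:

lnotab = b'\x08\x7f\x00\x0c'
list(lnotab[0::2])   # offset increments: [8, 0]
list(lnotab[1::2])   # line increments: [127, 12] -> 127 + 12 = 139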
Disassembling the bytecode
Now that we are familiar with the bytecode structure, we can write a simple disassembler program. We first write a generator function to unpack each instruction and yield the offset, opcode, and oparg:
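The original code listing is not reproduced in this text, so the following is a minimal sketch of such a generator based on the description that follows. The name unpack_op and its exact signature are assumptions; the later helper sketches in this article are written against it:

import dis

def unpack_op(bytecode):
    extended_arg = 0
    for offset in range(0, len(bytecode), 2):
        opcode = bytecode[offset]
        if opcode >= dis.HAVE_ARGUMENT:
            oparg = bytecode[offset + 1] | extended_arg
            # an EXTENDED_ARG prefix contributes its byte to the next oparg
            extended_arg = (oparg << 8) if opcode == dis.EXTENDED_ARG else 0
        else:
            oparg = None
            extended_arg = 0
        yield offset, opcode, oparg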
This function reads the next pair of bytes from the bytecode. The first byte is the opcode. By comparing this opcode with dis.HAVE_ARGUMENT , the function decides if it should take the second byte as the oparg or ignore it. The value of extended_arg will be added to oparg using the bitwise or ( | ). Initially, it is zero and has no effect on the oparg. If the opcode is equal to dis.EXTENDED_ARG , its oparg will be left-shifted by eight bits and stored in a temporary variable called extended_arg .
In the next iteration, this temporary variable will be added to the next oparg and adds one byte to it. This process continues if the next opcode is dis.EXTENDED_ARG again, and each time adds one byte to extended_arg . Finally when it reaches a different opcode, extended_arg will be added to its oparg and set back to zero.
The find_linestarts function returns a dictionary that contains the source code line number for each bytecode offset.
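Again, the original listing is not included here; a sketch that matches the following description (and the co_lnotab discussion above) could look like this:

def find_linestarts(code_obj):
    # Map bytecode offsets to source line numbers using co_lnotab.
    byte_increments = code_obj.co_lnotab[0::2]
    line_increments = code_obj.co_lnotab[1::2]
    offset, lineno = 0, code_obj.co_firstlineno
    linestarts = {offset: lineno}
    for byte_incr, line_incr in zip(byte_increments, line_increments):
        offset += byte_incr
        if line_incr >= 0x80:          # signed byte: treat as a decrement
            line_incr -= 0x100
        lineno += line_incr
        linestarts[offset] = lineno
    return linestarts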
It first divides the co_lnotab bytes literal into two sequences. One is the offset increments and the other is the line number increments. The line number for offset 0 is in co_firstlineno . The increments are added to these two numbers to get the bytecode offset and its corresponding line number. If the line number increment is equal to or bigger than 128 (0x80), it will be considered a decrement.
The get_argvalue function returns the human-friendly meaning of each oparg. It first checks to which category the opcode belongs and then figures out what the oparg is referring to.
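A sketch of this function, assuming the opcode categories listed in the previous section and the dis import from before (the signature is an assumption; only the name get_argvalue appears in the text):

def get_argvalue(offset, code_obj, opcode, oparg):
    # Translate a numeric oparg into the value or name it refers to.
    if oparg is None:
        return None
    if opcode in dis.hasconst:
        return code_obj.co_consts[oparg]
    if opcode in dis.hasname:
        return code_obj.co_names[oparg]
    if opcode in dis.haslocal:
        return code_obj.co_varnames[oparg]
    if opcode in dis.hasfree:
        return (code_obj.co_cellvars + code_obj.co_freevars)[oparg]
    if opcode in dis.hascompare:
        return dis.cmp_op[oparg]
    if opcode in dis.hasjrel:
        return "to " + str(offset + 2 + oparg)
    return None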
The findlabels function finds all the offsets in the bytecode which are jump targets and returns a list of these offsets. The jump targets will be discussed in the next section.
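A possible sketch, reusing the unpack_op generator from above (dis.hasjabs is the standard list of absolute-jump opcodes):

def findlabels(code_obj):
    # Collect every offset that is the target of a jump instruction.
    labels = []
    for offset, opcode, oparg in unpack_op(code_obj.co_code):
        if oparg is None:
            continue
        if opcode in dis.hasjrel:
            target = offset + 2 + oparg
        elif opcode in dis.hasjabs:
            target = oparg
        else:
            continue
        if target not in labels:
            labels.append(target)
    return labels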
Now we can use all these functions to disassemble the bytecode. The dissassemble function takes a code object and disassembles it:
It will first unpack the offset, opcode and oparg for each pair of bytes in the bytecode of the code object. Then it finds the corresponding source code line numbers, and checks if the offset is a jump target. Finally, it finds the opname and the meaning of the oparg and prints all the information. As mentioned before each function definition is stored in a separate code object. So at the end the function calls itself recursively to disassemble all the function definitions in the bytecode. Here is an example of using this function. Initially, we have this source code:
a=0
while a<10:
    print(a)
    a += 1
We first store it in a string and compile it to get the object code. Then we use the disassemble function to disassemble its bytecode:
s='''a=0
while a<10:
    print(a)
    a += 1
'''
c=compile(s, "", "exec")
disassemble(c)
The output is:
1 0 LOAD_CONST 0 (0)
2 STORE_NAME 0 (a)
2 4 SETUP_LOOP 28 (to 34)
>> 6 LOAD_NAME 0 (a)
8 LOAD_CONST 1 (10)
10 COMPARE_OP 0 (<)
12 POP_JUMP_IF_FALSE 32
3 14 LOAD_NAME 1 (print)
16 LOAD_NAME 0 (a)
18 CALL_FUNCTION 1
20 POP_TOP
4 22 LOAD_NAME 0 (a)
24 LOAD_CONST 2 (1)
26 INPLACE_ADD
28 STORE_NAME 0 (a)
30 JUMP_ABSOLUTE 6
>> 32 POP_BLOCK
>> 34 LOAD_CONST 3 (None)
36 RETURN_VALUE
So 4 lines of source code are converted into 38 bytes of bytecode or 19 lines of bytecode. In the next section, I will explain the meaning of these instructions and how they will be interpreted by CPython.
The module dis has a function named dis() which can disassemble the code object similarly. In fact, the disassemble function in this article is a simplified version of the dis.dis function. So instead of writing disassemble(c) we could write dis.dis(c) to get a similar output.
Disassembling a pyc file
As mentioned before, when the source code is compiled, the bytecode is stored in a pyc file. This bytecode can be disassembled in a similar way. However, it is important to mention that the pyc file contains some metadata plus the code object in marshal format. The marshal format is used for Python’s internal object serialization. The size of the metadata depends on the Python version, and for version 3.7 it is 16 bytes. So when you read the pyc file, first you should read the metadata, and then load the code object using the marshal module. For example, to disassemble a pyc file named u1.cpython-37.pyc in the __pycache__ folder we can write:
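The original snippet is not included in this text; a sketch of the idea, assuming the 16-byte header of Python 3.7 and the disassemble function defined earlier, is:

import marshal

with open("__pycache__/u1.cpython-37.pyc", "rb") as f:
    f.read(16)                  # skip the 16 bytes of metadata (Python 3.7)
    code_obj = marshal.load(f)  # load the module's code object
disassemble(code_obj)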
Bytecode operations
So far we learned how to disassemble the bytecode instructions. We can now focus on the meaning of these instructions and how they are executed by CPython. CPython which is the default implementation of Python uses a stack-based virtual machine. So first we should get familiar with the stack.
Stack and heap
Stack is a data structure with a LIFO (Last In First Out) order. It has two principal operations:
push: adds an element to the stack
pop: removes the most recently added element
So the last element added or pushed to the stack is the first element to be removed or popped. The advantage of using the stack to store data is that memory is managed for you. Reading from and writing to the stack is very fast, however, the size of the stack is limited.
Data in Python is represented as objects stored on a private heap. Accessing the data on the heap is a bit slower compared to the stack, however, the size of the heap is only limited by the size of virtual memory. The elements of the heap have no dependencies with each other and can be accessed randomly at any time. Everything in Python is an object and objects are always stored on the heap. It’s only the reference (or the pointer) to the object that is stored in the stack.
CPython uses the call stack for running a Python program. When a function is called in Python, a new frame is pushed onto the call stack, and every time a function call returns, its frame is popped off. The module in which the program runs has the bottom-most frame which is called the global frame or the module frame.
Each frame has an evaluation stack where the execution of a Python function occurs. The function arguments and its local variables are pushed into this evaluation stack. CPython uses the evaluation stack to store the parameters required for any operations and also the result of those operations. Before starting that operation, all the required parameters are pushed onto the evaluation stack. Then the operation is started and it pops its parameters. When the operation is finished, it pushes the result back onto the evaluation stack.
All the objects are stored on the heap and the evaluation stack in the frames deals with references to them. So the references to these objects can be pushed onto the evaluation stack temporarily to be used for the later operations. Most of Python’s bytecode instructions manipulate the evaluation stack in the current frame. In this article whenever we talk about the stack it means the evaluation stack in the current frame or the evaluation stack in the global frame if we are not in the scope of any functions.
Let me start with a simple example, and disassemble the bytecode of the following source code:
a=1
b=2
c=a+b
To do that we can write:
s='''a=1
b=2
c=a+b
'''
c=compile(s, "", "exec")
disassemble(c)
and we get:
1 0 LOAD_CONST 0 (1)
2 STORE_NAME 0 (a)
2 4 LOAD_CONST 1 (2)
6 STORE_NAME 1 (b)
3 8 LOAD_NAME 0 (a)
10 LOAD_NAME 1 (b)
12 BINARY_ADD
14 STORE_NAME 2 (c)
16 LOAD_CONST 2 (None)
18 RETURN_VALUE
In addition, we can check some other attributes of the code object:
c.co_consts
# output is: (1, 2, None)
c.co_names
# output is: ('a', 'b', 'c')
Here the code is running in the module, so we are inside the global frame. The first instruction is LOAD_CONST 0 . The instruction
LOAD_CONST consti
pushes the value of co_consts[consti] onto the stack. So we are pushing co_consts[0] (which is equal to 1 ) onto the stack.
It is important to note that stack works with references to the objects. So whenever we say that an instruction pushes an object or the value of an object onto the stack, it means that a reference (or pointer) to that object is being pushed. The same thing happens when an object or its value is popped off the stack. Again its reference is popped. The interpreter knows how to retrieve or store the object's data using these references.
The instruction
STORE_NAME namei
pops the top of the stack and stores it into an object whose reference is stored in co_names[namei] of the code object. So STORE_NAME 0 pops the element on top of the stack (which is 1 ) and stores it in an object. The reference to this object is co_names[0] which is a . These two instructions are the bytecode equivalent of a=1 in the source code. b=2 is converted similarly, and now the interpreter has created the objects a and b . The last line of the source code is c=a+b . The instruction
BINARY_ADD
pops the top two elements of the stack ( 1 and 2 ), adds them together and pushes the result ( 3 ) onto the stack. So now 3 is on top of the stack. After that STORE_NAME 2 pops the top of the stack into the local object (referred by) c . Now remember that compile in exec mode compiles the source code into a bytecode that finally returns None . The instruction LOAD_CONST 2 pushes co_consts[2]=None onto the stack, and the instruction
RETURN_VALUE
returns with the top of the stack to the caller of the function. Of course, here we are in the module scope and there is no caller function, so None is the final result which remains on top of the global stack. Figure 1 shows all the bytecode operations with offsets 0 to 14 (Again it should be noted that the references to the objects are pushed onto the stack, not the objects or their values. The figure does not show it explicitly).
Functions, global and local variables
Now let’s see what happens if we also have a function. We are going to disassemble the bytecode of a source code which has a function:
#Listing 2
s='''a = 1
b = 2
def f(x):
    global b
    b = 3
    y = x + 1
    return y
f(4)
print(a)
'''
c=compile(s, "", "exec")
disassemble(c)
The output is:
1 0 LOAD_CONST 0 (1)
2 STORE_NAME 0 (a)
2 4 LOAD_CONST 1 (2)
6 STORE_GLOBAL 1 (b)
3 8 LOAD_CONST 2 (<code object f at 0x00000218C2E758A0, file "", line 3>)
10 LOAD_CONST 3 ('f')
12 MAKE_FUNCTION 0
14 STORE_NAME 2 (f)
8 16 LOAD_NAME 2 (f)
18 LOAD_CONST 4 (4)
20 CALL_FUNCTION 1
22 POP_TOP
9 24 LOAD_NAME 3 (print)
26 LOAD_NAME 0 (a)
28 CALL_FUNCTION 1
30 POP_TOP
32 LOAD_CONST 5 (None)
34 RETURN_VALUE
Disassembly of<code object f at 0x00000218C2E758A0, file "", line 3>:
5 0 LOAD_CONST 1 (3)
2 STORE_GLOBAL 0 (b)
6 4 LOAD_FAST 0 (x)
6 LOAD_CONST 2 (1)
8 BINARY_ADD
10 STORE_FAST 1 (y)
7 12 LOAD_FAST 1 (y)
14 RETURN_VALUE
In addition, we can check some other attributes of the code object:
c.co_consts
# output is: (1, 2, <code object f at 0x00000218C2E758A0, file "", line 3>, 'f', 4, None)
c.co_names
# Output is: ('a', 'b', 'f', 'print')
In the first line (offsets 0 and 2) the constant 1 is first pushed into the evaluation stack of the global frame using LOAD_CONST 0 . Then STORE_NAME 0 pops it and stores it in an object.
In the second line, the constant 2 is pushed into the stack using LOAD_CONST 1 . However, a different opname is used to assign it to the reference. The instruction
STORE_GLOBAL namei
pops the top of the stack and stores it into an object whose reference is stored in co_names[namei] . So 2 is stored in the object referred by b . This is considered a global variable. But why was this instruction not used for a ? The reason is that b is declared as a global variable inside the function f , while a is not. If a variable is defined at the module scope and no function declares it as global, it will be stored and loaded using STORE_NAME and LOAD_NAME . At the module scope, there is no distinction between global and local variables.
In the third line, the function f is defined. The body of the function is compiled in a separate code object named <code object f at 0x00000218C2E758A0, file "", line 3> and it is pushed onto the stack. Then a string object which is the name of this function 'f' is pushed onto the stack (in fact references to them are pushed). The instruction
MAKE_FUNCTION argc
is used to create the function. It needs some parameters that should be pushed onto the stack. The name of the function should be on top of the stack and the function’s code object should be below it. In this example, its oparg is zero, but it can have other values. For example, if the function definition had a keyword argument like:
def f(x=5):
    global b
    b = 3
    y = x + 1
    return y
Then the disassembled bytecode for line 2 would be:
2 4 LOAD_CONST 5 ((5,))
6 LOAD_CONST 1 (<code object f at 0x00000218C2E75AE0, file "", line 2>)
8 LOAD_CONST 2 ('f')
10 MAKE_FUNCTION 1
An oparg of 1 for MAKE_FUNCTION indicates that the function has some keyword arguments, and a tuple containing the default values should be pushed onto the stack before the function’s code object (here it is (5,) ). After creating the function, MAKE_FUNCTION pushes the new function object onto the stack. Then at offset 14, STORE_NAME 2 pops the function object and stores it as a function object referenced by f .
Now let’s look inside the code object of f(x) which starts at line 5. The statement global b does not convert into a separate instruction in the bytecode. It only guides the compiler that b should be treated as a global variable, so STORE_GLOBAL 0 will be used to change its value. The instruction
LOAD_GLOBAL namei
pushes a reference to the global object referred by co_names[namei] onto the stack. In our function, the constant 3 pushed by LOAD_CONST 1 is then stored in b using STORE_GLOBAL 0 . The instruction
LOAD_FAST var_num
pushes a reference to the object whose reference is co_varnames[var_num] onto the stack. In the code object of function f , the attribute co_varnames contains:
('x', 'y')
So LOAD_FAST 0 pushes x onto the stack. Then 1 is pushed onto the stack. BINARY_ADD pops x and 1 , adds them together and pushes the result onto the stack. The instruction
STORE_FAST var_num
pops the top of the stack and stores it into an object whose reference is stored in co_varnames[var_num] . So STORE_FAST 1 pops the result and stores it in an object whose reference is y . LOAD_FAST and STORE_FAST are used with local variables of the functions. So they are not used at the module scope. On the other hand, LOAD_GLOBAL and STORE_GLOBAL are used for the global variables accessed inside functions. Finally, LOAD_FAST 1 will push the value of y on top of the stack and RETURN_VALUE will return it to the caller of the function which is the module.
But how this function is called? If you look at the bytecode of line 8, first, LOAD_NAME 2 pushes the function object whose reference is f onto the stack. LOAD_CONST 4 pushes its argument ( 4 ) onto the stack. The instruction
CALL_FUNCTION argc
calls a callable object with positional arguments. Its oparg, argc indicates the number of positional arguments. The top of the stack contains positional arguments, with the right-most argument on top. Below the arguments is the function callable object to call.
CALL_FUNCTION first pops all the arguments and the callable object off the stack. Then it will allocate a new frame on the call stack, populate the local variables for the function call, and execute the bytecode of the function inside that frame. Once that's done, the frame will be popped off the call stack, and in the previous frame, the return value of the function will be pushed on top of the evaluation stack. If there is no previous frame, it will be pushed on top of the evaluation stack of the global frame.
In our example, we only have one positional argument, so the instruction will be CALL_FUNCTION 1 . After that, the instruction
POP_TOP
pops the item on top of the stack. That is because we do not need the returned value of the function anymore. Figure 2 shows all the bytecode operations with offsets 16 to 22. The bytecode instructions inside f(x) are shown in red.
Figure 2
Built-in functions
In line 9 of the disassembled bytecode of Listing 2, we want to print(a) . print is also a function, but it is a built-in Python function. The name of the function is a reference to its callable object. So first it is pushed onto the stack and then its argument is pushed. Finally, it will be called using CALL_FUNCTION . print will return None , and the returned value will be popped off the stack after that.
Python uses its built-in functions to create data structures. For example, the following line:
a = [1,2,3]
will be converted to:
1 0 LOAD_CONST 0 (1)
2 LOAD_CONST 1 (2)
4 LOAD_CONST 2 (3)
6 BUILD_LIST 3
8 STORE_NAME 0 (a)
Initially, each element of the list is pushed onto the stack. Then the instruction
BUILD_LIST count
is called to create the list using the count items from the stack and pushes the resulting list object onto the stack. Finally, the object on the stack will be popped and stored on the heap and a will be its reference.
EXTENDED_ARG
As mentioned before, some of the instructions can have an argument too big to fit into the default one byte, and they will be prefixed by the instruction EXTENDED_ARG . Here is an example. Suppose that we want to print 260 * characters. We could simply write print('*' * 260) . However, I will write something unusual instead:
s= 'print(' + '"*",' * 260 + ')'
c = compile(s, "", "exec")
disassemble(c)
Here s contains a print function which takes 260 arguments and each of them is a * character. Now look at the resulting disassembled bytecode:
1 0 LOAD_NAME 0 (print)
2 LOAD_CONST 0 ('*')
4 LOAD_CONST 0 ('*')
. .
. .
. .
518 LOAD_CONST 0 ('*')
520 LOAD_CONST 0 ('*')
522 EXTENDED_ARG 1
524 CALL_FUNCTION 260
526 POP_TOP
528 LOAD_CONST 1 (None)
530 RETURN_VALUE
Here print is pushed onto the stack first. Then its 260 arguments are pushed. Then CALL_FUNCTION should call the function. But it needs the number of the arguments (of the target function) as its oparg. Here this number is 260 which is bigger than the maximum number that a byte can take. Remember that the oparg is only one byte. So the CALL_FUNCTION is prefixed by EXTENDED_ARG . The actual bytecode is:
522 EXTENDED_ARG 1
524 CALL_FUNCTION 4
As mentioned before the oparg of EXTENDED_ARG will be left-shifted by eight bits or simply multiplied by 256 and will be added to the oparg of the next opcode. So the oparg of CALL_FUNCTION will be interpreted to be 256+4 = 260 (please note that what the disassemble function shows is this interpreted oparg not the actual oparg in the bytecode).
Conditional statements and jumps
Consider the following source code which has an if-else statement:
s='''a = 1
if a>=0:
    b=a
else:
    b=-a
'''
c=compile(s, "", "exec")
disassemble(c)
The disassembled bytecode is:
1 0 LOAD_CONST 0 (1)
2 STORE_NAME 0 (a)
2 4 LOAD_NAME 0 (a)
6 LOAD_CONST 1 (0)
8 COMPARE_OP 5 (>=)
10 POP_JUMP_IF_FALSE 18
3 12 LOAD_NAME 0 (a)
14 STORE_NAME 1 (b)
16 JUMP_FORWARD 6 (to 24)
5 >> 18 LOAD_NAME 0 (a)
20 UNARY_NEGATIVE
22 STORE_NAME 1 (b)
>> 24 LOAD_CONST 2 (None)
26 RETURN_VALUE
We have a few new instructions here. In line 2, the object that a refers to is pushed onto the stack, and then literal 0 is pushed. The instruction
COMPARE_OP oparg
performs a Boolean operation. The operation name can be found in cmp_op[oparg] . The values of cmp_op are stored in the tuple dis.cmp_op . The instruction first pops the top two elements of the stack. We call the first one TOS1 and the second one TOS2 . Then the boolean operation selected by oparg is performed on them (TOS2 cmp_op[oparg] TOS1) , and the result is pushed on top of the stack. In this example TOS1=0 and TOS2=value of a . In addition, the oparg is 5 and cmp_op[5]='>=' . So COMPARE_OP will test a>=0 and store the result (which is true or false) on top of the stack.
The instruction
POP_JUMP_IF_FALSE target
performs a conditional jump. First, it pops the top of the stack. If the element on top of the stack is false, it sets the bytecode counter to target. The bytecode counter shows the current bytecode offset which is being executed. So it jumps to the bytecode offset which is equal to target and the execution of bytecode continues from there. The offset 18 in the bytecode is a jump target, so there is a >> in front of that in the disassembled bytecode. The instruction
JUMP_FORWARD delta
increments the bytecode counter by delta. In the previous bytecode, the offset of this instruction is 16, and we know that each instruction takes 2 bytes. So when this instruction is finished, the bytecode counter is 16+2=18 . Here delta=6 , and 18+6=24 , so it jumps to the offset 24 . The offset 24 is a jump target and it has a >> sign too.
Now we can see how the if-else statement is converted to the bytecode. The cmp_op checks if a≥0 . If the result is false, POP_JUMP_IF_FALSE jumps to the offset 18 which is the start of else block. If it is true, the if block will be executed and then JUMP_FORWARD jumps to the offset 24 and does not execute the else block.
Now let’s see a more complicated Boolean expression. Consider the following source code:
s='''a = 1
c = 3
if a>=0 and c==3:
    b=a
else:
    b=-a
'''
c=compile(s, "", "exec")
disassemble(c)
Here we have a logical and . The disassembled bytecode is:
1 0 LOAD_CONST 0 (1)
2 STORE_NAME 0 (a)
2 4 LOAD_CONST 1 (3)
6 STORE_NAME 1 (c)
3 8 LOAD_NAME 0 (a)
10 LOAD_CONST 2 (0)
12 COMPARE_OP 5 (>=)
14 POP_JUMP_IF_FALSE 30
16 LOAD_NAME 1 (c)
18 LOAD_CONST 1 (3)
20 COMPARE_OP 2 (==)
22 POP_JUMP_IF_FALSE 30
4 24 LOAD_NAME 0 (a)
26 STORE_NAME 2 (b)
28 JUMP_FORWARD 6 (to 36)
6 >> 30 LOAD_NAME 0 (a)
32 UNARY_NEGATIVE
34 STORE_NAME 2 (b)
>> 36 LOAD_CONST 3 (None)
38 RETURN_VALUE
In Python and is a short-circuit operator. So when evaluating X and Y , it only evaluates Y if X is true. This can be easily seen in the bytecode. In line 3, first, the left operand of and is evaluated. If (a>=0) is false, it does not evaluate the second operand and jumps to the offset 30 to execute the else block. However, if it is true, the second operand (c==3) will be evaluated too.
Loops and block stack
As mentioned before, there is an evaluation stack inside each frame. In addition, in each frame, there is a block stack. It is used by CPython to keep track of certain types of control structures like the loops, with blocks and try/except blocks. When CPython wants to enter one of these structures a new item is pushed onto the block stack, and when CPython exits that structure, the item for that structure is popped off the block stack. Using the block stack CPython knows which structure is currently active. So when it reaches a break or continue statement, it knows which structures should be affected.
Let’s see how loops are implemented in the bytecode. Consider the following code and its disassembled bytecode:
s='''for i in range(3):
    print(i)
'''
c=compile(s, "", "exec")
disassemble(c)
--------------------------------------------------------------------
1 0 SETUP_LOOP 24 (to 26)
2 LOAD_NAME 0 (range)
4 LOAD_CONST 0 (3)
6 CALL_FUNCTION 1
8 GET_ITER
>> 10 FOR_ITER 12 (to 24)
12 STORE_NAME 1 (i)
2 14 LOAD_NAME 2 (print)
16 LOAD_NAME 1 (i)
18 CALL_FUNCTION 1
20 POP_TOP
22 JUMP_ABSOLUTE 10
>> 24 POP_BLOCK
>> 26 LOAD_CONST 1 (None)
28 RETURN_VALUE
The instruction
SETUP_LOOP delta
is executed before the loop starts. This instruction pushes a new item (which is also called a block) onto the block stack. delta is added to the bytecode counter to determine the offset of the next instruction after the loop. Here the offset of SETUP_LOOP is 0 , so the bytecode counter is 0+2=2 . In addition, delta is 24 , so the offset of the next instruction after the loop is 2+24=26 . This offset is stored in the block that is pushed onto the block stack. In addition, the current number of items in the evaluation stack is stored in this block.
After that, the function range(3) should be executed. Its name is pushed onto the stack first, then its argument ( 3 ) is pushed, and CALL_FUNCTION 1 calls it. The result is an iterable. Iterables can generate an iterator using the instruction:
GET_ITER
It takes the iterable on top of the stack and pushes an iterator of that. The instruction:
FOR_ITER delta
assumes that there is an iterator on top of the stack. It calls its __next__() method. If it yields a new value, this value is pushed on top of the stack (above the iterator). Inside the loop, this value is popped and stored in i , and then the print function is called. The POP_TOP instruction then pops the value returned by print . After that, the instruction
JUMP_ABSOLUTE target
sets the bytecode counter to target and jumps to the target offset. So it jumps to offset 10 and runs FOR_ITER again to get the next value of the iterator. If the iterator indicates that there are no further elements available, the top of the stack (the exhausted iterator) is popped, and the bytecode counter is incremented by delta. Here delta=12 , so after finishing the loop it jumps to offset 24. At offset 24, the instruction
POP_BLOCK
removes the current block from the top of the block stack. The offset of the next instruction after the loop is stored in the block (here it is 26). So the interpreter will jump to that offset and continue execution from there. Figure 3 shows the bytecode operations with offsets 0, 10, 24 and 26 as an example (In fact in Figures 1 and 2 we only showed the evaluation stack in each frame).
Figure 3
But what happens if we add a break statement to this loop? Consider the following source code and its disassembled bytecode:
s='''for i in range(3):
    break
    print(i)
'''
c=compile(s, "", "exec")
disassemble(c)
--------------------------------------------------------------------
1 0 SETUP_LOOP 26 (to 28)
2 LOAD_NAME 0 (range)
4 LOAD_CONST 0 (3)
6 CALL_FUNCTION 1
8 GET_ITER
>> 10 FOR_ITER 14 (to 26)
12 STORE_NAME 1 (i)
2 14 BREAK_LOOP
3 16 LOAD_NAME 2 (print)
18 LOAD_NAME 1 (i)
20 CALL_FUNCTION 1
22 POP_TOP
24 JUMP_ABSOLUTE 10
>> 26 POP_BLOCK
>> 28 LOAD_CONST 1 (None)
30 RETURN_VALUE
We have only added a break statement to the previous loop. This statement is converted to
BREAK_LOOP
This opcode removes those extra items on the evaluation stack and pops the block from the top of the block stack. You should notice that the other instructions of the loop are still using the evaluation stack. So when the loop breaks, the items that belong to it should be popped off the evaluation stack. In this example, the iterator object is still on top of the stack. Remember that the block in the block stack stores the number of items that existed in the evaluation stack before starting the loop.
So by knowing that number, BREAK_LOOP pops those extra items off the evaluation stack. Then it jumps to the offset which is stored in the current block of the block stack (here it is 28). That is the offset of the next instruction after the loop. So the loop breaks and the execution is continued from there.
Creating the code object
The code object is an object of type code , and it is possible to create it dynamically. The module types can help with dynamic creation of new types, and the class CodeType() in this module returns a new code object:
types.CodeType(co_argcount, co_kwonlyargcount,
co_nlocals, co_stacksize, co_flags,
co_code, co_consts, co_names,
co_varnames, co_filename, co_name,
co_firstlineno, co_lnotab, freevars=None,
cellvars=None )
The arguments form all the attributes of the code object. You are already familiar with some of these arguments (like co_varnames and co_firstlineno ). freevars and cellvars are optional since they are used in closures and not all functions use them (Refer to this article for more information about them). The other attributes are explained using the following function as an example:
def f(a, b, *args, c, **kwargs):
    d=1
    def g():
        return 1
    g()
    return 1
co_argcount : If the code object is that of a function, the number of arguments it takes (not including keyword only arguments, * or ** args). For function f it is 2 .
co_kwonlyargcount : If the code object is that of a function, number of keyword only arguments (not including ** arg). For function f it is 1 .
co_nlocals : The number of local variables plus the name of functions defined in the code object (arguments are also considered local variables). In fact, it is the number of elements in co_varnames which is ('a', 'b', 'c', 'args', 'kwargs', 'd', 'g') . So it is 7 for f .
co_stacksize : Shows the largest number of elements that will be pushed onto the evaluation stack by this code object. Remember that some opcodes need to push some elements onto the evaluation stack. This attribute shows the largest size that the stack will ever grow to from the bytecode operations. In this example it is 2 . Let me explain the reason for that. If you disassemble the bytecode of this function you get:
2 0 LOAD_CONST 1 (1)
2 STORE_FAST 5 (d)
3 4 LOAD_CONST 2 (<code object g at 0x0000028A62AB1D20, file "<ipython-input-614-cb7dfbcc0072>", line 3>)
6 LOAD_CONST 3 ('f.<locals>.g')
8 MAKE_FUNCTION 0
10 STORE_FAST 6 (g)
5 12 LOAD_FAST 6 (g)
14 CALL_FUNCTION 0
16 POP_TOP
6 18 LOAD_CONST 1 (1)
20 RETURN_VALUE
In line 2, one element is pushed onto the stack using the LOAD_CONST and will be popped using STORE_FAST . Lines 5 and 6 similarly push one element onto the stack and pop it later. But in line 3, two elements are pushed onto the stack to define the inner function g : its code object and its name. So this is the maximum number of elements that will be pushed onto the evaluation stack by this code object, and it determines the stack size.
co_flags : An integer, with bits indicating things like whether the function accepts a variable number of arguments, whether the function is a generator, etc. In our example its value is 79 . The binary value of 79 is 0b1001111 . The bits are numbered from the right, so the first (least significant) bit is the rightmost one. You can refer to this link for the meaning of these bits. For example, the third bit from the right represents the CO_VARARGS flag. When it is 1 it means that the code object has a variable positional parameter ( *args -like).
co_filename : A string, specifying the file in which the function is present. In this case, it is '<ipython-input-59–960ced5b1120>' since I was running the script in Jupyter notebook.
co_name : A name with which this code object was defined. Here it is the name of the function 'f' .
Bytecode injection
Now that we are completely familiar with the code object, we can start changing its bytecode. It is important to note that the code object is immutable. So once created we cannot change it. Suppose that we want to change the bytecode of the following function:
def f(x, y):
    return x + y

c = f.__code__
Here we cannot change the bytecode of the code object of the function directly. Instead, we need to create a new code object and then assign it to this function. To do that we need a few more functions. The disassemble function can disassemble the bytecode into some human-friendly instructions. We can change them as we like, but then we need to assemble it back to the bytecode to assign it to a new code object. The output of disassemble is a formatted string which is easy to read, but difficult to change. So I will add a new function which can disassemble the bytecode into a list of instructions. It is very similar to disassemble , however, its output is a list.
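Its listing is also missing from this text; a sketch consistent with the output shown below, built on the same helpers as before (its exact signature is an assumption), could be:

def disassemble_to_list(code_obj):
    # Like disassemble(), but return [opname, argvalue] pairs instead of printing them.
    instructions = []
    for offset, opcode, oparg in unpack_op(code_obj.co_code):
        opname = dis.opname[opcode]
        if oparg is None:
            instructions.append([opname])
        else:
            argvalue = get_argvalue(offset, code_obj, opcode, oparg)
            instructions.append([opname, oparg if argvalue is None else argvalue])
    return instructions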
We can try it on the previous function:
disassembled_bytecode = disassemble_to_list(c)
Now disassembled_bytecode is equal to:
[['LOAD_FAST', 'x'],
['LOAD_FAST', 'y'],
['BINARY_ADD'],
['RETURN_VALUE']]
We can now change the instructions of this list easily. But we also need to assemble it back to the bytecode:
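The original listings for get_oparg and assemble are not reproduced here. The sketch below is written to match the call assemble(disassembled_bytecode, c.co_consts, c.co_varnames, c.co_names, c.co_cellvars+c.co_freevars) used later; it does not recompute jump targets or EXTENDED_ARG prefixes, so it only round-trips straight-line code like the example in this section:

def get_oparg(opname, argvalue, co_consts, co_varnames, co_names, cells_and_frees):
    # Inverse of get_argvalue: turn a human-friendly argument back into a numeric oparg.
    opcode = dis.opname.index(opname)
    if opcode in dis.hasconst:
        return co_consts.index(argvalue)
    if opcode in dis.hasname:
        return co_names.index(argvalue)
    if opcode in dis.haslocal:
        return co_varnames.index(argvalue)
    if opcode in dis.hasfree:
        return cells_and_frees.index(argvalue)
    if opcode in dis.hascompare:
        return dis.cmp_op.index(argvalue)
    return argvalue                 # plain numeric arguments are used as-is

def assemble(instructions, co_consts, co_varnames, co_names, cells_and_frees):
    # Turn a list of [opname] / [opname, argvalue] items back into a bytes literal.
    byte_list = []
    for instruction in instructions:
        opname = instruction[0]
        opcode = dis.opname.index(opname)
        if len(instruction) == 1:
            oparg = 0               # instructions without an argument ignore this byte
        else:
            oparg = get_oparg(opname, instruction[1], co_consts,
                              co_varnames, co_names, cells_and_frees)
        byte_list += [opcode, oparg]
    return bytes(byte_list)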
The function get_oparg is like the inverse of get_argvalue . It takes an argvalue which is the human-friendly meaning of an oparg and returns the corresponding oparg. It needs the relevant attributes of the code object (like co_consts ) since they are necessary to convert the argvalue into the oparg.
The function assemble takes a disassembled bytecode list together with those code object attributes and assembles it back into the bytecode. It uses dis.opname to convert the opname to the opcode. Then it calls get_oparg to convert the argvalue to the oparg. Finally, it returns a bytes literal of the bytecode list. We can now use these new functions to change the bytecode of the previous function f . First, we change one of the instructions in disassembled_bytecode :
disassembled_bytecode[2] = ['BINARY_MULTIPLY']
The instruction
BINARY_MULTIPLY
pops the top two elements of the stack, multiplies them together and pushes the result onto the stack. Now we assemble the modified disassembled bytecode:
new_co_code= assemble(disassembled_bytecode, c.co_consts,
c.co_varnames, c.co_names,
c.co_cellvars+c.co_freevars)
After that we create a new code object:
import types
nc = types.CodeType(c.co_argcount, c.co_kwonlyargcount,
c.co_nlocals, c.co_stacksize, c.co_flags,
new_co_code, c.co_consts, c.co_names,
c.co_varnames, c.co_filename, c.co_name,
c.co_firstlineno, c.co_lnotab,
c.co_freevars, c.co_cellvars)
f.__code__ = nc
We use all the attributes of f to create it and only replace the new bytecode ( new_co_code ). Then we assign the new code object to f . Now if we run f again, it does not add its arguments together. Instead, it will multiply them together:
f(2,5) # Output is 10 not 7
Caution: The types.CodeType function has two optional arguments for freevars and cellvars , however, you should be careful when using them. As mentioned before the co_cellvars and co_freevars attributes of the code object are only used when the code object belongs to a function which has free variables or nonlocal variables. So the function should be a closure or a closure should have been defined inside it. For example, consider the following function:
def func(x):
    def g(y):
        return x + y
    return g
Now if we check its code object:
c = func.__code__
c.co_cellvars # Output is: ('x',)
In fact, this function has one nonlocal variable x since this variable is accessed by its inner functions. Now we can try recreating its code object using the same attributes:
nc = types.CodeType(c.co_argcount, c.co_kwonlyargcount,
c.co_nlocals, c.co_stacksize, c.co_flags,
new_co_code, c.co_consts, c.co_names,
c.co_varnames, c.co_filename, c.co_name,
c.co_firstlineno, c.co_lnotab,
cellvars = c.co_cellvars,
freevars = c.co_freevars)
But if we check the same attribute of the new code object
nc.co_cellvars
# Output is: ()
It turns out to be empty. So types.CodeType cannot create the same code object. If you try to assign this code object to a function and execute that function, you will get an error (this has been tested on Python 3.7.4).
Code optimization
Understanding the bytecode instructions can help us with the optimization of the source code. Consider the following source code:
setup1='''import math
mult = 2
def f():
    total = 0
    i = 1
    for i in range(1, 200):
        total += mult * math.log(i)
    return total
'''

setup2='''import math
def f():
    log = math.log
    mult = 2
    total = 0
    for i in range(1, 200):
        total += mult * log(i)
    return total
'''
Here we define a function f() to calculate a simple mathematical expression. It has been defined in two different ways. In setup1 , we are using the global variable mult inside f() and directly use the log() function from the math module. In setup2 , mult is a local variable of f() . In addition, math.log is first stored in the local variable log . Now we can compare the performance of these functions:
import timeit

t1 = timeit.timeit(stmt="f()", setup=setup1, number=100000)
t2 = timeit.timeit(stmt="f()", setup=setup2, number=100000)
print("t1=", t1)
print("t2=", t2)
--------------------------------------------------------------------
t1= 3.8076129000110086
t2= 3.2230119000014383
You may get different numbers for t1 and t2 , but the bottom line is that setup2 is faster than setup1 . Now let’s compare their bytecode to see why it is faster. We just look at line 7 in the disassembled code of setup1 and setup2 . This is the bytecode for this line: total += mult * log(i) .
In setup1 we have:
7 24 LOAD_FAST 0 (total)
26 LOAD_GLOBAL 1 (mult)
28 LOAD_GLOBAL 2 (math)
30 LOAD_METHOD 3 (log)
32 LOAD_FAST 1 (i)
34 CALL_METHOD 1
36 BINARY_MULTIPLY
38 INPLACE_ADD
40 STORE_FAST 0 (total)
42 JUMP_ABSOLUTE 20
>> 44 POP_BLOCK
But in setup2 we get:
7 30 LOAD_FAST 2 (total)
32 LOAD_FAST 1 (mult)
34 LOAD_FAST 0 (log)
36 LOAD_FAST 3 (i)
38 CALL_FUNCTION 1
40 BINARY_MULTIPLY
42 INPLACE_ADD
44 STORE_FAST 2 (total)
46 JUMP_ABSOLUTE 26
>> 48 POP_BLOCK
As you see, in setup1 both mult and math are loaded using LOAD_GLOBAL , but in setup2 , mult and log are loaded using LOAD_FAST . So two LOAD_GLOBAL instructions have been replaced with LOAD_FAST . The fact is that LOAD_FAST , as its name suggests, is much faster than LOAD_GLOBAL . We mentioned that the names of the global and local variables are stored in co_names and co_varnames . But how does the CPython interpreter find the values when executing the compiled code?
Local variables are stored in an array on each frame (which is not shown in the previous figures to make them simpler). We know that the names of local variables are stored in co_varnames . Their values will be stored in the same order in this array. So when the interpreter sees an instruction like LOAD_FAST 1 (mult) , it reads the element of that array at index 1 .
The global and builtins of the module are stored in a dictionary. We know that their names are stored in the co_names . So when the interpreter sees an instruction like LOAD_GLOBAL 1 (mult) , it first gets the name of that global variable from co_names[1] . Then it will look up this name in the dictionary to get its value. This is a much slower process compared to a simple array lookup for the local variables. As a result, LOAD_FAST is faster than LOAD_GLOBAL , and replacing LOAD_GLOBAL with LOAD_FAST can improve performance. It can be done by simply storing builtin and global variables into local variables or directly changing the bytecode instructions.
Example: Defining constants in Python
This example illustrates how to use the bytecode injection to change the behavior of functions. We are going to write a decorator which adds a const statement to Python. In some programming languages like C, C++, and JavaScript there is a const keyword. If a variable is declared as const using this keyword, then changing its value is illegal, and we cannot change the value of this variable in the source code anymore.
Python does not have a const statement, and I do not claim that it is really necessary to have such a keyword in Python. In addition, defining constants can also be done without using bytecode injection. So this is just an example to show you how to put bytecode injection into action. First, let me show you how to use it. The const keyword is provided using a function decorator named const . Once you decorate a function with const , you can declare the variables inside it as constants using the keyword const. (the . at the end is part of the keyword). Here is an example:
@const
def f(x):
const. A=5
return A*x
f(2) # Output is: 10
The variable A inside f is now a constant. Now if you try to reassign this variable inside f , an exception will be raised:
@const
def f(x):
const. A=5
A = A + 1
return A*x
--------------------------------------------------------------------
# This raises an exception:
ConstError: 'A' is a constant and cannot be reassigned!
When a variable is declared as const., it should be assigned to its initial value, and it will be a local variable of that function.
Now let me show you how it has been implemented. Suppose that I define a function like this (without decoration):
def f(x):
const. A=5
A = A + 1
return A*x
It will be compiled properly. But if you try executing this function, you get an error:
f(2)
--------------------------------------------------------------------
NameError: name 'const' is not defined
Now let's take a look at the disassembled bytecode of this function:
2 0 LOAD_CONST 1 (5)
2 LOAD_GLOBAL 0 (const)
4 STORE_ATTR 1 (A)
3 6 LOAD_FAST 1 (A)
8 LOAD_CONST 2 (1)
10 BINARY_ADD
12 STORE_FAST 1 (A)
4 14 LOAD_FAST 1 (A)
16 LOAD_FAST 0 (x)
18 BINARY_MULTIPLY
20 RETURN_VALUE
When Python tries to compile the function, it takes const as a global variable since it has not been defined in the function. The variable A is considered to be an attribute of the global variable const . In fact, const. A=1 is the same as const.A=1 since Python ignores the whitespace between the dot operator and the name of the attribute. Of course, we really do not have a global variable named const in the source code. But Python will not check that at compile time. Only during execution will it turn out that the name const is not defined. So our source code will be accepted during compilation. But we need to change its bytecode before executing the code object of this function. We first need to create a function to change the bytecode:
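The gist with this function is not included in this export, so based on the description that follows, here is a rough, runnable sketch. It assumes each instruction is a [opname, argument] pair (the instruction-list format used earlier in the article) and uses list positions in place of byte offsets; the exact exception messages are illustrative.

class ConstError(Exception):
    pass


def add_const(instructions):
    constants = []   # names declared as const
    indices = []     # position of the initial assignment of each constant
    # First pass: find const declarations and rewrite them
    for i, inst in enumerate(instructions):
        if inst == ['LOAD_GLOBAL', 'const']:
            name = instructions[i + 1][1]            # e.g. ['STORE_ATTR', 'A'] -> 'A'
            if name in constants:
                raise ConstError("'%s' is declared as const twice!" % name)
            constants.append(name)
            indices.append(i + 1)                    # remember the initial assignment
            instructions[i] = ['NOP', 0]             # drop the LOAD_GLOBAL const
            instructions[i + 1] = ['STORE_FAST', name]
    # Second pass: any other store to a constant name is a reassignment
    for i, inst in enumerate(instructions):
        if inst[0] in ('STORE_FAST', 'STORE_GLOBAL') and inst[1] in constants:
            if i not in indices:
                raise ConstError("'%s' is a constant and cannot be reassigned!" % inst[1])
    return instructions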
This function receives the list of bytecode instructions generated by assemble_to_list as its argument. It has two lists named constants and indices which store the name of the variables declared as const and the offset at which they have been assigned for the first time. The first loop searches the list of bytecode instructions and finds all the ['LOAD_GLOBAL', 'const'] instructions. The name of the variable should be in the next instruction. In this example the next instruction is ['STORE_ATTR', 'A'] , and the name is A . This name and the offset of this instruction is stored in constants and indices . Now we need to get rid of the global variable const and its attribute and create a local variable named A instead. The instruction
NOP
is a 'Do nothing' instruction. When the interpreter reaches NOP , it will ignore it. We cannot simply delete the opcode from the list of instructions since deleting one instruction reduces the offset of all the following instructions. Now if there are some jumps in the bytecode, their target offset should change too. So it is much easier to simply replace the unwanted instruction with NOP . Now we replace ['LOAD_GLOBAL', 'const'] with NOP and then replace ['STORE_ATTR', 'A'] with ['STORE_FAST', 'A'] . The final bytecode looks like this:
2 0 LOAD_CONST 1 (5)
2 NOP
4 STORE_FAST 1 (A)
3 6 LOAD_FAST 1 (A)
8 LOAD_CONST 2 (1)
10 BINARY_ADD
12 STORE_FAST 1 (A)
4 14 LOAD_FAST 1 (A)
16 LOAD_FAST 0 (x)
18 BINARY_MULTIPLY
20 RETURN_VALUE
Now line 2 is the equivalent of A=5 in the source code, and executing this bytecode does not cause any run-time error. The loop also checks that the same variable is not declared as const twice. So if a variable declared as const already exists in the constants list, it will raise a custom exception. Now the only remaining thing is to make sure that the const variables have not been reassigned.
The second loop searches the list of bytecode instructions again to find any reassignment of the constant variables. Any instruction like ['STORE_GLOBAL', 'A'] or ['STORE_FAST', 'A'] means that a reassignment is in the source code, so it will raise a custom exception to warn the user. The offset of the initial assignment of a const is required to make sure that the initial assignment is not considered as a reassignment.
As mentioned before, the bytecode should be changed before executing the code. So the function add_const needs to be called before calling the function f . For this reason, we place it inside a decorator. The decorator function const receives the target function f as its argument. It will first change the bytecode of f using add_const and then create a new code object with the modified bytecode. This code object will be assigned to f . | https://towardsdatascience.com/understanding-python-bytecode-e7edaae8734d | ['Reza Bagheri'] | 2020-03-05 20:07:09.647000+00:00 | ['Python', 'Virtual Machine', 'Bytecode', 'Disassembly', 'Metaprogramming'] |
Python HOW: Starting with Docker | Photo by Kaique Rocha from Pexels
Docker helps you to package up your project with all of the dependencies needed to run it from anywhere “Build, share and run any application, anywhere!”
Is it a Docker image or a container? 😕
Let's clear this up straight away. You first build a Docker image by reading a set of instructions from a Dockerfile . Once you run this image, it's called a container .
Docker Engine 🚒
To do anything with Docker, you first need to install the Docker Engine. Docker Engine is available on a variety of Linux platforms, Mac and Windows through Docker Desktop, Windows Server, and as a static binary installation. You can have a look here and choose the one that works for your OS
To check that you have the engine installed, open a terminal and run docker version
Project Structure 📂
Assume your project has the following structure:
app.py is a dummy script that only prints out the pandas version when executed:
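The snippet itself is not embedded in this export; a script matching that description would simply be:

import pandas as pd

print(pd.__version__)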
requirements.txt has all the required dependencies:
To learn how to create requirements.txt using pipenv check my article 👉
Creating a Dockerfile 📁
Create a file and name it Dockerfile (without an extension) and add the following instructions to it:
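The original snippet is embedded as a gist that is not included here; reconstructed from the line-by-line explanation below, the Dockerfile looks roughly like this (treat the exact paths as illustrative):

FROM python:3.7-slim

COPY requirements.txt /app/
WORKDIR /app

RUN pip install --upgrade pip && \
    pip install -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]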
Let’s explain these instructions one by one
Line 1: a valid Dockerfile must start with a FROM instruction that sets the parent image. Parent images are hosted on Docker Hub, so you don’t need to store them locally. Parent images are based on an OS (e.g. Debian, Alpine Linux, etc.). Each subsequent instruction in the Dockerfile modifies this parent image
In our case, the parent image is the slim variant of the python 3.7 image. This image is recommended to start with as it contains the minimal packages needed to run Python so it’s small. There are few other variants that you can choose from. Have a look here for more description
Line 3: COPY the requirements.txt into the app folder in the parent image (the app folder will be created first, as it doesn't exist)
Line 4: set the app folder as the working directory for any instructions that follow (i.e. we can now simply refer to it as . )
Lines 6&7: RUN 2 pip commands, the first to upgrade pip in the parent image and the second to install all the dependencies found in requirements.txt in the parent image ( && to separate commands and \ to use a new line for clarity). We could’ve used 2 RUN instructions but this will create 2 layers instead of one (later in Tips)
Line 9: COPY the app.py to the app folder in the parent image
Line 11: define what command gets executed when running the image (i.e. python app.py )
Tips and Best Practices 🙇
A Docker image consists of read-only layers (each of which consists of the files generated from running an instruction*). The layers are stacked and each one is a delta of the changes from the previous layer
You should always try to:
Keep the number of layers in your image to a minimum as much as possible. This can be done by using one instruction to execute multiple commands separated with && (*only the instructions RUN , COPY , and ADD create layers; other instructions create temporary intermediate images, and do not increase the size)
Start by building the layers that you know won't be changed during development, such as copying the requirements.txt and installing the dependencies. This is useful because every time a layer instruction changes, that cached layer is invalidated along with every layer after it, so they have to be rebuilt. This is really time-saving! So always keep your scripts under development in the last layer
You can run app.py with any number of arguments by using CMD["python", "app.py", "param1", "param2", …]
For much more in depth best practices check here
Build Dockerfile as a local Docker image
Your project structure should now look like:
To build your Docker image, open a terminal, navigate to the project folder, and then run the build command:
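The command itself is not shown in this export; it is the standard build command, using the image name and tag from this walkthrough:

docker build -t starting-with-docker:v.1.0 .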
Don’t forget the . at the end
This will create a local image with a name <image-name> (e.g. starting-with-docker ) and a tag <tag> (e.g. v.1.0 ). If you don’t add a tag (it’s optional), the tag will be latest by default. However, it’s advised to always give a tag. A common practice is to tag with your image version
To check that the image was built, you can list all local images and their IDs by running:
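That is the standard listing command (not shown in this export):

docker images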
You can see 2 images in fact, one is the parent image python:3.7-slim, and the other is starting-with-docker
Run local Docker image 🏃
Now that you've built a local image, you can run it from the terminal using the run command:
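The snippet is missing here; given the flag described just below, the command would be:

docker run --rm starting-with-docker:v.1.0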
The --rm flag automatically cleans up the container and removes the file system when the container exits
You’ve built and ran your first Docker image/container!
Push local image to a Container Registry 👆
So you’ve built a local image, and you can run it locally. But how can you share it with others? or even use it yourself from another machine? To be able to do that, you need to host it as a repository on a remote Container Registry
Docker offers a Container Registry at Docker Hub (similar to GitHub but for hosting Docker images instead). However, you will only get 1 free private repository, so unless you don’t mind the rest of your images being public (i.e. anyone can access them), then you need to pay a premium for private repositories (Microsoft Azure, Google Cloud, and AWS all offer private Registries for a premium as well)
Docker Hub Repository Plan
For now, we will use the free private Repository offered by Docker Hub. Head to Docker Hub and create an account (if you haven’t already when downloading the Docker Engine). Navigate to repositories > Create Repository > and name the repository starting-with-docker and set the Visibility to Private > Create
You will then see the Docker command to push a local image to this repository, mine is:
To push your image however, it needs to be first associated with your Docker Hub repository. To do that, you tag the same local image with a new name using the tag command:
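Based on the description that follows, the tag command is:

docker tag starting-with-docker:v.1.0 drgabrielharris/starting-with-docker:v.1.0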
This tags the image starting-with-docker:v.1.0 into the starting-with-docker repository in drgabrielharris container registry with the v.1.0 tag
If you list all your local images, you can now see an extra image with the same ID as the source image
Your local images are exactly the same but with different names
Finally, you push to the repository by using the push command:
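The command is not shown in this export; for the repository created in this walkthrough it would be:

docker push drgabrielharris/starting-with-docker:v.1.0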
If you look at your Docker Hub, you will see your image there with the command to pull it
You can push different versions of your image (using different tags) and they will all be here
Run your remote image
You can now run the image directly from the Registry from anywhere using the run command:
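For example, using the repository name from this walkthrough (the snippet itself is not included here):

docker run --rm drgabrielharris/starting-with-docker:v.1.0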
You have to be logged into the docker engine with your account
If you would rather have a local copy of the remote image in a new machine, you can pull the remote image:
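For example (again using the repository name from this walkthrough):

docker pull drgabrielharris/starting-with-docker:v.1.0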
You have to be logged into the docker engine with your account
To allow others pull/push access to your repository, you need to add them as collaborators. Navigate to your repository > Collaborators > and add users by their Docker IDs (collaborators can’t delete the repository or change its status from private to public). If you want to assign more granular collaborator rights (“Read”, “Write”, or “Admin”), you need to use teams & organizations
What is next?
You can read more about advanced uses of Docker to deploy:
Azure Web Apps (coming soon)
Azure Function Apps (coming soon)
Dash Web Apps (coming soon)
Prodigy annotation tool from Explosion AI (coming soon)
Happy coding! | https://medium.com/swlh/python-how-starting-with-docker-d2be73d9ae92 | ['Gabriel Harris Ph.D.'] | 2020-09-24 11:53:34.622000+00:00 | ['Machine Learning', 'Python', 'Docker', 'Data Science'] |
Breakfast Update Week 34 | A new friday update with tech and media news from the last week. Made for clients and colleagues, but everyone can read along. We talk about it at breakfast. | https://medium.com/breakfast-update/breakfast-update-week-34-29f2a742cdb | ['Morten Løwenstein'] | 2017-08-25 06:49:56.984000+00:00 | ['Augmented Reality', 'Digital Marketing', 'Amazon', 'Instagram', 'Social Media Marketing'] |
Designing for diversity | Inclusive design is often confused with simply designing for people with disabilities. However, true inclusive design is much more than this — it is about designing for as diverse a range of people possible. It is a philosophy that encourages us to consider how size, shape, age, gender, sexuality, ethnicity, education levels, income, spoken languages, culture & customs, and even diets shape the way we interact with the world. More importantly, it is about designing products and services in light of this understanding.
A list of some of the factors that should be considered for inclusive design — size & shape, gender & sexuality, ethnicity, income & social class, education & training, languages &communication, culture & customs, diets, and age
Designing for Mr Average
Not so long ago, the term ‘inclusive design’ did not exist. There was also a view among many that one-size may fit all, and designing for an ‘average man’ was good enough.
Today, we are still surrounded by products that only work well for a limited range of people. Some are hard to interact with if your hands are small, or you have limited strength or dexterity. Others don’t fit because of the shape of your nose or torso, others are biased toward those who speak a certain language or follow certain customs.
It is not always completely clear why products are designed to exclude people. Often, it’s a perceived efficiency-thoroughness trade off — a variant of the 80:20 rule, that crudely suggests that you can get it right for 80% of the people for 20% of the effort, while it takes a further 80% of the effort to get it right for the remaining 20%. However, much of the time it is simply that the designers haven’t thought enough about the diversity of the people who wish to interact with the product that they are designing, often because it’s not in the culture of the company.
How big a problem is this?
It is also often the case that the number of excluded people is dramatically underestimated. Capabilities are frequently thought of in binary terms. For example, you can either see or you can’t, or you can hear or you can’t. In reality, our sensory, cognitive and physical capabilities all tend to sit on a long spectrum. Some on this spectrum are excluded altogether, while a much greater number are inconvenienced. To complicate things further, these spectrums are rarely linear; in many cases, they are multi-dimensional.
Taking sight as just one example, the range of capabilities is incredibly complicated. Some people can see perfectly well without any form of correction, others require spectacles to see things that are far away or very close, others take longer to shift focus, or perhaps struggle in low light, some are unable to perceive colour, while others have a limited field of view (tunnel vision, or only peripheral vision), or monocular vision. The remaining senses are just the same, whether it is hearing touch, smell or taste — some people may have no sensation at all, however, a much larger group have different capabilities on multi-dimensional spectrums.
Physical capabilities are very similar. These include the kind of capabilities that we might naturally think about when we consider inclusive design, such as mobility, strength, flexibility, dexterity and reach. Just like our sensory capabilities, they each lie on a spectrum.
Our cognitive abilities also lie on a spectrum, and it’s not quite as simple as a link to IQ. Some people may have exceptional memories, problem-solving skills, communication abilities, recognition or attention. However, our capability in one aspect is rarely an accurate predictor for another.
To complicate things further, our capabilities are rarely fixed. As we become tired or fatigued, our capabilities may drop off. Likewise, things change as we age or as a result of events in our lives, perhaps some form of trauma.
Three aspects of usability — sensory (sight, hearing, touch, smell & taste), cognitive (memory, problem-solving, communication, recognition & attention), and physical (reach, dexterity, flexibility, strength, & mobility)
Downhill from forty
The old adage ‘it’s downhill from forty’ is not strictly true, in terms of our capabilities, it is actually more like mid-thirties! In early childhood, our sensory, cognitive, and physical capabilities improve very quickly. We master our senses at a relatively young age, while it typically takes much longer until we reach our peak in terms of physical and cognitive capabilities (in our early thirties). However, by our mid-thirties we are broadly at our peak on all of these, from then on we tend to start to see a general degradation as we age. By the time we reach retirement age, strength may be 50% of its peak, we also tend to shrink (by around 5%), while our sensory abilities also tend to deteriorate. Eye reaction time doubles, we require around twice as much light to read, we lose high-frequency hearing, and our sense of taste and smell become much less sensitive — often resulting in older people using much more salt, pepper and flavourings in cooking.
However, we are increasingly remaining in work for much longer. As such, the role of inclusive design is becoming more important if we wish to remain an efficient and effective part of the workforce.
Why design for a more diverse market?
The ethical case for inclusive design is easy to understand. Most of us want to live in a world where we all have an equal chance of engaging with society, participating in different activities, living independently. With an ageing population in most parts of the world, it also makes a good case at a societal level. But it's a philosophy that also makes great business sense, and one that is embraced by some of the world's leading companies to develop a larger customer base, improve customer satisfaction, reduce returns & servicing, increase brand reputation, and improve staff morale.
Perhaps the most credible business case is designing products that a greater number of people choose to buy and remain happy with — largely because of a greater fit with their capabilities. When thinking about capabilities it’s useful to think of them on three levels:
1. Permanent (e.g. having one arm)
2. Temporary (e.g. an arm injury)
3. Situational (e.g. holding a small child)
The market for people with one arm is relatively small, however, a product that can be used by people carrying a small child (or using one of their arms for another task) is much larger. As such, designing for the smaller market of permanent exclusions is often a very effective way of developing products that make the lives of a much wider group more flexible, efficient and enjoyable.
Doing it…
Given the range of human capabilities that a designer has to consider, it is perhaps possible to understand why it is an area that is often overlooked. However, inclusive design does not have to be too taxing, particularly when it is embedded as a natural part of the design process.
The relationship between understand, define and evaluate
The first step on the path to designing more inclusive products is to understand where the current challenges are. The diagrams above can be useful prompts for this — firstly by thinking about the demands that the device places on people at a sensory, cognitive and physical level, and secondly by considering which aspects of human diversity may influence the interaction with the device (these are prompts rather than an exhaustive list).
The second step is to make informed decisions about the product specification. This includes balancing the needs of inclusion with other measures of system performance (such as efficiency, efficacy, safety, flexibility, and satisfaction). At the early stages of the design, the specification should always be seen as a ‘living document’ that should be refined and updated as the design matures.
The task does not end here however, the remaining step is to continually test and evaluate the design throughout the design process. In reality, this means testing against the specification (and relevant standards) but also testing with as diverse a range of users as possible. While much can be done based on methods and tools, there really is no substitute for testing with people.
Understanding how to make the product better is, of course, just one part of the challenge. Getting more inclusive products to market relies on the buy-in of the wider product team — a commitment to design better products that are appreciated and valued by a diverse range of people and, by doing so, achieve better commercial success. | https://uxdesign.cc/designing-for-diversity-13ce6780690a | ['Dan Jenkins'] | 2019-06-17 07:29:12.982000+00:00 | ['Inclusive', 'Design', 'Accessibility', 'Inclusive Design', 'Diversity'] |
Java Tips — A homemade linked list | The node of the list
First of all we need to define the collection as a set of nodes, and every node must have:
Value : the information content that every node carries;
: the information content that every node carry; Pointer: the pointer to the next element into the list; with this link every node knows the following node;
The structure of a Node is:
Node class
The Node class is provided with:
A class variable to maintain the value and one to maintain the pointer to the following Node
and one to maintain the to the following Node A default constructor that initializes to null the value and the pointer;
that initializes to null the value and the pointer; A value constructor that initializes the value of the node with the object received in input and the pointer to null;
that initializes the value of the node with the object received in input and the pointer to null; Getter and setter methods to manipulate the node.
The linked list will be a sequence of nodes and every node will point by reference to the following node like the next picture:
Linked list structure
The LinkedList class is provided with:
A class variable to maintain the reference to the first node of the list and one to maintain the reference to the last node;
node of the list and one to maintain the reference to the node; A default constructor that initializes to null the value and the pointer;
that initializes to null the value and the pointer; Getter methods to retrieve the first and the last node of the list;
methods to retrieve the first and the last node of the list; Business method useful to manipulate the list.
The structure of the LinkedList class is:
Extract of LinkedList class
The core of the Linked List has two class variables: one to maintain the reference of the head of the list (first) and one to maintain the tail of the list (last). When the list is created both are null and need to be updated for every manipulation of the list.
Moreover the Linked List core class needs to implement the Iterable interface to be iterable. | https://medium.com/quick-code/java-tips-a-homemade-linked-list-9adae0906332 | ['Marco Domenico Marino'] | 2019-09-22 05:34:52.286000+00:00 | ['Software Development', 'Data Structures', 'Java', 'Programming', 'Programming Languages']
How to Break Up With Your Product | Working on a digital product is like being in a relationship. You go through an experience of growth, connection, and learning. You constantly think about it, day in and day out. It can be a frustrating and rewarding experience.
For some people, they’ve devoted years to building, improving, and maintaining a product. They intimately know it’s history as well as its strengths and shortcomings. But like most relationships, things change and sometimes you just need to move on. The product just isn’t serving the right needs for its users anymore (or your organization).
What do you do when you know the relationship is over?
The Breakup Letter
On a recent project, our team spent a lot of time aligning around change. At the core of this change was a critical product that we all recognized needed to be retired or completely overhauled in some way. Letting go is hard though, especially when you’re in a relationship with a product that you’ve devoted countless time and energy into. As one last exercise before jumping into a future vision session I facilitated a relationship ending milestone...
We wrote breakup letters to the product.
It sounds silly (yes, people were very skeptical), but this kind of closure experience was both cathartic and illuminating. The letters allowed us to project our frustrations, disappointments, unmet expectations, challenges as well as some fond memories about the work. Some letters were short and to the point, others were drawn out and emotional, some were very funny. Mostly they were a mix of all of the above. All of them packed with meaning.
This format works because it has a commonly understood structure, but finds itself in an unfamiliar context. It takes a light-hearted approach to what could be a very serious discussion. Participants are able to remove themselves from having a direct conversation and use the letter to translate deeper emotions in a less vulnerable way. This can be an emotional exercise.
After reading the letters out loud there was a collective sense of letting go. We were able to voice the practices and conditions that just weren’t serving us anymore. No comments or reactions from the group were necessary. Each letter represented a truth for that person that couldn’t be refuted and needed no discussion. There was a clear acknowledgment of the emotional labour that can go into this work.
This kind of passion needs to be acknowledged and recognized in any change initiative.
A closure experience like this isn’t just about catharsis. It also helps a team identify the patterns and behaviours that we don’t want to replicate in the future.
By learning what didn’t work, what hasn’t served us, we are able to move forward and identify what we want and need in our new relationship.
Activity Steps
This activity is suitable for any kind of closure experience where reflection or letting go is desired. Activities like a project ending, retiring a product, the end of a workshop or conference. Break up letters don’t have to be antagonizing! They can be mutually desired and amicable. | https://medium.com/swlh/how-to-break-up-with-your-product-cbc96086a0c0 | ['Davis Levine'] | 2020-11-07 06:08:15.906000+00:00 | ['Product Design', 'Workshop', 'Design', 'Product Development', 'Facilitation'] |
Migrating to Python 3: The HealthifyMe Experience. (Part 2) | We moved all the fishes 😀
Hello and welcome back. This is part 2, where we will explore how HealthifyMe moved to Python 3 without downtime while development was ongoing. If you missed part 1, it explored why you should move to Python 3, rare-case differences between Python 2 and Python 3, and compatible solutions for those.
Introduction
We have a backend team of approximately 12–15 developers. This is our base project, with 7–10 build releases daily containing bug fixes, improvements, security fixes, new feature development, etc. Our main challenge was not to stop the current development process. We had to make sure our project was compatible with Python 3.X without breaking compatibility with Python 2.X.
Migration is not an easy task, it’s not like you change your configuration file, settings files and it will work as expected. It’s a continuous process where we have to plan, analyze, test and iterate through code base each time we migrate. We will explain each of these details here.
Migration Strategy
Migration Strategy
Plan/Research
If your application is small and can be refactored quickly, just start fresh and re-write the code using Python 3.
This was not the case for us. Our application is big, works at a good scale, and developers are working on different parts of the code base, so you need a functioning application at all times. We had to plan first how we were going to migrate such a large project. We listed all the components, scripts (Python scripts for staging and prod), environments, and internal and external apps where we needed migration.
We spent the time to understand/research on —
Analyze
Once we had spent time on research, we found some existing automated tools that we used for this migration.
caniusepython3: This package takes in a set of dependencies and then figures out which of them are holding you up from porting to Python 3.
2to3: Automated Python 2 to 3 code translation.
six: Package intended to support codebases that work on both Python 2 and 3 without modification.
Python-Modernize: This library is a very thin wrapper around lib2to3 to utilize it to make Python 2 code more modern with the intention of eventually porting it over to Python 3
to utilize it to make Python 2 code more modern with the intention of eventually porting it over to Python 3 python-future: It allows you to use a single, clean Python 3.x-compatible codebase to support both Python 2 and Python 3 with minimal overhead.
Apart from this, we created our own wrapper for making code compatible with Python 2 and Python 3. We explained this in part 1, where we mention how some features work differently in Python 2 and Python 3 and what the compatible solutions are for such cases.
Migration Process
Isolated Git Branch:- We followed the basic software practice of keeping each new change in a different git branch. For better tracking of each change, we created a new git branch with the prefix py3- .
__future__ imports:- We added the required future imports to each Python file to make the code compatible with both Python versions. We added from __future__ import absolute_import, division, print_function, unicode_literals based on what the code in each Python file needed.
Compatible third-party packages updated:- We have more than 180 third-party package dependencies in our project. There were packages that are compatible with both versions, like django(1.11) , simplejson(3.8.1), Requests(2.22), and some packages that are not compatible, like redis, django-cacheops, django-fake-model etc. We updated the packages that were not compatible with both versions and created a separate requirements file for Python 3.
Six package:- The conclusion was to use six, which is a library to make it easy to build a codebase that is valid in both in Python 2 and 3. We used six package functionalities like six.iteritems, six.moves.range, six.moves.urllib.parse.urlencode, six.moves.zip, six.with_metaclass,six.text_type, six.string_types, six.moves.urllib.request, six.viewkeys, six.StringIO,six.moves.html_parser etc.
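For illustration (this snippet is mine, not taken from the HealthifyMe codebase), the kind of change six enables looks like this:

import six

d = {'a': 1, 'b': 2}
for key, value in six.iteritems(d):      # dict.iteritems() on Py2, dict.items() on Py3
    print(key, value)

for i in six.moves.range(5):             # xrange on Py2, range on Py3
    print(i)

print(isinstance(u'text', six.string_types))  # basestring on Py2, str on Py3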
Custom compatible wrapper:- As we mention in part 1 of the blog, there were still a lot of places where we needed to write our own compatible solutions that work in both Python versions. We created two compatibility layers, one for the unit test cases and the other for our main codebase, and used the newly created methods in the existing codebase. Some of the methods look like this:
def isPY3():
    """Check the current running version is python 3 or not."""
    return True if _PY == 3 else False


def base64ify(bytes_or_str):
    if _PY == 3 and isinstance(bytes_or_str, str):
        input_bytes = bytes_or_str.encode('utf8')
    else:
        input_bytes = bytes_or_str
    try:
        output_bytes = base64.b64encode(input_bytes)
    except (UnicodeEncodeError, TypeError):
        # This happens when the input message has
        # non-ascii encodable characters in an unicode string
        # `'`(ascii encodable) vs `’`(non-ascii encodable)
        # In this case, we first need to encode it to utf-8
        # and then do the base64 encoding
        output_bytes = base64.b64encode(input_bytes.encode('utf-8'))
    if _PY == 3:
        return output_bytes.decode('ascii')
    else:
        return output_bytes


def py2min(input_list):
    """Get the minimum item from list."""
    if not input_list:
        raise ValueError('List should not be empty')
    return min(input_list) if None not in input_list else None


def py2_round(x, d=0):
    """Round same as PY2 in PY3."""
    p = 10 ** d
    if x >= 0:
        return float(math.floor((x * p) + 0.5)) / p
    else:
        return float(math.ceil((x * p) - 0.5)) / p


def hash_512_create(value):
    """Hash obj creation python 2 and python 3 compatibility."""
    if isPY3():
        if isinstance(value, str):
            value = value.encode('utf-8')
    return hashlib.sha512(value)


def django_smart_bytes(value):
    """Django smart_bytes always returns str for python 2 and python 3 compatibility."""
    if isPY3():
        return smart_bytes(value).decode('utf-8')
    return smart_bytes(value)
We also created a wrapper for the unit test cases, because Python 2's assertItemsEqual was renamed to assertCountEqual in Python 3, string-to-bytes comparisons fail, dict ordering differs between the two versions, and the mock library is included in unittest (as unittest.mock ) in Python 3.
Unittest Case:- At HealthifyMe we always try to follow the best engineering practices. There can be a separate debate on why we need unit test cases, but one thing we learned with this migration is that if you have unit tests with good coverage then you will save 30–40% of your time in migration. At HealthifyMe we have test coverage of more than 90%, because of which we felt comfortable changing some parts of the code, since this wouldn't cause lots of bugs on production.
CICD Pipeline:- We started running two CICD pipelines, one for Python 2 and one for Python 3. Here we were trying to make sure that, once an app was migrated, compatible code would not be changed in a non-compatible way.
Code changes/Developers learning:- We have more than 150 apps in our monolith Django project. We started picking each app and making its code compatible with both Python versions. Once an app was compatible we added it to our CICD pipeline. This way we made sure that, for a migrated app, the CICD pipeline takes care of code compatibility whenever developers make a code change, and if any part of the code breaks in either version then it is the developer's responsibility to make it compatible. In this way, we made sure that all the migrated apps stay compatible as new code lands and that developers become familiar with how to write compatible code. App by app, we migrated the whole Django project. | https://medium.com/healthify-tech/migrating-to-python-3-the-healthifyme-experience-part-2-97a53d43e37d | ['Manmohan Sharma'] | 2020-12-16 05:17:48.828000+00:00 | ['Migration', 'Python', 'Python3', 'Django']
Pandas GroupBy — take the most from your data | Taking GroupBy in project
For the demonstration, I will be using COVID-19 dataset (since it is the hottest topic nowadays), you can find it here: https://www.kaggle.com/imdevskp/corona-virus-report.
Take a better look at the data :
import pandas as pd
data=pd.read_csv('covid_19_clean_complete.csv')
As you can see, the dataset contains the following data :
Province/State
Country/Region
Latitude and Longitude of Province/State
Date — data is updated each day, and it is tracking the number of confirmed, Death, and Recovered on daily basis
Confirmed, Deaths and Recovered cases for Province/State by day.
The main problem here — analyzing documented cases by Country. You probably ask why? Looking at this data, you can notice that we have recorded instances daily BUT by Province/State. Countries that don’t have provinces or states, like Croatia, for them, we have one new row each day. But let’s take a look at Australia, there is a daily record for each registered province, on that note, for countries with documented province/state variable, we have multiple rows inserted each day.
I hope you get the gist of it and see how easily someone can make a mistake when analyzing this dataset.
Now, we can see groupby in natural light! The plan is to create a new dataset with the following variables :
Country/Region
Confirmed, Deaths and Recovered
So, let’s do this:
#grouping data
data_by_country=data.groupby(['Country/Region'])[['Confirmed','Deaths','Recovered']].sum() #reset index
data_by_country=data_by_country.reset_index() #print out sorted data from max to min by Confirmed cases
data_by_country.sort_values(by='Confirmed',ascending=False)
Recorded cases by Country
This is great! We can now see the sum of all recorded cases by each Country sorted from a maximum number of cases to a minimum. In a previous example, we only took one variable (Number) and summed it by letter, but now we took three variables and summed it by Country. Please be aware, always use the list of variables that you want to transform!
I know what the following question is.
Can we have multiple variables as indexes in this function? Yes, we can!
Let’s group data by Country and Date variables. Basically, for each Country, we want to have recorded cases by Date. Here is how we can do this :
data_by_country_date=data.groupby(['Country/Region','Date'])[['Confirmed','Deaths','Recovered']].sum() data_by_country_date=data_by_country_date.reset_index() data_by_country_date
Data grouped by Country and Date
This is amazing! We have all recorded cases daily for each Country. With this dataset, we can easily create beautiful visualizations and comparisons between countries.
If we want to filter out only data where confirmed cases exist for each Country, just simply filter with :
data_by_country_date[data_by_country_date.Confirmed>0]
You can see that for the country Afghanistan the start date is 24-02-2020, not 22-02-2020 as above.
Now you have it: explore this pandas function and use it in your projects when you have to transform your dataset. The following code is a demonstration of how I used it in an analysis of COVID-19 data.
I know this isn't anything fancy, but I just want to show you how it is used in real-life coding.
The following function flow:
Takes the original dataset and name of the Country you want to present — if Country isn’t entered, then takes data globally — and returns recorded cases from the first case until today.
def get_overall_numbers(df, country='all'):
    df['Date'] = df.Date.astype('datetime64')  # set Date to datetime
    if country == 'all':  # all - takes data globally
        subset = df.groupby('Date')[['Confirmed', 'Deaths', 'Recovered']].sum()
        subset = subset.reset_index()
    else:
        sub = df[df['Country/Region'] == country]  # take data for given country
        subset = sub.groupby(['Country/Region', 'Date'])[['Confirmed', 'Deaths', 'Recovered']].sum()
        subset = subset.reset_index()
    return subset[subset.Confirmed > 0]
Calling the function :
Globally — Country =’all.’
get_overall_numbers(data)
Globally documented cases by Date
2. Country=’Croatia’
get_overall_numbers(data,'Croatia')
Documented cases by Date for Croatia
If you are not a numbers person, let's take it to a plot:
# Code is now really simple since we have the function
# importing libraries
import seaborn as sns
import matplotlib.pyplot as plt

sns.lineplot(x=get_overall_numbers(data).Date,
             y=get_overall_numbers(data).Confirmed/1000000)
Conclusion
I hope this is helpful to all of you out there in the python world, new guys, or someone who just wants to learn more. Please, let me know if you use it in your projects. :)
I wanted to write this article as inspiration from my college colleagues when they mentioned the following: they were looking for simple explanations about how they can group the data and how to use it after transformation.
If someone has similar or the same question, I genuinely hope that you found your answer in this article.
Thank you for reading. Stay tuned.
Bye. | https://medium.com/analytics-vidhya/pandas-groupby-take-the-most-from-your-data-1303a4d41389 | ['Hana Šturlan'] | 2020-05-10 09:35:48.956000+00:00 | ['Pandas', 'Transformation', 'Python', 'Covid 19', 'Analytics'] |
‘The Talented Miss Farwell’ Is 2020’s Most Memorable Character | She thought about Hans Hartung, his deceptive technique — how the slashed and curved lines across the canvas appeared haphazard but were in fact painstakingly considered, arranged. She didn’t linger long here, though, because thinking about art was the least interesting way to experience it. The difference between reading the recipe and spooning in a bite of trembling lemon soufflé.
Becky Farwell lives two lives.
In one, she’s Becky, a hardworking, sharp, brilliant accountant who manages the finances of her small Illinois town. Amid financial crises, political upheaval, and her own private tragedy, Becky keeps the town afloat, to the bemusement and gratitude of its residents. Becky, always capable of shifting some money around to fund a desperately needed project. Becky, with her odd little habits, like insisting on picking up the office mail every day from the post office herself. Becky, who lives alone in the house where her single father raised her.
In another, she’s Reba, a ruthless art collector who will stop at nothing to acquire the pieces she wants — that she needs. Whom schemers in the art world will throw to the wolves and who, in turn, will throw others down into the pit with little more than a backwards glance. Reba won’t let a dead artist’s reticent estate stand between her and the artist’s most coveted works. Reba won’t allow the little issue of the global art market collapse or a few million dollars in debt get in the way of her ever-expanding collection of world-class masterpieces.
Hans Hartung, T1981-E21
Given that the title of Emily Gray Tedrowe’s third novel, The Talented Miss Farwell, is a clear homage to Patricia Highsmith’s famous Ripley series, it would be fair to assume that Becky is a psychopath, a merciless narcissist without a conscience whose only earthly attachment is her obsession with what she calls her “activity,” an elaborate, decades-long con to build her art collection at the devastating expense of the town she was raised in and works for. But she is far from a psychopath. Her friendships are few, and like many people with addictions, her “activity” takes precedence over her relationships out of increasing desperation. What makes Becky such a fascinating, brilliant character is that she is, in fact, full of heart. She loves the people in her life. She loves Pierson, Illinois. She wants to give, and goes to great lengths (and makes tremendous sacrifices) to do so. But it becomes clear through the years that her “activity,” her con, will consume her life if she doesn’t pump the breaks. The question of the book is: will she stop before it’s too late?
My relationship to Becky is the best kind to have with a book character, which is to say, it’s complicated. Her financial acumen and unwillingness to allow sexism, classism, and elitism stand in her way make it impossible not to root for her, but at its heart, what she’s doing is criminal, and it’s not a victimless crime, either.
With Becky Farwell, Tedrowe has created one of the year’s most fascinating, complex, nuanced characters. I loved her, and I absolutely loathed her. Told in the close third person, the novel’s language is straightforward and unadorned, allowing Becky’s character development — and the development of her crime — take center stage. Every scene, every detail, feels necessary, which is just how Becky would want it.
The Talented Miss Farwell comes out September 29.
[FYI: I use affiliate links, so when you click my links and make a purchase, I get a cut. Cool!] | https://angelalashbrook.medium.com/the-talented-miss-farwell-is-2020-s-most-memorable-character-e3017f169818 | ['Angela Lashbrook'] | 2020-09-16 14:27:36.213000+00:00 | ['Book Review', 'Books', 'Reading', 'Fiction', 'Culture'] |
Amazon Chime SDK Whiteboard with Data Messages for Real-time Signaling | This article is also available here.(Japanese)
https://cloud.flect.co.jp/entry/2020/06/01/115652
(NEW ARTICLE 20th/Oct./2020):
Faceswap and Virtual Background on your brower
In the last article, I explained how to create a virtual background for Amazon Chime SDK.
I’d like to continue talking about the Amazon Chime SDK in this post.
Well, did you know that Amazon announced a new feature addition to their Amazon Chime SDK the other day?
This feature allows participants in a conference to exchange data messages by using the data communication channel used by Amazon Chime. As mentioned in the announcement, this allows us to easily implement the whiteboards and emojis among participants in the conference room. And it can also be used to control the state of the conference room, such as forcing participants to mute.
So I’d like to show you how to make a whiteboard using this feature.
This is the behavior of the whiteboard I made this time.
Amazon Chime and Signaling
This additional features of the Amazon Chime SDK uses the signaling communication already existing in Amazon Chime. Amazon Chime’s video conferencing is achieved using a technology called WebRTC, and in WebRTC, signaling communication is used to control the session.
Specifically, WebRTC is used for P2P communication between browsers, and the signaling communication is used to identify the destination of the other party or to exchange keys for cryptographic communication in order to start this communication.
And, even though it is called P2P communication, it is necessary to go through a relay server called TURN when communicating over a firewall. The exchange of information about these routes is also done through signaling communication.
If you want to know more about WebRTC and its relationship with signaling, please refer to this page.
Amazon Chime provides managed relay servers and signaling channels to make it easy to start video conferencing in a variety of network environments. The new feature leverages the managed communication path for this signaling to allow arbitrary data messages to be exchanged. So developers can easily add things like shared whiteboards to their video conferencing systems without having to provide a server for messaging.
New API Overview
The three new methods offered are as follows
This function sends data messages with “Topic”.
First of all, each client registers a callback function that defines the process for each Topic. Then, when the sender sends a data message with Topic, the client receives the data message and calls the callback function corresponding to Topic. We don’t know the details of the internal processing, especially the data flow, but it’s probably running on a general publish/subscribe model.
After using it this time, I found it to be a very easy to use feature.
Note that this function may not be able to receive the data message even if the Publisher of the data message has subscribed to the topic of the data message, so you may need to be careful. I think the advantage of the publish/subscribe model is that it allows publishers and subscribers to completely ignore each other’s relationships, so the fact that publishers can’t receive data when they’re in the same software (session) was just a little off.
Shared Whiteboard
Here’s the general process flow of the shared whiteboard we created.
Detects mouse events/touch events on the Publisher’s browser’s canvas (HTMLCanvasEelemnt) and identifies the coordinates. Drawing on the Publisher’s canvas Send coordinates to Broker(Chime) as a data message (real-timeSendDataMessage) Sending coordinates from Broker to each Subscriber Drawing on the canvas of each Subscribers
As mentioned earlier, Publisher cannot receive the data messages it sends, so it must draw on its own canvas before sending the data messages. When creating an application that wants to reflect user operations in the UI without delay, such as a whiteboard, it is better for the user experience to reflect them in the UI before sending data messages, so I think it will be similar regardless of whether Publisher can receive data messages or not.
Furthermore, since Publisher cannot receive the data messages it sends, it may be easier to implement because Publisher does not have to discard the received data.
Implementat
Subscribe
Here is an example of a wrapper function that registers a topic and a corresponding callback function to be subscribed by realtimeSubscribeToReceiveDataMessage. Here, we define a callback function that calls app.app.receivedDataMessage when we receive a data message and use it as an argument. Please note that app.app.receivedDataMessage itself can be defined elsewhere for arbitrary processing.
export const setRealtimeSubscribeToReceiveDataMessage = (app:App, audioVideo:AudioVideoFacade, topic:string) =>{
const receiveDataMessageHandler = (dataMessage: DataMessage): void => {
app.receivedDataMessage(dataMessage)
}
audioVideo.realtimeSubscribeToReceiveDataMessage(topic, receiveDataMessageHandler)
}
Send DataMessage
This is an example of how to send a data message using realtimeSendDataMessage.
In order to draw on the whiteboard, the coordinates of the starting and ending points, stroke information, line thickness, etc. are serialized to JSON and sent.
sendDrawsingBySignal = (targetId: string, mode:string, startXR:number, startYR:number, endXR:number, endYR:number, stroke:string, lineWidth:number)=>{
const gs = this.props as GlobalState
const message={
action: 'sendmessage',
data: JSON.stringify({
cmd : MessageType.Drawing,
targetId : targetId,
startTime : Date.now(),
mode : mode,
startXR : startXR,
startYR : startYR,
endXR : endXR,
endYR : endYR,
stroke : stroke,
lineWidth : lineWidth
})
}
gs.meetingSession?.audioVideo.realtimeSendDataMessage(MessageType.Drawing.toString(), JSON.stringify(message))
}
Demo
WhiteBoard
This is how the whiteboard function you created works. This demo will be a simulated classroom whiteboard. You can see that what you draw on the right side is reflected on the left side of the screen.
WhiteBoard with SharedDisplay
You can also create this whiteboard as an overlay so you can use it with the Amazon Chime SDK’s screen sharing feature to give a presentation.
Code
The features described in this article are built into a test bed of new features using video conferencing.
If you are interested in it, please visit the following repository.
Finally
This time, I tried to create a whiteboard using the latest features of Amazon Chime SDK.
In Japan, it was recently announced that the state of emergency has been lifted. However, it still seems difficult to get many people in the classroom to teach a lesson. Also, face-to-face customer service can be risky and difficult to do in the same way. I think that video conferencing and shared whiteboards may be an option to address these issues. | https://dannadori.medium.com/amazon-chime-sdk-whiteboard-with-data-messages-for-real-time-signaling-c0740575a6c0 | [] | 2020-10-19 18:53:32.866000+00:00 | ['Amazon Chime', 'AWS', 'Video Conferencing', 'JavaScript'] |
Programming with Databases in Python using SQLite | Photo Credit: Pixabay
If you are aspiring to be a data scientist you are going to be working with a lot of Data. Much of the data resides in Databases and hence you should be comfortable accessing data from databases through queries and then working on them to find key insights.
Data forms an integral part of the lives of Data Scientists. From the number of passengers in an airport to the count of stationery in a bookshop, everything is recorded today in the form of digital files called databases. Databases are nothing more than electronic lists of information. Some databases are simple, and designed for smaller tasks while others are powerful, and designed for big data. All of them, however, have the same commonalities and perform a similar function. Different database tools store that information in unique ways. Flat files use a table, SQL databases use a relational model and NoSQL databases use a key-value model.
In this article, we will focus only on the Relational Databases and accessing them in Python. We will begin by having a quick overview of the Relational databases and their important constituents.
Relational Database: A Quick Overview
A Relational database consists of one or more tables of information. The rows in the table are called records and the columns in the table are called fields or attributes. A database that contains two or more related tables is called a relational database i.e interrelated data.
The main idea behind a relational database is that your data gets broken down into common themes, with one table dedicated to describing the records of each theme.
i) Database tables
Each table in a relational database has one or more columns, and each column is assigned a specific data type, such as an integer number, a sequence of characters (for text), or a date. Each row in the table has a value for each column.
A typical fragment of a table containing employee information may look as follows:
The tables of a relational database have some important characteristics:
There is no significance to the order of the columns or rows.
Each row contains one and only one value for each column.
Each value for a given column has the same type.
Each table in the database should hold information about a specific thing only, such as employees, products, or customers.
By designing a database this way, it helps to eliminate redundancy and inconsistencies. For example, both the sales and accounts payable departments may look up information about customers. In a relational database, the information about customers is entered only once, in a table that both departments can access.
A relational database is a set of related tables. You use primary and foreign keys to describe relationships between the information in different tables.
ii) Primary and Foreign Keys
Primary and foreign keys define the relational structure of a database. These keys enable each row in the database tables to be identified and define the relationships between the tables.
Primary Key
The primary key of a relational table uniquely identifies each record in the table. It is a column, or set of columns, that allows each row in the table to be uniquely identified. No two rows in a table with a primary key can have the same primary key value.
Imagine you have a CUSTOMERS table that contains a record for each customer visiting a shop. The customer’s unique number is a good choice for a primary key. The customer’s first and last name are not good choices because there is always the chance that more than one customer might have the same name.
Foreign Key
A foreign key is a field in a relational table that matches the primary key column of another table.
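To make this concrete, here is a small illustrative sketch using Python's built-in sqlite3 module (introduced properly a little further down in this article). The table and column names are invented for the example:
import sqlite3
# an in-memory database, purely for illustration
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces foreign keys when this is switched on
# customers: customer_id is the primary key, so it uniquely identifies each row
cur.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)")
# orders: customer_id here is a foreign key that must match a primary key value in customers,
# which is exactly how the two tables are related
cur.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER,
    amount REAL,
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id))""")
conn.commit()
conn.close()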
The example above gives a good idea of the primary and foreign keys.
Database Management Systems
The database management system (DBMS) is the software that interacts with end users, applications, and the database itself to capture and analyze data. The kind of DBMS used for relational databases is called a relational database management system (RDBMS). Most commercial RDBMSs use Structured Query Language (SQL), a declarative language for manipulating data, to access the database. The major RDBMSs are Oracle, MySQL, Microsoft SQL Server, PostgreSQL, Microsoft Access, and SQLite.
We have barely scratched the surface of databases here; the details are beyond the scope of this article. However, you are encouraged to explore the database ecosystem, since databases form an essential part of a data scientist's toolkit.
This article will focus on using Python to access relational databases. We will be working with a very easy-to-use database engine called SQLite.
SQLite
SQLite is a relational database management system based on the SQL language but optimized for use in small environments such as mobile phones or small applications. It is self-contained, serverless, zero-configuration and transactional. It is very fast and lightweight, and the entire database is stored in a single disk file. SQLite is built for simplicity and speed compared to a hosted client-server relational database such as MySQL. It sacrifices sophistication for utility and complexity for size. Queries in SQLite are almost identical to other SQL calls.
Python sqlite3 module
SQLite can be integrated with Python using a Python module called sqlite3. You do not need to install this module separately because it comes bundled with Python version 2.5.x onwards. This article will show you, step by step, how to work with an SQLite database using Python.
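As a quick, hypothetical taste of what that looks like (the file name, table and values below are made up), connecting to a database, inserting a row and querying it back takes only a few lines:
import sqlite3
# connects to the file if it exists, otherwise creates it
conn = sqlite3.connect("test.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
cur.execute("INSERT INTO employees (name, salary) VALUES (?, ?)", ("Alice", 50000.0))
conn.commit()
# read the rows back
for row in cur.execute("SELECT id, name, salary FROM employees"):
    print(row)
conn.close()
The ? placeholders are the safe way to pass values into a query, since the module handles the quoting for you.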
Before starting I would highly recommend you all to install DB Browser for SQLite. The browser can be downloaded from their official page easily. DB Browser for SQLite is a high quality, visual, open source tool to create, design, and edit database files compatible with SQLite. It will help us to see the databases being created and edited in real time.
DB Browser's view
Since everything is in place, let us get to work.
Contents: | https://medium.com/analytics-vidhya/programming-with-databases-in-python-using-sqlite-4cecbef51ab9 | ['Parul Pandey'] | 2019-05-15 00:35:58.421000+00:00 | ['Database', 'Python', 'Sqlite', 'Data Science', 'Programming'] |
How to create a “fashion police” with React Native and off-the-shelf AI | Testing the AI
After training, I applied the model on my test set of 10 “cute” and 10 “not cute” labeled shirts by using the “Quick Test” functionality. I got the following results:
The recall — actually cute clothes correctly classified as cute — was 8/10, or .80. The precision — clothes classified as cute that were actually cute — was 8/13, about .62. The F1 score rounds up to .70. Not amazing, not terrible, I’d say, for an off-the-shelf model. Make of that what you will, but I think it’s definitely better than a clueless friend at picking out clothes for me.
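For anyone who wants to check the arithmetic, here is the same calculation in a few lines of Python (the counts come straight from the test described above):
# 10 shirts were actually cute; 13 were classified as cute, of which 8 really were cute
true_positives = 8
actual_cute = 10
predicted_cute = 13
recall = true_positives / actual_cute               # 0.80
precision = true_positives / predicted_cute         # about 0.62
f1 = 2 * precision * recall / (precision + recall)  # about 0.70
print(round(recall, 2), round(precision, 2), round(f1, 2))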
When I looked at the classifications of each image, I saw that the AI tended to classify images based on color a lot. If you look at the training data above, you can see that I favored more plain colors like white, black, and blue, while bright shirts were mostly labeled “not cute”.
The classifier was correct on this one.
This simplistic view on my taste didn’t always work, though. Let’s look at some more examples.
The AI didn’t really understand the “style” of the shirts, only the color. I labeled a lot of shirt with “folds” in them (like in the shoulder of the gray shirt above) as “not cute”, yet the AI still classified the gray shirt as cute. The red shirt above might’ve been too bright to be classified as “cute” and therefore was wrongly classified. So yes, the AI didn’t do too poorly because after all, color is a big factor that influences my own style, but it also seemed to fail at picking up more nuances in my taste.
Going to a store? You’ll need an app for that.
I also wanted to test the capabilities of the AI in the wild — like taking pictures of clothing in a store and using that to decide whether or not to buy a certain piece of clothing.
Creating an app for yourself is easier than ever, so don’t worry — we’re not going to be full-on deploying to the App Store. That takes too long, anyway. I used React Native to quickly put together a cross-platform (works on iPhones and Androids) app with the functionality I needed.
The functionality? Well, that would be the ability to take a picture of a piece of clothing and have the AI instantly predict whether the predictee would deem it “cute” or not. So we need to be able to use the phone’s camera, be able to take pictures, use the Microsoft Prediction API on pictures we take in real time, and convey the results back to the user. This is quite easy to do with Expo’s services, and if you want to dive into the specifics, all my code is available on Github.
A side note about the Prediction API
The most confusing part of making this was trying to send the image file taken from the phone camera directly through the API endpoint. You’re supposed to send the data as an “octet-stream”, and there is very little support or documentation on this on Microsoft’s end. I tried sending over a binary-encoded image, I tried sending the image file in a form-data format, I tried resizing the image and then doing a combination of the above — but nothing I tried worked.
To be honest, I spent hours and hours trying to figure out why nothing was getting me a good response. Ultimately, I asked a friend-of-a-friend who had actually encountered this exact issue before, and he said that he eventually gave up trying to directly send the image file, and instead used another API to upload the image first, then send the web url of the image.
Hearing this, I admitted defeat and adopted that solution: I used the Imgur API to upload the images taken from the phone and then sent in the image web url.
Anyway…
After this, the app worked! And the AI worked surprisingly similarly to the way it performed on the Nordstrom test set. It was still trained on the Nordstrom.com images, so see the following results on some random clothing I already own: | https://medium.com/free-code-camp/creating-a-fashion-police-with-react-native-and-off-the-shelf-ai-78b606002aa1 | ['Kelsey Wang'] | 2019-05-17 17:43:02.553000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'App Development', 'React Native', 'Tech'] |
The Most Common Sexual Fantasies | In doing research for his book, Tell Me What You Want, Dr. Justin Lehmiller surveyed more than 4,000 people in the United States about their sexual fantasies. What’s interesting is that around 98% of people reported having had at least one sexual fantasy, and that included asexuals (more than half of whom say they masturbate at least monthly).
It turns out there are certain things that we tend to conjure in our imaginations more often than others. In his research on common sexual fantasies, Dr. Lehmiller found that seven things in particular kept coming up. So what are we all dirty daydreaming about?
1. Multipartner sex.
Yep, it appears as though the majority of us have at least entertained the idea of threesomes. For people in relationships, this fantasy usually involves their significant other (though not always). Interestingly, heterosexual-identifying men tend to prefer their imagined threesomes to be with two women, while heterosexual-identifying women generally didn’t have a preference for which genders they were getting it on with.
2. Power, control, and rough sex.
Power dynamics aren’t just about stroking the ego-they also make for some scintillating bedroom play while stroking, ahem, other things. Rough sex and BDSM involving spanking, hot wax, restraints, biting and other dominant/submissive acts rated highly among research participants.
3. Changing things up.
Most of us like at least a little bit of variety in our lives. And it can help reignite a spark when sex in a long-term relationship becomes a little boring. Perhaps that’s why so many people like to fantasize having sex with someone different, having sex in an unusual place, or even just having sex in a position you’ve never tried before.
4. Taboo or forbidden circumstances.
Rulebreaking—and the risk of getting caught—is usually a bit of a thrill, no matter the context. And the adrenaline rush that comes with it can also heighten a sexual encounter significantly. But since there's the risk of, you know, being arrested for things like public sex, it's understandable that people like to act out their desires for it in their fantasies. Preferred locales for such illicit encounters include parks, offices, beaches and elevators.
5. Open relationships.
Similar to the penchant for threesomes, a lot of people like to entertain the idea of opening up their relationship to include new partners—with their significant other’s permission, of course. (If you’re considering making your fantasy a reality, our guide on how to have an open relationship may come in handy.) Other people favor the idea of being able to watch their partner have sex with someone else.
6. Passion and romance.
Turns out that sexual fantasies don't always need to be dirty. In fact, a lot of people simply fantasize about having meaningful sex that makes them feel desired (and good in bed). In some cases, people will also use the fantasy to 'improve' aspects of their body or sexual performance that they're not happy with in real life.
7. Same-sex encounters, gender-bending and erotic flexibility.
As society in general becomes more aware of the fact that both gender and sexuality are nuanced spectrums, people are becoming more comfortable exploring their own sexuality. Heterosexual-identifying women in particular revealed that they often fantasize about sex with other women, and at least a quarter of heterosexual-identifying men surveyed said they fantasize about sex with other men or with a transgender woman.
In his research, Dr. Lehmiller also discovered some pretty interesting facts about the intricacies of sexual fantasies and how they vary according to gender and sexual preference.
Men, for example, are more likely to have fantasies that are quite explicit and involve multiple partners. What's more, they tend to focus on specific body parts, both their own and those of their partner(s), rather than merely on a sexual act. Overall, their imagined sexual escapades were mostly anchored in the physical.
Women, on the other hand, are more inclined to focus on more emotional or romantic details like scene setting-like a beautiful beach, a moonlit evening, or a picturesque forest. Often there’s also more of a build-up in their fantasies, like eyes meeting across the room, strolling along that beach, or getting caught in a rainstorm together. This is in line with the fact that women often need foreplay to help prepare their minds and bodies for the actual act of intercourse. These differences between men’s and women’s fantasies seemed to be true regardless of sexual orientation.
Another interesting insight? Women fantasize most frequently during ovulation, but their fantasies are often different during that time because it changes what women find attractive. Dr. Lehmiller also found that, for all genders, the content of our sexual fantasies tends to vary depending on our psychological state at the time. For example, when we are feeling insecure, our fantasies may involve circumstances that boost self-esteem, such as a narrative around independence or being sexually irresistible.
Read more of Dr. Lehmiller’s research on sexual fantasies. | https://getmaude.medium.com/the-most-common-sexual-fantasies-acbc90e5f9cc | [] | 2019-08-26 20:45:38.389000+00:00 | ['Relationships', 'Modern', 'Culture', 'Sex', 'Wellness'] |
Escape Characters in Java | Learn how we can use escape sequence in Java
These characters can be any letters, numerals, punctuation marks and so on. The main thing when creating a string is that the entire sequence must be enclosed in quotation marks:
public class Main {
public static void main(String[] args) {
String alex = new String ("My name is Alex. I'm 20!");
}
}
But what do we do if we need to create a string that itself must contain quotation marks? For example, suppose we want to tell the world about your favorite book:
public class Main {
public static void main(String[] args) {
String myFavoriteBook = new String ("My favorite book is "Twilight" by Stephanie Meyer");
}
}
It seems the compiler is unhappy about something! What do you think the problem could be? And what does it have to do with quotation marks? In fact, it’s all very simple. The compiler interprets quotation marks in a very specific way, i.e. it expects strings to be wrapped in them. And every time the compiler sees “, it expects that the quotation mark will be followed by a second quotation mark, and that the content between them is the text of a string to be created by the compiler. In our case, the quotation marks around the word “Twilight” are inside other quotation marks. When the compiler reaches this piece of text, it simply doesn’t understand what it is expected to do. The quotation mark suggests that a string must be created. But that’s what the compiler is already doing! Here’s why: simply speaking, the compiler gets confused about what it is expected to do. “Another quotation mark? Is this some kind of mistake? I’m already creating a string! Or should I create another one? Argh!…:/” We need to let the compiler know when a quotation mark is a command (“create a string!”) and when it is simply a character (“display the word “Twilight” along with quotation marks!”). To do this, Java uses character escaping. This is accomplished using a special symbol: \ . This symbol is normally called "backslash". In Java, a backslash combined with a character to be "escaped" is called a control sequence. For example, \" is a control sequence for displaying quotation marks on the screen. Upon encountering this construct in your code, the compiler will understand that this is just a quotation mark that should be displayed on the screen. Let's try changing our code with the book:
public class Main {
public static void main(String[] args) {
String myFavoriteBook = new String ("My favorite book is \"Twilight\" by Stephanie Meyer");
System.out.println(myFavoriteBook);
}
}
We’ve used \ to escape our two "internal" quotation marks. Let's try running the main() method... Console output: My favorite book is "Twilight" by Stephanie Meyer Excellent! The code worked exactly how we wanted it to! Quotation marks are by no means the only characters we may need to escape. Suppose we want to tell someone about our work:
public class Main {
public static void main(String[] args) {
String workFiles= new String ("My work files are in D:\Work Projects\java");
System.out.println(workFiles);
}
}
Another error! Can you guess why? Once again, the compiler doesn’t understand what to do. After all, the compiler doesn’t know \ as anything other than a control sequence! It expects the backslash to be followed by a certain character that it must somehow interpret in a special way (such as a quotation mark). But, in this case, \ is followed by ordinary letters. So the compiler is confused again. What should we do? Exactly the same thing as before: we just add another \ to our \ !
public class Main {
public static void main(String[] args) {
String workFiles = new String ("My work files are in D:\\Work Projects\\java");
System.out.println(workFiles);
}
}
Let’s see what we get: Console output: My work files are in D:\Work Projects\java Super! The compiler immediately determines that the \ are ordinary characters that should be displayed along with the rest. Java has quite a lot of control sequences. Here's the full list:
\t - tab.
\b - backspace (a step backward in the text or deletion of a single character).
\n - new line.
\r - carriage return.
\f - form feed.
\' - single quote.
\" - double quote.
\\ - backslash.
Thus, if the compiler encounters \n in the text, it understands that this is not just a symbol and a letter to display on the console, but rather a special command to "move to a new line!". For example, this may be useful if we want to display part of a poem:
public class Main {
public static void main(String[] args) {
String byron = new String ("She walks in beauty, like the night,\nOf cloudless climes and starry skies\nAnd all that's best of dark and bright\nMeet in her aspect and her eyes...");
System.out.println(byron);
}
}
Here’s what we get: Console output: She walks in beauty, like the night, Of cloudless climes and starry skies And all that’s best of dark and bright Meet in her aspect and her eyes… Just what we wanted! The compiler recognized the escape sequence and output an excerpt of the poem on 4 lines.
Escape Unicode characters
Another important topic that you need to know about in connection with escape characters is Unicode. Unicode is a standard character encoding that includes the symbols of almost every written language in the world. In other words, it's a list of special codes that represent nearly every character in any language! Naturally, this is a very long list and nobody learns it by heart :) If you want to know where it came from and why it became necessary, read this informative article: https://docs.oracle.com/javase/tutorial/i18n/text/unicode.html All Unicode character codes have the form "\u + <hexadecimal digits>". For example, the well-known copyright symbol is represented by \u00A9. So, if you need to use this character when working with text in Java, you can escape it in your text! For example, we want to inform everyone that Kajal Rawal owns the copyright to this lesson:
public class Main {
public static void main(String[] args) {
System.out.println("\"Escaping characters\", \u00A9 2020 KajalRawal");
}
}
Console output: “Escaping characters”, © 2020 KajalRawal Great, it all worked out! But it’s not just about special symbols! You can use Unicode and escape characters to encode text written simultaneously in different languages. And even text written in several different dialects of the same language! | https://medium.com/swlh/escape-characters-in-java-7190f4be9bc0 | ['Kajal Rawal'] | 2020-09-17 07:14:30.328000+00:00 | ['Escapesequenceforspace', 'Character', 'String', 'Java'] |
Crunching large data sets to identify air pollution hot spots | Interested in using data to solve healthcare challenges?
We all have heard that air quality is becoming worse and lot of us dismiss it that it’s happening somewhere else or its a Delhi problem only! However, the reality is that bad air quality is much closer home than we want it to be.
To find out how bad it is, we need to have a good data about air quality. It has been estimated by researchers that Bengaluru needs to have at least 41 real-time air quality monitors (after taking into consideration WHO & CPCB guidelines, land use pattern, population density, air shed etc).
This is still much lower than comparable cities like London and Paris which have 120 and 60 real-time monitors respectively. Currently Bengaluru has only 10 such real-time monitors! (I’m discounting the manual stations in BLR)
Even if we have all the 41 fixed stations, to find air quality at any given point in time, we need high quality models which take in data from such stations & other parameters and estimate the air quality at any point in between these stations.
Urban air pollution concentrations vary sharply over short distances (≪1 km) owing to unevenly distributed emission sources, dilution, and physicochemical transformations. Accordingly, even where present, conventional fixed-site pollution monitoring methods lack the spatial resolution needed to characterize heterogeneous human exposures and localized pollution hotspots. This is accentuated in an Indian context where land use patterns are not well known & adherence to zoning regulations are absent in many cases.
To help estimate air quality at a hyper-local level & identify hot spots, researchers in Oakland, California fitted a Google Street View car with good quality air quality sensors & drove it around; more details of this fascinating work are here.
Google Street view car fitted with an air quality monitor. Source
It is being hoped that this high resolution mobile monitoring along with large number of static monitors being facilitated as mentioned above will help providing a very accurate & personalised information about air pollution to citizens.
Clean Air Platform-Bengaluru is planning to run a pilot in Namma Bengaluru on the lines of the work done in CA. It will be running a vehicle similar to this, since we don’t have Google Street View cars 🙁
The challenge for all of us in BLR is to crunch the data generated & identify the hotspots, create a model to estimate air quality at any given point in the city, and of course use all this to bring about a change which will improve air quality!
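To make the "estimate air quality at any point" part of that challenge concrete, here is a toy sketch of one of the simplest possible approaches, inverse-distance weighting between monitors. It is purely illustrative: the coordinates and readings are invented, and a real model would also fold in traffic, weather, land use and the mobile measurements.
import math
# invented monitor locations (lat, lon) and PM2.5 readings, for illustration only
monitors = [((12.97, 77.59), 48.0), ((12.93, 77.62), 65.0), ((13.00, 77.55), 32.0)]

def estimate_pm25(lat, lon, power=2):
    # inverse-distance-weighted average of the monitor readings
    num, den = 0.0, 0.0
    for (mlat, mlon), reading in monitors:
        d = math.hypot(lat - mlat, lon - mlon)
        if d == 0:
            return reading  # standing right on a monitor
        w = 1.0 / d ** power
        num += w * reading
        den += w
    return num / den

print(round(estimate_pm25(12.95, 77.60), 1))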
Even before the pilot starts, the vehicle with equipment will be driven around BLR for 200 km every day (is that possible?). To start off some questions which need to be answered are:
1. What is the route on which the vehicle should run? (so that the max number of schools, business parks / districts & residential areas can be covered)
2. How many shifts should the car be run? (we all know the legendary traffic in BLR)
3. What is the daily schedule for the car to cover a large area
4. How should we cover an area? Go over & over the same place on consecutive days or spread over a period of time?
Lets hack our way to the answers!
Apply to attend now! | https://medium.com/the-fifth-elephant-blog/crunching-large-data-sets-to-identify-air-pollution-hot-spots-73ae6882abf | ['Yogesh R'] | 2018-06-06 06:23:55.947000+00:00 | ['Bangalore', 'Hackathons', 'Air Pollution', 'Environment', 'Data Science'] |
Why “Narrow Your Field of Focus” Isn’t the Right Advice for All Writers | If you are an eclectic writer, make it work for you
I once won a tee-shirt for being an eclectic writer. Okay, not every writer’s goal, but at the time, spreading my writing within a wide range of topics worked for me, not against me.
But I often hear the opposite advice from other writers. Perhaps limiting their work to a narrow field of subjects brings them positive results. “Narrow your focus” isn’t the right advice for all of us though.
Some writers are built to be Jack-of-all-trades. What’s more, they are proficient in varied areas rather than churning out mediocre writing.
The idea that branching off in several directions leads to poor work and other negative results doesn't always hold true. People imagine you can't diversify and maintain integrity and skill. Yet, if you are an expansive writer at heart, you thrive when you spread your abilities.
Writer’s needs vary
I write in several genres, and one feeds the other. Creative writing, for instance, spills into factual articles and I’m glad.
After writing poetry or short stories, my brain is in creative mode. If I then switch to, let’s say, a self-improvement post, it’s easy for me to include metaphors and my unique creative writing voice to my work.
Your needs as a writer may differ. Perhaps focusing on a single theme helps you excel and brings out your best. It might attract readers who enjoy specific subjects too. But it is possible to be eclectic and maintain readership.
Suggestions
If you want to branch out and vary written topics, you might create publications to house them, so readers can go to the articles or stories they enjoy most.
You can also write for other writer’s publications that suit different genres. Your readers will see where you’ve posted written work, and can head in a direction they choose.
Your writer’s voice is probably best maintained though
I say “probably,” in case I’m wrong. Nonetheless, once you’ve built a following of readers who want to come back to your work again and again, varying your tone of writing may disappoint. If they enjoy your work, the chances are they like your writing voice.
What’s a writing voice?
Your writing voice is how your personality shines through into written work. It’s unique to you and reflects your character and fashions the tone of your writing. Often, it’s creative, although, some writers have a natural clean, matter-of-fact style that suits them, and it’s devoid of idiosyncrasy.
Writers are unique
Writers are like honey bees. Some stay close to the nest. They forage for nectar, but don’t spread their wings too far. Others remain close to the queen. Their job is specific, and they are experts.
But there are also bees that venture further afield. They collect nectar from varied flower types. And a few don’t gather nectar so much as they travel far and wide to make sure they know where a new hive can live if a calamity befalls the existing one.
They are adventurous and lead different lives to those bees with an indoor job. All bee careers, however, benefit the hive. They are necessary, and their differences don’t occur by accident.
We humans, whether writers or not, have tendencies peculiar to us. We aren’t the same, and what suits one person doesn’t always suit another.
If you, like me, are an eclectic writer, fear not, you aren’t wrong. No one’s right or wrong to prefer a narrow or wide focus. Figure how your particular writing personality fits into the writer’s market and make the most of it. Don’t restrict your work if it doesn’t feel right. | https://medium.com/the-bolt-hole/why-narrow-your-field-of-focus-isnt-the-right-advice-for-all-writers-475530f5e943 | ['Bridget Webber'] | 2019-12-05 13:22:47.040000+00:00 | ['Personal Development', 'Writing', 'Writer', 'Writing Tips', 'Life'] |
What to learn from an open workspace | More and more corporations are redesigning their office spaces. Cubicles and private desks are gradually replaced with sofas, shared desks and chairs, so offices get more open for collaboration and interaction. In 2015 Samsung spent millions of dollars on constructing its headquarter in the Silicon Valley. Executives hoped for more mix-and-mingle chances among workers when they placed a shared lobby between the floors. Why is such a giant in technology like Samsung pursuing a tentative open community model for their workspace? Let’s find out by having a look look at how great such a workspace can be.
Open up your plan, open up your future
Have you ever wondered about the rapid growth and surprising creativity of startups? Their workspace is one part of the answer. The idea behind it is that, everyone finds their own seat and sits next to whomever may help them. Subsequently, we can learn what we don’t know yet from our colleagues. This contributes to boosting the collaboration, productivity and even creativity. Because “two heads are better than one”.
The Shortcut Lab offers a collaborative workspace where you can learn, test your ideas and give a hand to a likewise-minded community
An open space helps out a lot in decreasing people’s aversion to stressful hierarchy offices. The workspace is not only for now but also the future. It is for us — adventurers in our entrepreneurial journey — to come and learn. It is, in fact, a community. The idea behind it is that we are all driven by common passion. When an idea pops up, we learn how to develop it, and make it happen. Then all entrepreneurial minds can create strong connections and we can engage in a real hub.
Energy is an epicenter
The energy in an open space comes from the team. We cannot deny the power of team building. No one ever wants to work among strangers and go home alone. The need for enjoying informal interaction is real. We can strengthen our team with lunches, small talks, or even outside activities. They are great opportunities for the whole team to collide and build up a contributive image of a real workspace. Subsequently, we give a hand to the community we are in.
On the other side, openness does not necessarily mean a total mash up. When we share a desk or a couch with our colleagues, we may feel that we are losing our privacy. Then it is about time that we put on our headphone and find our private corner. This is effective in relaxing and re-charging after a long period of talks and interactions.
Privacy in an open plan is necessary as well
Each of us will make it a real open space
There is one more thing to learn from startups: hierarchy does not really exist. Instead, all are welcome and there is no discrimination. No matter if you are a trainee, a student who just gets started with entrepreneurship or a refugee who is interested in kicking off your idea — you are equal to anyone else. Everyone has their own voice. Openness only exists when people can feel it.
The Shortcut Lab is for everyone.
Volunteer meeting is fun at The Shortcut.
Don’t forget about meetings! Meetings are usually perceived as a formal conference with laptops, screens, and serious faces. It might seem like you’re wasting time and only get stressed out. But if you open up your meetings to everyone and encourage the team to speak up about their ideas, you can get so much more out of it.
At the volunteer meetings at The Shortcut everyone is welcome to share their ideas and dive deeply in the organisation. All the peers are encouraged to take over the role of the host to create an interactive atmosphere. Laptops get shut down. Volunteers gather at the couch, start the conversation and soon it feels as if a group of friends is sharing stories.
An open community workspace benefits you and your future more than you expect. It is more than a creative physical place. It is a community where you are trained, you are learning and you feel welcome and can engage. Your presence there is meaningful. Last but not least, you are a part of the community. You empower it, contribute to it and build it as a hub where people share the same values as you. | https://medium.com/the-shortcut/what-to-learn-from-an-open-community-workspace-6012fe53702c | ['Trinh Tran'] | 2018-09-18 12:53:18.502000+00:00 | ['Coworking', 'Design', 'Volunteer', 'Team', 'Diversity'] |
Weekly Machine Learning Research Paper Reading List — #8 | Weekly Machine Learning Research Paper Reading List — #8
Authors: Ye Zhu, Kai Ming Ting and Mark J.Carman
Venue: Pattern Recognition
Paper: URL
Abstract:
Density-based clustering algorithms are able to identify clusters of arbitrary shapes and sizes in a dataset which contains noise. It is well-known that most of these algorithms, which use a global density threshold, have difficulty identifying all clusters in a dataset having clusters of greatly varying densities. This paper identifies and analyses the condition under which density-based clustering algorithms fail in this scenario. It proposes a density-ratio based method to overcome this weakness, and reveals that it can be implemented in two approaches. One approach is to modify a density-based clustering algorithm to do density-ratio based clustering by using its density estimator to compute density-ratio. The other approach involves rescaling the given dataset only. An existing density-based clustering algorithm, which is applied to the rescaled dataset, can find all clusters with varying densities that would otherwise impossible had the same algorithm been applied to the unscaled dataset. We provide an empirical evaluation using DBSCAN, OPTICS and SNN to show the effectiveness of these two approaches. | https://medium.com/towards-artificial-intelligence/weekly-machine-learning-research-paper-reading-list-8-f6415645685e | ['Durgesh Samariya'] | 2020-09-22 12:39:49.688000+00:00 | ['Machine Learning', 'Science', 'Research'] |
How to Turn Your Memoir into a Short Story | I wrote my first short story in 2004 when I wrote about the impact my mother’s death had on me. It was a memoir short story.
A short story is exactly what it sounds like — a short story. It can be fiction shorts or nonfiction shorts. Publications will specify types of shorts they will accept.
In nonfiction, the difference is the memoir short story is told in story form like a fictional story, not essay format, despite being based on real life. When told this way it creates an immersive experience for the reader that keeps them turning the page.
The story takes the reader on a journey, allowing them to experience the journey as well as deepen the emotional connection with the story and author. The reader lives it with the storyteller.
Word counts for short stories range from 500 to 20,000 words. The most common lengths are 500, 1000, 3000, 5000, 7000 to 10,000 words. You should always read the submission guidelines of any publication where you want to submit your story.
You can fictionalize your memoir to protect people involved, at which time it becomes a fictionalized story based upon a true story.
A memoir short story I wrote about the loss of my mother:
Story format in memoir can be seen in novels and movies.
You see this memoir or biographical type of story format often in movies based on books, such as Where The Red Fern Grows, based loosely on the childhood of Woodrow Wilson Rawls, and A River Runs Through It, based on the 1976 semi-autobiographical novel by Norman Maclean.
Granted, these are novel-length stories; however, the fictional format was followed to convey a deeper experience of the story, which translates well visually and produces award-winning movies.
The author starts in the narrator’s voice, which is non-intrusive in the character journey for the reader. The voice of the narrator only appears briefly in the beginning and at the end of the story.
Give us a sense of time and place.
Fiction techniques such as deep point of view and show-don’t-tell help you to create an engaging experience for your reader.
Carry us to the place you lived using landmarks that are markers of the time. If you lived in the eighties, they still had attendant and self-serve gas stations; in the fifties, soda still came in glass bottles. The body styles of vehicles are ways to communicate the era, as well.
These time markers set a scene and give the reader atmosphere and place without a lot of telling.
The year or decade your memoir happens in shapes it through the trendy fashion and styles, slang words, patterns of speech, and the surrounding landscape using key specifics for the time.
Show us, don’t tell us.
If your character feels rushed show us by letting us see them scurry around, running late, experiencing frustration in action, and through the consequences that result from being late.
Show us through body language, interactions, and dialogue. According to Janice Hardy in her book Understanding Show, Don’t Tell, you should use words that demonstrate the physical action such as I reached over, I picked up the cup.
In deep point of view, certain types of verbs put distance between you and your reader.
Alice Gaines in Mastering Deep Point of View says there are three kinds of verbs that do that: perceiving verbs, thinking verbs, emoting verbs. These are verbs that tell, for example: perceiving: to see, to notice; thinking: to know, to wonder; emoting: to feel, to wish.
Use description to bring a scene alive and give it character. We learn a lot in this first paragraph of a fictional story, Anne of Green Gables, about the character Mrs. Rachel Lynde through description.
~First paragraph of Anne of Green Gables from the Project Gutenberg website.
CHAPTER I. Mrs. Rachel Lynde is Surprised MRS. Rachel Lynde lived just where the Avonlea main road dipped down into a little hollow, fringed with alders and ladies’ eardrops and traversed by a brook that had its source away back in the woods of the old Cuthbert place; it was reputed to be an intricate, headlong brook in its earlier course through those woods, with dark secrets of pool and cascade; but by the time it reached Lynde’s Hollow it was a quiet, well-conducted little stream, for not even a brook could run past Mrs. Rachel Lynde’s door without due regard for decency and decorum; it probably was conscious that Mrs. Rachel was sitting at her window, keeping a sharp eye on everything that passed, from brooks and children up, and that if she noticed anything odd or out of place she would never rest until she had ferreted out the whys and wherefores thereof.
More on Show, Don’t Tell:
There are many story craft books that teach techniques that can help you immerse your reader in your personal story whether it is fiction or creative nonfiction.
Other recommended reading for deep point of view:
Below are a collection of memoir short shorts or biographies, in story format, which are engaging reads, plus a couple of good memoir novels:
As I said above, reading is the best way to learn these immersive techniques, which can help you learn to show, rather than tell, your memoir in short story form. | https://medium.com/ninja-writers/how-to-turn-your-memoir-into-a-short-story-1cdb9b26d1d2 | ['Juneta Key'] | 2020-10-24 22:52:25.286000+00:00 | ['Memoir', 'Writing Tips', 'Short Story', 'Fiction', 'Writing'] |
The Icon Kaleidoscope | People spend most of their working time using Microsoft Edge and Office to get things done, and the teams were excited to experiment with the new materials on these popular products. We know how important these experiences are to our customers, so the icons needed to fit in and stand out at the same time. Based on extensive testing and customer feedback, we introduced rich gradients, broadened our spectrum of colors, and implemented a dynamic motion with ribbon-like qualities.
Our customers are also beginning to use mixed reality to accomplish goals in a completely new way. Blending the physical and digital worlds in our icons helped us think beyond traditional manifestations of colors, finishes, and materials. We needed to consider the third dimension, so we chose new materials that reflected light and depth and felt more tactile.
Whether our customers use their phone, PC, or VR headset to get work done, we wanted to reach people in every environment. The newest design guidelines helped us unify icon construction across the company and within each product family.
Designing our future together
Our community has been on this journey with us from the beginning, and the path to this icon redesign was no different. We conducted countless rounds of research for every icon. From mild to wild, we explored a multitude of design directions and listened to customers around the world. We learned what didn’t resonate with people (flat design and muted colors) and what did (depth, gradations, vibrant colors, and motion), all of which drove our decisions. | https://medium.com/microsoft-design/the-ripple-effect-expanding-our-icon-design-system-74b4d916b7a4 | ['Jon Friedman'] | 2019-12-12 22:01:34.712000+00:00 | ['User Experience', 'Technology', 'Icons', 'Microsoft', 'Design'] |
Misty, Guilty, Self-Centered, Lonesome, Bored | We travelled via a dilapidated Volkswagen van. (Illustration by Rolli)
The best thing for a hangover, fortunately, is coffee.
On my way back from Perks, I bought a newspaper. I hadn’t read a newspaper since 1998.
As I skimmed the day-old news, an all-caps advert caught my eye.
SAVE THE EARTH!
It was a punchy title. I kept reading.
ONLY TWO HOURS PER WEEK COMMITMENT!
That seemed manageable. I circled the phone number. Hoping that, when it came to Warm Fuzzies, Mother Nature wouldn’t be half as stingy as humanity.
A few days later, I was the fifth member of Earth Patrol, a group of citizens who convened once a week to gather plastic waste from the shores of our polluted urban lake. We travelled via a dilapidated Volkswagen van owned by Bethany, the founder of the group. She never left the driver’s seat herself, and when our allotted clean-up time expired, honked the horn until we piled back into the van.
On my first day, I found about fifty water bottles and as many straws.
I found twice as many of each the second day, and twice as many again the third.
“This feels futile to me,” I admitted to one of the teenage volunteers, as I picked up the umpteenth water bottle. “Does it feel futile to you?”
The young woman grimaced at a condom.
“This work is so, so important,” she said. “For our resumes.”
At the end of the month, Bethany “treated” us to a meal of juice and salad at Freshly. Both came in plastic containers. The cutlery was plastic, too, and the straws.
“When you think about it,” I said, laughing as I poked at my salad, “we’re creating as much plastic waste as we’re picking up.”
No good can come of thinking. Absolutely none.
Bethany dropped her plastic fork. She pushed back her plastic salad container. She stood up — and slammed her water bottle on the table top.
“YOU JUST DON’T GET IT, DO YOU!” she said.
I waited. It seemed she’d have more to say.
She didn’t.
Bethany stormed out the door, into her van — and started honking.
One by one, the volunteers abandoned their salads, and ran after her.
“I really need this for my resume,” the last volunteer said to me, apologetically, on his way out the door.
Tires squealed. The Volkswagen van vanished.
I finished my salad. Then gathered up everyone’s plastic containers and threw them in the trash.
The second I stepped outside, of course, it started to rain. I could’ve called a cab. But decided to walk home. Though it ruined my fedora, it was the best thing I’d done for the environment in weeks. If my selfless act attracted any Warm Fuzzies, I was too damp and cold and hungry to notice or care. | https://medium.com/pillowmint/misty-guilty-self-centered-lonesome-bored-395c10e2675e | ['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites'] | 2020-12-13 05:06:19.387000+00:00 | ['Human Parts', 'Cities', 'Nonfiction', 'Humor', 'Life'] |
How To Make Your Wordpress Site Load Faster | How to make wordpress sites load faster
In this article, we will cover one simple solution to completely improve your website speed without a headache. Learn how you can speed up your website. It’s much simpler than you might imagine and it requires almost no time to improve things like your speed test and other website related tests. Come along with me and learn how to make my WordPress site load faster isn’t a problem.
After hours of testing and retesting, I’ve come up with the best solution for improving your website speed. If you’re like me you spent a ton of time (wasted) trying all manner of things.
Before we can look at the solution we need understand the problems.
What creates a slower website?
A slower website can be the result of three different things in my book.
Huge unoptimized images
Poorly/All in one theme
Bad Plugins
Your image is slowing down your website
If you're uploading images to your website without optimizing them, then you are really shooting yourself in the foot. There are a ton of image optimization plugins (insert link) that someone can use.
The rule of thumb is images should be less than 500 kb. The smaller the better.
Stop buying themes that are sold by one man shops. Getting a faster website easy if you stop making mistakes like these.
Sorry, but the one man shop for a theme is normally a bad idea. It takes a team to make sure that a theme is properly updated and constantly stays secure.
Make sure if you buy a theme it has:
Excellent documentation
Support
History/Changelog
Bad plugins work the same as themes
Unsupported plugins are no better than a bad theme. They are often poorly coded and not properly updated, which means you're in for a ride.
Just don’t use them or find a similar plugin that works that is supported. I’d rather pay money for something that works than something that does not. Learning how to make my wordpress site load faster takes more work than most people would be willing to consider but the results are worth it.
What about free plugins?
If you are like me, free plugins are a source of frustration. They offer just enough solution to keep them installed but rarely offer a complete package. I tried all manner of plugin combinations but at none of them improved my speed without losing in another area.
Some popular plugins like wp cache or w3 total cache. It wasn’t till I was in a Facebook group that I heard of wprocket.
When I began exploring the solution I figured maybe it would be worth the money. Maybe this one solution could easily give me a faster website.
I spent the money and I have never looked back.
What about a free theme
Free themes run into the same problem as plugins and everything else. If they are coded properly a theme can be a great asset to a business as it provides the best and safest security for your site.
A poor theme, on the other hand, can completely topple your speeds. If you don’t believe me try running a simple speed test at Pingdom.
You will see how badly off your pages are according to that site, which also happens to be pretty accurate. Even if you rank in the top 90%, if you are trying to get 95+, it can make the difference between a customer staying on your site and leaving.
So when you are thinking about how to make my WordPress site faster, always consider your theme.
How to make my WordPress site load faster
Website speed is a problem. How do we fix it? Most websites take over 2 seconds to load, which is a huge problem.
Lost customers mean lost money, and the only way to solve this issue is to speed up that slow website.
Enter WP Rocket. I am a part of a bunch of Facebook groups where developers talk shop and this plugin kept coming up into my radar. So I dropped the money and decided to give it a shot and boy was I shocked!
Below is an image from Pingdom Speed Test, showing my somewhat optimized speed after days of the grueling test.
My Original Speed Test
This is what my website looked like after installing WP Rocket with zero input from me.
My new speed test using WP Rocket
Out of the box, it increased my speed and solved some of my problems.
Here is a follow up screenshot months later with the plugin optimized. Same website hardly any additional changes:
Completely customized Speed test
This one program did what several free plugins could never do, improve my website speed without a lot of input from me. Learning how to make my WordPress site load faster is a task that anyone can do but it requires you to spend a little money in order to achieve the results.
I’d highly recommend that you give WP Rocket a shot. How to make my WordPress site load faster is easy, just use WP Rocket.
Disclaimer: I am not an affiliate for wprocket. This post is not sponsored and is just my experience with the product. | https://medium.com/techtrument/how-to-make-my-wordpress-site-load-faster-d1a7ac6b6ffb | ['Patrick Mccoy'] | 2018-01-29 21:45:00.365000+00:00 | ['Speed', 'Design', 'Hacks', 'WordPress', 'Webdesign'] |
The Man Who Spent A Decade Locked In A Tower | “The first who had the courage to say as an author what he felt as a man.” — William Hazlitt, English journalist
Michel, born in 1533, was the poster boy for philosophers of the French Renaissance. His writing combined intellectual arguments with personal anecdotes. Today, that style is much revered and often copied, but not many have managed it with quite the eloquence and wisdom of Montaigne.
Michel’s upbringing was decidedly odd. His father insisted that Michel should only be spoken to in Latin. A cultured man would know the ancient languages he would argue. And so it was, that French born Michel’s first spoken language was the ancient verse of Latin. To compound his difficulty speaking his native voice, he was later assigned a German tutor who couldn’t even speak French.
If that wasn’t bizarre enough, his father, a practitioner of humanist philosophy, chose to send the new-born Michel to live with a peasant family. Pierre Eyquem, Seigneur of Montaigne and father to Michel had mapped out a pedagogical plan of upbringing. His father reasoned it would “draw the boy close to the people, and to the life conditions of the people, who need our help”. It’s hard to say what an impact that had on a baby, torn away from his natural mother and living unawares in a peasant cottage. At the age of three, the infant Michel was relocated back to the chateau.
At the behest of his father, all the servants, maids and every member of staff were instructed to only speak to Michel in Latin. This included his mother. A toddler, torn away from an adopted family and then thrown into an alien environment where nobody could speak his language. It’s a surprise that Michel didn’t harbor any deep resentment towards his father.
Indeed, Michel would thrive under the bizarre arrangements. Every morning he would be awaken by a musician playing one instrument or other. His Latin education was accompanied by constant intellectual and spiritual stimulation. He was familiarized with Greek by a pedagogical method that employed games, conversation, and exercises of solitary meditation, rather than the more traditional books.
Michel was embellished with the spirit of liberty and delight. He would later muse on his upbringing with great fondness. His father was a constant source of pride within his Essays. At the age of 13, having mastered the entire school curriculum, Michel studied law. It wasn’t long before he became counsellor to the parliament of Bordeaux and later join the French King at court. His crowning glory in the professional world was being awarded the highest honor of the French nobility, the collar of the Order of Saint Michael.
Not bad for somebody under the age of 30! It was in Bordeaux where he first met Étienne de La Boétie. The two hit it off immediately. Side by side, they would share long walks and lunches together. The two woven into each other’s lives and were inseparable. Heartache was soon to follow as Etienne fell ill with the plague. In 1563, he died with Michel weeping by his side.
The impact on Michel from his friend’s death was huge. It would be a further eight years of torment before Michel declared he had had enough and locked himself away in the tower. In 1571, he retired from public life to the Tower of the château, his so-called “citadel”, in the Dordogne, where he almost totally isolated himself from every social and family affair.
Locked away from all human contact, Michel lost himself in his vast collection of books. A library that stretched to over 1,500 volumes of work. On the bookshelf, Michel inscribed the following:
“In the year of Christ 1571, at the age of thirty-eight, on the last day of February, his birthday, Michael de Montaigne, long weary of the servitude of the court and of public employments, while still entire, retired to the bosom of the learned virgins, where in calm and freedom from all cares he will spend what little remains of his life, now more than half run out. If the fates permit, he will complete this abode, this sweet ancestral retreat; and he has consecrated it to his freedom, tranquility, and leisure.”
It was here that Montaigne would write his Essais, a collection of a large number of short subjective essays on various topics published in 1580 that were inspired by his studies in the classics. His written style would go on to influence many great authors and critics, from Shakespeare, Friedrich Nietzsche to Erich Auerbach.
According to Nietzsche, Montaigne ‘truly augmented the joy of living on this Earth’.
At the age of 59, Michel would die from quinsy (peritonsillar abscess), an infection behind the tonsil that led to paralysis of his tongue. A man who loved to talk was rendered speechless.
As Michel de Montaigne once said: “Que sçay-je?” (“What do I know?”), it turns out, he knew an awful lot about life. | https://medium.com/lessons-from-history/the-man-who-spent-a-decade-locked-in-a-tower-9ba001cbad0a | ['Reuben Salsa'] | 2020-10-21 20:02:04.937000+00:00 | ['Ideas', 'Philosophy', 'Salsa', 'History', 'Writing'] |
How To Get Clients Quickly (and Authentically) | Photo by Dan Gold on Unsplash
You would love to do marketing authentically rather than succumb to short-term pressure, deceptive tactics, or hype.
Yet… you need to quickly fill your next course, program, or get some clients now.
How can you get clients ASAP without resorting to conventional marketing strategies that don’t feel right to you?
The key is to care more for your audience than they usually experience.
Here are 5 strategies to consider:
1. Personal Invitation
A client of mine successfully fills her woman’s group every year.
Another client successfully fills her multi-month course.
Both of them do a lot of personal invitations to fill these offerings. They thoughtfully email, message, and even call some of their ideal participants. They mention why they thought of inviting them specifically.
No need to pressure anyone, of course. However, care enough for that person to let them know you would enjoy having them at your event, and that you believe they’ll really benefit, for reasons you name that are specific to them, based on what you know of them.
And before the deadline, care enough to personally send a friendly, gentle reminder.
Personal, thoughtful invitations will always stand out, because most marketing is done en masse, without caring for individuals.
2. Create Content Related to Your Offer
If you haven’t been posting content consistently, your audience might not trust you enough to sign up for your offer (service, product, or event).
Although it’s a bit late in the process, you can still make a difference in your sign up rate by sharing some content now. Focus on creating content that is related to your offer. It will pique the interest of your potential clients and create some context that makes your offer more relevant to them.
Some ideas:
Share a success story about your offer (event / product / service) — what were they (or you) struggling with, or hoping to achieve, and how did they (or you) engage with your product, and what transformation did they/you experience? And what’s 1 tip the reader can use immediately? What’s a piece of foundational knowledge that the audience can benefit from, and that would make them better fit for your offer? (If they only understood _______ better, they would use or benefit from your offer more quickly, or they would be a better client.) A few key lessons learned from your work with clients. The story of why and how you created this offer. Why do you care? It’s not just for the money, obviously. What moves you to do this work? Look at the social media sharings from your ideal clients. The fact that they share those posts means they resonate with that type of content. Can you make similar content… except, related to your offer?
In all of these pieces of content, weave in the mention of your offering.
Also, experiment with what works better for your audience by trying different formats:
Long text (for example, this blog post you’re reading!)
Short text (example)
Short text with link (example)
Long video (example)
Short video (example)
Video interview (example)
Image (example)
The more variety you experiment with, the more you understand what works for your audience.
3. Distribution & Repetition
It doesn’t matter how much content you post, if few people see it.
And those who see it — have they seen it enough times?
People generally don’t buy something the first time they see it. It can take 3–5 times (some advertisers say 7–20 times!) before they make a decision.
You’ve probably seen an ad, pay no attention, and then by the 3rd time you see it, you might wonder “What is this thing anyway?” and then you actually take a look.
So don’t be shy about posting your offer several times over a few days or weeks. Aim for 5–7 times, if you still need to fill the offer. Ideally, post about it in different formats or in different ways to help prevent “ad fatigue”, i.e. to avoid annoying your audience.
Come back to the deeper purpose of your advertising: it’s not to make money, but to fulfill the mission of your business by serving people with a good product. Your repeated announcements about your offering are there to make sure that the people who need your product will see it.
Besides emailing your subscribers and posting to social media, the simplest way to reach lots of the right people is via Facebook Ads.
If you haven’t yet learned it, you might consider my online course about Facebook Ads.
4. Influencers / Promotional Partners
If you are willing to reach out to influencers, you may be able to fill your offer quickly.
An influencer is someone who has an audience similar to your ideal clients. Examples:
Bloggers
YouTube channels
Facebook Page owners
Instagram personalities
Email Newsletter writers
Twitter accounts with a large following
Facebook friends who have thousands of friends
Any friend or supporter of yours who is a great connector
Reach out to influencers from an intention of creating a win-win:
You know the influencer well enough to believe that your offer is something they are personally interested in, and would enjoy getting complimentary access to.
Their audience is a great fit and would likely be grateful to know about your offer.
Thoughtfully reach out to influencers when the above criteria match.
Just like when you do individual outreach to potential clients, remember to treat your influencer like a VIP and do some research about them before you send them a message. Ideally you will have created a bit of a relationship with them by having engaged with their posts for at least a few weeks.
Of course, not everyone you reach out to will say yes — it will be a minority — but if you are willing to do this, you can quickly reach many new potential clients.
To dive deeper into this strategy, consider my online course for creating simple and effective entrepreneurial collaborations.
5. Improve Your Offer
One of the biggest reasons your audience isn’t signing up for your offer is that the title or description doesn’t resonate with what they want at this time.
Maybe you wrote the marketing from your own intuition… and perhaps you’ve been very much in your own head. You haven’t had enough conversations with your ideal client.
Perhaps you wrote it with some fear (“I really hope they sign up!”) and that kind of energy may be turning off your audience.
What is needed is for you to empathize with your ideal client, and then make a joyful invitation in your copy.
There are different ways to understand them better. You can look at their social media profiles and see what they’ve been posting lately. That gives you an indication of their state of mind and heart.
The best way, however, is to have a conversation. Talk with them directly. Try to do this over video so you can see their facial expressions and understand them better.
(If you can’t find enough of them to talk to, you probably know someone who knows your ideal client, and is willing to answer your questions with them in mind.)
Whomever you talk to, here are some questions to guide your audience/market research conversation. Try to put these questions in your own words, so you can ask them naturally, in the context of the moment:
What are they going through right now, that your business can help with?
How are they trying to solve the problem… using what products, services, events, or programs?
What have they tried before? What about it worked well (if anything), and what didn’t work well?
What concerns do they have about an offer like yours, that you can respond to in your marketing copy?
You can have such conversations through email, private messaging, or through a survey. However, the best information (and empathy) emerges from 1–1 video (or in-person) conversations.
If you’ve got a deadline (a workshop to fill), then just aim to have 3 of these conversations, or quickly send a survey to your email list.
The insights you gain will then allow you to re-write your marketing copy from deeper understanding of them.
Additional reading: 7 marketing copy ideas
When’s the best time to plant a tree?
Ultimately, what you need to do starting now is to build an audience that will allow you to more easily fill your future courses or client roster.
It’s tough to expect that, without much of an audience, you can sell anything successfully. It’s too much pressure on you, and on your small audience.
You need enough people to care about your brand (your authentic presence) to have enough people consider your offers.
It’s not reasonable to expect great results with last minute promotions like this… it’s like forcing a tree to produce fruit when you just planted the seed recently.
There is an organic process of people learning about your business… beginning to trust you… and coming to believe that you can really help them.
There’s also the organic process in their own life of coming to a place that they finally need and want your help.
Authentic marketing is about being of genuine service to your audience, and it’s tough for them to feel that you care, if you’re just selling to them all the time.
Therefore, dedicate yourself to showing up consistently to be of real service to your audience, educating and inspiring them through your content. Then, the next time you need to fill an offer, your audience will already trust you.
I wish you gratitude and joy as you go about connecting with your audience. ❦ | https://georgekao.medium.com/how-to-get-clients-quickly-and-authentically-10dfab8aee0c | ['George Kao'] | 2020-03-27 19:24:54.433000+00:00 | ['Entrepreneurship', 'Authentic Marketing'] |
3 Steps to Correcting Faulty Intuitions | Thinking takes time and effort, meaning we can’t apply it to everything we do, otherwise it would take us too long to get anywhere.
For this reason we rely a great deal on intuition. It helps us navigate familiar roads, put our socks and shoes on, and get breakfast together. For the most part, it does its job and lets us be more selective in what we think about.
But intuition doesn’t always do a good job, and what we need in these cases is to inject some cognition. The question is whether we’re very good at recognising when this is the case, and how effective we are at following through.
Gut Override
In 2005, psychologist Shane Frederick developed the cognitive reflection test. It aims to measure how effective we are at identifying a flaw in our intuition and using our head to resolve the problem.
The test consists of 3 questions:
A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
In each case, intuition has something to say.
But given the nature of the questions and the context in which I’m asking them, it should be obvious that they are designed so that the intuitive answer is wrong.
They are trick questions, but they nevertheless highlight how intuitions go awry, and they give us a chance to see how good people are at recovering.
To succeed on this test requires wrestling with your intuition, and being able to do that is a skill which reaches far beyond the bounds of these questions.
How to Stage a Successful Override
Succeeding on the cognitive reflection test requires a few things.
1. You have to recognise the problem. If nothing seems amiss, you won’t have a reason to question the gut response.
2. When conflict has been detected, you need the motivation to override it. If the problem is insignificant and you don’t have the energy, you won’t bother to correct the problem you recognised.
3. You need the right tools for the override. What good is identifying the problem if you don’t know how to fix it?
A Closer Look at Each Level
Detection
To know you have to question the intuitive response, you need to have an intuitive response. If there were none, you would have no choice but to work through the problem, or make do without an answer.
But that intuitive response also needs to make itself open to questioning. There needs to be the hint of an error, a warning label of low confidence.
In a sense, recognising that you have to question your intuition is itself an intuition.
Here’s the widget question again:
If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
While you bear witness to the gut yelling 100!, you might feel that there’s a problem with that answer and go in to check it out.
If the intuition of 100 is unaccompanied by a feeling of uncertainty or caution, the default is to trust your gut.
Research into processing fluency is relevant here. Studies have consistently shown that the easier something is to process, the more likely we are to trust it.
For instance, one study found that how easy a font is to read can change how critically people treat the information:
When asked how many animals of each type Moses took on the ark, a clear font gets more answers of 2, while a difficult-to-read font leads more people to recognise that Moses never had an ark; it was Noah.
In response to this research, some designers created a font called Sans Forgetica. It’s deliberately difficult to read, with the aim that it engages the reader’s mental muscles, making them more critical of the information and helping them remember the content.
In a review of the research on fluency, Adam Alter and Daniel Oppenheimer write:
“Whether a stimulus is easy to perceive visually, easy to process linguistically, easy to retrieve from memory, or semantically activated, people believe that it is truer than its less fluently processed counterparts.”
In the widget question, the nature of the question is hidden in a pattern so easy to pick up that it’s difficult to avoid. If one pattern is 5 5 5, and the other 100 [?] 100, it’s not hard to fill in the gap.
But if 5 machines make 5 widgets in 5 minutes, each machine takes 5 minutes to make 1 widget. That means 100 machines can make 100 widgets in 5 minutes.
Motivation
We don’t often do things for no reason. We’ll think if we expect it to pay off somehow.
Those payoffs might be external, such as earning money or avoiding being punished; or internal, such as exploring something out of curiosity or the enjoyment of a challenge.
Sometimes we think for the pleasure of thinking, other times we’re pushed into thinking by our environment. Sometimes we think while something is at stake, other times we think just to pass the time.
When it comes to what we do with a dubious intuition, the role of motivation is key. If the problems and uncertainties presented by our intuition tug at the right heart-string, we’ll give it our attention.
Here’s the bat and ball question again:
If a bat and a ball together cost $1.10, and the bat costs $1 more than the ball, how much does the ball cost?
It turns out, people aren’t very good at finding the right answer. In Thinking, Fast and Slow, Daniel Kahneman wrote that “More than 50% of students at Harvard, MIT, and Princeton gave the intuitive — incorrect — answer.”
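To spell out the arithmetic that the intuitive answer skips (a quick check I’m adding here, not part of Kahneman’s quote): if the ball cost the intuitive 10 cents, the bat would cost $1.10 and the pair would total $1.20. Written out properly, ball + (ball + $1.00) = $1.10, so 2 × ball = $0.10, and the ball costs 5 cents.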
However, watching other people as they make mistakes doesn’t offer an easy way to distinguish between not detecting an error and not having the incentive to fix it.
For that, we can look at research by Wim De Neys, Sandrine Rossi, and Olivier Houdé. They found that people who fail the cognitive reflection test aren’t oblivious of their errors, and that while they go with the intuitive response, they are less confident in it.
The fact that they were aware there might be a problem, but went along with the intuitive answer anyway, suggests there wasn’t enough of an incentive for them to bother trying to fix it.
While some people enjoy engaging with these kinds of problems and others prefer to invest minimal effort, some researchers have argued that, in general, people are intellectually lazy: that we’re cognitive misers.
“The rule that human beings seem to follow is to engage the brain only when all else fails — and usually not even then.” — David Hull, Science and Selection
There is a trade-off between the ease and speed of intuitive thought and the slow, expensive process of deliberate thinking, and we have a natural preference for the easy route.
We have to be rather stingy with our mental effort because life is short and attention narrow. Each person will draw their own line between worth thinking about and not worth thinking about. Not all problems are worth solving.
Mindware
Thinking alone far from guarantees the correct response. You need to know how to solve the problem.
In the cognitive reflection test, you need to know the required mathematical operations, or at least know how you can learn them. Without that, even if you recognise an error and have the desire to fix it, you won’t.
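The lily pad question from earlier is a good example of this (the worked answer is my addition): the piece of mindware you need is simply how doubling works. If the patch doubles every day and covers the whole lake on day 48, it must have covered half the lake the day before, on day 47, not on day 24 as the intuitive halving suggests.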
Effort alone can backfire. It’s too easy to go looking for reasons to stick with the intuitive response, or to seek out evidence that supports what we want to be right. We could get to the point of overthinking, ruminating and going around in circles without ever making progress.
Effort has to be directed in the right way to get the right response. This is where learning comes in. We need a repertoire of rules, processes, systems, and models, that we can use to understand and solve problems.
Keith Stanovich calls these tools mindware. He writes:
“The mindware necessary to perform well on heuristics and biases tasks is disparate, encompassing knowledge in the domains of probabilistic reasoning, causal reasoning, scientific reasoning and numeracy.”
Many of the intuitions and heuristics we have today exist for a reason.
When we succumb to biased reasoning or a misguided intuition, it is often because some rule that worked in our ancestral past remains embedded in our mind, and kicks into action in situations it wasn’t adapted to.
Rather than mindware, others point to mental models. In The Great Mental Models, Shane Parrish writes:
“These are chunks of knowledge from different disciplines that can be simplified and applied to better understand the world.”
There are many mental models we could learn. They include ideas like supply and demand, opportunity costs, and regression to the mean. There are models for different disciplines, from math to biology and economics.
Parrish recommends a variety of models from different disciplines, as that helps us see problems with different perspectives:
“By default, a typical Engineer will think in systems. A psychologist will think in terms of incentives. A biologist will think in terms of evolution. By putting these disciplines together in our head, we can walk around a problem in a three dimensional way.”
Refining Intuition
At first, the mental models we learn will be applied at the third stage. We will recognise a problem, we will want to fix it, and we will have the tools necessary to think it through and find the correct solution.
Over time, the mental models could themselves become the intuitive processes. When sufficiently learned they will become automatic.
“Largely subconscious, mental models operate below the surface,” writes Shane. “We’re not generally aware of them and yet they’re the reason when we look at a problem we consider some factors relevant and others irrelevant. They are how we infer causality, match patterns, and draw analogies. They are how we think and reason.”
In a world full of information, statistics, numbers, and opinions, it can prove monumentally difficult to make sense of it. Problems are often far more complex and ambiguous than the difference in price between a bat and ball.
It pays to ensure we are using the right models when we try to understand bigger problems.
Our intuitions might be convincing, but they might also be missing something, or answering an easier question than the one we want answered. We need mental models to alert and orient us towards better solutions. | https://medium.com/understanding-us/the-3-steps-to-correcting-your-faulty-intuitions-11d5397bb132 | ['Sam Brinson'] | 2020-08-07 18:56:17.792000+00:00 | ['Mental Models', 'Problem Solving', 'Intuition', 'Thinking', 'Psychology'] |
Words Are Hard, Okay? | I’m so sorry if I seemed like
I didn’t want to talk.
It was only because, like…
I didn’t want to talk.
Wait, no.
I mean, I have this thing…
It’s pretty dumb actually,
but I think too much, I think,
and I think too… anxiously.
I mean… how do you say?
Remember that thing you did?
I’m just too nervous to do it.
What? No, it doesn't matter what you did,
cause whatever it is, I’m scared of it.
Oof, did I say it… okay?
I just mean everything you do
in your, you know, life
is so scary for me to go through
and sometimes it doesn’t even feel like real life.
So you should maybe know,
I mean if you would like to,
of course, you don't have to though…
I’m just the type to…
deep breath
…worry I might seem mean
and ruin your mood,
just know that’s never what I mean,
I don’t wanna be rude.
I’m sorry if I’m being too annoying,
and then I’m sorry for when…
I inevitably apologize for being too annoying.
I’m sorry once again.
It’s funny I’m so bad at words
and yet sometimes I call myself a writer
when I’m all fear or at least two-thirds.
So maybe not a writer, just a liar.
So… if I seemed like I didn’t wanna talk,
it was because I was too nervous,
and sometimes I can’t say when I talk…
so I end up wordless.
Did that make sense?
Well, the curse of… you know what
has now affected my poems also,
so I’m sorry for making you read all of that,
and I’m also sorry for rhyming “also” with “also”.
Back to not talking about not talking. | https://medium.com/scribe/words-are-hard-okay-77e772000dca | ['Veronica Georgieva'] | 2020-11-14 13:45:51.320000+00:00 | ['Poetry', 'Anxiety', 'Poem', 'Mental Health', 'Conversations'] |
AI and the Law: Setting the Stage | While there is reasonable hope that superhuman killer robots won’t catch us anytime soon, narrower types of AI-based technologies have started changing our daily lives: AI applications are rolled out at an accelerated pace in schools, homes, and hospitals, with digital leaders such as high tech, telecom, and financial services among the early adopters. AI promises enormous benefits for the social good and can improve human well-being, safety, and productivity, as anecdotal evidence suggests. But it also poses significant risks for workers, developers, firms, and governments alike, and we as a society are only beginning to understand the ethical, legal, and regulatory challenges associated with AI, as well as develop appropriate governance models and responses.
The Revolution by Fonytas, licensed under the Creative Commons Attribution-Share Alike 4.0 International license.
Having the privilege to contribute to some of the conversations and initiatives in this thematic context, I plan to share a series of observations, reflections, and points of view over the course of the summer with a focus on the governance of AI. In this opening post, I share some initial thoughts regarding the role of law in the age of AI. Guiding themes and questions I hope to explore, here and over time, include the following: What can we expect from the legal system as we deal with both the risks and benefits of AI-based applications? How can (and should) the law approach the multi-faceted AI phenomenon? How can we prioritize among the many emerging legal and regulatory issues, and what tools are available in the toolbox of lawmakers and regulators? How might the law deal with the (potentially distributed) nature of AI applications? More fundamentally, what is the relevance of a law vis-à-vis a powerful technology such as AI? What can we learn from past cycles of technological innovation as we approach these questions? How does law interact with other forms of governance? How important is the role of law in a time where AI starts to embrace the law itself? How can we build a learning legal system and measure progress over time?
I hope this Medium series serves as a starting point for a lively debate across disciplines, boundaries, and geographies. To be sure, what I am going to share in these articles is very much in beta and subject to revision and new insight, and I’m looking forward to hearing and learning from all of you. Let’s begin with some initial observations.
Lawmakers and regulators need to look at AI not as a homogenous technology, but a set of techniques and methods that will be deployed in specific and increasingly diversified applications. There is currently no generally agreed-upon definition of AI. What is important to understand from a technical perspective is that AI is not a single, homogenous technology, but a rich set of subdisciplines, methods, and tools that bring together areas such as speech recognition, computer vision, machine translation, reasoning, attention and memory, robotics and control, etc. These techniques are used in a broad range of applications, spanning areas as diverse as health diagnostics, educational tutoring, autonomous driving, or sentencing in the criminal justice context, to name just a few areas of great societal importance. From a legal and regulatory perspective, the term AI is often used to describe a quality that cuts across some of these applications: the degree of autonomy of such systems that impact human behavior and evolve dynamically in ways that are at times even surprising to their developers. Either way, whether using a more technical or phenomenological definition, the justification and timing of any legal or regulatory intervention as well as the selection of governance instruments will require a careful contextual analysis in order to be technically workable and avoid both overgeneralization as well as unintended consequences.
Given the breadth and scope of application, AI-based technologies are expected to trigger a myriad of legal and regulatory issues not only at the intersections of data and algorithms, but also of infrastructures and humans. As a growing number of increasingly impactful AI technologies make their way out of research labs and turn into industry applications, legal and regulatory systems will be confronted with a multitude of issues of different levels of complexity that need to be addressed. Both lawmakers and regulators as well as other actors will be affected by the pressure that AI-based applications place on the legal system (here as a response system), including courts, law enforcement, and lawyers, which highlights the importance of knowledge transfer and education (more on this point below). Given the (relative) speed of development, scale, and potential impact of AI development and deployment, lawmakers and regulators will have to prioritize among the issues to be addressed in order to ensure the quality of legal processes and outcomes — and to avoid unintended consequences of interventions. Trending issues that seem to have a relatively high priority include questions around bias and discrimination of AI-based applications, security vulnerabilities, privacy implications of such highly interconnected systems, conceptions of ownership and intellectual property rights over AI creative works, and issues related to liability of AI systems, with intermediary liability perhaps at the forefront. While an analytical framework to categorize these legal questions is currently missing, one might consider a layered model such as a version of the interop “cake model” developed elsewhere in order to map and cluster these emerging issues.
Gesture Recognition by Comixboy, licensed under the Creative Commons Attribution 2.5 Generic license.
When considering (or anticipating) possible responses by the law vis-à-vis AI innovation, it might be helpful to differentiate between application-specific and cross-cutting legal and regulatory issues. As noted, AI-based technologies will affect almost all areas of society. From a legal and regulatory perspective, it is important to understand that new applications and systems driven by AI will not evolve and be deployed in a vacuum. In fact, many areas where AI is expected to have the biggest impact are already heavily regulated industries — consider the transportation, health, and finance sectors. Many of the emerging legal issues around specific AI applications will need to be explored in these “sectoral” contexts. In these areas, the legal system is likely to follow traditional response patterns when dealing with technological innovation, with a default on the application of existing norms to the new phenomenon and, where necessary, gradual reform of existing laws. Take the recently approved German regulation of self-driving cars as an example, which came in the form of an amendment to the existing Road Traffic Act. In parallel, a set of cross-cutting issues is emerging, which will likely be more challenging to deal with and might require more substantive innovation within the legal system itself. Consider for instance questions about appropriate levels of interoperability in the AI ecosystem at the technical, data, and platform layers as well as among many different players, issues related to diversity and inclusion, and evolving notions of the transparency, accountability, explainability, and fairness of AI systems.
Information asymmetries and high degrees of uncertainty pose particular difficulty to the design of appropriate legal and regulatory responses to AI innovations — and require learning systems. AI-based applications — which are typically perceived as “black boxes” — affect a significant number of people, yet there are nonetheless relatively few people who develop and understand AI-based technologies. This information asymmetry also exists between the technical AI experts on the one hand, and actors in the legal and regulatory systems on the other hand, who are both involved in the design of appropriate legal and regulatory regimes, which points to a significant educational and translational challenge. Further, even technical experts may disagree on certain issues the law will need to address — for instance, to what extent a given AI system can or should be explained with respect to individual decisions made by such systems. These conditions of uncertainty in terms of available knowledge about AI technology are amplified by normative uncertainties: people and societies will need time to build consensus among values, ethics, and social norm baselines that can guide future legislation and regulation, the latter two of which also have to manage value trade-offs. Together, lawmakers and regulators have to deal with a tech environment characterized by uncertainty and complexity, paired with business dynamics that seem to reward time-to-market at all cost, highlighting the importance of creating highly adaptive and responsive legal systems that can be adjusted as new insights become available. This is not a trivial institutional challenge for the legal system and will likely require new instruments for learning and feedback-loops, beyond traditional sunset clauses and periodic reviews. Approaches such as regulation 2.0, which relies on dynamic, real-time, and data-driven accountability models, might provide interesting starting points.
The responses to a variety of legal and regulatory issues across different areas of distributed applications will likely result in a complex set of sector-specific norms, which are likely to vary across jurisdictions. Different legal and regulatory regimes aimed at governing the same phenomenon are of course not new and are closely linked to the idea of jurisdiction. In fact, the competition among jurisdictions and their respective regimes is often said to have positive effects by serving as a source of learning and potentially a force for a “race to the top.” However, discrepancies among legal regimes can also create barriers when harnessing the full benefits of the new technology. Examples include not only differences in law across nation states or federal and/or state jurisdictions, but also normative differences among different sectors. Consider, for example, the different approaches to privacy and data protection in the US vs. Europe and the implications for data transfers, an autonomous vehicle crossing state boundaries, or barriers to sharing data for public health research across sectors due to diverging privacy standards. These differences might affect the application as well as the development of AI tech itself. For instance, it is argued that the relatively lax privacy standards in China have contributed to its role as a leader in facial recognition technology. In the age of AI, the creation of appropriate levels of legal interoperability — the working together of legal norms across different bodies and hierarchy of norms and among jurisdictions — is likely to become a key topic when designing next-generation laws and regulations.
Law and regulation may constrain behavior yet also act as enablers and levelers — and are powerful tools as we aim for the development of AI for social good. In debates about the relationship between digital technology and the law, the legal system and regulation are often characterized as an impediment to innovation, as a body of norms that tells people what not to do. Such a characterization of law is inadequate and unhelpful, as some of my previous research argues. In fact, law serves several different functions, among them the role of an enabler and a leveler. The emerging debate about the “regulation of AI” will benefit from a more nuanced understanding of the functions of law and its interplay with innovation. Not only has the law already played an enabling role in the development of a growing AI ecosystem — consider the role of IP (such as patents and trade secrets) and contract law when looking at the business models of the big AI companies, or the importance of immigration law when considering the quest for talent — but law will also set the ground for the market entry of many AI-based applications, including autonomous vehicles, the use of AI-based technology in schools, the health sector, smart cities, and the like. Similarly, law’s performance in the AI context is not only about managing its risk, but is also about principled ways to unleash its full benefits, particularly for the social good — which might require managing adequate levels of openness of the AI ecosystem over time. In order to serve these functions, law needs to overcome its negative reputation in large parts of the tech community, and legal scholars and practitioners play an important educational and translational role in this respect.
Innovation by Boegh, Creative Commons Attribution 2.0 Generic license.
Law is one important approach to the governance of AI-based technologies. But lawmakers and regulators have to consider the full potential of available instruments in the governance toolbox. Over the past two decades of debate about the regulation of distributed technologies with global impact, rough consensus has emerged in the scholarly community that a governance approach is often the most promising conceptual starting point when looking for appropriate “rules of the game” for a new technology, spanning a diverse set of norms, control mechanisms, and distributed actors that characterize the post-regulatory state. At a fundamental level, a governance approach to AI-based technologies embraces and activates a variety of modes of regulation, including technology, social norms, markets and law, and combines these instruments with a blended governance framework. (The idea of combining different forms of regulation beyond law is not new and, as applied to the information environment, is deeply anchored in the Chicago-school and was popularized by Lawrence Lessig.) From this ‘blended governance’ perspective, the main challenge is to identify and activate the most efficient, effective, and legitimate modalities for any given issue, and to successfully orchestrate the interplay among them. A series of advanced regulatory models that have been developed over the past decades (such as the active matrix theory, polycentric governance, hybrid regulation, and mesh regulation, among others) can provide conceptual guidance on how such blended approaches might be designed and applied across multiple layers of governance. From a process perspective, AI governance will require distributed multi-stakeholder involvement, typically bringing together civil society, government, the private sector, and the technical and academic community — collaborating across the different phases of a governance lifecycle. Again, lessons regarding the promise and limitations of multi-stakeholder approaches can be drawn from other areas, including Internet governance, nanotechnology regulation, or gene drive governance, to name just a few.
In a world of advanced AI technologies and new governance approaches towards them, the law, the rule of law, and human rights remain critical bodies of norms. The previous paragraph introduced a broader governance perspective when it comes to the “regulation” (broadly defined) of issues associated with AI-based applications. It characterized the law as only one, albeit important, instrument among others. Critics argue that in such a “regulatory paradigm,” law is typically reduced to a neutral instrument for social engineering in view of certain policy goals and can be replaced or mixed with other tools depending on its effectiveness and efficiency. A relational conception of law, however, sees it neither as instrumentalist nor autonomous. Rather, such a conception highlights the normativity of law as an institutional order that guides individuals, corporations, governments, and other actors in society, ultimately aiming (according to one prominent school of thought) for justice, legal certainty, and purposiveness. Such a normative conception of law (or at least a version of it), which takes seriously the autonomy of the individual human actor, seems particularly relevant and valuable as a perspective in the age of AI, where technology starts to make decisions that were previously left to the individual human driver, news reader, voter, judge, etc. A relational conception of law also sees the interaction of law and technology as co-constitutive, both in terms of design and usage — opening the door for a more productive and forward-looking conversation about the governance of AI systems. As one starting point for such a dialogue, consider the notion of society-in-the-loop. Recent initiatives such as the IEEE Global Initiative on Ethically Aligned Design further illustrate how fundamental norms embedded in law might guide the creation and design of AI in the future, and how human rights might serve a source of AI ethics when aiming for the social good, at least in the Western hemisphere.
As AI applies to the legal system itself, however, the rule of law might have to be re-imagined and the law re-coded in the longer run. The rise of AI leads not only to questions about the ways in which the legal system can or should regulate it in its various manifestations, but also the application of AI-based technologies to law itself. Examples of this include the use of AI that supports the (human) application of law, for instance to improve governmental efficiency and effectiveness when it comes to the allocation of resources, or to aid auditing and law enforcement functions. More than simply offering support, emerging AI systems may also increasingly guide decisions regarding the application of law. “Adjudication by algorithms” is likely to play a role in areas where risk-based forecasts are central to the application of law. Finally, the future relationship between AI and the law is likely to become even more deeply intertwined, as demonstrated by the idea of embedding legal norms (and even human rights, see above) into AI systems by design. Implementations of such approaches might take different forms, including “hardwiring” autonomous systems in such ways that they obey the law, or by creating AI oversight programs (“AI guardians”) to watch over operational ones. Finally, AI-based technologies are likely to be involved in the future creation of law, for instance through “rule-making by robots,” where machine learning meets agent-based modeling, or the vision of an AI-based “legal singularity.” At least some of these scenarios might eventually require novel approaches and a reimagination of the role of law in its many formal and procedural aspects in order to translate them into the world of AI, and as such, some of today’s laws will need to be re-coded.
Thanks to the Special Projects Berkman Klein Center summer interns for research assistance and support. | https://medium.com/berkman-klein-center/ai-and-the-law-setting-the-stage-48516fda1b11 | ['Urs Gasser'] | 2017-06-26 21:42:41.642000+00:00 | ['Governance And Tech', 'Algorithms', 'Law', 'Artificial Intelligence', 'Data'] |
How to Quiet Your Inner Critic | You know, that voice that’s telling you you’re worthless.
That voice. We’ve all heard it.
That insidious little bastard that gets into our heads and pollutes our thoughts and our actions.
It tells us we can’t do it. It tells us no one cares. It tells us to give up. It tells us to keep silent.
It destroys us. Moment after moment. Day after day.
Does it drive you crazy? Does it bring you to tears? Do you hate it?
We all have an inner critic. And whenever it decides to intervene in our lives, we wish it would just fuck off. Unfortunately, it’s not that simple.
You can’t wave a magic wand and make it disappear. You can, however, learn some practical tools for dialing down your inner critic’s piercing and debilitating voice. That’s the aim of this article.
Below are 5 steps you can take today to begin to dial down your inner critic. If that sounds good to you, let’s get started.
Step #1: Learn to identify when your inner critic is speaking
This one might seem obvious, but it’s more difficult than you think.
Typically, we identify with our thoughts. We claim them as “me”, “mine”, or “I”. So, when the inner critic speaks, we internalize those thoughts and we take them to be a truth about us. It can be a challenge to tease apart what we think about ourselves from what our inner critic is telling us.
Our first step, then, is to learn how to pinpoint the moments when our inner critic is speaking. To catch these moments, pay particular attention to when you’re feeling anxious, distracted, or numb. More generally, your inner critic has likely spoken whenever you perceive a shift in mood toward the negative.
Step #2: Change your inner critic’s thoughts
Once you’ve identified the thoughts belonging to your inner critic, you need to remind yourself that those thoughts don’t belong to you. So, if you think, “I’m too stupid to contribute to this conversation”, change the sentence to “My inner critic thinks, I’m too stupid to contribute to this conversation.”
This small change in how you think will provide a little bit of separation from your inner critic’s thoughts. In time, this separation will decrease your inner critic’s volume and influence.
Continue to do this with all your inner critic’s thoughts until it has become a habit.
Step #3: Have compassion for your inner critic
Now that you’re altering how your inner critic speaks to you, you need to investigate the situation. What is causing your inner critic to speak up? What is it afraid of? What is it ashamed of? What does it wish it could control?
In short, what are the authentic feelings underlying the inner critic’s voice?
Investigating the source of the inner critic’s harsh language will help you understand what it’s trying to accomplish. Usually, your inner critic is expressing fear, shame, or helplessness. In a fucked up sort of way, it’s actually trying to help you.
But, as you know, it’s not helping. It’s just making things worse. Unfortunately, your inner critic is a part of you — and it’s terrified.
So, instead of hating your inner critic, try to have some compassion for it. This is going to be hard because you know it’s just bringing you down. To generate some compassion for your inner critic, try to picture it as a scared child that doesn’t know how to deal with the situation except to lash out.
Use the investigation of your authentic feelings to relate to your inner critic. If you’re honest with yourself, your investigation will have uncovered some difficult emotions. It’s those emotions that your inner critic is reacting to. Agree with your inner critic that those are difficult emotions to deal with and reassure it that you’ll be OK.
In time, having compassion for your inner critic will do far more good than feeling hatred toward it.
Step #4: Write a more realistic explanation
Next, you want to create a new narrative for yourself — not a lie, but one that is closer to reality. If your inner critic said, “I’m too stupid to contribute to this conversation”, write down a more realistic description of the situation.
For example: although I often struggle to add value to conversations, I know this has more to do with my lack of confidence than my lack of intelligence.
We all struggle to put our best foot forward. Some of us more than others. The important point to recognize is that the harsh judgements of our inner critic are usually way off base. They exemplify our worst nightmares and darkest thoughts.
Writing down an alternative narrative helps us to see what happened in a more positive, hopeful, and productive light. But, again, it’s not about lying to ourselves — it’s about recognizing that our inner critic is distorting reality, so we need to do the work of establishing a more realistic perspective.
Step #5: Act in spite of your inner critic
When your inner critic rears its ugly head, we tend to cower in fear. This fearful reaction has become a habit through a lifetime of repetition. Our fear makes it seem impossible to change our reaction, and our inner critic wants to keep it that way.
Thankfully, we were all born with the antidote to our inner critic — bravery.
Each time you hear your inner critic, you have a choice — run away and hide or stand your ground. At first, standing your ground will seem impossible. You’ve spent a lifetime bending to the will of your inner critic. This is why we need to start small.
Are you scared of speaking up at a meeting? Start by asking a benign question.
Are you anxious about a piece of work or assignment? Break off a small chunk and start there.
Do you worry that no one will like you at the party? Narrow in on one person to have a chat with and go from there.
You will be amazed at how even the smallest possible step, taken bravely, can change your outlook on life. It will make you realize that you’re not as trapped by the thinking of your inner critic as you think you are. And this will give you hope for the future.
Quiet your inner critic
We all have an inner critic and we all want it to go away. Thankfully, there are concrete steps we can take to help make that happen.
It will be easier for some of us and harder for others, but we can all do it. We just have to stick with it.
These tools and practices will help you quiet your inner critic:
1. Learn to identify when your inner critic is speaking.
2. Change your inner critic’s thoughts.
3. Have compassion for your inner critic.
4. Write a more realistic explanation.
5. Act in spite of your inner critic.
Cognitive Transformation | Distributed to Artificial Intelligence
Cognitive Transformation
A technical & architectural overview
Image by ambroo from Pixabay
Most of us in the IT industry have most likely heard of, read about, or already started working with cognitive systems for various practical use cases. Would you like a quick taste? Then keep reading! I have simplified this complex topic as much as possible in this article.
The purpose of this article is to briefly introduce to you what cognitive computing entails, its current progress in the industry, the value proposition for its necessity, and my personal observations and thoughts on trends and future plans.
Cognitive computing is made up of a combination of emerging technologies, processes, and approaches from two disciplines: cognitive science and computer science. It is built on long-term research and development work in Artificial Intelligence (AI). Cognitive computing, as a strong extension of AI, is the simulation of our thought processes in computerised models. From a scientific-discipline perspective, in a nutshell, cognitive computing emerged by combining related attributes of cognitive science, which deals with natural intelligence, and computer science, which deals with artificial intelligence.
Many systems in the industry and research settings can display knowledgeable behaviour. These behavioural elements are the simulation of human thought processes using computerised models.
The value of cognitive computing can be realised by three major capabilities in computer science:
a) Self-learning systems using pattern recognition
b) Data mining and analytics
c) Natural Language Processing
These three key points can enable the creation of autonomous computer systems that can solve business, scientific, academic, and other problems with minimal input or intervention from human beings.
I ask two critical questions while assessing the merits of cognitive transformation:
1. What is the significance of this era needing new solution approaches?
2. What made it a new milestone and steppingstone in our technological transformation?
These two broad questions pop up three key considerations in my analysis:
1. We are stuck in our technological progress!
2. Emerging technologies in computer and cognitive science can offer new capabilities to extend our current capabilities to produce novel solutions.
3. Computers finally started mimicking human thinking to some limited extent.
Whilst there are some skeptics, some of us are beginning to reach a consensus that cognitive computing can be a real game-changer for our productivity and effectiveness in addressing growing and sticky world problems.
Transformative leaders keep emphasizing that we need new approaches and more productive, agile ways of working to meet our growing demands on this aging planet. The prime premise is articulated as “today’s problems cannot be solved with the technologies of the past”. This emphasis has resonated with computer and cognitive scientists, and strong thought leadership has emerged globally.
The ultimate goal of cognitive systems is the capability to solve complex and time-consuming problems without human intervention. This statement has tremendous implications on our technological development and consequentially in economies. We attempt to bring the technology to such a state that it is almost autonomous in solving our difficult problems faster and more efficiently than we perform as human beings.
From an emerging-technologies perspective, the key contributions to cognitive computing have come from research in deep machine learning, neural networks, big data analytics, data mining, predictive analytics, smart embedded objects, mobility, and natural language processing. We have been working with these technologies for decades; however, the fusion of these technologies in an integrated manner has now started creating synergistic outcomes for the progress and rapid growth of cognitive computing.
Even though there is a massive debate going on in various forums, in my humble opinion, artificial intelligence and cognitive computing are two different fields from developmental and scientific discipline perspectives. However, they are closely and tightly interrelated.
The focus in cognitive computing is that even though computers learn by themselves, the process is still managed and controlled by human beings. Looking at it from the same perspective, the AI literature envisages that the augmented intelligence of machines can surpass human capability and hence that machines can become autonomous. This premise has been a controversial topic, and AI progress has, rightly or wrongly, created fear in society. We need to learn more about the implications of emerging technologies, increase our knowledge experimentally across disciplines, and gain new experiences in mass collaboration to deal with this fear and leverage the augmented intelligence of technological offerings.
These comparative observations from my interactions with thought-leading collaborators in industry and academia revealed an interesting sentiment that I wanted to share here. Whilst AI pessimistically scares people with the prospect of losing control over machines, cognitive computing enjoys an optimistic acceptance for its usefulness without dominating and controlling our lives.
To simplify this view metaphorically, AI is like an aggressive boss entering our lives as a feared controller, whereas cognitive computing is considered more like a docile personal assistant who facilitates the process, corrects our errors gently, improves our tasks, solves our problems faster, and makes our jobs and lives easier.
Photo by Nick Fewings on Unsplash
From a technical point of view, cognitive systems acquire knowledge from vast amounts of data turned into processed information using various analytical techniques: descriptive, prescriptive, predictive, and semantic. Big data analytics have been the key enabler for generating information and turning it into knowledge.
Cognitive systems transform content into context using confidence-weighted responses and supporting pieces of evidence. For example, the machine learning algorithms in these systems refine the way they use patterns over many repeated loops. This specific way of processing data enables the systems to anticipate new problems and hence generate new solution models. This approach is characterised as deep learning, which contributes to the functional requirements of cognitive systems.
So far AI, as a growing discipline in computer science, has achieved many milestones in various fragmented forms. For example, there is a growing body of knowledge in expert systems, neural networks, virtual reality, and robotics. The challenge has been integrating these fragmented pieces into coherent capabilities and value propositions. Based on this premise, cogent speculative ideas prevailed that a cognitive approach must leverage AI technologies and integrate them for productive outcomes and value propositions.
From a historical point of view, computers demonstrated fast calculations and data processing. However, they struggled understanding natural language or image recognition. With the introduction of AI, they started having these new capabilities. With integrated capabilities introduced by cognitive computing, computers now can learn in a way mimicking human learning.
From the industry point of view, as you may notice in the media, there is a tremendous focus on cognitive computing. One of the recent remarkable developments in cognitive computing was achieved with the implementation of IBM’s Watson. It is used in practice as a support system, for example by doctors. It helps collate the deep knowledge around a patient’s condition and history, links it to established journal articles, medical best practices, and diagnostic tools, and, by analysing these factors in an integrated manner, rapidly provides informed advice to health professionals.
Other leading technology organisations such as Google, Microsoft, Samsung, Amazon, and Apple have made a considerable amount of progress with cognitive computing. For example, most of us are familiar with Siri, Cortana, Bixby, Alexa, and Google Assistant, and interact with them using natural language every day.
There can be limitless use cases for cognitive computing. Some areas which are leveraging the capabilities of cognitive computing are cybersecurity, insurance, governance, education, climate, financial, manufacturing, and medical solutions.
The future of cognitive computing can be outlined in three key words: discovery, engagement, and decision. In the near future, cognitive systems are expected to be able to simulate how the brain truly works. Before long, they can help us understand complex issues that we couldn’t grasp before, manage complex risks with informed decisions, and gain insight for solving our most difficult problems. These goals can be achieved with cross-discipline studies and tight integration among multiple disciplines towards the fundamental shared goals and objectives of humanity.
From a practical point of view, cognitive computing also interrelates with Cloud Computing and uses the Cloud service model efficiently as its enabling hosting infrastructure. It leverages Internet technologies and contributes to further development of IoT (Internet of Things). In the near future, we can see the creation of a new integrated platform of Cognitive, Cloud, IoT, Mobility, integrated with Analytics and Big Data ubiquitously in the workplace, homes, schools, shopping centres, banks, entertainment centres, and everywhere else that we can imagine.
This is a topic close to my heart, with a strong interest in cognitive science as a technologist, hence I serve as an advocate of the topic by creating awareness in all walks of my life. I believe that it is time to embrace cognitive computing not only for its business value but also for potential benefits that it can offer to our society as we have been yearning and fantasizing as depicted in science fiction books and movies for many decades.
You are welcome to join my 100K+ mailing list, to collaborate, enhance your network, and receive a technology newsletter reflecting my industry experience.
The Couple In Economy | The Couple In Economy
Short character study written on the train home
1972
The affair had begun in late summer, as most seem to do, the leaves flooding the grasses and the sun dimming in heat and hours. She was young and unextraordinary, he was wealthy and bored of his wife. And so it has been, is, and will be in the lives of thousands of lovers yesterday, today, and tomorrow.
And yet, that morning, something felt strange to Boston’s commuters about this particular extramarital liaison.
June Carmick was curiously plain, in that way which makes one peer closer over the folded pages of a morning newspaper.
She had an unusually prominent jaw, an ugly, bulbous quality to her nose and a hairline that seemed to start too far up on her head. Still, her complexion was good, one supposed, unspoilt by cosmetics or scrubbing, and her eyes had a greenish light to them that could almost be attractive when she spoke with vigour about her latest paperback or purse.
A dull creature, but not quite ugly enough or stupid enough to be worthy of comment. Perhaps that made it worse. In the shudder of carriages, as she locked hands with the doting, handsome man who grinned down on her, you could almost feel the loud, echoing chorus of bemused questions from the travellers.
Why her?
The man accompanying her was a catch, but a married one. Handsome, late fifties, sprawling in a way which suggested money was not the reason he was travelling with her in economy class. No, he’d just come back from the suburbs with her, some unimpressive tenement block no doubt with peeling wallpaper, hard mattresses and obnoxiously loud cats. But he had stared at her, in the low swinging light of the carriage before the sun rose, like she was more beautiful than anyone in the world.
She worked in the typing pool, he was a partner. Isn’t that the way most of these stories start? They were discreet, her eagerly so in the desire to keep him, and him perhaps more so in the desire to keep the liaisons from his formidable socialite wife. Who knows how it began, this strange joining of an unextraordinary woman with an extraordinary man? Perhaps he grabbed her hand one day in an elevator and asked her to join him for coffee as the September rain wept against the cafeteria windows. Or maybe he wrote three words on the back of a memo with a number and left it on her desk as she chewed on a pencil, tasting the wet graphite and paint in her mouth. It didn’t matter, and it is lost to history. The deed was done, and they were set.
“You need to kill my wife,” he had said to her once, staring up at the bulb that flickered hard against the yellowing paper. She lay flat on her back, silent, wondering if he was joking. “If you kill Susan, we could keep everything. I could marry you.” The thought seemed to form in his mouth before his mind had fully formed the plan. “You have no motive, no connection to her. It would look like an overdose if you did it right.”
She had swallowed hard, turning to watch the slow crease of his eyes that sprang like tree roots from his temples when he was thinking. “Please, Robert, no,” she had said, hoping the words were less weak than she felt. “I’m not a killer.”
But it was useless, and he kissed and pleaded so fiercely that he had barely placed the pills in her hand before she had nodded an unspoken promise. Six small grey pills with a short groove down the middle that, when ground with a pestle, looked white, like sucralose. She didn’t ask what they were, or why. Just whether to ring the front bell of the great whitewashed house at the top of the hill, or ask for the staff to let her in from the kitchens.
“Oh no,” Robert had said, lighting a cigarette with a magnificent flourish of a lighter in one hand. “You must have tea with her. Susan would never accept a gift from a stranger without meeting them. Bad etiquette, June dear.”
“What then?”
“Just put the powder in the sugar bowl when she goes to show you a photo album or something. Say you don’t take it. Say you’re a historian of homes in Massachusetts or something. She’ll like that. Always enjoyed the glamour of academic attention, Susan.”
He said the words like she was already dead.
“How can you be so cold with her? Didn’t you ever love her?”
He laughed, the performative laugh of a man bred for dinner parties and polo. "Darling, in my circles, a marriage is about a merger at best and avoiding a bastard at worst. With Susan, it was about capital and daddy's money. No one," he finished, sounding rehearsed, "ever loved Susan Jones-Vanderst."
She let silence break a few words more from him.
“She’s cold, my dear June. Very cold.”
And with that, he turned back to her on the bare white sheets, cigarette in mouth, as if they had discussed whether to buy croissants on the way to the station that Thursday. | https://madelainehanson.medium.com/the-couple-in-economy-1c0936d70800 | ['Madelaine Hanson'] | 2020-01-05 12:44:13.170000+00:00 | ['Dark Fiction', 'Horror', 'Fiction', 'Short Story', 'Psychology'] |
15,000 children, 800 women die every day mostly of preventable or treatable causes | The world made remarkable progress in child survival between 1990 and 2018. The under-5 mortality rate — the probability of a child dying between birth and his or her 5th birthday — fell by over 50% to 39 deaths per 1,000 live births. Mortality among children of ages 5–14 years also fell by over 50% to 7 deaths per 1,000 children. The reduction in the under-5 mortality rate has accelerated and nearly doubled since 2000. It now declines by 3.8% annually, compared to 2% between 1990 and 2000.
Despite progress, there are still huge disparities in child survival by region. In 2018, around 8 in 10 under-5 deaths occurred in just two regions: Sub-Saharan Africa (54%) and South Asia (28%). Sub-Saharan Africa continues to suffer the highest under-5 mortality rate in the world, followed by South Asia. One in 13 children in Sub-Saharan Africa and 1 in 24 children in South Asia die before their 5th birthday.
According to the UN IGME report, in 2018, 121 countries had already achieved an under-5 mortality rate below the Sustainable Development Goal (SDG) target of 25 or fewer deaths per 1,000 live births. Of the remaining 74 countries, 20 are on track to achieving SDGs if current trends continue. Progress must accelerate in 54 countries to reach the SDG target by 2030.
Somalia, Nigeria, Chad, Central African Republic, Sierra Leone and Guinea are among the countries with the highest under-5 mortality with more than 100 deaths per 1,000 live births. This rate is 20 times more than the rate of under-5 deaths in high income countries (5 deaths per 1,000 live births).
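For readers who want to check the arithmetic, the per-1,000 rates and the "1 in N" odds quoted in this article are two views of the same number. Here is a minimal sketch in R using only the figures cited above; the function name is just for illustration.

```r
# Convert "deaths per 1,000 live births" into "1 in N children" odds and back.
per_1000_to_one_in_n <- function(rate_per_1000) 1000 / rate_per_1000

per_1000_to_one_in_n(39)    # global under-5 rate in 2018 -> roughly 1 in 26 children
per_1000_to_one_in_n(5)     # high-income countries       -> 1 in 200
per_1000_to_one_in_n(100)   # hardest-hit countries       -> 1 in 10 or worse

100 / 5                     # the "20 times more" comparison above

# Going the other way, the regional odds quoted earlier imply roughly:
1000 / 13                   # ~77 deaths per 1,000 live births (Sub-Saharan Africa)
1000 / 24                   # ~42 deaths per 1,000 live births (South Asia)
```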
The neonatal period (the first month of life) is a critical period for child survival. Between birth and the 15th birthday, the risk of dying is highest in the neonatal period. About 40% of deaths under 15 years of age occur in the first month of life. Globally, an estimated 2.5 million newborns died in the first month of life in 2018, which is approximately 7,000 newborns every day. Progress in reducing neonatal mortality is slower than progress for older ages. As a result, the share of neonatal deaths relative to all under-5 deaths has increased.
Around the time of a child’s birth is also critical for mothers.
The global maternal mortality ratio (MMR) declined 38%, from 342 deaths per 100,000 live births in 2000, to 211 in 2017. That said, an estimated 295,000 women died due to complications during pregnancy and childbirth in 2017.
In 2017, Sub-Saharan Africa had the highest MMR among seven regions, at 534 deaths per 100,000 live births. South Asia was second-highest with 163 deaths per 100,000 live births. South Asia saw the greatest decline between 2000 and 2017. Its MMR fell 59%, from 395 to 163 deaths per 100,000 live births.
As with child deaths, over 80% of the global burden of maternal deaths are in Sub-Saharan Africa (68%) and South Asia (19%). East Asia and Pacific account for 7% of global maternal deaths, and the rest of the world share the remaining 5%.
Globally, the lifetime risk of maternal death nearly halved between 2000 and 2017 — from 1 death in 100 women to 1 in 190. The risk is highest in Sub-Saharan Africa because of the combination of high risk per birth and high fertility; in 2017, women there still faced a 1 in 38 lifetime risk of dying due to pregnancy or childbirth.
Reports call for higher coverage of quality care for mother and baby
Both the UN IGME report and the UN MMEIG report urge action to accelerate progress in preventing maternal and child deaths, as too many women and children continue to die from easily preventable and treatable causes. Given that almost half of under-5 deaths occur shortly after birth, many child deaths and maternal deaths can be prevented by reaching high coverage of quality antenatal care, skilled care at birth, and postnatal care for mother and baby. | https://medium.com/world-of-opportunity/15-000-children-800-women-still-die-every-day-mostly-of-preventable-or-treatable-causes-302e3bd6cf1e | ['World Bank'] | 2019-09-20 14:57:15.786000+00:00 | ['Disease', 'Health', 'Data', 'Maternal Health', 'Children Health'] |
Three Different Things: March 19, 2020 | Three Different Things: March 19, 2020
Side Effects of Social Distancing, Social Distancing Simulation, and Managing Your (Newly) Remote Workers
1. Side Effects of Social Distancing (Coronavirus)
BLOOM: I think another thing that’s going to be damaged in the long run, actually, is: if everyone’s working from home, there’s not going to be that kind of workplace discussions, coffee-table discussions, lunchtime talk. And most of that, it turns out, is important for long-run innovation. So day-to-day, we can get along with, you know, if you’re dealing with the same current customers or same ideas. But when you examine businesses or scientists or even the way I do my own research, a lot of that creativity comes from idle time and relaxed discussion with colleagues, and that’s all gone. So I also worry that five, 10 years out from now, we will see this as another lowering in long-run growth rate because we’ve taken a big hit to innovation.
I have a few teenage kids and they often do homework in group hangouts. They're reading a book or doing a report while a device is nearby, maybe on the table or floor, with a few of their friends doing the same thing. It's like a virtual office cubicle… they mutter things to themselves, a friend hears it and chimes in… I think crises like this will end up highlighting the behavior differences between generations. We don't have to be in the same space to have watercooler talk, we just need new norms created by different generations.
2. How to Flatten the Curve, a Social Distancing Simulation
If you want to follow along with the code, I use R, a free language and environment for statistical computing and graphics. You’ll want to install it first if you haven’t already. Installation is usually straightforward.
Flatten the curve, with code! This is a pretty cool write up that’ll get you into R if you haven’t tried it before.
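If you want the flavor of such a simulation without following the linked post, here is a minimal, self-contained sketch in R (the language the post uses). It is a plain discrete-time SIR model in which a social-distancing factor scales the transmission rate; the function and all parameter values are my own illustrative assumptions, not taken from the original article.

```r
# Minimal SIR-style sketch: social distancing scales the transmission rate.
# All parameter values below are illustrative, not from the linked post.
simulate_sir <- function(beta, gamma = 0.1, days = 180, n = 1e6, i0 = 10) {
  S <- n - i0; I <- i0; R <- 0
  infected <- numeric(days)                 # currently infected at the end of each day
  for (t in seq_len(days)) {
    new_infections <- beta * S * I / n
    new_recoveries <- gamma * I
    S <- S - new_infections
    I <- I + new_infections - new_recoveries
    R <- R + new_recoveries
    infected[t] <- I
  }
  infected
}

no_distancing   <- simulate_sir(beta = 0.30)        # business as usual
with_distancing <- simulate_sir(beta = 0.30 * 0.6)  # contacts cut by 40%

# Distancing lowers and delays the peak -- i.e., it flattens the curve.
c(peak_without = max(no_distancing), peak_with = max(with_distancing))

plot(no_distancing, type = "l", xlab = "Day", ylab = "Currently infected")
lines(with_distancing, lty = 2)
```

Even this toy version shows the core point: cutting the contact rate both lowers the peak and slows the spread, which is what keeps hospitals under capacity.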
3. A Guide to Managing Your (Newly) Remote Workers
Establish structured daily check-ins: Many successful remote managers establish a daily call with their remote employees.
A daily check-in is the single most vital thing to do in a virtual environment. Doesn't have to be 1-on-1, but it does have to be daily… Without daily standups, I find work just slips because people lose focus on the work and the rest of the team. | https://medium.com/early-hours/three-different-things-march-19-2020-ba698b3946de | ["Sean O'Brien"] | 2020-03-19 11:49:24.805000+00:00 | ['Programming', 'Data Science', 'Economics', 'Coronavirus']
VC Corner Q&A: SC Moatti of Mighty Capital | SC Moatti is the managing partner of Mighty Capital, a Silicon Valley venture capital firm, and Products That Count, one of the largest and undoubtedly the most influential network of product managers in the world.
Previously, SC built products that billions of people use at Facebook, Nokia and Electronic Arts. Andrew Chen, General Partner at Andreessen Horowitz, called SC “a genius at making mobile products people love.”
— What is Mighty Capital’s mission?
Mighty Capital is an early-growth Silicon Valley venture capital firm. We deliver outstanding returns by investing in great products that are also great businesses, like Airbnb, DigitalOcean and Amplitude. Amplitude calls us the “best value for the dollar invested.”
Our moat is exclusive access to a global business acceleration platform which helps our portfolio companies drive sales, hire talent, validate ideas, and exit via corporate M&A: Products That Count, one of the largest and undoubtedly the most influential networks of product managers. That’s why top-tier venture firms like USVP, Mayfield and Aligned Partners have invited us to co-invest with them, and why Business Insider says we’ve come up with “a new way of doing venture capital.”
— What is one thing you’re excited about right now?
Times of hardship are when the best companies are founded because that’s when the most brilliant entrepreneurs come up with their most creative and resilient ideas.
— Who is one founder we should watch?
Charlie Silver, founding CEO of MissionBio, and Spenser Skate, founding CEO of Amplitude.
— What are the 3 top qualities of every great leader?
Integrity, humility, tenacity.
— What is one question you ask yourself before investing in a company?
If I’m only able to make one investment this year, is this the one?
— What is one thing every founder should ask themselves before walking into a meeting with a potential investor?
Spend as little time as possible telling your story so you can spend as much time as possible building a relationship with your investor.
— What do you think should be in a CEO’s top 3 company priorities?
Sales, hiring and fundraising.
— Favorite business book, blog or podcast?
Crossing the Chasm is a classic. I also often read Stratechery. And at the risk of sounding self-promoting, I love the resources we produce at Products That Count (www.productsthatcount.com); they are meant to help all of us build better products.
— Who is one leader you admire?
Jeff Bezos. I admire his strategic mind and vision, his ability to execute, and his boldness/courage.
— What is one interesting thing most people won’t know about you?
I sing in a rock band.
— What is one piece of advice you’d give every founder?
You are ready.
Ready to make a pitch? Startups looking for an opportunity to pitch Mighty Capital can apply here! | https://medium.com/startup-grind/vc-corner-q-a-sc-moatti-of-mighty-capital-243ad14a7af0 | ['The Startup Grind Team'] | 2020-09-02 15:59:17.383000+00:00 | ['Vc Corner', 'Fundraising', 'Startup', 'Venture Capital', 'VC'] |
Charlie Parker Evolved into “The Birdman” and Changed Jazz Through Bebop | Charlie Parker Evolved into “The Birdman” and Changed Jazz Through Bebop
The iconic saxophonist was born 100 years ago and left a lasting imprint on music and the 20th century during his short life
Photo by Janine Robinson on Unsplash
“Bird’s contribution to all the jazz that came after involved every phase of it. He sure wasn’t the beginning, and he wasn’t the end- but that middle was bulging!” — Dizzy Gillespie.
There is something special about a virtuoso musician that allows them to stand head and shoulders above the rest of the pack.
Since Charlie Parker’s death in 1955, his skillful spirit is still very much alive in the music emanating from the horns of jazz musicians. This year marks what would have been the Birdman’s 100th birthday.
And the sixty-five years since Parker’s passing has seen the lives of Americans change immensely. The Civil Rights Movement occurred shortly after Parker’s death and since that time minorities have made strides towards equality.
The time period leading up to Parker’s discovery was unique in that it allowed for a young and ambitious artist to come onto the jazz scene and make it big. Few people realized at the time that this new icon would be at the forefront of a stylistically new sound known as bebop.
When critiquing a figure of great importance, it is essential to retrace their roots. Individuals who adapt to the social and economic issues of their time period tend to be the ones that history remembers.
In the late 1920’s the stock market on Wall Street crashed and several other factors led to what would be known as the Great Depression. During a time when most of the country was experiencing a devastating economic depression, Kansas City hosted a phenomenal entertainment industry.
The night clubs that fueled this industry shielded it from the hardships faced by many other parts of the country. Thriving entertainment requires several talented individuals to keep a hot scene on fire.
Many musicians came to Kansas City during the 1930s because it was considered a musical haven. With so much talent concentrated in an area with a radius of about fifty miles, it was inevitable that new talent would be discovered.
Parker was only a young boy when he first began to discover music. According to the jazz guitarist and composer Jens Larsen, Parker had little parental guidance during his youth. Parker’s father left the family and his mother was forced to work nights to provide for her children. Like most young children seeking a thrill, Parker occupied his time by being mischievous. While wandering around Kansas City in the evenings, Parker would usually find himself sneaking into one of the many music halls that existed in the city.
A lack of authority figures in Parker’s life, in combination with the exciting atmosphere of his childhood, most likely fostered what would become his unique thrill-seeking personality. After all, Parker was not only in search of big things musically, but he was also interested (or became immersed) in the highs and lows of heroin addiction.
Parker’s battle with drugs may have stemmed from seeing his childhood hero Lester Young become afflicted with alcoholism at the end of his career. Another aspect of Parker’s personality was that he was a loner.
Many people who knew Parker described him as such, although his actions alone would have left no doubt about his status as a loner. Seeking quiet likely gave him the dedication and time needed to perfect his saxophone trade.
Parker would often frequent a local club known as the Reno Club. This place served both black and white customers and attracted some fairly big names from the area. Parker most likely visited this club so often because his childhood idol, Lester Young, was known to play here very often.
Lester Young is described as one of the tenor saxophone’s “Titans”. Parker looked up to Young so much that he learned to play each one of Young’s solo’s note for note. Young unlike many people had a unique approach to jazz.
On many occasions, the sound that emanated from his horn could be described as very melodic. Also, the resonance of Young's music was considered to be far more developed than that of his contemporaries.
One of the reasons Young was so talented was his melodic rhythmic notions that evolved out of the beat. Examples of this in Parker's music are pointed out in Gary Giddins's Visions of Jazz, in which Ornithology sets a tempo of quarter note = 187. The lines of music progress with scores of eighth- and sixteenth-note runs tied together by little melodic triplets.
In addition to the melodic rhythms, Parker also picked up a stylistic approach from Lester Young. Young was well known for his defined blues legacy and style. Parker utilized the style and musical rhythms to produce musical scores that almost told a story. In Visions of Jazz Giddins points out that many pieces in which Parker soloed used a technique of placing extended rests throughout the song.
As in a book, it seems likely that these rests signaled the end of a chapter during a song. A transition such as this would create a dissonance in which the listener would anticipate what would come next, just like awaiting a new book chapter.
In addition to the heavy influence of Young, Parker was also influenced by the parts of Coleman Hawkins's bebop music that appealed to him. Ornithology was by no means the limit of what Parker could play.
Bebop was the opposite of swing because it went to the polar extremes (slow and fast) of the beat. Coleman Hawkins was a player who made sure that close attention was paid to the rhythm.
Many pieces composed by Parker reach into the upper two hundreds in quarter-note beats per minute. If concentrating on the rhythm at that speed is not hard enough, it may be hard to believe that one of Parker's compositions actually set the quarter note at 310 beats per minute.
Ben Webster says it best in Quotable Jazz when he states, “That horn ain’t supposed to sound that fast”.
As a saxophonist myself, I know from first-hand experience that many of Parker's transcribed solos are almost impossible to sight-read at first unless the tempo is cut (at least) in half.
Then, with some dedicated practice and your own improvisation, they become manageable. This has undoubtedly inspired numerous jazz players since Parker's day.
John Coltrane is a perfect example of someone who began playing a lot of notes and very fast. This style of music, known as sheets of sound, most likely would never have evolved had it not been for Parker.
Parker learned more than just playing aggressively from Coleman Hawkins. Hawkins was a native of the Midwest like Parker and he was considered by many to be The Father of the Tenor Saxophone. As a soloist, Hawkins was renowned for coming up with impressive solos and pushing the envelope on sound.
Most likely the biggest impact that Hawkin’s career had on Parker was the style of Vertical Playing. Vertical playing is considered an improvising style of musical play that is not based on melody as much as a chord progression.
Parker picked up on this philosophy while pioneering the new form of music known as bebop. Unlike Swing, which is very organized, Parker seemed to view jazz as an improvisational way of expressing himself.
The chromatic implications of many of Parker’s pieces are very profound. In fact, Parker describes a moment in which “the light bulb in his head essentially switched on”.
This light bulb moment, which can be described as the most profound musical instant of Parker's career, occurred when he discovered that playing improvisations high in the upper partials of chords created a very interesting and exciting sound. Parker was wailing in the altissimo range. This is the musical discovery that he would leave as his legacy.
Parker’s high octave playing becomes more evident when looking at saxophonist soloists before and after Parker. Many people began to attack altissimo chord improvisations after hearing Parker’s recordings.
Parker was never a soprano saxophone player himself but may have inspired the generation after him to begin playing this previously underused instrument.
One of Parker’s followers, John Coltrane likely made a late-career switch from tenor saxophone to soprano saxophone because of Parker. The soprano saxophone before this switch had almost no soloists. Afterward, more and more people began looking to play the soprano saxophone with its ability to belt out the high octave sounds in crisp and delighting detail. Further evidence of Coltrane’s admiration of Parker comes from the relentless sound, speed, and diversity of his solos.
Many people claim that history repeats itself, and this may be the reason for Gary Giddins's statement in his book Visions of Jazz that "Jazz is based on a cyclic motion".
It is evident throughout history that talent does not always mean success. The people who see the most success are often those who are dedicated to and love what they do.
While Parker is an all-time great, he was by no means considered a natural on the saxophone. It was at Parker’s childhood hangout, the Reno Club, that he faced one of the most embarrassing events in his life.
At the young age of seventeen, Parker decided to stop in on a jam session that was being held at Reno. The star of the night was a drummer named Jo Jones who was a vital part of Count Basie’s orchestra.
Accounts of the incident describe the sound of Parker’s playing as being so bad that Jones became appalled and refused to continue playing. If this was not embarrassing enough, Jones decided to hurl a cymbal at Parker.
This cymbal incident is as much a part of Charlie Parker's lore as is his contribution to Jazz and his assuming the nickname “The Bird”. This horrifying event did not deter Parker from continuing to play.
Instead, Parker had the will and determination to succeed in those endeavors that he may not have been good at but that he loved. Parker’s late teenage years were a point in his life when he practiced almost compulsively instead of giving up.
During this time, it was said that Parker attempted "playing clean and looking for pretty notes". This kind of intense practice is known among musicians as woodshedding.
Parker’s first break came in Kansas City after his intense practicing session. At the Reno Club, where he often watched his favorite players as a kid, Parker at the age of nineteen got a gig playing with Jay McShann.
It was while on the road with the McShann band that Parker received his nickname "The Bird", which would stay with him for the rest of his life. After playing with McShann for a short period of time, Parker made his move to New York City. This change in atmosphere was critical in three distinct ways for Parker's life.
While in New York, Parker began washing dishes at Jimmy's Chicken Shack. During this employment, Parker listened to Art Tatum perform nightly.
Tatum also inspired Parker musically. Parker adopted a technique of Tatum’s known as Harmonic Restructuring into his music. In fact, this may be directly linked to Parker’s epiphany moment, when he realized what kind of music could come out of tinkering with the higher octaves of the alto saxophone.
Although Parker did play some slow pieces many of his songs had the elements of Tatum’s speedy tempo. Also, Parker further defined the virtuoso playing that was started by people such as Art Tatum and Louis Armstrong.
While in New York City Parker also recorded his first piece, which was entitled Ornithology. Recordings and the radio were important during this era because unlike in the past, musicians were no longer restricted to having viewers in only a certain region.
People throughout the country began hearing and liking what Parker was playing. As a result of this, his musicianship quickly became very popular and many knew of his name. At the time of his death in 1955, it was believed Parker was one of the most recognized players on the radio in the United States.
New York City’s last impact on Parker is in the way that it tainted his life and career. After being introduced to heroin Parker would suffer from a lifelong battle of addiction to the drug. During this time Parker fell to pieces on several occasions.
Despite his recognition and popularity, Parker was almost completely unemployable at the time of his death because of his lack of organization and consistent no-shows for gigs. The dark side of Parker's great and shining musical light is the record of arrests and hospitalizations for drug usage that accompanies him whenever historians speak of him. It's alleged that Parker's addiction led him to sell his saxophone for drug money. After realizing what a bad decision this was, Parker was forced to borrow a saxophone for that night's performance.
Parker also made trips to the West Coast. During Parker's mid-1940s journey to Los Angeles, audiences there heard bebop for the first time. Although many were impressed with the music they heard, they were also disgusted with the performer they now viewed as a junkie.
While developing bebop ended up being among Parker's greatest accomplishments, his descent into drug use is one of his worst legacies. Many young people who looked up to Parker were attempting any means possible to play his type of music. Because of this impressionability, musicians would attempt to play under the influence of drugs, believing that this inebriation would enlighten them. Drug problems in the music world are not uncommon, and Parker's usage and addiction are just part of his story.
When looking at Parker’s death, it is obvious that he died fairly young. Parker’s death coincidentally coincides almost exactly with Julius “Cannonball” Adderley’s rise to musical fame.
In a way, it is as if Parker passed the torch on to Cannonball through jazz fans. Although Cannonball showed many Parker-like aspects in his playing, he was not afraid to contribute his own musical ideas.
This is true of Parker too, who took from his heroes Young and Hawkins, yet still gave so much more to music than he borrowed. Jazz, like most things, truly is an entity in continuous evolution. Parker was an important contributor to the history of jazz who changed the genre and influenced those who came after him.
Like Lennie Tristano said, “If Charlie Parker [were alive and] wanted to invoke plagiarism laws, he could sue almost everybody who’s made a record in the last ten years [or since his death]”.
In this way, Parker’s short but lasting impact on jazz became fully incorporated into jazz’s rich history by those who were inspired to play in his style. Here’s to 100-years since the birth of the legendary jazz musician, Charlie Parker, who accomplished so much in a relatively short life.
As a person, Parker was complex and this complexity allowed him to transition into a jazz saxophonist known as The Bird who ushered in the subgenre of bebop within jazz. | https://medium.com/lessons-from-history/charlie-parker-evolved-into-the-birdman-and-changed-jazz-through-bebop-a09931b04484 | ["Thomas O'Grady"] | 2020-12-28 20:54:14.472000+00:00 | ['Music', 'History', 'Culture', 'Jazz', 'Art'] |