How to Find the Energy to Bootstrap Your Side Hustle
After a long day’s work, the last thing most of us want to do is work more.
Thanks Annie Spratt for the fantastic photo from Unsplash
You work all day, come home, and all you want to do is crash. Or, as the saying goes, “grab some Netflix and chill.” The furthest thing from your mind is getting back to the grind and working some more.
The thing is, what you really want is not to have to get up and go back and do the work thing again tomorrow.
That’s why our weekends are so sacred. We get to do what we love for two whole days.
But as we sit in a vegetative state, watching the next episode of something we’ll never remember next week, we know in the back of our minds that we need to start pulling something together in our spare time. Something to allow us not to go back to the Monday grind.
We’ve read all the stories of success online where people seemingly just like us have ‘made it’ by putting in the sweat equity and building a venture after hours which has left them in a state of total prosperity.
What type of superhumans are these people?
How did they find not only the physical energy but the mental sustenance they needed to keep pushing through when their body and mind were pushing back?
I struggled with this for a long time. I sucked up enough mental strength to push ahead with some side projects, but never hit a home run. By home run, all I’m saying is having a side business earn enough to replace the regular job.
When I did manage to push through my slothy, couch-hugging laziness and got some stuff accomplished, I looked at the things I thought were holding me back the most.
These are the things I started crushing, which actually led to some success.
Trying to get the day job out of mind.
A great photo by Joanes Andueza from Unsplash
I struggled to get my day job out of my mind for a long time. Every day there was a new consortium of struggles and problems to solve. Every night I took those problems home with me for a little extra homework.
Most often, it wasn’t intentional.
It just happened.
I’d find myself starting to think about a project, and something at work would come to mind.
If you have a job where you can pack it away and not think about it after you leave, you might have a better shot than most for succeeding with a side venture.
Taking my work home with me may have made me more successful in the day job, but it played hell with my efforts to get things rolling on the side.
And, the side hustle was what I truly wanted.
Can I give any credible advice on how to solve this dilemma? I would say to just “give less of a shit” about your daily grind.
Can you mentally position yourself to go all in and consider your night gig your primary gig? And relegate your day employment as something just to get you by? I consider this “not owning your day job.”
Detach yourself from it.
Make your day job something you do instead of who you are.
Some day grinds are different than others. Some require you to be ‘all in’ or out totally. In my current thinking, this is what is wrong with our work society: work must come first, and everything else second.
If you can break that mental chain, your night gig probably has a better chance of survival.
Balancing family life and other crap that goes on.
I carefully chose ‘crap’ in that subhead because, seriously, we fill our lives with too much ‘filler crap’ to give ourselves an excuse for why something didn’t work.
Family stuff is the important stuff.
But, there is so much put under the guise of ‘family stuff’ that we just don’t need to do.
Either way, you will need to find that balance if you want to succeed at a side hustle.
Spending more time with your family may be the reason you want out of that bastard daily grind.
I love the colors in this photo by Matias Islas from Unsplash
Then, by all means, put your family first and do the things that make you happy.
But, take a serious look at where you can carve out blocks of time to work on your side project.
Some people forgo sleep and work on side projects long after the family goes to bed. That’s not my speed.
I like my sleep too much.
I love getting up in the morning and not feeling like I need more rest.
So, my schedule may look a little different than yours.
Uncluttering our mind.
I also struggled with uncluttering my mind. I look at the brain like a web browser with a bunch of open tabs.
As the day goes on, you keep opening more tabs.
By the end of the day, you have a ton of open tabs and a ton of things cluttering the mind.
When I got hung up most with my side projects, I noticed the clutter fogged my brain enough that I couldn’t concentrate on doing anything. I found two solutions.
First, I would close all the tabs and clean my mind of all the day’s stuff.
How did I manage this? I meditate.
I learned meditation years ago. I do a lot of experimenting with a lot of different things in my life. Meditation was just one of those things. It stuck with me, and it works great for eliminating all those open tabs of my day.
The second solution, when your mind gets too cluttered, is to do mindless ‘busy work’ tasks. This is still stuff that needs to get done, like checking email, responding to customer questions, or maybe billing. The problem is that if you fill your whole time with busy work, the core of the business doesn’t get done.
At one point, you will need to find time to work on the project with an open mind.
You will need to break through and fight the resistance as Steven Pressfield calls it. It’s ultimately the resistance that is holding you back from making that side hustle a reality.

Source: https://medium.com/swlh/how-to-find-the-energy-to-bootstrap-your-side-hustle-ed71f485357b (Kevin Katzenberg, Life Reboot Project, 2019-06-23). Tags: Side Hustle, Side Project, Entrepreneurship, Entrepreneur, Bootstrap.
Beta Reader Basics
7 Ways to Choose and Use Beta Readers
The purpose of having beta readers on a finished manuscript is exactly the same as it is for having beta testers on a new mobile app or taste testers on a new kind of Coca-Cola. You give members of your target audience a product that is whole and viable so that they can use it and test it and give you feedback on what is working and what might need shoring up. Responding to that feedback is the first step in sharing your work with the world.
Photo by Fabiola Peñalba on Unsplash
The Rule of Three
I think an ideal number of beta readers is three. You don’t want too many people weighing in because then it just becomes writing by committee.
The only instance where I might change the recommendation of three beta readers is if a client is writing something very specific about a person, place or thing that the writer may not completely understand. This would include academic subjects, historical subjects, stories set in a particular place where the writer has not lived, or stories that deeply involve a career the writer does not know about.
In James Patterson’s Masterclass, for example, he talks about having relationships with cops and FBI operatives who can help make sure that he has captured the spirit of the tasks those people do in his novels. (Note that he says spirit; he knows he is writing fiction and that the facts are necessarily made up, but his impulse is to make sure the facts suggest the truth.)
I had a client writing a YA fantasy historical novel set in a Yeshiva and he had a Jewish scholar read his pages to make sure that he was getting the terminology and the religious elements correct.
There is a recent trend in publishing to get what is known as a “sensitivity reader” for any story which prominently features diverse characters whose lived experience is outside the experience of the author. Not all professionals agree that this is a necessary or a wise step. You can read about the debate in a Writer’s Digest column here.
In all three of these cases, I would consider the “technical” reader to be a fourth reader — someone you select for this specific task and not necessarily to give you an overall impression of how the story is working.
No matter what, resist the urge to get a whole bunch of people to read your work-in-progress. This is usually the result of your eagerness to show your work to all your friends and key people in your network so that they can pat you on the back and praise you. Remember that your manuscript is going to get better as it moves through the phasing of publishing. Wait to knock their socks off when it is closer to what it is going to ultimately be.
Who to Choose
You want people you trust to be both honest and compassionate, people who can read the manuscript in a timely fashion, and people who know how to say something meaningful about a story.
You don’t need empty praise. Having someone say, “I loved it” or “It’s so cool that you wrote a story!” or “I liked the part about the bears,” is unhelpful.
You don’t need people’s opinions. Having someone say, “You should change thing from aliens to dinosaurs,” or “I would like this better if it was a romance,” does no one any good.
You don’t need a proofreader. Having someone say, “Your verb-tense agreement on page 10 is out of kilter,” does little to help you write the best story you can. That is the work for a copyeditor or proofreader, and this is not the time for that work.
Be wary of asking people who love you to read the work at this phase, or people in your family who may have strong reactions that have little to do with the story itself — unless there is good reason to do so (i.e. the story was inspired by a family member or you are writing a memoir.) Asking family members to be beta readers tends to lead to arguments because there is so much wrapped up in your expectation of their reaction.
Also be wary of asking members of your existing writing group unless full manuscript reads are part of the set-up. These people have seen your story develop over time and will already have biases and ideas about it that may or may not be present on the pages. The perfect beta reader is coming in cold, but with compassion and curiosity.
I think the best beta readers are people who don’t know you that well — your hairdresser, a co-worker who has never been to your house, someone you met at a writing conference, someone you met in an online course or workshop, someone in an online affinity group.
Look for people who love to read, who read in your genre, and who have an interest in your topic.
What About Famous Friends?
If you have them and they are willing, then sure!
A Special Note for Middle Grade and YA Writers
You will want readers in your target age range, but an adult approaching kids and teenagers directly is potentially creepy. The best thing to do is approach their parents, their teachers, and their librarians to see if those adults know any kids who might be interested in beta reading. Many young people love to be asked for their opinions and if they are readers, they are likely really smart and discerning about what they like and why.
Try to find at least one person in your target age range to be a beta reader.
Then go back to those parents, teachers, and librarians to see if they would be willing to read your work, too; they likely know very well how kids that age think and how they might respond to a given story. Besides, a lot of adults read YA themselves.
What to Ask
Good feedback helps you test your own theories about the book, so ask questions that are based on what you are most concerned about. This is a chance for you to be honest about your work; writers often know what’s wrong, but it’s scary to face it because it means that a.) you’re not done and b.) you may have to learn how to fix what’s wrong and c.) you now have proof that you are not the rare genius who can get it right the first time. Now is the time to face the truth and build your muscle for improving your work.
If you are worried that your protagonist (in fiction) is unlikeable, ask your beta readers to write down a two-line reaction to your protagonist.
If you are worried that your argument (in nonfiction) isn’t logical, ask your beta readers to write down a two-line summary of your point.
If you are worried the book is too long, ask your beta readers to mark any places where they skimmed or got bored.
If you are worried that your book isn’t good enough, ask your beta readers to be honest about where they put the book down and stopped reading.
Good feedback helps you see things in the work that you couldn’t possibly see. All of us have the burden of knowledge — we can only know what we know. Beta readers can help us overcome that burden. Be sure to ask your readers for their honest assessment of the work — what is working and not working — and then step back and honestly weigh how you feel about what they say. Try to be open to everything, and don’t simply defend what you have done — but also remember that you are the god of your own story. You are trying to match the vision of the story you see in your mind, and only you can see that.
Give Them a Deadline
Give your beta readers a reasonable deadline for completing the feedback. Tell them in advance that you need their reports by a certain date — and if that doesn’t work with their lives, find another reader. I often recommend giving beta readers 4–6 weeks, depending on the length of the manuscript.
What to Do with the Feedback
If the feedback rings true to you, do what you have to do to address it, even if that means making a radical change. Be brave about doing the hard work; this is the stage when a book often goes from good to great.
If the feedback doesn’t ring true to you, let it go. You are not obligated to respond to every bit of feedback.
If the feedback seems mean spirited, don’t take it personally. Recognize that anyone who is mean to someone who is doing their best to create something from nothing likely has their own twisted narratives around the creative process — and that has nothing to do with you. Shake it off and keep doing what you are doing.
To learn about Author Accelerator’s manuscript evaluation service, please click here.

Source: https://medium.com/no-blank-pages/beta-reader-basics-190b1411af69 (Jennie Nash, 2019-06-17). Tags: Books And Authors, Authors, Writing, Craft, Writing Tips.
Project Update #11 — Alpha Launch & Marketing Action Plan

One more week of the Alpha stage before getting into the Beta!
Understanding natural language is the key link between Daneel’s brain and the end user. That’s why we are using the Alpha stage to train Daneel by asking him as many questions as possible. The Alpha is currently available to our ambassadors, advisors, partners and gold members. This will improve his capacity to understand natural language, ensuring that all features can be initiated and controlled by a simple spoken request. With that in mind, other features are currently hidden to ensure that efforts are focused on the chat system.
The final stage will include some of the other features in the app, such as the Insight (market sentiment/price prediction) and the Dashboard. This will help us prepare for the next step, the beta test starting on October 15th.
For more details about the Alpha stage, please read the full Medium article.
You can still apply to become a Beta Tester before October 14th, hurry up!
Launching Daneel
The launch of a product is a fantastic opportunity to generate some media attention and let everyone know what we’re about. With the launch of Daneel upcoming, we want to get the message out there as to what Daneel is about and how it will help crypto investors. To achieve this, we have a range of activities planned for our launch, to get our brand out there and show the world what we’re all about. In addition, we have kept our latest partnerships secret: this will enable us to release them for maximum impact during the marketing campaign! Here are our plans for the next months:
1. Influencers Campaign on social media and YouTube, with reviews of the Beta version and use cases.
2. Bounty Campaign with free subscriptions given out, goodies, and 500,000 DAN to be distributed to our top participants. Furthermore, the bounty will help us collect vital feedback during the Beta test (October 15th to November 30th); this will then move into ratings and store downloads immediately following the public launch on December 1st.
3. SEO and advertising through targeted websites and media, with PR campaigns.
4. Launching Event in Paris: we are preparing for our launch event, where we will be broadcasting videos from Paris in partnership with Chain Accelerator.
We are also exhibiting at the Malta Blockchain Summit on November 1 and 2, and at CES Las Vegas on January 8–11, 2019. We will showcase Daneel to the public and professionals. In addition, we will spread the word through a new product video showing Daneel’s features and use cases.
Blockchain World Summit London
We’ve been looking forward to the Blockchain World Summit for a long time, as this event was a fantastic opportunity to show off our work and network with a range of different people.
During the event we had a dinner with our friends from Amon: we spoke about our common future, and more specifically about how Daneel will be integrated into their application.
Additionally, our technical team enjoyed a long chat with Automata, a platform for ‘newbies’ to invest in crypto with a robot assistant (risk management, investment, portfolio management, security…). Our platforms serve slightly different use cases, but share striking similarities that could benefit from healthy idea exchange.
We also met some influencers and recorded some interviews with them:
1) James Crypto Bull interviewed our CEO Joseph Bedminster
2) Crypto Academy NL interviewed our Head of Communication Harold Kinet
For full event report, read the Medium article.
Meeting with 50partners in Paris
The concept is simple: 50 experienced entrepreneurs come together to select and support promising start-ups until they succeed. Joseph was lucky enough to be selected to present Daneel, and the feedback was very encouraging! We are in discussions with them, in addition to other potential investors.
Stay tuned:
Website: https://daneel.io/
Twitter: https://twitter.com/daneelproject
Telegram: t.me/DaneelCommunity
Facebook: https://www.facebook.com/daneelproject
LinkedIn: www.linkedin.com/company/11348931/
YouTube: https://www.youtube.com/channel/UCJH6gsFUJlZr_ka3HQjZhKw
Github: https://github.com/project-daneel

Source: https://medium.com/daneel-corporate/project-update-11-alpha-launch-marketing-action-plan-82389def9283 (Daneel Assistant, 2018-10-08). Tags: Blockchain, Bitcoin, Artificial Intelligence, Cryptocurrency, Fintech.
You Might Not Succeed

There is a lot of self-help advice on the Internet that claims to give people the secrets to success. People are telling you to hustle harder; to fake it till you make it; to smile more; to state your intentions. And of course, all of these people are willing to tell you exactly how, as long as you pay them something for their guidance, or if they are desperate, to shower them with attention.
I am not one of those people. I am the asshole telling you that these people are full of shit.
Here is the cold, bitter truth — you might not succeed. That idea you have, the one you sweat and toil over, might not earn you the fame or notoriety you desperately want it to.
There is this pervasive myth in America that if you work hard, that effort will translate into success. This concept is called a meritocracy, though it is referred to almost exclusively as the American Dream, and it runs pretty deep in the American mythos. In the words of Senator Tammy Duckworth — who is by no means a conservative icon — “The American Dream I believe in is one that provides anyone willing to work hard enough with the opportunity to succeed.” The American public is heavily invested in the meritocratic ideal of success coming from hard work. It’s something heard in countless rags-to-riches stories from Whoopi Goldberg to Barack Obama. If you work hard, the refrain goes, then you will get the recognition and riches you deserve.
It’s a familiar story.
It’s also predominantly a lie.
In general, social mobility is declining within the United States. If you start out poor, there is an increasing likelihood that you will stay poor for the entirety of your life. For many Americans, real wages (i.e., your salary when adjusted for inflation and cost of living) have not budged in decades. Meanwhile, the cost of living for things such as health care and college tuition has been steadily increasing.
These difficulties do not mean such success never happens. The number of self-made billionaires within the United States is increasing, but these people make up 0.00017% of the population. You literally have a better chance of winning the lottery. And unsurprisingly, this success cuts more dramatically along racial and gendered lines.
Maybe, however, your definition of success is smaller. Forget building Facebook 2.0. You just want to create a moderately successful career. That’s admirable, but it’s getting harder. The share of smaller, newer companies in relation to the rest of the economy is decreasing. It has been for decades. Larger companies are consolidating greater market share, and that’s (probably) stifling entrepreneurship. It’s just more difficult now to start even a small, new business.
You might be one of the lucky few that do succeed, but there is a substantial possibility that you won’t. There are literally millions of desperate Americans pushing towards the same end, and they are also trying to hustle their problems away. They are taking the same online webinars, reading the same self-help books, adhering to the same crazy sleep schedules, reciting the same mantras. They want this more than anything, and it just isn’t enough.
There are significant, systemic issues within the United States that are stifling entrepreneurship, and you cannot merely willpower them away. There isn’t a self-help book that will teach you how to dismantle systemic poverty in one sitting. A form of meditation won’t magic away the United States’ shitty healthcare system. A workout routine isn’t going to suddenly grant you a livable wage.
And even if there were — even if wealth inequality weren’t an issue — you still might not succeed. The nature of popularity ultimately means that some ideas cannot win. Not every project is endorsed. Not every applicant will be hired. Not every company will make it off the ground. Failure is both inevitable and healthy.
What’s not healthy is the collective delusion that we live in a meritocracy. We pretend like hard work is all an idea needs to thrive, and then make people feel like they are 100% to blame when their efforts fail against the ideas of richer, more educated, more privileged people. We tell people to throw their entire existence into a goal without warning them first that they will most likely fail — repeatedly — before (maybe) finding moderate success amongst a small group of like-minded peers. And that’s the ideal scenario. It’s far more likely that after giving it their all, they will struggle to remain within the same economic class they started out in.
If I told you that you would spend the next ten years of your life devoting the majority of your time on a project that would ruin you financially, socially, emotionally, and maybe even physically, then would you still do it? Would you still live for weeks on end on unhealthy food? Would you still risk ruining your back on long work shifts that last for days, if not weeks? Would you still keep that entrepreneurial sleep schedule of 4 hours and 14 precisely calculated minutes?
I don’t think that you would. I think once you let go of the idea that you will succeed where everyone else has failed — that you are just better and more deserving — then you will start to look around.
You will realize that the concept of hustling was never meant to lift you up, but rather, to keep you down.

Source: https://alexhasopinions.medium.com/you-might-not-succeed-289a8e3dd975 (Alex Mell-Taylor, 2019-07-19). Tags: Entrepreneurship, Networking, Hustle, Bootstrap, Success.
How to market an estate agency business

Estate agency has a long history of stereotyped presumptions: from the money-grabbing agents who will try and sell you any property that gives them a large chunk of commission, to the offices cluttered with paperwork, deafening phone calls and irate workers. We imagine an office scene similar to that of The Wolf of Wall Street when thinking of estate agency. All these stereotypes paint a dark picture of what potential buyers/sellers will encounter when coming face to face with an agent. Will they meet with a slimy agent who is a poised pickpocket? Or will they be bombarded with key phrases like “This property is extremely popular, so if you want it you need to sign the papers now” or “This home is perfect for your young family” as they stand with the agent in what can only be described as a dilapidated barn?
Like most stereotypes, however, this vision we have been given time and time again is quite far from the truth, and, as a marketer, it’s your job to destroy these stereotypes for the benefit of your company. A quick Google search reveals very little about the kind of marketing that company marketers perform beyond the basics when working in estate agency. Instead, there are hundreds of marketing-branded talks, toolkits and courses that look at marketing from the agent’s profile-building perspective. This doesn’t help company marketers at all.
Aside from the general marketing basics of getting on social media, posting reviews and being consistently active, company estate agency marketing is a marketing type that hasn’t been truly explored yet. Estate agency marketers are starving for the in-depth knowledge we all crave about our specific industry. We want to know best practices, tips on how to increase engagement and what new trends are taking over. This process of weaving through the maze is made even more complex when you add on top the stereotypes that follow estate agency.
Who does it best?
Zoopla is one of the best (in my opinion) marketers in this industry. Why? Because they don’t focus on the people in suits. Like many top-of-their-league marketing teams, Zoopla produces content that platforms the end result, i.e. a happy home. They do this through clips of up to 30 seconds which are beautifully shot and feature up to two people enjoying their new home. Zoopla gives us the fairy-tale result we dream of when buying or selling a home. We are fed home-ownership hope and we, the audience, eat it up.
This isn’t a new concept of course. Since selling and buying began, companies have been giving us a dream of how our lives will change after purchasing/selling a product and we believe it wholeheartedly.
Lucky Strike advert
Unfortunately, though, not all companies have the time or money to spend on ultra-high-quality, video-focused campaigns, and instead we make do with what we have. This in no way means our marketing is not as effective. After all, in everyone’s pocket we hold a camera, a microphone and every editing tool, compacted into one little mobile phone. As a marketer, your phone is your greatest tool.
All marketers know of the power of social media and video. Trending data bombards us with the power of these two superheroes every day, but how can we as estate agency marketers make use of them?
The problems
Firstly, we need to know our audience and their pain points. Unluckily for estate agency marketers, our audience is pretty big, so the option to specify and focus is almost impossible. We have first-time buyers, growing families, buy-to-let owners, landlords and those looking to move to a smaller home in their old age, to name a few. With all these people you have a lot of choices, almost too many, on how to market to your audience. So we move on to our next basic marketing step: understanding our audience’s pain points. These pain points can include buying/selling novices who feel overwhelmed, agency stereotypes, bias and other factors that we aren’t even aware of yet. Perhaps one of the biggest setbacks is the weight of stress. Moving is one of the most stressful processes we go through during our lives, and this is our greatest enemy as marketers. We don’t want our audience to feel stressed. We want them to trust us and think of our business first when they wish to move.
As all these different issues swirl above our heads, it can be overwhelming, and some marketers may choose to take the safe route: posting property pictures on Facebook, setting up review tools and generally sticking to a safe and easy way of gaining traction. Marketing basics are important, but they won’t be winning you any industry awards or potential clients anytime soon. Your marketing needs to stand out and meet your audience halfway.
Best practices
Here they are, the best practices we have all been waiting for. Let’s start off with something very obvious. Use video. My god, use video. Use your phone and start recording. Though estate agency has a wide audience to cater to, most people would prefer video and pictures together compared to just pictures.
Record a property viewing, walk through the property and feed into the dream of living at this property. Mention how the property would be great for growing families and add your personal touch as a marketer, not as an estate agent. You want to garner attention for your brand, not necessarily sell the property so do what you do best and market.
Next, remember it’s not all about the properties; it’s about the people. The perception of estate agency isn’t great, so break the stereotype. Ask your staff to post about themselves outside of the office. What are they interested in? What are their hidden talents? Essentially, who are they?
Obviously, keep it professional but a bit of fun here and there adds some humanity to your company and this is what your audience will remember.
Finally, dare to be different. Be the bridge that not only helps potential clients resonate with your brand but also finds new, creative ways of overcoming audience pain points. For example, buying a house is full of terminology that we don’t encounter outside of the estate agency world, so overcome this linguistic issue through guides, slideshows, whatever you want. This type of content is evergreen, so the option to redesign is always there. The benefit of evergreen content is that it is continuously helpful to your audience and can cater to many audience types.
Mic drop
Estate agency company marketing is difficult, but it’s also an open book. There isn’t a big book outlining how to market to this type of industry, so stick to your basics but explore. Try out new trends; lead the industry.

Source: https://medium.com/the-innovation/how-to-market-an-estate-agency-business-ed2633a4c619 (Amy Montague, 2020-08-22). Tags: Real Estate, Best Practices, Marketing, Estate Agency, Marketing Strategies.
Understanding Sync, Async, Concurrency and Parallelism
Implementing in Python.
Since the release of Python 3 we have been hearing a lot about async and concurrency, which can be achieved with the asyncio module. But there are other ways to achieve asynchronous capability in Python as well: threads and processes.
Let’s discuss the basic terms we will use in this article.
Sync
In synchronous operations, if you start more than one task, the tasks will be executed in sync, one after another.
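A minimal sketch of the synchronous model (the task names and delays are invented for illustration): the second task starts only after the first one finishes, so the total time is the sum of the delays.

```python
import time

def make_coffee():
    time.sleep(0.1)          # pretend this is a slow I/O operation
    return "coffee ready"

def make_toast():
    time.sleep(0.1)
    return "toast ready"

start = time.perf_counter()
results = [make_coffee(), make_toast()]  # make_toast() runs only after make_coffee() returns
elapsed = time.perf_counter() - start

print(results)
print(f"took {elapsed:.2f}s")            # roughly 0.2s: the sum of both delays
```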
Async
In asynchronous operations, if you start more than one task, all the tasks are started immediately but complete independently of one another. An async task may start, then pause at a point where it has to wait (for example, on I/O) while execution moves to another task; once whatever it was waiting for completes, it resumes and runs to completion.
Concurrency and Parallelism
Concurrency and parallelism are philosophical words describing the ways tasks are executed. Synchronous and asynchronous, on the other hand, are programming models.
Concurrency means executing multiple tasks at the same time but not necessarily simultaneously.
Parallelism means executing multiple tasks at the same time simultaneously. Parallelism is hardware dependent. Why? On a computer with a single-core processor, only one task can be running at any point in time. So if you want to achieve parallelism, you need a multi-core processor.
In a single-core environment, concurrency happens with tasks executing over the same time period via context switching, i.e. at any particular instant only a single task is executing.
In a multi-core environment, concurrency can be achieved via parallelism, in which multiple tasks execute simultaneously.
Understanding by Real World Example
Your boss tells you to buy an air ticket and confirm it to him by email. There are two tasks: buy the ticket and confirm by email.
Sync: You call the airline agency and ask for a ticket. The agent confirms that a ticket is available and tells you to wait a few minutes while he finalizes the booking, so you keep waiting. Only after the ticket is finally booked do you write the email to your boss.
Async: You call the airline agency and ask for a ticket. The moment the agent confirms that a ticket is available and asks you to hold while he finalizes the booking, you start writing the email to your boss. You did not wait for the booking to finish before starting the email. Here both tasks make progress together (concurrently), carried out by only one person.
Parallelism: Previously you were doing both tasks alone; now you ask one of your colleagues for help. When the agent confirms that a ticket is available, you ask your colleague to start writing the email. You handle one task and your colleague handles the other. This is parallelism: both tasks make progress together (concurrently), carried out by two people.
Threads
Using Python threads you can achieve concurrency but not parallelism, because the Global Interpreter Lock (GIL) ensures that only one thread runs at a time. Threads take advantage of the operating system’s time-slicing of the CPU: each task runs part of its work and then goes into a waiting state. While the first task is waiting, the second task is assigned to the CPU to complete part of its work.
Let’s see an example.
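The snippet was embedded as an image in the original post; below is a sketch that reproduces the behavior shown in the output (worker numbers and sleep durations are random, so the exact lines and ordering will differ between runs):

```python
import random
import threading
import time

def worker(number):
    # Simulate an I/O-bound task by sleeping for a random duration.
    sleep_time = random.randint(3, 6)
    time.sleep(sleep_time)
    print(f"Worker {number}, slept for {sleep_time} seconds")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(1, 6)]
print(f"Total {len(threads)} Threads are queued, let's see when they finish!")
for t in threads:
    t.start()
for t in threads:
    t.join()  # block until every worker has finished
```

Note that the five sleeps overlap: the total wall time is roughly the longest single sleep, not the sum of all five.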
Output :
Total 5 Threads are queued, let's see when they finish!
Worker 5, slept for 3 seconds
Worker 4, slept for 4 seconds
Worker 2, slept for 5 seconds
Worker 1, slept for 6 seconds
Worker 3, slept for 6 seconds
Here 5 threads are making progress together, asynchronously and concurrently.
Processes
To achieve parallelism, Python has the multiprocessing module, which is not affected by the Global Interpreter Lock. Let’s check an example.
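The original example was an image; here is a minimal sketch in the same spirit (the worker count and printed text are illustrative):

```python
import multiprocessing
import os

def worker(number):
    # Each worker runs in its own process, with its own interpreter and GIL.
    print(f"Worker {number} running in process {os.getpid()}")

if __name__ == "__main__":
    processes = [multiprocessing.Process(target=worker, args=(i,)) for i in range(1, 6)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```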
Here multiple processes are running on different cores of your CPU (assuming you have multiple cores). It’s true parallelism!
With the Pool class, we can also distribute one function’s execution across multiple processes for different input values. If we take the example from the official docs:
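The docs example squares each input in a pool of worker processes:

```python
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == "__main__":
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))  # -> [1, 4, 9]
```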
Output :
[1, 4, 9]
Here the same function is executed with different values in different processes, and finally the results are aggregated into a list. This lets us break heavy computations into smaller parts and run them in parallel for faster calculation.
concurrent.futures
The concurrent.futures module has ThreadPoolExecutor and ProcessPoolExecutor classes for achieving async capability. These classes maintain a pool of threads or processes. We submit our tasks to the pool, and it runs them in an available thread/process. A Future object is returned, which we can use to query status and get the result when the task has completed.
Let’s check an example of ThreadPoolExecutor :
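The snippet in the original post (an image) follows the ThreadPoolExecutor example from the concurrent.futures docs, which downloads several pages in parallel. The sketch below keeps the same structure but simulates the download so it runs offline; the fake page sizes and the failing made-up domain are stand-ins for real network behavior:

```python
import concurrent.futures
import time

URLS = ["http://www.foxnews.com/", "http://www.cnn.com/",
        "http://europe.wsj.com/", "http://www.bbc.co.uk/",
        "http://some-made-up-domain.com/"]

def load_url(url):
    # Stand-in for urllib.request.urlopen(url).read(): sleep briefly to
    # mimic network latency and return a fake page size in bytes.
    if "made-up" in url:
        raise ValueError("unreachable host")
    time.sleep(0.1)
    return len(url) * 1000

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    future_to_url = {executor.submit(load_url, url): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            size = future.result()
        except Exception as exc:
            print(f"{url} generated an exception: {exc}")
        else:
            print(f"{url} page size is {size} bytes")
```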
Output :
http://some-made-up-domain.com/ generated an exception:
http://www.foxnews.com/ page size is 246819 bytes
http://europe.wsj.com/ page size is 973090 bytes
http://www.bbc.co.uk/ page size is 349527 bytes
http://www.cnn.com/ page size is 1130261 bytes
As each task finishes, it returns and we print the result. Note that for cnn.com we sleep for 10 seconds. For ProcessPoolExecutor , just replace ThreadPoolExecutor with ProcessPoolExecutor(5) . Remember that ProcessPoolExecutor uses the multiprocessing module and is not affected by the Global Interpreter Lock. I suggest you run this code to understand it properly.
Multiprocessing allocates separate memory and resources for each process/program, whereas in multithreading, threads belonging to the same process share the memory and resources of that process.
Why We Need Asyncio
We have threads and processes to achieve concurrency, so why do we need asyncio? Let’s identify the problem with an example.
Suppose we have three threads T1, T2 and T3; each one has I/O operations and a few other lines of code to execute. Our operating system gives each task a very small slice of CPU time and switches between them until they finish. Assume T1 finishes its I/O operation first, but before it executes its remaining code, the interpreter switches to T2, which is still waiting for I/O; then the interpreter switches to T3, which is also still waiting; then it moves back to T1 and executes the remaining code. Did you notice the problem?
T1 was ready to execute its remaining code, but the interpreter switched to T2 and T3 anyway. Wouldn’t it have been better if the interpreter had switched back to T1 first to execute its remaining code, and only then moved to T2 and T3?
asyncio maintains an event loop that tracks different I/O events, switches to tasks that are ready, and pauses the ones that are waiting on I/O. Thus we don’t waste time on tasks that are not ready to run right now. With threads we have no control over pausing/resuming a task, but asyncio gives us that pause/resume capability.
The first advantage compared to multiple threads is that you decide where the scheduler will switch from one task to another, which means that sharing data between tasks is safer and easier.
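A minimal asyncio sketch of this cooperative switching (the worker names and delays are made up for illustration):

```python
import asyncio

async def worker(number, delay):
    # await hands control back to the event loop while this task waits,
    # so the other coroutines keep making progress in the meantime.
    await asyncio.sleep(delay)
    return f"Worker {number} done"

async def main():
    tasks = [asyncio.create_task(worker(i, 0.1 * i)) for i in range(1, 4)]
    # gather waits for all tasks and preserves their submission order.
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)
```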
When to Use Which One
CPU Bound (mathematical computations) > Multiprocessing
I/O Bound, Fast I/O, Limited Number of Connections (network GET requests) > Multithreading
I/O Bound, Slow I/O, Many Connections (lots of frequent file r/w, network file downloads, DB queries) > Asyncio
Further Readings:
https://realpython.com/intro-to-python-threading/
https://pymotw.com/3/threading/index.html
https://pymotw.com/3/multiprocessing/index.html
https://docs.python.org/3/library/multiprocessing.html
https://pymotw.com/3/concurrent.futures/
http://www.dabeaz.com/GIL/
https://callhub.io/understanding-python-gil/ | https://medium.com/swlh/understanding-sync-async-concurrency-and-parallelism-166686008fa4 | ['Goutom Roy'] | 2019-11-23 21:14:51.626000+00:00 | ['Asyncio', 'Programming', 'Threads', 'Process', 'Python'] |
Nest.js & Webpack Hot Module Replacement | Hot Module Replacement, or HMR, can be found in many different languages and frameworks in software engineering to keep this article short the scope will be limited to HMR in webpack and, briefly, how it works with nest. For more information check out nestjs.com documentation on hot reload and webpack’s documentation on HMR
Nest.js
According to nestjs.com:
Nest is a framework for building efficient, scalable Node.js server-side applications. It uses progressive JavaScript, is built with TypeScript (preserves compatibility with pure JavaScript) and combines elements of OOP (Object Oriented Programming), FP (Functional Programming), and FRP (Functional Reactive Programming). Under the hood, Nest makes use of Express, but also provides compatibility with a wide range of other libraries (e.g. Fastify). This allows for easy use of the myriad third-party plugins which are available.
Nest does most of its dirty work through decorators, which gives it a feel similar to Java Spring. The most common use case for Nest is quickly building a loosely coupled, scalable architecture using TypeScript.
Webpack
Webpack is an open source JavaScript module bundler. At its most basic it bundles JavaScript files for use in a browser, but it can also be configured to minify, transform, or package almost anything.
What is Hot Module Replacement (HMR)?
HMR is a way to quickly replace modules in a running application, removing the need to reboot the entire server when changes are made. At a high level, the HMR run-time occasionally downloads updates, and the application then syncs by replacing the updated or removed modules.
Photo by Chiara Ferroni on Unsplash
A more in-depth explanation is that the HMR run-time supports check and apply methods. When check is called, an HTTP call goes out to fetch the updated JSON manifest. If this call fails, HMR assumes there is no update. If the call succeeds, the list of updated chunks is compared to the list of current chunks, and each updated chunk is downloaded. When all chunks are downloaded, the HMR run-time switches to ready. Now an apply method in your code, most likely in a router if using Nest.js, flags all updated modules as invalid, and update handlers stop the invalid designation from bubbling up. If there is no update handler for the code being updated, the invalid flag continues to bubble up to the parent module until it reaches an update handler or hits an entry point and fails. Assuming the code has an update handler, all invalid modules are discarded and the run-time switches from ready back to idle.
What are some use cases for HMR?
The strongest use case I have found for HMR is in a development environment. Locally, without HMR my Node server needed to be rebooted on every change and took an average of 17 seconds to reboot. With HMR I was able to keep my Node server up and running all day, and updates were applied in around 13 milliseconds. Nest.js also wraps its main.hmr.ts file in a conditional, so if HMR is disabled, all HMR-related code is stripped out at build time. HMR also makes styling changes extremely fast, comparable to making the changes in a browser debugger, and it will retain application state during a reload.
Photo by Rishi Deep on Unsplash
How do I implement HMR? | https://jtearl188.medium.com/nest-js-webpack-hot-module-replacement-c321626139ca | ['Jt Earl'] | 2019-01-10 00:26:51.488000+00:00 | ['Typescript', 'Webpack', 'Nodejs', 'JavaScript', 'Expressjs'] |
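A sketch of the webpack side, following the pattern in the Nest documentation (the file name `webpack-hmr.config.js` and the 100 ms poll interval are the docs' conventions; treat them as assumptions and adapt them to your project):

```javascript
// webpack-hmr.config.js
// Wraps Nest's default webpack options: prepend the HMR polling entry
// and register the HotModuleReplacementPlugin.
const webpack = require('webpack');

module.exports = function (options) {
  return {
    ...options,
    entry: ['webpack/hot/poll?100', options.entry],
    plugins: [...options.plugins, new webpack.HotModuleReplacementPlugin()],
  };
};
```

On the application side, the entry file then calls `module.hot.accept()` to register an update handler and `module.hot.dispose()` to close the old server instance before the replacement module boots.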
7 Psychological Biases That Are Making You Resist Your Own Growth | Growth is hard.
Sometimes, it’s downright terrifying.
It requires us to take an honest look at ourselves, to abandon what we’ve known, and to suspend ourselves in uncertainty without knowing when we’ll ever find the next step.
It’s uncomfortable, but it’s also essential, because our brains and bodies are built to give preference to comfort, assimilation and familiarity.
What this is means is that we will remain what we have always been unless we consciously choose to become something else. Sure, everyone evolves and adapts over time, but if you aren’t intentional about it, you’ll end up the product of who and what is around you as opposed to an authentic expression of who you really are.
Growth is a required assignment.
The only question is when we do it, and how long it takes for us to realize that we often have to defy some of our instincts in order to create a better reality for ourselves.
Here are a few of those unconscious fears that prevent us from becoming all that we possibly can be, and how they might specifically be affecting you.
1. You don’t actually want to feel too much happiness. | https://medium.com/age-of-awareness/7-psychological-biases-that-are-making-you-resist-your-own-growth-3d0462640af9 | ['Brianna Wiest'] | 2020-11-25 20:41:01.667000+00:00 | ['Growth Hacking', 'Life Lessons', 'Personal Development', 'Self Improvement', 'Psychology'] |
Lets talk about the future of air cargo | You invest in the future you want to live in. I want to invest my time in the future of rapid logistics.
Three years ago I set out on a journey to build a future where one-day delivery is available anywhere in the world by commercializing high precision, low-cost automated airdrops. In the beginning, the vision seemed almost so grand and unachievable as to be silly. A year ago we began assembling a top-notch team full of engineers, aviators and business leaders to help solve this problem. After a lot of blood sweat and tears, we arrive at present day with the announcement of our $8M seed round raise backed by some amazing capital partners and a growing coalition excited and engaged to accelerate DASH to the next chapter. With this occasion, we have been reflecting a lot on the journey and the “why” that inspired this endeavor to start all those years ago.
Why Does This Problem Exist?
To those of us fortunate enough to live in large, well-infrastructured metropolitan cities, deliveries and logistics aren’t issues we often consider. We expect our Amazon Prime, UPS, and FedEx packages to arrive the next day or within the standard 3–5 business days. If you live anywhere else, these networks grind to a halt trying to deliver. For all its scale, Amazon Prime services less than 60 percent of zip codes in the US with free 2-day Prime shipping. The rural access index shows that over 3 billion people live in rural settings, and over 700 million people don’t live within easy access to all-weather roads at all. Ask manufacturers in need of critical spare parts in Montana, earthquake rescue personnel in Nepal, grocery store owners in mountainous Colombia, or anyone on the 20,000 inhabited islands of the Philippines if rapid logistics feels solved or affordable. The short answer: it’s not.
Before that package is delivered to your door it requires a middle mile solution to move from region to region. There is only one technology that can cross oceans, mountains, and continents in a single day, and that is air cargo.
Air cargo accounts for less than one percent of all pounds delivered, but over 33 percent of all shipping revenue globally. We collectively believe in air cargo and rely on it to get our most critical and immediate deliveries, including a growing share of e-commerce and just in time deliveries. If you want something fast, it’s coming by airplane. There is no substitute.
However, the efficiency and applications of air cargo break down when the plane has to land. While the 737 can fly over 600 mph and thousands of miles, it requires hundreds of millions in infrastructure, airports, and ground trucking to get cargo from the airport to your local warehouse, making it very costly for commercial deliveries. That ground infrastructure has to exist on every island in the Philippines, every mountain town in Colombia and every town in Nepal. It has to reach both sides of every mountain or island anywhere you want things fast. Even when you can land at a modern airport, take-off and fuel burn during climb can account for upwards of 30 percent of an entire flight’s fuel use, and landing and takeoff cycles drive up insurance and maintenance costs. This problem is so intrinsic to air cargo and logistics it almost seems natural. Well of course flyover states and rural areas don’t get cheap, fast, and convenient deliveries. Are you going to land that 737 at 20 towns on the way from LA to New York City? We fly over potential customers on our way to big urban cities with modern infrastructure even though only a minority of the world’s population lives there. Something has to change.
Our solution
To solve this problem is simple in thought. Yet this has been one of the most complex tasks I’ve had the honor of working on in my engineering career. Land the package, not the plane. By commercializing high-precision low-cost air drops you can decouple airplanes from landings, runways and trucks. Suddenly a delivery to rural Kansas is just as fast and cost-effective as a major coastal city. Fuel, insurance, utilization rate, service improvements, coverage area, and-and-and, so many metrics improve overnight in significant ways if an existing aircraft can deliver directly to the last mile sorting facility and bypasses much of the complexity, cost and infrastructure needed for traditional hub and spoke networks.
DASH Systems performing air drop tests in Southern California (image from DASH Systems)
Perhaps one of the most common questions I received when I started DASH was: why hasn’t [insert your preferred enterprise organization here] done this before? Without taking a detour into why large enterprises historically struggle with innovation, the simple answer is: because now is the time. Advancements in IoT and low size, weight and power flight controllers, coupled with a career implementing automation in safety-critical environments, meant that the necessary ingredients were ready. Tremendous credit is due to some of the most brilliant engineers, scientists and developers I’ve had the pleasure of working with, who took to the task of carving raw ideas and rough prototypes into aerospace-grade commercial products, all with the bravery to do so while working outside the confines of existing aerospace textbooks.
Beyond the intricacies of technology was a personal impetus to implement. My father’s family has origins in Barbados. During hurricane season we would make the call, when the phone lines were restored, to ask, “Is everything okay?” It often felt like a roll of the dice whether they would be spared that year, in a sick game of roulette that someone else would lose. With islands, by definition, nearly all help and aid have to come from abroad, but how can supplies be distributed when ports are destroyed, runways damaged and roads washed out? To me, it is a moral imperative to help, but also to build self-sustaining commercial solutions that can scale to help more in the future.
This thought process was put to the test in 2017, just weeks after I started seriously contemplating and studying the ideas that became DASH, when Hurricane Maria hit Puerto Rico. I awoke, just as millions of others did, to witness one of the worst hurricanes to make landfall in 100 years. That day we started making calls; 10 days later we were flying inland in a rented Cessna 208, delivering thousands of pounds of humanitarian supplies via airdrops to cut-off communities. The takeaway was that if this could be done safely and legally with an idle FedEx feeder aircraft, and if those on the ground were willing and ready for rapid logistics at the same price they would have paid, why did it have to wait until a natural disaster struck? DASH exists because there is no technology, process, or company that can honestly claim delivery to anywhere, or even most places, in under 2 days. We in large cities have come to enjoy it and expect it, yet in the same breath we cut the conversation short for those geographically situated elsewhere. Our solution exists, and with the hard work of an amazingly talented team and excellent partners it continues to scale and grow until the day that claim can be made.
Our Future
The story of DASH is far from over, our vision is rapid logistics anywhere and there is a flight path ahead of us to get there. Today, DASH is advancing the state of the art of precision air drop technology, tomorrow we are looking to deliver into your community wherever it is and despite the circumstances. The entire globe deserves the same level of service and convenience. The list is too long to thank everyone who has helped DASH get to where we are today, and growing longer every day. Instead I can offer up, look to the skies you may see your next delivery safely and precisely coming down to a location near you.
Joel Ifill is the founder and CEO of DASH Systems. He can be found at www.dashshipping.com and reached at [email protected]. We are always on the hunt for talented roboticists, engineers and developers who enjoy aviation; inquire at [email protected].
I Surrender | I Surrender
There is so little I can change in this moment — I can only keep trying
Photo by Leonard Okpor on Scopio
I surrender. I have half the energy I had pre-pandemic. Maybe less. I’m lucky if I wash the dishes and make the bed. I can’t keep all cylinders revving like I used to. I can’t create as fast and beautifully as I once did.
I surrender. I have a hard time getting out of bed in the mornings. I fear running into a pushy neighbor while on my morning walk, having someone chastise me for continuing to quarantine even though our county has reopened, having to explain why I don’t want hugs right now. (I mean, I do — but I don’t.) And the days feel like they go downhill from there. I am still excited to work on my projects, but in the midst of that, I am alone. So alone. For the long run. And I hate that. I dread facing that each day.
I surrender. I cannot seem to keep up with my friendships anymore. It takes all my energy just to compose one email or answer a text message. I fear losing the opportunity to get to know the new friends I’ve made recently just because I can’t seem to get up the energy to correspond with them. I’m terrified of losing my close friendships, especially with Sunny and Frank, because talking to them on the phone or Skyping them feels exhausting. Will they be able to have patience with me through this time of separation? I guess that question is irrelevant because no matter the answer, I can’t do any better than this.
I surrender. I won’t be able to access my little woodland the way I used to. I know I can’t change that. I will have to settle for long retreats there with longer periods of separation between.
I surrender. My days with the owls seem to be over. I only saw them once this season. And no babies, so far as I could tell. I feel untethered without my yellow-eyed friends. But what can I do?
I surrender. My family is like a set of tectonic plates in this pandemic. Lots of drifting away from one another, then crashing together in conflict. Everyone is going in separate directions when just a couple months ago, I felt so close to each person. Now there are boundaries everywhere. Separations. Indefinitely.
I surrender. I have lost my sweet Alex. Due to circumstances outside my control, it might be years before I am allowed to see him again. My heart is shattered. Every day, I feel this dark cloud above my head. All of that love, all of that time together…lost like a puff of smoke in the wind. Like it never happened. And I cannot do a goddamn thing about it.
I surrender. My health problem flares up for days and then almost disappears. And then flares up. And then disappears. I don’t know what medical steps to take from here, in the middle of a pandemic. How urgent is this mystery ailment? I don’t know what to do.
I surrender. My county has reopened and I’m afraid it’s too soon. Has something changed that I’m not aware of? Did the virus go away? Did we get a vaccine? Restaurant patios are crowded with people, elbow-to-elbow. The grocery store is even busier than it was before the panic buying started. And no one is wearing masks. I’m terrified we’re going to see a spike in cases and have to lock down again. Terrified that a second lockdown will erupt in gun violence, as protesters recently promised. But what can I do?
I surrender. All I have wanted these past few years was to heal my body, heart, and soul. I have worked so hard to become more intimate with people. To expand my definitions of love. To explore my sexuality. And now I cannot even touch another human being without worrying about compromising their safety — and mine. I have to wait for this to run its course. For a vaccine or for whatever solution is coming. There is nothing to do but wait.
I surrender. I’m hauling ass on this hamster wheel. And going nowhere. Treading water. Working so hard just to stay in the same place. And there’s nothing I can do about that.
I surrender.
© Yael Wolfe 2020 | https://medium.com/liberty-76/i-surrender-798a2410fa1e | ['Yael Wolfe'] | 2020-05-17 12:11:00.836000+00:00 | ['Spirituality', 'Life Lessons', 'This Happened To Me', 'Mental Health', 'Pandemic'] |
Modeling Long-Range Interactions Without Attention | Transformer technology, which has become an important force in the field of natural language processing (NLP), has recently begun to show its strength in the field of computer vision.However, the quadratic memory footprint of self-attention has hindered its applicability to long sequences or multidimensional inputs such as images which typically contain tens of thousands of pixels.
Comparison between attention and lambda layers. (Left) An example of 3 queries and their local contexts within a global context. (Middle) The attention operation associates each query with an attention distribution over its context. (Right) The lambda layer transforms each context into a linear function lambda that is applied to the corresponding query.Source[1]
Based on the above limitations, [1] proposes a method termed lambda layers, which provides a general framework for capturing long-range interactions between a structured collection of model inputs and context information. It captures such interactions by converting the available context into linear functions (called lambdas) and applying these linear functions to each input separately.
Let Q = {(q_n, n)} and C = {(c_m, m)} denote structured collections of vectors, respectively referred to as the queries and the context.
We consider the general problem of mapping a query (q_n, n) to an output vector y_n ∈ R^{|v|} given the context C with a function F, i.e. y_n = F((q_n, n), C).
Such a function may act as a layer in a neural network when processing structured inputs.
A lambda layer takes the inputs X ∈ R^{|n|×d_in} and the context C ∈ R^{|m|×d_c} as input and generates linear functions (lambdas) that are then applied to the queries, yielding outputs Y ∈ R^{|n|×|v|}.
The lambda layer first computes keys K = C W_K and values V = C W_V by linearly projecting the context, and the keys are normalized across context positions via a softmax operation, yielding normalized keys K̄ = softmax(K), where the softmax is taken over the context positions m.
Its implementation can be viewed as a form of functional message passing.
Each context element contributes a content function k̄_m v_m^T and a position function e_{nm} v_m^T, where e_{nm} ∈ R^{|k|} is a positional embedding for the relation (n, m).
The λ_n function is obtained by summing the contributions from the context as λ_n = Σ_m (k̄_m + e_{nm}) v_m^T = λ^c + λ^{p}_n ∈ R^{|k|×|v|}.
The content lambda λ^c is invariant to permutation of the context elements, shared across all query positions n and encodes how to transform the q_n solely based on the context content. In contrast, the position lambda λ^{p}_n encodes how to transform the query content qn based on the content c_m and positions (n, m), enabling modeling structured inputs such images.
The output of the lambda layer is obtained by applying the lambda to the query, y_n = λ_n^T q_n ∈ R^{|v|}, where q_n ∈ R^{|k|}.
This process captures dense content and position-based long-range interactions without producing attention maps.
[1] propose to decouple the time and space complexities of the lambda layer from the output dimension d. Rather than imposing |v| = d, they create |h| queries {q^{h}_{n}}, apply the same lambda function λ_n to each query q^{h}_{n}, and concatenate the outputs as y_n = concat(λ_n^T q^{1}_{n}, …, λ_n^T q^{|h|}_{n}).
[1] refer to this operation as a multiquery lambda layer as each lambda is applied to |h| queries.
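To make the equations concrete, here is a minimal single-head NumPy sketch of the content-plus-position computation above (the shapes, the stabilized softmax, and the dense positional-embedding tensor are implementation choices for illustration, not the paper's optimized code):

```python
import numpy as np

def lambda_layer(queries, context, Wk, Wv, E):
    """Single-head lambda layer sketch.

    queries: (n, k)   context: (m, d)
    Wk: (d, k)  Wv: (d, v)  E: (n, m, k) positional embeddings e_{nm}
    """
    K = context @ Wk                                   # keys,   (m, k)
    V = context @ Wv                                   # values, (m, v)
    K = K - K.max(axis=0, keepdims=True)               # stabilize the softmax
    K_bar = np.exp(K) / np.exp(K).sum(axis=0, keepdims=True)  # softmax over m
    lam_c = K_bar.T @ V                                # content lambda, (k, v)
    lam_p = np.einsum('nmk,mv->nkv', E, V)             # position lambdas, (n, k, v)
    # y_n = lambda_n^T q_n with lambda_n = lam_c + lam_p[n]
    return np.einsum('nk,nkv->nv', queries, lam_c[None] + lam_p)
```

Note that no |n|×|m| attention map is ever materialized: the context is first summarized into a λ_n ∈ R^{|k|×|v|} matrix and only then applied to the query.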
The researchers conducted controlled experiments comparing LambdaNetworks against a baseline ResNet50, channel attention, and previous methods that use self-attention to supplement or replace the 3x3 convolutions in ResNet50. The results show that the lambda layer is significantly better than these methods at a fraction of the parameter cost, achieving a +0.8% improvement over Squeeze-and-Excitation (channel attention).
Comparison of the lambda layer and attention mechanisms on ImageNet classification with a ResNet50 architecture. The lambda layer strongly outperforms alternatives at a fraction of the parameter cost. We include the reported improvements compared to the ResNet50 baseline in subscript to account for training setups that are not directly comparable.Source[1]
The researchers compared the lambda layer and the self-attention mechanism in terms of throughput, memory complexity, and ImageNet image recognition accuracy. The results expose the shortcomings of the attention mechanism. In contrast, the lambda layer can capture global interactions on high-resolution images, achieves a 1.0% improvement over local self-attention, and runs almost three times faster.
The lambda layer reaches higher accuracies while being faster and more memoryefficient than self-attention alternatives. Inference throughput is measured on 8 TPUv3 cores for a ResNet50 architecture with input resolution 224x224.Source[1]
In addition, position embeddings can be shared between lambda layers, further reducing memory usage at a minimal degradation cost. Finally, the lambda convolution has linear memory complexity, which is very useful for the very large images encountered in detection and segmentation tasks.
The researchers found that it is possible to construct LambdaResNets that improve the parameter and flops efficiency of large-scale EfficientNets.
(Left) LambdaResNets improve upon the parameter-efficiency of large EfficientNets.(Right) LambdaResNets improve upon the flops-efficiency of large EfficientNets.Source[1]
Such results indicate that lambda layers may be well suited to resource-limited scenarios, such as embedded vision applications.
Finally, the researchers evaluated the effectiveness of LambdaResNets with the Mask-RCNN architecture on object detection and instance segmentation tasks on the COCO dataset.
COCO object detection and instance segmentation with Mask-RCNN architecture on 1024x1024 inputs. Mean Average Precision (AP) is reported at three IoU thresholds and for small, medium, large objects (s/m/l).Source[1]
Using the lambda layer produces consistent gains across all IoU thresholds and all object scales (especially the small objects that are hard to localize), which indicates that lambda layers readily achieve good results on more complex vision tasks that require localization information.
Conclusion
Experiments on ImageNet classification and on COCO object detection and instance segmentation demonstrate that LambdaNetworks significantly outperform their convolutional and attentional counterparts while being more computationally efficient. The paper also introduces LambdaResNets, a family of architectures that considerably improve the speed-accuracy tradeoff of image classification models. LambdaResNets reach state-of-the-art accuracies on ImageNet while being ∼4.5x faster than the popular EfficientNets on modern machine learning accelerators.
References
1.Anonymous.LambdaNetworks: Modeling long-range Interactions without Attention,ICLR 2021. | https://medium.com/swlh/modeling-long-range-interactions-without-attention-c318acf45f73 | ['Nabil Madali'] | 2020-10-28 18:50:18.051000+00:00 | ['Machine Learning', 'Computer Vision', 'Transformers', 'Object Detection', 'Self Attention'] |
I Know What You’ll Do NEXT…

A collaborative post by Chu Chu, Minyi Huang, Valerie Huang, and Yinglai Wang
This blog is written and maintained by students in the Professional Master’s Program in the School of Computing Science at Simon Fraser University as part of their course credit. To learn more about this unique program, please visit {sfu.ca/computing/pmp}.
Poster of the Movie I Know What You Did Last Summer
Have you ever seen this horror movie before? Have you ever been curious about what others are doing? We don’t like being monitored by others, but curiosity drives us to want to know what others are doing or what they will do next. Through a method called Human Pose Estimation (HPE), your dream can come true. Wei et al. proposed the Convolutional Pose Machine (CPM) in their paper, which applies deep learning to HPE. CPM combines the advantages of convolutional neural networks and pose machines to learn image features and image-dependent spatial models for estimating human poses.
Human Pose Estimation
For decades, HPE has attracted much attention in computer vision. It is a key step in understanding the behavior of people in images and videos. HPE is defined as the localization of human joints (also called key points: elbows, wrists, etc.) in an image or a video. It can also be framed as a search for a specific pose in the space of all possible joint configurations. The pose estimation problem can be divided into 2D and 3D pose estimation. 2D pose estimation predicts the 2D coordinates (x, y) of each joint in an RGB image. Similarly, 3D pose estimation predicts the 3D coordinates (x, y, z) of each joint from the RGB image.
2D Pose Estimation
3D Pose Estimation
Applications
HPE has a variety of applications in fields such as action recognition, animation, gaming, and video surveillance. For example, a popular deep learning application named HomeCourt uses pose estimation to analyze the movement of basketball players.
One of the interesting usages of CPM is to predict the next movement from key joint positions in dancing figures.
Heads, arms and legs position prediction with CPM
Heat Map of Dancing Figures with CPM
Why is it difficult?
The main challenges in HPE include occlusion and ambiguous key points. However, all key points are related to each other through the structure of the human anatomy, so using easy-to-recognize key points to guide the detection of difficult-to-recognize ones is beneficial. Moreover, tracking multiple people in one frame is far more difficult than tracking a single individual and poses its own set of problems, since multi-person pose estimation not only has to identify the number of people but also resolve inter-person occlusion.
What is Convolutional Pose Machine?
1. Pose Machine
To develop an understanding of the Convolutional Pose Machine (CPM), it is easier to begin by understanding a Pose Machine first.
A Pose Machine is a multi-stage sequential structure that can be trained end-to-end. It provides a sequential prediction framework for learning implicit spatial models and generates rather accurate results on spatial contexts like human postures.
The detailed structures of Pose Machines and Convolutional Pose Machines are shown in the following graph:
Detailed structures of Pose Machines and Convolutional Pose Machines
The algorithm comprises a sequence of predictors that learn from previous mistakes to produce increasingly refined estimates of key point locations. In stage 1, the predictor produces an initial estimate of the location of each part. The predictors of subsequent stages then improve on that prediction by learning to correct past mistakes, starting with stage 2. By the final stage, the goal is to accurately locate each part.
Let’s explain the different stages one by one:
Architecture of the Stages from a Pose Machine
In the Pose Machine model, Stage 1 is the image feature extraction model and Stage 2 is the prediction model. In detail, Stage 1 takes in an image, usually an image of a human body in a certain pose. “x” embodies the content of the image, which is then fed into a predictor “g”. The “g”s are predictors that output “beliefs” for assigning a location to each body part, for example, the knee or the elbow joint. In Stage 1, the predictor is labeled “g1”. “g1” analyzes “x” and produces a set of heat maps “b1”.
This image shows a clearer view of the heat map outputs in a single stage:
The three heat maps are graphical representations of data in which the individual values of a matrix are represented as colors. From the three sample heat maps above, we can see that they exhibit highly multi-modal appearance variation. A heat map typically contains “hot” (red, yellow) and “cold” (blue, green) colors to show the locations of body parts: hotter colors indicate a joint of the body, and colder colors mark the background or non-joint regions. The method is guided by the heat map of each part.
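To make this concrete, here is a minimal sketch (our own illustration, not code from the paper) of how a ground-truth heat map for a single joint is typically rendered as a 2D Gaussian centered on the annotated location:

```python
import numpy as np

def joint_heatmap(height, width, cx, cy, sigma=3.0):
    """Render a 2D Gaussian 'belief map' that peaks at the joint location (cx, cy)."""
    ys, xs = np.mgrid[0:height, 0:width]        # pixel coordinate grids
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2        # squared distance to the joint
    return np.exp(-d2 / (2.0 * sigma ** 2))     # "hot" near the joint, "cold" elsewhere

# Example: a 64x64 belief map for an elbow annotated at x=20, y=40
hm = joint_heatmap(64, 64, cx=20, cy=40)
```

Visualized with a color map, values near 1 appear red/yellow (the joint) and values near 0 appear blue/green (background), exactly the hot/cold pattern described above.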
Following the first stage, we feed its result to Stage 2.
Stage 1 to Stage 2 in Pose Machine Model
In Stage 2, the procedure is similar to Stage 1. With the heat maps generated in Stage 1, we need a way to transform them into data that can be fed into Stage 2. This is the role of the function ψ(·), which transforms the heat maps “b1” into context features for the next stage, in this case, Stage 2. Together with these context features, we again extract image features x’, pass them through a new predictor “g2”, and generate new heat maps “b2” in Stage 2.
This image shows a clearer view of the procedure from Stage 1 to Stage 2:
Input Image to Stage 1 Heat Maps, and further process in Stage 2 and Stage 2 Heat Maps
Generally, the number of stages is a hyper-parameter. The first stage is always fixed, while the stages after the second are repetitions of stage 2. In Stage 2 and beyond, we take both the heat maps and the image features as input; the heat maps add spatial context for the next stage. The result is obtained by repeatedly refining the heat maps in each later stage to correct earlier errors. The following figure shows how the heat map identifies the joint position more accurately with each further stage.
Repeated Refining Heat Map through Three Stages
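The stage-wise refinement described above can be sketched as a simple loop. The function and variable names below are ours, not from the paper, and the predictors are placeholders for whatever classifiers a concrete Pose Machine uses:

```python
def pose_machine(image, extract_features, g1, g_refine, psi, num_stages=3):
    """Sketch of the multi-stage Pose Machine inference loop.

    g1       -- stage-1 predictor: beliefs from image features alone
    g_refine -- predictor reused in stages 2..T: image features + context
    psi      -- maps the previous stage's belief maps to context features
    """
    x = extract_features(image)
    beliefs = g1(x)                      # stage 1
    for _ in range(num_stages - 1):      # stages 2..T refine past estimates
        context = psi(beliefs)           # spatial context from previous beliefs
        beliefs = g_refine(x, context)   # correct past mistakes
    return beliefs

# Toy demo with dummy numeric components, just to show the data flow:
out = pose_machine(
    1.0,
    extract_features=lambda img: img,
    g1=lambda x: 0.5 * x,
    g_refine=lambda x, c: (x + c) / 2.0,
    psi=lambda b: b,
)
# out == 0.875: each "stage" pulls the estimate closer to the feature value
```

In the real model, `g1` and `g_refine` are convolutional sub-networks and the beliefs are per-joint heat maps rather than scalars, but the control flow is the same.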
2. Convolutional Pose Machine
CPM is a cascaded model based on Pose Machines, and usually contains more than two stages. The results from each stage become the inputs to the next stage.
Convolutional Pose Machines based on Pose Machines
The “C (Convolutional)” part is shown in the extended “C” and “P” stages in figure (c) and figure (d). In (a) and (b), “x” is just the image features extracted from the original input image. With CPM, the single “x” image feature model is extended into multiple convolution and pooling layers. It replaces classifiers & feature extractors in pose machines with convolution and max-pooling.
CPM improves on PM so that both image and contextual feature representations are learned directly from data. The sequence of pictures in (e) shows how the receptive field expands from a knee joint to a larger area. At each stage of CPM, the spatial context information contained in the heat maps provides strong cues for the subsequent stages to generate better maps with more accurate joint positions.
Problems and Challenges
Vanishing Gradient
Since CPM consists of many stages, the gradient updates propagated to the early stages of the network become vanishingly small or non-existent. To reduce the impact of this issue, CPM applies intermediate supervision when computing the loss: instead of computing the loss only at the end of the network, we compute it at every stage. The loss is still the L2 distance between the predicted heat maps and the ground truth.
Histograms of Gradient Magnitude During Training
From the histogram of gradient magnitudes we can observe the change in magnitude of gradients in layers at different depths in the network. For models without red-line intermediate supervision, gradient distribution of higher layers presents more uniform compared to early stages. However, the gradient magnitude distribution has a moderate variance throughout the stages of the network with intermediate supervision.
2. Receptive Field
The network is designed to achieve a large receptive field in the output layers of each stage, which supports the learning of potentially complex and long-range correlations between parts.
Wrists and Elbows Effective Receptive Field
With large receptive fields, networks are effective at modeling long-range spatial interactions between parts.
Applications
Key point detection problem addressed by current technology
There are three models aimed at resolving key point detection problems. Ranked by detection accuracy: CPM < DeeperCut < CMU OpenPose.
In this article, we focus on the CPM model, which is the predecessor of the open source project OpenPose. The pose estimation task can be framed as a Fully Convolutional Network (FCN) problem: the CPM model takes a human pose image as input and outputs multiple heat maps, one per key point.
CPM is a cascaded network that sequentially feeds the contextual output of the previous FCN as the input to the next FCN.
Each stage generates a response image for the next stage (blue score in the above image)
A center map in the form of a Gaussian response is an extra input to the network. With the center map, CPM is able to handle multi-person pose detection by telling the network the position of the current target person.
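Since the center map is just another Gaussian response, it can be sketched as an extra input channel stacked onto the image. This is an illustration of the idea, not the exact CPM implementation:

```python
import numpy as np

def add_center_map(image, cx, cy, sigma=21.0):
    """Append a Gaussian 'center map' channel so the network knows
    which person in a multi-person image is the current target."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    center = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return np.dstack([image, center])   # shape: (h, w, channels + 1)

# A 3-channel 32x32 image with the target person centered at x=10, y=16
inp = add_center_map(np.zeros((32, 32, 3)), cx=10, cy=16)
```

Running the same image through the network with different center maps yields a separate pose estimate for each person.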
General usage of CPM (with Tensorflow)
One of the interesting usages of CPM is to predict the next movement from key joint positions in dancing figures.
Here is one of the state-of-the-art models for 2D body and hand pose estimation:
Fashion Landmark Detection project using CPM
Applying CPM in Fashion Landmark Detection can locate the key points on various types of clothes.
The project uses the data provided by the Tianchi FashionAI Global Challenge, which includes 40,000 images as a training set and 10,000 images for testing. Each picture is assigned one of 5 clothing categories: blouse, outerwear, dress, skirt, or trousers. The training set provides annotations for 24 key points per picture, each carrying three items of information: the x coordinate, the y coordinate, and a visibility flag (-1 means the point does not exist, 0 means it exists but is not visible, 1 means it exists and is visible).
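As a sketch of what one key point record might look like when parsed (the field layout follows the description above, but the exact token format is our assumption, not the official specification):

```python
def parse_keypoint(token):
    """Parse one 'x_y_v' annotation token into (x, y, visibility):
    v = -1 (does not exist), 0 (exists but not visible), 1 (exists and visible)."""
    x, y, v = (int(part) for part in token.split("_"))
    return x, y, v

x, y, v = parse_keypoint("134_260_1")   # a visible key point at (134, 260)
```

Only points with v >= 0 would contribute a heat-map target during training; absent points (v = -1) are skipped.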
The tricky part of this project lies in how to achieve the goal of detection with a model that is used to do prediction.
The first approach to resolve this issue starts with adding an extra key point that moves randomly across the picture, which means the input images are no longer static and the model can start training.
Result of first approach with random moving extra key point.
Result of enhanced approach with extra key point only moving along the edges
The result of the first approach is not ideal, since the extra point interferes with a real feature area. The new approach fixes this by allowing the extra point to move only along the edges, so it does not affect the real feature areas.
Testing results are shown below, which look promising.
Future Expectation
Although the use of CPM has so far been largely limited to human pose detection and estimation, there are applications (like the Fashion Landmark Detection project) that can benefit from this model by reconstructing the input features and fine-tuning the model.
Right now, the CPM model requires precise annotations as input, which adds a tremendous amount of workload. There is still room for improvement, such as detecting the key points (joints) from pose movement alone, without annotation input.
Reference
Convolutional Pose Machines — Tensorflow:
https://github.com/timctho/convolutional-pose-machines-tensorflow
Convolutional Pose Machines:
https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Wei_Convolutional_Pose_Machines_CVPR_2016_paper.pdf
DeepCut Pose Estimation:
https://pose.mpi-inf.mpg.de/contents/pishchulin16cvpr.pdf
Semantic Pose Machines:
https://www.ingentaconnect.com/contentone/ist/ei/2018/00002018/00000010/art00014?crawler=true&mimetype=application/pdf
Human Pose Estimation: Simplified:
https://towardsdatascience.com/human-pose-estimation-simplified-6cfd88542ab3
How to maintain the focus as a software developer?

Software development is very challenging. First you have to understand the problems you are solving. Then you need to think of a solution. Depending on the solution, you have to research the proper technology. Throughout the process, you need a clear perception of the system you are working on, and you have to think long-term. If you don’t, any future changes will be hard to pull off.
It’s not enough to be a great engineer to develop and maintain software; the effort you invest in the process has to be excellent too. I’ve made tons of mistakes only because I lost focus, so I’ve been looking at different techniques to get into a focused state faster and to preserve it.
Why do we need to focus?
I believe this is quite obvious; nevertheless, let’s point out a few facts. When you focus, you direct your attention to only one duty. You make decisions quicker, and you absorb and process information faster. Focus is the first step to getting into the flow state. Flow is an optimal state of consciousness, a peak state where we both feel our best and perform our best. It’s hard to get into that state, and if it is disrupted, we need a lot of time to get back, if we get back at all. That’s why we need to focus intensely on our work, so we can perform better.
Have clear goals
Focus here is on the clear, not on the goals.
I’ve seen many developers who think that sprint planning and defining tasks are, in some sense, a waste of time: you have to go into the details of every single task; there are discussions about parts of the software you won’t develop; it seems better to spend the time on development because you can make small decisions along the way; the assignments seem clear already.
All these reasons are wrong, or the person defining the tasks is not doing his/her job adequately. First, you have to have the broader picture of the software, and you never know which feature you will end up maintaining. Second, the goal of planning is to save time. If you don’t make all the small decisions up front, you will have to pause your work either to think of a solution or to ask someone to clarify the problem. Either way, you are losing focus, and not only that, you are disturbing someone else’s.
If you work in an environment where you don’t have proper product specifications, try to write them yourself. If the feature you are working on is complicated, split it into chunks. Then you can focus on the work, not on defining the product. The majority of us are bad multitaskers; we should do only one thing at a time.
Don’t let anything disturb you.
How many times has this situation happened to you?
During the workday, you should have time that is only for yourself: no questions from colleagues, no social networking, no fantasy football, and no other distractions.
In that part of the day, my phone is on silent mode, and desktop chat apps like Viber or WhatsApp are off. If you are the team lead, I encourage you to make this a team policy, so everyone knows not to disturb others and to focus on the job. After this period is over, it’s time for questions and discussion.
This policy is hard to implement in an open space, because there are many people and many teams. A former colleague of mine had a great solution: he had a USB lamp, and if it was on, everyone knew not to bother him.
Some people will think this is a selfish approach. I don’t agree. I believe it’s selfish to ask about something that needs only a few minutes of your own analysis, or to be unwilling to wait a few hours for an answer. If someone has dedicated their full attention to solving a problem, we should respect that. It may feel like they need only a second to give us the solution but, as I already said, it’s very hard to get into the flow and very easy to get out of it.
Organize code well
Imagine you are writing a new service or control for your app. Everything goes as planned, and the new component works great. The next step is to integrate it. You open some older class, look at the code, and can’t understand a thing. You don’t want to work anymore. I’ve had my fair share of those situations.
Working with old code or other people’s code can be very frustrating. You can avoid the problem by using coding standards. When code is clean, organized, and easy to read, understand, and navigate (or, as we call it, beautiful), you won’t lose much time searching for the parts you need. Understanding the code also becomes more comfortable, and therefore it’s easier to maintain focus.
Organized code has other advantages too. I have already written about it. Check it out.
Organize your tasks
During a sprint, you probably get several tasks. First you have to prioritize what’s essential and make sure you can deliver it on time. Afterward, you have the flexibility to arrange the others as you prefer. Some tasks are challenging, some relaxing, and some just boring. Don’t do all the enjoyable stuff first and leave the dull tasks for the end. No matter how exciting a task is, the quality of the code has to be identical.
Focus, and particularly the flow state, is very expensive. It takes a lot of energy from us, and we need time to regenerate. So if you have more than one fun task in the sprint, after you finish the first, the next one should be a dull or relaxing one. You don’t want to overload your brain and burn out.
If you leave all the boring tasks for the end of the sprint, you won’t have enough energy or desire to complete them all. I like to leave the relaxing ones for the end of the week, so I don’t get exhausted before the weekend.
To get the maximum out of yourself, you have to understand all your responsibilities and know your abilities. If you are too ambitious, you’ll burn out. If you are too relaxed, you’ll stagnate. So be cautious with the organization of your tasks.
Struggle well
There are occasions when there are no fun tasks in plain sight. Everything we get is either dull or well above our skill level. Even if we use all the advice above, it’s hard to focus and be as productive as we can be. Sometimes coding is a stressful process, and the effort we need to invest is enormous. Unfortunately, there is no practical advice that can help here. We have to struggle well.
Stop perceiving struggle as something negative. Struggle is a test of character and creativity. In those moments, to keep myself motivated, I remind myself that there is no avoiding pain, especially if you’re going after ambitious goals. Eventually, you’ll have a breakthrough, and those responsibilities won’t look so difficult anymore.
The last point I want to add is about perception. A few days ago, there was a discussion on Twitter about an article I wrote, and some guy said that creating a modular app is hard. My response was that it’s not hard; it’s challenging. Having that positive mindset helps you overcome many obstacles and get into a productive state.
Conclusion
Focusing and getting into a flow state can be very difficult. However, we should do our best to reach those states: then we learn faster, make better decisions, and solve problems quicker. Software development is quite challenging, and the competition is enormous. We should use any tool or method to be as productive as we can, so we can stay relevant in the industry.
Want to find out more?
Or do you want to discuss Swift and technology? Follow me on Twitter. You’ll find additional content, reading recommendations, and much more.
A Peculiar Tune

Quatria
Upon issuing forth from the cave mouth through which he had escaped the Black Water and Stone Sea below, Benda became immediately drowsy.
The sun shone down on him, and he blinked dumbly to see its rays again. It took several moments for his eyes to adjust to natural light again, and his limbs felt incredibly heavy. The cool breeze rose up from what he could now see was a large body of water not far off. He looked around. The cave opened onto a rocky ledge. And without climbing down, or otherwise exploring further his surroundings, he promptly returned to the cave mouth and dozed off for how long he did not know, half in the shade of the cave, half in the sun’s rays.
He slept the sleep of the dead, but he knew he was not dead, and despite being nearly (or even possibly) turned to stone, he had returned to the land of the living. Or at least one of those lands…
What roused him finally was not the smell of the sea nearby, the crashing of waves, or the cry of the sea birds. It was the strange sound of a flute, horribly out of tune, arrhythmic, almost broken sounding, which roused him out of slumber.
He stretched, and opened his eyes, rubbed his ears, and his face, his cheeks, and his forehead, blinked once hard, and then again.
“What a horrible racket!” he muttered to himself. Gathering himself, he got up to search for its origin. A thin rocky trail descended the rocky ledge from the cave mouth. As it wound downward, there was revealed a green but scrabbly field, and beyond it a sea which Benda did not recognize immediately.
Round the final bend of his descent, the broken horrid tune grew significantly louder, until stepping out onto the grass, Benda thought he spotted its source. He sensed there was someone behind a boulder of medium height, too tall to easily see over. He crept over to behind where the boulder lay, taking care to stay out of the line of sight of whoever — or whatever — might be on the other side.
Try as he might though, Benda could not surprise the strange but wise little fellow who had sat down there to sit in the shade and play his flute. As Benda approached, from the far side of the boulder the music — if you could really call it music — stopped abruptly, and there was silence. Benda, crouched there, suddenly felt a bit foolish for hiding and stood up.
“Hullo there,” he said over the rock, and strode out around it. But he did not realize, that from the other side, the little fellow had also gotten up to get a look at the visitor who was trying to creep up on him unawares. And so, when each reached the other side, they found it empty.
“Most peculiar,” said the little figure wearing a wide brimmed hat, top slumped to one side. Most peculiar indeed. He held in one of several hands (actually, technically, rootlets) his little reed flute, which he had carved himself from a special type of reed that only grew alongside the Great River. Over his shoulder was slung a simple pack. “Most peculiar indeed.”
“I say,” Benda began, circling back the way he had come. Likewise, the little brown lumpy figure did the same in the opposite direction on his thin stalks-for-legs. Once again, they each found an empty space the other had just vacated. “Hold still!”
The little figure did so, and finally revealed himself bodily to Benda, bowing low in a pretentious and awkward but nonetheless charming manner. Benda couldn’t help but smile ear to ear at this ridiculous creature.
“Tob the Gobble, at your service,” he said, removing his purple hat with a flourish, and using the butt end of his flute to simulate the cane of a gentleman.
“Benda, at yours,” he said.
Returning his cap to his brown lumpy head, Tob remarked, “Just Benda?”
Benda shook his head, “Just Benda.”
“Hmm! Then, Benda the Just we’ll call you! Everyone needs a second name, don’t you agree?”
“I’ve had too many,” Benda admitted, still smiling at the peculiar little fellow. “Now I’m just me.”
“What’s one more for good measure?” the gobble said, twirling around merrily. “Hmm, that reminds me of a song,” he began, holding up his flute to what seemed to pass for a mouth on the little creature. He exhaled into the instrument with a strange sort of hooofting sound.
“Please, let’s just… talk for a while first,” Benda interrupted him, hoping to head off another long horrible peculiar little tune like that which had roused him from his sleep and brought him to this place.
Tob seemed to eye him from the many small root buds which speckled the surface of his hard-looking skin, and lowered his flute. “Music critic, eh? I see. What about jokes? Got any good ones? Don’t worry, I’ve got quite a few! Let’s see…”
“I’m sorry,” Benda said. “I’m sure they’re quite funny, and I don’t mean to be rude, but I’m not really much in the mood for joking or music. I’ve lost my friends, and I don’t know where even really I am. I’m just trying to get home to see my family again.”
“Tough crowd,” Tob quipped. “Let’s see then, a tale! A tell all! Whoohoo!”
Benda groaned, he realized, audibly. But Tob didn’t seem to notice or care, and just went right on going.
“A tale it is then. It’s settled. It’s been a while. Let’s see… I’ll tell you a tale, not a tall tale, but not quite a small tale — let’s call it a little bit more than ‘little.’ A little bit (much) like me. In fact, it’s my tale, and I’ll give it to you, and then you, dear friend, can regale me with yours.”
“This sounds like it might take a while. Is there somewhere we might sit down?” Benda asked. “Somewhere out of the sun?” He looked up again at the sky, nervously beginning to remember the prying eyes from above, the eagle and shape-shifter Murta who had hunted him and his party like prey, driving them underground.
“Why, yes! I know just the place, Benda the Just, for just such a tale! Hooray! Follow me!” And off he went, playing his flute and dancing as they walked, stopping only once briefly to interject, “A song for a walk, I always say! Hey, hey!”
Near the edge of the sea, where the rock ledge jutted out, they rounded a bend, and there was a grove of low trees. Benda sheltered down in it, out of the sun, and braced himself for what he suspected might be a rather long-winded tale from his peculiar, yet oddly charming new friend, Tob the Gobble. | https://medium.com/quatrian-folkways/a-peculiar-tune-791568412dbf | ['Timothy S. Boucher'] | 2019-09-19 02:48:08.833000+00:00 | ['Fiction', 'Fantasy', 'Writing', 'Science Fiction', 'Quatria'] |
Why we Need Design Thinking in Politics

“Naked girl, covered in Napalm. Five marines Raising the Flag, Mount Suribachi. V for Victory. They remember the pictures. Fifty years from now, they’ll have forgotten the war.“ Wag the Dog (1997)
A picture is worth a thousand words, and in politics there are a lot of words. From the early cave paintings in Bhimbetka (dated 13,000 BC to 12,700 BC) humans have used visual methods to describe their own version of events in order to shape public perception. Religion began to spread through the illiterate parts of the world, largely due to the strong iconography that came alongside it. In the 19th century royal families, great conquerors and heroic battles were commemorated in elaborate, gigantic paintings, which solidified the way we think about those events and people hundreds of years later. In the 20th century, politicians used Conscripted Art and propaganda to control public opinion. Thus, when we talk about design in politics, we’re not just thinking about a candidate’s great logo or a movement’s compelling website, we’re considering all the visual elements that project a certain image and message.
Reaching the Global Village
In the 21st century, the age of the internet and social media, we find ourselves in a reality where not having a clear, strong, and– most importantly, memorable– visual identity is not an option. We all live in a global village, and the one language almost all of us speak is the visual language.
As a people, we like to think we’ve come a long way since the days of shameless political propaganda and that we are much more suave now. We won’t be fooled by posters and political infomercials that are trying to bluntly paint a different picture from what we know to be true. The good news is that we are much more sophisticated than that. The bad news is that so are the campaigners.
Social media has contributed massively to the proliferation of design in politics. It’s not just that we focus so often on visual elements as we scroll through media-sharing platforms like Instagram, Twitter, and Facebook; it’s also about the sheer volume of content that is being produced and the rate at which it’s shared. To keep viewers’ interest, you need a graphic language, templates, and constant production. In that way, social media allows smaller movements or campaigns to gain recognition on the basis of their graphic design presence: it’s not just what you say, it’s also how you look.
The Medium is the Message
America, in many ways, is a perfect example of how the relationship between design and politics has evolved in the digital era. Many have called Barack Obama “the social media president,“ since he was the first holder of the office to effectively utilize the tools offered by the internet. “The high production value and disciplined branding of the Obama campaigns deserve a great deal of credit for raising the standards of campaign branding, proving that great branding provides a powerful platform to build trust, tell stories, and engage the public imagination,“ writes Deroy Peraza for fastcompany.com.
The Obama campaigns have also showed us once more that Marshall Mcluhan was right when he said in 1964 that “The Medium is the Message“; it is certainly true when it comes to politics. When the Obama campaign reached out to a street artist to create one of the most memorable images for the campaign, they were able to brand Obama as the “cool grassroots candidate.“ Obama got the graphic treatment of a social justice icon, in a very early stage in his career. One can argue that this was a catalyst for him receiving the Nobel peace prize in 2009, after 12 days in office: life imitating art.
On both sides of the 2016 American presidential election, the design and branding work was very much aligned with the tone of voice of each campaign, although they took very different design paths. Michael Beirut, a designer on the 2016 Clinton campaign, said, “Donald Trump’s graphics were easy to dismiss. They combined the design sensibility of the Home Shopping Network with the tone of a Nigerian scam email.“ But this wasn’t an oversight. The outdated aesthetic was appealing to Trump’s base and, most importantly, was aligned with his key message of going back to “simpler times“ when things were better. If this website was a person, it would be a white middle-aged man.
On the other hand, almost as a perfect juxtaposition, if Hillary Clinton’s website was a person, it would be a young woman. Clinton’s campaign design was well-thought-out (design work on the campaign had begun two years beforehand), and felt fresh and colorful, with graphic elements that showed progress in the literal sense (the arrow inside the H) and in a more abstract sense (introducing a more innovative design language).
YES and NO
But let’s take a trip outside of the US for a moment, and travel back in time to 1988, when Chile was under the dictatorship of Augusto Pinochet. A national referendum was approaching, an opportunity for the Chilean people to determine whether Pinochet should extend his rule for another eight years. The referendum asked the people of Chile one thing: Augusto Pinochet Ugarte — YES or NO. With only a few months to organize, understaffed and with no budget, the “NO campaign“ went straight to work.
The YES side had the clear advantage of unlimited government funds and the sitting president as their candidate. But even though the NO campaign had been forced into a negative position, they were able to use design language to underline the positivity and hope they wanted to associate with a “NO” vote. The campaign used a rainbow as its main symbol, emblematic of the plural views of the opposition (each member party had its own color depicted in the rainbow) and, at the same time, the hope for a better Chile and a more prosperous future. Some of the ads had the feeling of a Coke commercial, with images of people laughing and having fun, as a selling point for what a future with NO could look like. The rest is, well, history. Against all odds, Pinochet was voted out of office.
With the rise of nationalism in recent years, there are many examples of campaigns with “nationalistic” design language, with strict use of the national colors and glorification of the party leader as the sole protector of the state. Brazil’s new president, Jair Bolsonaro, noticeably used the yellow and green of the Brazilian flag as the main hues of his campaign. His opponent, Fernando Haddad, used a broader color scheme, adding in shades of blue along with the traditional yellow and green. In the end, Bolsonaro’s nationalistic language, which his chosen campaign colors underlined, triumphed over the more liberal Haddad.
A Win-Win
But not all is lost. Like everything in life, to every action there’s a counteraction. We can look at some of the Democratic campaigns that are popping up, and see some hopeful yellows, refreshing mint greens, bright pinks, and more. And it’s not just the colors. Progressive parties around the world are using design language to communicate their views of acceptance and equality in a new and exciting way.
Design is a thread that runs through all human political movements, but these days it’s looking more and more like a tapestry. Today’s audiences are much more sensitive and savvy: the generation of voters raised in the internet era is propelling us into a newly “visual age.” Voters are pushing the political world to produce better visuals, and by doing so, the conversations around it can be elevated. It’s a win-win, and that hardly ever happens in politics.
By Gal Cohen & Esmé Rocks | https://edenspiekermann.medium.com/why-we-need-design-thinking-in-politics-cb3cff400972 | [] | 2019-07-29 08:21:34.020000+00:00 | ['Design', 'Politics'] |
“CHILD’S PLAY” | I play SimCity BuildIt for iPad. I play it so often, in fact, that it might be the thing I do most these days. It is not nearly as immersive and complex as past SimCity titles, but I nonetheless feel wanted, sometimes even needed, and loved by the Sims who inhabit my city called Monteblo. Monteblo didn’t begin with my recent download of the game, though. No, Monteblo rose from a distant — and imaginary — location in the South Pacific around the time that puberty surged pimples through the pores of my skin.
Monteblo is a place that straddles time. There are books filled with graph paper pages on which are drawn the intricately detailed designs of Greek temples, and mayoral estates designed after Greek temples, structures that dapple the countrysides, suburbs and grand cities that make up the magical land. That same magical land is home to one of the States’ (Monteblo is a territory of the United States of America, of course, with voting rights) most successful collegiate sports programs: Deusis Testy. There are football and basketball cards with famous players’ pictures, heights and achievements on them. Among the most famous is of a player known simply by his surname: Frekmeiesy.
( frĕk • mī • ĕs • ē )
Frekmeiesy was a two-sport athlete who was bound to fall victim to his popularity. His name alone was enough to drive his Creator mad. It elicited smiles. Smiles from adoring fans, even smiles from opponents’ fans. Hell, Frekmeiesy’s appeal was so universal that his Creator’s Creators could hardly go a single moment without mentioning him!
And always with those smiles, those smiles that confessed simultaneous love and mockery. As his popularity grew his statistical output began to wilt. The hand of his creator pointed lightning bolts at him fierce enough to siphon the athletic ability from his joints. Game by game Frekmeiesy could but sit idle on the bench as other players stole the spotlight that was once his, players with normal names like Joe and Jack, Johnson and Franklin; players who could barely draw a glance from the home crowd much less smiles from the Creator’s Creators, no matter how high their shooting percentages!
The Creator had a soft heart. The Creator felt pain and bore great responsibility when Frekmeiesy struggled. Temporarily, Frekmeiesy was even removed from the basketball roster entirely. The Creator thought this would bring an end to Frekmeiesy’s front page fame. If only He, the Creator, could extinguish Frekmeiesy’s existence from memory, well then he’d… he’d… | https://medium.com/not-complaining-but/child-s-play-3085bc46d237 | ['The Motor Tom'] | 2015-09-10 20:45:57.272000+00:00 | ['Childhood', 'Ncb Catalogue', 'Storytelling'] |
The White Supremacist Fascist State Murdered Anthony Lamar Smith | The White Supremacist Fascist State Murdered Anthony Lamar Smith
The Facts and Rule of Law — Twisted by the Judiciary — Demonizing the Victim and Canonizing the Cop
The Court of Hate
Missouri Circuit Court Judge Timothy Wilson’s reprehensible ruling is deeply disturbing in its Trumpian Fascism. The alt-judge contorts the facts in order to reach his predetermined decision — the ruling holds that the prosecution did not prove beyond a reasonable doubt that Jason Stockley murdered Smith. It is a lesson in White Supremacist Fascism, cop adulation and racism — simply, what Trump’s rabid base understands to be “Law and Order,” this is Arpaio justice.
From the outset this Antebellum travesty of legal alt-reasoning reads as if a KKK defense attorney wrote it — not a neutral finder of fact and law. The alt-judge begins his demonizing analysis with the important fact that Smith pulled into a parking lot at a Church’s Fried Chicken Restaurant and with his robust legal acumen delineates that Smith’s companion “urinated” behind the building. He continues by asserting that “Smith did not have a bag containing food on any of the occasions he returned to the car.” How your dis-honor do you know that? No reason to support a factual assertion with evidence except the LAW, right?
All of these things are legally deemed findings of fact but none of them is necessary for or mentioned again in the legal analysis or the legal holding. When a defendant waives his right to a jury, facts not relevant to the legal analysis are necessarily not presented to the trier of fact and law — as irrelevant and a waste of time for the judiciary — if not at the actual trial then definitely in the holding. Not here though — here these facts are written, for no reason except to justify the foregone racist conclusion — demonize the victim and applaud the police.
The insidious, mendacious travesty-of-justice ruling continues by taking as gospel whatever the defendant says is true. It is inherent in the American criminal justice system that a defendant is innocent until proven guilty, but not that the defendant is a vision of sanctimony.
The alt-judge goes out of his way to describe the murdering cop as a graduate of West Point, that he injured his back while serving in Baghdad in 2004 — specifically from a bombing at the Shaheen Hotel and still has lingering problems with his sciatic nerve. Who the fuck cares, your dis-honor? The esteemed alt-jurist touts Stockley’s militarism as divine, lest the world not forget he’s a fascist, patriotic — murderer. But not one kind word for Mr. Smith, not legally relevant but clear evidence of judicial bias. I rhetorically wonder why?
He also deemed it necessary to assert that there was no prior or concurrent federal investigation into the incident. All of this is gratuitous and legally irrelevant — it’s the legal doctrine of bullshit! A doctrine that is enshrined throughout this bigoted nation’s history and has been tradition since the insidious founders lustfully lynched.
There is a recording of Stockley saying, “we’re killing this motherfucker, don’t you know.” The racist alt-jurist writes that it cuts off 45 seconds before the pursuit ends. Yes, this is evidence — he wrote it in the findings of fact. But it’s not evidence of intent according to the def, sorry, alt-judge. It’s not evidence of anything apparently because the defendant doesn’t remember saying it and that’s enough for this alt-court.
This dishonorable member of the Missouri bench seemingly justifies his nonsensical take on intent with the sick rejoinder that the statement can be “ambiguous depending on the conduct.”
The conduct your dis-honor is the fucking cop shooting Smith at point blank range and testimony that there was a kill shot! But hey it’s ambiguous, right? About as ambiguous as this would be if the roles had been reversed — as ambiguous as you would be with your fascist righteousness, to swiftly rule and sentence to eternal damnation, from atop your noxious bench of supremacist whiteness.
There’s nothing more American than black men murdered by white cops without consequence — and I quote myself from an article about Terrence Crutcher’s murder by a white cop and that cop’s subsequent acquittal:
Racism is real, sobering reality — however, race is a creation, an American creation — in the 200,000 plus years of human history, religion and race are the most horribly, heinous creations the species has ever dreamed up — nightmare more fitting. Both creations rely on tribalism and social order — in fact that’s why they are central to dystopias — to control and garner power, while exploiting the weak — it’s not surprising than that both fit squarely within fascist ideology.
I’m not going to further analyze the racist, alt-legalese diatribe from the Twenty Second Circuit Court of Missouri because that’s giving it too much credence. Hopefully, you, alt-judge, will be forever haunted by nefarious, racist infamy and your alternative fact spewed fascist, legal manifesto of demonization and alt-evidence.
In reference to your alt-logic, I would be remiss to not point out one more thing — it was solely the defendant’s DNA on the gun purportedly found in the victim’s car, you write but you explain that damnable fact away forcefully for what reason? An appeal — hardly — it wasn’t placed in the victim’s car because you didn’t see the defendant frame him, right? That’s the kind of solid hypocrisy laced legal reasoning that could get you seated next to Gorsuch, real soon — it’s not a long distance metaphorically from your lowly far-right state bench to the fascist SCOTUS — courts just like yours all over the country — state and federal — will become more and more festooned with alt-right devils in black robes, masked in choir boy facades.
The Senate Judiciary Committee Hearings offered stark evidence of this — Gorsuch, is an exemplary conservative hypocrite, and he will implement the GOP’s racist agenda — he did not answer questions with answers, only originalist alt-jurist obfuscation — because doing so would expose this evil.
The Supreme Court is slipping further towards authoritarianism — one more Trump or Pence pick will cause its nadir to true tyranny — at least for a generation. This puts Roe v. Wade and Obergefell v. Hodges (among many other cases) in true danger. The Senate Majority leader and his fellow republicans obstructed the confirmation of a qualified jurist in an attempt to create a history that solely supports their agenda and was clearly a disservice to this country.
The stolen Supreme Court seat now held by Gorsuch — an epic pious hypocrite — evil — only to be out done by the guys that put him there — Senate Majority Leader McConnell, his destruction of the filibuster and unconstitutional obstruction of President Obama’s nominee, Judge Garland, an autocratic blow to American democracy.
Not to be overshadowed by the tyrannical tendencies of his Party, Ochre Il Duce’s Nixon-style Tuesday Night Slaughter of FBI Director Comey was another shot to the heart of the American Rule of Law — at least the common criminal dwelling at 1600 Penn in the ’70s had the sense to Massacre on a Saturday.
And your dis-honor, the last thing I’d like to address, if it please the alt-court — is the fact that Stockley had an AK-47, although not relevant to the murder charge, you deemed it necessary to justify and that’s plain evil — not simply because it’s not justified. You, alt-judge Wilson, are a disgrace to the bench — a vile, racist bigot — your propaganda infused decision would make Jim Crow proud.
And again, I quote myself:
This is demonic but not surprising in the good ole boy America of Jefferson Beauregard III of 2017 and his drug war revamp, consent decree backtrack, forensic lab debauchery to imprison more of those he’d like to be slaves — with his sick, carnal lust of Antebellum Ideology — worship of Jesus and guns but nothing else in the Constitution especially the 1st, 5th, 6th, 8th, 14th Amendments and his desire to destroy the 13th with the illogical devotion to the 2nd as purveyor of “law and order” for him and his kin — not to mention any right to autonomy which only he and the South shall have to break from the Union and to be born-again bigots — to codify Biblical idiocy while continually demonizing the other and fearmongering the non-existent Sharia threat and its demonic capacity. Ever the hypocrite — these Christian Radicals would implement a Theocracy — similarity to the hellish Islamic State Caliphate is based on shared hate — and god.
Queerification: Stories with a Twist | by James Finn
Prism & Pen is so packed with queer treasure this week, I barely know where to begin. Our editor’s choices can’t highlight our best writing, because P&P writers have submitted so many GREAT stories. Abbie Drake joins Emma Holiday with critical perspectives on coming out as trans later in life. theoaknotes gives us a youthful trans/nonbinary perspective, and Ty Bo Yule spins a tale of queer Christmas camp from their days as a “baby butch.”
Also, expanding our emphasis on writing about art and artists, Bradley Wester and I go meta. I translate painful, controversial lyrics by French singer/songwriter Renaud, while Bradley digs into homoerotic paintings to explore homophobia with his straight artist friends.
All that and more, just below!
Editor’s Picks —
Creative Nonfiction
Pride Is the Antonym of Shame
Bradley Wester
Bradley and his visual artist friends have a tough but friendly dialogue about homophobia, pornography, and the nature of art that you won’t want to miss. | https://medium.com/prismnpen/queerification-stories-with-a-twist-a8c5d98aab5d | ['James Finn'] | 2020-12-20 18:51:36.252000+00:00 | ['LGBTQ', 'Storytelling', 'Creative Non Fiction', 'Fiction', 'Poetry'] |
You’re not Going to be the next Steve Jobs, Oprah, or Beyonce | A few years ago, a college student posted the following question on Quora.
How can I become great like Steve Jobs, Richard Branson, or Elon Musk? One of the people who answered the question was Justine Musk. She’s one of the few people that has had an inside look into what it’s like to be Elon Musk. This is a brief excerpt of what she said:
Extreme success results from an extreme personality and comes at the cost of many other things. Extreme success is different from what I suppose you could just consider ‘success’, so know that you don’t have to be Richard or Elon to be affluent and accomplished and maintain a great lifestyle. Your odds of happiness are better that way.
But if you’re extreme, you must be what you are, which means that happiness is more or less beside the point. These people tend to be freaks and misfits who were forced to experience the world in an unusually challenging way. They developed strategies to survive, and as they grow older they find ways to apply these strategies to other things, and create for themselves a distinct and powerful advantage. They don’t think the way other people think. They see things from angles that unlock new ideas and insights. Other people consider them to be somewhat insane.
Her answer went viral, got picked up by many media outlets, and I interviewed her about the psychology of visionaries on The Unmistakable Creative.
In western culture, we place celebrities on pedestals. Our heroes are billionaires and cultural icons. You can’t pick up a self help book without reading their stories. They have set the standards and values by which we live our lives.
The pseudo celebrity culture of internet fame amplifies this leading to an increasing focus on resume values instead of eulogy values, and the feeling that unless you become the next Steve Jobs, Oprah, or Beyonce, you’ve failed. We read self help books, listen to interviews, and attend workshops in the hopes that we might become one of them.
Unless a million people buy your books, listen to your podcast, or read your blog, it’s not worth doing. Social media leads to endless comparison and envy, unrealistic expectations of ourselves and our lives, and a great deal of unhappiness.
Over the last year or two as podcasting has entered a golden age, I hear the same story over and over from various podcasters. A friend and I were talking about a popular podcaster who had the goal to become the next Oprah. When I asked that same friend about her goals for her podcast, she also referenced becoming Oprah. Tim Ferriss gets compared to Oprah.
But Oprah is already taken. With the fragmentation of the media landscape, the millions of options at your fingertips, even the producers of the show have more or less said there will never be another Oprah simply because the system is no longer set up to produce one. You’re not going to become the next Oprah.
The alternative, the one right at your fingertips is to seek out what Seth Godin calls “the smallest viable audience” and become the next best version of yourself.
Average at one thing but Extraordinary at another
At heart of the human potential movement is the underlying message that your potential isn’t limited, you’re not stuck with what you have, and you’re capable of anything. That’s of course complete bullshit. You are limited, incapable of plenty, and are stuck with certain parts of what you have.
In 7th grade, I had the genius idea of joining the football team. I lived in Texas, where football is a religion and 7th graders are the size of grown men. When the linemen did tackling drills, I got pushed back almost 20 yards. It looked like the scene from The Blind Side where Michael Oher pushes a kid right into the end zone and over the fence post.
At the same time, the band director suggested I switched instruments from the trombone to the tuba. He saw something in me, told me that I could make all-state band, and encouraged me to practice. He told me that I could be an average athlete or an extraordinary musician. So I practiced for 3 hours at a time from 7th grade through my junior year in high school. And I did make all-state band.
If I had put that same effort into playing football, it’s unlikely I would have made that kind of progress. I’m a scrawny Indian, who has almost no tolerance for physical pain, and probably wouldn’t last more than one series on a football field.
While telling kids they can be and do anything might be well intended encouragement, it could cause more harm than good. But that doesn’t mean we should tell them they suck. Instead, we should acknowledge natural limitations and encourage their natural strengths.
All of us are average at one thing and extraordinary at another. But when your goal is to become the next Steve Jobs, you might be blinded to whatever it is that makes you extraordinary and spend your life chasing what makes you average.
We Are Not All Created Equal
We come from unique circumstances, environments, and families. We are born with individual strengths, weaknesses, and limitations. Even two kids born in the same family will end up with different results.
My sister and I were both raised with a similar value system. We were encouraged to work hard and get good grades. We both got straight A’s in high school and got into Berkeley.
She graduated with a high GPA from Berkeley, graduated with honors from medical school, was the chief anesthesiology resident at Yale, and finished a fellowship at UCLA. Needless to say, she’s extremely smart and good at what she does.
I got terrible grades, couldn’t hold down a job for more than a year, got rejected from every business school I applied to, and ended up at Pepperdine because it was my only option.
Same genetics. Same parents. Same college. Drastically different results. Of course, the idea that we’re not all created equal isn’t that inspiring. It doesn’t help Nike sell more shoes or Kelloggs sell more cereal.
Why Outliers are Terrible Role Models
Every listener who heard Tim Ferriss’ interview with Lebron James could follow his diet and exercise regimen, read the books he’s read, and implement every single piece of advice he shared. It’s possible they might make it to the NBA. But it’s not very probable.
One of the dangers of our obsession with celebrities and outliers is that it causes us to make decisions based on false hope and unlikely possibilities. One of the most eye opening conversations I’ve had about this on The Unmistakable Creative was with an old mentor. The section below is from our conversation. You can listen to our full interview here.
When anyone talked to Tiger Woods, it was inevitable he was going to become Tiger Woods. This exists with people like Elon Musk, Mark Zuckerberg, Mother Theresa and Oprah. They are born that way.
These people are going to be successful and what happens is that they become the role models. I believe that’s dangerous because while they are models of success, they are not models of reality. They are role models for the majority of society, but what happens is that our view of success gets distorted.
Firstly, our idea of how to achieve success gets distorted and secondly, the ways in which we tell their stories by and large do not include the inherent abilities that they have. One example: Michael Phelps wins eight gold medals in a single Olympics. Why? Because he was born Michael Phelps.
They were born in a way that they are just going to win no matter what, so those people are not good models to follow for the rest of us.
What we should be doing is creating a safe environment in which we can be as vulnerable as we need to be, so that we can not only hold on to the possible, but increase our chances of achieving the probable.
We don’t create those environments for ourselves as a society, as governments, as businesses, or as a culture in America. We tend not to create vulnerable environments that allow us to be safe enough to be exposed and actually increase our probabilities of success. What we do instead is look at all these examples of people that don’t need that and we try to live like them.
What happens next is that we fail and we experience unnecessary suffering. We are unable to find the safe places to explore our vulnerabilities and our flaws, allowing ourselves to realize the fact that it’s not probable for us to be like them.
So what we do is we go to the safe places:
The safe places are the motivational events, the places where everyone else is pretending to be happy.
The safe places are the internet, television, happy commercials, and all the things that allow us to avoid looking at the things that are not increasing our probability of success.
Because it’s very vulnerable and hard to expose what we don’t have. It’s only happening in the recesses of our own mind and that’s a very lonely place to be.
When you focus entirely on what’s possible, without considering what’s probable, we do ourselves a great disservice and chase pipe dreams. As David Heinemeier Hansson said, “You’re probably not going to make it to the top, so why put so much time into a fool’s errand?”
The Age of Perfectionism
Scroll through your Facebook newsfeed and you’ll get a highlight reel from people’s lives.
The author whose books has done so well, it’s being translated into 10 languages.
The travel blogger who is moving to another country.
The fitness junkie whose ass looks amazing in yoga pants with advice on how yours can too.
Out of the above, the only on that’s not based on a real example is the third. But I wouldn’t have to look too hard to find a real example like this. I found these after looking for only 10 minutes, and most people use social media for far more than 10 minutes a day. And we wonder why social media makes us feel terrible. It is like getting completely shitfaced, smoking a pack of cigarettes, and snorting a line of blow and wondering why we woke up with a hangover.
Social media fuels envy, comparison, and anxiety in an age of perfectionism where you’re not just trying to keep up with Joneses, but strangers on the internet whose lives appears more interesting and glamorous than ours. We see the highlight reels of people’s lives and mistake the trailer for the movie. The trailer might have gotten people into the theaters, but it doesn’t change the fact that Waterworld was a terrible movie.
A Rigged Game and a Fool’s Errand
You can play a game in which you keep score, measure your life according to other people’s expectations and vanity metrics like fans and followers. You can measure your self-esteem with metrics like bank balance, book sales, downloads and traffic to your website. But this game is rigged for one simple reason: you’re always behind somebody else. The other alternative is to create and contribute out of the spotlight, for an audience of one, which might just cause you to reach an audience of millions.
When celebrities and billionaires are the primary role models in our culture, when internet fame is a career goal, and we measure our value in the dollars we earn, books we sell, and meaningless vanity metrics designed to keep us addicted to social networks, it’s worth asking ourselves a few things. We should ponder if we’ve lost our way, if our compass has led us astray from the eulogy values that matter towards the resume values that won’t when we’re gone.
I recently finished reading Robin Williams’ biography. By all accounts, he was a comedic genius. He accomplished more than most of us could dream of doing in a lifetime. He was rich, famous, and successful. And yet, he suffered a darkness that stood in stark contrast to the man who could make us laugh until we cry.
After his death, his friends and his family members didn’t talk about the awards he won, the movies he was in, or his prolific career. They spoke about his character, his heart, and the joy that he brought to all of us with his brilliance.
Do you want be remembered for the joy you brought, or the bullet points on your resume? The metrics you increased or the meaning you created? The size of your bank account or the size of your heart?
You’re not going to be the next Steve Jobs, Beyonce, or Oprah. Nor should you want to be. If your goal is to become the next version of someone else, you’re denying the world your unique gifts, and the next best version of yourself.
Do you find it challenging to stay calm under pressure?
I’ve put together a list of interviews with best-selling authors and entrepreneurs to help you stay resilient in the face of adversity. Just click here. | https://skooloflife.medium.com/youre-not-going-to-be-the-next-steve-jobs-oprah-or-beyonce-6752cf806834 | ['Srinivas Rao'] | 2020-06-11 16:07:53.269000+00:00 | ['Entrepreneurship'] |
Building Complex Image Augmentation Pipelines with Tensorflow | Building Complex Image Augmentation Pipelines with Tensorflow
Using the Tensorflow data module to build a complex image augmentation pipeline.
If you want to train your models with Tensorflow in the most efficient way you probably should use TFRecords and the Tensorflow data module to build your pipelines. Depending on the requirements and constraints of your application, using them might be not just an option but a necessity. The good news is that Tensorflow has made both of them pretty clean and easy to use.
In this article, we will go through a simple yet efficient way of building pipelines with complex combinations of data augmentation using the Tensorflow data module.
One of the options I mentioned that could improve your models' training, is to use TFRecords, TFRecord is a simple format provided by Tensorflow for storing data, I am not going into too many details about TFRecords because it is not the focus of this article but if you want to learn more check out this tutorial from Tensorflow.
The information provided here can be applied to train models with Tensorflow in any hardware, I am going to use TPU as the target hardware because if you are using TPUs, probably you are already trying to make the most of your resources, and you would need to use the Tensorflow data module anyway.
Data augmentation with Tensorflow
First, we will begin by taking a look at how data augmentation is done at the official data augmentation tutorial by Tensorflow.
import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

# Data augmentation function
def augment(image, label):
    # Take a random crop at the target size
    image = tf.image.random_crop(image, size=[IMG_SIZE, IMG_SIZE, 3])
    # Apply a random brightness adjustment
    image = tf.image.random_brightness(image, max_delta=0.5)
    # Keep pixel values in the valid [0, 1] range
    image = tf.clip_by_value(image, 0, 1)
    return image, label

# Tensorflow data pipeline
train_ds = (
    train_ds
    .shuffle(1000)
    .map(augment, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)
As we can see in the augment function, it applies a sequence of transformations to each image: first it takes a random crop, then applies random brightness, and finally clips the values to keep them between 0 and 1.
Following Tensorflow best practices, a data augmentation function is usually applied to the data pipeline by a map operation.
The problem with the approach above is how the transformations are being applied to the images: you are basically just stacking them sequentially. Generally, you will need to have some control over what is being applied and how. Let me describe a few scenarios to make my point.
Scenario 1:
Your data may benefit from advanced data augmentations techniques like Cutout, Mixup, or CutMix, if you are familiar with how they work you know that for each sample you are probably going to apply only one of them.
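To make this concrete, here is a minimal sketch of the core of Mixup, written with NumPy for clarity (the helper name and toy images are illustrative assumptions, not from any library or the original tutorial). In a real Tensorflow pipeline you would implement the same blend with tf ops inside a map call, and typically draw a single random choice per sample so that only one of Mixup, CutMix, or Cutout is applied.

```python
import numpy as np

def mixup(image_a, image_b, label_a, label_b, alpha=0.2, rng=None):
    # lam ~ Beta(alpha, alpha), as in the Mixup paper
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    # blend both the pixels and the (one-hot) labels
    mixed_image = lam * image_a + (1.0 - lam) * image_b
    mixed_label = lam * label_a + (1.0 - lam) * label_b
    return mixed_image, mixed_label

# toy example: an all-black and an all-white 2x2 grayscale image
img_a = np.zeros((2, 2))
img_b = np.ones((2, 2))
mixed, label = mixup(img_a, img_b,
                     np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                     rng=np.random.default_rng(0))
# every mixed pixel lies between the two sources,
# and the soft label still sums to 1
assert np.all((mixed >= 0.0) & (mixed <= 1.0))
assert abs(label.sum() - 1.0) < 1e-9
```

Because the labels are blended too, the model is trained on soft targets, which is why only one such heavy augmentation is usually applied per sample.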
Scenario 2:
You might want to use many “pixel-level” augmentations, by pixel-level I mean transformations like brightness, gamma adjust, contrast, or saturation, usually lighter variations of those transformations can be safely used at many different datasets, but using all of them at once might change too much your images and end up disturbing the model training.
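As a sketch of this scenario, the pattern below applies each pixel-level op independently with its own small probability, so on average only a light subset touches any given image. The brightness and contrast helpers here are toy NumPy stand-ins (my own simplified versions, not Tensorflow's) for ops like tf.image.random_brightness and tf.image.random_contrast.

```python
import numpy as np

def adjust_brightness(image, delta):
    # shift all pixels, then clip back to the valid range
    return np.clip(image + delta, 0.0, 1.0)

def adjust_contrast(image, factor):
    # scale pixel distances from the mean
    mean = image.mean()
    return np.clip((image - mean) * factor + mean, 0.0, 1.0)

def pixel_level_augment(image, rng, p=0.3):
    # each op fires independently with probability p,
    # so on average only ~p of the ops touch a given image
    if rng.uniform() < p:
        image = adjust_brightness(image, rng.uniform(-0.2, 0.2))
    if rng.uniform() < p:
        image = adjust_contrast(image, rng.uniform(0.8, 1.2))
    return image

rng = np.random.default_rng(42)
out = pixel_level_augment(np.full((4, 4), 0.5), rng)
assert out.shape == (4, 4)
assert np.all((out >= 0.0) & (out <= 1.0))
```

Keeping each probability low is what prevents every image from being hit by all transformations at once.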
So what could be done?
If you are familiar with data augmentation for computer vision tasks you might have heard of libraries like Imgaug or Albumentations, if not, here are two examples from the Albumentations library of how it can do data augmentation:
from albumentations import (
    Compose, OneOf, RandomRotate90, Flip, Transpose, IAAAdditiveGaussianNoise,
    GaussNoise, MotionBlur, MedianBlur, Blur, OpticalDistortion, GridDistortion,
    IAAPiecewiseAffine, CLAHE, IAASharpen, IAAEmboss, RandomBrightnessContrast,
    HueSaturationValue)

def augment(p=0.5):
    return Compose([
        RandomRotate90(),
        Flip(),
        Transpose(),
        OneOf([
            IAAAdditiveGaussianNoise(),
            GaussNoise(),
        ], p=0.2),
        OneOf([
            MotionBlur(p=0.2),
            MedianBlur(blur_limit=3, p=0.1),
            Blur(blur_limit=3, p=0.1),
        ], p=0.2),
        OneOf([
            OpticalDistortion(p=0.3),
            GridDistortion(p=0.1),
            IAAPiecewiseAffine(p=0.3),
        ], p=0.2),
        OneOf([
            CLAHE(clip_limit=2),
            IAASharpen(),
            IAAEmboss(),
            RandomBrightnessContrast(),
        ], p=0.3),
        HueSaturationValue(p=0.3),
    ], p=p)

transform = augment(p=0.9)
augmented_image = transform(image=image)['image']
We can clearly see that Albumentations provides a much more efficient way of applying different transformations to images. You can apply them sequentially, like the Tensorflow tutorial, but you can also use operations like “OneOf” and choose only one among a group of transformations to be applied, and the most important detail is that here you can control the probability that each transformation has of being applied.
It is worth noting that the transformations these libraries implement are heavily optimized to run as fast as possible; Albumentations even publishes a benchmark.
The best of both worlds would be to combine a library like Albumentations, which is very efficient and already implements many different transformations, with our Tensorflow data pipeline. Unfortunately, that is not possible, so what can we do?
Complex data augmentations with Tensorflow
Actually, with some creativity, we can build data augmentation functions that come pretty close to the ones provided by Albumentations, using only Tensorflow code, so they can run on TPUs integrated with Tensorflow pipelines. Here is a simple example:
def augment(image):
    p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)

    # Flips
    if p_spatial >= .2:
        image = tf.image.random_flip_left_right(image)
        image = tf.image.random_flip_up_down(image)

    # Rotates
    if p_rotate > .75:
        image = tf.image.rot90(image, k=3) # rotate 270º
    elif p_rotate > .5:
        image = tf.image.rot90(image, k=2) # rotate 180º
    elif p_rotate > .25:
        image = tf.image.rot90(image, k=1) # rotate 90º

    return image
Great! This function has all the things we liked about Albumentations and is pure Tensorflow. Let’s check:
— [x] Apply transformation sequentially.
— [x] “OneOf” type of transformation (grouping).
— [x] Control the probability of applying a transformation.
Let’s break down what is going on in this function.
First, we define two variables, p_spatial and p_rotate, and assign them probabilities sampled from a uniform random distribution; this means that all numbers in the interval [0, 1] have the same chance of being sampled.
Then we have two different types of transformations we want to apply, flips and rotates; they have different semantics, so they belong to different groups.
For the flip transformations, if p_spatial is greater than or equal to .2 we apply two random flips; in other words, there is an 80% chance of applying those two random flips.
For the rotate transformations we use more control. This is similar to the “OneOf” from Albumentations because we apply at most one of those transformations: each has a 25% chance of being applied, and there is also a 25% chance of applying none at all. We need this kind of control here because there is no point in rotating the image 90° three times, then two more times, and so on.
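Those percentages are easy to verify with a quick simulation of the same thresholding logic in plain Python, with `random.uniform` standing in for `tf.random.uniform` (no Tensorflow involved):

```python
import random
from collections import Counter

def rotate_branch(p_rotate):
    """Mirror of the if/elif chain above: returns which rotation runs."""
    if p_rotate > .75:
        return 'rot270'
    elif p_rotate > .5:
        return 'rot180'
    elif p_rotate > .25:
        return 'rot90'
    return 'none'

random.seed(42)
counts = Counter(rotate_branch(random.uniform(0, 1)) for _ in range(100_000))
for branch, n in sorted(counts.items()):
    print(branch, n / 100_000)  # each branch lands near 0.25
```

Each of the four outcomes — the three rotations and “do nothing” — shows up about a quarter of the time, matching the reading above.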
Using this idea, you can build data augmentation functions a lot more complex than this one. Here is an example that I used for the SIIM-ISIC Melanoma Classification Kaggle competition:
def data_augment(image):
    p_rotation = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_cutout = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_shear = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)

    if p_shear > .2:
        if p_shear > .6:
            image = transform_shear(image, config['HEIGHT'], shear=20.)
        else:
            image = transform_shear(image, config['HEIGHT'], shear=-20.)

    if p_rotation > .2:
        if p_rotation > .6:
            image = transform_rotation(image, config['HEIGHT'], rotation=45.)
        else:
            image = transform_rotation(image, config['HEIGHT'], rotation=-45.)

    if p_crop > .2:
        image = data_augment_crop(image)

    if p_rotate > .2:
        image = data_augment_rotate(image)

    image = data_augment_spatial(image)
    image = tf.image.random_saturation(image, 0.7, 1.3)
    image = tf.image.random_contrast(image, 0.8, 1.2)
    image = tf.image.random_brightness(image, 0.1)

    if p_cutout > .5:
        image = data_augment_cutout(image)

    return image
def data_augment_spatial(image):
    p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)

    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)

    if p_spatial > .75:
        image = tf.image.transpose(image)

    return image
def data_augment_rotate(image):
    p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)

    if p_rotate > .66:
        image = tf.image.rot90(image, k=3) # rotate 270º
    elif p_rotate > .33:
        image = tf.image.rot90(image, k=2) # rotate 180º
    else:
        image = tf.image.rot90(image, k=1) # rotate 90º

    return image
def data_augment_crop(image):
    p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
    crop_size = tf.random.uniform([], int(config['HEIGHT']*.7), config['HEIGHT'], dtype=tf.int32)

    if p_crop > .5:
        image = tf.image.random_crop(image, size=[crop_size, crop_size, config['CHANNELS']])
    else:
        if p_crop > .4:
            image = tf.image.central_crop(image, central_fraction=.7)
        elif p_crop > .2:
            image = tf.image.central_crop(image, central_fraction=.8)
        else:
            image = tf.image.central_crop(image, central_fraction=.9)

    image = tf.image.resize(image, size=[config['HEIGHT'], config['WIDTH']])
    return image
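One subtlety in data_augment above is how the nested thresholds turn into probabilities. Take the shear branch: p_shear > .2 gives an 80% chance of shearing at all, and within that, p_shear > .6 splits the 80% evenly into +20° (40%) and −20° (40%), leaving a 20% chance of no shear. Here is a quick plain-Python sanity check that just mirrors that control flow (`random.uniform` standing in for `tf.random.uniform`; no Tensorflow involved):

```python
import random
from collections import Counter

def shear_branch(p_shear):
    """Mirror of the nested shear thresholds in data_augment above."""
    if p_shear > .2:
        if p_shear > .6:
            return '+20'   # shear=20.
        return '-20'       # shear=-20.
    return 'none'          # no shear applied

random.seed(7)
counts = Counter(shear_branch(random.uniform(0, 1)) for _ in range(100_000))
print({k: round(n / 100_000, 3) for k, n in counts.items()})  # near 0.40 / 0.40 / 0.20
```

The same reading applies to the rotation branch, which also splits into 40% / 40% / 20%.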
I will also leave two links to complete code examples using a similar approach.
— Complete code for the example above
— Introductory notebook for advanced augmentation with Tensorflow
If you want to check out how to build a complete Tensorflow pipeline to train models on TPUs, here is a cool article that I have written: “Efficiently Using TPU for Image Classification”.
To learn even more, take a look at the references:
— Tensorflow TFRecords tutorial
— Tensorflow data module documentation
— Tensorflow data module tutorial
— Better performance with the tf.data API
— Tensorflow data augmentation tutorial
— Efficiently Using TPU for Image Classification
— TPU-speed data pipelines | https://medium.com/towards-artificial-intelligence/building-complex-image-augmentation-pipelines-with-tensorflow-bed1914278d2 | ['Dimitre Oliveira'] | 2020-11-03 20:38:58.927000+00:00 | ['Machine Learning', 'Computer Vision', 'Image Augmentation', 'TensorFlow', 'Deep Learning'] |
Masturbating during a work meeting is sexual harassment. | What is it with men whipping their dicks out uninvited? We learned Monday that the New Yorker suspended Jeffrey Toobin for masturbating during a Zoom work meeting. A rift has since opened among media types, between those who rightly condemn Toobin’s actions, and those who defend him by claiming “everyone” masturbates and there are “graver sins” than what he did.
But masturbating during a work meeting is not accidental, or something “everyone” does. It is sexual harassment. Consider this: Why didn’t he go to the bathroom? Toobin was deliberate in his actions, and has only apologized for thinking his camera was off.
These apologists are wrong. By jerking off during the meeting, Toobin created a hostile work environment and should face the consequences. | https://gen.medium.com/what-is-it-with-men-whipping-their-dicks-out-uninvited-225cdd00c6a7 | ['Andrea González-Ramírez'] | 2020-10-20 18:05:49.702000+00:00 | ['Journalism', 'Politics', 'Sexual Harrassment', 'Workplace'] |
12 Ways to Fix Your User Interview Questions | In my practice, there were a lot of cases when people said they liked something but were reluctant to pay for it or utilize it later on. So, is there any method to make sure that a product or separate feature will be of high demand when implemented?
The interviewer asks a hypothetical question and opens the door to subjectivity
I cannot recall anything more relevant than referring to people’s experiences and examples of similar behavior from the past. For example, if your users don’t have a habit of saving articles for later in all their news apps, what is the chance they’ll start doing it in your app? As Jakob Nielsen said, “Users spend most of their time on other sites.”
Let’s recap:
Avoid starting your questions with “would you,” “how probable is that,” and “what if there was.”
Try to find out if a person has had an equivalent experience.
Pitfall 2. Closed questions
Closed questions appear from a natural human wish to be approved and gain support. However, in the interviews, they aren’t that useful. A yes-or-no question doesn’t provoke reserved people to talk and doesn’t help much to reveal their motives and way of thinking. | https://uxdesign.cc/interview-questions-3e3a49d7596b | ['Slava Shestopalov'] | 2020-10-20 09:48:02.824000+00:00 | ['Design', 'User Experience', 'UX', 'UX Research', 'Ts'] |
It’s Okay to Eat Taco Bell All This Month and Never Put the Tree Up | It’s Okay to Eat Taco Bell All This Month and Never Put the Tree Up
This year has sucked. It’s fine and fitting if your Christmas sucks, too.
Image by Eric Mclean on Unsplash
It’s December, and my former colleague Courtney has adopted Elf on the Shelf as a pandemic holiday project. Based on the 2005 book of the same name, Elf on the Shelf is a newish “tradition” in which parents position a plastic elf in new locations around the home every day in the countdown to Christmas. Each morning, children are tasked with locating the Elf, whom they’re told is watching over them and reporting on their activities to Santa. In some families, the Elf leaves behind presents for the kids, or hand-written notes, or candy. Of course, all of this is frequently documented on social media.
Every day this month, Courtney has posted a charming photo of her plastic Elf caught up in some new antic. Here he is folding laundry! There he goes, leading a parade of toys in a loop around the living room! Each tableau she’s created is detailed, each photograph of the Elf meticulously framed. The Elf even leaves behind Christmas ornaments and poems.
Witnessing this shit is giving me anxiety. How does Courtney find the brain space and energy to do this all month long? Courtney is an upbeat, generous person, so she probably doesn’t bat an eyelash at it. Still, just imagining doing that much extra work makes my stomach tie up in stress knots.
Courtney and I both grew up in the 1990s, when Elf on the Shelf did not exist. Neither did gender reveal parties. Or photo shoots depicting brides-to-be “popping the question” to their maids of honor. Or any number of now-obligatory traditions created by lifestyle influencers over the past few years. If you want to show the world that you’re a devoted parent, friend, or spouse, there’s a lot you have to do.
This holiday season, there is even more pressure than usual to do “special,” documentable things to prove your love. Parents are trapped inside with antsy, under-stimulated kids. Old traditions have been abandoned and new digital ones must be erected in their place. I have friends who have planned virtual holiday parties filled with activities — household scavenger hunts, Instagram costume contests, Twitch karaoke, and Zoom Hanukkah candle lightings.
All I’ve done to prepare for Christmas is hang a single plastic ornament off the door of my chinchilla’s cage. The ornament came free with a six-pack of beer.
In theory, I had all the time in the world to Christmas shop and decorate and plan Zoom extravaganzas. Sometimes I worry that I’ve squandered this time of year, failing to put on a brave face and make it somehow magical. Would I feel more comfortable being trapped at home if I found the motivation to decorate? Are all these virtual activities memory-making and sanity-saving, or are they just more things to feel shitty about not doing?
I’m sure the answer varies from person to person. “People need projects,” my partner likes to say, and I know it can be rewarding to plan hangouts, make gifts, and keep people’s spirits alight. Personally though, I’m out of energy. I can’t pretend this holiday season will be anything but a weak imitation of the ones that came before. I’m going to let this Christmas suck.
If you’re like me, you need some reassurance right now that it’s okay if your holiday is terrible. Put down the Chex Mix recipe and go disappoint someone. If you have children, forget about making magic this year. If your work has a Slack channel for planning the holiday party, mute it. It’s okay if you eat Taco Bell every night this month and never put the tree up.
There is a freedom in abandoning obligations, in relinquishing oneself to mediocrity. You’re living in a goddamned pandemic; just by staying home, you are actively saving lives. Doing this heroic work for months on end while your government repeatedly fails you is downright traumatic. Plus, if you’re like most people, you’ve lost work or lost loved ones during this horrific time on top of all the thankless, joyless sacrifices you’ve had to make. Who cares if the presents you ordered arrive three weeks late? Who cares if you don’t buy any presents at all?
I’m so sick of polluting my mind with meaningless standards I’ll never live up to. I can’t pretend that this holiday season will be a beautiful, special time. So I’m barely acting like it’s a holiday at all. I’m distracting myself by doing a lot of reading and playing a lot of Genshin Impact. I’m not cooking anything. I’m not mailing any cards. I’ll show up to Zoom parties when I feel like it, and nope out of them whenever the mood strikes by hitting the “leave” button. This year has fucking sucked. It’s fine and fitting if you let your Christmas suck, too. | https://forge.medium.com/its-okay-if-christmas-sucks-ff7544b4607e | ['Devon Price'] | 2020-12-15 18:00:42.843000+00:00 | ['Mental Health', 'Christmas', 'Holidays', 'Self', 'Pandemic'] |
Must Read Poetry Books by Black/African-American Women | I often get asked for poetry book recommendations, and since we are in the middle of Women’s History Month, and we just finished Black History Month, I thought it would be a good idea to share a list of recommended poetry books by black women. I chose a wide range of books. If you would like to suggest a book, please leave a message at the end of this article.
We Are Shining by Gwendolyn Brooks
Gwendolyn Brooks is the quintessential African-American poet. Brooks has an impressive body of work. Out of those works, I chose to recommend her children’s book We Are Shining. I love this children’s book. It is a story of humanity and emphasizes appreciating one another for our differences. We Are Shining speaks to children of all nationalities and connects them by their similarities.
Wade in the Water by Tracy K. Smith
Wade in the Water is a powerful collection of poems by Tracy K. Smith. In this work, Smith delves into the emotions of being a woman. While, I would not call this a religious collection of poems, it has many religious overtones. Smith makes it work without overwhelming the reader with the concept of God. After all, I think that you cannot separate the identity of women without examining womanhood and its religious conceptions and ideals. Smith has many beautiful poems in this work, my favorite being the title poem “Wade in the Water.”
Maya Angelou by Maya Angelou
Of course, no collection of poetry favorites would be complete without the grande dame of poetry, Maya Angelou. Maya Angelou helped bring poetry into the mainstream. Her “Caged Bird” poem has to be one of the more popular poems in American history. I cannot think of a person who is unfamiliar with her book, I Know Why the Caged Bird Sings. Caged Bird is far more well known than the book I chose for this list, “Maya Angelou.” This is a book that is a celebration of all the stages of life from childhood to advanced ages. It is definitely a great book for any age. I like this book because it includes a mini-biography, Angelou’s more popular poems, and it includes little life lessons with each poem.
Citizen: An American Lyric by Claudia Rankine
Citizen: An American Lyric reads as a deeply personal piece that examines the African-American experience. This book includes three lyric poems and a play. This was a multiple prize-winning book, and it was a finalist for the National Book Award. I like this book because it reminds me of the the stings and barbs you receive while getting along as a black person in American society. It addresses how we need to cope and move on.
Poems on Various Subjects Religious and Moral by Phillis Wheatley.
Phillis Wheatley is the first published African-American female poet, and Poems on Various Subjects Religious and Moral is the first book she published. This book rubs against my political grain at times as Wheatley expresses gratitude for being rescued from Africa by her slave owners. However, she served as a catalyst for abolition because she proved to others that blacks have the same intellectual capabilities as whites.
Maya Angelou The Complete Poetry
I know this is the second Maya Angelou book, but I didn’t call her the grande dame of poetry for nothing. This collection is everything Maya Angelou. It has poems from all of her collections, including her more popular poems and essays. If you can only get one Maya Angelou book, this is the one. This will be the go to book of your poetry collection.
One word comes to mind when I think of Audre Lorde, “powerful.” Lorde described herself as a “black, lesbian warrior poet.” I would be remiss if I did not recommend reading her essays, as well. Lorde puts together spectacular similes and metaphors. Her words mix together to give you a sense of urgency and explosiveness. Her poems address feminism and femininity, racism, oppression, and homophobia. She does a wonderful job of showing you the world through her eyes.
Magical Negro by Morgan Parker
This is one of the sharpest modern collections of poetry out there. Parker’s work is brilliant and humorous. I must admit that I did not get some of her poems the first go around, and I had to read them again. Her poems will definitely stay with you. Her titles are winners as well. As a poet, I often find the titles the most challenging part. Hers are inspired. I love her poem, “A Brief History of the Present” purely for its ironic nature. Seriously, read this book now.
Nikki Giovanni: A Good Cry
Nikki Giovanni will always sit on my list of favorite poets, and her latest book A Good Cry cements that idea. My favorite poems in this collection are “Baby West” and “Bears in Spring”. “Baby West” is a beautifully written poem about the struggles of her childhood. “Bears in Spring” has absolutely nothing to do with bears and is strongly political without being in your face political. Giovanni’s poetry speaks to the experiences of her life, and she does an adept job in conveying emotions. Her lines are clean and crisp, and her poems take you back to a place and time that many would sooner forget. She is a poet historian in the truest sense of the idea.
Teaching My Mother How to Give Birth by Warsan Shire is a short collection but it carries quite a blow. It is unlike any collection I have ever read. Shire’s poetry is strong and direct. Her writing covers birth, development into womanhood, immigration, and family. Her language is direct and energetic. Her verse pushes you on a see-saw of emotions. You can begin one poem thinking you’re headed in one direction, but then Shire turns around and takes you to a place you did not expect. Every poem in this book carries you into Shire’s world, which is one of abandonment, pain, and grief. This is a powerfully political work combined with a reflective familial bent. This work draws you in. It takes you into a world of blood, bone, fire and beauty. If you don’t understand what I mean by this, you will just have to read the book to find out. This is a book that above all deserves to be on this list. There is not a single weak poem in this collection.
Lucille Clifton speaks to the darkest parts of being human. Her book, Blessing the Boats uncovers the fragility of the human condition. This book covers the issue of death, dying, and loss, along with the emotions that go into these issues. Actually, you would be well served if you came across and read any of Lucille Clifton’s books. The woman is pure genius.
There are many African-American women poets worth noting that I did not cover here. Please feel free to leave me a note below to discuss. In making this list, I attempted to provide a broad range of styles, including children books, so that there could be something to appeal to a diverse group of people.
Katerina Canyon is a writer currently living in Seattle, Washington. She served as Poet Laureate of Sunland-Tujunga, California from 2000 to 2003. She has a MALD from The Fletcher School of Law and Diplomacy and a BA in English, Creative Writing, and International Studies from Saint Louis University. She has been published in the New York Times and Huffington Post. Her most recent poetry was published in From Whispers to Roars. Her latest collection Changing the Lines is currently available on Amazon and Elliott Bay Book Company. You can learn more about Katerina at PoeticKat.com. | https://medium.com/poetickat/must-read-poetry-books-by-black-african-american-women-a3cab6b62bc | ['Katerina Canyon'] | 2019-03-19 15:53:28.061000+00:00 | ['Women', 'Black Women', 'Writing', 'Poetry', 'African American'] |
UK military will use artificially intelligent lasers to expand Britain’s economic empire | Published by INSURGE INTELLIGENCE, a crowdfunded investigative journalism project for the global commons. Support us to keep digging where others fear to tread
This week, the British press has been rather over-excited about the Tory government’s plans to fund a new fleet of destroyers.
So over-excited that pundits have overlooked the First Sea Lord, Admiral Sir Philip Jones’, extraordinary assertion that Britain’s plan to expand its Navy is:
“… in order to support the UK’s growing global economic ambitions.”
Oh, and he also mentioned that under the same plan, the Navy would be deploying an advanced laser weapon, along with autonomous, artificially intelligent weapons systems.
To his credit, Admiral Sir Jones does mention the obligatory objective of ‘world peace’ and such like, but he is also quite candid about the pre-eminence of Britain’s economic interests.
Here are the relevant passages of his message in bold:
“Meanwhile in the Gulf [we] are working to protect international shipping in a region which is essential to the UK’s economic security… Now, at long last, we have an opportunity to reverse this trend, rebuilding in particular resilience in our destroyer and frigate numbers, the backbone of a fighting Navy. This would also permit a more frequent presence in parts of the world in which we have been spread thin in recent years in order to support the UK’s growing global economic ambitions. So, rest assured, I intend to work with the Government in the coming months and years to deliver their ambition for a larger Navy. Only this will ensure the Royal Navy can continue to deter our enemies, protect our people and promote our prosperity in these uncertain times.”
So bigger guns to expand Britain’s global military presence are necessary to promote and secure British wealth. Little has changed, it seems, since the days of the British empire.
But that’s not all. Admiral Sir Jones also throws this tidbit into the mix:
“Last month the Royal Navy held the largest international gathering of autonomous systems ever staged, and we will shortly trial both an energy weapon and artificial intelligence at sea. These are the technologies that will maintain our superiority over more conventional navies.”
Autonomous systems? Artificial intelligence? Energy weapons?
WTF? Is this Star Wars?
No — it’s Britain’s effort to keep up with the escalating arms race to dominate the potential to weaponise information technologies.
At a briefing in October, the Royal Navy announced some details on its plans to use artificial intelligence at sea. The plans include creating a single artificially intelligent ‘Mind’ that autonomously controls multiple Navy warships, and proactively engages in kill decisions without the need for human operators to interfere in the decision-making process.
Yes that’s right. Terminator, Battleship-style:
“The Royal Navy (RN) aspires to use AI technology to develop an RN-owned Ship’s ‘Mind’ at the centre of its warships to enable rapid decision-making in complex, fast-moving warfighting scenarios.”
Information will be fed in from “all internal and external sources” including command and platform management systems; radars; sonars; “Electronic Warfare sensors” and the “internet.”
What exactly will be gleaned from the internet is not stated here, but INSURGE intelligence has previously thrown some light on the central role of social media in military threat detection and target selection processes.
The briefing goes on to emphasise that a Ship ‘Mind’ will execute kill decisions by itself:
“Under Project NELSON, such a Ship’s Mind might go as far as being empowered to release defensive or offensive weapons, or conduct manoeuvres if the threat precluded time for crew interaction (such as against new breed hypersonic missiles). The Mind would ideally also inform/control Navigation, Logistics, Personnel, Medical, Engineering and Cyber Defence operations amongst others. The RN wishes to harness AI, but equally be prepared if others choose to attempt to use AI technologies against it.”
That’s creepy enough.
But ideally, it seems, the Navy eventually wants Project NELSON’s ‘Mind’ to autonomously fire “directed energy weapons”: yes, that’s right, real-life laser beams.
Earlier last summer, the Ministry of Defence (MoD) announced that the missile defence contractor MBDA had been chosen to build a prototype laser weapon system.
“The ‘directed energy weapon’ will be able to fire high energy beams to damage and burn up targets at the cost of only pence per shot,” reported the UK Defence Journal.
In September, the MoD announced that MBDA would receive a £30 million contract to create a laser “capability demonstrator” by 2018/19.
All in all, the Tories want to spend £19 billion over the next decade on new Navy ships in coming years. Supposedly, they think, this will make Britain richer, by allowing us to piss about with big guns in other peoples’ territorial waters.
Add to that the just under £1 billion the MoD plans to invest in its ‘Defence Innovation Initiative’, an effort to keep up with newfangled tech for stuff like “anti-missile systems, miniaturised surveillance drones and protective materials.”
Because if we don’t invest in this shit, Britain is of course at risk of being invaded by North Korea’s emerging fleet of autonomous, nuclear-armed Terminator Battleships. Probably if that happens, Iran will join in, followed by Russia and China, all armed with the same gear.
It’s because Britain has so much (shale) oil.
And Trump might not want to save us if that happens, so… | https://medium.com/insurge-intelligence/uk-military-will-use-artificially-intelligent-lasers-to-expand-britains-economic-empire-82fbfc66a243 | ['Nafeez Ahmed'] | 2016-12-01 21:38:29.811000+00:00 | ['Artificial Intelligence', 'War', 'Military', 'Economy', 'Britain'] |
Apple and Google’s Coronavirus Feature Might Be an Offer You Can’t Refuse | Apple and Google’s Coronavirus Feature Might Be an Offer You Can’t Refuse
The tech giants promise it will be opt-in. Will your employer, school, or church agree?
Photo: Jared Siskin/Getty Images
This story is part of a series on the possible impacts of Apple and Google’s contact tracing technology. You can read the others here.
When society begins to reopen, contact tracing will be a new fact of life. People who test positive for Covid-19 will be asked to trace their recent interactions with others, who will in turn be asked to get tested or stay home.
While contact tracing has traditionally been done via human interviews, Apple and Google last week announced a system that will use people’s smartphones to determine whether they’ve recently come within close range of anyone who’s tested positive for Covid-19. To protect users’ privacy, the system won’t track their geographic location or even their identity. And to protect civil liberties, the companies are adamant that the system will be opt-in — that is, you won’t have to use it unless you want to. The companies on Monday told reporters that government health authorities will be the only ones allowed to build apps using the technology, and they won’t be allowed to make those apps mandatory.
That system, if it works, promises to strike a real compromise between privacy and public health benefits. While no tracking system is foolproof, the companies appear to have taken great care to minimize the risk of exposing people’s sensitive information or becoming party to a surveillance state.
Yet there’s a paradox at the heart of opt-in contact tracing, as Apple and Google have conceived it. If it’s truly voluntary, then it may be hard to convince large swaths of the population to enable it: In Singapore, an app called TraceTogether that uses similar technology has been adopted by about 17% of that country’s population. And the fewer people who enable it, the less useful it becomes in facilitating the reopening of society.
Apple and Google are hoping to beat that number in the United States and other countries by eventually building basic tracing functionality into the operating system, so that at least some contact tracing features can work even if people don’t download an app. (The details are still being worked out.) That might help, but it’s likely months away, and even then public health authorities may be fighting an uphill battle to get individuals to opt in to a service whose primary benefit is to others, not themselves.
There is, however, a path by which contact tracing apps might go mainstream even without governments making them mandatory. It’s one that few have yet discussed, and Apple and Google themselves declined to comment on it when asked by OneZero. It involves private entities — workplaces, schools, and even social gatherings — telling people they have to use the app if they want to participate.
“Companies will require it before you’re allowed to go back to work,” predicts Ashkan Soltani, an independent security researcher and former chief technologist at the U.S. Federal Trade Commission, in a phone interview. “Your grocery store could require that you show it before you’re allowed to enter the store.”
Lewis Maltby, president of the National Workrights Institute and an expert in employment law, agrees. “Anything that can help keep Covid-19 out of the workplace is something employers will want to do,” he told OneZero. Unless a court rules that they can’t, he says, “Offices are going to have everybody lined up outside, checking your smartphone before you can come in.”
That would turn a system that’s designed to be opt-in into one that leaves people little choice: “Not mandatory, but compulsory,” as Soltani puts it.
This solution would be misguided, he believes: A Bluetooth-based system like the one Apple and Google are building has inherent technical limitations that will both produce false positives and miss some real exposures, along with potential for exploitation or abuse. (Imagine a student self-reporting symptoms in order to get classes canceled, or someone who tested positive turning their phone off before entering a crowded venue.) And while the companies have said they will dismantle the system once the coronavirus threat subsides, Soltani argues that to rely on smartphone apps to decide who can reenter society would set a dangerous precedent.
The efficacy of contact tracing is further limited, in the United States and many other countries, by the speed and availability of testing. If you can’t report Covid-19 symptoms until you’ve tested positive, the app will be days late in alerting the people you come into contact with. But if you can self-report symptoms, then some people who don’t actually have Covid-19 will be sending out false alerts that affect other people’s lives.
The U.K.’s National Health Service is working on an implementation that would give “yellow” alerts for self-reports and “red” alerts for those confirmed positive. But it’s not clear how the repercussions might differ.
For all its drawbacks, it’s easy to imagine how various institutions might decide that the benefits of requiring people to use such an app might outweigh the downsides. One obvious example would be nursing homes, where coronavirus outbreaks have proven especially deadly.
“For high-risk groups, I can really see the value,” says Mike Snyder, director of the Stanford Center for Genomics and Personalized Medicine. “If I were running a nursing home, I really don’t want people coming in who might have been in contact with someone who had Covid-19.” Requiring visitors to show that they have a contact tracing app installed, and that it shows no such contacts, could be part of preventing that — even if it’s far from a perfect method.
If it makes sense for nursing homes, might it also make sense for other hot spots of contagion, such as airplanes or cruise ships? What about businesses whose workers risk exposing the public, such as restaurants? College dorms? Gig workers? And might some employers, such as Amazon warehouses or Smithfield meat factories, be willing to trade their workers’ autonomy to decrease the likelihood of an outbreak that could compromise what they view as essential operations?
Maltby, the former director of employment rights for the ACLU, believes they will. “Put yourself in that employer’s position,” he says. “You’ve got employees, some of whom almost certainly have been exposed to Covid-19. Some of them don’t know they’ve been exposed to Covid-19. And they’re going to come to work and make everybody sick.”
The fact that the app is imperfect, Maltby says, may not deter them, as long as it’s better than nothing. “I’ve seen employers be more invasive than that with a whole lot less at stake. They’ll make you pee in a bottle to find out if you smoked pot on Saturday night instead of drinking beer.”
U.S. courts have historically granted employers broad leeway to intrude on workers' privacy in the workplace, Maltby adds — with a few exceptions, such as eavesdropping on oral conversations. The tricky legal question in this case is whether employers will be able to force their workers to install and use the app on their personal devices when they're off-duty, which would probably be necessary for the contact tracing system to be of value. "The ACLU will sue them over that," he predicts. "But I'm not sure they're gonna win."
Tech companies, which were among the first to send workers home as the coronavirus began to spread stateside, might also be in the vanguard of requiring their employees to install contact tracing apps as part of their plans to safely reopen. Snyder, who has co-founded several biotech companies, says Silicon Valley tends to be more open to sharing data in service of innovative new technologies than other parts of the country.
It’s also possible that some workplaces will go further, especially in the early stages of reopening. Already, some experts are urging employers to consider implementing their own testing of workers. A doctor suggested in a Stat op-ed that some employers might also require vaccination, once a vaccine is available. Those measures could be much more effective than requiring workers to use an app but also more costly and invasive, and they’re not yet viable on a mass scale. Maltby, of the National Workrights Institute, says he could see a legitimate niche for smartphone-based contact tracing in the meantime as a way of identifying the asymptomatic people who most need to get tested.
The pressure might come not only from the top down, notes Karen Maschke, a research scholar at the Hastings Center, a bioethics think tank. Some employees might tell their employers they don’t feel safe going back to work without assurances that their colleagues aren’t carrying Covid-19.
For something like contact tracing, Maschke says, “The opt-in, in and of itself, is a problem.” The resulting data will necessarily be incomplete and may also be skewed toward demographic groups that are either more likely to sign up voluntarily or more vulnerable to being told they have to. And if the data is flawed, she adds, then it’s hard to justify using it to make decisions that affect people’s basic liberties.
In a blog post, the nonprofit Electronic Frontier Foundation echoes that concern. It suggests that an opt-in contact tracing app could be useful as a complement to other systems but warns that it would be a mistake to place too much importance on it. Specifically, the EFF’s experts argue, “Private parties must not require the app’s use in order to access physical spaces or obtain other benefits.”
While Apple and Google were clear that they won't let governments mandate the use of their contact tracing technology, neither one responded to emails from OneZero asking whether employers and other entities could require it. Realistically, Soltani says, it's hard to see how Apple or Google could stop them even if they wanted to.
8 AutoML Libraries to Automate Machine Learning Pipeline | 8 AutoML Libraries to Automate Machine Learning Pipeline
Overview of various AutoML frameworks
(Image by Author)
Automated Machine Learning (AutoML) helps in automating some critical components of the machine learning pipeline. This machine learning pipeline consists of data understanding, data engineering, feature engineering, model training, hyperparameter tuning, model monitoring, etc.
(Image by Author), Data Collection (Blue), Data Preparation (Green), Modeling (Orange), Deployment (Pink)
In an end-to-end machine learning project, the complexity of each aspect of the pipeline depends on the project. Generally, most of the time is spent on data preparation and modeling. To automate the machine learning pipeline, AutoML frameworks come into the picture.
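Hyperparameter tuning is one of the pipeline steps these frameworks automate. As a minimal, framework-free sketch of the idea (the toy dataset and one-parameter threshold model below are invented for illustration and are not taken from any of the libraries discussed), here is an automated random search in plain Python:

```python
import random

# Toy 1-D dataset: label is 1 when the point is "large" (here, x >= 5).
X = [1, 2, 3, 4, 6, 7, 8, 9]
y = [0, 0, 0, 0, 1, 1, 1, 1]

def accuracy(threshold):
    """Score a one-parameter model: predict 1 when x >= threshold."""
    preds = [1 if x >= threshold else 0 for x in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def random_search(n_trials=200, seed=0):
    """Automated tuning: sample candidate thresholds, keep the best one."""
    rng = random.Random(seed)
    best_t, best_score = None, -1.0
    for _ in range(n_trials):
        t = rng.uniform(0, 10)
        score = accuracy(t)
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

best_t, best_score = random_search()
print(f"best threshold: {best_t:.2f}, accuracy: {best_score:.2f}")
```

Real AutoML frameworks apply this same search-and-score loop across whole model families, feature pipelines, and dozens of hyperparameters at once, usually with smarter strategies than pure random sampling, such as Bayesian optimization.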
In this article, you can read about 8 open-source AutoML libraries and frameworks:
Your Second Date Is More Important Than You Think- It’s The One That Really Counts | Your Second Date Is More Important Than You Think- It’s The One That Really Counts
First-date you is the performer, second-date you is the gatekeeper.
Photo by Cody Black on Unsplash
So you’ve met this person, and you like them. You made it through that dreaded first date and things went pretty well.
Now you’re ready for another.
First-date you is the performer: you're on, baby! You're putting out your best material. The performer sees the audience, not the faces; sees the big picture, but misses the details. Second-date you is more detail-oriented, ready to dig a little deeper. Ready to focus outward. A little less flash, more substance. More vigilant as well.
The first date is tricky. It can be tense. It can go so badly that you never want to see the person again, that’s the simplest outcome. But what about when there are little hints of problems but nothing overt? What about when it goes okay or even better? If it goes amazingly, you might be left wondering if you imagined it.
That’s what the second dates are for.
I created a set of guidelines for myself when I was on the market. These guidelines helped me navigate the world of online dating filled with first dates. It was these guidelines that helped me make the most of the experiences that eventually led me to my husband. I’ve focused a lot on first dates and the overall attitude of dating in general, but I also think it is worth taking a look at the second date a little more thoughtfully.
In many ways, the second date is even more important than the first.
Photo by Irena Carpaccio on Unsplash
That first date sweeps you up in emotion. You might be a bit awkward or say something silly because you’re nervous.
You or your date might put up a slight facade trying to impress. You might do a little stress drinking. There are lots of random things that can happen on a first date when everyone’s stressed out, so it’s the second date that tells you what you want to know.
When I was dating, I tried to focus on what I could control. Instead of judging what the other person was doing, I concentrated on how it made me feel. Something I’ve noticed is that you can usually get most of the information you need about a relationship by reading your own feelings.
Ultimately, when you’re with someone, your happiness boils down to how you feel when you are with them, not the things they do. There is a subtle but powerful difference there. The first way expects them to fit a mold to make you happy the second way lets them be themselves and allows you to decide whether or not they’re right.
On that second date, you’re not as focused on putting yourself out there so, you can be a little more objective and turn inwards for your answers.
Here are some things to be aware of on that second date that will let you know if the person is a good match.
Are you cringing?
If the person makes you cringe, that’s probably not a good sign. When someone makes you cringe, you are embarrassed. It could be anything from bad jokes to body odor, but if that person has something that rubs you that wrong on the second date, it’s probably not going to get any better.
Cringing is a strong sign in a budding relationship. The direct antonym for cringe is to advance, so if you’re cringing on this second date, maybe your subconscious is telling you that you don’t really want it to proceed.
Photo by Priscilla Du Preez on Unsplash
Are you laughing freely?
Laughter is an excellent sign for a couple of reasons. The first being that if you are laughing spontaneously, you’ve been able to let your guard down a bit. You’re comfortable. It’s important to feel comfortable in a relationship. It also means that the other person makes you laugh.
Having shared humor is one of those things that can get a long term relationship through hard times and keep things interesting. Laughter is an indication of bonding. It’s science! If you can laugh with someone you are in the process of bonding, that’s a good sign. If you are laughing freely and a lot, you are on the right track as long as it’s not the cringy kind.
Does the conversation flow easily?
This is along the same lines as laughter. When the conversation sweeps you away, you’ve entered that magical flow state with the other person. Time seems to fly by, you look at your watch, and somehow it’s two hours later.
My husband and I had a seven-hour first date. It flew by in a heartbeat.
We read a lot about flow concerning work and creativity, and it is a highly creative state of mind, but you can get into a flow when you are on a date. That is the creative side of the experience, you and your potential mate are creating something: a possible relationship. Getting into a flow on a date is a great sign.
Photo by Brooke Cagle on Unsplash
Are you physically comfortable?
Why not read your body language? Are you leaning in toward or away from the other person? You may be trying to make yourself like this person, but if you're really not feeling it and you don't even know why, you might be trying to force it.
If you are doing that, your body is probably telling you something important. So don’t deny what your body is trying to say to you. If you’re not sure, it can make the final decision for you.
Are you feeling awkward?
If you feel awkward around your date, it could mean that there's a power imbalance, or it could mean you are just so into them that you are nervous. On the second date, though, if you are not able to loosen up around them at all, it could be a bad sign.
Do you feel like you’re out of your league? It’s okay to think the other person is a great score, but if you really feel like you’re not good enough for them, that is going to play on your self-esteem eventually. Being awkward can be endearing, but if you always feel this way around them, it’s going to wear thin after a while.
Photo by Milly Vueti on Unsplash
Are quiet alarm bells going off in your head?
This is an important one. If you feel very uncomfortable around this person, don’t blow that feeling off.
Even if you can’t put your finger on it, if they seem perfectly fine, if you are picking up something that is making you nervous, don’t ignore it. Your brain might be sensing something subtle or some subconscious danger you can’t place.
I once dated a guy that gave me those subtle creeps. I made myself ignore it because he was good looking, had a great job, and seemed nice. When I got to know him better, he had some weird habits and was into some pretty creepy stuff. By the time he revealed himself to me, we were already dating and breaking up with him was scary.
I’m not saying every person that makes you feel uneasy is that weird, but sometimes people rub you the wrong way for a reason, so don’t ignore this feeling.
Second dates don’t get the props they deserve. Sandwiched between the excitement of the first date and the settled in feeling of the third, their importance often goes unnoticed. But the second date is the gateway that your potential mate either gets through or not.
Use your feelings to determine if that person is worthy of going to the next level by really being present during that second date. You’ll be glad you did.
I hope you found this helpful, if you did, here are a couple more articles you may want to take a peek at:
Thanks so much for reading! | https://medium.com/illumination-curated/your-second-date-is-more-important-than-you-think-its-the-one-that-really-counts-396fdcfff4cc | ['Erin King'] | 2020-09-18 04:29:21.827000+00:00 | ['Relationships', 'Relationships Love Dating', 'Self Improvement', 'Self', 'Psychology'] |
“To Fill a Room with ‘Nobody’” — Sara Talpos puts poetry and mitochondria under the microscope. | By Sara Talpos
One summer, after ten years of writing poetry and teaching composition classes, I enrolled in a microbiology class at our local community college. I felt hesitant and out of place in the lab.
Still, I relished our earliest assignment: Use a light microscope to bring Pseudomonas aeruginosa into focus in three minutes or less. The bacteria, stained pink and magnified 2,000 times, looked like tiny pill capsules scattered across the slide.
If I could line them up lengthwise — 18,000 of them — they would stretch as far as the first line in Emily Dickinson’s poem “Estranged from Beauty”:
When Dickinson sensed beauty in the infinite, when she questioned the underpinnings of human identity, could she have been imagining bacteria? I like to think it's possible. Bacteria and other microorganisms were discovered in the late 1600s, roughly a century and a half before her birth. Dickinson's words resonate with me now when I look through the microscope at our boundary-hopping ancestors. These hands that turn the microscope's knob, these eyes that peer into the lens — the body I call "mine" — evolved from single-celled bacteria. Those bacteria may have resembled my P. aeruginosa, an organism that's ubiquitous in soil and water and even on human skin as part of a larger microbial community.
***
As a microbiology student, my tidy definitions fall away. Self, species, heredity. In high school, I learned that genes are passed from parent to offspring. Bacteria, however, can transfer genes between individuals of the same generation and even between individuals of different species. For example, some bacteria use a hairlike appendage called a “sex pilus” to reach out and attach to a nearby bacterium. Once contact has been made, the two individuals are drawn together and DNA is transferred from one bacterial cell to the other, presumably through pores in their cell membranes.
This movement of DNA across membranes reminds me of poetic enjambment: how a sentence or a thought is carried across a boundary — the poetic line — without punctuation to stop it. In stanza 2 of Dickinson’s “Before I got my eye put out,” for example, the sentences overflow from one line to the next:
Here the sky is so vast it cannot be contained by a single poetic line; the idea of sky must continue unbroken into the next line. Nor can the sky be possessed by the speaker. Rather, the sky threatens to “split” her comparatively small body and mind.
Fortunately, we humans are not completely permeable to the outside world. Unlike bacterial DNA, ours is mostly housed inside a nucleus that prevents easy swapping of genetic material. Still, as a species, we would not exist without bacteria. The human gut provides a home for many bacterial species, which benefit humans by digesting food and synthesizing vitamins. In this classic example of symbiosis, each species benefits from living with the others. But what if two symbiotic species were to merge even more intimately?
The biologist Lynn Margulis (1938–2011) helped advance the theory of endosymbiosis: the merger of two different species to create a new, more complex species. In 1967, she published a paradigm-shifting paper, arguing that cells possessing a nucleus — including most cells in the human body — evolved only after two distinct types of bacteria physically merged, became one, and started to reproduce. “On the Origin of Mitosing Cells” was rejected by fifteen scientific journals before finding a home at the Journal of Theoretical Biology.
Three years later, Margulis published her first book, Origin of Eukaryotic Cells, in which she further explained and provided evidence for endosymbiosis. Still, the theory remained controversial among scientists, in part because it modified long-held beliefs about how evolution works.
Consider Charles Darwin’s tree of life, different species branching away from a common ancestor. Endosymbiosis asks us to imagine some of the individual branches on the tree of life joining together, forming loops. I think about this while teaching my daughter to tie her shoes. The purple laces bounce against her pink sneakers as she runs ahead of me to school.
The energy that propels my daughter down our street is generated inside her cells by structures called mitochondria. A mitochondrion is spherical and small — about half the length of the P. aeruginosa I located on my microscope slide. Its primary function is to provide our bodies with usable energy. More than forty years after Margulis's paper on mitosis, it is now widely accepted that mitochondria are remnants of once free-living bacteria. Somewhere way back on the evolutionary tree, microorganisms of two different species merged to form the large energy-producing cells that now characterize humans and other animals. We are the products of endosymbiosis. It appears that Dickinson was correct — figuratively and genetically — when she questioned humans' "power to be finite."
Lynn Margulis was an admirer of Dickinson, naming two of her own books — Slanted Truths: Essays on Gaia, Symbiosis, and Evolution (1997) and Dazzle Gradually: Reflections on the Nature of Nature (2007) — after lines in the poet's famous poem "Tell all the Truth."
For Margulis, truth itself was “slanted,” requiring careful observation and a willingness to look beyond commonly held assumptions. Perhaps she was dazzled gradually by the unfolding discoveries made possible by new genetic tools and methods in the 1970s that lent support to her theory.
This affinity may have originated along a shared border: From 1988 until her death, Margulis lived in Amherst, Massachusetts, in a house next to the poet’s former home, now the Emily Dickinson Museum. In the year before her death, Margulis reportedly said, “Emily Dickinson talks to me all the time. She is my neighbor. She is ironic. She exposes pretentions. She is a botanist. She is my favorite poet.”
The Dickinson Museum includes a garden where the poet cultivated flowers. The museum website showcases foxgloves, their blossoms like purple bells, spilling from a shared stalk, proof that a single growing thing contains multitudes.
Dickinson often presented her flowers along with her poems as gifts. Her family and friends knew her as a gardener and a poet. She was not content to be “Nobody” in the eyes of outsiders. She chose not to enter the marketplace and publish her poems when doing so required altering her work.
“I’m Nobody” suggests camaraderie — connection — between the poet and her addressee, whom Dickinson invites to share the “Nobody” moniker.
Mitochondria and other microscopic entities are tiny nobodies, overlooked but plainly here, waiting to be heard. For much of the twentieth century, Western culture tried to banish microbes, overusing antibiotics, antimicrobials, and fungicides. This was an unfortunate result of the germ theory of disease, which took shape during Dickinson’s lifetime. While a small percentage of bacteria can harm humans, a healthy human body contains roughly as many microbial cells as human cells. I think of these beings as true Nobodies, in the Dickinson sense, existing without frippery or fanfare, but nonetheless important.
Emily Dickinson was a trained observer of the natural world. At age nine, she entered Amherst Academy, where she studied for seven years; among the subjects she studied was botany. She made her own herbarium. The University of Michigan, where I studied biology, has a facsimile edition published by Harvard University Press: a surprisingly weighty and oversized book, complete with slipcover. It presents digital photos of the original pressings, which amount to 424 plants. The facsimile's notes explain that there "does not appear to be an overall [scientific] order to the collection." Yet Dickinson labeled most of the plants by hand, writing their scientific names on slender white tabs affixed to each specimen. The plants appear purposefully arranged on the page. On page 60, the branching leaf of Lathyrus odoratus finds the empty space around the thin and leafless stem area of Anemone cylindrica. Like a slant rhyme, the curved stem of the former echoes the curved stem of Saturjea vulgaris.
***
Moving between magnifications, adjusting the light, I seek the desired density of pink on my microscope slide. Viewing P. aeruginosa from so many perspectives, I begin to feel an affinity for it. Looking through a microscope requires leaving my human-scaled world, the buckets of mordant and dyes, my lab partners, my notebook and pencil. It is similar to performing a close reading of a poem. In Dickinson's writing, the enjambment and unexpected line breaks require that I look and look again, reconsider where the poem is heading. Peering through a microscope, I'm never quite sure what my slide with its blot of pink or purple dye will reveal. Microscope work is remarkably like close-reading a poem.
In an intriguing article published in The Emily Dickinson Journal, Joan Kirkby explores the "correspondences and echoes" between Dickinson's poetry and the scientific debates of her day, specifically those accompanying the Darwinian revolution. The article notes that during some of Dickinson's most productive writing years, New England was the site of vigorous evolutionary and theological debates. Further, "Emily Dickinson herself was imbricated in a unique web of affiliation with Darwin and Darwinian ideas; the key New England figures in this debate were all known to Dickinson" through family, school, libraries, and the periodicals to which her family subscribed. Kirkby argues that Dickinson, through her poetry, "participated in the reconceptualization of the world that was taking place around her." Her poetry creates a space to explore the cultural implications of evolution: what it means to live in a world marked both by extinction, and by ever-changing biological diversity.
One of my favorite poems, "It's all I have to bring today — ," addresses what I believe the poet, the microbiology student, and the naturalist share in common. The opening "It" may refer literally to a nosegay of Dickinson's flowers, presented along with a poem to a family member or friend. Then again, the "It" could be something figurative that links us to the infinite. Notably, this thing is small: a P. aeruginosa or a few lines of a poem or my daughter's crayon drawings or the stone my son picks up on the walk home from school:
The dash at the end of each line points toward the white space of the page, an open field, spaciousness. This is unlike the enjambment in “Before I got my eye put out,” which emphasizes the limits of the poetic line; it cannot contain an idea or place such as the sky. Here, instead, the dashes seem to reach out into the vastness, connecting the objects with something beyond themselves. I welcome the transformation of “It” into the more immediate “This,” which expands to encompass the speaker’s heart.
By line 4 (“And all the meadows wide — “), one might sense that the poem’s enumeration could go on forever; however, the next lines offer instruction:
I hear sadness in the possibility of forgetting. This tempers the tone in line 6, where a pun links “some one” with “sum.” To suggest that individuals are the sum of many? Or that the sum is all there is, though we forget to count?
Having widened her perspective to include fields and meadows, the speaker concludes by reminding us of what she sees when she narrows her focus: bees and clover. But this is not a narrow vision, for these things are changed in our minds, inextricably connected with the entire universe.
And this “This” is why Dickinson and the microscope hold equal positions in my mind and heart: Mitochondria, the tiny products of endosymbiosis, made it possible for Emily Dickinson to write over 1,700 poems and for Charles Darwin to climb 4,000 feet into the Andean foothills. Mitochondria helped me give birth, twice. They allow me to tie a shoelace and stow The Essential Dickinson (weighing 171 grams) in my purse. They fill a room with Nobodies — and the room keeps expanding.
***
Sara Talpos writes frequently about medicine, public health, and the environment. Her articles and essays have appeared in Kenyon Review, Undark, Mosaic, Medicine at Michigan, and numerous other publications. She is also a poet, with work in RHINO and Bellevue Literary Review, among others.
This essay, “To Fill a Room with ‘Nobody,’” was one of Broad Street’s nominations for the Pushcart Prize. | https://broadstreet.medium.com/to-fill-a-room-with-nobody-sara-talpos-puts-poetry-and-mitochondria-under-the-microscope-d57e5f17ecf3 | ['Broad Street Magazine'] | 2019-02-19 01:44:30.216000+00:00 | ['Emily Dickinson', 'Botany', 'Infection', 'Poetry', 'Science'] |
Why am I Writing about Islam | Why am I Writing about Islam
Someone needs to talk about Islam, why don’t I be that person
Photo by Ashkan Forouzani on Unsplash
I recently decided to start writing about my religion — Islam. I became a Muslim in early childhood, when my grandmother raised me, though full awareness came to me much later. What my grandmother taught me and told me definitely shaped me, and I am very grateful to her. In early childhood it is very difficult to fully understand what religion is, what faith is, why you must fulfill religious precepts, and what the punishment will be if you sin, so as you grow up you must come to this consciously and on your own. I must admit that no one can ever fully know religion and God, because people simply do not have enough time or opportunity. But we must strive toward knowledge of God, and we must follow religious precepts and teachings.
Eradicating false news
Truth has come and falsehood has vanished. Indeed! Falsehood is ever bound to vanish. (The Quran 17:81)
Only a couple of years ago did I become a Muslim who knows the answer to the question of why he is in this religion, why he fulfills religious precepts, and what he aspires to in this life. Along the way I read a huge number of books, watched video lectures and teachings, and talked with many scholars of Islam and people with enormous experience in Islamic matters. I came to feel that most people do not know the truth about Islam; they know only what they see on TV and read in the news, and that news is mostly false. So I want to show people real Islam, and I want to show them that this religion is filled with kindness and peace. The best way to fight untruth is to tell the truth, so I'll do it.
Learning more by writing for others
Acquire knowledge, and learn tranquility and dignity. (Omar ibn al-Khattab)
By trying to explain Islam to others, I will learn more and force myself to delve even deeper into religion. I have recently been reading less religious literature, so this will be a great challenge for me. I can also learn a lot of new things, and I will get new ideas for creating mobile applications. It will be a huge plus for me and good preparation before the holy month of Ramadan.
Someone should write about Islam
And do not walk proudly on earth. You can neither pierce the earth, Nor can you match the mountains in height. (The Quran 17:37)
Medium is growing very fast, and more and more people are writing about different religions, but very few write about Islam, so I decided to start doing this and give people a chance to meet Islam. In learning about religions and about Islam, people should understand that all religious people ought to be friends with one another and that there should be no hostility. Our task is to be closer to God and live in peace, trying to know him better and worshiping him.
Outro
In general, I will now write a lot about Islam and religion, so I will be glad if you share your opinions and questions on various topics with me. I will be happy to answer your questions and study topics that will help you understand Islam better. There is much that is sinful and false in the world; people try to desecrate many holy things and throw mud at truly beautiful ones. This concerns not only Islam but other religions as well. But in recent years it is Islam that has been most exposed to these attacks, so I want to show you that they are all false.
My Top 3 Poets to Watch in 2020 | They’re all as new as me
Photo by Thought Catalog on Unsplash
I love poetry. I love the way it sounds, the way it moves, everything about it. I like to dabble in poetry, too.
However, my poetry is child’s play compared to the poetry I’ve seen from these poets on here. As with my last Top 3 article, all of them have been on Medium since 2019. Everything I’ve read from them has been amazing.
These poets deserve all the love they can get.
This one’s a no-brainer. Jenny Justice started in Medium the same month I joined. Her poetry exploded on Medium, and I’ve been following her ever since. Once I find some extra money, I plan on buying her book as well.
Here is my favorite one of her poems. It's about watching a monk in a Buddhist temple. Beautifully written.
I found her by accident one day while looking at different articles in Facebook groups. When she’s not writing poetry, she’s writing about times with her son and mental health.
One of my favorite things about poetry is the many different ways it can be interpreted. With my favorite poem of hers, I see it as someone tired of the constant one-night stands. Someone else can look at it and see something completely different. It makes poetry beautiful in that sense.
I’ve stumbled upon her recently. I haven’t read as much of her poetry as I have the others, but everything I’ve read so far has been excellent. I’m excited to see what she does with her work in 2020.
My favorite poem of hers is about relating to James in the story of James and the Giant Peach. It surprises me how small things can make us relate to different characters in the stories we read.
And those are my Top 3 poets to watch in 2020. So far, their poetry has been excellent, and I can’t wait to see what they come up with in the next year. I’ll be looking forward to reading anything they have out there. | https://medium.com/top-3/my-top-3-poets-to-watch-in-2020-6a7adb3ecb73 | ['Keara Lou'] | 2019-12-30 22:02:24.200000+00:00 | ['Top 3', 'Writing', 'Writers On Medium', 'New Year', 'Poetry']
The Power Of Tracking & Budgeting | Ignorance Of Finances
The vast majority of people are completely unaware of their financial situation. Even if they are aware, they sweep it under the rug and continue to struggle with this area of their lives.
The majority of people engage in unhealthy financial practices such as:
Debt accumulation (Loans, Financing, etc.)
Spending more than they make
Practicing stagnation with the amount of income they make
Lack of saving and investing
Lack of tracking and budgeting
Lack of financial literacy
The list goes on.
The point is that most people are uneducated when it comes to the topic of building wealth. They weren’t taught these things in school or by their parents and so they just go with the flow of their ignorance and get themselves into unhealthy situations financially.
The single biggest change that we can make in our financial situation is to track and budget our spending. If we understand where our money goes and practice self-discipline then we can take control of our ability to gain freedom through the vehicle of financial independence.
Let’s go through the most common financial problems and how we can get around them to move towards a more secure financial future:
Debt Accumulation
In our society, it is unbelievably easy to fall into the trap of accumulating an enormous amount of debt in a short period of time. The system is designed or set up in such a way that people who don’t know what they are doing can get themselves in financial trouble very easily.
The main sources of this debt are student loans, financing vehicles, mortgages, taking out other loans, credit cards, so on and so forth.
Avoid these things until you can actually afford to buy whatever it is that you would like with cash. If you don’t have at least 3 times the amount of the purchase you want to make then don’t even think of buying it.
For example, if you want to buy a $100,000 car and don’t have at least $300,000 then you have no business buying that car. If you can’t comfortably purchase more than one of the item then it’s an absolute no when it comes to making the purchase.
Negative Cash Flow
Having negative cash flow just means that you're spending more money than you are actually earning. This ties intimately with debt accumulation.
Almost everyone, if they tracked correctly, would realize that they are either coming extremely close to having negative cash flow or are already past the point of spending more than they make. This is a scary habit to develop because no matter how much money you make, you come out short as a result of this addiction to spending.
Set a budget and practice self-discipline with those budgets. Also, track what the biggest categories of spending are for your unique situation. For most people, housing, transportation, and food are the major culprits when it comes to spending more than we make. Let’s go into them a bit deeper below:
Housing
Housing costs can quickly add up, especially if you live in an expensive area. If you can, split rent with other people and keep utilities to a minimum.
This will allow you to free up much needed resources for the future.
Transportation
Transportation can also add up very quickly if we are not careful. Carpool with others if you can, ride a bike, look for cheaper insurance — these are all viable options to reduce the cost of your transportation.
Food
This, in my opinion, is the biggest and most insidious category that goes under people’s radars.
Last month, I spent $600 on food. I was absolutely shocked once I started using the Mint app and tracked every category of where my money was spent. Therefore, I set a budget for $150 a month on food and just by doing that, I have dramatically lowered the cost of my food spending.
Cook your meals, look for deals on healthy food, stop going out so much and if you do go out then meal prep ahead of time to avoid unnecessary food expenditures.
Income Stagnation
Many people are making a certain income and seem to keep themselves stuck at that certain level of income. They don’t look for new ways to increase their income or change their mental approach as to how to make money.
The solution to this consists of a few things. Firstly, look for new ways to make more of an income on the side, because once your awareness is open to it, you’ll find solutions.
Secondly, surround yourself with people who earn way more than you do. By doing this, you will pick up the beliefs, mindsets, and behaviors and eventually move towards a place of real wealth. We become who we surround ourselves with.
Increasing your income combined with the focus on reducing spending is a powerful combination.
Lack Of Saving & Investing
Saving and investing is like watching grass grow: it is not exciting at all, but it is a necessary habit to implement if you want to build real wealth.
The solution to this is to automate systems so that both saving and investing work automatically without your willpower. Set up an account such as a Roth IRA, index fund, so on and so forth and automate your savings. Start with $100 per month.
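To see why automating even a small amount pays off, here is a quick sketch of how a fixed monthly contribution compounds over time. The 7% annual return and the 10-year horizon are illustrative assumptions on my part, not figures from this article.

```python
def future_value_of_monthly_savings(monthly, annual_rate, years):
    """Future value of a fixed monthly contribution with monthly
    compounding (ordinary annuity formula)."""
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of monthly contributions
    return monthly * ((1 + r) ** n - 1) / r

# Illustrative assumptions: $100/month at a 7% annual return for 10 years.
total = future_value_of_monthly_savings(100, 0.07, 10)
print(f"${total:,.2f}")
```

The contributions alone add up to $12,000; under these assumed returns, compounding adds roughly five thousand dollars more on top.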
Lack Of Tracking & Budgeting
This one can be solved by downloading an app such as Mint and checking it every single day. Create a habit of doing this and you will already be ahead of most people in terms of finance.
Lack Of Financial Literacy
The vast majority of people are uneducated financially. The solution to this is to become curious about finances and learn as much as you can from high quality resources.
Read books about money and finance, listen to podcasts, read blogs, go to seminars, as long as you learn about finances then you will end up in a good place in the future.
These are the biggest financial culprits. | https://zaiderrr.medium.com/the-power-of-tracking-budgeting-6dae0493f65f | ['Zaid K. Dahhaj'] | 2018-08-25 13:43:35.830000+00:00 | ['Investing', 'Self Improvement', 'Money', 'Psychology', 'Finance'] |
Facing My Shame | I was able to hide my excoriation disorder pretty well until my late twenties. I lived alone a lot, or with people who were frequently not home. As soon as they were gone, I’d run to the bathroom and start digging into my face. I’d keep lights low, or sit a certain way so no one could see where I’d been doing the most picking. If someone asked why my face was so blotchy and red, I’d make an excuse. I have sensitive skin, so it wasn’t too hard to come up with something that sounded like a reasonably rational explanation.
Then, in April 2005, I met the man who would become my husband.
Excoriation disorder isn’t one of those things that comes up over dinner. Or ever, if you can help it. Literally no one in my life up to that point knew what I’d been dealing with. My future husband was already taking on someone with dysthymia and two anxiety disorders — what more could I ask from him?
I don’t remember exactly how he discovered my disorder, not the way I remember being a ten-year-old staring into the bathroom mirror and pressing my fingernails into my face for the first time. But when you live in a small apartment with your romantic partner for many years, it becomes almost impossible to hide. I tried to explain it to him but left out the OCD part. It felt like too much. “Stop picking your face,” he still admonishes me, as if in the past three decades the only thing preventing me from doing that is a stern lecture. If only it were that easy. This being America, I can’t afford more of the cognitive behavioral/exposure therapy that worked so well for my anxiety disorders. And I already know that my primary care physician will insist on my going to a psychiatrist for OCD treatment rather than prescribe something herself. I have yet to find a psychiatrist in this city who charges less than $250 an hour and takes insurance. (I did my previous round of therapy through an adult anxiety clinic at a local university.)
So, like many of us with mental illness, I’m fighting this one alone. | https://medium.com/swlh/facing-my-shame-df6e80cd1b41 | ['Jennifer Loring'] | 2019-05-21 12:51:39.816000+00:00 | ['Mental Illness', 'Ocd', 'Dermatillomania', 'Mental Health']
Installing Hadoop 3.2.1 Single node cluster on Windows 10 | While working on a project two years ago, I wrote a step-by-step guide to install Hadoop 3.1.0 on Ubuntu 16.04 operating system. Since we are currently working on a new project where we need to install a Hadoop cluster on Windows 10, I decided to write a guide for this process.
1. Prerequisites
First, we need to make sure that the following prerequisites are installed:
1. Java 8 runtime environment (JRE): Hadoop 3 requires a Java 8 installation. I prefer using the offline installer.
2. Java 8 development Kit (JDK)
3. To unzip downloaded Hadoop binaries, we should install 7zip.
4. I will create a folder “E:\hadoop-env” on my local machine to store downloaded files.
2. Download Hadoop binaries
The first step is to download Hadoop binaries from the official website. The binary package size is about 342 MB.
Figure 1 — Hadoop binaries download link
After finishing the file download, we should unpack the package using 7zip in two steps. First, we should extract the hadoop-3.2.1.tar.gz archive, and then we should unpack the extracted tar file:
Figure 2 — Extracting hadoop-3.2.1.tar.gz package using 7zip
Figure 3 — Extracted hadoop-3.2.1.tar file
Figure 4 — Extracting the hadoop-3.2.1.tar file
The tar file extraction may take some minutes to finish. In the end, you may see some warnings about symbolic link creation. Just ignore these warnings; they concern symbolic links, which are not relevant for this Windows setup.
Figure 5 — Symbolic link warnings
After unpacking the package, we should add the Hadoop native IO libraries, which can be found in the following GitHub repository: https://github.com/cdarlint/winutils.
Since we are installing Hadoop 3.2.1, we should download the files located in https://github.com/cdarlint/winutils/tree/master/hadoop-3.2.1/bin and copy them into the “hadoop-3.2.1\bin” directory.
3. Setting up environment variables
After installing Hadoop and its prerequisites, we should configure the environment variables to define Hadoop and Java default paths.
To edit environment variables, go to Control Panel > System and Security > System (or right-click > properties on My Computer icon) and click on the “Advanced system settings” link.
Figure 6 — Opening advanced system settings
When the “Advanced system settings” dialog appears, go to the “Advanced” tab and click on the “Environment variables” button located on the bottom of the dialog.
Figure 7 — Advanced system settings dialog
In the “Environment Variables” dialog, press the “New” button to add a new variable.
Note: In this guide, we will add user variables since we are configuring Hadoop for a single user. If you are looking to configure Hadoop for multiple users, you can define System variables instead.
There are two variables to define:
1. JAVA_HOME: JDK installation folder path
2. HADOOP_HOME: Hadoop installation folder path
Figure 8 — Adding JAVA_HOME variable
Figure 9 — Adding HADOOP_HOME variable
Now, we should edit the PATH variable to add the Java and Hadoop binaries paths as shown in the following screenshots.
Figure 10 — Editing the PATH variable
Figure 11 — Editing PATH variable
Figure 12 — Adding new paths to the PATH variable
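As an alternative to the dialogs above, the same user variables can be created from a command prompt with setx. The JDK path below is only an assumed example; substitute your actual installation folders. Note that setx truncates long values, so a long PATH variable is safer to edit through the GUI as shown above.

```shell
:: Assumed example paths: adjust to your machine before running.
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_271"
setx HADOOP_HOME "E:\hadoop-env\hadoop-3.2.1"
```

setx writes the variables for the current user; open a new console window afterwards for them to take effect.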
3.1. JAVA_HOME is incorrectly set error
Now, let’s open PowerShell and try to run the following command:
hadoop -version
In this example, since the JAVA_HOME path contains spaces, I received the following error:
JAVA_HOME is incorrectly set
Figure 13 — JAVA_HOME error
To solve this issue, we should use the Windows 8.3 short path instead. As an example:
Use “Progra~1” instead of “Program Files”
Use “Progra~2” instead of “Program Files(x86)”
After replacing “Program Files” with “Progra~1”, we closed and reopened PowerShell and tried the same command. As shown in the screenshot below, it runs without errors.
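The substitution can be sketched as a tiny helper. This is a hypothetical function for illustration only; on a real machine you can list the actual 8.3 short names with `dir /x` in a command prompt, since they are not guaranteed to be exactly `Progra~1`.

```python
def to_short_path(path):
    """Replace folder names containing spaces with their usual
    Windows 8.3 short names (hypothetical helper for illustration)."""
    # Order matters: handle "Program Files (x86)" before "Program Files".
    replacements = {
        "Program Files (x86)": "Progra~2",
        "Program Files": "Progra~1",
    }
    for long_name, short_name in replacements.items():
        path = path.replace(long_name, short_name)
    return path

print(to_short_path(r"C:\Program Files\Java\jdk1.8.0_271"))
# C:\Progra~1\Java\jdk1.8.0_271
```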
Figure 14 — hadoop -version command executed successfully
4. Configuring Hadoop cluster
There are four files we should alter to configure Hadoop cluster:
1. %HADOOP_HOME%\etc\hadoop\hdfs-site.xml
2. %HADOOP_HOME%\etc\hadoop\core-site.xml
3. %HADOOP_HOME%\etc\hadoop\mapred-site.xml
4. %HADOOP_HOME%\etc\hadoop\yarn-site.xml
4.1. HDFS site configuration
As we know, Hadoop is built using a master-slave paradigm. Before altering the HDFS configuration file, we should create a directory to store all master node (name node) data and another one to store data (data node). In this example, we created the following directories:
E:\hadoop-env\hadoop-3.2.1\data\dfs\namenode
E:\hadoop-env\hadoop-3.2.1\data\dfs\datanode
Now, let’s open “hdfs-site.xml” file located in “%HADOOP_HOME%\etc\hadoop” directory, and we should add the following properties within the <configuration></configuration> element:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///E:/hadoop-env/hadoop-3.2.1/data/dfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///E:/hadoop-env/hadoop-3.2.1/data/dfs/datanode</value>
</property>
Note that we have set the replication factor to 1 since we are creating a single node cluster.
4.2. Core site configuration
Now, we should configure the name node URL adding the following XML code into the <configuration></configuration> element within “core-site.xml”:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9820</value>
</property>
4.3. Map Reduce site configuration
Now, we should add the following XML code into the <configuration></configuration> element within “mapred-site.xml”:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>MapReduce framework name</description>
</property>
4.4. Yarn site configuration
Now, we should add the following XML code into the <configuration></configuration> element within “yarn-site.xml”:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>Yarn Node Manager Aux Service</description>
</property>
5. Formatting Name node
After finishing the configuration, let’s try to format the name node using the following command:
hdfs namenode -format
Due to a bug in the Hadoop 3.2.1 release, you will receive the following error:
2020-04-17 22:04:01,503 ERROR namenode.NameNode: Failed to start namenode.
java.lang.UnsupportedOperationException
    at java.nio.file.Files.setPosixFilePermissions(Files.java:2044)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:452)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:188)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1206)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1649)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1759)
2020-04-17 22:04:01,511 INFO util.ExitUtil: Exiting with status 1: java.lang.UnsupportedOperationException
2020-04-17 22:04:01,518 INFO namenode.NameNode: SHUTDOWN_MSG:
This issue will be solved within the next release. For now, you can fix it temporarily using the following steps (reference):
1. Download the hadoop-hdfs-3.2.1.jar file from the following link.
2. Rename hadoop-hdfs-3.2.1.jar to hadoop-hdfs-3.2.1.bak in the folder %HADOOP_HOME%\share\hadoop\hdfs.
3. Copy the downloaded hadoop-hdfs-3.2.1.jar into the folder %HADOOP_HOME%\share\hadoop\hdfs.
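Assuming the jar was saved to your Downloads folder (an assumption; adjust the path to wherever you downloaded it), the rename and copy steps can be done from a command prompt:

```shell
cd /d %HADOOP_HOME%\share\hadoop\hdfs
ren hadoop-hdfs-3.2.1.jar hadoop-hdfs-3.2.1.bak
copy "%USERPROFILE%\Downloads\hadoop-hdfs-3.2.1.jar" .
```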
Now, when we re-execute the format command (run the command prompt or PowerShell as administrator), we are asked to approve the file system format.
Figure 15 — File system format approval
And the command is executed successfully:
Figure 16 — Command executed successfully
6. Starting Hadoop services
Now, we will open PowerShell, and navigate to “%HADOOP_HOME%\sbin” directory. Then we will run the following command to start the Hadoop nodes:
.\start-dfs.cmd
Figure 17 — Starting Hadoop nodes
Two command prompt windows will open (one for the name node and one for the data node) as follows:
Figure 18 — Hadoop nodes command prompt windows
Next, we must start the Hadoop Yarn service using the following command:
.\start-yarn.cmd
Figure 19 — Starting Hadoop Yarn services
Two command prompt windows will open (one for the resource manager and one for the node manager) as follows:
Figure 20 — Node manager and Resource manager command prompt windows
To make sure that all services started successfully, we can run the following command:
jps
It should display the following services:
14560 DataNode
4960 ResourceManager
5936 NameNode
768 NodeManager
14636 Jps
Figure 21 — Executing jps command
7. Hadoop Web UI
There are three web user interfaces to be used:
Name node web page: http://localhost:9870/dfshealth.html
Figure 22 — Name node web page
Data node web page: http://localhost:9864/datanode.html
Figure 23 — Data node web page
Yarn web page: http://localhost:8088/cluster
Figure 24 — Yarn web page
8. References | https://towardsdatascience.com/installing-hadoop-3-2-1-single-node-cluster-on-windows-10-ac258dd48aef | ['Hadi Fadlallah'] | 2020-05-03 13:42:51.137000+00:00 | ['Hadoop', 'Big Data', 'Windows 10', 'Hadoop Cluster', 'Hadoop 3'] |
Using Machine Learning to Predict Dying Stars in our Galaxy… and Beyond! | Using Machine Learning to Predict Dying Stars in our Galaxy… and Beyond!
In this article, I demonstrate how machine-learning models can be used to correctly classify dying stars, or more specifically pulsars, as they seem to be the most potentially useful for space exploration.

This will be a journey into predicting whether or not observations, made by high-powered telescopes on Earth and potentially deep-space probes in the future, are pulsars. Before we jump into the machine learning model I have developed to help identify pulsars, let’s talk a bit about what pulsars, or ‘pulsar stars’, actually are, since they aren’t pulsating and actually aren’t technically stars (anymore).
Image via: PITRIS/GETTY IMAGES
What are pulsars?
Consider, for the sake of explanation, that stars have a life. (no offense to any stars here on Earth) Stars eventually die when they go supernova and collapse into a black hole if they are extremely massive. If they are less massive, between 7 and 25 solar masses (7–25 times the mass of our sun) or maybe a bit larger if they are especially metal-rich, they then become neutron stars, a super-dense mass only around 10 kilometers in radius but so dense that a teaspoon full of their mass would be as heavy as Mt. Everest if placed on Earth. Neutron stars continue to emit radiation from their collapsed core for millions of years until they eventually cool completely and become cold dead remnants of their once brilliant heavenly bodies.
A smaller subset of these neutron-star remnants is called ‘pulsars’ or ‘pulsar stars’. While the term ‘pulsar’ is a combination of the words ‘pulse’ and ‘star’, pulsars aren’t pulsating and are no longer living stars. Their electromagnetic emissions are continuous but beamed from their magnetic poles as seen in the photo above. The magnetic axis is not usually the same as the rotational axis, so they are called pulsars due to how, from any singular perspective, a pulse of radiation is observed each time the beam sweeps through the observer’s point of view. These sweeps are extremely regular in their interval since they occur with each rotation of the pulsar. It should be noted that pulsars can never ‘die’ since they are already actually dead stars, but eventually, over the course of probably more than a million years, their spins will slow and their emissions will cease. At this point, they would only become detectable from the gravity they exert on other celestial bodies or by detection of something called black-body radiation.
What do earthlings care about pulsars?
People on earth cannot observe most pulsar bodies or even their beams with our limited five senses. They have no effect on us, so you may be wondering: “What does this have to do with me or humankind as a whole?”
It takes a good amount of time, money, and effort to discover pulsar candidates and to then correctly identify whether those potential pulsars are true pulsars or not. This has been going on for over half a century and has undoubtedly involved many scientists and lots of funding to accomplish the identification of the over 2,000 known pulsars to date.
We already have a system for navigating the solar system with a network of radio antennas on Earth called the Deep Space Network. This does a pretty good job of determining how far a craft has flown and how much further it must go to reach its destination. The DSN has two limitations, though: the further away from Earth a spacecraft gets, the less precise the DSN gets, and it can only determine the distance of the craft from Earth, not its lateral position in space. Using pulsars as a set of lighthouse beacons, a craft could determine exactly where it is positioned in three-dimensional space. This will become more important as more craft are sent out to explore the outer reaches of our solar system and beyond.
Image via: Multiverse Hub
Now for the Data Science!
Those that are here for the witty banter and pictures may feel free to scroll past all this… it’s about to get extremely technical.
What you are about to witness, if you stick around, is what’s called “supervised machine learning”. I have all the verified labels, (pulsar or non-pulsar) for each observation in the data set. After splitting the data into train/validate/test sets I then will take away the answers for the validate and test sets and train my model using the answers on a majority of the data (training data), and then use the model to predict the answers for the rest of the data.
The data set used for this project can be found on Kaggle here and UCI here.
Since this data was meticulously collected and documented by astronomers and was remarkably clean and complete, my first task was to split the data into a set used to train each model that I develop, one used to validate each models performance and check the metrics for tuning my hyper-parameters, and finally, a set on which to test my models that the models have never seen.
I chose to separate 10% of the data into the test set, and of the remaining 90%, 20% of that was used for my validation set. This means the model was trained on 72% of the data available and validated and tested on the remaining 28%.
# Setting aside a sample of the data for testing.
# This portion will not be "known" during the model adjustment process.
train, test = train_test_split(ps, test_size=0.10,
                               stratify=ps['target_class'],
                               random_state=17)

# Separating the remaining data into training and validation sets.
# This allows me to tweak my model to perform at its best before testing
# it on unknown data.
train, val = train_test_split(train, test_size=0.20,
                              stratify=train['target_class'],
                              random_state=17)

train.shape, val.shape, test.shape

Out[7]: ((12886, 9), (3222, 9), (1790, 9))
Next, I do a quick check of the distributions of the target class within my splits. This even distribution is accomplished using the ‘stratify’ argument on the ‘target class’ feature. (In this column pulsars are labeled with 1 and non-pulsars are labeled with 0)
train['target_class'].value_counts(normalize=True),
val['target_class'].value_counts(normalize=True),
test['target_class'].value_counts(normalize=True)

Out[8]:
(0    0.908428
 1    0.091572
 Name: target_class, dtype: float64,
 0    0.908442
 1    0.091558
 Name: target_class, dtype: float64,
 0    0.90838
 1    0.09162
 Name: target_class, dtype: float64)
Solid
If I just built a model that always reported ‘0’ (non-pulsar star) it would be correct about 90.8% of the time (our baseline accuracy). This may prove difficult to improve upon with machine learning, but I enjoy a challenge!
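This baseline is simply the majority class’s share of the labels. A small sketch of the idea, using toy labels with roughly the same split as this data set:

```python
def baseline_accuracy(labels):
    """Accuracy of a 'model' that always predicts the most common
    class: the majority class's share of the labels."""
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return max(counts.values()) / len(labels)

# Toy labels with the same 90.8% / 9.2% split as the pulsar data.
labels = [0] * 908 + [1] * 92
print(baseline_accuracy(labels))  # 0.908
```

Any classifier worth deploying here has to beat this number.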
Image via: NRAO pulsar
Now to separate our data into X feature matrices and y target vectors.
target = 'target_class'
features = ps.columns.drop(target)

X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
On to model production…
I won’t post the code for the rest of this as it can get a bit lengthy but feel free to visit my GitHub page where the project notebook lives if code examples are your jam.
Model The First: Random Forest Regressor
Validation Accuracy for Random Forest Regressor : 0.8039317754042418
This model does not show improvement from the baseline accuracy. It only predicted the correct class 80.4% of the time. While this is about what I expected, and may still seem pretty good for many tasks, simply predicting that none of the observations were pulsars would have given me 90.8% accuracy, so to me, this model is not useful for this use case.
Model #2: Logistic Regression
Validation Accuracy for Logistic Regression 0.978895096213532
On the first try, with no adjustments from the stock Logistic Regression function in SciKitLearn’s linear_model module, this model does a great job at predicting pulsars given the data. This is great for me because I can use a linear model to determine the coefficients of each feature in the data set with a couple of simple lines of code.
Coefficients plot for Linear Regression model
Model V3.0: Random Forest Classifier
Validation Accuracy for Random Forest Classifier 0.9798261949099938
After multiple iterations, tweaking the hyper-parameters lower and higher, I found the sweet spot of this model's performance capabilities.
Now we’re talking! the RFC model was able to get even closer to predicting the proper classification 98% of the time on the validation set. There isn’t much more improvement that can be made, but I still will try for a better accuracy score… for science!
Although I’d love to see another season of ‘Lost in Space’, I wouldn’t want to be responsible for the reality-show version’s release.
So for my own peace of mind or the sake of proof, let’s take a look at another metric before deciding if this is a useful model.
Additional metrics
Confusion Matrix of predictions made on the validation set by my random forest classifier model :
Plot images created by the author
This confusion matrix displays how often my model was correct in its predictions and when it became ‘confused’ by the features and incorrectly labeled the observation.
If I were floating around aimlessly in space after an unforeseen course change from impact or spatial anomaly, I would want my analytics to err more on the side of caution, (classifying ‘maybes’ as ‘not pulsars’) than the opposite. In this hypothetical, my crew does not use the actual pulsars that the machine labeled as non-pulsars and just goes with the observations it reported back as pulsars to reorient and course correct.
ROC/AUC Curve
Plot Images created by author
ROC/AUC score = 0.9754350205277573
F1 score = 0.8969258589511754
Recall score = 0.8406779661016949
My RFC model leans more towards false negatives than false positives where errors in classification were made.
For this reason, I will accept this model as useful.
Still, the quest for ‘better’ goes on…
Final round: XGBoost Classifier :
Validation Accuracy using XGBoost 0.9823091247672253
This is a 0.0024829298572315306 improvement on the RFC model.
Another improvement in predictive accuracy that could not be achieved with my RFC model. This, again, took fine-tuning of hyper-parameters to achieve.
Confusion Matrix of predictions made on the validation set by my XGBoost classifier model :
Plot Images created by the author
Eureka?…
Not so fast. I’ll need to check the other metrics.
Plot Images created by author
ROC/AUC score = 0.9828035878698036
F1 score = 0.8969258589511754
Recall score = 0.8406779661016949
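All of these scores derive from the confusion-matrix counts. Here is a small sketch with hypothetical counts (not the actual counts from my validation set) showing how precision, recall, and F1 are computed:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts.
    tp: true positives, fp: false positives, fn: false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 84 pulsars caught, 16 missed, 5 false alarms.
p, r, f1 = precision_recall_f1(tp=84, fp=5, fn=16)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.944 0.84 0.889
```

With counts like these, trading a few extra false negatives for fewer false positives raises precision at the cost of recall, which is exactly the trade-off I want for navigation.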
Accuracy and ROC/AUC scores have seen some improvements with XGBoost, with no drop in the recall or F1 score.
Warning: Objects in the following image may appear closer than they are.
Testing Accuracy:
That’s right, I finally get to break out the testing data that my model never got to know about until after all the fun was had in coding, validating, tweaking, validating, tuning, validating… well, you get it.
Testing accuracy with Random Forrest Regressor = 0.7627017333383153
Testing accuracy with Linear Regression = 0.976536312849162
Testing accuracy with Random Forrest Classifier = 0.976536312849162
Testing accuracy with XGBoost = 0.9793296089385475
We have a winner: XGBoost!
A lot of what goes on within machine learning algorithms is hidden from the user. Thanks to Lundberg and Lee, we data scientists now have an ML library called SHAP to help explain how this digital wizardry takes place. Fun fact: Shapley values were originally developed for game theory and named after Lloyd Shapley, who introduced them.
Image via: SHAP Github. (link in acknowledgments)
Model interpretation
The models feature impacts separated into Class 1-Pulsars & Class 0-Not pulsars
Shapley Force Plot of an observation that is not a pulsar from RFC model
Shapley Force Plot of an observation that is a pulsar from RFC model
A couple more plots for those that can’t get enough. Again, all of the code for these plots and others can be found here.
Permutation importance list of all features using eli5
Partial dependence plot for the most impactful feature
Created by the author using pdpbox
Outro :
I can imagine that sometime after we get sailing through the cosmos much slower than the speed of light, some other beings from another time, place, or dimension may just hand us a glowing rock that can wisp us away to anywhere at the blink of an eye with little more than a thought… but until that day comes, we have machine learning.
Thanks for the read! All claps will be dedicated to the universe for keeping me intrigued. Cheers!
Citations:
R. J. Lyon, B. W. Stappers, S. Cooper, J. M. Brooke, J. D. Knowles, Fifty Years of Pulsar Candidate Selection: From simple filters to a new principled real-time classification approach, Monthly Notices of the Royal Astronomical Society 459 (1), 1104–1123, DOI: 10.1093/mnras/stw656
DOI of the data set:
R. J. Lyon, HTRU2, DOI: 10.6084/m9.figshare.3080389.v1.
Acknowledgments
This data was obtained with the support of grant EP/I028099/1 for the University of Manchester Centre for Doctoral Training in Computer Science, from the UK Engineering and Physical Sciences Research Council (EPSRC). The raw observational data was collected by the High Time Resolution Universe Collaboration using the Parkes Observatory, funded by the Commonwealth of Australia and managed by the CSIRO.
Additional sources:
[SPINN: a straightforward machine learning solution to the pulsar candidate selection problem]
[Selection of radio pulsar candidates using artificial neural networks]
[NRAO public image licensing terms]
[Introduction to Pulsar, Pulsar Timing, and measuring of Pulse Time-of-Arrivals]
[S. Lundberg, S. Lee “A unified approach to interpreting model predictions,” NIPS 2017]
[SHAP Github] | https://medium.com/analytics-vidhya/pulsar-stars-what-are-they-and-why-should-you-care-8eba0cbcdcf6 | ['Timothy Eakin'] | 2020-12-10 16:09:12.841000+00:00 | ['Pulsar', 'Xgboost', 'Astronomy', 'Python', 'Data Science'] |
Invest In a Green Planet, Invest In Blockchain Technology | Blockchain & the energy market
Blockchain will naturally create environmental benefits by ensuring the significant reduction of corruption in government and its collusion with the financial and energy industries
James Wallace, Co-Founder, Exponential University
I live in Spain, the country of sun. In Ibiza we have 300 days of sun. You would expect solar panels everywhere. Well, in the more northern countries like Germany and Sweden, many more solar panels can be found. Consumers are incentivized to put excess energy back on the grid. There is even a subsidy for using solar panels.
Not in Spain. At least not from 2015 to 2018, when the former Rajoy government, at the demand of the conservative Partido Popular (PP), implemented a very controversial "sun tax". The Spanish government of the time made you pay to put your surplus energy back on the grid. You were also only allowed to cover a certain percentage of your consumption with your own green energy; above that threshold you were heavily taxed and risked a huge fine. There were probably three reasons for this insane situation:
1. It is actually expensive to redistribute energy on a grid, because of the high price of energy storage. This is the energy storage problem: Elon Musk has not yet invented a super battery that is cheaper to produce than the energy saved by using it.
2. It is expensive to transfer energy on a grid; the energy infrastructure has to change first. To use the grid, people have to pay for it, and the more efficient the grid, the cheaper using it gets. Spain hasn't innovated its grid much lately.
3. The energy cartel is still in favour of fossil fuels, in Spain too, and it has a strong oil lobby in politics. They even planned an oil rig only 30 km away from Ibiza's more than 60 pristine blue beaches. This is how crazy this world has got, and how money and oil can blur some people's view.
In all of the above, Blockchain technology can help.
When consumers produce something out of their consumption, this is called prosumption. An example is people putting surplus energy from Solar panels back on the grid. Blockchain technology can bring its core quality in this economic field of people producing value, in this case, electricity, and trading this peer to peer.
It can bring accounting (how much energy is created?), tracking and matching of the two parties (where can it be sold, which household needs energy and how much?), and payment: energy tokens that represent the value can be paid out through a secure system of crypto-wallets. This would make the market very appealing, and the costs saved by cutting out the conventional energy companies could be invested in alternative grids.
This possible peer-to-peer, borderless trading of energy can be called a microeconomic movement that disrupts and innovates the macroeconomic energy structure bottom-up.
When the sun shines all day in Ibiza but Valencia is clouded, I could potentially sell energy to households in Valencia. In this blockchain-powered prosumption system, "smart meters could be used to account and register the micro-generated energy on a distributed ledger" (page 76; Distributed Ledger Technology: beyond block chain. Published by the UK Government Chief Scientific Adviser)
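To make the accounting idea concrete, here is a deliberately tiny, in-memory Python sketch. Everything in it is invented for illustration (the household names, the one-token-per-kWh rate), and it has none of the distribution or security properties of a real blockchain; it only shows the bookkeeping that smart meters and energy tokens would automate.

```python
class ToyEnergyLedger:
    """A toy, in-memory stand-in for the distributed ledger described above.

    Illustration only: metered surplus -> tokens -> peer-to-peer settlement.
    A real system would be distributed, tamper-evident, and cryptographically
    secured; this class is none of those things.
    """

    def __init__(self):
        self.balances = {}   # household -> energy tokens (here: 1 token = 1 kWh)
        self.trades = []     # record of (seller, buyer, kwh) settlements

    def register_surplus(self, household, kwh):
        # A smart meter would report this reading; we simply credit tokens.
        self.balances[household] = self.balances.get(household, 0) + kwh

    def trade(self, seller, buyer, kwh):
        # Refuse to sell surplus the seller never generated.
        if self.balances.get(seller, 0) < kwh:
            raise ValueError("seller has no such surplus")
        self.balances[seller] -= kwh
        self.balances[buyer] = self.balances.get(buyer, 0) + kwh
        self.trades.append((seller, buyer, kwh))


ledger = ToyEnergyLedger()
ledger.register_surplus("ibiza_household", 5)            # sunny day: 5 kWh surplus
ledger.trade("ibiza_household", "valencia_household", 3)  # cloudy Valencia buys 3 kWh
print(ledger.balances)  # -> {'ibiza_household': 2, 'valencia_household': 3}
```

The point of the sketch is only that the hard part is not the arithmetic; it is making this bookkeeping trustless and shared, which is exactly what a blockchain adds.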
Another option would be for my surplus energy to become solar tokens. Instead of mining Bitcoins, people can earn solar tokens as a reward for generating solar energy. This model, in essence, can be applied to the creation of all sorts of sustainable energy. The conditions for such projects to work are that the technology works, that enough people start using it or buy the coin, and that there is a smart distribution network for the created energy.
Or what about this one;
Hydro-powered Bitcoin mining facilities in the Swiss Alps could purchase carbon credits. In this way, the relatively high carbon footprint of Bitcoin mining can pay for its emissions. Since we don't know the real long-run value of Bitcoin to society, it is difficult to judge its high mining energy cost at this stage of the blockchain and Bitcoin revolution.
I believe the energy-saving prospects of blockchain technology described in these examples, however futuristic they might sound, will far outweigh the initial investments that need to be made.
Solving Deceiving Problems Presented by Heroku Dyno Processes | Solution Process
Included in the data sent by the POST request was the query of the vote URL. A typical URL looks like this:
By having the user click on a link with query parameters containing the server and channel id, I could have that data readily available when I received a vote event. My implementation looked similar to the following:
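The original snippet did not survive here, so what follows is only a generic reconstruction of the idea using the Python standard library. The parameter names server and channel, and the example URL, are assumptions for illustration rather than dblpy specifics.

```python
from urllib.parse import urlparse, parse_qs


def extract_ids(vote_url):
    """Pull the server and channel ids out of a vote link's query string."""
    query = parse_qs(urlparse(vote_url).query)
    # parse_qs returns a list per key; take the first value if present.
    server_id = query.get("server", [None])[0]
    channel_id = query.get("channel", [None])[0]
    return server_id, channel_id


# Hypothetical vote link with the ids embedded as query parameters.
url = "https://top.gg/bot/1234/vote?server=111&channel=222"
print(extract_ids(url))  # -> ('111', '222')
```

With the ids recovered like this, the webhook handler knows exactly which channel to post the vote acknowledgement in.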
With that out of the way, I was ready to host my bot on Heroku. I cleaned up my code and merged my development branch with my primary branch, which was then picked up by Heroku’s automatic deployment. I changed the webserver endpoint on Discord Bot List (DBL) to point to where my Heroku app was hosted, instead of pointing to my local machine. I waited for my bot to start up and then sent a test POST request from DBL. Immediately, I started getting an error telling me that there were no web processes running.
at=error code=H14 desc="No web processes running" method=POST path="/dblwebhook" host=_.herokuapp.com request_id=7a54e44c-5c2a-40fe-8bfc-097313c0c919 fwd="_" dyno= connect= service= status=503 bytes= protocol=https
After researching the error, it seemed that my Procfile was configured wrong. The Procfile tells Heroku what type of process to run and what file to run the process on. I had my process type set as worker, which upon further research would not work for a web server. From Heroku:
“A Heroku app’s web process type is special: it’s the only process type that can receive external HTTP traffic from Heroku’s routers. If your app includes a web server, you should declare it as your app’s web process.”
Clearly, I had just configured my process type incorrectly and all that was needed was a quick change from worker to web. And so that was exactly what I did.
# worker: python bot.py
web: python bot.py
I pushed my code to GitHub and tested my DBL vote event once again by sending a POST request to the endpoint of my Heroku app, and it worked! My bot had received the webhook and reacted accordingly. However, I wouldn’t be writing an article about a simple solution I achieved on my second attempt if there were no caveats that came with it. And unfortunately, my change from worker to web was just that.
The reason a worker process is recommended for a Python bot is that it’s constantly active. This means that the worker is always listening for a command and is able to respond to it immediately. On the other hand, a web process falls asleep after idling for some time.
As noted in the Heroku documentation,
“If an app has a free web dyno, and that dyno receives no web traffic in a 30-minute period, it will sleep.”
For obvious reasons, this wouldn’t work for a Discord bot, which receives traffic via an API and not through the web. At this point, I tried many different solutions, from creating my own web server and hosting it outside of Heroku to restructuring my code to constantly calling the dblpy API for a vote count. None of these gave me much success, and I was right back where I started: needing the HTTP routing of the web process but the uptime of the worker process. How about just running these two processes at once?
I changed my Procfile to have my processes simultaneously run my bot.py file like so:
worker: python bot.py
web: python bot.py
I pushed my code and sent out a test webhook to my app domain. The web process then fired and sent the appropriate response in the right channel. Everything seemed good there. Next, I tried a command in my Discord chat and waited for the worker process to react to it. That too worked; however, the worker process wasn’t the only one that caught the command.
Remember how the web process idles for some time after receiving web traffic? During that idle time, it acts just like the worker process and is able to receive commands from the Discord API. That meant that for thirty minutes after someone voted, any subsequent commands entered would get duplicate responses like so:
I figured that the only way of preventing a double response would be to divide the tasks between the processes. I created a copy of my bot.py file and named it bot_web.py . This new file ran a second instance of my Discord bot, but it would only be responsible for web functions.
To link the new file with my web server, I created a new folder called cogs_web and added my dbl.py file. Now the only cogs that the web process would be able to access were the ones associated with the webserver.
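A minimal sketch of what such a bot_web.py could look like with discord.py 1.x. The cog path cogs_web.dbl matches the folder described above, but the token variable name is an assumption, and this fragment shows the idea only, not the author's actual file; it will not run without a token and the rest of the project.

```python
# bot_web.py - a second bot instance responsible ONLY for web-facing work.
import os

from discord.ext import commands

bot = commands.Bot(command_prefix="!")

# Load only the web-related cogs (e.g. the DBL webhook cog). Because no
# chat-command cogs are loaded here, this instance never answers commands,
# so it cannot produce the duplicate responses described above.
bot.load_extension("cogs_web.dbl")

bot.run(os.environ["DISCORD_TOKEN"])
```

The worker process keeps running the original bot.py with the chat cogs, so each instance owns a disjoint set of responsibilities.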
I changed my Procfile to read:
worker: python bot.py
web: python bot_web.py
And I pushed my new code to GitHub to be redeployed. I once again sent my test webhook to my Heroku app and my test command to my Discord server. Everything seemed perfect this time, and I was able to use the bot as intended. But alas, I had missed a line in the Heroku documentation that would be this solution’s downfall.
“In addition to the web dyno sleeping, the worker dyno (if present) will also sleep.”
The web process essentially brings down any other processes with it. This meant that whenever the web process went to sleep, the worker process followed.

(Source: https://medium.com/better-programming/deploying-a-python-discord-bot-using-dblpy-on-heroku-259e48c873ec, 2020-11-18)
100 Funny Programmer Quotes | 100 Funny Programmer Quotes
For when coffee isn’t enough to bring a smile to your face
Photo by Ben White on Unsplash
As I was looking for some useful coding quotes, I discovered many funny ones. Enjoy reading them — I thought these should be compiled together.
Most are from other sources (as linked), with the exception of a handful that are my own.
I have separated them into:
Hope you enjoy at least some of them!

(Source: https://medium.com/better-programming/101-funny-programmer-quotes-76c7f335b92d, 2020-10-22)
There is Only Poetry | There are no experts,
Only poets are worth listening to.
There are no moral authorities,
Poetry has always been the language of spirituality, of God.
There are no politicians, no leaders,
Poetry has always been the arbiter of truth, of law.
Poetry is how you know what love is,
Poetry is how you know what humanness is,
Poetry is how you know what life is.
There are pieces of heaven here,
Every day
In each moment.
And if the poets don’t point them out
Who will?
There are ways through, ways above, ways to rise
Every day
In each moment.
And if the poets don’t show us the way
Who will?
There is nothing that can move
As many, as quickly, as deeply.
There is only poetry.
It is everywhere,
It is in every one.
Just listen.

(Source: https://medium.com/literally-literary/there-is-only-poetry-587989f436a1, by Jenny Justice, 2019-08-05)
Experts at the End of the World | Photo by J W on Unsplash
This article contains detailed descriptions of PTSD and C-PTSD, and briefly references interpersonal violence. Please be gentle with yourself, and mindful of your spoons, before you read on.
All of us experience some kind of trauma. While it’s important to avoid the oppression olympics — that is, the act of competing with each other to determine who has suffered the most — it’s also true that not all trauma is created equal. A cancelled college graduation ceremony is different from the death of a parent. The loss of a limb is different from a lifetime of poverty. It’s not for me to say which of these experiences is more painful, or which does greater psychological damage. Only a person who has lived through multiple kinds of trauma can accurately compare them.
Those of us living with PTSD and C-PTSD carry our trauma differently. Some of us avoid all contact with triggers, even if doing so cuts us off from support. Some of us experience daily flashbacks to the worst moments of our lives. Some of us are insomniacs, and some of us are unable to get out of bed for days in a row. We struggle with emotion regulation, and we often lash out when we feel threatened. We feel threatened all the time. We are hypervigilant, constantly looking for danger, making escape plans before we enter any room. For years, I couldn’t sleep unless there was a clear path between my bed and the door. In college, that often meant piles of books and laundry would amass like snowbanks in my dorm, guarding a trail of imaginary breadcrumbs to an equally imaginary safety.
When you’re used to being afraid, the most dangerous situations can feel like home. Survivors of interpersonal violence, for example, often find themselves in abusive relationships again and again, repeating cycles they learned in the past. Our intimate knowledge of trauma can be very damaging. But it can also be a superpower.
Collective Trauma
Some events leave a global mark. Where were you when Kennedy was shot? What were you doing when the Berlin Wall fell? How did you spend the day on 9/11? People who are old enough to remember these events will almost certainly be able to tell you.
The coronavirus pandemic is another such event. Where are you right now? Who are you with? What are you doing? We will remember the spring of 2020 for the rest of our lives. Some of us may even develop PTSD. Years from now, those people will experience flashbacks to what is now the present. No matter where you are, no matter what you’re doing, this is a defining moment of your life.
There is no clear path forward right now. My laundry is folded and put away; my books are neatly stored on shelves. I can easily walk to my front door — but I don’t. There is nowhere to go. All we can do is wait.
Lessons Learned
In this global crisis, many people with PTSD and C-PTSD are reporting a sense of calm. We feel like we’ve been here before. Perhaps we were once isolated from our loved ones, or forbidden from going outside. Perhaps we were threatened with death because of disease or violent relationships. Whatever our history, we survived it long enough to get here. That means we have the skills to survive this, too.
Strangely — and for the first time, for many of us — we are the leaders now. That is not to say we’re having an easy time. This kind of isolation can be extremely triggering. Nevertheless, we are the ones who have been forced to navigate crises like this in the past. Even if our trauma happened on a much smaller scale, it made us feel like the world was ending. Now, everyone in America and across the globe faces that same feeling.
This pandemic is harmful and frightening. Within this vortex of destruction, though, there is a spark of opportunity. In our shared trauma, we can learn from each other. People who do not (yet) have PTSD can benefit from the skills that trauma survivors have been developing for years. And perhaps, through that most painful and necessary education, we can end some of the stigma against mental illness.
Trust the Process
As Octavia Butler once wrote, “God is Change.” In the post-apocalyptic world of her book, Parable of the Sower, a young woman survives the loss of her entire family by developing a new faith based on this precept. With or without following any religion, this is a time for us to have faith. But faith in what?
Acceptance and Commitment Therapy (ACT) works from the premise that by living in accordance with our unique values, we give our lives meaning. For trauma survivors, this is both essential and extremely challenging. Many of us have a hard time trusting our instincts. Because our past experiences have caused so much damage, we can’t easily tell whether we’re working toward a better life or falling into old patterns. This is true whether or not we are in any way responsible for our trauma. Even someone who experienced abuse as a small child, and was in no way to blame for the actions of the adults around them, may grow up to blame themselves for what happened. Just as we learn to mistrust other people who do us harm, we also learn to doubt ourselves.
As we live through quarantine, it is similarly hard to trust our instincts. Much as we want to, we shouldn’t seek out in-person support from family and friends; we shouldn’t commune with nature by spending hours outside. But we can still have faith in certain undeniable facts. In times of acute distress, I find it helpful to simply list true things. For example: time passes. The sun will rise tomorrow. The ground supports my feet. I have ten toes. These things are inarguable, and perhaps more importantly, they have simple emotional connotations. When I engage in this exercise, I am careful not to list any relationships with other people, even reliable ones. This is a way to ground oneself in unchanging, permanent facts.
Once you are reassured of these undeniable truths, if you’re feeling emotionally ambitious, the next step is to practice radical acceptance. It is true that we are living through a pandemic. It is true that our healthcare system is failing us. It is true that we do not know how long this quarantine will last. These facts are changeable and impermanent. They are not grounding or comforting, but they are still true. Try to acknowledge them without focusing on your emotional reactions. Accept that the world is as it is. By accepting our uncertain reality, we can begin to make peace with our grief.
As you grieve, make room for the unique freedom of isolation. In solitude, we are obligated to no one but ourselves. Without regular daily commitments, we can follow our own rhythms. If you sleep better during the daytime, you can do so without missing work. Eat when you’re hungry, not when you agreed to meet a friend for dinner. Go for a walk at 2 am without worrying about who might approach you on the street. Learn to trust your own instincts.
Of course, these exercises are much easier said than done. Work toward having faith in yourself, and understand that it may take time to learn how. Until then, have faith in the fact that you are learning. You are changing. That too, is a constant. As Butler reminds us, “The only lasting truth is Change.”
Embodying Trauma
Emotional trauma lives in the body. We are fortunately emerging from an era in which Western medicine did its best to divorce ailments of the body and mind. Although this stark separation made it possible to define more specific diagnoses for mental and physical illnesses, it has also stood in the way of healing many serious conditions, including PTSD and C-PTSD.
The paradigm is shifting. Bessel van der Kolk's immensely popular book, The Body Keeps The Score, asserts that the link between body and mind "is transforming our understanding of trauma and recovery." Prominent psychoneuroimmunologist Andy Bernay-Roman has pioneered a style of therapy in which clients revisit the physical experience of past trauma, releasing pain and tension that have been stored in the body for years.
The first step in healing any ailment is the act of acknowledging its existence. As we live through trauma, it’s essential to connect with the physical manifestations of our emotional wounds. In my own practice as a massage therapist, I work with a number of trauma survivors. At the beginning of each session, I ask: how does your body feel? I watch my clients settle into themselves a little more deeply, wiggling their shoulders and stretching their arms before answering. Our current pain does not always fit with our narrative of “what hurts.” By staying present and building body awareness, we make it possible for healing to begin.
Whether or not you are ill, the emotional trauma of this pandemic will manifest in your body. Recognize this without judgement. If you can, take a few minutes each day to focus on your own body awareness. Simply noticing how you feel can be an immensely powerful experience.
How does trauma live within your body? Some of us carry tightness in our shoulders; some of us hold our breath when we’re stressed. You may furrow your brow, clench your stomach, or curl your toes. Perhaps you find yourself constantly fatigued. As you take stock of these sensations, resist the urge to chide yourself for any tightness or pain. Just feel your feelings. The existence of your body, in any state, is one more fact you can rely on.
In this era of political and emotional upheaval, we have to learn new skills and strategies if we want to survive. This is clearly true on an international level: our economies and healthcare systems are changing quickly and drastically in response to COVID-19. These changes are also needed on a much smaller and more personal scale. Difficult though this learning process may be, it is an opportunity for creativity. As you learn more about yourself, you may also want to consider your attitude toward mental illness. Strangely, beautifully, those of us with the most painful histories are in a position to act as guides. Trauma survivors, this is the moment to step into your power. Although none of us know what the future holds, we can navigate toward it with compassion for each other and for ourselves.

(Source: https://medium.com/age-of-awareness/experts-at-the-end-of-the-world-4ef719a9f70a, by Hannah Friedman, 2020-04-03)
Cross platform Mobile Apps | Hi everyone.
Today I would like to share a correlation diagram of these platforms for developing mobile applications.
Cordova
Outsystems
React Native
Flutter
Kotlin/Native
Correlation diagram
I have summarized these applications in the following correlation diagram.
The first major split is between Native UI and Non-Native UI.

Within Non-Native UI, we can further distinguish Original UI and Web UI; that is, Non-Native UI consists of two types.
Original UI: Rendering with original UI
Web UI: Rendering on web view
Let’s see each group!
Native UI
Platforms in this group compile source code into a bundle file for each of iOS and Android, and they render with the platform's native widgets. A list view, for example, becomes:

iOS → UITableView

Android → ListView

Therefore, the performance is the same as that of a natively built app.
Original UI
Flutter is categorized to this group.
Flutter compiles the source code and generates a bundle file for each of iOS and Android.

Flutter uses the Skia graphics library as its rendering engine; Google Chrome also uses it.
Web UI
Cordova and Outsystems are categorized in this group.

Platforms in this group generate only a native WebView on iOS and Android and render the UI inside that WebView using HTML/CSS/JavaScript.

The advantage is that they use familiar web technologies.
Comparison
Comparison
Non-native platforms such as Cordova and Flutter have more third-party libraries, because they cannot use native libraries directly.

React Native, on the other hand, has fewer third-party libraries, because it can use native libraries.
Summary
Technology used in the background
Universal technology or not
Easy to get support and information or not
Necessity to develop web app or not
Depending on the aspects of your app-development situation listed above, the platform to choose will differ.
Thank you for reading.

(Source: https://medium.com/dsf-developers/cross-platform-mobile-apps-9d96c87ab88a, 2020-06-08)
Apache Hadoop: PIG | What is Apache Pig?
Apache Pig is a platform for analyzing large data sets. Pig’s language, Pig Latin, is a simple query algebra that lets you express data transformations such as merging data sets, filtering them, and applying functions to records or groups of records. Users can create their own functions to do special-purpose processing.
Pig Latin queries execute in a distributed fashion on a cluster. The current implementation compiles Pig Latin programs into MapReduce jobs and executes them using a Hadoop cluster.
Pig provides an engine for executing data flows in parallel on Hadoop. It includes a language, Pig Latin, for expressing these data flows. Pig Latin includes operators for many of the traditional data operations (join, sort, filter, etc.), as well as the ability for users to develop their own functions for reading, processing, and writing data. Pig is an Apache open source project.
Pig raises the level of abstraction for processing large datasets. With MapReduce, there is a map function and there is a reduce function, and working out how to fit your data processing into this pattern, which often requires multiple MapReduce stages, can be a challenge. With Pig the data structures are much richer, typically being multivalued and nested, and the set of transformations you can apply to the data are much more powerful — they include joins, for example, which are not for the faint of heart in MapReduce.
Pig is made up of two pieces:
The language used to express data flows, called Pig Latin. The execution environment to run Pig Latin programs. There are currently two environments: local execution in a single JVM and distributed execution on a Hadoop cluster.
Installing and Running Pig
Pig runs as a client-side application. Even if you want to run Pig on a Hadoop cluster, there is nothing extra to install on the cluster: Pig launches jobs and interacts with HDFS (or other Hadoop filesystems) from your workstation.
Installation is straightforward. Java 6 is a prerequisite (and on Windows, you will need Cygwin). Download a stable release from http://hadoop.apache.org/pig/releases.html, and unpack the tarball in a suitable place on your workstation
1. create a directory named pig in /usr/local
2. copy the pig tar file to the pig directory
3. untar the tar file
Extract the tar file using the tar command. In the tar command below, x means extract an archive file, z means filter the archive through gzip, and f specifies the filename of the archive file.
tar -xzf pig-x.y.z.tar.gz
4. set the path in bashrc
Edit the ".bashrc" file to update the environment variables for Apache Pig. We set them so that we can access Pig from any directory, without needing to go to the Pig directory to execute pig commands. Also, if any other application is looking for Pig, it will learn the path to Apache Pig from this file.
% export PIG_INSTALL=/home/tom/pig-x.y.z
% export PATH=$PATH:$PIG_INSTALL/bin
5. exec bash
6. launch grunt shell
Execution Types
Pig has two execution types or modes: local mode and Hadoop mode.
Local mode
In local mode, Pig runs in a single JVM and accesses the local filesystem. This mode is suitable only for small datasets, and when trying out Pig. Local mode does not use Hadoop. In particular, it does not use Hadoop’s local job runner; instead, Pig translates queries into a physical plan that it executes itself.
The execution type is set using the -x or -exectype option. To run in local mode, set the option to local :
% pig -x local
grunt>
This starts Grunt, the Pig interactive shell, which is discussed in more detail shortly.
Hadoop mode
In Hadoop mode, Pig translates queries into MapReduce jobs and runs them on a Hadoop cluster. The cluster may be a pseudo- or fully distributed cluster. Hadoop mode (with a fully distributed cluster) is what you use when you want to run Pig on large datasets.
To use Hadoop mode, you need to tell Pig which version of Hadoop you are using and where your cluster is running. Pig releases will work against only particular versions of Hadoop.
% pig
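Exactly how you point Pig at a cluster differs between releases, so treat the following as a sketch only. A common pattern in the Hadoop 0.20-era setups this section describes was either to export HADOOP_HOME so Pig picks up the cluster configuration, or to put the NameNode and jobtracker addresses into Pig's conf/pig.properties; the host and port below are placeholder values:

```properties
# conf/pig.properties (placeholder values)
fs.default.name=hdfs://localhost/
mapred.job.tracker=localhost:8021
```

With these set, plain `% pig` starts Grunt connected to the cluster rather than the local filesystem.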
Running Pig Programs
Script
Pig can run a script file that contains Pig commands. For example, pig script.pig runs the commands in the local file script.pig. Alternatively, for very short scripts, you can use the -e option to run a script specified as a string on the command line.
Grunt
Grunt is an interactive shell for running Pig commands. Grunt is started when no file is specified for Pig to run, and the -e option is not used. It is also possible to run Pig scripts from within Grunt using run and exec .
Embedded
You can run Pig programs from Java, much like you can use JDBC to run SQL
programs from Java. There are more details on the Pig wiki at http://wiki.apache.org/pig/EmbeddedPig.
Grunt
Grunt has line-editing facilities like those found in GNU Readline (used in the bash shell and many other command-line applications). For instance, the Ctrl-E key combination will move the cursor to the end of the line. Grunt remembers command history, too, and you can recall lines in the history buffer using Ctrl-P or Ctrl-N (for previous and next), or, equivalently, the up or down cursor keys. Another handy feature is Grunt’s completion mechanism, which will try to complete Pig Latin keywords and functions when you press the Tab key. For example, consider the following incomplete line
grunt> a = foreach b ge
If you press the Tab key at this point, ge will expand to generate , a Pig Latin keyword:
grunt> a = foreach b generate
You can get a list of commands using the help command. When you’ve finished your Grunt session, you can exit with the quit command.
Pig Latin, a Parallel Dataflow Language
Pig Latin is a dataflow language. This means it allows users to describe
how data from one or more inputs should be read, processed, and then stored to one or more outputs in parallel. These data flows can be simple linear flows. They can also be complex workflows that include points where multiple inputs are joined, and where data is split into multiple streams to be processed by different operators.
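As a concrete illustration, here is a small Pig Latin flow that chains several of those operators. The file names and fields are invented for the example:

```pig
-- load two (hypothetical) tab-delimited inputs
users  = LOAD 'users.txt'  AS (name:chararray, age:int);
clicks = LOAD 'clicks.txt' AS (name:chararray, url:chararray);

adults  = FILTER users BY age >= 18;            -- filter
joined  = JOIN adults BY name, clicks BY name;  -- join
grouped = GROUP joined BY url;                  -- group
counts  = FOREACH grouped GENERATE group AS url, COUNT(joined) AS n;
STORE counts INTO 'url_counts';
```

Each statement simply names a step in the flow; nothing actually runs until STORE (or DUMP) forces the pipeline to execute.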
Comparing query and dataflow languages
SQL is a query language. Its focus is to allow users to form queries.
It allows users to describe what question they want to be answered, but not how they want it answered.
In Pig Latin, on the other hand, the user describes exactly how to
process the input data. SQL is oriented around answering one question. When users want
to do several data operations together, they must either write separate queries, storing the intermediate data into temporary tables, or write it in one query using subqueries inside that query to do the earlier steps of the processing. However, many SQL users find subqueries confusing and difficult to form properly. Also, using sub queries creates an inside-out design where the first step in the data pipeline is the innermost query.
Pig, however, is designed with a long series of data operations in mind, so there is no need to write the data pipeline as an inverted set of subqueries or to worry about storing data in temporary tables. SQL is designed for the RDBMS environment, where data is normalized and schemas and proper constraints are enforced (that is, there are no nulls in places they do not belong, etc.).
Pig is designed for the Hadoop data-processing environment, where
schemas are sometimes unknown or inconsistent. Data may not be properly constrained, and it is rarely normalized. As a result of these differences, Pig does not require data to be loaded into tables first. It can operate on data as soon as it is copied into HDFS. SQL is the English of data processing. It has the nice property that everyone and every tool knows it, which means the barrier to adoption is very low.
Our goal is to make Pig Latin the native language of parallel data processing systems such as Hadoop. It may take some learning, but it
will allow users to utilize the power of Hadoop much more fully.
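To see the stylistic difference, compare the same (invented) question in both languages. The SQL is a single declarative query; the Pig Latin spells out each step of the pipeline in order:

```pig
-- SQL:  SELECT category, AVG(price)
--       FROM products WHERE price > 0 GROUP BY category;
products = LOAD 'products.txt' AS (category:chararray, price:double);
priced   = FILTER products BY price > 0;
by_cat   = GROUP priced BY category;
avgs     = FOREACH by_cat GENERATE group AS category, AVG(priced.price);
DUMP avgs;
```

Because every intermediate step has a name, extending the pipeline means adding another line rather than nesting another subquery.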
Pig’s History
Pig started out as a research project in Yahoo! Research, where Yahoo!
scientists designed it and produced an initial implementation. As explained in a paper presented at SIGMOD in 2008, the researchers felt that the MapReduce paradigm presented by Hadoop “is too low-level and rigid, and leads to a great deal of custom user code that is hard to maintain and reuse.” At the same time, they observed that many MapReduce users were not comfortable with declarative languages such as SQL.
Thus they set out to produce “a new language called Pig Latin that we have designed to fit in a sweet spot between the declarative style of SQL, and the low-level, procedural style of MapReduce.”
Yahoo! Hadoop users started to adopt Pig. So, a team of development
engineers was assembled to take the research prototype and build it into a
production-quality product. About this same time, in fall 2007, Pig was open-sourced via the Apache Incubator. The first Pig release came a year later in September 2008. Later that same year, Pig graduated from the Incubator and became a subproject of Apache Hadoop.
Why Is It Called Pig?
People often want to know whether Pig is an acronym. It is not. The
story goes that the researchers working on the project initially referred
to it simply as “the language.” Eventually, they needed to call it something. Off the top of his head, one researcher suggested Pig, and the name stuck. It is quirky yet memorable and easy to spell. While some have hinted that the
name sounds coy or silly, it has provided us with an entertaining nomenclature, such as Pig Latin for a language, Grunt for a shell, and Piggybank for a CPAN-like shared repository.
In the next article, we will discuss Pig commands.
How to Study Effectively: Expert Advice to Help You Ace Any Exam or Test | Learning how to study effectively can feel like some kind of an acquired skill. There’s nothing worse than sitting in front of your study notes, trying to figure out how this will possibly fit in your brain, and losing all motivation to even begin in the first place.
When it comes time for exam preparation, or for that end-of-term test, give yourself the best fighting chance at getting great grades and acing your class. That begins with the right study skills and habits. Luckily, we’ve got a lot of experts on our team who have the best study tips and advice, and we’re going to walk you through every step to show you how to study better, smarter, and more effectively. With our help, you’ll walk into that exam room like you own the place.
Don’t worry. It may seem like a lot. But with the right learning and preparation, you can handle even the most unexpected questions.
Ready to learn the secrets of how to study effectively and ace your next test or exam? Let’s get started.
Get Your Head in the Study Zone
First and foremost, you need to be mentally prepared. You can’t be constantly thinking about what happened on last night’s episode of Riverdale while trying to pack your brain with your history notes. That just doesn’t work. Forget about Archie and Jughead for a little while. They’ll be there when you’re done studying.
Start each study session by setting goals. What do you hope to accomplish? Do you need to study for one cumulative exam, or are you trying to nail down particular subjects at a time? This will help you figure out how you’re going to use your study time and stay on track to complete your list. It’s also a good way to determine which of your study goals are your priorities, and which things to focus on first.
Next, it’s time to get focused. Most importantly, find your motivation. Your goals should be the first step. The next is preparing to dive in.
Getting Motivated: How to Study Effectively When You Really Don’t Want to
Sometimes it’s hard to get motivated to study. Realistically, no one actually likes to study. In fact, study time is about as appealing as getting food poisoning. But we all have to do it, and it’s just a part of the learning process.
That being said, we’re all motivated by something. Getting good grades is usually the key motivator when it comes to honing your study skills. That’s a good enough reason for most of us to roll out of bed and open up the books.
To help yourself stay on track, develop good study habits. Some of our most effective expert tips include getting enough sleep and keeping your phone on airplane mode during every study session. When you’re not really motivated to study, your phone is a major distraction. It’s easy to just check that one thing one more time before you start. No matter what you think, it won’t just be one thing to check. You’ll end up down a YouTube rabbit hole like the best of us. Just turn it off.
Another great way to develop good study habits is to take breaks. Stop and treat yourself at appropriate times, such as between subjects or when you’ve completed an assignment. Giving yourself a reward every so often can help keep you focused on your study material. However, make sure this isn’t always a sugary or fatty treat. A few candies here and there are fine, but too much sugar or fat and you’ll interrupt your brain’s learning process. If snacking on carrots and hummus while studying doesn’t really appeal to you, try giving yourself a reward that isn’t food related, like 15 minutes to play Candy Crush or sending a few Snapchats to your friends (but turn your phone back off when time is up).
Find The Right Study Setting
One of the most important steps in learning how to study effectively is finding a study spot where you can focus and avoid distractions. Just like a good movie, setting is key. According to ThoughtCo, the top nine places to study include the library, bookstores, coffee shops, parks, empty classrooms, your bedroom, and community centers.
Your study spot has to be something that works for you. Just because your friend likes to go to the library to study doesn’t mean that will work for you, too. Consider where you’re going to be the least distracted, and where you feel the most at ease. Maybe this is your bedroom, or maybe it’s a cozy spot in the student center. Part of learning to study smart is learning where you are the most focused.
If you’re a people watcher, don’t study in public spaces. There is just way too much going on to really get into your work. The same thing goes if you’re a big snacker. Studying at a coffee shop can be a great, peaceful way to get work done, but if you’re just going to keep getting up to buy cookies and chugging caffeine the whole time, maybe it isn’t the best study spot. Likewise, if you have noisy roommates or your friends are constantly popping by, your bedroom might not work, either.
Wherever your study spot is, comfort is key. If you’re not comfortable, you’re just going to start shifting around and getting distracted trying to find a better position. Once you’ve found that study spot, it’s time to get organized and form some great study habits.
Organize Your Study Space
Once you have your setting in mind, it’s time to get organized. Staying organized is incredibly beneficial for your mind and, more importantly, your productivity. Your workspace is ultimately a reflection of your study habits. The more organized your space is, the less distractions there are, which means more study time and less of that other stuff. You know, the stuff you do to kill time and procrastinate.
The first thing you need is enough space on your desk to lay out your textbooks, assignment materials, and anything else that might be relevant. If you don’t have space, you’re going to end up spending more time figuring out where things will stay than actually studying.
Make sure you have all of your materials on hand as well. If you start looking for things as you go, you’ll increase your chances of being distracted, which makes it harder to get back on track when you go back to your study notes. To prepare for your study session, gather all of the materials you think you’ll need ahead of time, and put it all together in a container. This way, you can reach for things as you need them without digging around.
Time Management: The Downfall of Many College Students
Okay, so you’ve organized your desk or work space and you’re ready for the next step in learning how to study effectively. That would be time management. Even just hearing those words is enough to make people shudder, but the truth is this is a necessary study skill that you will use for the rest of your life. You may as well nail it down now, while you’re learning in school.
Use a Physical Calendar or Wall Planner For Your Study Schedule
You may think it’s easier to keep track of everything on your phone or iCal, but having everything laid out in a physical calendar you can hang on your wall can help you visualize and plan. It’s really easy to miss tasks and reminders when you’re relying only on your phone. A wall or desk planner is a useful tool for creating a study schedule that gives you enough time for breaks and a social life in between. This tells you exactly when your study time is, and helps you plan out everything you need to do.
Guess what? Those rewards that we were talking about before can be added to your study schedule too, so you have a goal you know you’re working towards. But don’t let your rewards take over the rest of your planning.
Tips and Tricks to Help With Time Management
Here are some helpful time management tips and tricks that you may want to use:
● Don’t try to multitask. This negatively affects your ability to focus on the task at hand.
● Avoid leaving everything to the last minute.
● Set study time limits for each task and use a timer to stay on track.
● Get enough sleep. Your body needs a good amount of sleep to be able to concentrate and work to its fullest potential.
● Relax! That’s what breaks are for.
● Don’t overcommit yourself and let your study schedule suffer. If you don’t have time for something, you don’t have time for it. Just say no.
● Make a checklist. This tells you what you still have left to do, and you’ll feel a sense of accomplishment and pride when you check things off.
Study Smart: Lay it All Out
Lay out each individual task for each exam or assignment. Don’t just write “study for history paper.” Write down each book you need to read, which notes you have to go over, and when the due dates are. Try to estimate how much time it takes to do each thing so you can effectively include it in your study schedule. If you have certain exams or class subjects that you know will need more study time than others, this is how you can determine where to prioritize.
When you visualize how much work you actually have to do, this can help with time management. It’s one thing to know which tests you have and when your exams are, but it’s another thing to actually see it and realize how much study time you’ll need to set aside for each test.
An important thing to note is don’t leave your biggest goal for the last thing in your study schedule or your checklist. You may want to put a bunch of little things first so you have more to check off and can feel more accomplished, but if you leave those big things until the end you’re more likely to feel overwhelmed than anything.
Mnemonic Devices
Mnemonic devices are a common way for people to study material and remember information. They can come in a variety of forms, such as acronyms, rhymes, and songs. In fact, you’ve probably used them at least once before. Have you ever stopped to use Roy G. Biv to list out the colors of the rainbow? That is a mnemonic device. What about SOHCAHTOA from high school math class? That’s another common one.
Make up your own mnemonic device to help you remember important concepts or the order of things that you know are going to be on your exam. Poems and songs are most effective because they trigger the acoustic encoding in our brains and last longer in our memories. That’s why you’ve always got a song stuck in your head. Let’s try and get the right song stuck in there.
Using Study Groups
Another way to learn how to study effectively is to use study groups, but only if they work for you. Study groups aren’t the right fit for everyone. If you’re an easily distracted social butterfly and would be tempted to chat, you could end up breaking everyone else’s study habits, too. Some people prefer to study alone, and if that is what makes you study more effectively, then you should stick to that.
The Advantages of Studying Together
One of the best aspects of a study group is having other people there who are in the same situation as you. Not only do those other people help you stay in the study zone, but you can also use them as tools. Other people can help you stay focused because you’re all accountable together. They can also help you with your study notes and fill in any gaps you may have missed from class.
Try quizzing each other. Quizzing one another and taking practice tests are great ways to really tell if you’ve learned the material or are ready to actually take the exam. You could have a family member or friend quiz you, but it’s easier when it comes from someone in your actual class who is also trying to learn the material.
How to Find A Study Group
Creating or joining a study group isn’t hard when you approach it the right way. Ask the people in your class if they’d like to study with you. This could be a brief chit chat as everyone is packing up to leave, or settling in before the professor arrives. You could also start a Facebook group and find people in your class who you may not speak to otherwise, and then use Facebook to coordinate meeting times. Check out our blog on how to organize a study group for more tips and pointers.
If you want to know if there are already study groups that you can join, just ask around. Talk to someone at your student center. Sometimes they have resources you can use to make connections and join in on group study sessions. Ask classmates what they do to study and see if they already have a group they’d be willing to let you join. Speak up and you will find it.
What the Science Says
There are some scientific methods to studying that have been shown through academic research to help you learn how to study effectively and improve test scores. Here are some of the most effective and well known methods that you can try.
The Feynman Notebook Method
If you’ve ever watched the Big Bang Theory, you know who Richard Feynman is. He is a physics god — one of the most important physicists in history and a Nobel Prize winner. Well, Feynman developed a technique for learning difficult or confusing concepts that can help your study skills.
The Feynman Technique, also known as the Feynman Notebook Method, begins with a blank notebook. On the title page, he would write, “Notebook of Things I Don’t Know About.” Then, at the top of each page, he would write a topic he wasn’t familiar with. Next, he’d write everything he knew about that topic, and then would continue to add to the page in his own words when he learned more about it. The point of this is to explain complex material in ways that make sense to you. Then, you can go back during your study session and review the areas where you know you’re having trouble.
The Pomodoro Technique
The Pomodoro Technique, developed by Francesco Cirillo, is a time management method where you use a timer to break down study sessions into intervals. Each interval is called a pomodoro, and there is a small break between each one. Think of it like reps of weights at the gym. Four pomodoros generally make up one study session for one specific task. You can take a bigger break after four pomodoros if you’re ready to move on to the next task.
At Homework Help Global, we strive to provide as much help as we can to students who need it most. To learn more about the Pomodoro Technique, check out episode 13 of The Homework Help Show. In this episode, our host, Cath Anne, digs into developing good study habits using this method for productivity.
The Leitner System
Named after Sebastian Leitner, who developed the system, The Leitner System is essentially the use of flashcards for studying. This is a good technique to help your study skills because it boosts memorization and cognitive function. Many of us are familiar with using flashcards, but with the Leitner System, the goal is to keep the flashcards simple and straightforward so it’s easy to focus on certain concepts.
Active Recall
Active Recall is simple, but effective. It’s all about stimulating your brain while you’re reading or studying the material to promote long-term memory. After every section or chapter you read, close your book and review what you just read. Repeat to yourself some of the important information you just read. This way, instead of just reading something and forgetting it, you’re now actively learning it and will actually know it when it comes time to take your exam.
Not really sure what this means? That’s what we’re here for. To learn more about Active Recall, watch episode 40 of The Homework Help Show, where Cath Anne puts it in context and tells you how to incorporate it into your study habits.
Acing Exam Preparation
Okay, so you’ve gotten the gist of how to study effectively and now the big day is right around the corner. When it comes time to get ready for your exam, you might want to consider some extra preparation tips to be as prepared as possible.
● Plan out exam day. The last thing you want to do is be rushing around and building up your anxiety about getting there on time.
● Bring water with you and stay hydrated.
● Don’t wake up and roll out of bed for a morning exam. Wake up early to give your brain time to kick into high gear.
● Collect all of the materials you need, such as pens or pencils, and bring extras just in case.
● Eat a healthy breakfast.
● Don’t forget your student card and ID.
● Slow down and breathe. You’ve got this!
Cramming and Last-Minute Studying Don’t Work
You shouldn’t leave everything to the last minute, but sometimes things come up and it happens. In fact, some people do believe that they study better when they cram. If you’re starting to run out of study time, you may feel like you need to have some tricks up your sleeve in preparation, but it’s not going to work the way you want it to.
Learning how to study effectively in crunch time is nearly impossible. You need to give your brain time to process the information. If you run out of study time and are in a rush to get everything crammed in, often you’ll end up focusing on how you’re running out of time and forgetting what you crammed. In fact, cramming in a study session can actually cause more harm than good. Use your study schedule and plan out your time management and you won’t have to cram in the first place.
Helpful (And Free) Study Apps
Where would we be without smartphone apps? Since you can find an app for almost anything, it’s no surprise there are plenty of apps for Android and iPhone that can help you learn how to study better and improve your study habits. Here are some of the top rated apps.
Evernote: This is a helpful app for taking notes, as it lets you add your own attachments, audio clips, and even checklists to your class notes. The free version lets you connect two different devices, so you can take notes on your laptop in class and then see them later on your phone.
Quizlet For Mobile: The goal of Quizlet is to help you study smart. With the free version, you can create interactive quizzes and flashcards that can help you with learning new subjects or nailing down your material.
MyStudyLife: Essentially, this is a calendar app designed specifically for students to help you keep track of assignments, due dates, classes, and other projects. It can work in conjunction with your wall planner so you can check your schedule while you’re on the go.
SimpleMind: A good study tip is to create a mind map to go over what you already know and what topics you need to spend more time on. The SimpleMind app helps you create those mind maps and see them in a nice, visually appealing way.
If you want to learn about more apps that can help make your overall life easier as a college student outside of learning how to study effectively, check out our favorite apps for college students. There’s always an app for everyone.
Exercising and Your Brain
Boost your brainpower and prep for studying by exercising beforehand. Getting some exercise pumps more oxygen to your brain, which can in turn stimulate it and help it function. This can also help your brain release more hormones, which stimulate cell growth. When it comes to learning new things or remembering study topics, that’s your goal.
Just don’t try to memorize your material while you’re working out. Many students try to cram in a study session at the gym by bringing their notes and reading them while on the treadmill or elliptical. Give yourself a break and allow your mind to focus on one thing at a time, and sweat it out. Hit the books when you’re done.
As a bonus, exercise and your mental health are directly connected. Exercise has been shown to help people cope with the symptoms of depression and anxiety, among other conditions. So when it gets close to exam time and you’re feeling anxious or stressed, hit the treadmill for a little bit.
Stress Relief in and Out of Your Study Sessions
Exam preparation and tough study sessions can be stressful. The problem with stress is that it can have negative effects on the brain and can actually impact your study success. It’s okay to feel stressed to some degree, because that’s a part of life, especially during exam time. However, when you are experiencing significantly higher levels of stress, this can be harmful to your physical and mental health. You can lose your ability to concentrate, become depressed, and even suffer headaches or stomach pains.
Try diffusing some essential oils or lighting incense (carefully) during your study session. Of course, you won’t be able to do this if you’re studying in a common space. But this can help you concentrate and de-stress. Taking regular breaks is also a good way to relieve stress.
It’s important to make sure you are de-stressing in other ways, not just during study time. In fact, studying can also lead to unwanted stress for many students. Check out some of these easy ways to de-stress in college for some helpful ideas to try.
A Few Final Study Tips Before You Get Started
Here are some final study tips that can be helpful to remember as you try to make the most out of your study time.
● Stay hydrated while you’re studying.
● Eat brain food — foods that are high in protein, vitamins, and other nutrients help increase concentration and memory.
● Read about our favorite secrets for acing multiple choice tests.
● Don’t listen to music during your study session. Studies have shown that listening to music while studying can lead to poor performance. If you must have music, choose classical songs without any lyrics.
● Switch up your study space if you start to get bored.
Figured Out How to Study Effectively, But Still Need Some Extra Assignment Help? We’re Here For You
Sometimes all of your assignments, homework, and study sessions can pile up and become too overwhelming. That’s okay — we all need a little help sometimes. That’s what Homework Help Global is here for. We’re here to help you learn how to study effectively in any way we can. But when you’re out of ideas and motivation, that’s our time to shine.
Homework Help Global is a network of highly educated, experienced academic writers who are on hand to help you with your schoolwork. Whatever the subject is, we have someone on our team ready to provide you with a custom paper, free from plagiarism and formatted to your specific requirements or guidelines. Order custom written essays and assignments, book reports, and even online exams.
Whether you’re too busy with study sessions to write your assignments, or you have so much on your plate you haven’t even thought about studying yet, get in touch with us for a quote and to order now. We will help you make your life a lot easier and less stressful.
It’s Okay to Still Not Be Okay | If you’re like me, you probably feel a bit guilty about having done “nothing” but watch election results all of last week, about not being “productive” during that time — never mind that the fate of your world hung in the balance. You might also be experiencing an emotional crash following the joyous high of Saturday morning. And your furious news-checking habit is probably still in full force, constantly igniting your sympathetic nervous system with announcements of Trump’s many lawsuits and firings.
Despite all of the pressure and stress you are still carrying, you might also be berating yourself for not being all better. Why can’t I get back to normal? you wonder. Why can’t I set down my phone, put on a smile, and get back to work?
If that sounds familiar, I’m here with a message: You deserve to be gentle with yourself. It isn’t “all over,” and you don’t have to feel “all better.” It’s not a sign of weakness to fully inhabit the present moment, and still be haunted by the horrors that came before it. It’s okay to not be ready to disappear into work. In fact, such awareness and sensitivity is normal, healthy, and deeply human.
All this year, you have endured significant uncertainty and stress. You’ve watched as hundreds of thousands of Americans died due to the negligence of a president who hasn’t attended a Covid briefing since the summer. You’re hearing mainstream Republican politicians question the election results despite a lack of evidence, and you’re watching Trump erect an apparatus of yes-men and sycophants around himself. This after four years of political turmoil and terror on a level we’ve never seen.
Even when Biden does take office in January, you’ll be living in a world beset with police violence, ever-growing income inequality, a climate that continues to destabilize, and tropical storms that rip through the globe. You’ll still have to worry about making rent in an economy ravaged by business closures and furloughs. The damage done by Trump and his administration will not be easily remedied. The wicked problems that existed long before his tenure will continue to be thorny and far-reaching. Even with a Covid-19 vaccine on the horizon, you can still expect months of social distancing measures and harrowing hospitalization numbers.
No wonder it’s hard for you to focus on writing quarterly reports. It’s no surprise doing laundry seems empty and pointless. My head is not clear right now; I’m sure yours isn’t either. How could it be? We need more time to heal. We need more reasons to have hope. Only then can we slowly, messily return to anything resembling our old lives.
When a person experiences trauma, they don’t always feel the full force of its effects until after they’ve gotten safe. Post-traumatic symptoms can be delayed; it’s when you escape an unsafe situation that you are finally free to process what happened. That is often when the hypervigilance, panic attacks, and nightmares start to come out in full force. All your old coping mechanisms start to break down as the energy you expended trying to hold yourself together finally wears out.
When Trump was first elected, I spent more than a year calling my political representatives every single day. I devoted an hour every day to that task, not just calling my own reps, but also calling on behalf of others who couldn’t use the phone. Friends complimented me for doing a good deed, but I was just trying to burn through my nervous energy. I needed something to do, to keep me from confronting my despair. After hundreds of calls, very few of which seemed to ever have an impact on any politicians’ stances, I became a weepy, apathetic mess, and abandoned the project entirely.
If you’ve spent the past few months (or years) text banking, donating to political campaigns, or just refreshing Twitter and FiveThirtyEight obsessively, you might be a weepy, emotionally drained mess right now, too. Even in the face of “good” news, you might find you have no focus, no will to go on. One of the key symptoms of trauma, after all, is a sense of a foreshortened future. It’s hard to plan or work toward any goals when you’ve gone years without having any hope.
It’s okay if you still feel like garbage. You have been through a lot. And you’ve probably had to grit your teeth, put on a chipper, “professional” smile, and work tirelessly through most of it. But now, you can consider letting down that facade. We are entering a new phase, no less complex than the one that came before, but perhaps a bit more hopeful. You can let yourself grieve all that was lost. Mourning is an important part of the healing process. It allows you to accept the world as it truly is, and to face the pain you’ve been quietly enduring all this time.
Life will get better. But it’s gonna take a few more fits and starts. Your jaw won’t stop clenching for a long while after that. This has been a hell of a four years. And it’s not over yet. So it’s okay to still not feel okay.
Deep Deterministic Policy Gradient | Reinforce Algorithm
Please note, in the below analyses, the discount factor γ is assumed to be 1 for simplicity. But all the analyses can be easily extended to cases where γ is not 1.
The basic objective in all of Reinforcement Learning (RL) is to maximize the expected total utility Uθ, which is defined as follows [1]:
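In standard notation, with a trajectory τ = (s₀, a₀, s₁, a₁, …) generated by following the policy and R(τ) denoting its total reward, this objective can be written as:

```latex
U_{\theta} \;=\; \mathbb{E}_{\tau \sim P_{\theta}(\tau)}\!\left[ R(\tau) \right],
\qquad R(\tau) \;=\; \sum_{t=0}^{T} r(s_t, a_t)
```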
After doing some math, it can be shown that Uθ is equal to the expected value of Q(s₀, a₀). If the initial state distribution is uniform, this means the goal in RL is to find a policy that maximizes the q-values of all possible states.
Using the definition of expectation, the above equation 1a can be re-written as:
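Expanding the expectation as a sum over all possible trajectories gives, in standard form:

```latex
U_{\theta} \;=\; \sum_{\tau} P_{\theta}(\tau)\, R(\tau)
```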
Using the policy gradient method, we can maximize Uθ by first computing its gradient with respect to θ, which (using the Reinforce log-likelihood trick [3]) can be derived to be:
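Under the log-likelihood trick, the environment dynamics drop out of the gradient and only the policy terms remain; in standard form:

```latex
\nabla_{\theta} U_{\theta}
\;=\; \mathbb{E}_{\tau \sim P_{\theta}(\tau)}\!\left[
\left( \sum_{t=0}^{T} \nabla_{\theta} \log P_{\theta}(a_t \mid s_t) \right) R(\tau)
\right]
```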
One approach to improving the expected total reward is to randomly perturb the current θ: if the perturbation results in a better total reward, we keep it; otherwise we discard it, and we keep repeating this process. This method is called the random shooting method. There are other, more sophisticated methods in the same vein, such as the Cross Entropy Method. All of these methods fall under the domain of stochastic optimization algorithms. However, while they are very simple to implement, they are not efficient and do not scale well to high-dimensional spaces. A more efficient approach is to change θ in the direction of the gradient using Stochastic Gradient Ascent as follows:
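The stochastic gradient ascent update takes the standard form, with learning rate α:

```latex
\theta \;\leftarrow\; \theta + \alpha\, \nabla_{\theta} U_{\theta}
```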
A basic policy gradient algorithm making use of the above gradient is known as the Reinforce algorithm, and here is how it works:
A Basic Reinforce Algorithm:
Start with a random vector θ and repeat the following 3 steps until convergence:
1. Use the policy Pθ(at|st) to collect m trajectories {τ1, τ2, …, τm}, where each trajectory is as defined above.
2. Use these trajectories to compute the Monte-Carlo estimator of the gradient as follows:
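With R(τᵢ) denoting the total reward of the i-th collected trajectory, the Monte-Carlo estimator is, in standard form:

```latex
\hat{g} \;=\; \frac{1}{m} \sum_{i=1}^{m}
\left( \sum_{t=0}^{T} \nabla_{\theta} \log P_{\theta}\!\left(a_t^{(i)} \mid s_t^{(i)}\right) \right) R(\tau_i)
```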
Note that the reason why the above estimator is valid is because the trajectories are generated by following the policy being learned, i.e. Pθ(τ) — i.e. it is an on-policy algorithm. Another way to say it is that we sample each of the trajectories in {τ1, τ2, …, τm} from the probability distribution Pθ(τ).
3. Update the weights/parameters of the policy network using the above estimator of the gradient:
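That is, with learning rate α:

```latex
\theta \;\leftarrow\; \theta + \alpha\, \hat{g}
```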
The intuition behind the reinforce algorithm is that if the total reward is positive, then all the actions taken in that trajectory are reinforced whereas if the total reward is negative, then all the actions taken in the trajectory are inhibited. Moreover, to be computationally efficient, typically m is set to 1.
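The m = 1 case described above can be sketched on a toy problem. The following is a minimal, self-contained illustration (not the implementation from any particular library) of Reinforce with a softmax policy on a two-armed bandit, where arm 0 always pays reward 1 and arm 1 pays 0; the environment, step count, and learning rate are assumptions chosen for the example:

```python
import math
import random

def softmax(logits):
    """Convert logits to a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_bandit(steps=2000, lr=0.1, seed=0):
    """Reinforce with m = 1 on a 2-armed bandit: arm 0 pays 1, arm 1 pays 0.

    Each 'trajectory' is a single action, so the gradient estimate is
    grad_theta log pi(a) * R, exactly the m = 1 estimator described above.
    """
    rng = random.Random(seed)
    theta = [0.0, 0.0]  # policy parameters (logits)
    for _ in range(steps):
        probs = softmax(theta)
        a = 0 if rng.random() < probs[0] else 1   # sample an action from the policy
        r = 1.0 if a == 0 else 0.0                # reward: only arm 0 pays
        for i in range(2):
            # gradient of log softmax w.r.t. theta[i]: one_hot(a) - probs[i]
            grad = (1.0 if i == a else 0.0) - probs[i]
            theta[i] += lr * grad * r             # reinforce in proportion to reward
    return softmax(theta)

probs = reinforce_bandit()
```

Because the reward zeroes out the update whenever arm 1 is pulled, only pulls of arm 0 are reinforced, and the policy concentrates its probability there. In a real episodic task the update would instead sum the log-probability gradients over all steps of the trajectory before scaling by the return.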
While better than stochastic optimization methods, the Reinforce algorithm suffers from a few drawbacks:
1. The gradient estimator is pretty noisy, especially for the case m=1, because a single trajectory may not be representative of the policy.
2. There is no clear credit assignment. A trajectory may contain many good and bad actions, and whether those actions are reinforced or not depends only on the total reward achieved starting from the initial state.
3. It is very sensitive to the absolute value of the rewards. For example, adding a fixed constant to all the rewards can drastically change the behavior of the algorithm. Such a trivial transformation should have no effect on the optimal policy.
By the definition of the gradient, ∇θUθ points in the direction of the maximum change in Uθ. However, at a fundamental level, the above drawbacks of Reinforce algorithm are due to the fact that the Monte-Carlo estimator of ∇θUθ (i.e. ĝ) has high variance. If we can reduce its variance, then our estimate of gradient (ĝ) will be closer to the true gradient ∇θUθ.
While the Monte-Carlo estimator of the gradient (ĝ) is unbiased, it exhibits high variance. As discussed below, there are a few ways of reducing variance without introducing bias: 1) using causality and 2) using a baseline.
Actor-Critic Algorithm
One way to reduce variance is by taking advantage of causality: ĝ updates all the actions in a trajectory based upon the total rewards and not the rewards to go. In effect, each action is credited for rewards received before it was taken, as if future actions could affect past rewards, which is not possible in our causal Universe. So we can make the gradient estimator more realistic by using the rewards to go, as shown in the below equation.
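In symbols (writing r_{t'} for the per-step rewards and the superscript (i) for the i-th sampled trajectory, notation assumed for this sketch), the rewards-to-go estimator is:

```latex
\hat{g} \;=\; \frac{1}{m}\sum_{i=1}^{m}\sum_{t=0}^{T-1}
\nabla_\theta \log P_\theta\big(a_t^{(i)} \mid s_t^{(i)}\big)
\sum_{t'=t}^{T-1} r_{t'}^{(i)}
```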
Note that using the rewards to go instead of the total rewards still results in an unbiased estimator of ∇θUθ because causality is handled in the expectation in Equation 3 using Pθ(τ). Moreover, doing so reduces variance because the rewards to go expression has fewer terms (and thus lower uncertainty) than the total rewards expression.
An important aside to note is that the rewards to go is really an estimate of the q-value of (st, at). This is because the q-value is defined as follows:
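In symbols, with the expectation taken over continuations of the trajectory that follow Pθ after (st, at):

```latex
Q^{P_\theta}(s_t, a_t) \;=\; \mathbb{E}_{P_\theta}\Big[\sum_{t'=t}^{T-1} r_{t'} \,\Big|\, s_t, a_t\Big]
```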
And so, if the trajectory τ is sampled from Pθ(τ), then the single-sample Monte-Carlo estimate of QPθ(st, at) is just:
As shown above, instead of using the Monte-Carlo estimator of the rewards to go as in Equation 7, we can use the Q-value estimator of the rewards to go. As a result, Equation 7 can be re-written as:
If Qhat Pθ(st, at) is modeled using a neural network (parameterized by w), then we get:
Note that because the state-action space can be very high dimensional, it quickly runs into Bellman’s curse of dimensionality; and thus, in most practical situations with complex state-transition dynamics, Qhat Pθ(st, at) is modeled using a neural network based function approximator.
Then Equation 10 can be re-written as:
Whereby, Pθ(at | st) is the actor network that is parameterized by θ and Qhat Pθ(st, at) is the critic network that is parameterized by w. This is essentially what is known as the actor-critic algorithm.
For any visited state-action pair (s,a), the actor network is updated using Equation 6 (utilizing ĝ from Equation 12), and the critic network is typically updated using Temporal-Difference learning (due to its lower variance than Monte-Carlo learning) using the following update equation:
Whereby the weight vector w is updated to reduce the loss L(w), which is defined as:
and using Q-learning (so that the critic is based off of an off-policy algorithm):
and so
whereby
This is the basic actor-critic algorithm. While there are many variants of it, as we will see below, this is its core.
Advantage Actor-Critic Algorithm
In addition to using the rewards to go (due to causality), another approach to minimizing the variance of ĝ is by subtracting out a baseline b that is not dependent on θ or action a — and this combined term is known as the Advantage function. It can be mathematically proved that such a transformation is not only unbiased, but it reduces variance. An intuitive explanation for why it reduces variance is because the term multiplying ∇θlog(Pθ(a|s)) has smaller magnitude, which essentially reduces the variance of the overall expression.
There are many choices for the baseline b, and in theory, the optimal value of b can also be computed. However, in the interest of simplicity and to be intuitive, a commonly used baseline is the q-value averaged over all the actions, i.e. the state-value.
The Advantage function is then written as follows:
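In symbols, with the state-value baseline just described:

```latex
A^{P_\theta}(s_t, a_t) \;=\; Q^{P_\theta}(s_t, a_t) - V^{P_\theta}(s_t),
\qquad
V^{P_\theta}(s_t) \;=\; \mathbb{E}_{a \sim P_\theta(\cdot \mid s_t)}\big[Q^{P_\theta}(s_t, a)\big]
```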
The basic idea with using this advantage function is that actions with a higher q-value than the average (i.e. the state-value) are reinforced, whereas other actions are inhibited. This makes a lot more intuitive sense than the gradient equation used in the original Reinforce algorithm. And so it’s not totally surprising that, mathematically, it results in lower variance. Moreover, now the gradient is no longer dependent on the absolute value of the rewards.
One problem with the above Equation is that, in practice, it is very difficult to compute the above expectation — especially for continuous actions or high dimensional action space. Hence, the state-value function is modeled with a separate neural network that is parameterized by wᵥ as follows:
The advantage function now becomes:
The issue with this advantage function is that it requires two separate neural networks. With some clever re-ordering, we can re-write the Advantage function using a single neural network. However, in order to do so, let us first re-visit the above analysis. Basically, the ideal Advantage function we would like to have is:
As defined in Equation 8 above, the state-action value can be further simplified in terms of the state-value function as:
The single-sample Monte-Carlo estimate of QPθ(st, at) as defined in the Equation above is:
And so now we just need to represent the state-value function using a neural network parameterized by wᵥ as follows:
And thus the Advantage function can now be represented using a single neural network parameterized with wᵥ. Note that with the above equation, the Advantage function is really just the one-step TD error (i.e. the TD(0) error). Additionally, it is also possible to represent it using the TD(λ) error.
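As a sketch, with γ denoting the discount factor (an assumption, since the text above does not introduce it explicitly):

```latex
\hat{A}(s_t, a_t) \;=\; r_t + \gamma\, \hat{V}_{w_v}(s_{t+1}) - \hat{V}_{w_v}(s_t)
```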
The gradient equation for Advantage Actor Critic is now going to be:
And this is going to be a much better estimator of the expected gradient (Equation 3), i.e. with lower variance while still being unbiased, even for m=1. As a result, the algorithm will learn much faster.
wᵥ is updated as follows:
whereby using one-step TD learning (i.e. TD(0)):
Using the gradient estimator from Equation 29, the weight update from Equation 30, and the remaining steps from the basic Reinforce algorithm results in what is known as the Advantage Actor-Critic algorithm.
To briefly summarize the above discussion, the main downside of the Reinforce algorithm is that the gradient estimator is based upon the Monte-Carlo estimator of the expected total reward from the initial state-action pair — which, while it has low bias, has high variance. By using causality and subtracting out a baseline from the Monte-Carlo estimator, we can reduce the variance. The variance is further reduced by using a TD estimator of the expected total reward to go instead of the Monte-Carlo estimator.
Deterministic Policy Gradient (DPG) Algorithm
For stochastic policies in continuous environments, the actor outputs the mean and variance of a Gaussian distribution, and an action is sampled from this distribution. For deterministic actions, while this approach still works (as the network will learn to have very low variance), it involves complexity and computational burden that unnecessarily slow down the learning algorithm. To address these shortcomings, for deterministic actions, we can use what is known as the deterministic policy gradient.
In the stochastic case, the policy gradient algorithm integrates over both the state and action spaces, whereas in the deterministic case it only integrates over the state space. As a result, computing the deterministic policy gradient can potentially require fewer samples. But in order to fully explore the state space, the basic idea is to choose actions according to a stochastic behavior policy and learn about a deterministic target policy (i.e. it needs to be an off-policy algorithm).
DPG is essentially a deterministic version of Actor-Critic algorithm. For a basic DPG algorithm, we have two neural networks, one network (parameterized by θ) is estimating the optimal target policy and the second network (parameterized by w) is estimating the action-value function corresponding to the target policy. The below equations formalize this.
As mentioned above, because the target policy is deterministic, the actor may not explore the state-space very well to find the optimal policy. To address this, we use a behavior policy (b(st)) that is different from the target policy. It is basically the target policy with some additional noise. For simplicity, we will use a Normal distribution as our noise source. But note that this term is like a hyperparameter, and in the below implementation for the Reacher environment, a different noise process is used.
Deterministic Policy Gradient Update:
1. Actor network is updated as follows:
which by chain rule, it becomes:
2. The critic network is updated as follows:
The TD error is given by:
and the weight update is:
To reiterate, in order to properly balance the exploration-exploitation tradeoff, while the target policy μ is deterministic, the behavior policy is stochastic. So this is an off-policy version of the DPG algorithm. While stochastic off-policy actor-critic algorithms typically use importance sampling for both the actor and the critic, because the deterministic policy gradient removes the expectation over actions, and given that the state-transition dynamics are the same for both the target and behavior policies (as they operate in the same environment), the importance sampling ratio is not needed. So we can avoid having to use importance sampling in the actor, and by the same reasoning, we avoid using importance sampling in the critic [2]. For those who are wondering, similar reasoning applies to why we don’t use importance sampling with Q-learning.
Deep Deterministic Policy Gradient (DDPG) Algorithm
DDPG is basically DPG with a few training changes adopted from the DQN architecture.
One challenge when using neural networks for reinforcement learning is that most optimization algorithms assume the samples are independently and identically distributed. Obviously this assumption doesn’t hold true, because the samples are generated by exploring sequentially in an environment. Because DDPG is an off-policy algorithm, we can use a replay buffer (a finite-sized cache), as in DQN, to address this issue. At each timestep the actor and critic are updated by sampling a minibatch uniformly from the buffer [2].
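A minimal sketch of such a buffer (class and method names are invented for illustration, not taken from a specific DDPG codebase):

```python
import random
from collections import deque

class ReplayBuffer:
    """Finite-sized cache of transitions; uniform sampling breaks the
    temporal correlation between consecutive environment steps."""

    def __init__(self, capacity, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform minibatch, as used for the actor/critic updates
        return self.rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):
    buf.push(t, 0, 1.0, t + 1, False)
print(len(buf))
```

Once the buffer holds more transitions than its capacity, the earliest ones fall out, so the cache always reflects relatively recent experience.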
For the critic, since the network being updated is also used in calculating the target, this can potentially lead to training instabilities for highly nonlinear function approximators like neural networks. One solution to address this is using a separate target network, as with DQN [2]. Given the target values are determined using both the critic and actor networks, we create a copy of both of these networks and soft update their weights to the respective learned networks. Please refer to my github code for details. | https://medium.com/swlh/policy-gradients-1edbbbc8de6b | ['Amit Patel'] | 2020-12-23 22:52:26.478000+00:00 | ['Machine Learning', 'Reinforcement Learning', 'Artificial Intelligence'] |
Use Docker and Airflow to deploy your Data Science workflow | Contributors: Madana Krishnan V K, Nguyen Cao, Sanjana Chauhan, Sumukha Balasubramanya
This blog is written and maintained by students in the Professional Master’s Program in the School of Computing Science at Simon Fraser University as part of their course credit. To learn more about this unique program, please visit sfu.ca/computing/pmp.
Can a UI designer decide which sorting algorithm to use? Well, you can say it’s a mismatch. But can a Data Scientist deploy his machine learning model at scale? Let us try answering this question through this post.
There are different stakeholders in a Big Data project like Data Scientists, Data Engineers, Analysts, etc. A Data Scientist is one who gets insightful meanings from structured or unstructured data by using various techniques and tools, while a Data Engineer is one who develops and maintains architectures such as databases and large scale processing systems.
Being a Data Scientist is not easy, as it involves many fields such as Statistics, Computer Science, and Business Analysis. They are dependent on Data Engineers to make their code work the same way in the production environment. This blog aims at helping a Data Scientist enhance their understanding of how their code would run in a production environment.
Why think about deployment as a Data Scientist?
A Data Scientist mostly thinks about developing Machine Learning models to provide insights into data or to predict valuable results. The commonly used interface/tool for the Data Science workflow is Jupyter Notebook/Google Colab. These interfaces are well designed for writing and executing code snippets, as they provide an interactive shell to play with. But the downside is that they run on a single machine, usually do not work well with large datasets, and have no parallelism. They are not suitable for running on a production environment with a distributed system running on clusters.
The lack of skill to write production-ready code among Data Scientists requires a Data Engineer to transform the code into scripts running on production systems in parallel, which takes time and resources. This article aims to provide a mapping of the typical Data Science workflow on a local machine to a sequence of tasks running on servers.
How to go about it?
Let us use Docker and Airflow to achieve this.
Why Docker?
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, the developer can rest assured that the application will run on any other machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
The below matrix consists of possible software components as rows and possible places where you must run that software as columns.
Our job is basically to make sure that every intersection of that matrix somehow works right, passes all the tests on your laptop as well as in the production environment. How to be sure about that?
Suppose your laptop has a different version of Python or a different JDK, but the production environment has a different kind of distribution. So, in a nutshell, what we are trying to do with Docker is to solve this kind of problem.
Let us take an example of Shipping Coffee Beans.
The above matrix here depicts the possible goods to be shipped, multiplied by every possible way to ship the goods. It had the same problem.
How did they solve it?
In simple words, if you want to ship Coffee Beans, it should not be your concern to decide the shipping, packaging and routing details and to make sure experts for each of these are available. A Docker Container is similar to a shipping container with pre-decided infrastructure (size, number of doors, weights it can take, etc.). Just pack your coffee beans and hand it over to any infrastructure provider.
It can be organized in such a way that any new infrastructure providers or new infrastructure tools can be added. There is no need to repackage the Coffee Beans just because you are not going through the same route or using the same transport. Also conversely, if you are the ones who are providing the infrastructure you can decide on cheaper trucks and faster routes.
Voila!!! The solution to the above problem → “Docker”
Data Scientists spend so much time building the perfect model, and it takes a lot of effort to set up the infrastructure, installing the required libraries and choosing the fastest tools. They do not want to spend the same amount of time setting up the same in the production environment or designing their solutions as per the available production infrastructure.
To make this idea more powerful, let us see how a Data Scientist can automate their workflow using Airflow.
Why Airflow?
Airflow is a platform created by the community to programmatically author, schedule and monitor workflows. We use Airflow to author workflows as Directed Acyclic Graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
In order to execute your workflow, you can use Airflow to automate and decide the sequence in which these activities should be scheduled. For example, suppose you want to do web scraping to collect data using Java, store the data in Cassandra, build a machine learning model using TensorFlow, and so on. All these tasks require different infrastructure and should be performed in sequence. Using Airflow, you can schedule these tasks by running different Docker Containers.
Data Science workflow
Data Science workflow in action
The dataset considered here is the MNIST data, which is imported from the Scikit-learn datasets. It consists of 70,000 handwritten digits, where each image is fit into a 28x28 grid. Training a classifier model to predict the handwritten digits is a classical example in the world of Data Science and Machine Learning.
To demonstrate the workflow, we perform the task of training classifiers on the MNIST data. The MNIST data from the Scikit-learn library is collected, preprocessed, split, and saved in the file system. These are the data collection and preprocessing steps. The data is read from the file system, and two Machine Learning models (LogisticRegression and SGDClassifier) are trained and saved. This is the data analysis step. Then the saved models are used to perform prediction on the test set. Predicting and visualizing the results make up the data visualization step.
DAG Workflow
The above graph depicts the workflow we are demonstrating inside Airflow.
It shows the organization of tasks that we want to run in a way that reflects their relationships and dependencies. The above DAG consists of 6 tasks. Using Airflow we are trying to automate and sequence this series of tasks. Let us see in what sequence each of these tasks will be executed.
Each node in this DAG is a task and each edge represents the dependency between two tasks. It says that Task 1 has to run before Task 2, Task 2 has to run before Task 3, and Task 5 can only run after both Task 3 and Task 4 get successfully executed. Lastly, Task 6 will get executed after Task 5 completes.
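The scheduling constraint this describes can be sketched in plain Python (the task names below are invented for illustration, not the actual task ids in the repository): topologically sorting the dependency graph yields a valid run order, which is essentially what Airflow's scheduler derives from the DAG.

```python
from collections import deque

# Upstream dependencies of the six-task workflow described above.
deps = {
    "collect_data": [],
    "preprocess": ["collect_data"],
    "train_model_1": ["preprocess"],
    "train_model_2": ["preprocess"],
    "evaluate": ["train_model_1", "train_model_2"],
    "report": ["evaluate"],
}

def execution_order(deps):
    """Kahn's topological sort: a task becomes runnable only once all of
    its upstream dependencies have completed, which is the ordering
    constraint a DAG scheduler enforces."""
    remaining = {task: set(upstream) for task, upstream in deps.items()}
    ready = deque(sorted(task for task, upstream in remaining.items() if not upstream))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for other, upstream in remaining.items():
            upstream.discard(task)
            if not upstream and other not in order and other not in ready:
                ready.append(other)
    if len(order) != len(deps):
        raise ValueError("cycle detected: not a DAG")
    return order

order = execution_order(deps)
print(order)
```

Note that the two training tasks become runnable at the same time once preprocessing finishes, mirroring the parallel branch in the graph.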
Task Pipeline
Consider the above pipeline which demonstrates the working of airflow using Docker containers. We are considering the following 6 tasks.
Task 1 — Collect the data from the Scikit-learn MNIST datastore and save it into a CSV file.
Task 2 — Load the CSV dataset into an Airflow container using a Python script. The data is preprocessed and stored as a CSV file in the container file system. The output of this task is the training and test sets.
Task 3 & Task 4 — Load the training set and train Model 1 and Model 2 in the Airflow container simultaneously. These models are saved.
Task 5 — Load the two saved models and evaluate them on the test set obtained from Task 2.
Task 6 — Print the results of the evaluation to the console.
Try it for yourself!
The above explanation has been implemented and is available in this Github repository. The execution steps are explained here. Note that this repository assumes that you have Docker installed. If you do not have Docker installed, please check out this official Docker page.
Step 1 — Clone the Github repository to your system.
Step 2 — Navigate to the airflow4bigdata directory and run the start_airflow command. This allows the Docker to start an Airflow Service by spawning an Airflow Container.
cd airflow4bigdata
make start_airflow
Step 3 — Open the Airflow Web Service from your web browser (e.g. Chrome) at port 8080.
Step 4 — Choose the mnist_workflow DAG from the list, switch it ON and click on Trigger DAG.
Step 5 — Wait until the workflow runs to completion. Upon completion, any data or models to be saved will be saved in the corresponding directory as mentioned in the code.
Understanding Airflow UI better
Apache Airflow provides an elegant UI to interact with. The default port of the Airflow UI is available at http://localhost:8080/admin/. | https://medium.com/sfu-cspmp/use-docker-and-airflow-to-deploy-your-data-science-workflow-dc17982d8dd8 | ['Madan Krishnan'] | 2020-02-04 07:46:59.124000+00:00 | ['Blog Post', 'Big Data', 'Data Science', 'Docker', 'Airflow'] |
200 universities just launched 560 free online courses. Here’s the full list. | In the past three months alone, more than 200 universities have announced 560 such free online courses. I’ve compiled this list below and categorized the courses into the following subjects: Computer Science, Mathematics, Programming, Data Science, Humanities, Social Sciences, Education & Teaching, Health & Medicine, Business, Personal Development, Engineering, Art & Design, and finally Science.
If you have trouble figuring out how to sign up for Coursera courses for free, don’t worry — I’ve written an article on how to do that, too.
Here’s the full list of new free online courses. Most of these are completely self-paced, so you can start taking them at your convenience.
COMPUTER SCIENCE
MATHEMATICS
PROGRAMMING
DATA SCIENCE
HUMANITIES
SOCIAL SCIENCES
EDUCATION & TEACHING
HEALTH & MEDICINE
ENGINEERING
ART & DESIGN
BUSINESS
PERSONAL DEVELOPMENT
SCIENCE | https://medium.com/free-code-camp/200-universities-just-launched-560-free-online-courses-heres-the-full-list-d9dd13600b04 | ['Dhawal Shah'] | 2019-03-20 02:07:28.152000+00:00 | ['Programming', 'Startup', 'Self Improvement', 'Technology', 'Education'] |
Goody Bags Clothing Co. | Goody Bagz 420 Check out what all talk is about in the smoking community. Goody Bagz 420 Clothing Co. represents everithing that is 420 and what is the herbal community. They would appreciate your support and love for the 420 community. “All Our Products Are 100% New without tags, All brand-new, unused, and unworn item (including handmade items) that is not in original packaging or may be missing original packaging materials (such as the original box or bag). The original tags may not be attached. We guarantee that all our products are high quality without coming with a high price tag. We value your business and we are so thankful for you!. We will keep you up to date every step of the way to ensure that you are satisfied with your order. “ | https://medium.com/shabazz-publication/goody-bags-clothing-co-1b10ed40d133 | ['Shabazz Publication'] | 2015-11-25 00:00:22.973000+00:00 | ['Design', 'Marijuana', 'Clothing'] |
In moments of panic, your life is reborn.
Picture by Jason Lloyd-Evans
“Come this way. We need to check your blood pressure.”
This is a sentence that has haunted me for the last 5 years. Before I enter the hospital operating theatre each year, I get to hear it.
This sentence is a mental trigger. It triggers extreme fear in me. It’s the sentence where I begin to panic, my hands start shaking, I feel sick, and my heartbeat goes through the roof.
Everything changed yesterday.
As I entered the hospital waiting area ready for my procedure, I sat there and began to daydream. What if this regular visit to the hospital didn’t need to be so unpleasant?
I began to think how one could take a moment of extreme fear and turn it into something different. These are the lessons you can take away and use to beat fear and overcome panic.
You’re stronger than you think.
The walk to the nurse’s office made me make a split-second decision. “Today is going to be different,” I said to myself.
I had nothing to lose. Every other visit to the hospital was a disaster. The worst that could happen is what occurred every other time. I repeated in my head “you’re stronger than you think.” I knew that phrase to be true because I began collecting evidence to back it up.
Many years ago I wrote a list of all the times I had beaten fear and triumphed. I thought about the dot points on that list. The evidence was overwhelming. I was way stronger than I gave myself credit for. It was time to use that forgotten strength.
The nurse went to test my blood pressure. (This part never goes well.) Getting my blood pressure checked reminds me of a fear worse than death: needles. I wanted this time to be different. While the blood pressure game occurred, I started talking finance to the nurse. We had an interesting conversation to act as a distraction. It didn’t work.
“All the talking made your blood pressure explode. Let’s do it again in silence.”
The second attempt meant I had to be alone with my thoughts. I made a conscious decision to let the test be a success and it was. It was a tiny success — but it was a success worth focusing on during a day when my world normally collapses into a state of panic.
Lesson: Use evidence of your prior courage to defeat fear. | https://medium.com/the-ascent/lessons-i-learned-dealing-with-extreme-fear-while-in-the-hospital-operating-theatre-75cdb0946cd4 | ['Tim Denning'] | 2020-12-23 14:03:24.455000+00:00 | ['Life Lessons', 'Inspiration', 'Self Improvement', 'Life', 'Psychology'] |
Quincy Larson | Raised in Oklahoma City, Quincy never intended to be a programmer. Before learning to program, he worked in schools — first as a teacher, then as a principal. In a sort of proto-hack, Quincy automated teachers’ basic tasks (attendance and other headaches). Aha, he thought: tech + education = impact. A code camp was born, and Quincy has hardly taken a day off since.
A virtual water cooler for all things tech, Quincy’s freeCodeCamp publication is a space to swap tips about entrepreneurship, analyses of big-picture tech trends, and personal stories from the Silicon trenches. Contributors geek out on JavaScript, machine learning, and what “minimum viable product” actually means. With almost 250,000 followers, stories race across the interwebs, sparking discussion about what tomorrow brings.
In one of his most popular stories, Quincy sounds a (thoughtful) warning about the future of net neutrality. The internet “is a Cambrian Explosion of ideas and execution,” he argues, and it’s at risk of becoming just another maze of walled gardens (think: cable TV). The solution? Education. Read, contact your representatives, and — better yet — learn to code. The story encapsulates Quincy’s passion to ensure the internet stays as free as it was born. With over 100 responses, it’s clearly making people think.
Why is this conversation so important, especially today? “Technology helps people have a voice,” believes Quincy. It’s our digital megaphone, the place where we go to cut out the middleman. Access to tech, and our ability to shape its development, has the potential to solve some of humanity’s trickiest problems. (Of which there seem to be a lot lately.) Luckily, Quincy knows “there are already a tremendous number of developers out there who care, who have an itch to scratch and create their own tools.” The key is giving them the skills to do it.
Watch our short film to hear Quincy’s vision for programming and beyond, in his own words.
Now, Quincy has moved back to Oklahoma City from San Francisco, splitting his time between code, words, and family. What does the future hold — not just for programmers, but for tech writ large? “I think technology has always made things better,” he laughs, “I would never criticize someone for trying to make things more efficient. When they make the chips that just interface with your brain (if they’re secure), I’ll probably get one, you know?”
Abhorrent Personality Traits That Got Me Through 2020 | Don’t hate the neurotic, hate the year.
Photo by Nadi Lindsay from Pexels
Look, you ride out this crazy straw year your way, I’ll ride it out mine. Can I help it if all of the ways I choose to cope during a time of extreme isolation are super fucking annoying to all of you? No. I also don’t have to alter a morsel of my behavior, because none of you can see it happen or be mildly inconvenienced by it in any way. In fact I only write about these things now so that those inclined can peer into the cage and then move onto the next exhibit.
Haphazard was never going to be the way through this mess, at least not for me. In the absence of societal structure and the ability to go places and then be in those places, I’ve had to establish my own sense of order and routine. Otherwise the vines will grow over my brain such that you’ll find me, perhaps years from now, living in a blanket fort in my bathroom subsisting on gummy bears and raising a family of hand puppets as a single mother.
So below, my methods. The habits and practices I’d likely get my ass kicked for in 10th grade but because we’re what passes for adults now, I’m relatively safe to talk about. I fear no taunts or Twitter call outs, in a pandemic one can only fight with words on screens and in that department, fucks, I’ve got you all licked.
Here we go.
Inbox Zero.
And I mean goddamn goose eggs, man. I cannot function, sleep, or even complete whatever next task I’m holding in my brain if there is so much as a numeral “1” pressuring me from an open tab. I operate at a constant state of zero email items to deal with and I’m telling you, this is living. You know those people who have like 10,459 unread emails in their inbox? The notion of them simply drawing breath gives me hives. How do you not live in a constant state of stress? That’s going to manifest itself as a heart attack at some point and when it comes for you, I still won’t have any unread emails awaiting my attention.
Waking Early.
Not to be weird, but by the time you hit your desk at 9am I’m basically eating lunch. I have always been an early riser, an extremely early riser in fact, which was a real treat of a trait at slumber parties, I can assure you. Now that I’m in charge of myself, this tendency to rise before barnyard animals means that I handle the vast majority of my workday before the loud ass construction starts across the fucking street, and it suits me fine. Does this mean I go to bed at the same time as your 28 month old? Sure does. Do I give an airborne fuck about you and your judgements when it’s my sleep patterns we’re talking about? Nah. Have fun peeling yourself out of bed at 8am after pressing 14 snoozes on a device I haven’t had to use since college. Peasants.
Meal Prep.
Things can go off the culinary rails real quick, the pandemic has taught us this. I’ve learned to treat weekday lunchtime no differently than if I were in an office waiting in line at a communal microwave and cursing the ill-raised heathens that pile their dishes in a work sink without feeling the deep shame they deserve. I still prep all the components of some sort of salad or bowl situation every Sunday afternoon, and then toss them together for lunch during my workday. Not only does this serve as a good timekeeper for the day, because god knows real clocks are a fucking waste of wall space now, but it also ensures that at least once a day, my nutritional intake isn’t something I’d be scared to confess to my mother. I mean sure most of the time I have popcorn for dinner but during business hours I’m a respectable member of society.
Booze Rules.
No drinking on weekdays. I know. If ever there was a year to Mad Men the afternoon away and feel absolutely no remorse in the process, it’s 2020. But this year’s about as steady as a blindfolded kid about to hit a piñata. It’s best not to make the problem worse. I keep a clear head Sunday night through Friday afternoon, without fail, and in addition to being just like…fucking healthy, it’s been a fiscal boon to me as well. My sparkling water mocktails on Tuesday are mere pennies on the dollar when compared to the Pet Nat I’ll partake in on Friday night. I hate to say it, but going back to paying $16 a glass for my Chenin Blanc at some brasserie with low lighting but impeccable french fries is going to be reeeeeeal tough after all this. I mean I’ll do it, but I’ll consider it paying for the privilege of sitting somewhere that isn’t my desk chair at home. Lord knows that configuration of future firewood has seen enough attention to last two lifetimes.
Water Walks.
I hate “working out.” Ugh. UGH! I hate everything about it, from rolling out a gross yoga mat in my goddamned living space to sausaging myself into a sports bra. Honestly sports bras need to perish. Can we just start wearing body armor or something? Sports bras are impossible at best and actually fighting us back at worst. Neither me nor this undergarment want to be here and I think we should both leave. And that’s before I’ve pressed play on some YouTube class I’m 100% doing incorrectly, sweating and panting a few feet away from where I prepare food. It’s a horrific process that never delivers the endorphins those fucking bloggers promise. Instead, I take Water Walks. I walk three miles every day to and from a grocery store where I purchase nothing more than 12 cans of sparkling water. I know I’m going to drink it, I know I need exercise, I kill two delightful birds with one stone. Also it’s really hard to buy sparkling water on a normal grocery run because it’s too heavy and I need to buy yogurt and lemons. Maybe I could carry more if I worked out. We’ll literally never know.
I don’t care if you don’t like me. I like me. And even when I don’t, at least I’m well rested, hydrated, and I know what I’m having for lunch. Now I’m going to go shop for a bunch of minimalist jewelry on Etsy and never buy any of it. I do that too. It helps.
____________
Shani Silver is a humor essayist and podcaster based in Brooklyn who writes on Medium, frequently. She is also the host of A Single Serving Podcast. | https://shanisilver.medium.com/abhorrent-personality-traits-that-got-me-through-2020-ac4950e85ef0 | ['Shani Silver'] | 2020-12-02 20:28:30.204000+00:00 | ['Advice', 'Humor', 'Habits', '2020', 'Productivity'] |
The #1 Mistake I Made on Medium this Month | The #1 Mistake I Made on Medium this Month
It may have lost me $92 in earnings
Photo by Bermix Studio on Unsplash
If I plan a beach day and my plans get thwarted, I get pretty grumpy. The same thing happens when I Uber Eats (yes, I’m using that as a verb) Taco Bell and open my Nachos Bell Grande to discover that they forgot to add my jalapenos and shredded cheese. The fact is:
Unmet expectations produce negative reactions.
Whether you’re ordering food, your vacation plans have been altered, or . . . you are looking forward to reading a new article from your favorite author, we all get a little sad when we don’t get what we expect.
I believe that even if you have only 40 followers on Medium, there is at least one person out there who will miss your writing if that little number doesn’t pop up next to your head on their Medium homepage. I know I miss my favorite authors when I log in and I don’t see a new article when I expect it. It’s like turning on your DVR to watch your favorite show and then realizing the new episode hasn’t aired yet.
Consistency is key
If you publish two articles per day, one article per day, or one article per month, people inevitably (either consciously or subconsciously) take notice. I know that, as a reader, I definitely do. Jon Simpson says in an article in Forbes that, “When your content quality, quantity or schedule isn’t consistent, it can confuse your customers.”
Mega-popular content creator extraordinaire Gary Vaynerchuk says, “The biggest miss for people trying to grow their account is frequency. If you aren’t giving viewers a reason to think about you each day then you are going to lose.” Consistency is important for content creators, but what makes things difficult for Medium writers is that consistency AND quality are key. If I post a crappy article, not only does it hurt my reputation, but it also probably loses me a few readers.
If you are posting and engaging constantly, you will grow your readership. As long as you stay consistent, you’re golden. If you suddenly stop delivering on your unspoken promise of regularly scheduled content, you might lose the interest of your happy followers.
Oh, and if you are thinking, “I just publish whenever I want. Nobody expects regularity from me,” you’re wrong. Everyone inherently expects regularity when it comes to content.
What happened to Michelle?
I made a goal of writing 50 articles in the month of December. I will get to that goal (I always keep my promises); however, because of some pretty significant, time-consuming personal matters, I did not deliver the articles I published this month with the consistency that I have achieved since I began writing on this platform.
Because I had a tough month, I haven’t been publishing at the rate or quality at which I published last month. Do you know what happened? In the past 7 days, my earnings have plummeted. Why? I did not live up to the expectation of my readers.
This is not the end of the world, but it is a missed opportunity.
This lack of consistency cost me views for sure. According to my calculations, by the end of the month, it will have cost me about $92 in earnings. (I really love spreadsheets, so I’m pretty accurate with my engagement, word count, and quality calculations in relation to views and earnings). So, what could I have done to head this issue off at the pass? Well, I’ll tell you.
How to ensure consistency
Much like putting away money in your savings account for a rainy day, the mistake I made was not having any completed articles in my queue. If I had been more patient and not published every single thing I wrote right when I wrote it, I would have had a veritable savings account of prewritten articles. And I could have released them over the period of time while I was dealing with my personal life.
See, when I write something and I like it, I get SUPER excited to share it with everyone and I want to share it immediately. (Like, I have been able to wait for The Ascent to publish one of my pieces exactly two times.) There is a strategy to publishing consistently and this month, I did not employ that strategy.
I’m beginning to research and learn about this whole timing thing when it comes to publishing, but I have learned enough to know that I did it incorrectly this month. I encourage you all, if you want to optimize your readership, to do a little research and make a plan.
You don’t have to publish or submit every story immediately after you finish it. If you’re a writer on Medium, you probably have people who are looking forward to reading your content and if you don’t publish on a consistent basis, they might be disappointed. If you keep a few pieces in your pocket for a rainy day, you will not only keep your readers happy when life happens, but you’ll also likely relieve a little bit of stress when it comes to keeping the content coming out of your computer.
Best of luck, Medium writers. I’ll continue to share lessons I’ve learned, good and bad, and I encourage you to share any insights you have with me as well. Throw me a clap or 50 if this was helpful and leave any other tips you might have in the comments. Here’s to a fantastic 2021 of exceptional content and a growing and thriving community. | https://medium.com/the-innovation/the-1-mistake-i-made-on-medium-this-month-d876c13a8c23 | ['Michelle Loucadoux'] | 2020-12-27 21:57:42.493000+00:00 | ['Writing', 'Writing Tips', 'Advice', 'Inspiration', 'Blogging'] |
Can indigenous knowledge stop Australia from burning?
We always think we can leave nature alone and she will be healthy and flourishing. Well, in my environmental studies I gained other knowledge.
People need nature to supply oxygen, drinking water, and food. But nature also needs people. Mindful collaboration works best. The prevention of Australian bushfires is a great example.
“Before colonization, Indigenous Australians used fire not only to control the buildup of leaf litter and other fuel but also to maintain ecosystems and promote healthy growth.” — Brooke Boland
Firesticks Alliance is sharing knowledge on cultural burning, healthy communities, and healthy landscapes. Maybe Americans can learn about this practice from the indigenous tribes on their continent too?
Here’s the full article.
Amazon Rainforest. Picture credit: PX-Fuel
© Désirée Driesenaar | https://medium.com/illumination-curated/can-indigenous-knowledge-stop-australia-from-burning-b78af82a676b | ['Desiree Driesenaar'] | 2020-12-07 16:30:47.165000+00:00 | ['Nature', 'Australia', 'Indigenous', 'Environment', 'Fire'] |
Building apps for editing Face GANs with Dash and Pytorch Hub | Making GANs transparent with TL-GAN
Created by Shaobo as part of an Insight Data Science project, the transparent latent GAN (TL-GAN) introduces a simple, but perfectly executed idea: shine a light on the random noise used to generate realistic faces by learning an association between the input noise and facial features like age, sex, skin tone, and whether the person is smiling or wearing glasses, hats, a necklace, and more. To achieve this, the author first trained an ML model to classify images based on some 40 facial features (using labels from the CelebA dataset), and then used this model to label hundreds of thousands of images generated by the officially-released PGGAN. Finally, a linear regression was trained to predict the features output by the ML model given the latent vectors (i.e. the random noise), and the trained weights were used to control the noise to give an output that correlated more heavily with the desired features.
Integrating PyTorch Hub with Dash
Pytorch Hub is an incredible repository of pretrained models built in Pytorch, which can all be imported and loaded in a few lines of code. For our own app, all we needed to do was to load the pggan model from torch.hub (which is included in the official PyTorch release) at the start, and start using it in our callbacks.
Traditionally, if you wanted to deploy a model loaded from Pytorch Hub, you would need to design a REST API with Flask, then communicate with a front-end built in a library like React.js. Since you are outputting images, you would then need to worry about encoding the image into string using schemes like base64, and ensure that the component for displaying the image is compatible. | https://medium.com/plotly/building-apps-for-editing-face-gans-with-dash-and-pytorch-hub-1e7026c0bc9a | [] | 2020-06-19 15:58:37.126000+00:00 | ['Machine Learning', 'Pytorch', 'Gans', 'Python'] |
Gradient Descent Algorithm | Title: What is the Gradient Descent Algorithm and its working.
Gradient descent is an optimization algorithm that underpins the training of neural networks and many other machine learning models. This article ventures into how the algorithm actually works, its types, and its significance in the real world.
A Brief Introduction
Gradient descent is one of the most popular algorithms to perform optimization and by far the most common way to optimize neural networks. At the same time, every state-of-the-art Deep Learning library contains implementations of various algorithms to optimize gradient descent (e.g. lasagne’s, caffe’s, and keras’ documentation).
The reason we’re talking about it here is not merely theoretical. Gradient Descent algorithm is much more than it seems to be. It is used time and again by ML practitioners, Data scientists, and students to optimize their models.
Gradient descent is a way to minimize an objective function parameterized by a model’s parameters by updating the parameters in the opposite direction of the gradient of the objective function w.r.t. the parameters. The learning rate α determines the size of the steps we take to reach a (local) minimum. In other words, we follow the direction of the slope of the surface created by the objective function downhill until we reach a valley.
Now that you’ve gotten a basic insight into the algorithm, let’s dig deeper into it in this post. We will define and cover some important aspects: how it works, worked examples, its types, and a final conclusion to mould it all together.
What Exactly Is Gradient Descent?
Gradient descent is an optimization algorithm used to find the values of parameters (coefficients) of a function (f) that minimizes a cost function (cost).
Gradient descent is best used when the parameters cannot be calculated analytically (e.g. using linear algebra) and must be searched for by an optimization algorithm.
Gradient descent is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. To find a local minimum of a function using gradient descent, we take steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point. But if we instead take steps proportional to the positive of the gradient, we approach a local maximum of that function; the procedure is then known as gradient ascent. Gradient descent was originally proposed by Cauchy in 1847.
Gradient descent is also known as steepest descent; but gradient descent should not be confused with the method of steepest descent for approximating integrals.
Okay, but Why Is It Important?
Optimization is a big part of machine learning, and gradient descent is the workhorse behind most of it. Nearly every model, from linear regression to deep neural networks, is trained by minimizing a cost function, and gradient descent or one of its variants is usually the algorithm doing the minimizing. That is why every major deep learning library ships tuned implementations of it, and why ML practitioners, data scientists, and students reach for it time and again. Understanding its behavior, and in particular the effect of the learning rate, directly shapes how fast and how reliably your models train.
Gradient descent variants
There are three variants of gradient descent, which differ in how much data we use to compute the gradient of the objective function. Depending on the amount of data, we make a trade-off between the accuracy of the parameter update and the time it takes to perform an update.
1. Batch gradient descent: This variant processes all the training examples for each iteration of gradient descent. If the number of training examples is large, batch gradient descent is computationally very expensive, so in that case it is not preferred; we use stochastic gradient descent or mini-batch gradient descent instead.
2. Stochastic gradient descent: This variant processes one training example per iteration, so the parameters are updated after every single example. This makes each iteration much faster than in batch gradient descent. But when the number of training examples is large, processing only one example at a time means the number of iterations becomes quite large, which can add overhead for the system.
3. Mini-batch gradient descent: This variant often works faster than both batch gradient descent and stochastic gradient descent. Here b examples, where b < m, are processed per iteration, so even a large training set is handled in batches of b examples at a time, and with fewer iterations than pure stochastic updates.
Gradient Descent Procedure
The procedure starts off with initial values for the coefficient or coefficients for the function. These could be 0.0 or a small random value.
coefficient = 0.0
The cost of the coefficients is evaluated by plugging them into the function and calculating the cost.
cost = f(coefficient)
or
cost = evaluate(f(coefficient))
The derivative of the cost is calculated. The derivative is a concept from calculus and refers to the slope of the function at a given point. We need to know the slope so that we know the direction (sign) to move the coefficient values in order to get a lower cost on the next iteration.
delta = derivative(cost)
Now that we know from the derivative which direction is downhill, we can now update the coefficient values. A learning rate parameter (alpha) must be specified that controls how much the coefficients can change on each update.
coefficient = coefficient - (alpha * delta)
This process is repeated until the cost of the coefficients (cost) is 0.0 or close enough to zero to be good enough.
You can see how simple gradient descent is. It does require you to know the gradient of your cost function or the function you are optimizing, but besides that, it’s very straightforward. Next we will see the math behind it and how we can use this in machine learning algorithms.
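The procedure above maps almost line-for-line onto code. Here is a minimal Python sketch; the cost function f(w) = (w - 3)², its derivative, the starting point, and the learning rate are all illustrative choices for the demo, not anything prescribed by the algorithm itself:

```python
def gradient_descent(derivative, alpha=0.1, n_iter=100):
    """Repeatedly step the coefficient against the slope of the cost."""
    coefficient = 0.0                      # initial value; a small random value also works
    for _ in range(n_iter):
        delta = derivative(coefficient)    # slope of the cost at the current point
        coefficient = coefficient - alpha * delta  # move in the downhill direction
    return coefficient

# Minimize f(w) = (w - 3)**2, whose derivative is 2*(w - 3); the minimum is at w = 3.
w = gradient_descent(lambda w: 2 * (w - 3))
print(w)  # very close to 3.0
```

Swapping in the gradient of a real cost function (and a vector of coefficients instead of a single number) turns this toy loop into a usable training procedure.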
Math Behind it
Suppose we have the following given:
Hypothesis: hθ(x) = θ^T x = θ0x0 + θ1x1 + … + θnxn
Parameters: θ0, θ1, θ2,……..,θn
Cost function: J(θ)=J(θ0, θ1, θ2,……..,θn)
Consider the gradient descent algorithm, which starts with some initial θ, and repeatedly performs the update:
θj := θj − α ∂/∂θj (J(θ))
(This update is simultaneously performed for all values of j = 0,…,n.) Here, α is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of J.
The LMS rule can be derived for the case of a single training example. There are two ways to modify this method for a training set of more than one example. The first is to replace it with the following algorithm:

Repeat until convergence {
    θj := θj + α Σ (y(i) − hθ(x(i))) xj(i)
    (simultaneously for every j, summing over i = 1 … m)
}
The reader can easily verify that the quantity in the summation in the update rule above is just ∂J(θ)/∂θj (for the original definition of J). So, this is simply gradient descent on the original cost function J. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global, and no other local, optima; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. Indeed, J is a convex quadratic function.
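As a concrete (and entirely illustrative) example of batch gradient descent on this linear-regression cost, the snippet below fits y ≈ 4 + 2.5x on synthetic data; the data, learning rate, and iteration count are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for y ≈ 4 + 2.5x (illustrative values only).
m = 200
x = rng.uniform(-1, 1, size=m)
y = 4.0 + 2.5 * x + rng.normal(0, 0.1, size=m)

X = np.column_stack([np.ones(m), x])      # x0 = 1 carries the intercept θ0
theta = np.zeros(2)
alpha = 0.5

for _ in range(500):
    gradient = X.T @ (X @ theta - y) / m  # ∂J/∂θ for J(θ) = (1/2m) Σ (hθ(x) − y)²
    theta = theta - alpha * gradient      # simultaneous update of every θj

print(theta)  # close to [4.0, 2.5]
```

Note that every one of the 500 iterations touches all m examples, which is exactly the cost that makes batch gradient descent slow when m is very large.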
How to Calculate Gradient Descent
Note: This section only applies for posts about math and equations.
Provide a step-by-step explanation and example of how to calculate the rate, point, or number you’re providing a definition for.
Variables used: let m be the number of training examples and n the number of features.

Note: if b == m, then mini-batch gradient descent will behave similarly to batch gradient descent.

Algorithm for batch gradient descent: Let hθ(x) be the hypothesis for linear regression and let Σ represent the sum over all training examples from i = 1 to m. The cost function is given by:

Jtrain(θ) = (1/2m) Σ (hθ(x(i)) − y(i))²

Repeat {
    θj := θj − (α/m) Σ (hθ(x(i)) − y(i)) xj(i)
    (simultaneously for every j = 0 … n)
}

where xj(i) represents the jth feature of the ith training example. So if m is very large (e.g. 5 million training samples), it can take hours or even days to converge to the global minimum. That is why, for large datasets, batch gradient descent is not recommended, as it slows down the learning.
Algorithm for stochastic gradient descent:
In this algorithm, we repeatedly run through the training set, and each time we encounter a training example, we update the parameters according to the gradient of the error with respect to that single training example only. This algorithm is called stochastic gradient descent (also incremental gradient descent).
1) Randomly shuffle the data set so that the parameters can be trained evenly for each type of data.
2) As mentioned above, it takes into consideration one example per iteration.

Hence, letting (x(i), y(i)) be a single training example:

Cost(θ, (x(i), y(i))) = (1/2) (hθ(x(i)) − y(i))²

Jtrain(θ) = (1/m) Σ Cost(θ, (x(i), y(i)))

Repeat {
    For i = 1 to m {
        θj := θj − α (hθ(x(i)) − y(i)) xj(i)
        (for every j = 0 … n)
    }
}
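A runnable sketch of the stochastic update loop, on the same kind of illustrative y ≈ 4 + 2.5x data as before (all values are arbitrary demo choices):

```python
import numpy as np

rng = np.random.default_rng(1)

m = 200
x = rng.uniform(-1, 1, size=m)
y = 4.0 + 2.5 * x + rng.normal(0, 0.1, size=m)
X = np.column_stack([np.ones(m), x])

theta = np.zeros(2)
alpha = 0.1

for epoch in range(20):
    order = rng.permutation(m)        # step 1: randomly shuffle the data set
    for i in order:                   # step 2: one training example per update
        error = X[i] @ theta - y[i]   # hθ(x(i)) − y(i) for this single example
        theta = theta - alpha * error * X[i]

print(theta)  # hovers near [4.0, 2.5] rather than settling exactly
```

With a constant learning rate the estimates keep jittering around the minimum; decaying α over time tames the oscillation.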
Algorithm for mini-batch gradient descent: Let b be the number of examples in one batch, where b < m. Assume b = 10 and m = 100.

Note: we can adjust the batch size. It is generally kept as a power of 2, because some hardware such as GPUs achieves better run time with common batch sizes such as powers of 2.

Repeat {
    For i = 1, 11, 21, …, 91 {
        θj := θj − (α/b) Σ (hθ(x(k)) − y(k)) xj(k)
        (summing k from i to i + 9, for every j = 0 … n)
    }
}
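The same toy regression with mini-batches, mirroring the b = 10, m = 100 setup of the pseudocode (data and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

m, b = 100, 10                                # m examples, processed b at a time
x = rng.uniform(-1, 1, size=m)
y = 4.0 + 2.5 * x + rng.normal(0, 0.1, size=m)
X = np.column_stack([np.ones(m), x])

theta = np.zeros(2)
alpha = 0.3

for epoch in range(200):
    for start in range(0, m, b):              # batches starting at i = 0, 10, …, 90
        Xb, yb = X[start:start + b], y[start:start + b]
        gradient = Xb.T @ (Xb @ theta - yb) / b  # average gradient over the batch
        theta = theta - alpha * gradient

print(theta)  # close to [4.0, 2.5]
```

Each update averages over only b examples, so it is cheaper than a full batch pass but far less noisy than a single-example stochastic step.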
Choosing the best α
For sufficiently small α , J(θ) should decrease on every iteration.
But if α is too small, gradient descent can be slow to converge.
If α is too large, J(θ) may not decrease on every iteration, may not converge.
To choose α, try a range of values such as …, 0.001, 0.01, 0.1, 1, … and so on.
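A quick way to see these trade-offs is to sweep α on a toy cost f(w) = w² (an illustrative choice; its derivative is 2w and its minimum value is 0):

```python
def final_cost(alpha, n_iter=50):
    """Run gradient descent on f(w) = w**2 from w = 1 and return the final cost."""
    w = 1.0
    for _ in range(n_iter):
        w = w - alpha * 2 * w   # derivative of w**2 is 2w
    return w * w

for alpha in (0.001, 0.01, 0.1, 1.1):
    print(alpha, final_cost(alpha))
# tiny alpha: the cost barely drops (slow convergence)
# alpha = 0.1: the cost collapses towards 0
# alpha = 1.1: each step overshoots the minimum and the cost blows up (divergence)
```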
Batch vs Stochastic gradient algorithm
Batch gradient descent has to scan through the entire training set before taking a single step — a costly operation if m is large — stochastic gradient descent can start making progress right away, and continues to make progress with each example it looks at. Often, stochastic gradient descent gets θ “close” to the minimum much faster than batch gradient descent. (Note however that it may never “converge” to the minimum, and the parameters θ will keep oscillating around the minimum of J(θ); but in practice most of the values near the minimum will be reasonably good approximations to the true minimum.) For these reasons, particularly when the training set is large, stochastic gradient descent is often preferred over batch gradient descent.
Some real life examples and intuition
Think of a large bowl like what you would eat cereal out of or store fruit in. This bowl is a plot of the cost function (f). A random position on the surface of the bowl is the cost of the current values of the coefficients (cost). The bottom of the bowl is the cost of the best set of coefficients, the minimum of the function. The goal is to continue to try different values for the coefficients, evaluate their cost and select new coefficients that have a slightly better (lower) cost. Repeating this process enough times will lead to the bottom of the bowl and you will know the values of the coefficients that result in the minimum cost.
The basic intuition behind gradient descent can be illustrated by a hypothetical scenario. A person is stuck in the mountains and is trying to get down (i.e. trying to find the global minimum). There is heavy fog such that visibility is extremely low. Therefore, the path down the mountain is not visible, so they must use local information to find the minimum. They can use the method of gradient descent, which involves looking at the steepness of the hill at their current position, then proceeding in the direction with the steepest descent (i.e. downhill). If they were trying to find the top of the mountain (i.e. the maximum), then they would proceed in the direction of steepest ascent (i.e. uphill). Using this method, they would eventually find their way down the mountain or possibly get stuck in some hole (i.e. local minimum or saddle point), like a mountain lake. However, assume also that the steepness of the hill is not immediately obvious with simple observation, but rather it requires a sophisticated instrument to measure, which the person happens to have at the moment. It takes quite some time to measure the steepness of the hill with the instrument, thus they should minimize their use of the instrument if they wanted to get down the mountain before sunset. The difficulty then is choosing the frequency at which they should measure the steepness of the hill so not to go off track. In this analogy, the person represents the algorithm, and the path taken down the mountain represents the sequence of parameter settings that the algorithm will explore. The steepness of the hill represents the slope of the error surface at that point. The instrument used to measure steepness is differentiation (the slope of the error surface can be calculated by taking the derivative of the squared error function at that point). The direction they choose to travel in aligns with the gradient of the error surface at that point. 
The amount of time they travel before taking another measurement is the learning rate of the algorithm.
Tips and Reminders before practicing it
This section lists some tips and tricks for getting the most out of the gradient descent algorithm for machine learning.
Plot cost versus time: Collect and plot the cost values calculated by the algorithm each iteration. The expectation for a well-performing gradient descent run is a decrease in cost each iteration. If it does not decrease, try reducing your learning rate.

Learning rate: The learning rate value is a small real value such as 0.1, 0.001 or 0.0001. Try different values for your problem and see which works best.

Rescale inputs: The algorithm will reach the minimum cost faster if the shape of the cost function is not skewed and distorted. You can achieve this by rescaling all of the input variables (X) to the same range, such as [0, 1] or [-1, 1].

Few passes: Stochastic gradient descent often does not need more than 1-to-10 passes through the training dataset to converge on good or good enough coefficients.

Plot mean cost: The updates for each training data set instance can result in a noisy plot of cost over time when using stochastic gradient descent. Taking the average over 10, 100, or 1000 updates can give you a better idea of the learning trend for the algorithm.
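The first tip, plotting cost versus time, amounts to recording J(θ) at every iteration. Here is a small sketch on the same kind of toy regression used earlier (synthetic data and hyperparameters are illustrative); in practice you would hand the recorded list to a plotting library:

```python
import numpy as np

rng = np.random.default_rng(3)

m = 100
x = rng.uniform(-1, 1, size=m)
y = 4.0 + 2.5 * x + rng.normal(0, 0.1, size=m)
X = np.column_stack([np.ones(m), x])

theta = np.zeros(2)
alpha = 0.1
costs = []

for _ in range(200):
    residual = X @ theta - y
    costs.append(residual @ residual / (2 * m))  # J(θ) at this iteration
    theta = theta - alpha * X.T @ residual / m

# On a healthy run the recorded cost falls iteration after iteration;
# a flat or rising curve is the cue to shrink the learning rate.
print(costs[0], costs[-1])
```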
Convergence trends in different variants of Gradient Descents:
In the case of batch gradient descent, the algorithm follows a straight path towards the minimum. If the cost function is convex, it converges to a global minimum, and if the cost function is not convex, it converges to a local minimum. Here the learning rate is typically held constant.
In the case of stochastic gradient descent and mini-batch gradient descent, the algorithm does not settle at the minimum but keeps fluctuating around it. Therefore, in order to make it converge, we have to slowly decay the learning rate over time. The path of stochastic gradient descent is the noisier of the two, since each of its iterations processes only a single training example.
Closing and a final conclusion
In this post you discovered gradient descent for machine learning. You learned that:
Optimization is a big part of machine learning.
Gradient descent is a simple optimization procedure that you can use with many machine learning algorithms.
Batch gradient descent refers to calculating the derivative from all training data before calculating an update.
Stochastic gradient descent refers to calculating the derivative from each training data instance and calculating the update immediately.
Do you have any questions about gradient descent for machine learning or this post? Leave a comment and ask your question and I will do my best to answer it.
Sources for the above article
Introduction to Gradient Descent Algorithm (along with variants) in Machine Learning
Gradient Descent For Machine Learning — Machine Learning Mastery | https://medium.com/swlh/gradient-descent-algorithm-3d3ba3823fd4 | ['Garima Singh'] | 2020-08-10 20:09:33.587000+00:00 | ['Machine Learning', 'Gradient Descent', 'Working', 'Artificial Intelligence', 'Neural Networks'] |
Did You Know That Only 26% of Computing-Related Jobs Are Held by Women? | Did You Know That Only 26% of Computing-Related Jobs Are Held by Women?
It’s high time that companies start investing in making technology departments more gender diverse
According to research conducted by Accenture in the U.K., nearly half (48%) of girls and women believe that STEM subjects line up more with male careers. This is the biggest reason for boys being more likely to choose STEM subjects over girls.
The following were the main reasons for young girls and women not wanting to do STEM subjects:
Two reasons that I find especially surprising in this list are:
“Perception that these subjects are more suited to boys”
“Worried I would be the only girl/one of only a few girls”
Being a female, born and brought up in India, STEM subjects are actually one of the most preferred choices for women — not just by women themselves but also by their parents, teachers, and counselors. As per a study done by 451 Research, India is now almost at gender parity in graduate-level study, in contrast to the decline in the U.S. and a stagnation in the U.K.
“Women now make up 34% of the IT workforce in India, with the majority of these workers under the age of 30. Indeed, the youth of the Indian IT labor force has significantly powered its rapid growth, and the country is now almost at 50:50 gender parity rate in STEM graduates.” — Katy Ring for 451 Research
From a workplace perspective, we still have a long way to go. As per a report in The Muse, women only hold 25% of IT jobs. In order to attract more women to IT jobs and to bring about some much-needed diversity, we really need to start from the bottom.
There are many programs — like Women in IT, Girls Who Code, etc. — where companies are trying to attract young female talent. However, I personally feel that’s not enough. Young women are always looking for female role models (I know I was back when I was growing up), and we need to do a better job of giving them these role models. Promoting women to C-suite and managerial roles can be the first step.
Today’s generation has so much screen time, with smartphones and easy access to the internet, and they gravitate towards “cooler” things, such as TikTok and YouTube, because these seem easy and like a quick money-making option. No disrespect to any of these platforms, but I think IT companies need to come up with creative ways to advertise how these “cooler” things are built and how the younger generation can help shape the next TikTok or YouTube.
Source: https://medium.com/better-programming/did-you-know-that-only-26-of-computing-related-jobs-are-held-by-women-ace9aca97d21 (Asha Rani, 2020-12-23)
Offshore, Outsource Or In-House Development, Which Is Best For Small Business Apps?
Across the globe, numerous businesses are facing rising pressure to compete and grow their operations to the next level. The way of doing business is evolving today with the transformation in the tech arena. Technology is now a competitive keystone of most companies.
But now many companies are asking, should I develop in-house or outsource? What about offshoring?
Outsourcing vs. in-house is a decade-old debate in the custom software development industry, and mobile app development is adding new dimensions to it. In-house is suitable only in a few instances. On the other hand, outsourcing is an evergreen option to explore with the onset of new technologies. Also, offshore development has become a trend and a tried-and-tested model today. IT services market growth will be largely driven by offshoring organizations throughout 2016.
A small-sized business rarely appoints external developers to build up an application. A huge enterprise, on the other hand, has some potential advantages in managing that hard work in-house. Even so, it makes more business sense to hire a faithful, external mobile app developer to construct your successful mobile app instead of keeping it in-house, even for huge companies.
Let us evaluate the pros and cons of all 3 development approaches mentioned above, to help you make the correct choice for your next app idea.
In-house Development
Small businesses often believe that hiring in-house mobile app developers will make the development process faster, more controllable, and cheaper. But if your company is planning for in-house developers, then a few points need to be analysed for finer resource utilization.
Pros:
It brings in transparency in operations during the app development process.
There are no limitations regarding culture.
Save on extra expenses by employing existing resources.
You can recruit quality developers at affordable prices.
No worries about who owns your source code.
Cons:
Starting expenses are very high, especially to set up the entire infrastructure.
Costs can also run high if you want to get all the essential certificates and licenses for tools, code, and software.
If your app developers get stuck mid-way and external consultants have to be roped in, there is a risk of running up additional costs.
Outsourcing Development
Now we come to those app developers who are professionally engaged in the mobile app development process. Handing your project to a top mobile app development company is a wise step if you wish to save considerable money and time, and thus end up with a superior-quality app that truly delivers value.
Pros:
There is a fixed budget for a particular scope of work.
Instant start of the development process, with no time wasted on hiring formalities.
You can find the experts by choosing the best mobile app development firm.
Fixed monthly payments and related expenses involve minimal risk.
Best mobile app developers get the job done faster with their superior resources and working experience.
Cons:
Small changes could be expensive or take too much time to implement.
No control over the whole app development process and drain of intellectual capital.
Price rates for outsourced development services are typically determined on a per-hour basis.
Offshore Development
Offshoring has helped many rising companies, whether small, medium, or large, turn their dreams into reality. Offshoring is the practice of placing some of your business’s needs with an overseas supplier in order to minimize the cost and time involved. It is a long-term competitive strategy for success and profit.
While technically the idea of offshoring falls on the outsourcing side of the debate, it generally signifies important cost savings otherwise unavailable in your market. A growing number of firms and IT service suppliers now collaborate with offshore software developers (Like Us).
Pros:
Offshore developers can seriously reduce development costs and shorten the production cycle.
They deliver high-quality services in a variety of flavours.
Renowned offshore software firms carry out timely delivery of projects.
Your business will remain operational round the clock.
Complete maintenance and post-launch services once the project is completed.
Cons:
The stability of the offshore countries may be a potential risk.
Sometimes native language and customs do not align well with yours.
Potential language barriers exist in countries like China, though not so much with India.
We Conclude
Having worked with clients on 5 continents and delivered more than 800 projects in the last 10 years, we really think outsourcing has saved tons for our customers and given them significant cost and competitive advantages. For most businesses, this works.
Offshore development is the best option for an app development project. Today, modern communication technologies have made it feasible to connect you with your great offshore app development team in a matter of moments. Through video chats and other modes of communication, it is possible to feel as if you are all in a single room. Offshore providers pull from an entirely different talent pool and may have a ready pool to tap.
The main objective is to get the desired service at a reasonable rate. Business means to make more profits without cutting down the quality. So build an offshore development team that won’t suck with Aquevix.
Let’s start a project!
Source: https://medium.com/tech-ketchup/offshore-outsource-or-in-house-development-which-is-best-for-small-business-apps-14ec8c2aa337 (Amit Jindal, 2016-12-15)
The DVD Seller
It’s been a year since our family settled in the city and a lot of external aspects related to my life have changed. Having been a newbie to the business and a stranger to the city, I’m totally satisfied with the level my tutoring career has reached within a year. Tutoring sessions and college stuff keep me busy all the time and my mental health conditions give me a pretty hard time when it comes to juggling both things. Burnouts have become a regular thing and I’ve temporarily quit medication since it keeps me in a sedated state when I’m on them.
These circumstances prevent me from visiting my friend frequently as I used to do but still I try my best to purchase something from him at least once a month. However, when I cross the town, unconsciously I always tend to check whether he is in his usual spot even if I didn’t get to talk to him. Maybe my fondness for the movies and their seller causes that.
It’s 2017 and the whole world has been anticipating the release of the action thriller John Wick: Chapter 2. As usual, I miss the screenings in the theatres and am looking forward to getting my hands on a good copy of the movie. It’s May and four months have passed since the release of the movie.
One evening, when I’m heading home after work, I go to his corner of the sidewalk looking for him. But he is not there and surprisingly, it’s a Sunday. People like him barely miss doing business at weekends because that’s when they can sell more items. However, a busy week is ahead of me and that means I’m going to have to hit the city more often. Eventually, I’ll have the chance to see him.
For the next few weeks, I keep looking for him but strangely, there is no sign of him. I even ask about him from the nearby stores and they don’t even have any clue about his absence. I curse myself for never saving his number on my phone.
Is something wrong? Or has he quit his movie-selling job for something more profitable?
I cook various scenarios in my head.
Fortunately, my worrying ends in the next week.
I’m on my way back to the city center after visiting one of my teachers in the hospital. It’s about 10 AM but despite the cool weather, I begin to feel thirsty. I’ve decided to take a few days off for the sake of my mental health. Sometimes, it’s pretty difficult to function without the pills. But when I’m on medication, I merely do anything rather than lying on my back most of the day.
I walk into the restaurant, order a chocolate milkshake, and sit at my usual table. From where I’m sitting I can see the spot where my friend does his business. Even today he is not there and I haven’t bought the movie from anyone else either. I have the rest of my life ahead if I need to watch John Wick 2. But I want him to receive that small amount of money I’m willing to spend on that.
“Where has this guy disappeared to?” I silently ask myself. Someone puts my order on the table but still, I’m lost in my thoughts.
“I think young people shouldn't do this much overthinking.” A familiar voice brings me back to my senses. I look at the lady who just brought my milkshake. That’s when I recognize her.
“How are you doing, sir? Long time no see.” She sits on the chair opposite mine.
Honestly, I feel so glad to see her. Sometimes, there is no better way to unwind rather than having some female companionship. Her glittering eyes make me think that she is pleased to see me too.
“Rashmi, what are you doing here?” I don’t want to hide my surprise. I rest my eyes on this extremely attractive lady who makes me feel at ease every time I’m with her. Having been a shy person, I wonder what magic she works on me. Maybe that’s because she is older than me in two or three years. For the record, I’m 26 at the time.
“Oh! I never had the chance to tell you before. This place belongs to my uncle. He and most of his crew have attended a wedding today. So, he wanted me to take care of business for one day.”
“Well, that’s news to me.” I take a few sips from the shake.
“And, sir, I almost forgot. Last week I received my IELTS results. Guess what? I had nailed it.”
I honestly feel happy for her. But I do my best to remain stoic.
“Actually, it barely surprises me and I tell the same I used to tell before even today. Even before I started doing classes for you, your English was pretty good. But there was this irrational fear that made you think that you wouldn’t be able to go through IELTS on your own. However, as a teacher, you can’t even imagine how happy I am to hear the news.”
She smiles and looks at my half-finished drink. “Tell me if you feel like trying one more drink. And it’s on the house.”
“Thanks. But I’m done for today.” With few more sips, I finish the stuff in the glass.
“So, do you come here often?” She asks looking into my eyes.
“Of course, this is where I dine when I’m unable to hit home for meals in the day.”
“Too bad.” She gives me a teasing look. “If my uncle had known that you are my teacher, he would never have charged you.”
“Do you want your uncle to go bankrupt?”
She laughs it off and waves to a small child who just entered the restaurant.
“Truly, sir, what were you thinking before I got here?” The seriousness of her tone catches me off guard. “Are you still off your medication?”
“Actually, it’s not something about that. Have you ever seen the guy who sells DVDs in front of this place?”
“What about him?” She doesn’t show any special interest.
“Well, I didn’t see him for a while. It’s been about a month but nobody knows why he isn’t here anymore.”
“Well, people have their own reasons.” She says leaning on the chair. “Besides, this is not stone age. Do some digging and you’ll find out something.”
I sense something unnatural. Usually, this is not the woman who takes things too lightly. She is rather the person who does everything in her power to make others feel better.
I observe her face very carefully. I don’t like reading people and I’m not good at it either. But in this case, I feel that she is trying to hide something.
We stare at each other for a few moments and I feel my mouth takes the shape of a silly smile. In the end, she gives up and burst into laughter. And I need no further invitation to join her.
“Okay. I give up.” Finally, she begins to talk, her eyes dancing. “You are right. I know where he is at the moment and I know how to contact him. And I could tell you that right now but I need something in return.”
“Sure. What’s he up to?”
“Be patient, sir, let me finish.” She winks at me. “There is this newly open Chinese near the police station and the food is awesome. Promise me you are gonna join me for dinner tomorrow and you’ll get the information.”
Her request makes me cringe. Of course, I’d do anything to spend some quality time with her but stuff like dinner dates are out of my comfort zone.
“Deal. But on one condition.”
“And what’s that?” She raises her eyebrow. Maybe she thinks that I’m trying to figure out some way to avoid going on a date. After all, she knows well that it’s not my thing.
“From today onwards, please call me by name. I’d be eternally grateful.”
She gives me a look that says something like “please don’t show off your modesty.”
“So, can I take your word for that? And of course, I’ll remember your request.”
“Be my guest,” I say with hope.
She gets up and picks the empty glass on the table.
“Let me fetch him.”
“Fetch him? What do you mean?” I want to make sure what I heard is not wrong.
“He is in the kitchen helping the chef, my good sir,” She teases me.
I look at her in disbelief. Needless to say that I never saw that coming.
“Gotcha.” She says triumphantly after staring at me for a couple of seconds.
Later, when I get to meet him, he lets me know what caused his absence. His business has been below average for the past few months and despite his wife being a self-employed seamstress, they’ve faced many difficulties when it comes to making ends meet. Therefore, during the four weeks he went AWOL, he worked as a helper to a mason. But after that project ended, he found this temporary gig in the restaurant until he figures out something.
His words make absolute sense to me. Most of the people are getting used to downloading movies or using VOD to watch whatever they desire. Even I, myself have recently started to appreciate the Blu-ray format more.
“So, does this mean you are never gonna do the old thing again?”
His answer is quicker than I expected.
“I’m never gonna drop that thing, sir, after all, I’m a movie junkie myself and I love the community. But I won’t be able to do it often like before.
Source: https://medium.com/grab-a-slice/the-dvd-seller-de8ed90af7ab (Salitha Nirmana Meththasinghe, 2020-10-18)
The BuildContext Class in Flutter
Do you know what that context object is? You know what object I mean, the BuildContext object named, context, that’s passed to the build() function all the time. It’s a necessary parameter to a bunch of static functions as well:
build(BuildContext context)
Theme.of(context)
Scaffold.of(context)
MediaQuery.of(context)
Navigator.of(context)
It helps you go up and/or through the ‘render tree’ (or widget tree). In this article, we’ll take a closer look at it. That means we’re going to look ‘under the hood’ of the Flutter framework and get down to what exactly makes up this BuildContext object named context. That means we’re walking through the code. In fact, let’s not keep you in suspense, I’ll tell you right now what this object is.
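As a quick, hypothetical illustration of how that context parameter typically gets used inside a build() function (the widget name and layout values here are placeholders of mine, not code from this article):

```dart
import 'package:flutter/material.dart';

class MyPage extends StatelessWidget {
  const MyPage({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    // Each .of(context) call walks up the tree from this widget's
    // location to find the nearest matching ancestor widget.
    final ThemeData theme = Theme.of(context);
    final Size screenSize = MediaQuery.of(context).size;
    return Container(
      color: theme.primaryColor,
      width: screenSize.width * 0.5,
      child: const Text('Hello'),
    );
  }
}
```

In other words, the context is the widget’s handle on where it sits in the tree, which is exactly why those static of() functions all demand one.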
It’s an element.
I Like Screenshots. Click For Gists.
As always, I prefer using screenshots in my articles over gists to show concepts rather than just show code. I find them easier to work with frankly. However, you can click or tap on these screenshots to see the code in a gist or in Github. Further, it’s better to read this article about mobile development on your computer than on your phone. Besides, we program mostly on our computers — not on our phones. Not yet anyway.
Let’s begin.
Not In Context
As it happens, we’re not going to examine the BuildContext class itself in any great detail per se. With it seemingly being such a pivotal player in the Flutter framework, simply presenting the part it does play in a typical app should be enough. Besides, it’s an abstract class — you have to create a subclass and implement all the fields and functions it contains.
I’ll present a screenshot of the class below with all its documentation and deprecated functions removed — just to give you a hint as to its role in Flutter. You may recognize some of the functions in contains and may even be surprised that it’s in this class where these functions reside. Next, we’ll determine the precise subclass that uses BuildContext. Granted, I have already spilled the beans on that one, but act surprised anyway, ok?
The Element of Widgets
Let’s step back a bit and first look at the class, StatelessWidget. Below is a screenshot of one with all its documentation stripped out as well so you can see just what a StatelessWidget is made up of. Not much to it, is there? It’s an abstract class, and so a subclass must, of course, implement its build() function. We pretty much knew that already. However, what’s this createElement() function? It instantiates another class called, StatelessElement, and actually passes a reference to itself as a parameter. Let’s take a look at that class next.
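In case the screenshot doesn’t come through, here is a lightly simplified, excerpt-style sketch of what that class looks like in the Flutter framework with the documentation removed (a sketch, not a verbatim copy of the framework source):

```dart
abstract class StatelessWidget extends Widget {
  const StatelessWidget({Key? key}) : super(key: key);

  // Creates a StatelessElement, handing it a reference to this widget.
  @override
  StatelessElement createElement() => StatelessElement(this);

  // Subclasses implement this; the framework calls it via the element.
  @protected
  Widget build(BuildContext context);
}
```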
For the class, StatelessElement, I decided to leave the documentation in — what little there is. Pretty short class as well. Note, it takes the StatelessWidget parameter and passes it on to its own parent class, ComponentElement, and so, we’ll press on there.
The class, ComponentElement, gets a little more involving, and so I just took a screenshot of the start of this class. Note, it too is an abstract class. Makes sense too, as it contains the very build() function that needs to be implemented. Of course, what we’re doing here is going back through the class hierarchy at the moment. Don’t worry, we’ll be re-visiting these ‘intermediary’ classes again. We now have the class, Element, to look at next.
Again, we’re looking at just the start of the class, Element, next. Good thing too. In comparison to the classes so far, this one’s huge. In all, it’s made up of 90 functions and properties. Yeah, this is indeed an important ‘element.’ However, we’ve arrived at what I was getting at. What do you see?
The class, Element, implements the class, BuildContext. Unlike Java, for example, any class in the Dart programming language can be utilized as an Interface — you merely append its name on the end of a class declaration behind the keyword, implements. Of course, unless your class’s abstract, you then have to implement all the functions and fields that make up that class. In the case of the class, Element, implementing the BuildConext class with its many members — no wonder there’s now a large number of functions and fields. With this little exercise finally over, we can conclude, as with all object-oriented languages, an object of type, Element, can also be passed to functions or constructors as an object of type, BuildContext. Surprise!!
The Flutter documentation states they decided to use the BuildContext as an Interface so to discourage direct manipulation of Element objects — whatever that means. I mean, who wants to deal with 90 class properties anyway!?
So, now you know, when you place a breakpoint in your favorite IDE right on the build() function line, you’ll discover the context parameter passed to that function is, in the case of the StatelessWidget, the very StatelessElement object we first saw instantiated back in the StatelessWidget screenshot. Note, the same is true for the StatefulWidget. In its case, however, it involves the object, StatefulElement. Both these two types of widgets are listed below.
Source: https://andrious.medium.com/the-context-in-flutter-e2403bab4632 (Greg Perry, 2020-12-22)
PayPal Co-Founder and First YouTube Engineer Talks About His Work at Origin
Yu Pan was one of the six co-founders of PayPal and the first engineer at YouTube. He’s now working as an R&D Engineer at Origin Protocol. In this video, Yu Pan shares why he joined the Origin team and what he enjoys most about the culture.
If you’re interested in being part of the Origin team and getting to work alongside Yu Pan and our other absurdly talented engineers, we’re hiring!
Origin has an untraditional application process. As an open-source project with over 150 contributors, we hire almost exclusively from our community. There are no white-board interviews. The best way to get our attention is to jump into our Discord, join one of our weekly engineering calls, pick an issue from our public project board and send us a pull request.
Learn more about Origin:
Source: https://medium.com/originprotocol/paypal-co-founder-and-first-youtube-engineer-talks-about-his-work-at-origin-e8c9b4ba2973 (Josh Fraser, 2020-01-17)
Why Bragging About Your Online Success Isn’t Helping Me
Or anyone else for that matter
Image: olezzo/Adobe Stock
If you’ve spent any amount of time on the internet, then you know what it’s like to come across the humblebragger. At least, they think they’re being humble. If you’ve been on social media, then you’re also familiar with overt braggarts.
You’re minding your own business, scrolling your newsfeed, and you stumble across a longwinded post from an old friend. You know them well, but a lot of people also know them because they’ve grown into something of an Internet sensation.
Let’s be honest, anyone with a bit of knowledge these days can build a healthy following online. And your old friend just made a post that was a longwinded way of bragging about all of their achievements. There was literally no other point to the post, but to highlight all of the incredible work they’ve done.
While there’s nothing wrong in theory with being proud of one’s accomplishments, it hits a little different when you take time out of your day to write a post with the intent to brag about those accomplishments.
I’ll be completely honest with you. That post changed my view of that person. It’s something I hadn’t noticed about them before so I went to browse the rest of their posts and realized it was par for the course. That’s all they seemed to do. And, by injecting words like blessed, proud, humbled, they tried to make it sound like something it wasn’t.
What is Bragging?
Look, social media is social. It’s there to share your life with your friends and family. So, it makes perfect sense that you share moments of success and happiness. Of course, you want to announce a promotion, a new addition (whether human or furry), or any number of positive achievements or events going on in your life. Generally speaking, people share in your happiness. They congratulate you because friends like to see their friends succeed.
The issue comes when you share not to spread happiness, but to make others feel envious of you. The announcements or information you’re sharing isn’t useful, there’s no informative purpose, it’s intentionally dysfunctional, and is all about showing off. When you brag, there are two considerations — the information you’re sharing, as well as the people you’re sharing it with.
If we take it in a professional context, you may make a post on LinkedIn announcing an upcoming paper. That’s useful information. However, making a random post where you tell your audience you’ve been cited thousands of times… that’s not helpful or useful.
Which begs the question, if someone wants to share information that isn’t useful or helpful, why do they do so? What were they trying to accomplish? What harm are they causing by sharing as they did?
Let’s define bragging. If it comes up naturally, then revealing impressive information about yourself isn’t necessarily bragging. For example, if someone asks you where you live and you happen to live in The Hamptons, then you aren’t bragging by responding to their question. Now, if you were to disclose it without being asked, then it’s bragging.
If you complete your LinkedIn profile by filling in relevant achievements, then you’re not bragging you’re simply doing what the platform requests. So, perhaps the best way to determine whether something is bragging or not is to consider whether you’re imposing your status elevating thoughts on others by sharing what you’re sharing.
The Result of Bragging
What bragging does is highlight negative information about you, the sharer. You’ll become known as a braggart. Guess what? Most braggarts don’t know they’re braggarts, but the rest of the world recognizes it and they don’t like those people. What that bragging impresses upon your audience is that you hold a negative view of the people around you because ultimately, the message you send is that you’re better than them.
If only those people kept their bragging to social media where you could mute them. Unfortunately, the people who brag online are just as apt to do so in person. Research shows that bragging is linked to more undesirable traits. People who tend to brag when positive events occur are reportedly less empathetic, less agreeable, and less conscientious. Whereas those who brag and overshare have higher levels of narcissism.
It shouldn’t come as a surprise, but what might come as a surprise is how often you brag without even realizing you’re doing it. I’d encourage you to go look at the posts you make and determine whether you’re guilty of a humblebrag or two or if you go all out with straight-up, overt bragging. If you do, what does it do for you?
Often, what happens is the braggart attracts a crowd. A lot of people gravitate to them. Those people ingratiate themselves to the braggart because they see a benefit in doing so. Generally, these are people who have a lower status and/or ulterior motives to build a relationship with the person. The braggart now has an entourage. It sounds innocent, but in the real world (and online) that entourage can be used to tear others down in a bid to protect the braggart.
This is something you can watch unfold online all the time. A celebrity can call someone out and all of their fans rush to attack that person. It isn’t just celebrities who do this, however, anyone with a following can sick their followers on someone with who they have an issue. It creates this vicious cycle of bullying and power exertion.
There’s a new habit emerging, people brag (online and in-person) about the success they’ve found and they brag about it by acting as though they’re trying to inspire you to also succeed. They package their brag as a way of empowering you to take the same steps they did to create the success they have.
In fact, some people even go to the trouble of writing it all out and selling the information in a book. Then, they brag about that success. Look at what I did! I managed this online business while working a full-time job and now look at me! What are you waiting for? It may come with good intentions, but it doesn’t always lead to positive results.
We’re all guilty of a bit of self-promotion now and again. We want to highlight our strengths, we want to have our moment to show our expertise or competence, whether it’s in the workplace, online, or in general. We all do it.
But what is your intention when you do so? Because I can tell you this, when you brag about your online success it isn’t helping me. It isn’t helping anyone. It isn’t helping you either, because one over-the-top brag or poorly worded post highlighting your success can tank your reputation. You might continue to brag, thinking others are jealous of your success or inspiring success in others and the reality is it’s making everyone dislike you.
All that to say, be careful how you communicate your success to others. You might think you’re helping, but you’re more than likely doing real damage to the people around you.
Source: https://medium.com/assemblage/why-bragging-about-your-online-success-isnt-helping-me-3471fe1a6d65 (George J. Ziogas, 2020-12-28)
Books That Foster Critical Thinking: How Not to Be Wrong
The irony of the title of Jordan Ellenberg’s How Not to Be Wrong: The Power of Mathematical Thinking is the emphasis he places on attempting to prove himself wrong daily. He writes, “Believe whatever you believe by day, but at night, argue against the propositions you hold most dear.” Ellenberg’s mindset can benefit everyone.
I love Ellenberg’s style of blending the art of writing and telling a perspective with the science of mathematics and behavioral psychology. Ellenberg makes statistics and mathematics approachable to the lay individual and causes the clinician to challenge their assumptions. I view research and statistics in a new light since reading this book.
“Human beings are quick to perceive patterns where they don’t exist and to overestimate their strength where they do.”
Find more recommendations at zacharywalston.com
*Book link is an Amazon Affiliate Link
Source: https://medium.com/curious/books-that-foster-critical-thinking-how-not-to-be-wrong-50b5f0306feb (Zachary Walston, 2020-12-20)
The Big Bang and the Big Whisper
What these models are telling us about our scientists
There is no greater question than understanding the beginning of our material universe. The prevailing Big Bang theory has a single flaw: it is stuck in understanding the very beginning itself. There is an alternate view, very similar but nevertheless distinct: the Big Whisper theory.
— -
The Big Bang and Big Whisper models.
The Big Bang model has a storyline that is shown in this simple table with two green fields, indicating the hot and dense state as the starting point and the maturation point for matter as found with the Cosmic Microwave Background Radiation.
Prior to the two green fields, there is actually no real input in the Big Bang model. Basically, scientists nowadays start the green section with something called a singularity, which means that they don’t have a good answer for whatever was prior. In more proper language: The scientific models, when applied to the origin of matter, do not lead to an intelligible, usable outcome.
While the storyline is not captured fully due to the evasiveness of the singularity, scientists consider the overall storyline to be one storyline indeed. They are looking for a single storyline in green.
— -
The Big Whisper takes a different approach. It starts with the reasoning that something cannot come forth from nothing. Ergo: there was something prior and this prior reality is then undeniably true.
Instead of starting out with a singularity (or with a quantum fluctuation field), the Big Whisper starts with that something prior. The question is then not how matter derived from it, but rather how that prior state could have ended.
How did the prior state end?
By turning around the most fundamental question of where the material universe came from, we can start out with a different position to ultimately look at the same results.
The prior state could not have ended by itself; it could not have simply decided to become matter. Something real must have happened that was not based on what would happen in the future, but on what took place in the then and there. Something had to be amiss for something new to come forth from the prior state. So how could something have gone amiss, then and there?
Naturally, there are very few handles. A good scientist would likely be lost quickly because a scientist cannot dwell in a prior state without much input. Particularly, if the future outcome has self-based features, then a scientist would find zero help in anything material for understanding what took hold prior to materialization. Scientists are lost because they are tied to a single storyline.
In the Big Whisper theory, there are two storylines. For the sake of keeping them apart, let's call the material storyline Universe 2.0.
—
The two storylines are shown in this same table, with a storyline in grey about Universe 1.0. The grey storyline ends at the CMBR, where the green storyline starts for Universe 2.0. Note how the prior grey storyline extends into the 'Prior state' field, where the Big Bang model has a green field instead.
Universe 2.0 starts out at a distance of 380,000 years from its mathematical center.
As we all know, the oldest data we have about our universe is the CMBR. Anything declared older is hypothetical, although much can get theorized from known (younger) information about what existed prior to the CMBR moment and location.
In the Big Bang model, time and space started in the mathematical center.
In the Big Whisper model there is no such assumption about time and space. We know nothing about time and space (there are no scientific facts about their origin), and so they are declared phenomena that already existed in Universe 1.0.
The two grey fields do not tell the entire storyline of Universe 1.0. It is not known how Universe 1.0 came to be, but this is also not part of the quest. The quest is figuring out how it ended.
— -
Because matter is moving outwardly in our material universe — and we have known about this since Lemaitre and others proposed this 100 years ago — it is not outlandish to imagine an inward motion first that would have ended Universe 1.0.
With an inward motion, there has to be a stop at some point. If the inward motion stops, it can revert itself. A wind-up toy is a good example. Wind it up, and the release of tension causes the toy to do a trick or two until the pent-up energy is spent. Then, it can get wound up again.
If the inward motion occurred in an energized environment, involving an entire environment or a large part of it, and it didn’t stop:
What would happen then?
Like a wind-up toy being wound up too much, beyond its capabilities, the innards would have gone kablooey. All the tension that had built up went awry in one smooth jolt, and if one isn't careful, a person could get hurt by the exploding innards. The sharp metal of the spring mechanism could easily leave a cut in someone's fingers.
— -
Most likely, the inward motion got started up kind of innocently, perhaps for no good reason at all. But slowly, bit by bit, the inward push grew more intense. With more push coming in from behind, surely another push forward toward the center could get made.
Let's ask the same question differently, this time about a vase and how it can shatter. All we need is a drop and a floor, of course, and the vase can break into a thousand pieces. But if Universe 1.0 is a vase, how can there be a floor? There is not yet any matter, so how can the inward motion stop?
The vase broke on a self-established floor when inward push bumped into inward push and the very ingredients of the vase itself succumbed to the experienced pressures. It is possible that just the center broke under all this pressure and a subsequent catapulting action jolted everything toward the outer boundaries.
Yet this scenario has one trick up its sleeve that makes it less likely. If all kablooeyed back into place, then there would be the opportunity for Universe 1.0 to remake itself as it had been before. The scenario is too simple in that it did establish the vase and the floor, but it kept all broken pieces as part of Universe 1.0, and so a return to the original version was possible. The storyline as presented established a breaking of the vase, but it did not establish a full and eternal break among the parts.
— -
There is a phenomenon on Earth that provides us a glimpse of how to answer a final stage of inward motion. A hurricane has a depression at heart, and air flows are streaming in from all sides. But what is most interesting is that the winds form an Eye. There is really not much of a there there in the center, other than it being flanked by rising air in the surrounding circular wall forming the Eye. Blue skies above inside the Eye.
Jules Verne provided us another clue in Journey to the Center of the Earth, when Professor Otto Lidenbrock and his team get to the center. At that point, a compass no longer gives them direction.
In the center, the data becomes zero. There, there is no there.
In the Big Whisper model, an inward motion is envisioned in the dark-energy setting of Universe 1.0, and fortunately for us, the inward motion did not stop.
In the scenario of an inward motion like a hurricane's, or of the center of the Earth where there is a spot without data, one can envision a clean break between various parts because the center behaves differently from the outer regions.
If the inward motion established a center in which all pressure got locked into place and friction would not even be possible, then we have a center without any movement, a stalemate situation. However, there would automatically be a first spot where friction would be possible right next to it.
Inward motion occurred in a larger setting, enormous but by its very nature also limited. Once the frictionless center had grown extremely large, the remaining push from behind would not be able to add more frictionless pressure to the center. At that point, friction first becomes possible. That is the very spot where all hell broke loose.
Next to the almost windless Eye of the Storm, the strongest winds on Earth are measured in the Eye wall. The zero-data point in the center is accompanied by the point of maximized output sitting right next to it. All inward pressure still pushing in establishes a sideways move at the first point of possible friction. The strongest winds on Earth are found next to the calm Eye; the strongest movements on Earth occur right next to the solid core; the most extreme movements of Universe 1.0 are found right next to the frictionless center.
The fabric of dark energy was torn apart in the very first spots available, right next to the supersized center of tension, occurring with the final throes of Universe 1.0.
There is an outcome with a clean break.
While all catapulted outwardly, the energy of the center had not sustained any fundamental damage. It broke apart in all directions, but it remained dark energy.
The dark energy with its fabric torn right next to the center also catapulted outwardly, as did all dark energy that had been involved in the inward motion. Yet once the tension had been reversed and decompression had been established, the damaged dark energy kept on going and going and going. A full disconnect had been established by then between the original dark energy and the damaged dark energy.
The damaged energy was expressed at the CMBR, in the locations where the release of tension is spent and decompression therefore a fact.
While both the Big Bang and Big Whisper models have the outward motion, the storyline for Universe 1.0 in the Big Whisper model ends with this outbound beginning. Some dark energy had already been damaged by then, but due to the ongoing decompression, there was no option yet to express this. Universe 2.0 began with the expression of the damaged dark energy once all pent-up tension was spent.
In the Big Bang model, the entire story is about matter and how it came about. Therefore, scientists are making truly only one mistake. They give themselves the freedom to theorize, and yet they fail to theorize about the ending of the prior state.
All they end up doing is theorizing about the extent of matter, how matter could have come about. It is like asking a baby where it came from. We all know the answer, because there are mature people around to give us that answer. Yet scientists in their Big Bang environment hold on to the baby with all their might and never ask themselves the real question:
What happened prior?
This question brings us back to the original and even more fundamental question:
Can we have something from nothing?
Most folks nowadays agree that it is impossible to have something from nothing. Even those that play with quantum fluctuation fields do not start from nothing; they start from fluctuation fields.
The baby did not arrive here from itself. It did not produce itself.
— -
There may be other ways to end up with matter coming about in our current universe. But the proposed inward motion of the Big Whisper theory sketches an environment in which not the creation of matter but simply the transformation of energy is possible. Meanwhile, the resulting matter is holding on to unification in the current universe — but only where that is possible. Viewed from the overall perspective, we can know that in one instance unification was not upheld: at the end of the inward motion, at the end of Universe 1.0.
—
I am a structural philosopher interested in the beginning of materialization. In my communications with scientists, I realized they are incapable of going where a philosopher has no problem going. That said, everything proposed in this article can be viewed in light of all known scientific facts and can scientifically be considered plausible. | https://medium.com/carre4/the-big-bang-and-the-big-whisper-da5df67718bc | [] | 2020-12-25 20:26:10.472000+00:00 | ['Beginning', 'Big Whisper', 'Universe', 'Physics', 'Science']
The Fast Reader’s Dilemma | Well, here we go! Monday was the day when stats took on an even more addictive quality by showing writers our earnings daily. Like many of you, I’ve been hitting refresh constantly waiting for the new stats figures to take hold — I have taken appropriate breaks to sleep, eat, and go to the washroom. If I wasn’t before, I’ve now become a stats junkie in need of a constant hit.
Earnings are now calculated from the amount of time a reader spends on each story. While this may prove to be a fairer model than the previous claps based one, what does this mean for fast readers?
According to Medium, the new system does more than simply track how long a writer stays on a particular story, it allegedly monitors how often you scroll, and how long you stay in one particular spot.
In essence, it knows if you’re actually engaging with the story or if you’re just sitting there twiddling your thumbs or picking your nose. I also think it may know if you’re being bad or good and forwarding a report to Santa.
I am a quick reader and I get through many stories on Medium in half the posted reading time. While I do completely read all stories I clap for — I’ve never been one to clap and dash — I’ve often wondered if I stick around long enough for it to count as a read.
So, what can we fast readers do if we want to ensure a writer is being fairly compensated?
Slow down
This is probably the most obvious solution. Sit back and take your time while reading a story. Kick your feet up, take a sip of tea/coffee/or alcoholic beverage between paragraphs, respond to a text, trade jibs with your partner, etc.
My speed reading goes back to my school days. My legendary procrastination skills meant that I would hold off reading assigned work until the last possible second, and magically pull an essay out of my backside just in the nick of time. Reading with speed has stuck with me ever since.
I have found that when I slow down and read at a more casual pace, I end up absorbing more — imagine that!
So, when reading a new story, remember this is not a race! Perhaps one of the benefits of the new system is that we fast readers might learn to slow down and enjoy the process a bit more.
Engage with the story
Okay, I’m a big lover of highlighting. You should see my university textbooks! They are alive with vibrant splashes of yellow, orange, and blue — I love shiny things. My classmates would make fun of me until I aced a test or the final exam, and then they were in awe of my abilities — well, not really.
My love of highlighting extends to Medium articles. If you write something I agree with, that I think is clever, or that I wish I had come up with, I’m going to highlight it.
Stopping to engage with the story you’re reading through highlighting or clapping is a great way to moderate your reading speed. I know I get a thrill when that hallowed green circle appears informing me that someone has highlighted something I’ve written. I get all warm and tingly and have a Sally Field moment: “You like me, you really like me!”
A slow scroll
Can we get real here? You know sometimes you click on a story that has an amazing headline and then you’re kind of “meh” after reading the first paragraph or two — don’t judge me because I know you do the same thing! I don’t want to be one of those jerks who are only on the story for a minute — I don’t roll that way.
These types of stories are like going on a date with someone who is really attractive but has the personality of a walnut. So, you decide that you'll be nice, pay for your half of the meal — maybe get a few good consensual feels in — and then head home, never to meet them again.
If you're not taken with the story you're reading, just scroll through it slowly. Treat the article like it's that co-worker who goes on and on about their kids. You couldn't care less about the fact that her son is a child prodigy because he used the toilet on his own for the first time — you kindly avoid mentioning that the kid has to be about ten now — but for the sake of office peace you smile, nod, and just process enough of the conversation to get the gist of what she's saying.
Use the opportunity to note why the story didn’t grab your attention like the headline did, so you can avoid the issues in your own writing. Turn a negative into a positive learning experience.
I guess what I am saying is, if you’ve shown up to the party, stay long enough to make the host happy and then run out of there (and fill up a Tupperware container of appetizers when no one is looking). | https://medium.com/the-partnered-pen/the-fast-readers-dilemma-efef49d8c25b | ['Daryl Bruce'] | 2019-10-29 23:31:50.957000+00:00 | ['Medium Writers', 'Stories', 'Writing', 'Reading', 'Funny'] |
Remote Work Heralds the Start of a Cultural Change | 2020 has been a tough year for most people — private constraints and uncertainty are a grueling force when they continue without a clear end on the horizon. Coupled with store and restaurant lockdowns, productivity losses, and frozen project budgets, 2020 was a punch in the gut for millions of people.
But there is also a reason for optimism. The world of work is a heterogeneous spectrum — those who were among the people who were able to switch to remote work this year experienced the beginning of a cultural change and a profound shift in how people evaluate work and their role as workers.
The numbers are clear
In August, the technology group Cisco conducted an international study called “Workforce of the future”. More than 10,000 people from Great Britain, France, Germany, Spain, Italy, Poland, Russia, the United Arab Emirates, Switzerland, the Netherlands, Belgium, and Luxembourg, who worked from home for at least 10 consecutive days, were asked about their experiences and assessments of working at a distance.
In Germany, the tendency is clear: the majority wants to hold on to the positive sides that have been created by a more flexible working environment: 61% like the opportunity to work in distributed teams, 60% appreciate the greater autonomy, and for 58%, faster decision-making processes are a big plus.
It seems that the initial burden of the changeover has subsided: more than half say they have their private lives under better control than before the crisis. Only 16 percent reported a deterioration. In addition, 60 percent say they work more productively than during the first lockdown in spring. In most cases, superiors' confidence in the teams' work had not suffered. In international comparison, there are no major differences — people around the world have gained through distance work.
According to the Cisco study, there could also be another positive side effect: less traffic. Around half of the teams would like to take fewer business trips as a result of the newly established work routine.
A new self-image of work and workers
One number that stood out for me is this one:
86 percent of those surveyed would like more personal responsibility.
This statement supports the tendency that we are in the middle of a cultural change. This cultural change is challenging old ways of looking at the world of work. While for decades it was considered an achievement to have fixed working hours and to be able to enjoy the union-fought free evenings, it now seems to be okay for more and more people to work at unorthodox hours — as long as they can decide for themselves.
Moreover, people are developing a more abstract concept of what a company actually is. As work shifts more into the private sphere, questions of occupational health and safety, and especially of loyalty and a sense of belonging, arise. I believe that one of the biggest long-term challenges will be to foster a strong corporate culture, trust, and cohesion within teams when direct contact is rare.
The self-image of workers is also changing. The longing for more autonomy and the increased focus on the private sphere is creating an image contrary to the previously widespread model of “command & control”, according to which workers must subordinate themselves and decisions about their place of work and working hours are primarily made by the employer.
The flip side of this is that the managerial work of organizing one's work is handed over to employees who may be unable or unwilling to take on that responsibility themselves.
More than just a change of workplace
In all the chaos that 2020 has brought, it is easy to overlook the fact that the home office has heralded more than just a change of workplace.
Rather, after a long time, a new space is opening up again to reflect on the value of work for the individual, society, and the economy, and how this space is being designed. | https://medium.com/work-today/remote-work-heralds-the-start-of-a-cultural-change-363bfe015c9a | ['Alice Greschkow'] | 2020-12-07 16:09:07.827000+00:00 | ['Productivity', 'Remote Work', 'Work Life Balance', 'Work', 'Work From Home'] |
Investing In Business Intelligence Software | A wide range of industries are starting to lean into a data-driven culture, focusing on data, as one of their most important resources for decision-making.
Photo by Campaign Creators on Unsplash
It is a fact that every company, no matter the size — from start-ups to more established companies — manages different types and volumes of data. Data can talk, but how can we read it?
When it comes to data, we must understand that it is originally raw, so it needs to be cleaned up, organized, and analyzed in order to provide us with information we can communicate. All these processes take time, and as we know, time is non-refundable and also represents a cost.
So, what if it were possible to access all the data of a business in real time? Try to imagine accessing the right information in a matter of seconds and making faster and better decisions. This is what business intelligence software is about. A business intelligence tool can provide much-needed information about a business without relying on IT, in a short period of time, and with the right visualizations that will help us digest the data and draw insights from it.
The Benefits Of Business Intelligence Software
Photo by Monty Rakusen on Getty Images
Real-time business intelligence software will help companies get access to a huge volume and variety of sometimes complex data in a matter of seconds and in a much easier way. One of the big benefits of BI tools is that they are highly accessible. These tools allow users to see different types of graphs, dashboards, and visualizations on every kind of device, from any screen you may have in the office to mobile phones. No matter when or where, the data will be available in real time.
Real-time business intelligence will also help to improve decision-making by providing a more detailed understanding of all the business's numbers, metrics, and key performance indicators. Having all these daily, weekly, and monthly KPIs and contrasting them with previous time periods, short- and long-term trends, historical data for patterns, etc., will allow us to dive deep into the data and extract amazing insights from it. This will not only lead to improved metrics performance but also help uncover possible gaps and areas for improvement.
Photo by Luke Chesser on Unsplash
Besides uncovering new market opportunities, going deeper into the data will also help to identify outliers. These anomalies can hide behavioral trends that would otherwise go unseen, so discovering them makes it easier to find the answers we may be looking for. Information can provide answers that explain the reasons for issues or problems the business has but that we haven't found before.
Another point worth highlighting is that most BI tools are self-service software. This means that everyone in an organization can have access to crucial business data without requiring deep technical knowledge and without relying on other departments. This gives the end user much more independence, saving a lot of time and resources.
Why Invest In Business Intelligence?
Not everything is about fast analysis, intuitive visualizations, and data-driven decisions. It is also a matter of costs.
As mentioned before, it is also a matter of reducing costs in the short term. Of course, buying and investing in business intelligence software will require an initial investment, but it will be reflected in an increase in business performance and efficiency.
Therefore, the initial payment will be transformed into a faster route to the goals the company is aiming for, which will translate into a significant increase in profits.
How?
By understanding better our data, we will understand better our business. This means, we will uncover opportunities, as well as areas with place for improvement, cutting costs and, yet again, earning more money.
When to start investing in Business Intelligence Software?
If there is a need to analyze reports to find points of improvement and grow the business — and, of course, somebody to consume the data in those reports — it could be a first signal that business intelligence software is needed. Business intelligence tools can help to scale a business.
A second reason could be the volume and the different data sources a business can have. Sometimes data does not come from the same source, so to analyze and display it all together, it has to be integrated in a single place. This is also possible with a BI tool.
Last but not least, spending a lot of money on advertising without analyzing its impact can also be a warning sign. Maybe it is time to spend less on advertising and start thinking about investing some of that budget in business intelligence.
Conclusion
Having real-time data will not only improve decision-making but also help you take more proactive action instead of being reactive and moving more slowly. It is vital for every company to understand its data, because this will help it better understand its customers' behavior, make better decisions, and learn from mistakes committed in the past in order not to repeat them in the future. With real-time business intelligence software, all this is possible.
There are many different types of business intelligence tools, from cheaper and simpler ones to others that are more expensive and complex. It is all about finding the one that best fits your business's possibilities and needs.
Sometimes we feel like we need to get more data-driven in our business, but we don't know where to start.
For me, here is the start…
Let’s take the next step! | https://medium.com/digital-diplomacy/investing-in-business-intelligence-software-c4b489c42edf | ['Martina Burone Risso'] | 2020-10-19 12:18:42.651000+00:00 | ['Investing', 'Technology', 'Business Intelligence', 'Data', 'Data Visualization'] |
From Need to Impact: Designing Products for Social Good | If you’re one of the over 2.5 billion people who uses Facebook each month, you likely know it as a place to find news, keep up with friends, or even buy or sell some of your belongings. But did you know that you can also mark yourself as safe during a crisis, raise money for nonprofits you care about, sign-up to donate blood to local blood banks, and help safely return children when they go missing nearby? In fact, there is a team at Facebook dedicated to using the platform to create real-world, social impact.
But at a company as large as Facebook, how do we translate needs into ideas, and get those made into features that will positively impact billions of people?
The secret to making a valuable product is to identify latent demand in people’s behavior and to make that behavior easier and more impactful. This is the same way we approach Facebook’s social impact products. Below, I’ll take a look at two examples: Facebook’s AMBER Alerts and Crisis Response and show how they went from concept to launch to become the helpful products they are today.
What is Social Impact?
You’ve probably heard the term “social impact” a lot, especially recently. Organizations use this term to refer to the effects their actions have on the well-being of society. For example, Habitat for Humanity makes a positive social impact on communities by providing housing for low-income communities. On a platform like Facebook, the community is diverse and global. Therefore, as designers on the Social Impact team, we need to design products that positively affect the well-being of people around the world.
I’ve heard this type of design referred to as “Design Activism” by Francesca Desmarais in her talk about climate adaptation at Design Matters 2019. She said that for her, design activism is a marriage of her passions and her work. At Facebook, our Social Impact team of product managers, engineers, data scientists, researchers, content designers and product designers gets to marry their passions with their work every day.
Identifying Latent Demand
We sometimes see people who use our app trying to accomplish things that we don't yet support. We can identify this type of demand in many ways, but often this identification process follows the same pattern: First, we see a behavior anecdotally, such as users posting about missing children — which is what inspired AMBER Alerts. Or a Facebook company employee experiences or becomes aware of a crisis and wonders how the company might use one of its apps to help. Next, we either substantiate that behavior through data or build a small test to measure latent demand for a product that might meet it.
An example of how our observation of latent demand led to a product innovation is the recommendations feature. The team found that a significant number of people posted to ask their networks for recommendations. Upon further investigation, the team found that these posts often followed the same pattern: Someone would post about an upcoming trip, asking their network for recommendations on what to do, where to go out to dinner, etc. Friends and family would reply, filling the comment thread with suggestions about places to visit or restaurants to try. So, in order to make this behavior easier, the team built a new type of post to solicit recommendations, which organized all the suggestions into map points for easier discovery and planning. | https://medium.com/facebook-design/from-need-to-impact-designing-products-for-social-good-9061be38cefa | ['Garron Engstrom'] | 2020-10-21 17:03:06.038000+00:00 | ['Product Design', 'Social Impact', 'Technology', 'Social Good', 'Design'] |
Your Views Are About to Drop | If you’re at all like me, you constantly obsess over your stats. It’s the last thing you do before bed and the first thing you do in the morning. My partner is constantly in my ear, “Seriously? You’re looking at your stats again?”
"Yeah, that's right, buddy. Ain't no shame in my game," I said as I shooed him away.
I’m not anti-stats so I’m not going to sit here and tell you to stop checking your stats, because A) I’m no hypocrite and B) stats can teach you a lot when used correctly.
During my time on Medium, I’ve made it a priority to pay attention to patterns and I’ve been enjoying sharing them with other writers.
For example, in this piece, I write about how I noticed my views are higher on days where I don’t post and that in general, my posts do better when I give them time to breathe. I don’t agree with the writers who say you need to publish every day to be successful.
Not only is it not realistic for many of us but in my experience, it’s also just not true.
In this piece, I talked about how you don’t need to adhere to the unspoken ‘rules’ of Medium. You don’t need to use headline analyzers, agree to everything an editor says or choose only one niche to be successful. I know because I’m the living, breathing proof.
I write true crime, history, crappy ‘poems’ and random things I just feel like sharing, and I do alright on Medium.
Today, I’m here to share another pattern I’ve noticed.
Your views are about to drop dramatically but you shouldn’t panic. Why? Because the holidays are soon upon us.
Since September, I have had anywhere between 1,500 to 2,000 views daily, on a consistent basis, even when I didn’t post for days. Some days, I get lucky and manage to hit 3k.
On November 11, I noticed my views DROPPED big time. Barely anyone was reading my work. No highlights, no claps, no comments, no new followers.
On November 8th and 9th, I had 3,552 and 3,480 views, respectively. On November 11 and the following weekend, my views ranged from 1,115 to 1,300 views.
That is a substantial drop in views, it’s more than cut in half.
And, like the drop in views, my heart dropped, too.
However, by the end of the weekend, I noticed my views went right back up, ranging from 1,700 to 2,000.
I racked my brain all week trying to figure out what changed, what I had done wrong, and suddenly, it hit me.
It was Remembrance Day on November 11.
I said to myself, “Not everyone is racing to the computer to read your stories Fatim, people have lives.”
When you begin to spiral and panic about low views, it’s important to remember people have their own lives outside of Medium.
I may not be an American myself, but today is Thanksgiving Day in the United States, and guess what? My views are down.
This got me thinking that views are likely to go down during the holidays too.
Yes, it’s ‘Corona Times’, but that shouldn’t stop people from (safely) celebrating the holidays. You can still enjoy a delicious turkey dinner with your family, (it might have to be on Zoom but at least you have them to celebrate with), you can still decorate a Christmas tree, you can still bake Christmas cookies, make ‘special’ eggnog, perfect your peppermint hot chocolate recipe (I finally got it right!), and more.
For your own sake (and probably for the sake of your loved ones too), don’t spend the Christmas holidays panicking over your views.
Enjoy the holidays (at least what you still can during the pandemic.) | https://medium.com/wreader/your-views-are-about-to-drop-279eb7ab5a54 | ['Fatim Hemraj'] | 2020-12-05 17:42:46.889000+00:00 | ['Medium', 'Writers On Medium', 'Holidays', 'Christmas', 'Writing'] |
Query your (big) data with the power of Python, Dask, and SQL | Query your (big) data with the power of Python, Dask, and SQL
How to get the best of all worlds
This post will describe what an SQL Query Engine is and how you can use dask-sql to analyze your (big) data quickly and easily and also call complex algorithms, such as machine learning, from SQL.
Photo by Moritz Kindler on Unsplash
SQL rules the world
If data is the new oil, SQL is its pipeline. SQL used to be “only” the language for accessing traditional relational OLTP databases. But today, it is much more: any BI tool builds on SQL; for Data Analysts, Data Scientists, and Data Engineers, SQL is a base skill; and even NoSQL databases often implement a language quite similar to SQL to make adoption simpler for their users.
In summary: whenever you are able to map your data pipeline and analysis on SQL, you open up your data for a large range of applications and users. The only question is: what to do if the data does not fit into the constraints of a traditional relational SQL database (e.g. too much data, too many different formats)?
SQL Query Engines
This is where SQL Query Engines come into play. Their job is to make data from various data sources queryable with SQL, even though the data is not stored in a (traditional) database at all. Typical SQL Query Engines work as shown in this figure:
Schematic overview of SQL Query Engines with examples. Image by the author.
Let’s quickly walk through the components:
1. When you issue a SQL query to the query engine (e.g. via a JDBC/ODBC connection in your BI tool), the query first gets parsed and analyzed by the tool.
2. The SQL query engine compares and enriches the query with metadata it has about the actual raw data.
3. As there is no relational SQL database it can issue the query against, the SQL query is converted into API calls of one of the distributed computing frameworks. Additional optimization steps can happen before and after this conversion. For example, Apache Hive submits MapReduce jobs, whereas Apache Impala will (simplified) create on-the-fly-compiled C++ programs.
4. The distributed computation framework is now responsible for performing the actual data analysis and distributing it over the cluster. These frameworks are the reason why SQL Query Engines work especially well with a lot of data: they can parallelize the work well and make use of the full power of your cluster. A typical well-known example of such a framework is Apache Spark.
5. The distributed computation framework needs hardware to perform the calculation. Different frameworks can work with different cluster types and resource schedulers, e.g. YARN, Kubernetes, Apache Mesos, and more.
6. Finally, the data to query lives on an external storage device, e.g. S3 or HDFS. This is very different from a traditional relational database, which contains both the data and the functionality to query it.
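To make the parse → plan → execute split concrete, here is a deliberately tiny, standard-library-only sketch. Everything in it (the supported query shape, the in-memory “storage”, all names) is invented for illustration; real engines use full parsers and optimizers such as Apache Calcite and translate the plan into distributed framework calls instead of a local loop.

```python
import re
from collections import defaultdict

# Stands in for S3/HDFS: the engine does not own the data, it only reads it.
STORAGE = {
    "sales": [
        {"region": "eu", "amount": 10},
        {"region": "eu", "amount": 5},
        {"region": "us", "amount": 7},
    ]
}

def parse(sql):
    # A real parser handles full SQL; this toy only understands
    # "SELECT <col>, SUM(<col>) FROM <table> GROUP BY <col>".
    m = re.match(r"SELECT (\w+), SUM\((\w+)\) FROM (\w+) GROUP BY \1", sql, re.I)
    if not m:
        raise ValueError("unsupported query")
    key, value, table = m.groups()
    return {"key": key, "value": value, "table": table}

def execute(plan):
    # In a real engine this step is where the plan becomes distributed
    # framework calls (MapReduce jobs, Dask task graphs, ...).
    totals = defaultdict(int)
    for row in STORAGE[plan["table"]]:
        totals[row[plan["key"]]] += row[plan["value"]]
    return dict(totals)

result = execute(parse("SELECT region, SUM(amount) FROM sales GROUP BY region"))
print(result)  # {'eu': 15, 'us': 7}
```

The point of the sketch is the separation of concerns: parsing, planning, execution, and storage are independent pieces, which is exactly what lets the real systems swap each one out.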
For the user, it seems like querying a traditional relational SQL database — but the similarity is really only the SQL language.
Why so complicated?
This system of different components that need to play together nicely seems very complicated in comparison with a traditional relational database. However, there are some crucial benefits that explain why SQL Query Engines are developed and used by many small and large companies today.
Using a distributed computation framework is a must to query data in the quantities we need to deal with today. However, distributed computation is hard. SQL Query Engines give us an easy-to-understand interface and hide all the difficult particularities of the underlying systems.
Separating data from computation allows having multiple (automated) ways to ingest and analyze the data simultaneously. It can also reduce costs as computation and storage can be scaled independently.
Reusing the same cluster for more complex, custom data pipelines and more simple SQL queries is again a way to reduce the overall costs.
As SQL is so widespread in use, SQL Query Engines democratize the access to the important data and algorithms and make them usable throughout the full company.
What is missing?
Especially the more modern representatives of SQL Query Engines, like Apache Impala and Presto, are so widely in use for a good reason: when it comes to stability and overall performance, they are very hard to beat. However, there are some open issues that none of the presented solutions can easily handle so far:
If your data is in CSV or parquet, none of the tools has a problem. But what if you have your data in a strange proprietary format, for which you need a special library to read in the data?
Machine learning is already playing a large role in many companies and is also making its way from the Data Scientists to the Analysts. Currently, the support for ML prediction or training within SQL queries is only basic (except for some cloud providers, where you cannot influence the code), and interacting with well-known modern libraries such as TensorFlow, Keras, or scikit-learn is mostly impossible.
SQL Query Engines make it very simple and comfortable to query the data, but sometimes too simple. The world is messy, with inhomogeneous clusters, old historic batch systems, messy data, and complicated transformation steps. Including these things into the rigid concept of the SQL Query Engines can be tedious if not impossible.
Dask and dask-sql
So, how can we do better? With the power of Python and Dask!
Dask is a distributed computation framework written in Python and is comparable to Apache Spark (but is actually a lot more than that). dask-sql (disclosure: I am the author) adds an SQL query layer on top of Dask. It uses Apache Calcite for query parsing and optimization and translates the SQL into a Dask computational graph, which is then executed on the cluster. The SQL queries can either be issued from within your Python code and notebooks directly or via an SQL server speaking the presto on-wire protocol (which allows connecting any BI tool or external application).
How might Dask (and dask-sql) solve some shortcomings of the other SQL Query Engines?
Using python as the primary language opens up a wide variety of integrations and a vast ecosystem of tools and libraries. Every format and file you can read in with Python (which is basically everything) can also be used in Dask (and dask-sql), so you are not only limited to the typical big data formats such as parquet. (But of course, also those are supported).
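Because ingestion is plain Python, supporting a “strange proprietary format” mostly means writing a small reader that yields rows; after that, the data is a table like any other. The format, field names, and `read_weird_format` below are all invented for illustration; with Dask you would typically wrap such a reader with `dask.delayed` or a Dask bag rather than building a plain in-memory list.

```python
import io

# Pretend proprietary export: one record per line, "field=value" pairs
# separated by semicolons. Any format Python can read works the same way.
RAW = """\
region=eu;amount=10
region=us;amount=7
"""

def read_weird_format(stream):
    """Parse the made-up line format into a list of row dicts."""
    rows = []
    for line in stream:
        line = line.strip()
        if not line:
            continue
        row = dict(pair.split("=", 1) for pair in line.split(";"))
        row["amount"] = int(row["amount"])  # fix up types by hand
        rows.append(row)
    return rows

table = read_weird_format(io.StringIO(RAW))
print(table[0])  # {'region': 'eu', 'amount': 10}
```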
Dask is able to connect to a large variety of cluster types and systems, including YARN, Kubernetes, Apache Mesos, various batch systems, cloud providers, HPC systems, or manual deployments. You can even mix and match these, which makes it a lot more flexible than most of the other systems. Its large set of debugging and monitoring tools makes the process of finding performance bottlenecks easy.
Python is the language of Data Science — so porting and reusing algorithms, implemented transformation steps, and tools from your Data Science team is much easier. User-defined functions (UDFs) inside SQL queries come without any performance drawbacks in dask-sql and can range from simple numeric formulas to complex calls to machine learning libraries or other tools. They can be used when the SQL standard functions (a large fraction is already implemented in dask-sql, but not everything) are not enough anymore. As Dask DataFrames mimic the well-known pandas API, it is also very simple to define additional complicated distributed transformations and use them from within SQL.
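The reason UDFs can come without a performance penalty is that, in the end, a SQL function is just a Python callable the engine applies to the data (dask-sql applies it to whole partitions rather than calling it per row). The registry below is a toy illustration of that idea, not dask-sql’s actual registration API; all names are invented.

```python
# Minimal UDF mechanism: the SQL layer keeps a name -> callable registry,
# and the execution layer applies the callable to the data. For simplicity
# this toy applies it row by row; a real engine works on whole partitions.
UDFS = {}

def register_function(name, func):
    UDFS[name] = func

def apply_udf(name, rows, column):
    func = UDFS[name]
    return [func(row[column]) for row in rows]

# Anything callable works here: a numeric formula, or just as well a
# trained model's predict method from a machine learning library.
register_function("fahrenheit", lambda c: c * 9 / 5 + 32)

rows = [{"city": "berlin", "temp_c": 20.0}, {"city": "oslo", "temp_c": 5.0}]
print(apply_udf("fahrenheit", rows, "temp_c"))  # [68.0, 41.0]
```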
Probably even more than any other framework (such as Apache Spark), the Dask ecosystem contains connectors, integration, and wrappers for so many things, ranging from machine learning to external system support and special use-cases (such as geospatial data). Just a small glimpse here.
Data on S3 queried distributed from a BI tool (Apache Hue in this case) via dask-sql in the background. Image by the author.
There must be a catch, right? Well, there is. Dask itself is a very mature project with many years of experience and a large community. dask-sql, on the other hand, is still a very young project and there is still a lot of room for improvement (in features, performance, and security). At least this is something you can help with, through your feedback and contributions :-)
dask-sql is compatible with blazingSQL, a SQL Query Engine for computations on GPUs. Using GPUs adds a huge performance boost and allows you to perform SQL analysis on large amounts of data in no time. Adding custom functions is a bit more complicated, but the RAPIDS framework, which blazingSQL is using, adds many possibilities — also for machine learning or graph analytics.
Summary
SQL Query Engines are a crucial part of the data architecture: they open up the data and computation power not only to the few Data Engineers and distributed system experts but to a large group of users and applications. Dask and dask-sql port these benefits into the Python world, and the combination of distributed processing, Python, and Dask gives you even more.
We have touched on a lot of different topics, and you might want to read more about details on the specific parts: | https://towardsdatascience.com/query-your-big-data-with-the-power-of-python-dask-and-sql-f1c5bb7dcdbe | ['Nils Braun'] | 2020-12-07 21:34:44.369000+00:00 | ['Python', 'Machine Learning', 'Sql', 'Data Science', 'Dask'] |
Stimulated | To become inspired it is mandatory that neurons within the mind fire in a pattern that is unrecognised by habit. Without such stimulus, the mind remains in a state of autopilot; where neural activity (and therefore brain function) remains baseline and creation-less.
With new ideas means new neuron wiring (read: new pathway of firing). This is achieved through introducing unexpected and new stimuli into sensory experience.
A short list of what may reinvigorate the mind includes the following (but is not limited to): | https://medium.com/live-mighty/stimulated-de7e93beef5b | ['Yunus Celik'] | 2017-06-18 02:26:35.390000+00:00 | ['Motivation', 'Growth', 'Inspiration', 'Personal Growth', 'Personal Development']
Do you need a writer on your design team? | Do you need a writer on your design team?
5 reasons why a UX writer makes designers more efficient and products more successful
Illustrations by: Jasmine Rosen
The role of a UX writer has grown and evolved in recent years as more and more tech companies integrate writers into the design process. UX writing — a skill that combines content strategy and copywriting with UX design principles — has now become an essential ingredient in any successful design org.
If you’re a product designer or manager of a design team, you might be thinking: “The designers on my team know how to write words. Do I really need a specialized UX writer or content strategist to join my project?”
The short answer is yes. An experienced writer will elevate your product by making it feel more polished, more consistent, and easier to use.
You’ll be able to let your designers focus on what they do best, design. Even more importantly, a writer adds another creative partner to the team who will approach problems with a different mindset, one who can bring new solutions to the group that haven’t been thought of before.
Designers who’ve had good partnerships with writers will tell you of the numerous benefits. Here are some thoughts from Malia Eugenio, a designer at SurveyMonkey:
“I’ve found that working with a writer really improves my designs. The words can make or break a user experience. Having the perspective of a writer at the inception of a new design helps immensely with establishing IA and understanding where information, clarity, and direction are needed to help users be successful.”
1. Consistency creates trust in a product
Words are the voice of your product. When there are typos or grammar mistakes in your words, it erodes user trust in your brand. Research suggests that typos and other errors damage the credibility of a website. When I see a typo, or even something as harmless as inconsistent capitalization, I can’t help but think, “Do they know what they’re doing? Did they rush this?” A writer has a specially trained eye to catch even tiny errors in grammar, capitalization, and punctuation.
2. If you don’t have a writer, who writes your copy now?
The answer is usually some combination of the designer, product manager, marketing partners, and engineers. Each of these people likely have a different approach to style, voice, tone, word choice, capitalization, and punctuation. What you end up with is a Frankenstein-like product where different parts of your app have different personalities — and worse yet, conflicting terminology.
A writer will create (and habitually use) style, voice, and tone guidelines and consistent terminology, so that your product sounds and feels the same no matter where the user is in their journey. What capitalization do you use for buttons? What punctuation do you use for modals? A good UX writer will have a point of view on all the small details that others might miss.
3. Good content can speed up your process
As an accomplished designer, you might say, “I like to move fast. Bringing in a writer will slow down my project.”
Actually, bringing in a writer — especially early on in a project — can save you from backtracking or redoing work later on. A content expert can help you establish the right information architecture, taxonomy, and product or feature names from the beginning, so you’re less likely to need big foundational changes later on. They’ll think about how the terms you choose fit into your broader systems, product portfolio, and future growth plans, setting you up for success in the long term.
Also, better UI copy can speed up your review process. Design reviews with placeholder copy can often get the conversation going in the wrong direction — drawing focus to the messaging and not the design. Working with a writer to get more thoughtful copy into your designs will only make critiques go smoother and get you better feedback.
4. A writer helps you scale your team
If you’re not at a big tech company (or even if you are) resources are probably an issue. In a small, scrappy design team, it may be hard to convince people that hiring a writer is the right way to spend limited resources.
I’d point out that having a small, scrappy design team may actually be more of a reason to hire a writer. Think of what areas your team is the strongest in and where you have skill gaps and needs. Will hiring more people with a similar skill set help you fill in those gaps? Hiring a talented person with a different skill set can help you get to a more balanced and complete team faster.
With a new content partner, designers can focus on pushing their work further without spending valuable time struggling with skills that are outside their wheelhouse. Plus, a writer can bring a new dynamic to your team by thinking about ways to solve user problems with words and by addressing information needs. That design problem you’ve been noodling on for weeks? Maybe it’s not a design problem after all, but a messaging problem.
Adding a writer early can set you up for future success by starting with thoughtful writing guidelines and systems. It’s also an opportunity to set a precedent about how an effective design team should function as your company grows.
5. Good copy will move your metrics
Lastly, but maybe most importantly, the business case for bringing in a skilled writer is strong. Even small changes to the text in a key flow or prominent CTA can lead to big moves in metrics.
Almost all big tech companies now have growing UX writing or content strategy teams, and it’s because they’ve seen the value it brings to their bottom line. In fact, Booking.com has a team of nearly 60 writers who perform scores of copy experiments to optimize every part of their site.
In a 2017 talk at Google I/O, Content Strategy Director Maggie Stanphill showed an example from Google’s hotel booking flow where they changed a line of copy from “Book a room” to the less committal “Check availability.” The result of the change was a 17% increase in engagement.
Giving people the information they need — when and where they need it — can increase engagement with your product. My teammate Deanna Horton wrote an onboarding tour for SurveyMonkey that increased feature engagement by 15%. Along the same lines, she wrote tours for a survey gallery that showed how you might go about making important decisions based on survey data, which increased the rate people sent surveys by 9.2%.
A small copy update to Wufoo increased clicks by 60%.
Small changes matter. Earlier this year, I helped our Wufoo team rework copy for a button to get customers to try a new experience in the product. After we updated the button copy, the click rate increased by 60%, just from the copy change.
So, there’s clearly enormous value to unlock by adding writing skills to your design team. But if you’re still not sure, here’s more on the subject: | https://medium.com/curiosity-by-design/do-you-need-a-writer-on-your-team-62805a8de5df | ['Michelle Wu Cunningham'] | 2019-09-10 15:01:01.189000+00:00 | ['Ux Writing', 'Content Strategy', 'Design', 'UX', 'Surveymonkey'] |
How To Trust Again? | Long term relationships, whether they are friendships, love relationships, or family relationships are often fraught with ambiguities. Often, we are projecting our realities and feeling hurt when we don’t receive the desired responses. We tell ourselves narratives about one another that clouds our judgment and amplify our differences.
When breaks happen, it’s difficult to find the peace of mind to come together again. Forgiveness is the key to understanding past actions and reconciling them. But, the space between forgiveness and trusting again can feel like that insurmountable gap that lingers and creates that eternity of silence.
I often call this gap the “chasm” in my relationships that suffer breaks.
Before you trust again, you need to understand that there’s no requirement for you to trust again.
When you are evaluating whether to trust someone again after forgiving them, you should not be thinking about them, rather you should be thinking about “you”.
Thinking about them at this stage triggers people-pleasing tendencies. If you think about what their needs are at this stage, then you are missing the point.
The point is to evaluate when you have healed enough to move on. It is okay to take time. Moving on doesn’t imply that you will want to commit to overcoming the “chasm”.
Sometimes, moving on implies moving on with different people.
Overcome the guilt of forgiving quickly and moving on.
I meditate. I do yoga. But, there are times, when even the best mechanisms of letting go are simply not enough to heal that gap in between forgiveness and trusting again.
You have to overcome the guilt of not making a speedy decision, not deciding to give it another chance, and not wanting to continue when so much time/effort has been invested.
Stop feeling guilty for feeling that hurt even when you have forgiven the person. Stop feeling guilty for feeling triggered. Acknowledge the trigger and determine a better response.
Even when you feel genuine compassion for who they are, what they have gone through, and how they have changed, you still have a choice on whether you want to re-engage with the person.
Little steps every day are more trustworthy than the big decision to trust again.
I don’t rely on those big decisions anymore. They are not trustworthy in relationships unless they are backed by people showing up every day. There are times when people are simply not able to show up emotionally in the same way that you’d like them to.
Then, time heals. Time must be taken in those circumstances.
When trust has been breached in a major way, you have to allow for that period of adjustment, where small steps are taken every day to shed light on whether this relationship is worth pursuing again.
This goes for every kind of relationship, even family relationships.
I’ve noticed that when I jump into that big decision, forgiving people and then immediately trusting them again, there is the inevitable incongruence in my thinking and actions.
This incongruence shows up as “forced” actions.
You are not ready to engage in every day close interactions with them, yet you are thrust into that scenario because you think you have forgiven them and it’s okay.
What you find is that you are “forcing” yourself to be okay when they fall into the same old patterns that made you disconnect from the first place.
Re-establishing trusting patterns imply mending the old.
There’s no way that re-engaging can be productive unless it’s a brand new relationship or you are establishing healthier relationship patterns.
For instance, with estranged parents who may have their struggles and frequently fall into old patterns of relating to the world, when you re-engage with them, you may need to establish rules, boundaries, and teach them new ways of communicating with you.
With an ex-lover or an ex-friend, you may want to pursue an entirely new relationship with them than the one that you had before.
Perhaps they are meant to be a certain kind of person in your life. They are not meant to fill the “heavy shoes” of being a partner, a lover, or even a parent.
Sometimes, you have to reparent yourself and be your own partner to see that they are not here to complete you. When they don’t meet your expectations, you can pick yourself up and fill that emotional void.
With every relationship, it’s better to find out where they fit in your life and steer that relationship toward this ultimate fit, than to force it to be something else that you imagined it to be.
You should be scared and they should meet you halfway.
Trusting relationships involve two people. If it’s one person doing the work, then it’s never going to work out. If you are in a family, then everyone has to contribute. There are no freebies.
It’s tempting to think that every relationship can overcome that gap between forgiving and trusting again.
In truth, very few relationships can overcome that gap to become something entirely new and worth cherishing. This is why people don’t date their ex-lovers again. This is why estranged parents often stay estranged for many years and decades.
On the other side, you can come out of the cycles of dysfunction and decide to engage differently. By engaging differently and trying to overcome the gap, you learn relationship skills and conflict resolution, and you learn to trust your instincts.
By overcoming what seems impossible at times, seeing people as multifaceted beings who grow into better versions of themselves, and taking the “you” out of the equation, you can extend a compassionate hand even in bad situations.
How do long-term spouses forgive their better half’s infidelities during a midlife crisis?
What overcoming that gap looks like?
Overcoming the gap between forgiveness and trusting relationship takes months and years of work. It’s not rainbows and roses. It’s arguments, conflicts, and trying to bridge the ever-changing needs of one another. It’s compromising and understanding the other person’s point of view.
It’s letting new situations point out the worst parts of you and learning to change your ways.
When you step through that gap, you will see a new relationship slowly emerge between you and your person. You will feel like old patterns are being demolished with each engagement. You will feel like you are coming to new understandings. You will feel like they surprise you in how they are engaging with you. You will feel like there are new possibilities for the relationship now. You will like how you are with them now.
You will feel joyful in your interactions with them. You will relinquish past assumptions about them and commit to choosing to trust them again.
Every day, you will feel stronger and more solid in your relationship. Every day, you are on a discovery of who they are now and what your relationship can become.
How do you get there?
Many small steps are taken to overcome this gap. One of the biggest is to choose to not let yourself fall into old patterns with them now. The only person you can control in a relationship is you.
Think strategically about how you want to engage with them now. For instance, if you were weak with a bully before and were taken advantage of, then you need to establish strong boundaries with them now.
If you feel like there were many misunderstandings in the past because you didn’t communicate well, then take the time to communicate, verify, and come to an objective conclusion about them.
The point is to take the time to see how you can handle this situation, react to them, and adjust your expectations.
For relationships where one person simply cannot change, then radical acceptance is called for. I always think about the type of relationship that will allow me to stay in contact with the person without having to fully immerse myself in their struggles.
This is difficult to do with family but it can be done. With family members that have mental health issues, this is what you will need to do to support them. No matter how many times they breach your trust, because they are your family, you try.
You are compassionate in understanding their struggles and making allowances for them. And, you don’t have to be a pushover.
You can draw firm boundaries, demonstrate the ideal behavior you’d like to see, and see if your person can meet you halfway. | https://medium.com/jun-wu-blog/how-to-trust-again-cf456c42b9f2 | ['Jun Wu'] | 2020-09-21 11:00:38.265000+00:00 | ['Family', 'Relationships', 'Trust', 'Self', 'Friendship'] |
About Us | Inspiring by Example
Community Works Journal has been published by Community Works Institute (CWI) since 1995, in support of teaching practices that build community. The Journal features educator written essays and reflections, along with program and curriculum overviews that highlight the importance and use of place, service, and sustainability to a relevant and meaningful education.
Our overarching purpose is to stimulate and support innovation —indeed, an entirely different way of looking at education — one that is rooted in empowering students as members of a community with shared purpose.
Service-learning has proven to be a very powerful vehicle for this, encouraging student empowerment, stimulating real social change, bringing social justice issues forward, and most of all helping schools look at education through a new and community focused lens. We thank our subscribers, supporters, partners, and individual donors for their generous support of this publication.
“Community Works Journal is a resource that truly speaks to teachers with excellent, provocative ideas.”
Steve Seidel, Ed.D, Chair, Arts in Education
Project Zero Director, Harvard Graduate School of Education
Building a Community of Educators
Community Works Institute (CWI) is an educational non profit that serves as a beacon, drawing together and galvanizing individual educators, institutions, and community members from around the world.
Our focus is on curriculum that has place as the context, service-learning as the strategy, and sustainability as the goal. CWI provides collegial collaboration, publications for educators, professional development, and innovative tools and resources. We believe that building a long term community of connected like minded educators offers the best opportunity for encouraging and sustaining change and innovation. Learn more about our professional development opportunities.
“In seminars for education and environment funders on community and place based education, I have highlighted Community Works Journal as one of the best articulations of the work in this new field. The synthesis of arts, environment, literature and cultural heritage work in the Journal reaches out to all segments of rural and urban communities.”
David Sobel, Director of Teacher Certification
Antioch New England Graduate School
Community based learning at AUC in Cairo
Inspiring By Example
Since 1995 Community Works Journal has consistently featured stories, models and resources intended to inspire by example. We showcase innovative educational strategies, practices and curriculum that involve educators and students in meaningful work within their communities. We stand for a shared belief in education that is centered on community with students as active learners in service to their community.
Educators as Agents of Change
Community Works Institute seeks to mobilize participating educators, students, and community members through events, publications, and our web presence. We are working to bring together educators with a shared interest, focus, beliefs and methods.
“…wonderful! I found it delicious to find so many spirited educators who have a wide and deep understanding of the possibility of what education can be.”
Hope Hawkins, Project Manager
Future of Life, Wilmington, Delaware
Share Your Story
Contact us if you are involved in an educational project like the ones you read about in Community Works Journal. Our readers will greatly value your story, experience and curriculum ideas. We will help you put your story together. See our Submission Guidelines for general information on content and length. Contact us at [email protected] or 909.480.3966 with any questions you may have.
“It is absolutely the best magazine on Sustainability and Service-Learning I have seen. I have learned so much….”
Peter DiMaio, Spanish Instructor
Octorara Area Middle School, Atglen, PA
Publication and Distribution
Community Works Journal is widely distributed to individual teachers and educational networks across North America and beyond. Our readers include K-16 and non-formal educators, along with students, administrators, program personnel, policy makers and, most importantly of course, community members.
Publication of Community Works Journal is made possible, in part, through grants, but relies mainly on generous donations by individuals.
BACK ISSUES
Recent issues are available online, go to Journal Archives.
SUBSCRIPTIONS
Subscribe online and receive notification of future issues. You may also subscribe by emailing us at [email protected] or by calling (909) 480–3966
MAILING ADDRESS
Community Works Journal, PO Box 6968, Los Angeles, CA 90022, 909.480.3966, fax: 213.402.5220 | https://medium.com/communityworksjournal/about-us-4c014406770d | ['Joe Brooks'] | 2016-12-29 01:18:58.818000+00:00 | ['Education', 'Sustainability', 'Teaching', 'Social Justice', 'Service Learning']
Where we are Now | Where we are Now
I’m not happy with the way that the internet has made me, or should I really say that I have just noticed how the internet has made me.
Photo by Anshu A on Unsplash
I was just browsing a health site because I'm interested in that, and I was dealing with a WhatsApp message at the same time, so I was back and forth for a few minutes. When I returned to the website I had forgotten what I was looking at, and I was drawn to a search bar that asked what I was looking for, you know, the kind where you enter a subject and press the search icon?
How easy has it become to click into that empty space and wait for the drop-down menu to appear?
How easy have things become where we don’t have to think about what we’re doing, we just click?
The internet is clever!
And we’d be lost without it too!
But how many of you have actually realised that it’s changing the way that we think?
I was actually looking for the drop-down as if it was second nature, and when there wasn’t a drop-down I got stuck. Hence being here to tell you lot.
What’s made me feel so deliciously happy though is that it was second nature for me to come here in the first place and to start writing.
We don’t need drop-downs!
It’s hard to remember these days when we used to have to send everything via the post. Who remembers pen pals where some of us would write to someone in another town/city/county? The very start of wider communication and look at us now.
We’re worldwide, and I just love it!
Hello to you wherever you are, and I get so much pleasure at being able to say that, knowing that in an instant after I've published you'll be able to read me.
If the world is not such a bigger place as we once thought, it’s definitely a lot smaller now that we can touch each other so readily with words.
Maybe it’s just me?
My computer is my life.
I look, every day through this screen into a world where everything is at my fingertips. I have a mind that quickly flits from one thing to another and my pc screen with all of its easily obtainable windows can keep up.
I’m in heaven.
If only I had someone to talk to I really don’t think I’d know what to say.
I've started listening to two women talking on a podcast called REDHANDED, where they talk about true crime. I accidentally came across it while working away on a website that I'm building and saved it to listen to a bit later on that evening. What's appealing to me is hearing their lovely London-accented voices and their beautiful, sweet, intelligent minds. If it wasn't for that I don't think I'd bother listening, as what they talk about are the most horrific true crimes that I've ever heard. SHOCKING! But put into the context of what they are doing and where they're coming from, it makes really good listening.
They have a show called ‘Under The Duvet’ too, I’ve not listened yet but I think a subscription is on the cards, for me at least.
My every day is full of me.
I rarely see or hear anyone apart from my own thoughts and occasionally when I talk to myself. I’m not complaining! I like my own company and this internet thing on my computer keeps me more than occupied. Ah! I kind of dread the day when I’m not here as much and I have to go to work.
I’ve started to write on paper, with a real pen.
I actually surprised myself earlier on. I lay on my bed listening to an audiobook called Zero Negativity and as I always do when I listen to good audiobooks, I make notes. What I’d usually do is make a bookmark in the app but while listening to it through my television I couldn’t add a clip so I rushed to the living room to grab a pen and some paper.
Hey, it’s a free world.
And sometimes it’s refreshing to ‘not rely on technology’.
We can’t manage without it tho, can we?
Not now we can’t, it’s too ingrained in us and society and we need it, it’s a way of life.
I feel sad for all of those people who have an idea of it and can't behold it as we do. Maybe sad isn't the right word. What I mean is, I wonder what it must be like to hear about it and see it and want it, but not be able to get it. Is that sad?
Where we are now is at a tipping point in my view.
What with the pandemic and the climate problems that we have, and that the world has never been so closely knit together because we all need to make things right and stop the changes that are happening.
Technology just brings us closer together and more and more are finding it.
Just look at Medium now.
There have never been so many writing about so much in one place. And when I think of all the other places there are to write and share on the internet, with all of the drop-down menus and technology-led ways of doing things, it makes me wonder who else has been trapped by that dupe, and whether one day this will all be full up too. Where do we go then?
In the Cards | Can I borrow your passionate soul one more time?
I promise if I feel your claws sinking into my wings
I will let us fly, if only you promise me in return-
Could you spare me one final moment of peace
Before we go back to pretending we never knew each other at all?
I’m not asking your cruel heart for forgiveness
And I won’t give you any more of myself
Than you have already taken and suckled from
Until you’d sucked it all dry and I ran away
Into the arms of a man who makes me moan with his kisses
I know that hurts you, but I bet you didn't even know.
No, I will not give you any more of myself
But you already are myself because in a way,
You could’ve been my soulmate, twin flame
That matched my every stride perfectly,
The soy butter to my jelly. So I’m sorry
For getting so nutty on you. I’m sorry
You couldn’t be sweet enough to mask the poison.
I’m sorry it all came undone, sorriest of all
That it doesn’t seem to be over yet. | https://medium.com/makata-collections/in-the-cards-68f1c31a7352 | ['Brianna R Duffin'] | 2019-06-04 16:01:02.603000+00:00 | ['Friendship', 'Love', 'Trust'] |
The Biggest Change Businesses Will Face | Founder & Growth Hacker @SMARTERS Romania, my purpose is to show people that it can be done.
Speed up Bulk inserts to SQL db using Pandas and Python | This article gives details about:
different ways of writing data frames to a database using pandas and pyodbc
how to speed up inserts to a SQL database using Python
the time taken by every method to write to the database
a comparison of the time taken to write to the database using different methods
Method 1:
The approach here:
every row in the dataframe is converted to a tuple
every record is then inserted into the table using pyodbc
import pyodbc

# driver, server, database, username and password are defined elsewhere
params = 'DRIVER=' + driver + ';SERVER=' + server + ';PORT=1433;DATABASE=' + database + ';UID=' + username + ';PWD=' + password

# df_op is the dataframe that needs to be written to the database, "test" is the
# table name in the database, and col_name1, col_name2, ... are the column names
cnxn = pyodbc.connect(params)
cursor = cnxn.cursor()
for row_count in range(0, df_op.shape[0]):
    chunk = df_op.iloc[row_count:row_count + 1, :].values.tolist()
    tuple_of_tuples = tuple(tuple(x) for x in chunk)
    cursor.executemany("insert into test ([col_name1],[col_name2],[col_name3],[col_name4],[col_name5],[col_name6],[col_name7],[col_name8],[col_name9],[col_name10]) values (?,?,?,?,?,?,?,?,?,?)", tuple_of_tuples)
cnxn.commit()  # pyodbc does not autocommit by default
Please find the respective rowcounts of a data frame and time taken to write to database using this method,
rows_count=[‘50’,’1000',’5000', ‘0.01M’,’0.05M’,’0.1M’,’0.2M’,’0.3M’]
time(sec)= [0.005, 0.098, 0.440, 0.903, 4.290, 8.802, 17.776, 26.982]
Method 2:
Now let's add cursor.fast_executemany = True to the code already used in Method 1 (the only difference between Method 1 and Method 2 is that added line).
# df_op is the dataframe that needs to be written to the database, "test" is the
# table name in the database, and col_name1, col_name2, ... are the column names
cnxn = pyodbc.connect(params)
cursor = cnxn.cursor()
cursor.fast_executemany = True  # the only change from Method 1
for row_count in range(0, df_op.shape[0]):
    chunk = df_op.iloc[row_count:row_count + 1, :].values.tolist()
    tuple_of_tuples = tuple(tuple(x) for x in chunk)
    cursor.executemany("insert into test ([col_name1],[col_name2],[col_name3],[col_name4],[col_name5],[col_name6],[col_name7],[col_name8],[col_name9],[col_name10]) values (?,?,?,?,?,?,?,?,?,?)", tuple_of_tuples)
cnxn.commit()  # pyodbc does not autocommit by default
Please find the number of rows in a data frame and respective time taken to write to database using this method,
rows_count =[‘50’,’1000',’5000', ‘0.01M’,’0.05M’,’0.1M’,’0.2M’,’0.3M’]
time(sec) = [0.009, 0.179, 0.574, 1.35, 6.718, 14.949, 28.422, 42.230]
Method 3:
This writes the dataframe df to SQL using pandas' to_sql function, SQLAlchemy and Python.
import urllib.parse
import sqlalchemy

db_params = urllib.parse.quote_plus(params)
engine = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect={}".format(db_params))

# df is the dataframe; "test" is the table in the database this dataframe is inserted into
df.to_sql("test", engine, index=False, if_exists="append", schema="dbo")
Please find the number of rows in a data frame and respective time taken to write to database using this method,
rows_count=[‘50’,’1000',’5000', ‘0.01M’,’0.05M’,’0.1M’,’0.2M’,’0.3M’]
time(sec)= [0.0230, 0.081, 0.289, 0.589, 3.105, 5.74, 11.769, 20.759]
Method 4:
Now let's set cursor.fast_executemany = True using events and write to the database using the to_sql function (the difference between Method 3 and Method 4 is the added event listener).
from sqlalchemy import event

@event.listens_for(engine, "before_cursor_execute")
def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
    if executemany:
        cursor.fast_executemany = True

df.to_sql("test", engine, index=False, if_exists="append", schema="dbo")
Please find the number of rows in a data frame and respective time taken to write to database using this method,
rows_count =[‘50’,’1000',’5000', ‘0.01M’,’0.05M’,’0.1M’,’0.2M’,’0.3M’]
time(sec)= [0.017, 0.015, 0.031, 0.063, 0.146, 0.344, 0.611, 0.833]
Now, let's compare the time taken by the different methods to write to the database when inserting dataframes of different sizes (ranging from 50 to 0.3 million records). 'rows count' represents the number of rows in the dataframe, and 'time' represents the time taken by each method to insert that number of rows into the database.
As you can see, Method 4 takes far less time than any of the others. Using Method 4, the insert speed will always be at least 15 times faster.
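If you want to reproduce these comparisons yourself, a small timing harness is enough. This is only a sketch: the time_insert helper and the methods mapping are my own names, and the insert functions and dataframe are whichever ones you built above.

```python
import time

def time_insert(insert_fn, df):
    """Return the seconds taken to run one insert function on a dataframe."""
    start = time.perf_counter()
    insert_fn(df)
    return time.perf_counter() - start

# hypothetical usage: methods maps a label ("method1", ...) to a callable
# timings = {name: time_insert(fn, df_op) for name, fn in methods.items()}
```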
References
https://github.com/mkleehammer/pyodbc/issues/547 | https://medium.com/analytics-vidhya/speed-up-bulk-inserts-to-sql-db-using-pandas-and-python-61707ae41990 | ['Kiran Kumar Chilla'] | 2020-07-03 00:03:12.011000+00:00 | ['Insert', 'Sqlalchemy', 'Python', 'Pyodbc', 'Data Science'] |
The AI blackbox | Theuth: “Here is an accomplishment . . . which will improve both the wisdom and the memory of the Egyptians.” Thamus: “The discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it.” Technopoly: The surrender of culture to technology — book by Neil Postman
With the explosion of ‘things’ that opened access to personal data at scale, and intense computing technologies like GPUs, there is no doubt that AI, the technology that has gone through multiple futile hype cycles in the past, has a better chance this time of getting traction outside the labs. Most of us consider AI to be inorganic intelligence that is still experimental, and its consumer implications are wide, vague, and in the twilight zone. It is true that AI today can’t match the way we humans think and act. But if we look close enough, the implications of such intelligent technologies on us are already being felt.
Earlier this year, a swarm AI algorithm created by a company called Unanimous.ai predicted the Super Bowl’s final score with 100% accuracy. Companies like Scripps Howard have been predicting the final scores for the past 19 years but have got it right only twice. Late last year, the British police field-tested an AI system called Halo, which learns over a million features from every suspect’s photo. Once learnt, the algorithm can identify the suspect from any data source with the most minimal of cues, like ‘a portion of the suspect’s ear’, and all this in milliseconds. Such algorithms, which mathematician Cathy O’Neil refers to as WMDs (weapons of math destruction), decide if we get admitted to colleges, get diagnosed positive for a particular disease, and even get a job.
Intelligent algorithms are a cause of concern for three main reasons. One is that these algorithms learn from what humans serve up to them and can perpetuate flawed individual morality into the real world faster than ever before. The second is that once an algorithm starts learning and programs itself, it becomes inscrutable, and it is hard to find the reason for a single action, even for the ones who designed it. And lastly, the majority who leverage established intelligent algorithms for business cases have very little understanding of how they actually work. For the scope of this write-up, let’s look at two and three in detail.
Explainability
It is true that anything intelligent by nature isn’t completely explainable, and a large portion of it is instinctual. Even if there is a rational explanation for specific actions, it somehow doesn’t feel sufficient. This could be applicable to AI too. Consider this: when a company recruits a recent college grad, on day one his/her arguments and opinions are valid only if they are backed by rational explanation. However, the opinions of an expert consultant are valid from day one even if they are largely instinctual. A crucial reason why the weight of opinion differs so vastly between a college grad and a consultant is “trust”. Instinctual opinions are accepted based on trust. For a consultant, the trust bestowed is often large, and hence the opinions voiced are instantly important.
However, this trust wasn’t instantly achieved; it came after years of voicing opinions and carrying out actions that were rationally explainable by the consultant. Hence, explainability is at the core of trust in any relationship. Trust is important for any technology to become a common and useful part of our daily lives, which is why AI algorithms need to be scrutable to become mainstream. Even the Defense Advanced Research Projects Agency is working on a field called ‘explainable AI’, because for defense, explainability is a stumbling block to achieving usable outcomes from their investment of billions. Algorithmic mystery won’t be tolerated.
“If it can’t do better than us at explaining what it’s doing, then don’t trust it.” - Daniel Dennett (cognitive scientist — Tufts University)
Using without knowing
Most of the prominent technology companies like Google, IBM, Amazon, Microsoft have deep R&D investments to create intelligent algorithms. Such algorithms are exposed via APIs so that developers can leverage the APIs and build on top of them. For example, Google’s API.ai allows one to develop intelligent conversational apps on top of it.
If we look at the other side, most ideas built on artificial neural networks have zero intellectual property. The IP is just access to data to get the algorithm running. This means it can be replaced at any time by other sources with the same or more data.
Companies like Google use your data set to make their intelligent algorithms better. At any point in time, you can stop using API.ai and part ways with your data set, but you can never take the algorithm you trained with your data set via API.ai out of the Google ecosystem and deploy it as a custom algorithm inside your enterprise environment. For someone developing customer care bots on top of API.ai, this means that if tomorrow Google shuts down API.ai for some reason, they would regret having fired all their employees, because their bot on API.ai automated most of their tasks, and they will be left with an empty floor that can no longer take care of any customer.
“AI’s a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it’s too late” - Elon Musk
The roads of technology are always twisted and bent; we can never clearly see around the corner. Although current AI applications are useless outside the exact purpose they were designed for, we can never strip them of their future potential. We always start with an illusion of 100% control over emerging technologies, but this doesn’t last. It is our duty as a race to stay smart enough to use our technology wisely. And guess what, we will be smart, because humans are extremely competitive and dangerous for exactly the same reasons that machines aren’t, at least for now.
This write-up was cross-posted on Imaginea deep learning
What do doctors, teachers and writers have in common? | The economic theory that claims self-interest is what drives the world may no longer be valid
The Back Story
Medium just introduced a $5/month membership fee. It’s not much even in Indian money. Around the same as my 4G mobile bill for a month, or a quarter of my internet broadband bill. If this works out, the good thing is the site will not be defiled by ads.
Maybe it’s the right way for Medium to go. But I have my doubts. Firstly, I’m a bit confused. When I click on the new ‘Become a Member’ link in the pull down menu, I see ‘Medium will remain free and open for anyone who wants to share ideas with the world.’ In the next section, I see ‘You’ll have access to exclusive stories from leading experts…’ And then, ‘You’ll get the first look at our newest reading features, starting with a new homepage…’
Does ‘exclusive’ mean some sections of Medium will not be ‘free and open’ to non-member readers? Or does it mean I just get early access to some articles, and some other privileges like a new homepage design, but the site will remain free and open to readers as well as writers? As I understand it, all this is a reward for my willingness to help Medium pay for good content.
I am giving Medium the benefit of doubt, assuming it is the latter, and signing up for now. Medium does have a help page clarifying a lot of this, but I will have to see how it pans out in reality. In any case, $5 is a small return for all that I have got from Medium in the last year or so.
If I find out later that Medium is not truly free and open, I will most probably back out of the membership. The idea of having a wall between the ‘haves’ and ‘have-nots’ turns me off. For instance, I know most Indians will not pay to read. It’s just a habit, as they are used to a free internet.
All this was on my mind when I was introduced to a new economic theory a few days ago. It says in the new economy of knowledge, information and human services, you can’t motivate people solely by self interest (money). It seemed relevant to what was happening at Medium. Let me elaborate.
The Teacher
A few days ago, my kid’s maths teacher posted a picture of himself with his student who had just won an Oscar for technical achievements. Teaching in India is not a well paid job. The reward this teacher gets is not money for himself. But the happiness of knowing he has been able to help a student fulfill his potential. Recently, he worked his miracle on my daughter. She used to hate maths, but began to enjoy the subject after he started teaching her. As a father, I know the difficulties in raising a child. To selflessly do that with not just one child but many children, boggles my mind.
The Doctor
Or take this brilliant Indian neurosurgeon I know. He could have migrated abroad like most of his classmates and minted money. But he chose to stay back in India, even though he was earning but a fraction of what he could make abroad. He told me he felt a kind of ‘calling’ to help the sick and needy in India. His reward comes from knowing he helped heal others, and not from the size of his bank balance. As he put it, if he were to work solely for money, it would be demeaning what he did. Ironically, today almost the entire medical industry in India is driven by money, and is an unholy mess of predatory profiteering off the sick. This only reiterates the fact that pure self-interest will not work in such a field.
The Writer
Let’s take Medium. There are so many writers here, posting fresh and insightful thoughts and stories. The majority aren’t getting paid for writing but they still churn out tons of posts. So why do all these writers write?
Obviously, they wouldn’t mind getting paid for it. But money does not seem to be the prime motivator. Could it be like doctors and teachers, these writers are motivated by a need to help others? Do they share their thoughts in the hope that it makes a difference to some reader?
I know this holds true for me. There’s this post I put up recently about how I used WhatsApp to get help from the Indian police. India’s people share an uneasy relationship with its police and usually avoid having anything to do them. So if a few of my readers began using WhatsApp to get help from the police, I would feel my post was worth the time and effort I took to write it.
The Wall
This is where Medium has thrown a spanner in the works with its new exclusive paid members only zone. I’m not sure how it works but let’s call it a paywall for now.
Would I object if Medium made that post of mine available only beyond that paywall? Yes, that would defeat its purpose. I would want as many people as possible to know about being able to anonymously contact the police in India via WhatsApp. A paywall would kill the post, as most Indians would not pay to read, like I mentioned earlier.
What if Medium offered to pay me for putting the post beyond a paywall? Again, my answer would be no. My primary motivation to write that post is to get people to know about the above. So if it’s not going to be read, then the post is pointless.
Ok, forget Medium. What if New York Times offered to pay me to write an article on Indian politics that would be available solely behind a paywall. I would jump at it as it’s good money and recognition. But in the long run, the carrot, stick, or even the sack won’t be enough to motivate me to write.
Because if I don’t have a sense of purpose for my writing, I’ll just go blank.
The Conundrum
I had recently written a post on how Medium could generate income instead of putting up a paywall for readers. I know it’s done and dusted now. But Medium is an evolving platform, and maybe these thoughts still make sense.
But the biggest problem of being an unknown writer is a lack of credibility. Like if Zuckerberg were to call Ev Williams and suggest how Medium could generate income, Ev might probably give him a hearing, and mull over the advice. Why? One reason is because Mark is famous. The second is he’s a recognised expert in related fields. The third could be that he would be able to build a strong rationale for his advice, backed up with data.
In short, Zuckerberg has credibility.
However Ev is highly unlikely to read the thousands of suggestions by writers at Medium on the same subject as most of the writers lack credibility. Which is a pity as some of those thoughts are quite insightful.
It’s a vicious circle. If you don’t have credibility, no one will pay attention to what you write. And if no one pays attention to your writings, you are never going to gain credibility.
The Nowhere Man gets a hand
The whole thing reminded me of the Beatles song, ‘Nowhere Man.’ So is the nowhere man destined to forever be, “Sitting in his nowhere land, Making all his nowhere plans for nobody”?
I found the answer to that in the song itself, “Nowhere man don’t worry, Take your time, don’t hurry, Leave it all till somebody else Lends you a hand”
Sure enough, someone did lend me a hand.
My post was tagged by another writer, Keith Parkins, on top of his post, which was itself a reproach to Medium for putting a paywall for readers. He had an apt analogy of Medium being the ‘commons’ which provided a fertile ground for writers all over the world to exchange ideas. And the paywall being the equivalent of the medieval ‘enclosure of the commons.’ And just as that medieval enclosure destroyed a culture of farming, this modern enclosure can destroy a fertile ground for writers. His post is attached below.
What was equally interesting was an appended video interview with Samuel Bowles, an American economist and Professor Emeritus at the University of Massachusetts Amherst. He talks about a new economic theory, which is the basis for this post of mine.
But what really took me by surprise was how the Professor actually provided a solid rational foundation for some of my intuitive suggestions in the ‘paywall for writers’ post. I guess that’s why my post was tagged by the writer. If you have 20 minutes, listen to the interview. The guy is brilliant.
Talk to the Man
One of my favorite fictional characters used to be Professor Calculus of the French comic series starring Tintin by Herge. Professor Calculus is a lovable, good natured, well-meaning sort, and a brilliant scientist. But what really defined Professor Calculus was that he was hopelessly impractical.
Somehow Ev and the others who run Medium remind me of Professor Calculus. They mean well, but I sometimes get the feeling they are just as clueless as Professor Calculus.
One thing about Professor Calculus. He seemed to get along very well with other Professors. That made me wonder. Professor Calculus might not interested in what I have to say, but he just might listen to another professor.
Go on guys, meet up with Professor Bowles, and have a chat about what motivates people to do things. It just might open up a new line of thinking. | https://medium.com/hackernoon/what-do-doctors-teachers-and-writers-have-in-common-f8c078a6d437 | [] | 2017-07-17 17:19:53.383000+00:00 | ['Economics', 'Medium', 'Writing', 'Altruism', 'Paywall'] |
Tokenize Text Columns Into Sentences in Pandas | READ CSV
I'll keep this very short. Put scripts.csv in the same directory as your Python script or Jupyter notebook and then run the following commands:
give the path of CSV file to FILE_PATH variable
You can pass scripts.csv directly to pandas' read_csv(), but it is always more robust to use os or glob to build the file path. It's also good practice for production: if you hard-code filenames as strings, you might forget to change all of those magic values later.
Let’s create a DataFrame from our CSV file and assign the first row’s Dialogue column to first_dialogue variable.
we will use “first_dialogue” in sentence tokenization section
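The embedded gist for this step does not reproduce in text form, so here is a minimal sketch of what it does. The tiny stand-in CSV written at the top (and its Character column) is my own invention so the example runs on its own; the Dialogue column name and the os-based path handling come from the article.

```python
import os

import pandas as pd

# Stand-in for the real scripts.csv so the sketch is self-contained;
# delete this block and use the actual file in practice.
pd.DataFrame({
    "Character": ["JERRY"],  # hypothetical column
    "Dialogue": ["Hello world. How are you? Fine!"],
}).to_csv("scripts.csv", index=False)

FILE_PATH = os.path.join(os.getcwd(), "scripts.csv")  # build the path with os, not a bare string
df = pd.read_csv(FILE_PATH)
first_dialogue = df.loc[0, "Dialogue"]
```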
So, we are ready to try different sentence tokenizers. This might be a great example because it has many punctuations such as three dots, exclamation mark, question mark; long sentences; sentences without punctuation between them; etc.
Sentence Tokenization
1. Tokenize an example text using Python’s split()
This will be a naive method, which you should never use for sentence tokenization! I prefer using built-in functions as much as possible in my projects. But if it doesn’t fit, then don’t use it, check for the alternatives (you will probably find a better library because Python has a vast community and lots of great open-source libraries!). So, you might use the following gist, which will only work if you are very lucky and all of your sentences end with “.”:
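The gist itself is missing from the text; the naive version is essentially a one-liner, shown here on a sample string standing in for first_dialogue:

```python
text = "This is one. This is two."

# naive sentence tokenization: split on the literal dot
sentences = text.split(".")
print(sentences)  # → ['This is one', ' This is two', '']
```

Note how the dots themselves disappear and a stray empty string and leading spaces are left behind.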
Not so good, right? I see that split() is used in many articles for word tokenization, which might be acceptable because it splits texts while taking care of extra spaces, etc. For sentence tokenization, it just doesn't work. You might use replace() and then split() to replace all end-of-line characters with one character and split the text into sentences on that character. It would give a better result, but the performance of your code would decrease.
Another problem is we’re losing the character we used for splitting. If we aggregated the transformed data, we wouldn’t have the original punctuations any more.
2. Tokenize an example text using regex
Regular expressions are always useful if you're working with text. It's a quite old and robust approach. Many programming languages offer it natively, so you can use your regex across different languages with little or no change.
Let’s give it a try to one of the accepted answers on Stackoverflow (with a small change, by adding |! to the third group):
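The gist boils down to a single re.split call with the pattern explained below; since the CSV is not bundled here, a sample string stands in for first_dialogue:

```python
import re

text = "Hello world. How are you? Fine!"
# split on whitespace that follows ., ? or !, unless the preceding
# characters look like an abbreviation (i.e., Mr., etc.)
sentences = re.split(r"(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?|!)\s", text)
print(sentences)  # → ['Hello world.', 'How are you?', 'Fine!']
```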
Let me explain the regex pattern: (?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?|!)\s step by step:
(?<!X)Y is called negative lookbehind. It tries to capture Y (any whitespace character \s in our case) where preceded characters of Y do not match with X. For example, let’s say you have this regex: (?<!TEST)MATCH . If your text is “TESTMATCH”, it will not match, if it’s “RANDOM_MATCH” (preceded characters are not “TEST”), it will match.
As you may see, all “MATCH” values are matched but the first row because it has “TEST” as preceded characters
(?<=X)Y is called positive lookbehind. It tries to capture Y (any whitespace character \s in our case) where preceded characters of Y match with X. Let’s use the same example as above and check the outcomes:
This time only the first row is captured because the rest of the rows don’t have “TEST” as preceded characters
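The screenshots for these two toy patterns don't survive in text form, but they are easy to reproduce in code:

```python
import re

sample = "TESTMATCH RANDOM_MATCH"

# negative lookbehind: a MATCH not preceded by TEST, so only the second occurrence
neg = [m.start() for m in re.finditer(r"(?<!TEST)MATCH", sample)]

# positive lookbehind: a MATCH preceded by TEST, so only the first occurrence
pos = [m.start() for m in re.finditer(r"(?<=TEST)MATCH", sample)]

print(neg, pos)  # → [17] [4]
```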
In our regex, we have three groups (lookaround assertions rather than true capturing groups): the first two are negative lookbehinds and the last one is a positive lookbehind. Now, let's look inside these groups to understand what kind of strings we're looking for:
Inside of the first capturing group is as follows: \w\.\w. . \w matches any word character (letter, number, or underscore). \. matches literal dot character (backslash is the escape character in regex). . matches any character. An example match: i.e. MATCH
. matches any word character (letter, number, or underscore). matches (backslash is the escape character in regex). matches any character. An example match: Inside of the second capturing group is [A-Z][a-z]\. . It matches any lowercase or uppercase letter following with dot character. An example match: Mr. MATCH .
. It matches any lowercase or uppercase letter following with dot character. An example match: . Inside of the third capturing group is \.|\?|! . We already know that \ is used for literal characters. | means to match either the first part or the latter part. In other words, (A|B|C) means either match A, B or C. In our case, it will match either dot, question, or exclamation mark.
We may finally know all we need to know about our regex. To sum up, we’re searching for whitespace characters by checking its preceded characters. If it passes from our three validations, we match it. So, we can split the text from these matches.
The disadvantage of using regex is:
It might be quite painful to explain what is going on! It’s not intuitive . You should spend some time to learn it, otherwise, it will look scary.
. You should spend some time to learn it, otherwise, it will look scary. You might create infinite loops or low-performance regular expressions if you’re not much familiar with it.
with it. It’s not easy to cover edge cases before you face with them . When you start to write a regex, you might easily miss some of these cases.
. When you start to write a regex, you might easily miss some of these cases. Our regex relies on whitespace characters. If there is no whitespace between two sentences, then it doesn’t match anything. So, it might fail if your text is not properly formatted. For example, if you give this text to our regex, “This is an example sentence.This is another one without space after the dot.”, it will think it’s one sentence.
The advantages of using regex are:
You don’t need to rely on third-party libraries.
It’s lightning fast because, just as with many of Python’s built-in functions, the re module is implemented and executed in C.
Regex has existed since the 1950s, and it’s quite a well-known tool for string search, replace, etc. You can write one regex in Python and use it in Javascript with minor changes. You only need to know the programming-language-specific behaviours; the rest will be easy. But you can’t use a Python library in Javascript; to do that, you would need a workaround, such as calling a Python API.
3. Tokenize an example text using spaCy
spaCy is capable of preprocessing texts in many languages. It offers tokenization, lemmatization, linguistic features, creating pipelines, training, running on GPU, etc. So, it’s a resourceful and powerful library. If you have unstructured text data such as scraped texts from the web, I’d suggest using it without any hesitation. If you only plan to do basic preprocessing, though, it might add extra complexity that you don’t actually need.
Before you try spaCy’s tokenization, you should download the library and one of its models. We will use the small English model which will be sufficient for our task.
Download library and language model, then tokenize first_dialogue
This approach takes a long time. However, this is a bit misleading because spaCy can work much faster. It took a long time because we need to convert each value to an nlp object, and because we used the default sentence segmentation component, the DependencyParser. It’s possible to modify the nlp() object (removing the dependency parser and using custom segmentation) and define it without using any model. Let’s use the sentencizer pipeline component this time:
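The gist itself isn’t reproduced in this excerpt; with spaCy v3’s API, a minimal model-free setup looks roughly like this (sample text mine):

```python
import spacy

# A blank English pipeline: no model download and no dependency parser.
nlp = spacy.blank("en")
# The sentencizer is a simple rule-based sentence segmentation component.
nlp.add_pipe("sentencizer")

doc = nlp("This is a sentence. This is another! And a question?")
sentences = [sent.text for sent in doc.sents]
print(sentences)
```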
As you can see, there is a trade-off between the speed and the result. sentencizer struggled just as nltk and regex because it uses a rule-based strategy.
The advantages of using spaCy are:
It is a well-documented library which is actively maintained by a large community.
It offers tons of NLP functionalities that you might use.
It supports many languages such as German, Dutch, French, and Chinese.
The disadvantages of using spaCy are:
It’s slower than the re module in normal usage.
It might be overkill to include it in your project and use it only for tokenization.
4. Tokenize an example text using nltk
nltk is another NLP library which you may use for text processing. It natively supports sentence tokenization, as spaCy does. To use its sent_tokenize function, you should download punkt (the default sentence tokenizer).
The nltk tokenizer gave almost the same result as regex. It struggled and couldn’t split many sentences. | https://towardsdatascience.com/tokenize-text-columns-into-sentences-in-pandas-2c08bc1ca790 | ['Baris Sari'] | 2020-12-27 16:10:40.951000+00:00 | ['Spacy', 'Python', 'Pandas', 'Regex', 'Tokenization']
Doing DevOps for Snowflake with dbt in Azure | Doing DevOps for Snowflake with dbt in Azure
How to Use CI/CD to Deploy Your Snowflake Database Scripts with dbt in Azure DevOps
by Venkatesh Sekar
I recently wrote about the need within our Snowflake Cloud Data Warehouse client base to have a SQL-centric data transformation and DataOps solution. In my previous post, I stepped through how to create tables using custom materialization with Snowflake.
Continuing in that vein, I was recently asked by a customer to provide a path for them to do database DevOps for Snowflake. In general, database DevOps has involved quite a bit of complexity and ongoing tweaking to try and get it right. There are some tools available in the market today including:
But, other than Sqitch, they don’t support Snowflake yet, although, with the amount of momentum that Snowflake has in the market, I expect they will provide support in the not too distant future.
Enter dbt
Having used dbt as a data transformation and Jinja template-based tool, I was interested to see if it could potentially be the key to help unlock database DevOps for Snowflake.
As noted above, I was able to create the ‘persistent_table’ materialization which provided an answer for creating ‘source tables’ in dbt, and having done that I next developed a simple CI/CD process to deploy database scripts for Snowflake with dbt in Azure DevOps.
Stay with me and I’ll step you through how to setup dbt to deploy the scripts. As always, the code is available in my git repo venkatra/dbt_hacks.
The Tooling
Here is a glimpse into the tools and solutions that I am using to make this happen…
dbt
dbt is a command line tool based on SQL and is primarily used by analysts to do data transformations. In other words, it does the ‘T’ in ELT.
It facilitates writing modular SQL Selects and takes care of dependencies, compilation, and materialization in run time.
Azure DevOps
Azure Devops provides developer services to support teams in planning work, collaborating on code development, and building and deploying applications.
Snowflake
Organizations across industries rely on Snowflake for their cloud data warehousing needs — net new data warehouses, migrations from legacy DW appliances (Netezza, Teradata, Exadata, etc.), and migrations from traditional Hadoop and Big Data platforms (Hive, HBase, Impala, Drill, etc.). Our clients are also using Snowflake for high-value solution areas such as Security Analytics and Cloud Visibility Monitoring.
Snowflake is a fully relational ANSI SQL cloud data warehouse and allows you to leverage the tools that you are used to and familiar with while also providing instant elasticity, per second consumption-based pricing, and low management overhead across all 3 major clouds — AWS, Azure, and GCP (GCP is in private preview).
Continuous Integration with Azure Pipelines
The Continuous Integration (CI) process is achieved using Azure Pipelines within Azure DevOps. This pipeline is typically invoked after the code has been committed, and the pipeline tasks generally handle:
Code compilation
Unit Testing
Packaging
Distributing to a repository, e.g., Maven
In the case of database script files, there isn’t a great deal of validation that can be done, other than the following:
Code formatting check
Scripts follow certain in-house practices, e.g., naming conventions
Script compilation (this is possible in SQLServer via DACPAC/BACPAC).
Snowflake currently does not have a tool that validates the script before execution, but it can validate during deployment, so in the Build phase I typically do these checks:
Code format check
Naming convention check
Packaging
Distributing to a repository, e.g., Maven
Identifying the Commit Changes
Given the set of all scripts, it’s essential to determine which scripts were added or updated. If these scripts can’t be identified, you will end up re-creating the entire database, schema, etc. which is not desired.
To solve this issue, I’ll use the Azure DevOps Python API. Going through the docs, you will see different REST endpoints and determine detailed information on what was committed, when it was committed, and who committed it, etc.
The Python script IdentifyGitBuildCommitItems.py was developed in response to this. Its sole purpose is to get the list of commits that are part of the current build and their artifacts (the files that were added or changed). Once identified, it writes them into the file ‘ListOfCommitItems.txt’ during execution.
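The script itself lives in the repo rather than the post; a condensed sketch of the approach could look like the following. Note that the output format and the helper names here are my guesses, not the actual script, and only get_build_commits touches the Azure DevOps Python API:

```python
def format_commit_item(commit_id, path):
    # One line per changed file; the real script's exact format isn't shown
    # in the post, so this layout is an assumption.
    return "{}\t{}".format(commit_id, path)


def write_commit_items(items, out_path="ListOfCommitItems.txt"):
    # items: iterable of (commit_id, changed_file_path) pairs.
    with open(out_path, "w") as f:
        for commit_id, path in items:
            f.write(format_commit_item(commit_id, path) + "\n")


def get_build_commits(organization_url, personal_access_token, project, build_id):
    """Ask Azure DevOps which commits are part of the given build."""
    # Imported lazily so the helpers above work without the SDK installed.
    from azure.devops.connection import Connection
    from msrest.authentication import BasicAuthentication

    connection = Connection(base_url=organization_url,
                            creds=BasicAuthentication("", personal_access_token))
    build_client = connection.clients.get_build_client()
    changes = build_client.get_build_changes(project, build_id)
    return [(change.id, change.message) for change in changes]
```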
I’ll review the results in the below sections.
Identifying the Deployable Scripts
During the course of development, the developer might have created scripts for table creation as well as developed transformation models, markdown documentation, shell scripts, etc. The ‘ListOfCommitItems.txt’ that was created earlier would contain all of these scripts. Note that if a file was committed multiple times, the script will not de-dup the commits.
To keep things modular, the script FilterDeployableScripts.py was created. Its responsibilities are to:
Parse the ‘ListOfCommitItems.txt’
Identify the SQL scripts from various commits
Filter out only those scripts which are to be materialized as ‘persistent_tables’
Write the result to the file ‘DeployableModels.txt’
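The filtering step can be sketched as a small pure function. The materialization check below is an assumption about how the models are tagged (a dbt config() block naming persistent_table); the real script may differ:

```python
def is_persistent_table_model(sql_text):
    # A dbt model declares its materialization in a config() block, e.g.
    # {{ config(materialized='persistent_table') }}. A plain substring check
    # is a simplification of whatever the real script does.
    return "materialized='persistent_table'" in sql_text.replace('"', "'")


def filter_deployable(commit_items):
    # commit_items: list of (path, file_text) pairs, possibly with repeats,
    # as parsed from ListOfCommitItems.txt. Keep each persistent_table SQL
    # model once, in first-seen order; skip markdown docs, shell scripts, etc.
    seen, deployable = set(), []
    for path, text in commit_items:
        if path.endswith(".sql") and path not in seen and is_persistent_table_model(text):
            seen.add(path)
            deployable.append(path)
    return deployable
```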
Build Pipeline
The build pipeline is a series of steps and tasks:
Install Python 3.6 (needed for the Azure DevOps API)
Install Azure-DevOps python library
Execute Python script: IdentifyGitBuildCommitItems.py
Execute Python script: FilterDeployableScripts.py
Copy the files into Staging directory
Publish the artifacts (in staging directory)
These are captured in azure-build-pipeline.yml
Published Artifacts
The following screenshot highlights the list of artifacts that get published by the build. It also provides a sample output of ‘ListOfCommitItems.txt’ which was captured in the initial run.
Notice that the ‘DeployableModels.txt’ file contains only the CONTACT table definition file, and ignores all other files that are not meant to be run.
Now take a look at the next screenshot from a different build run — during this build run we saw the following:
The script file ‘deploy_persistent_models.sh’ was updated
The table definition for ‘ADDRESS’ was added. You can see that the script identified only these changes, captured them in ‘ListOfCommitItems.txt’, and safely ignored all the other files.
Again, the ‘DeployableModels.txt’ file contains only the ADDRESS table definition file and is not concerned with any other files that are not meant to be run.
Continuous Deployment
A Continuous Deployment (CD) process is achieved with Azure Release Pipelines. The pipeline we are working on is geared towards the actual deployment to a specific Snowflake environment, e.g. the Snowflake Development Environment.
The “Stage” section is usually specific to the environment in which the deployment needs to happen. It consists of the following tasks as seen below:
Task: DOWNLOAD_INSTALL_DBT
This is a Bash task with inline code below:
#Install the latest pip
sudo pip install -U pip
# Then upgrade cffi
sudo apt-get remove python-cffi
sudo pip install --upgrade cffi
sudo apt-get install git libpq-dev python-dev
# Specify the version on install
sudo pip install cryptography==1.7.2
sudo pip install dbt
dbt --help
Task: DBT_RUN
This is a Bash task with inline code below:
export SNOWSQL_ACCOUNT=$(ENV_SNOWSQL_ACCOUNT)
export SNOWSQL_USER=$(ENV_SNOWSQL_USER)
export DBT_PASSWORD=$(ENV_DBT_PASSWORD)
export SNOWSQL_ROLE=$(ENV_SNOWSQL_ROLE)
export SNOWSQL_DATABASE=$(ENV_SNOWSQL_DATABASE)
export SNOWSQL_WAREHOUSE=$(ENV_SNOWSQL_WAREHOUSE)
export DBT_PROFILES_DIR=./
chmod 750 ./deploy_persistent_models.sh
./deploy_persistent_models.sh
This sets up the various env configurations needed for dbt and used as part of the execution.
The ‘ENV_’ prefixed names are variables that will be substituted at run time. They need to be defined in the variable section as below:
Logs of the DBT_RUN Task
Upon release, the table will be created in Snowflake. Here is a screenshot of the successful run and the logs of the DBT_RUN task:
and below is the artifact in Snowflake:
Are There Any Limitations?
Keep these limitations in mind when leveraging dbt for CI/CD with database objects…
Deployment of objects in a specific order is a roadmap item
Enhancements for Views, Functions, etc. is a roadmap item
Ultimately, every possible scenario won’t be covered by this approach so take a close look at what you need to do from a design and planning perspective
What’s Next
I hope you’ll try to replicate this simple CI/CD process to deploy database scripts for Snowflake with dbt in Azure DevOps. While there are some limitations, the potential is there to add value to your data ops pipelines.
You should also check out John Aven’s recent blog post (a fellow Hashmapper) on Using DBT to Execute ELT Pipelines in Snowflake.
If you use Snowflake today, it would be great to hear about the approaches that you have taken for Data Transformation, DataOps, and CI/CD along with the challenges that you are addressing.
Some of My Other Stories
I hope you’ll check out some of my other recent stories also… | https://medium.com/hashmapinc/doing-devops-for-snowflake-with-dbt-in-azure-db5c6249e721 | [] | 2019-10-16 20:59:41.010000+00:00 | ['Snowflake', 'Open Source', 'Cloud Computing', 'DevOps', 'Dbt'] |
People Transform, And You Should Too So You Can Surpass Them | People Transform, And You Should Too So You Can Surpass Them
3 ways to deal with change.
Photo by Arseny Togulev on Unsplash
I remember seeing one of my old friends at Panera a couple of years ago. I got so ecstatic at the sight of him because he used to support me a ton in high school. He’d defend me against a racist-ass bully who wanted to beat me up, he gave me the confidence to be myself no matter how much of a people-pleaser I was, he’d even give me his mom’s legendary Buffalo sauce she made on Halloween (If you haven’t had it, your life has no meaning).
Let’s just call this person Joe.
As I stood in front of the counter waiting on my sandwich, baguette, and broccoli cheddar soup, Joe walked in with a few of his friends (I never really liked those people, but we were still cordial). I approached him and almost went in for a warm hug.
“Hey, Joe! It’s so awesome to see you again! How are you doing?”
*Awkward stare*
He refused to talk to me. All he did was stand to the left of me with his friends and wait to order. My heart broke. It seemed as though he had a personal problem with me. I don’t talk to him anymore after that. I chalked that whole situation down to us growing apart. | https://medium.com/live-your-life-on-purpose/people-transform-and-you-should-too-so-you-can-surpass-them-8c5250d0215c | ['Khadejah Jones'] | 2020-12-23 14:03:24.864000+00:00 | ['Self-awareness', 'Life Lessons', 'Evolve', 'Self', 'Change'] |
Crispy | Written by
I’m an award-winning comic artist, writer and graphic recorder. All words + images © Sarah Firth. Contact me www.sarahthefirth if you want to use them. | https://sarahthefirth.medium.com/crispy-d5500ba4d7ce | ['Sarah Firth'] | 2018-06-15 21:34:22.567000+00:00 | ['Addiction', 'Trauma', 'Self Improvement', 'Self-awareness', 'Self Aware'] |
Crash Course: Reinforcement Learning | A short high-level introduction (without all the complicated math) to Reinforcement Learning
What does the process of training a dog to sit look like? Well, your dog may initially be completely untrained and have no idea what to do.
You might tell him to sit, and the dog might start barking. You scold him and tell him to sit again, but this time he starts wagging his tail. Once again, you scold him. You continue to try and tell your dog to sit, and finally, on the 27th try, he sits! You give him a treat and a word of praise.
As you keep up this cycle of scolding and praising, your dog eventually learns to sit when you tell him to. Voila! You have just demonstrated reinforcement learning with your dog!
What exactly is Reinforcement Learning?
Reinforcement learning is essentially where an agent is placed in an environment and is able to obtain rewards by performing certain actions. The agent’s only goal is to maximize the amount of reward it can get.
In the dog training example, your dog serves as the agent, and the rewards were your words of praise and treats.
Reinforcement learning works by using trial and error (it can be very tedious, as seen from the dog training) from its own actions and experiences.
Reinforcement Learning vs Supervised/Unsupervised Learning
Reinforcement learning is a subset of machine learning, as are supervised and unsupervised learning.
The main difference between reinforcement learning and supervised learning is sequential decision making. While in supervised learning, the actions the agent makes do not affect the future, in reinforcement learning, every single input depends on the previous action the agent made.
The main difference between reinforcement learning and unsupervised learning is their goals. In unsupervised learning, the main goal is to find structure within a given dataset, whereas in reinforcement learning, the goal is to find a course of action that would maximize the total reward for the agent.
All 3 of these fall into the massive umbrella of machine learning in which agents learn from data.
Markov Decision Process
The Markov Decision Process is the mathematical framework that describes the environment of a reinforcement learning model. In reinforcement learning models, all future states depend only on the present state, which means that they are a Markov Process. Reinforcement learning is a technique that attempts to learn an MDP and find the optimal policy.
Common Terminology:
State is essentially what the agent observes in its environment at a certain moment
Actions are the possible moves that the agent can perform in the environment
The Reward is what the agent receives if it achieves a desirable result
Discount is an optional factor that determines the importance of future rewards relative to now; it can range from 0 to 1.
The Value of a state is the expected long-term return (which may include the discount)
Policy is the strategy that the agent employs to determine the next action. The optimal policy is the one that maximizes the amount of reward the agent expects to receive.
Maze Game Example
Let’s say the agent was the robot and the maze was the environment. The state would be the position of the robot at any point in time. It would utilize a policy to determine which path to take. The actions would be going left, right, up, or down. We could award the robot +1 point for hitting an empty square, -1 point for hitting a wall, and +100 points for reaching the exit.
At first, the robot starts off with no experience at all, so it has completely random movements. However, as it starts to learn the values of each state, it begins to become smarter and smarter, finally completing the maze.
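That trial-and-error loop can be sketched with tabular Q-learning. The maze layout and hyperparameters below are invented for illustration; only the reward scheme (+1 empty square, -1 wall, +100 exit) follows the example above:

```python
import random

random.seed(0)

# 0 = empty, 1 = wall, 2 = exit; the agent starts in the top-left corner.
MAZE = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 2]]
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < 3 and 0 <= nc < 3) or MAZE[nr][nc] == 1:
        return state, -1, False      # hit a wall: stay put, -1 point
    if MAZE[nr][nc] == 2:
        return (nr, nc), 100, True   # reached the exit: +100, episode ends
    return (nr, nc), 1, False        # empty square: +1 point

# Q[state][action]: the agent's current estimate of long-term reward.
Q = {(r, c): dict.fromkeys(ACTIONS, 0.0) for r in range(3) for c in range(3)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):
    state, done = (0, 0), False
    for _ in range(100):             # cap episode length
        if random.random() < epsilon:
            action = random.choice(list(ACTIONS))        # explore
        else:
            action = max(Q[state], key=Q[state].get)     # exploit
        nxt, reward, done = step(state, action)
        # Update toward reward + discounted best value of the next state.
        best_next = max(Q[nxt].values())
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
        state = nxt
        if done:
            break

# After training, the greedy policy walks straight to the exit.
state, path = (0, 0), [(0, 0)]
for _ in range(10):
    state, _, done = step(state, max(Q[state], key=Q[state].get))
    path.append(state)
    if done:
        break
print(path)
```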
Exploitation vs Exploration
Let’s revisit the maze example. Let’s pretend that the robot initially chooses to go to the right, thus assigning a value of 1 to the white space to the right of it. Then it goes on and ends the episode. When the robot starts again, it now knows that the state of the space to the right has a value of 1, while the state of the space on top has a value of 0. Thus, it will always go right no matter what!
This brings us to the choice of exploitation vs exploration. The robot never actually had a chance to explore all the other options; It simply chose the state that had the highest value. This is known as the Greedy Policy, where the agent always picks the highest value.
One option here is to “exploit” the agent’s previous knowledge, in which it always picks the state that will give it the highest reward. The other option would be to “explore” the other states to see if they would potentially give a higher reward.
Depending on the situation, both have their advantages. If you needed to minimize the amount of loss, exploitation would work well for you to use previous experiences of what states received rewards. If you simply wanted to find the best possible method, without simulations that did not have any restrictions, exploration would be best.
Normally, a combination of both exploitation and exploration is used depending on the problem that needed to be solved (this can be changed as the project progresses).
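That combination is commonly implemented as an epsilon-greedy policy: exploit most of the time, explore with a small probability epsilon. A sketch (the numbers are arbitrary):

```python
import random

def choose_action(q_values, epsilon=0.1):
    # q_values: dict mapping each available action to its estimated value.
    if random.random() < epsilon:
        return random.choice(list(q_values))      # explore: any action
    return max(q_values, key=q_values.get)        # exploit: best-known action

random.seed(1)
picks = [choose_action({"up": 0.0, "right": 1.0}) for _ in range(1000)]
print(picks.count("right") / len(picks))  # close to 0.95: mostly exploiting
```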
Episodic vs Continuous
A reinforcement learning model can be either episodic or continuous.
Episodic simply means that there is a “terminal” condition for the game, whether that be winning or losing. An example of a program that would require episodic reinforcement learning would be the game pong. In pong, the simulation resets every time the agent wins or loses the game (first to 11 points).
Continuous means that there is no end condition for the game, and that the model will just keep on running until stopped. For instance, a reinforcement learning model applied to the stock market would keep on going until it is manually terminated.
Monte Carlo vs Temporal Difference Learning Methods
In the maze example, the reinforcement learning model could’ve used either the monte carlo or temporal difference method.
The Monte Carlo method waits until the end of the episode, then checks the cumulative reward that it has received. It then calculates and updates the expected reward for each state at the end.
The Temporal Difference method updates the value of each state after each time step instead of at the very end. Thus, it continually receives feedback from rewards and updates its guesses of each value of the state.
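The two update rules can be written side by side as a sketch; the learning rate alpha, the discount gamma, and the toy episode are invented here:

```python
alpha, gamma = 0.1, 0.9

def mc_update(V, episode):
    # Monte Carlo: wait until the episode ends, then move every visited
    # state's value toward its observed return (cumulative discounted reward).
    G = 0.0
    for state, reward in reversed(episode):   # episode: [(state, reward), ...]
        G = reward + gamma * G
        V[state] += alpha * (G - V[state])

def td_update(V, state, reward, next_state):
    # Temporal difference: after every step, nudge V(state) toward the
    # one-step estimate reward + gamma * V(next_state).
    V[state] += alpha * (reward + gamma * V[next_state] - V[state])

V = {"A": 0.0, "B": 0.0}
mc_update(V, [("A", 0.0), ("B", 10.0)])  # B's return is 10; A's is 0.9 * 10
print(V)  # A moved a step toward 9.0, B toward 10.0
```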
Different Approaches to Reinforcement Learning
Value Based
In value-based reinforcement learning, we want to find the optimal value function.
The value function is a function that tells us the amount of reward that an agent can expect to get in the future at each state (That is what we were using for the maze example). With this learning, the agent will always pick the state that has the highest reward.
In this example, the agent will start at -7, then continue on to -6, -5, and so forth until it reaches the goal, as those states have the largest values. Once you find the optimal value function, the optimal policy can be derived from it.
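Value iteration reproduces the numbers in that example: with a reward of -1 per move and the goal fixed at 0, the optimal value of a state k steps from the goal is -k. A sketch with an eight-state corridor (the length is assumed):

```python
# Eight states in a row; state 7 is the goal (value 0), each move costs -1.
n, goal = 8, 7
V = [0.0] * n

for _ in range(20):   # repeat V(s) = max over moves of (-1 + V(next)) until stable
    for s in range(n):
        if s == goal:
            continue
        neighbors = [s2 for s2 in (s - 1, s + 1) if 0 <= s2 < n]
        V[s] = max(-1 + V[s2] for s2 in neighbors)

print(V)  # → [-7.0, -6.0, -5.0, -4.0, -3.0, -2.0, -1.0, 0.0]

# The greedy policy simply moves to the neighbor with the highest value.
```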
Policy Based
In policy-based reinforcement learning, the agent is essentially told where to go by the policy function. The policy function is a function that describes how the agent will make a decision.
The policy usually starts off random, with a value function that corresponds to it. The agent then finds a new value function and improves its policy, and it keeps going until it finds the optimal policy and value function.
As shown, the policy function essentially tells the agent the best direction to go.
Wrap-up
So that’s it! You just completed a high-level overview of the components of Reinforcement Learning. Just to recap:
RL is essentially where an agent is placed in an environment and is able to obtain rewards by performing certain actions.
RL is different from supervised/unsupervised learning
RL attempts to find the optimal policy of the Markov Decision Process
Exploitation picks the best choice from what is known, while exploration picks a choice that may not be considered optimal at the moment
Episodic means that the RL model has a designated win/loss, while continuous means that the RL model will keep running until manually stopped
Monte Carlo method receives reward and updates value function at end of episode, while TD makes guesses to improve the value function at the end of each time step
Value Based finds optimal value function then derives optimal policy function from it, while policy based continuously updates its policy function to find the most optimal policy and value function
Hope you learned something from this article! | https://medium.com/swlh/reinforcement-learning-cb9de05fb60 | ['Allen Wang'] | 2020-11-28 10:09:17.705000+00:00 | ['Reinforcement Learning', 'Artificial Intelligence', 'Machine Learning'] |
How Twilio Bested Hemingway | Ernest Hemingway has long been credited, perhaps apocryphally, with authoring the shortest story ever told: “For sale: baby shoes, never worn.”
The other day, driving off the San Francisco-Oakland Bay Bridge, I took in the Twilio billboard and realized that Hemingway’s record had fallen:
What a brilliant illustration of how even stripped down messaging can convey a powerful, moving story. (By the way, I have no relationship with Twilio.)
Hemingway’s six-worder is notable because it contains, if implicitly, all of the essential narrative elements. There’s a protagonist (an unnamed mother), high stakes for a future outcome (life and death of a child), a climax (death), and a new state of the world (moving on by selling the shoes).
Remarkably, Twilio’s three-word gem similarly leverages story structure to pack an emotional punch. Although the message ostensibly speaks to an executive (non-developer), its protagonist (and target audience) is a developer. The high stakes for a future outcome? Whatever communications-related challenge the executive cares about enough to be asking questions. The implied climax, then, is when the executive’s developer overcomes that challenge — armed with Twilio, of course. The new state of the world? One in which developers are respected and valued.
Check out this response to a tweet about the billboard:
OK, “Warren Peace” might be half joking, but I think he’s half truly moved, too. | https://medium.com/firm-narrative/how-twilio-bested-hemingway-41cd667f1d01 | ['Andy Raskin'] | 2015-06-22 14:36:11.132000+00:00 | ['Strategy', 'Messaging', 'Storytelling'] |
22. Revisiting the Slowdown, and the end of the Great Acceleration | In his book How Democracy Ends, David Runciman also dwells on Japan. This is partly because the American political scientist Francis Fukuyama did so, in his hugely influential The End of History and the Last Man (You can also hear Runciman discuss Fukuyama, and why his perhaps flawed view in underestimating the East, in his brilliant History of Ideas podcast series, from which I’ve been taking regular doses, as if a palliative soundtrack to the pandemic.)
“Francis Fukuyama cited Japan (along with the EU) as the likeliest illustration of what we could expect from the end of history: the triumph of democracy would turn out to be stable, prosperous, efficient and just a little bit boring … Today Japan and Greece are rarely invoked by politicians in other democracies as exemplars of the possible fate that awaits us all. They don’t work as morality tales any more because their message has grown too ambiguous. Japan remains stuck in an political and economic rut yet it continues to function perfectly well as a stable, affluent society that looks after its citizens. Imagine drawing a ticket in the great lottery of life that assigned a time and a place in which to live from across the sweep of human history. If it read: “Japan, early twenty-first century’, you would still feel like you’d won the jackpot.”—David Runciman, How Democracy Ends (2018)
Here, Runciman doesn’t quite unpack the promising nature of that ambiguity. How can winning the jackpot coexist with being in an economic rut? That’s interesting! Ambiguity is where we are. It is our current condition, not least under the virus. It’s something we have to learn how to work with, and within, and it describes a very different form of ‘drama’ than the simple narrative arcs and resolutions that Runciman suggests politicians are looking for.
“It is Japan and Greece that now offer the best guides to how democracy might end up … As morality tales go, they are missing something. What they lack is a moral. Instead of the drama reaching a climax, democracy persists in a kind of frozen crouch, holding on, waiting it out, even if it is far from clear what anyone is waiting for. After a while, the waiting becomes the point of the exercise. Something will turn up eventually. It always does.”—David Runciman
This ‘waiting it out’ is clearly problematic, including in the context of Greece and Japan, as Runciman makes perfectly clear with reference to the various toxicities in their political culture, and their various inequalities. Things do not ‘turn up’ equally. I’ll pick up the imperative that we must not simply wait it out later on in this series. In his podcast on Fukuyama, Runciman describes the familiar refrains of ‘missing years’ and ‘spinning wheels’ usually applied to the stasis perceived in Japan and Greece; as if there is nothing to see there. Of Japan, Runciman says that “it does not seem like it’s providentially surfing the river of history — it’s got stuck in the reeds.”
But perhaps this is the point. There is much to see in Japan, just not from when viewed through the orthodox lens (as Runciman knows). Fukuyama, in his The End of History and the Last Man, used the motif of the tea ceremony in his uncomfortable dismissal of Japanese culture, conveying what he saw as a pervasive pointlessness, or stasis, at best, and at worst, a humiliating subjugation to an unchallenged hierarchical social order concerned with recreating a self-protecting stasis. Fukuyama saw Japanese culture as exemplifying the latter of his two manifestations of thymos: isothymia and megalothymia. (Isothymia is the desire to be recognized as the equal of other people, whereas megalothymia is the demand of certain individuals to be recognized as superior to others.)
“After the rise of the Shogun Hideyoshi in the fifteenth century, Japan experienced a state of internal and external peace for a period of several hundred years which very much resembled Hegel’s postulated end of history. Neither the upper nor lower classes struggled against each other, and did not have to work terribly hard. But rather than pursuing love or play instinctively like young animals — in other words, instead of turning into a society of last men — the Japanese demonstrated that it is possible to continue to be human through the invention of a series of perfectly contentless formal arts, like Noh theater, tea ceremonies, flower arranging, and the like. A tea ceremony does not serve any explicit political or economic purpose; even its symbolic significance has been lost over time. And yet, it is an arena for megalothymia in the form of pure snobbery: there are contending schools for tea ceremony and flower arrangement, with their own masters, novices, traditions, and canons of better and worse … (I)n a world where struggle over all of the large issues had been largely settled, a purely formal snobbery would become the chief form of expression of megalothymia, of man’s desire to be recognized as better than his fellows. In the United States, our utilitarian traditions make it difficult for even the fine arts to become purely formal. Artists like to convince themselves that they are being socially responsible in addition to being committed to aesthetic values. But the end of history will mean the end, among other things, of all art that could be considered socially useful, and hence the descent of artistic activity into the empty formalism of the traditional Japanese arts.”—Francis Fukuyama, The End of History and the Last Man (1992)
This passage feels somewhat like a political scientist picking up the entire field of aesthetics in his analytical tweezers and peering at it, as if an exotic insect, or frustratedly shuffling columns in Excel, attempting to find the value of art via arithmetic. To reduce art to a simple gradient opposing solid, utilitarian and responsible (American) and contentless, empty and snobbish (Asian) is uncomfortable indeed, as well as entirely missing the point.
I’m not the person to produce a fuller critique of that position, but seeing as Fukuyama has landed on the tea ceremony’s function—if that is what we must reduce it to—do read the opening words from The Book of Tea (茶の本, Cha no Hon) by Okakura Kakuzō, published in 1906:
“(It is) founded on the adoration of the beautiful among the sordid facts of everyday existence. It inculcates purity and harmony, the mystery of mutual charity, the romanticism of the social order. It is essentially a worship of the Imperfect, as it is a tender attempt to accomplish something possible in this impossible thing we know as life. The Philosophy of Tea is not mere aestheticism in the ordinary acceptance of the term, for it expresses conjointly with ethics and religion our whole point of view about man and nature. It is hygiene, for it enforces cleanliness; it is economics, for it shows comfort in simplicity rather than in the complex and costly; it is moral geometry, inasmuch as it defines our sense of proportion to the universe. It represents the true spirit of Eastern democracy by making all its votaries aristocrats in taste.”— ‘The Book of Tea’, Okakura Kakuzō (1906)
This hardly seems pointless. Indeed, it could be that cultivating a “comfort in simplicity rather than the complex and the costly”, or the “moral geometry (in) defining our sense of proportion to the universe” would be extremely valuable practices right now.
The Slowdown conditions in Japan after the Shogun Hideyoshi—in Fukuyama’s words, “a state of internal and external peace for a period of several hundred years”—would be interesting to understand more about, particularly in light of Dorling’s ideas. What could be learned from this time, and what might not, given the isolationist sakoku period that followed? And what might translate to a contemporary globalised condition under Slowdown dynamics?
7 Signs You Might Be Codependent in Your Relationship | Shawn Meghan Burn, Ph.D., author of Unhealthy Helping: A Psychological Guide for Overcoming Codependence, Enabling, and Other Dysfunctional Helping, defines codependence as “an imbalanced relationship pattern where one partner assumes a high-cost ‘giver-rescuer’ role and the other the ‘taker-victim’ role.”
By the time I left my ex-husband, I was managing nearly everything for our household.
I thought raising two children on my own would be back-breakingly difficult, but it wasn’t, because instead of taking care of three dependents (my two children and my husband), I’d only be taking care of two.
Rob Weiss, Ph.D., author of Prodependence: Moving Beyond Codependency, says,
“The codependent taker is usually some combination of needy, under-functioning, immature, addicted, entitled or troubled. They rely on the giver to take care of them, assume or soften the negative consequences for their actions, and to compensate for their under-functioning.”
Until I discovered my ex-husband had been actively using for five or more years, I would have never said we were in a codependent relationship. I thought what we were doing was normal.
Every couple balances shared duties their own way, right? Every couple has their problems, right?
When I realized that for years I had enabled his addiction and bad behavior, I was devastated. I hadn’t been able to tell love from codependency because I thought that if we love someone, we put that person’s needs before ours and make their happiness our priority. That couldn’t be a bigger lie.
You might be in a codependent relationship too if:
1. You start taking on all of the responsibility to connect.
As a partner pulls back in how much time, effort, and care they are giving, the other partner instinctively fills in the gap by working harder to stay bonded. As soon as this happens, the relationship has shifted in an unhealthy direction towards codependency.
2. You want to fix your partner.
Codependent personalities tend to be people-pleasers, thriving on a desire to “help” or “fix” others. When caring for another person stops you from having your own needs met or if your self-worth is dependent on being needed, you may be heading down the codependent path.
3. You lose all of your boundaries.
Codependent people are over-givers. They always feel overly responsible for other people and that they need to keep giving and overcompensating. We need boundaries to protect ourselves, but often codependent people will drop them entirely in a romantic relationship with a dysfunctional person.
4. You don’t feel like you have an independent life.
The time you are away from your partner is time you can work on yourself and your other relationships and become a more well-rounded happier person. If you cannot do anything without your partner at your side, you are moving into a codependent relationship.
5. You lose contact with friends and family.
While falling in love often means we pull away from our other relationships, healthy ones will spring back to a more balanced state once the pink cloud of new love fades. If you are still sacrificing your personal relationships to be with your partner long after the new rosy period, it’s a sign that your romantic relationship has become too much of a priority and is hurting your other relationships.
6. Your partner has unhealthy habits.
Real love is when we encourage our partner to be the best version of themselves they possibly can be.
To a codependent though, who wants to feel needed, their partner choosing to be healthy can actually feel like a threat, so they can often subconsciously or consciously sabotage their partner’s attempts at being healthy.
For example, Leah might be in a relationship with Peter who is a binge drinker. She binge drinks whenever he does or she takes care of him whenever he is hungover, calling in sick to work for him and bringing him water and ibuprofen. But when he decides to quit, instead of supporting him to do this, she keeps alcohol around the house and often drinks in front of him.
7. You’re always looking for reassurance.
When your emotional needs are not being met, you’ll find yourself constantly asking for and wanting reassurance. Maybe you constantly wonder if your partner will leave you or you fantasize about leaving them.
Maybe you start fights just to have the lovey-dovey make-ups later or you flirt with other people of the opposite sex to get a rise out of them. If so, you are likely in a codependent relationship.
Codependent relationships are never healthy and are hard to fix if both people don’t get help from licensed professionals. The first step, though, is always awareness. If you have codependent behaviors, it’s very likely your partner has an issue that you may or may not be completely aware of.
If you have any concerns about your safety in an abusive codependent relationship, get to a safe place and call the National Domestic Violence Hotline at 1–800–799–7233 or go to their website for resources and help. | https://tarablairball.medium.com/7-signs-you-might-be-codependent-in-your-relationship-e4604973d99a | ['Tara Blair Ball'] | 2020-12-14 20:01:49.011000+00:00 | ['Relationships', 'Life Lessons', 'Mental Health', 'Happiness', 'Self'] |
Ghost | I’m the ghost of a girl
Who has wandered the world
Chasing after what seemed to be right
I thought I’d found it one day
And got carried away
And since then I’ve been chasing the light
I’m the ghost of a girl
With my banner unfurled
Struggling to stand my ground and not fall
Once I fell in and drowned
In the lights and the sound
But they always meant nothing at all
I’m the ghost of a girl
A gaunt face framed with curls
I was beautiful or so they say
I thought I could survive
On the fire in my eyes
But I blinked and the fire went away | https://medium.com/the-scene-heard/ghost-d0b865417e9 | [] | 2017-07-12 14:21:36.524000+00:00 | ['Thesceneandheard', 'Writing', 'Poem', 'Life', 'Poetry'] |
Mother Cosmos | Ranging much of the Indo-European and Proto-Indo-European world, and more broadly approximately 4,000–25,000 years ago, one senses echoes of an image now almost entirely evaporated into the time before humanity’s earliest memories or histories. Though not at all clear today, with some trepidation and humility, one may venture to describe this widespread image as maternal. Were one to consider only two works from the era in question, they might be the “Venus” of Willendorf of Paleolithic Europe and the Upanishads of ancient India. Both the Venus and the Upanishads may not only stand as artifacts of humanity’s deep past, in the form of sculpture and literature, but as indications of the mental landscape common to all human beings.
Measuring little more than four inches high, carved from limestone in ca. 25,000–20,000 BCE, the “Venus” of Willendorf was discovered in Lower Austria (Humanistic 5). This particular Venus, one instance of a number of female figurines discovered around eastern Europe, may grant a clue as to the cultural and psychological state of Paleolithic humans in that region — including, or especially, as an indication of what these people thought of women generally and of Woman in the abstract. In this era women “secured food by gathering fruits and berries” and “acted as healers and nurturers.” Additionally, “the female (in her role as child-bearer) assured the continuity of the tribe … As life-giver, she was identified with the mysterious powers of procreation and exalted as Mother Earth” ( Humanistic 4). Embodying not only practical concerns but cosmic vision, as well, the Venus and her counterparts may indicate not only the ways in which prehistoric human communities viewed women, but the way in which they saw themselves in relation to the world around them.
Venus of Willendorf, frontview. Borrowed from Wikimedia Commons.
Jungian psychologist Erich Neumann suggests as much in The Great Mother, describing the ostensibly prevalent mother-imagery of eastern Europe as an indication of the psychological development of prehistoric humans: the conscious ego emerges from the automatic unconscious mind in much the same way a child is gestated and birthed by their mother; while the child develops through separation, individuating themselves from their mother (a stage Neumann sees in the subsequent prevalence of father- and hero-imagery), the child begins in close relation to their mother — their contingent origin. A number of figurines like the Venus of Willendorf “show the female nude with pendulous breasts, large buttocks, and a swollen abdomen, indicating pregnancy” (Humanistic 4); moreover, perhaps these maternal images are related to the numerous caves and their art which prehistoric humans carved and painted (Lascaux and Chauvet, for instance), harking back to one’s biological womb in order to evoke the sense of a more cosmic womb which gave rise to all of nature, humanity included.
Lascaux cave, interior; Prehistoric Sites and Decorated Caves of the Vézère Valley. Borrowed from Wikimedia Commons.
Neither the Venus nor the caves contemporary with her carving are self-explanatory, meaning that explanations such as those offered by Neumann are merely speculative. However, in a roundabout way, Neumann’s analysis and the explicitly maternal contents of the Venus and her counterparts may testify to these artifacts’ place in human (or at least Indo-European) development. Though their meanings are not immediately obvious, the Venus and the caves stand as precursors to the thought and culture of subsequent societies, a kind of civilizational womb, mother to the peoples and cultures to follow in the Neolithic era and beyond. The Venus of Willendorf, then, may serve as a reminder not only of the mysteriousness and obscurity of much of human history and prehistory, but as a concrete reminder of humanity’s contingency, deriving its contents from an origin which is beyond memory or history and yet which sits at the very root of much of the present.
The Upanishads of ancient India may capture this maternal cosmos in a literary vein. Initially orally transmitted from the eighth to sixth centuries BCE, these 250 prose commentaries on the more ancient Vedas capture the essence of Hinduism: pantheism, which, in the case of Hinduism, “identifies the sacred not as a superhuman personality, but as an objective, all-pervading Cosmic Spirit called Brahman.” The Hindu view “that divinity is inherent in all things” and that “the universe itself is sacred” ( Humanistic 65) seems fairly cogent with the prehistoric maternal cosmos at which the Venus and her caves hint, perhaps being a distant descendant of this worldview. Consider the Hindu cosmology as a whole:
“In every human being, there resides the individual manifestation of Brahman: the Self, or Atman, which, according to the Upanishads, is ‘soundless, formless, intangible, undying, tasteless, odorless, without beginning, without end, eternal, immutable, [and] beyond nature.’ Although housed in the material prison of the human body, the Self (Atman) seeks to be one with the Absolute Spirit (Brahman). The spiritual (re)union of Brahman and Atman — a condition known as nirvana — is the goal of every Hindu. This blissful reabsorption of the Self into Absolute Spirit must be preceded by one’s gradual rejection of the material world, that is, the world of illusion and ignorance, … achieving liberation of the Self and union with the Supreme Spirit” (Humanistic 66).
Aitareya Upanishad, Sanskrit, Rigveda, Devanagari script, 1865 CE manuscript. Borrowed from Wikimedia Commons.
A vision startlingly similar not only to Neumann’s own model of the emergence of consciousness but to human reproduction, this Hindu cosmology and its literary exploration in the Upanishads may evoke similar phenomenal states as those sought by the prehistoric populations who carved figurines like the Venus of Willendorf and who occupied and painted caves such as Lascaux and Chauvet. Furthermore, the Bhagavad Gita, the most popular Indian text next to the Upanishads, serving as something of a commentary on the latter, further explicates this theme in its conversation between Arjuna, who feels dissociated from and adrift in the world, and Krishna, a manifestation of Brahman who calls his interlocutor to abandon desire for this or that particularity and to instead embrace the whole of existence — Brahman, the whole of which everything and everyone is a part (Humanistic 66–67).
Both the Venus of Willendorf and her caves on the one hand, and the Upanishads and their adjacent literature on the other, are ancient enough as to have created numerous varying and even contradictory interpretations. Neither piece is self-explanatory. And yet both stand at the basic origins of much of modern human thought and culture, perhaps even of humanity’s overall psychological development. If serving as nothing more than a hint at humanity’s mysterious contingency (as in the case of the Venus), or in exploring that contingency from an existential and psychological perspective (as in the case of the Upanishads), both the Venus of Willendorf and the Upanishads secure themselves as landmarks and staples of human history, and thus as deserving of recognition and preservation in the overall narrative of human origins and development. | https://medium.com/interfaith-now/mother-cosmos-cebba0ec9a59 | ['Nathan Smith'] | 2019-10-06 23:15:47.430000+00:00 | ['Literature', 'Spirituality', 'Philosophy', 'Religion', 'Psychology'] |
A Beginner-Friendly Explanation of How Neural Networks Work | The Mechanics of a Basic Neural Network
Again, I don’t want to get too deep into the mechanics, but it’s worthwhile to show you what the structure of a basic neural network looks like.
In a neural network, there’s an input layer, one or more hidden layers, and an output layer. The input layer consists of one or more feature variables (or input variables or independent variables) denoted as x1, x2, …, xn. The hidden layer consists of one or more hidden nodes or hidden units. A node is simply one of the circles in the diagram above. Similarly, the output layer consists of one or more output units.
A given layer can have many nodes like the image above.
As well, a given neural network can have many layers. Generally, more nodes and more layers allows the neural network to make much more complex calculations.
Above is an example of a potential neural network. It has three input variables, Lot Size, # of Bedrooms, and Avg. Family Income. By feeding this neural network these three pieces of information, it will return an output, House Price. So how exactly does it do this?
Like I said at the beginning of the article, a neural network is nothing more than a network of equations. Each node in a neural network is composed of two functions, a linear function and an activation function. This is where things can get a little confusing, but for now, think of the linear function as some line of best fit. Also, think of the activation function like a light switch, which results in a number between 0 and 1.
What happens is that the input features (x) are fed into the linear function of each node, resulting in a value, z. Then, the value z is fed into the activation function, which determines if the light switch turns on or not (between 0 and 1).
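To make that concrete, here is a minimal sketch of a single node in plain Python; the input values, weights, and bias are made up for illustration, and sigmoid stands in for whichever activation function a real network might use:

```python
import math

def node(x, weights, bias):
    # Linear function: z = w1*x1 + w2*x2 + ... + wn*xn + b
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    # Activation function (sigmoid): squashes z into a value between 0 and 1
    return 1 / (1 + math.exp(-z))

# Illustrative inputs and made-up weights
features = [0.5, 2.0, 1.0]
activation = node(features, weights=[0.4, -0.2, 0.1], bias=0.05)
print(activation)  # a value between 0 and 1
```

The closer the activation is to 1, the more strongly this node “switches on” for the nodes in the following layer.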
Thus, each node ultimately determines which nodes in the following layer get activated, until it reaches an output. Conceptually, that is the essence of a neural network.
If you want to learn about the different types of activation functions, how a neural network determines the parameters of the linear functions, and how it behaves like a ‘machine learning’ model that self-learns, there are full courses specifically on neural networks that you can find online! | https://towardsdatascience.com/a-beginner-friendly-explanation-of-how-neural-networks-work-55064db60df4 | ['Terence Shin'] | 2020-06-03 15:09:12.829000+00:00 | ['Artificial Intelligence', 'Data Science', 'Machine Learning', 'Technology', 'Education'] |
Heat and Eat Vegan Holiday Treats | Heat and Eat Vegan Holiday Treats
Everything doesn’t have to be scratch-made for you to eat well this holiday season.
Photo by Brooke Lark on Unsplash
It seems like this last half of 2020 is zooming by at record speed, but not without more lessons. I don’t know about you but I’ve learned an awful lot about myself in the last nine months. Mostly, I’ve learned that I can persevere under almost any circumstance.
However, I’ve also come face to face with the fact that I tie my worth to work. I feel better and more grounded when I’m being “productive.” Even my love language — how I express love — is tied to doing. I cook every day because without that physical manifestation of my love, I’m not sure what value I bring to my family. What I am learning — through 2020 — is that self-worth and love don’t have to be tied to anything — least of all productivity and work. In making this admission, I hope that if you are like me, you feel seen.
Thus, in this issue, we’re going to focus on fabulously healthy plant-based heat and eat options that will help you save time this holiday season. Everything doesn’t have to be scratch-made to be delicious. Furthermore, you don’t have to spend hours stressing or slaving over a hot stove to prove that you can. You can relax a little and enjoy these lovely products and make as little or as much of your meal as you can reasonably do without feeling compelled to burn yourself out.
For this list, I’ve decided to focus on Black-and-Brown-owned plant-based businesses. 2020 has also reinforced for me, the need to align everything I do with my core values. Supporting Black-and-Brown-owned businesses globally is high on my list.
And don’t worry, if you’re absolutely committed to making scratch-made food to feel good, I’ll be linking to my quick and dirty recipes for some of these dishes as well. With Christmas on the horizon, this list is meant to lighten your load as you figure out what your socially distanced holiday will look like.
That said, let’s get to heating and eating…
Holiday Staples
These are both ingredients and dishes that will make any vegan feast pop. The wonderful thing about the holidays is that you get a melange of flavors on one plate. In that vein, please note this entire list is likely going to read like a pan-African holiday feast. As an Afro-Caribbean person, I’m sharing products that closely relate to flavors and dishes that conjure memories of home. I hope some or all of them align with your needs/wants.
Macaroni and ‘Cheese’:
Photo credit: macandyease.com
1. For a complete heat and eat dish, check out Mac & Yease. It comes in two flavors: Original and Jalapeno Cheddar. They also offer a BKLYN Bolognese. As a Brooklyn Girl, I’m so tempted to order that the next time I go to the States. All products retail for about $24 USD. They might be sold out on their website, but according to my research, you can find it at Whole Foods. Check the freezer section. Another tip: if you find it and want more of a Caribbean macaroni pie vibe, add a flax or chia egg to get a firmer pie.
2. If making vegan cheese sauce gives you anxiety, you can also just buy a sauce, boil your noodles and season to your taste. There are a couple of highly rated Black-owned companies offering prepackaged sauces:
Mylkdog’s Notcho Cheese gets a ringing endorsement from PETA. It comes in Original and Spicy and is priced at about $15 USD/jar.
Fineapple Vegan’s Liquid Gold has been burning up my Instagram feed with positive reviews from a host of vegan bloggers. Retails for $12.99 USD/jar. Watch her make her mac and cheese recipe using the sauce here.
3. If you are hellbent on making your own scratch made vegan mac and cheese, checkout my homemade Holy Mac & Cheese recipe.
Meat Substitutes
Photo credit: @mothercluckerr on Instagram
1. If fried chicken, ribs, and/or fried turkey is high on your list, the streets are talking and Atlas Monroe is all the rage. Their chick’n and ribs options range from $13-$23 USD. You can get a whole deep-fried vegan turkey for $95 USD. I am DYING to try their chick’n. Apparently you just heat and eat it according to instructions. Every single vegan blogger — Black, Brown, White and Blue — that I follow has sworn that this stuff is amazing. I want to live vicariously through you, if you try it, drop me a note. They even have vegan bacon.
2. My partner and I started a tradition of making holiday lasagna a couple of years into our relationship. This year, I’ll be veganizing that recipe, likely using walnut or pecan meat. If you have recipes that require mince/ground beef, consider trying Hella Nuts’ prepackaged Walnut Meat.
3. Simple homemade solutions: check out my walnut meat recipe. For other meaty options, you can opt for a beef-less bolognese or meaty bean-loaf. For a tasty fried chick’n substitute, you can try my baked cauliflower bites. | https://medium.com/one-table-one-world/heat-and-eat-vegan-holiday-treats-eb79beae8883 | ['Melissa A. Matthews'] | 2020-12-05 20:48:32.536000+00:00 | ['Food', 'Vegan', 'Plant Based', 'Holidays', 'Cooking'] |
Making Data Trees in Python | A tree you say ?
Most likely you are already using a tree or have used one, well at least a simple tree like this one:
We’ll get into the terminology in a minute, but the important thing here is the parent-child relationship, which might make more sense with a couple of examples:
Dad -> Son
Boss -> Employee
Favorite Food -> Chinese takeout
3 -> Bronze Medal
Implementing this simple tree is fairly common in Python with dictionaries and lists:
⚠️ NOTE: The following examples are actually multiple trees (technically a forest); in reality you are more likely to find this structure rather than a single tree, at least at this level of tree complexity. If you remove, say, Jim and Carlos in the following example, you would have a single tree.

# Dictionary:

Families = {'Peter':'Paul', 'Jim':'Tommy', 'Carlos':'Diego'}

for Parent, Son in Families.items():
    print(f"{Parent} is {Son}'s Dad")

OUTPUT:

Peter is Paul's Dad
Jim is Tommy's Dad
Carlos is Diego's Dad
---------------------////------------------

# List:

Prizes = ['Gold','Silver','Bronze','Nothing','Zilch']

for place, prize in enumerate(Prizes):
    print(f"Place number {place+1} gets {prize}")

OUTPUT:

Place number 1 gets Gold
Place number 2 gets Silver
Place number 3 gets Bronze
Place number 4 gets Nothing
Place number 5 gets Zilch
But that looks nothing like a tree, and these are simple linear relationships, you might say. Well, yes, but a tree starts to take shape and make sense once we add more children to the root node:
The key thing here is that these children have only one parent; if they had more, this wouldn't strictly be a tree (it would be some sort of graph). Some examples:
Dad -> Son, Daughter
Boss -> Manager_1, Manager_2, Manager_3
Favorite Foods -> Chinese, Pizza, Tacos
1 -> Gold Medal,$10000,New Car,Sponsorship
Implementing these in Python should be straightforward since they are just expansions of the previous examples:
# Dictionary
# (Same note as before: these are multiple trees, family trees in this example.)

Families = {'Peter':['Paul','Patty'], 'Jim':['Tommy','Timmy','Tammy'], 'Carlos':['Diego']}

for Parent, Children in Families.items():
    print(f"{Parent} has {len(Children)} kid(s):")
    print(f"{', and '.join([str(Child) for Child in [*Children]])}")

OUTPUT:

Peter has 2 kid(s):
Paul, and Patty
Jim has 3 kid(s):
Tommy, and Timmy, and Tammy
Carlos has 1 kid(s):
Diego

# Reduced/alternative way of saying the same thing without the f-string formatting:

for Parent, Children in Families.items():
    print(str(Parent) + ' has ' + str(len(Children)) + ' kid(s):')
    print(*Children)

Note the use of the * operator for unpacking the list.
---------------------\\ | // ------------------

# List:

Prizes = [['Gold Medal','$10000','Sports Car','Brand Sponsorship'],
          ['Silver Medal','$5000','Budget Car'],
          ['Bronze Medal','$2500','Motorcycle'],
          ['Participation Trophy','Swag'],
          ['Swag']]

for place, prizelist in enumerate(Prizes):
    print(f"Place # {place+1} gets the following prize(s)")
    print(f"{', and '.join([str(prize) for prize in [*prizelist]])}")

OUTPUT:

Place # 1 gets the following prize(s)
Gold Medal, and $10000, and Sports Car, and Brand Sponsorship

Place # 2 gets the following prize(s)
Silver Medal, and $5000, and Budget Car

Place # 3 gets the following prize(s)
Bronze Medal, and $2500, and Motorcycle

Place # 4 gets the following prize(s)
Participation Trophy, and Swag

Place # 5 gets the following prize(s)
Swag
A more common tree then has a root element and multiple nodes which themselves have children of their own :
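One simple way to sketch such a multi-level tree in plain Python (the node names here are invented for illustration) is a nested dictionary, where each value holds that node's children and a recursive function walks the structure:

```python
# Each key is a node; each value is a dict of that node's children
# (an empty dict marks a leaf).
company = {
    'CEO': {
        'CTO': {'Dev 1': {}, 'Dev 2': {}},
        'CFO': {'Accountant': {}},
    }
}

def flatten(tree, depth=0):
    # Depth-first walk: collect each node, indented by its level.
    lines = []
    for node, children in tree.items():
        lines.append('  ' * depth + node)
        lines.extend(flatten(children, depth + 1))
    return lines

for line in flatten(company):
    print(line)
```

Running this prints the root first, then each branch indented one level per generation, which mirrors the parent-child structure described above.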
Some tree terminology (Treeminology?):
Part of what makes implementing a tree difficult (in general) is the hidden complexity a tree carries with it. Here are some common terms you might encounter, along with their graphical descriptions:
How “The Magic” Book Changed My Life. | To understand the concept shown in “The Magic,” you have to know the basic concept of how the universe actually works in terms of energies and vibrations.
Let me tell you a little back story before I get to “The Magic.”
I learned about the book “The Secret” by Rhonda Byrne in 2013/14. I was in my early 20s, and around that time, I discovered the word “spirituality.”
But at that age, many life-altering events take place: studying, paying back student debts, struggling to find a job, building a career, and trying to find “the one.” All of that threw the word “spirituality” out of the window.
In that chaos, The curiosity to learn more about spirituality got lost.
In 2016, I hit rock bottom (at least that’s what I thought). I was in a bad phase. I stayed in for weeks and avoided human contact at all costs. Not because I was depressed all the time, but because I didn’t respect myself enough. I felt like a complete loser who had wasted precious time, money, energy, and the opportunity to create a good life in a different country.
But you know, sometimes you just get lucky with friends. I can’t thank you enough for those beautiful souls in my life.
My friend, who is an intense reader, suggested that I read “The Secret.”
Since he drove for 30 minutes to give me the book, I couldn't disappoint him. Out of respect, I kept the book. But it was just lying on the table next to my bed for weeks.
Every few days, he would message me asking, “How do I like the book?”
And I kept saying, “Honestly, I don’t think I want to read it at the moment.”
He patiently waited for weeks after weeks. (Who does that after such disappointment for months?)
Anyway, out of guilt, I started reading the book.
At first, the book sounded like a fiction story to me. Words like imagination, the conscious and unconscious mind, the universe, and vibrations sounded too spiritual and far from logical.
But whatever they were saying started making sense a few pages into the book.
“The secret” opened up a whole new invisible realm to me that I never knew existed. But the problem was it just gives you an insight into how the universe works. Not how to apply it in our daily life.
So again, I ran to the same friend and asked,
“I have an idea of how the universe and vibrations work, but I honestly have no idea how to apply it in my daily life.”
He messaged me a quote from Anton Chekhov,
“Knowledge is of no value unless you put it into practice.”
Again, the sweetheart as he is took “the secret” and handed me “The Magic” book.
But while handing me the book, he put a condition to me.
He said, “ Promise me, you won’t miss a single day of practice for 28 days.”
Not going to lie; I thought he was being dramatic (says a person whose life was full of drama at that time)
Well, I wanted the book, so I agreed.
It was “The Magic” that showed me a step-by-step process for engraving a deep sense of gratitude in my life.
Indeed it was magical.
The book breaks down detailed daily practices for 28 consecutive days with clear explanations of the process, effects, and gratitude results.
All my life, I thought I appreciated the people and situations around me. But I was far from a true sense of gratefulness.
The book taught me what’s really being grateful feels like.
When I committed myself to consistently practicing the exercises for 28 days, the people around me and I noticed a drastic positive change in my attitude.
I became more positive about people and situations. In the worst scenarios, I could keep myself calm and could understand others' perceptions. The positivity was so uplifting that the vibrations started to spread to the people and the environment around me.
Here are the things I observed that brought positive change in the attitude of whoever read the book.
Lift your mood instantly.
Do you notice an intense roller-coaster of emotions in your daily life?
If you look at it scientifically, you find that all human beings experience many emotions because of hormones and brain activity that release different chemicals in the brain.
One moment you are feeling happy or cheerful, and the next minute you feel annoyed or frustrated.
We can often track down what made us feel the way we did, but most of the time we can’t find a concrete explanation.
We can’t change it; after all, we live in society. One way or another, situations and people will get on your nerves.
The Magic’s techniques make you focus on the positive qualities of the person who’s upsetting you, and on the positive things he or she has done for you in the past or present.
By asking these questions, you will feel grateful to have this person in your life, and that starts to shift your annoyance to a sense of gratefulness. This tricks your brain into feeling happy, which automatically lifts your mood.
The fun part is, it only takes a minute or two for you to get out of any negative mood at any time and anywhere.
Boost your confidence
When do you feel most confident?
When you feel that everything is under control, under YOUR control.
The Magic has a practice where you write down a list of the things you desire. Then it asks you to feel the exact same way you would feel if you received them at this moment.
How would you feel? Overwhelmed? Excited? Satisfied?
This is the key. The practice may seem like an illusion, but it’s not; it releases intense vibrations into the universe as if you already possessed the things you have asked for.
And universe only understands vibrations, so it sends the same intense emotions your way.
It echoes what Isaac Newton discovered,
“Every action always has an opposite and equal reaction.”
The opposite reaction means that whatever vibrations you are “sending out” into the universe, you are “receiving” the same vibrations back.
You feel confident when you feel the authority to change your circumstances and achieve new heights in your life by practicing gratitude. It’s like a positive spiral circle that brings in more desired things in life and boost your confidence.
Trust me, any realistic thing (not a flying elephant, please) you have dreamed of can be fueled with gratitude, and you will see it happen right in front of your very eyes.
Clears your vision
At the beginning of the book, they ask you to make a list of things you want to accomplish in life. It has 7 different categories, and you have to make a list of detailed things you desire in life.
Plus, while listing them, you have to be as specific as you can. For example, if you desire a car, you have to be specific about which model, color, engine, and features, even the color of the seat covers, you want for your dream car.
This practice forces you to think of the things you desire in life. And making a list of your goals and future plans helps you be more affirmative about your choices.
I personally was quite confused between being a writer or a manager at a multinational company.
Being a writer would have made me feel satisfied at the end of the day, though the financial growth might not match that of the managerial position. But when The Magic forced me to visualize my next few years, I found my answer: I wanted to make a difference in the world, and as a writer, I could reach that level of fulfillment.
See yourself as a good person.
When your perception of other people and situation changes, you feel more connected with them.
You will start to understand their side of the story with a positive approach, making you more compassionate. And it will help you find the right solution for any situation, which can be a win-win for both parties involved.
When your emotions shift to compassion, it decreases the moments where you feel negative emotions such as guilt, remorse, and anger, or feel irrational and unreasonable. That, in turn, only increases your respect for yourself.
Plus, the magic has a practice where you love each part of your body, your own thoughts, and your present life in general.
This reflection creates a positive impact on how you look at yourself. This teaches you to go easy on yourself.
These self-reflection exercises will make you fall in love with yourself, inside and out.
Success keeps your feet on the ground.
"Whoever has gratitude will be given more, and they will have an abundance. Whoever does not have gratitude, even what he or she has will be taken away." — The Magic
This quote conveys that one can achieve almost anything in life by being grateful for the things he already has.
But at times, we get consumed with success, and at some point we feel superior to others, which can last for a moment, a day, weeks, or years.
It only breeds arrogance that affects our relationship with others.
This book teaches you to recognize others' strengths and appreciate how they have fought their battles.
By doing this, the focus shifts from "your" success to "our" success.
Consideration for other's contributions when things are accomplished helps you keep your feet on the ground.
Train your brain to see good things in any situation.
In this book, 28 magical practices have been specifically designed so that you impregnate your cells and subconscious mind with gratitude.
This allows you to make gratitude a habit and a new way of life.
Each magical practice in the book is a wealth of secret teachings that will expand your knowledge about your past, present, and future in magical ways.
"Gratitude is portable — you take it with you wherever you go." — The Magic
The good thing is, you don’t need to clear your calendar because each of the practices has been specifically created to fit into your daily life, whether it’s a workday, weekends, holidays, or vacations.
I have accomplished most of the things I have desired through the practices shown in the book.
With this write-up, I am trying to play my part in spreading the word about "The Magic" so others can transform their lives as I and many of my close ones have.
Thank you, Thank you, Thank you.

Source: https://medium.com/illumination/how-the-magic-book-changed-my-life-1bb6a1328581. Author: Ruchi Shah. Published: 2020-11-30. Tags: Life Lessons, Self Improvement, Life, Book Recommendations, Books.
Productivity at work

When it comes to work, being organized and having good habits always helps. The more we know our working habits, the better we exploit our virtues and manage our weaknesses. Increasing productivity is not about working harder and harder, but about using every available resource to achieve better results, with no extra effort. The many small decisions we take throughout the day can either add up to a very productive day or turn it into a complete failure.
Now I’ll point out some concepts I think can be very useful to optimize our way of working:
Tasks
Planning
Spending the last minutes of the day planning for the next one could end up being highly productive. We can go home knowing which tasks will be the important ones to accomplish, as well as our goals for the following day. It’s about knowing what to do, what may go wrong, and being prepared for unexpected problems, which can always surprise us whether we like it or not.
Key tasks
Always identify the most important tasks for the day. It's highly recommended to start with one of them, keeping in mind that those tasks will probably represent more than 50% of your productivity.
Routine tasks
Do not start with these ones. The key is to find the right moment for doing them. Maybe when our workflow is low, we are less tied up with things, or our productivity is not at our highest. Sooner or later, we’ll have to deal with them.
Large tasks
Don't be afraid of them. Finishing those ones early will clear up the rest of the day. Divide & Conquer is a great strategy that can help us out a lot. We can accomplish great things by breaking them up into little ones. Solving simpler and smaller pieces first, and then putting everything back together, is a good practice. Although it seems pretty intuitive, most of the time we forget to do this.
Microtasks
It can be very productive to apply the "2 minute" rule. It states that whenever we come across tasks that can be finished in one or two minutes, we should do them right away. If we are not in the middle of something important, getting those sorts of tasks done can speed things up a lot. The only thing we should ask ourselves is whether a task can be wrapped up quickly, and if so, go ahead and do it. If we put them off, they will slow us down later.
Avoid Multitasking
Multitasking generates stress and, most importantly, reduces our concentration and creativity. Let's focus on only one thing at a time and do it right.
Concentration
Digital environment
For those of us who work all day long with our computers, there is a great variety of sites and applications that are just one click away. As exciting as it may seem, it also ends up being a true source of distraction. It’s considered a good practice to think of them as if they were not there, that way we can focus entirely on our tasks until having them finished.
Interruptions
Automated notifications and cell phones are the two main concentration breakers. How many times do we check our phone just to see if anything new came up? How many mouse clicks do we spend on closing notification pop-ups? We should deactivate them and keep our phone away so we can work disconnected and check any of them when we are not in the middle of something.
Intensity
Doing work sprints, i.e. working non-stop for short periods of time, helps to stimulate concentration and to increase intensity. It also lets us rest from time to time and stay relaxed.
Information
We’ve never had that much information at the reach of our fingertips. Maybe too much. That’s why it’s important to know how to ration it out.
Feeds
Choose carefully which ones are interesting, meaningful or simply those ones that are worth reading. Ignore the other ones or spend the rest of the day reading them.
Social networks
A well-known sickness for some of us. They always manage to take a lot of our precious time; therefore, we should avoid them at work or use only the ones that have something to do with it. Don't keep them open in the background.
E-mail
Knowing how to use it wisely is very important for those of us who employ it both as a working and as a communication tool with colleagues and clients. We have to make it wait. To check it obsessively generates too much distraction and gives us the wrong feeling of being more effective, but it’s not true. It’s better to read it periodically and answer only the important ones. It also helps a lot if you keep it organized, e.g. with the use of labels and filters or any other technique. What you shouldn’t do is to get lost inside of it searching for a message.
Rest time
Free time
Always find time for hobbies. Sports, outdoor activities, or some kind of art, can all help to enhance our creativity and mental clarity. That’s exactly why here at Moove-IT we always arrange extra group activities we can enjoy together.
Night’s sleep
A good night’s sleep is imperative to make our day pay off. Between seven and eight hours should keep us far from being a zombie in front of a computer.
Breaks
It’s always good to take a break and stretch our legs a bit. Free our mind from work for a while. Here we simply go to the kitchen and have a cup of coffee with some of our colleagues. There’s always someone willing to have a nice short chat!
Tools
Paper & pencil
A faithful friend by our side. Some of us find it very useful to have both paper and pencil handy and write everything down. Even when something looks extremely trivial, we might forget it, and it'll still be there when we need it. There'll always be time to decide what to do with it later.
Whiteboard and post-its
A great way to keep every important task in sight. We use them to manage our TO-DOs, so that everyone can be aware of everyone else's responsibilities.
Software
We all work with the software we find more comfortable. There are a lot of tools, some tend to be more productive, others have some other kind of advantages. “Different strokes for different folks”.
Projects
Fake limit date
It helps a lot to set a fake, earlier deadline and work hard to finish in time. If we make it, we have time to review everything; if not, we still have a buffer to get it done before the real deadline.
Phases
As I said earlier, breaking things up always helps. We can then base our work on more estimable parts of the project and plan more accurately.
Daily and weekly evaluation
It’s absolutely necessary to know if the project is going in the right direction. Making evaluations helps to keep track of everything and plan following phases even better.
Proactivity
Solving our problems and being successful in what we do is strongly related to being the ones who take the initiative. Take the first step, call for a meeting, be the one who investigates, make a phone call, stay active, and, most importantly, be positive about it. Getting desperate is the first step toward failure.
Retrospective
By keeping record of our achievements, we’ll always be aware of the things we can accomplish and also find motivation in times we need it the most. It’s very important to be self-critical, learn both from our mistakes and our successes.
All these things I’ve mentioned are not meant for you to do by the book, I just think it can turn out to be quite useful to bring some of these good practices into our work routine. I’m not saying it’ll be easy, I also have a hard time trying to do so. Let’s always try to improve and better ourselves. Enjoy the jokes!
References
Productivity tips from thinkwasabi.com
What the West Can Learn From Vietnam's Response to the Coronavirus

Dance to this song. It might be more viral than the virus.
I remember it was during the last week of January when my American family & friends were all at a dinner table discussing what was then a foreign, “China-contained” new disease called Coronavirus. The virus had infected about ~1000 people worldwide, 500 alone in Wuhan, China, and 1–2 cases in the US. I told the group that just days ago, the Vietnamese government ordered all schools to be closed for the foreseeable future, as would be many “non-essential” public gatherings and businesses. People were on high alert. Masks and hand sanitizers were severely out of stock.
“How many cases are current there?” — someone asked.
“8” — I said.
There was an audible gasp. Wow. Paranoid much? 8 cases and the whole country was already under a semi lockdown?
To be honest, I was just as surprised. By then my mom in Vietnam had been sending me daily messages for weeks, warning me of things like “avoid large gatherings”, “stock up on your essentials”, “reconsider daycare for Norah if you can”, all of which I brushed off as overreaction from my typically worried-about-everything Asian Mom.
Yeah I Facebook a lot.
Up north, bordering Vietnam, the epicenter of the new disease, Wuhan (Hubei, China), went under a city-wide lockdown, with 10 other Chinese cities following soon after.
“Good thing we don’t have many Chinese tourists here!” — concluded someone, at the end of said dinner table.
For the next few weeks, we wouldn’t hear much about the virus from the US government (or the general American public), besides a few stories detailing the racism towards people of Chinese/Asian descent.
The underlying assumption: The virus is a foreign Chinese thing. It’s just like a common flu. The panic is worse than the thing itself.
Fast forward just about a month, and what we thought was a "Chinese disease" was declared a global pandemic by the WHO. Soon after, Italy went under a nationwide lockdown and the US declared a National Emergency, with the number of cases worldwide passing 200k (from 100k in just under a week) and over 9k deaths, overwhelming many countries' health infrastructures. As a global economic recession looks more and more inevitable, no country, the US included, can afford to pretend that it's just an exclusive "Asian import" anymore. The trajectory is looking bad for the US and worse for Europe.
Now, as many eyes turn to Asia, the continent no longer considered the epicenter of the outbreak (it is now Europe, with over 60% of total active cases), I would like to spotlight Vietnam as a country at the forefront of this global fight against an invisible enemy. Not Taiwan, Hong Kong, Singapore, or South Korea (or, as I like to put it, Western media's favorite examples of "good" Asian countries). Yes, those countries have been doing a spectacular job containing the virus. But so has Vietnam.
Yes, Vietnam, the country that borders China. Yes, Vietnam, the 15th most populated country, with 97M people. Yes, Vietnam, the communist, totalitarian country in most Western narratives. It is this country that has successfully kept the number of cases at 76 (as of March 19, 2020) and fatalities at zero, over two months after the first cases were reported. And it's not because of underreporting. The West has a lot to learn from this tiny little country south of China, namely: 1. fast, efficient, affordable test kits 2. 14-day mandatory quarantine and 3. transparency via technology and social media.
One: fast, efficient, affordable test kits
Did you know? Vietnam was the first country to develop a fast, efficient, AND affordable test kit, doing in one month what the WHO says should have taken 4 years. The test, developed by a group of Vietnamese researchers from the Institute of Biotechnology under the Vietnam Academy of Science and Technology, costs about $15 and is capable of returning results within 80 minutes, with a specificity of 100% and a sensitivity of 5 copies per reaction.
Here’s an account of one of my friends who gets tested in Vietnam:
“I went to one of the 30 testing centers available in Vietnam. They swabbed my nose and mouth. About 2–3 hours later, they let me know the preliminary results. In my case, it was negative. I then waited for a few more days for the Institute of Epidemiology in Hanoi to confirm the final result, which was also negative. It was quick, efficient, and painless.”
The test kits were so efficient and easy to use, that as of last week, 20 countries and territories in the world are looking to purchase tens of thousands of test kits from Vietnam. Vietnam’s current production capacity is 3,600 kits/day, but the country could make 10,000 kits a day, and triple the capacity if needed, said a representative.
All I see in our news is how the US is facing a testing shortage (how?) and the first few test kits developed by the CDC were faulty (really?) Nothing about Vietnam test kits, of course.
Two: 14-day mandatory quarantine
As the US closes its borders to China and Europe, the country has been facing criticism regarding these "too little, too late" actions. The reason: the disease is already present within US borders, and the virus knows no boundaries. The community spread will only get worse, even with these border closures.
Vietnam, on the other hand, has been mandating 14-day quarantine for all foreigners as well as returning Vietnamese from Covid-19 epicenters, on top of restricting travels from these regions. At first, these applied to people coming from China & South Korea. Of late, the mandate has been extended to people coming back from all of Europe, the UK and the US.
Here's the kicker: the Vietnamese government has provided 100% of Vietnamese citizens and foreigners under quarantine with shelter, food, and medical attention during these 14 days. It has been doing so for the past 2 months.
Don’t believe me? Read this story from of a British citizen who has been held in a Vietnamese government-run quarantine in Son Tay since March 14 after landing in Hanoi on a direct flight from London, England:
"Suddenly it all becomes very human; we're guests in a country doing their best to protect themselves and are extending us that courtesy. Such is the good nature of Vietnam. Outside, everything is peaceful. The location is quiet, the soldiers work tirelessly to sterilize the rooms daily, log our temperature and clear out our bins. They live here to help their country and despite what they might have heard, they're friendly and caring. So far, this feels more like a holiday camp than a quarantine. In our room, we share snacks, fruit, and start getting deliveries from loved ones."
I personally know a lot of friends coming back from South Korea, the UK, and the US, most of whom are study-abroad students whose spring terms were disrupted. One common theme shared among these friends is just how grateful and protected they feel during these 14 days.
Last week, this 12-hour vlog by a Vietnamese traveler who returned from Milan, titled "How 'scary' is the local quarantine area?", went viral. The video described how quiet and "not scary at all" the area is, bringing peace of mind to the thousands of Vietnamese students who will soon be returning to Vietnam as schools and universities in the US cancel classes one by one.
These 14-day measures are not only available for those returning to Vietnam from abroad, but also for people who are already in the country. Vietnam's case number 17, a Vietnamese heiress who returned to Hanoi from Milan, was suspected of having dodged quarantine at the airport. Her entire street was then disinfected, and its residents went under 14-day quarantine, with food and shelter as well as medical staff provided by the government.
Three: transparency via technology and social media
The US has a long way to catch up with Vietnam when it comes to transparency of information regarding the spread of Coronavirus. Here’s a screenshot of an opinion piece I wrote to my local newspaper, the Vail Daily, that did not get accepted for publication last week:
It’s been 14 days since the first case of Covid-19 was announced in Colorado, and we had less information than ever with regard to:
1. how many actual cases there are
2. the whereabouts of these cases (which cities, which neighborhoods)
3. general demographic information about the cases
This government site used to carry case-by-county information just under a week ago. According to the Colorado government, testing will be more available by private companies, and demographic data will be presented here. Yet, when you click on it, the site freezes about 4 out of 5 times.
What you see when clicking on the CO Government site on Covid-19 data.
Let’s take a look at how the Vietnamese government and media have been treating information regarding Covid-19.
Whenever there's a new case, the Ministry of Health (MoH) online portal immediately publicizes the case to all major news outlets and the general public, with details including where the cases are, how they got infected, and what actions are to be taken. Information is spread widely across social media and television channels, and even texted to your phone via a hotline.
The MoH and Ministry of Information & Media's sponsored mobile app, called NCOVI, is extremely friendly and easy to use. The app allows you to: 1. submit health & travel information so you can get yourself tested 2. learn about the "hotspots" within the cities/whole country where new cases are detected 3. get up-to-date information regarding best practices re/ Covid-19 in Vietnam and in the world.
Vietnam’s State-sponsored Coronavirus app, NCOVI
And lastly, y'all have probably seen this (frankly, this is the only thing reported from Vietnam by Western media): the viral Vietnamese hand-washing song. It was popularized by John Oliver's Last Week Tonight and TikTok. But did you know the production company behind the song was the Vietnamese Ministry of Health? They partnered with the original composers and performers of the song to encourage young people to pay more attention to hygiene.
Imagine a government that’s actually good on social media and inspires a Tiktok movement. And no, screaming all caps in Twitter doesn’t count.
Closing note
As someone who lives between two countries (born and raised in Vietnam, but having grown up, worked, and lived in the US), I understand that culture plays a big factor in the difference between "the East" and "the West" in their reactions to the Coronavirus. For one thing, privacy is of much larger concern to Americans (and, I would imagine, Europeans too). It wouldn't fly well in the US or Europe if information about someone, down to where they live or who their family and friends are, got circulated with such speed, as it did with patient 17 in Vietnam. Furthermore (and this is obviously a gross generalization), the East generally values the community (or in this case, public health) while the West generally values the individual (or in this case, privacy and personal freedom). It would make sense, then, that countries such as China, Vietnam, or Singapore have a much easier time imposing "draconian measures" such as early lockdowns or mandatory/voluntary reporting of suspicious cases.
What I don't understand, nor agree with, is the Western portrayal of some countries' responses to the Coronavirus based purely on a caricature of the country or culture. When Wuhan, Hubei, went under lockdown, the Washington Post headline was "China's coronavirus lockdown, brought to you by authoritarianism." When Italy went under lockdown, footage of people still loving life and singing from balconies circulated widely. Double standard much? And of course, no reporting whatsoever on the amazing job the Vietnamese government is doing.
I'm not saying I'm 100% in favor of governments being able to interfere with people's daily lives in all kinds of circumstances. But I truly do believe that, in this case, Western governments have a ways to go to catch up with Eastern governments, especially the Vietnamese government. If only Vietnam had an "approval rating" like the West has, you would probably see a 99% approval rating for Vietnam's Deputy Prime Minister Vu Duc Dam with regard to his stellar campaign to fight Covid-19 in Vietnam thus far, from all Vietnamese, young and old, liberal and conservative alike. Could you say the same thing about the testing and treatment provided by the US administration?

Source: https://medium.com/hackernoon/what-the-west-can-learn-from-vietnams-response-to-the-coronavirus-79dcffddbacf. Authors listed: Blacklivesmatter, Sayhername. Published: 2020-03-23. Tags: Corona, Coronavirus, Covid 19, Vietnam, Hackernoon Top Story.
Message from Mohammed bin Salman visit: It is Pakistan first

Crown Prince Mohammed bin Salman and Narendra Modi in New Delhi. Twitter @MEAIndia
The Saudi Crown Prince’s visit to New Delhi was historic, but perhaps only for the frankness with which Mohammed bin Salman never even pretended he was on the same page as India with respect to Pakistan.
MBS, as the Crown Prince is known, courteously stood by and listened as Prime Minister Narendra Modi made reference to the Valentine’s Day Pulwama attack. But for his part, MBS spoke only in vague terms about India and Saudi Arabia’s shared concerns on terrorism. In choosing not to join the dots, or to allow any explicit link to be made in his presence between Pulwama and Pakistan, the man who will be king of Saudi Arabia when his 83-year-old father passes sent a message. It is instructive and goes as follows: India and Saudi Arabia can certainly have a strategic partnership, as it’s being called. But there must be acceptance that its terms are circumscribed by a previous, long-term relationship. Don’t expect monogamy or fidelity. It’s complicated.
But really, how could it be otherwise? Pakistan, the world’s only Muslim nuclear state, has deep ties with Saudi Arabia going back decades. Islamabad helped fight the 1979 siege of Mecca’s Grand Mosque; stationed military forces in Saudi Arabia during the Iran-Iraq war; collaborated with Riyadh in support of the mujahideen in 1980s Afghanistan; trained Saudi pilots and soldiers; and roughly this time last year, sent 1,000 troops to Saudi Arabia to add to the 1,600 already there. For the past two years, retired Pakistani army chief Raheel Sharif has headed the Islamic Military Coalition against Terrorism, which is run out of Riyadh.
In return for Pakistani military help at strategic moments, the Saudis have provided direct financial aid. In Pakistan, just before visiting India, MBS signed investment deals worth up to $20 billion. In October, Saudi Arabia gave Pakistan a $6 billion loan to keep its ailing economy afloat. As Pakistan’s Prime Minister Imran Khan said in his first speech after winning the July election, Saudi Arabia is “a friend who has always stood by us in difficult times”.
But it was in May 1998 that the Saudis made their most crucial and significant offer to Pakistan. With the promise of 50,000 barrels of free oil a day to offset the effect of expected Western economic sanctions, Saudi Arabia gave Pakistan the nerve to proceed with nuclear tests in response to those of India. With that, Riyadh changed the dynamic in South Asia and enabled Pakistan to assume outsized importance in the Muslim world.
Pakistan’s status as a nuclear power is enormously important to Saudi Arabia. Yoked together by religious faith as well as their own, very real national needs, nuclear-armed Pakistan gives Saudi Arabia heart, nerve and sinew. There is a mutual understanding Pakistan will spring to Saudi Arabia’s defence in the event of any threat to the House of Saud and Muslim holy sites.
According to Yoel Guzansky, formerly in Israel’s national security council and now a researcher at Tel Aviv’s Institute for National Security Studies, Pakistan seems to have granted a “nuclear umbrella” to Saudi Arabia.
That symbiotic relationship will continue at least until one of the parties finds it needless and constraining. Might it be the point at which Saudi Arabia itself acquires nuclear capability? This is a worrying prospect in terms of nuclear non-proliferation, but it no longer seems far-fetched. A new report released by Democratic members of the US House of Representatives shows that some within the Trump administration have been pushing for the export of nuclear weapons technology to Saudi Arabia. While that is controversial and still no more than a nebulous plan to circumvent US policymaking processes on nuclear exports, there is no telling what might happen, if and when it does.
In essence then, it is ‘Saudi First’ policies that direct Riyadh’s reckoning of how to deal with Pakistan and India. Right now, it is a transactional three-way equation. In India, MBS simply reiterated the terms of the deal. It really was a historic visit in that it was a page out of a textbook, simply reprising the way things have been for quite some time.
The author is an international affairs columnist based in London
Originally published at www.firstpost.com on February 22, 2019.

Source: https://rashmee.medium.com/message-from-mohammed-bin-salman-visit-it-is-pakistan-first-26fe77a6a6b7. Author: Rashmee Roshan Lall. Published: 2019-02-23. Tags: Terrorism, Nuclear, India, Pakistan, Saudi Arabia.
Powering big data at Pinterest

Mohammad Shahangian, Pinterest Head of Data Science
Big data plays a big role at Pinterest. With more than 30 billion Pins in the system, we’re building the most comprehensive collection of interests online. One of the challenges associated with building a personalized discovery engine is scaling our data infrastructure to traverse the interest graph to extract context and intent for each Pin.
We currently log 20 terabytes of new data each day, and have around 10 petabytes of data in S3. We use Hadoop to process this data, which enables us to put the most relevant and recent content in front of Pinners through features such as Related Pins, Guided Search, and image processing. It also powers thousands of daily metrics and allows us to put every user-facing change through rigorous experimentation and analysis.
In order to build big data applications quickly, we’ve evolved our single cluster Hadoop infrastructure into a ubiquitous self-serving platform.
Building a self-serve platform for Hadoop
Though Hadoop is a powerful processing and storage system, it's not a plug-and-play technology. Because its original design didn't have cloud or elastic computing, or non-technical users, in mind, it falls short as a self-serve platform. Fortunately, there are many Hadoop libraries/applications and service providers that offer solutions to these limitations. Before choosing from these solutions, we mapped out our Hadoop setup requirements.
1. Isolated multitenancy: MapReduce has many applications with very different software requirements and configurations. Developers should be able to customize their jobs without impacting other users’ jobs.
2. Elasticity: Batch processing often requires burst capacity to support experimental development and backfills. In an ideal setup, you could ramp up to multi-thousand node clusters and scale back down without any interruptions or data loss.
3. Multi-cluster support: While it’s possible to scale a single Hadoop cluster horizontally, we’ve found that a) getting perfect isolation/elasticity can be difficult to achieve and b) business requirements such as privacy, security and cost allocation make it more practical to support multiple clusters.
4. Support for ephemeral clusters: Users should be able to spawn clusters and leave them up for as long as they need. Clusters should spawn in a reasonable amount of time and come with full blown support for all Hadoop jobs without manual configuration.
5. Easy software package deployment: We need to provide developers simple interfaces to several layers of customization from the OS and Hadoop layers to job specific scripts.
6. Shared data store: Regardless of the cluster, it should be possible to access data produced by other clusters
7. Access control layer: Just like any other service oriented system, you need to be able to add and modify access quickly (i.e. not SSH keys). Ideally, you could integrate with an existing identity (e.g. via OAUTH).
Tradeoffs and implementation
Once we had our requirements down, we chose from a wide range of home-brewed, open source and proprietary solutions to meet each requirement.
Decoupling compute and storage: Traditional MapReduce leverages data locality to make processing faster. In practice, we've found network I/O (we use S3) is not much slower than disk I/O. By paying the marginal overhead of network I/O and separating computation from storage, many of our requirements for a self-serve Hadoop platform became much easier to achieve. For example, multi-cluster support was easy because we no longer needed to worry about loading or synchronizing data; instead, any existing or future cluster can make use of the data across a single shared file system. Not having to worry about data meant easier operations, because we could perform a hard reset or abandon a problematic cluster for another cluster without losing any work. It also meant that we could use spot nodes and pay a significantly lower price for compute power without having to worry about losing any persistent data.
Centralized Hive metastore as the source of truth: We chose Hive for most of our Hadoop jobs primarily because the SQL interface is simple and familiar to people across the industry. Over time, we found Hive had the added benefit of using metastore as a data catalog for all Hadoop jobs. Much like other SQL tools, it provides functionality such as “show tables”, “describe table” and “show partitions.” This interface is much cleaner than listing files in a directory to determine what output exists, and is also much faster and consistent because it’s backed by a MySQL database. This is particularly important since we rely on S3, which is slow at listing files, doesn’t support moves and has eventual consistency issues.
We orchestrate all our jobs (whether Hive, Cascading, HadoopStreaming or otherwise) in such a way that they keep the HiveMetastore consistent with what data exists on disk. This makes it possible to update data on disk across multiple clusters and workflows without having to worry about any consumer getting partial data.
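The pattern above can be illustrated with a small sketch. Note that the metastore client, table names, and partition layout here are hypothetical stand-ins, not Pinterest's actual code: the data files are written to a new S3 prefix first, and the partition is registered in the metastore only once the write completes, so a consumer either sees the full partition or none of it.

```python
# Hypothetical sketch: write data to a fresh S3 prefix, then atomically
# register the partition in the metastore so readers never see partial data.

class FakeMetastore:
    """Stand-in for the Hive metastore: maps (table, partition) -> location."""
    def __init__(self):
        self.partitions = {}

    def add_partition(self, table, partition, location):
        # Registering the partition is the atomic "commit" step.
        self.partitions[(table, partition)] = location

    def get_location(self, table, partition):
        return self.partitions.get((table, partition))


def publish_partition(storage, metastore, table, partition, rows):
    """Write rows to a new location, then register it in the metastore."""
    location = f"s3://warehouse/{table}/dt={partition}/"
    storage[location] = list(rows)                       # 1) write the data
    metastore.add_partition(table, partition, location)  # 2) commit
    return location


storage = {}                 # stand-in for S3
metastore = FakeMetastore()
loc = publish_partition(storage, metastore, "pins", "2014-08-01", [1, 2, 3])
print(loc)                   # s3://warehouse/pins/dt=2014-08-01/
print(storage[loc])          # [1, 2, 3]
```

A reader that always resolves locations through the metastore never observes a half-written partition, because the location only becomes visible after the data is in place.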
Multi-layered package/configuration staging: Hadoop applications vary drastically and each application may have a unique set of requirements and dependencies. We needed an approach that’s flexible enough to balance customizability and ease of setup/speed.
We took a three layered approach to managing dependencies and ultimately cut the time it takes to spawn and invoke a job on a thousand node cluster from 45 minutes to as little as five.
1. Baked AMIs:
For dependencies that are large and take a while to install, we preinstall them on the image. Examples of this are Hadoop libraries and an NLP library package we needed for internationalization. We refer to this process as "baking an AMI." Unfortunately, this approach isn't available across many Hadoop service providers.
2. Automated Configuration (Masterless Puppet):
The majority of our customization is managed by Puppet. During the bootstrap stage, our cluster installs and configures Puppet on every node, and within a matter of minutes Puppet brings every node up to date with all of the dependencies we specify in our Puppet configurations.
Puppet had one major limitation for our use case: when we add new nodes to our production systems, they simultaneously contact the Puppet master to pull down new configurations and often overwhelm the master node, causing several failure scenarios. To get around this single point of failure, we made Puppet clients “masterless,” by allowing them to pull their configuration from S3 and set up a service that’s responsible for keeping S3 configurations in sync with the Puppet master.
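The core decision the sync service has to make can be sketched as a pure function. This is an illustrative sketch, not Pinterest's actual implementation: compare checksums of the configs on the Puppet master against what is currently in S3, and upload only the files that are new or changed, so masterless clients always pull a fresh, complete set of configurations.

```python
# Hypothetical sketch of the S3-sync decision logic: given checksums of the
# configs on the Puppet master and the checksums currently in S3, return the
# files that need to be (re)uploaded.

def files_to_sync(master_checksums, s3_checksums):
    """Return config paths that are new or changed on the master."""
    return sorted(
        path
        for path, checksum in master_checksums.items()
        if s3_checksums.get(path) != checksum
    )


master = {"site.pp": "abc", "hadoop.pp": "v2", "kafka.pp": "v1"}
s3     = {"site.pp": "abc", "hadoop.pp": "v1"}

print(files_to_sync(master, s3))  # ['hadoop.pp', 'kafka.pp']
```

Because new nodes now read configurations from S3 rather than contacting the master directly, adding hundreds of nodes at once no longer creates a thundering herd against a single Puppet master.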
3. Runtime Staging (on S3): Most of the customization that happens between MapReduce jobs involves jars, job configurations and custom code. Developers need to be able to modify these dependencies in their development environment and make them available on any one of our Hadoop clusters without affecting other jobs. To balance flexibility, speed and isolation, we created an isolated working directory for each developer on S3. Now, when a job is executed, a working directory is created for each developer and its dependencies are pulled down directly from S3.
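As a small illustration of the staging layout (the path scheme here is hypothetical), each developer's working directory can be derived deterministically from a bucket, username, and job name, and the cluster pulls the listed dependencies down from that prefix at job start:

```python
# Hypothetical sketch of per-developer runtime staging paths on S3.

def staging_prefix(bucket, developer, job_name):
    """Isolated S3 working directory for one developer's job."""
    return f"s3://{bucket}/staging/{developer}/{job_name}/"


def dependency_keys(prefix, files):
    """Keys the cluster pulls down at job start: jars, configs, custom code."""
    return [prefix + f for f in files]


prefix = staging_prefix("hadoop-deps", "alice", "related_pins")
print(dependency_keys(prefix, ["job.jar", "hive-site.xml"]))
# ['s3://hadoop-deps/staging/alice/related_pins/job.jar',
#  's3://hadoop-deps/staging/alice/related_pins/hive-site.xml']
```

Because each developer's prefix is isolated, modifying a jar in one working directory cannot affect anyone else's running jobs.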
Executor abstraction layer
Early on, we used Amazon's Elastic MapReduce to run all of our Hadoop jobs. EMR played well with S3 and Spot Instances, and was generally reliable. As we scaled to a few hundred nodes, EMR became less stable and we started running into limitations of EMR's proprietary versions of Hive. We had already built so many applications on top of EMR that it was hard for us to migrate to a new system. We also didn't know what we wanted to switch to, because some of the nuances of EMR had crept into the actual job logic. In order to experiment with other flavors of Hadoop, we implemented an executor interface and moved all the EMR-specific logic into the EMRExecutor. The interface implements a handful of methods such as "run_raw_hive_query(query_str)" and "run_java_job(class_path)". This gave us the flexibility to experiment with a few flavors of Hadoop and Hadoop service providers, while enabling us to do a gradual migration with minimal downtime.
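Based on the two method names mentioned in the text, such an executor abstraction might look roughly like the sketch below. Only the method names come from the post; the class structure and bodies are illustrative fillers showing how provider-specific logic stays confined to one subclass.

```python
# Illustrative sketch of an executor abstraction layer. Job logic codes
# against HadoopExecutor; all provider-specific details live in subclasses.
from abc import ABC, abstractmethod


class HadoopExecutor(ABC):
    """Provider-agnostic interface for submitting Hadoop work."""

    @abstractmethod
    def run_raw_hive_query(self, query_str):
        ...

    @abstractmethod
    def run_java_job(self, class_path):
        ...


class EMRExecutor(HadoopExecutor):
    """EMR-specific logic is isolated here; swapping providers means
    writing another subclass, not rewriting job logic."""

    def __init__(self):
        self.submitted = []  # records submissions in place of real API calls

    def run_raw_hive_query(self, query_str):
        self.submitted.append(("hive", query_str))   # would call the EMR API
        return "ok"

    def run_java_job(self, class_path):
        self.submitted.append(("java", class_path))  # would call the EMR API
        return "ok"


executor = EMRExecutor()
executor.run_raw_hive_query("SELECT COUNT(*) FROM pins")
executor.run_java_job("com.example.jobs.RelatedPins")
print(executor.submitted)
```

With this shape, migrating to a new provider is a matter of implementing one more subclass and flipping which executor a job receives, which is what enables a gradual migration.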
Deciding on Qubole
We ultimately migrated our Hadoop jobs to Qubole, a rising player in the Hadoop as a Service space. Given that EMR had become unstable at our scale, we had to quickly move to a provider that played well with AWS (specifically, spot instances) and S3. Qubole supported AWS/S3 and was relatively easy to get started on. After vetting Qubole and comparing its performance against alternatives (including managed clusters), we decided to go with Qubole for a few reasons:
1) Horizontally scalable to 1000s of nodes on a single cluster
2) Responsive 24/7 data infrastructure engineering support
3) Tight integration with Hive
4) Google OAUTH ACL and a Hive Web UI for non-technical users
5) API for simplified executor abstraction layer + multi-cluster support
6) Baked AMI customization (available with premium support)
7) Advanced support for spot instances — with support for 100% spot instance clusters
8) S3 eventual consistency protection
9) Graceful cluster scaling and autoscaling
Overall, Qubole has been a huge win for us, and we’ve been very impressed by the Qubole team’s expertise and implementation. Over the last year, Qubole has proven to be stable at Petabyte scale and has given us 30%-60% higher throughput than EMR. It’s also made it extremely easy to onboard non-technical users.
Where we are today
With our current setup, Hadoop is a flexible service that's adopted across the organization with minimal operational overhead. We have over 100 regular MapReduce users running over 2,000 jobs each day through Qubole's web interface, ad-hoc jobs and scheduled workflows.
We have six standing Hadoop clusters comprised of over 3,000 nodes, and developers can choose to spawn their own Hadoop cluster within minutes. We generate over 20 billion log messages and process nearly a petabyte of data with Hadoop each day.
We’re also experimenting with managed Hadoop clusters, including Hadoop 2, but for now, using cloud services such as S3 and Qubole is the right choice for us because they free us up from the operational overhead of Hadoop and allow us to focus our engineering efforts on big data applications.
If you’re interested in working with us on big data, join our team!
Acknowledgements: Thanks to Dmitry Chechik, Pawel Garbacki, Jie Li, Chunyan Wang, Mao Ye and the rest of the Data Infrastructure team for their contributions.
Mohammad Shahangian is a data engineer at Pinterest. | https://medium.com/pinterest-engineering/powering-big-data-at-pinterest-3c4836e2b112 | ['Pinterest Engineering'] | 2017-02-17 22:33:08.305000+00:00 | ['Big Data', 'Hadoop', 'Datascience', 'MySQL', 'Infrastructure'] |
Leveraging AI to fight COVID-19 | Artificial intelligence has a critical role to play in fighting the global threat of COVID-19—the Allen Institute for AI has taken the lead in partnership with several prominent leaders and research groups to produce the COVID-19 Open Research Dataset (CORD-19), a unique resource of over 29,000 scholarly articles, including over 13,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. This freely available dataset is intended to mobilize the global AI community to generate new research insights in support of the ongoing fight against this infectious disease.
A coalition including the White House, the Chan Zuckerberg Initiative, Georgetown University’s Center for Security and Emerging Technology, Microsoft Research, and the National Library of Medicine of the National Institutes of Health came together to provide this service. The corpus behind CORD-19 will be actively updated as new research is published in peer-reviewed publications and archival services like bioRxiv, medRxiv, and others.
Join us in sharing critical research
AI2 and many other scientific authorities are actively encouraging scientists and publishers to make their research content openly available for AI projects working to benefit the common good. If you’re a publisher or research group interested in contributing content to the CORD-19 dataset, please contact [email protected].
Participate in the CORD-19 Challenge
Kaggle is hosting the COVID-19 Open Research Dataset Challenge, a series of important questions designed to inspire the community to use CORD-19 to find new insights about the COVID-19 pandemic including the natural history, transmission, and diagnostics for the virus, management measures at the human-animal interface, lessons from previous epidemiological studies, and more. | https://medium.com/ai2-blog/leveraging-ai-to-fight-covid-19-82840393d678 | ['Semantic Scholar'] | 2020-03-16 21:37:42.235000+00:00 | ['NLP', 'Data', 'Semantic Scholar', 'AI', 'Co Vid 19'] |
VC Corner Q&A: Katie Palencsar of Anthemis | Katie Palencsar is an investor at Anthemis, a fintech-focused venture capital firm. Based in New York, she leads the Female Innovators Lab in partnership with Barclays and develop new, digital-era concepts and ventures with a focus on women entrepreneurs.
Previously, Katie served as an advisor and interim executive for an early stage sports marketing & technology company, where she supported retired professional athletes on their investment opportunities, new business build-outs, and corporate brand partnerships. She was also the founder and CEO of Unbound Concepts, a tech company focused on metadata, search and sell-thru for publishers in the school and library market. Unbound Concepts was then acquired by Certica Solutions.
Her passion lies in bringing new businesses and products to market, as well as increasing access to capital and support for female founders; recently, she advised on state legislation for investment in female and minority-owned businesses that was successfully passed in 2019.
— What is Anthemis’ mission?
I oversee the Female Innovators Lab, a New York City-based venture studio dedicated to cultivating entrepreneurial talent in women from all sides of the financial services ecosystem. The Lab’s mission is to identify female founders at the idea stage of their journey and match them with the resources and mentorship required to develop a company and bring it to its first round of fundraising. The combination of Anthemis’ track record as early stage fintech investors and venture builders, coupled with the power and global footprint of Barclays, makes this an exceptional opportunity for prospective founders to progress their business ideas.
— What’s one thing you’re excited about right now?
The opportunity for typically underrepresented founders. With investors traveling less and Zooming into meetings with founders, it will become less and less important for startups to be physically located in Silicon Valley. This could potentially open up the tight knit ecosystem of Silicon Valley enough to see more capital going to more diverse founders. Anthemis has always been a Valley outsider and a distributed team since our founding — this is reflected in our portfolio, with 19% of the companies having female founders — but for many VCs this mindset shift will be an adjustment.
— Who is one founder we should watch?
A couple of notable founders for me are Eli Polanco, Rhian Horgan and Jess Gartner. These women married gaps in the marketplace with very personal stories connected to why they’re building their businesses and using that as a guide for their entrepreneurial journey. If we’ve learned anything from this year, one thing is certain, truly resilient and nimble businesses are born out of necessity and as entrepreneurs I have followed these women’s journeys and continue to be impressed by how they navigate the tech and investment landscape.
— What are the 3 top qualities of every great leader?
The ability to listen.
The ability to motivate & inspire.
The bravery and commitment to push the boundaries and not accept the status quo.
— What’s one question you ask yourself before investing in a company?
I am often focused on the who, what, where and how of the customer. We also often ask founders about their diversity plans and have passed on investing in companies if they don’t have an adequate answer. Building a diverse organization must be a priority from day one, founders cannot “tackle diversity” later.
— What’s one thing every founder should ask themselves before walking into a meeting with a potential investor?
“Why do I want to raise venture capital?” There are amazing businesses that are not venture backed where founders have made larger returns on smaller exit sizes — even compared to investment backed founders with larger exit sizes. The media often glamorizes venture capital and all that goes along with it, as the only way to build a business.
If your business is in fact a massive market size and requires capital to scale and fast, then the second and related question is — “if I had all the money in the world, would I still be relentless in solving this problem in the industry?”
— What do you think should be in a CEO’s top 3 company priorities?
(1) Company culture which leads to (2) employee motivation which leads to (3) capturing committed customers.
If the majority of your employees are not happy, I can guarantee you that the majority of your customers are not happy.
If you have motivated employees you will build kick ass products that drive kick ass sales and that will drive kick ass opportunities for your employees personally and professionally, and the cycle repeats itself.
— What’s your favorite thing to do when you’re not working?
I actually like to read books that aren’t actual “business books” with learnings you wouldn’t expect that can be applied to business.
Bird by Bird: Some Instructions on Writing and Life by Anne Lammott, has guided so much of my thinking and work. This passage has been my mental mantra through all types of challenges, big and small:
“Thirty years ago my older brother, who was ten years old at the time, was trying to get a report on birds written that he’d had three months to write. It was due the next day. We were out at our family cabin in Bolinas, and he was at the kitchen table close to tears, surrounded by binder paper and pencils and unopened books on birds, immobilized by the hugeness of the task ahead. Then my father sat down beside him, put his arm around my brother’s shoulder, and said, ‘Bird by bird, buddy. Just take it bird by bird.’”
— Who is one leader you admire?
It’s a popular one but I would say Barack Obama. His rise to the presidency from humble beginnings was the first time in my life that truly made me feel that there were opportunities despite where you came from, that our contributions to society matter and these contributions make up a collective. And that’s how real change happens.
I’m also fascinated by Gary V, I love his in your face business real talk, but I am also simultaneously fascinated by how a female would be perceived if she was delivering the same message.
— What’s one interesting thing most people won’t know about you?
My guilty pleasures include investigative journalism, an icy Mexican Coca-Cola and anything by Madonna.
— What’s one piece of advice you’d give every founder?
You will definitely, definitely, definitely go through tough times in the business — look no further than the current landscape we are in due to COVID. Make sure you’re 100% committed to solving the problem your company is tackling and surround yourself with people that you genuinely want to be around day in and day out; this commitment and community will get you through the dark days.
— Anything else you’d like to tell us?
The COVID crisis has put a spotlight on industries where women are at the forefront: healthcare (where 80 percent of health professionals in the US are women) and education (areas like online and distance learning, etc.).
At Anthemis, we see financial services as embedded, augmented and ubiquitous. Rather than finance being discrete, we believe it is becoming an intimate part of the products and services that drive our economies. This means the type of founder we are looking for can come from these adjacent areas like education, healthcare, mobility and others. I am hopeful we will see an uptick in investment/innovation in women-lead companies in those industries as well as others.
Ready to make a pitch? Startups looking for an opportunity to pitch Anthemis can apply here! | https://medium.com/startup-grind/vc-corner-q-a-katie-palencsar-of-anthemis-f3c8203755 | ['The Startup Grind Team'] | 2020-08-19 17:59:11.230000+00:00 | ['Startup Lessons', 'Vc Corner', 'Startup', 'Venture Capital', 'VC'] |
Simmer your data science recipe with Python (Part -2) | Python, an open-source high-level programming language. The fact that you don’t know is Python was named after the comedy television show Monty Python’s Flying Circus. It was not named after the Python, the snake species. A great programming toolbox for professionals of backend web development, data analysis, artificial intelligence, and scientific computing. It has been proved not less than a blessing for beginners as it comprises of a lot of libraries inbuilt for coding with ease. Also, is a high-level and scripting programming language which has readable and easily maintainable. It is the fastest-growing programming language in the world because of the explosion of artificial intelligence (AI) productivity and data science.
Around 33% of data scientists use Python to work with their data.
Why learn Python?
A data science practitioner combines statistical and machine learning techniques with Python programming to analyze and interpret complex data. Python is not just a highly functional programming language; it can do almost everything other languages can do, at comparable speed. It is used for data analysis and for creating GUIs and websites. Python is simple enough for things to happen quicker than they seem, and powerful enough to allow the implementation of the most complex ideas.
Why choose Python?
Besides avoiding many of the pitfalls of advanced programming that the R language has, Python is available on various operating systems such as Mac, Windows, Linux, and Unix. Python supports exception handling, which helps you make your code less error-prone. It is also used for scripting (a small piece of code that automates a task in a specific environment, such as sending automated response emails, FTP transfers, etc.). Python can be used for GUIs (graphical user interfaces): the most commonly used toolkit, Tkinter, offers the fastest and easiest way to create GUI applications, and frameworks such as PyQt, Kivy, and PyGUI are popular among coders as well. Game development and web development are also possible on the same platform; web development frameworks like Django and Flask have made a sensation in recent years.
Where to use which Python libraries?
NumPy provides a high-performance multidimensional array and basic tools to compute with and manipulate these arrays; it supplies fundamental scientific computing and uses less memory to store data. Matplotlib is a plotting library for Python and its numerical mathematics extension NumPy, used for plotting and visualization. Pandas is applied for data manipulation and analysis. Scikit-learn is a library designed for machine learning and data mining; it provides many supervised and unsupervised machine learning algorithms. StatsModels is packed with statistical modeling, testing, and analysis; the library covers the statistical side of the data. SciPy is a collection of mathematical algorithms and convenience functions built on the NumPy extension of Python; it helps us perform mathematical and scientific operations and is used extensively in data science. Plotly is a web-based toolbox for constructing visualizations and is popular for plotting and visualizing various types of data.
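A tiny example of the NumPy workflow described above: a multidimensional array plus vectorized computation and broadcasting, with no explicit Python loops. (The data here is made up purely for illustration.)

```python
# Minimal NumPy illustration: per-column statistics on a 2-D array,
# computed with vectorized operations instead of Python loops.
import numpy as np

data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

col_means = data.mean(axis=0)   # mean of each column
centered = data - col_means     # broadcasting subtracts the means row-wise

print(col_means)                # [2.5 3.5 4.5]
print(centered.sum())           # 0.0 (centering removes the column means)
```

The same few lines scale from this toy array to arrays with millions of rows, which is why NumPy underpins most of the other libraries listed above.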
Conclusion
Pros:
Python is commonly used as a high-level interpreted language. Many developers use Python to build productivity tools and games, as it is easy to use, powerful, and versatile, making it a great choice for beginners and experts alike. The huge variety of statistical packages available in Python is widely used among data science practitioners; Seaborn and Theano are libraries that are popular with data analysts nowadays.
Cons:
Python programs are generally expected to run slower than Java programs. Because of run-time typing, Python's runtime has to work harder than Java's.
Top Learning sites of Python | https://medium.com/codingurukul/simmer-you-data-science-recipe-with-python-part-2-977af2d1414f | ['Shubhangi Gupta'] | 2019-09-10 05:00:08.187000+00:00 | ['Python', 'Data Science', 'Data Visualization', 'Data Analysis'] |