Classictic.com
Getting to know the users
The first thing I did was to get to know their customers better.
We are talking about people who want to go to a classical music concert, and this portal offers tickets worldwide, in 8 languages: English, German, French, Italian, Spanish, Chinese, Japanese and Russian. Why so many, and why those? Because together they cover most of the world's tourists.
Basically, the website has three types of users, in this order:
people who are already in a foreign country
people who are planning a trip
people who just want to go to a concert in their hometown
Interestingly, most classical music concert halls can't provide a website with translated content, assuming they actually have one!
So, I dove into the users' stats and knowledge base, and interviewed the customer service and marketing teams to narrow their users down to 6 personas.
An example of Classictic’s personas.
Each persona came with:
A name, age, and defined lifestyle and work style (symbolized by an image)
A catchphrase that distinguishes the persona from others
Key attributes that affect use and expectations of the product, service, or website (payment preferences)
Frequently performed tasks
Tools and resources used
Pain points relevant to their goals
UX design
Apart from making the new website responsive and mobile first, which meant thinking about 4 given dimensions at the same time, I came up with all the UX patterns I could muster. Everything comes with the little big details, right?!
Main principles
The idea was to stay within the big conventions of the web and e-commerce sites so as not to disturb our customers' experience, since they were considered to have little experience surfing the web.
Therefore, the portal is divided into 3 big parts, consistent across the user's path:
the header with a navigation divided in two:
first, the most used ways to browse the offer, with high contrast on the search button
the main tools used through the entire website (login, help, etc.)
the main block has the content, divided into 2 columns, except for the home page
the footer, which hosts a sitemap for the more organizational parts of the portal
In order to keep the grasp and reading of the website smooth and easy, I kept most features simple and consistent:
only two different fonts are used with a vertical rhythm implemented
in most cases, 3 complementary colors are used throughout the site with some variations of light
3 levels for forms (especially used in the checkout process)
4 different types of panels with background color and depth used to symbolize their importance
4 levels of buttons, the first of which is dedicated to the main action of the page like “Buy tickets”
animations to guide the user, with a duration that is long enough to be understood
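To make the vertical rhythm concrete: here is a small hypothetical helper (not Classictic's actual code) that snaps every line height to a multiple of a baseline unit, so text set in either of the two fonts stays on the same vertical grid.

```javascript
// Snap a font size's line height to the nearest multiple of a baseline
// unit, so every text block lands on the same vertical grid.
function rhythmLineHeight(fontSizePx, baselinePx = 8, minRatio = 1.2) {
  const minHeight = fontSizePx * minRatio;         // smallest readable line height
  const units = Math.ceil(minHeight / baselinePx); // round up to the grid
  return units * baselinePx;                       // line height in pixels
}

console.log(rhythmLineHeight(16)); // 16 * 1.2 = 19.2 → 3 units of 8px → 24
console.log(rhythmLineHeight(32)); // 32 * 1.2 = 38.4 → 5 units of 8px → 40
```

The baseline unit and ratio here are invented values for illustration; the point is only that every line height becomes a multiple of one shared unit.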
Expandable boxes were used to avoid cluttered pages and favored over tabs to let the users choose if they want to have all content available at the same time.
Interesting problems faced and their solutions
Smart scroll
As I said earlier, expandable boxes were favored over tabs as a main pattern of the website. Trouble is, what if a box opens below the fold of the page? You can’t really tell something happened.
That’s why every opening of an expandable box of the site is testing the possibility and will scroll smoothly to show you the best option available: always at least the beginning of the box and its end at the bottom at the viewport if it’s not taller than it.
Of course, this motion will be discarded if the user scrolls manually.
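The scroll decision itself can be reduced to a pure function. This is an illustrative sketch of the rule described above, with invented names and document-space pixel coordinates; the real page would feed the returned value into a smooth-scroll animation and cancel it on any manual wheel or touch event.

```javascript
// Decide where to scroll when an expandable box opens: always show the
// top of the box, and align its end with the bottom of the viewport
// when the box fits inside it. Returns the new scrollTop, or the
// current one if the box is already fully visible.
function smartScrollTarget(boxTop, boxHeight, scrollTop, viewportHeight) {
  const boxBottom = boxTop + boxHeight;
  const viewBottom = scrollTop + viewportHeight;
  const fullyVisible = boxTop >= scrollTop && boxBottom <= viewBottom;
  if (fullyVisible) return scrollTop;              // nothing to do
  if (boxHeight <= viewportHeight) {
    // Box fits: put its end at the bottom of the viewport, which also
    // keeps its beginning on screen.
    return Math.max(boxBottom - viewportHeight, 0);
  }
  return boxTop;                                   // too tall: show the beginning
}
```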
Date of the event
Users were often mistaken about the time and place of their purchases. So we added the information at strategic places on the website:
on the event page:
in the main information box right after the title of the event
in the ticket box where they choose their seats
on the cart page:
in the first step where they can review their cart
in the third step as a recap before they choose their payment method
The change category panel
We also added the possibility to book different rates (full rate, children's discount fare, etc.) of an event at the same time. The problem: for back-end reasons, I was asked to find a solution so that a customer who wants to change the category on the cart page is redirected to the event page to make the change, while the previous choice is removed from the cart.
The solution involved a panel, triggered by the select box change, that appears on top of the page.
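As a sketch of that flow (hypothetical names, not the production code): the select box change first asks the user for confirmation, and confirming removes the previous choice and redirects to the event page.

```javascript
// Handle a category change in the cart. Without confirmation we only
// ask for it (the panel); with confirmation we drop the old ticket
// line and hand back the event to redirect to.
function onCategoryChange(cart, itemId, confirmed) {
  if (!confirmed) {
    return { cart, action: "show-panel" };     // ask the user first
  }
  const removed = cart.find((item) => item.id === itemId);
  const remaining = cart.filter((item) => item.id !== itemId);
  return { cart: remaining, action: "redirect", eventId: removed.eventId };
}
```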
One page checkout process
To reduce lost transactions, the checkout process was redesigned as a single page. Each step is displayed from the start, but only the first one is activated and open.
The user has to complete each step to open the next one, but can reopen previous steps to review his/her inputs.
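A minimal model of that step logic might look like this (an illustrative sketch, not Classictic's actual implementation):

```javascript
// One-page checkout: every step is visible from the start, only the
// first is open, completing a step unlocks the next, and any
// previously completed step can be reopened for review.
class Checkout {
  constructor(steps) {
    this.steps = steps;
    this.completed = new Set();
    this.open = steps[0];
  }
  complete(step) {
    if (step !== this.open) throw new Error("complete the open step first");
    this.completed.add(step);
    const next = this.steps[this.steps.indexOf(step) + 1];
    if (next) this.open = next;
  }
  reopen(step) {
    if (!this.completed.has(step)) throw new Error("step not completed yet");
    this.open = step;
  }
}
```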
Inputs in form
To help users understand the inputs while keeping a clean look, labels have been placed inside their corresponding inputs as default values. On focus, the label moves to a small panel on top of the input, so it is always visible, even on a smartphone.
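The positioning rule boils down to a tiny function. This is a hypothetical sketch of the decision only, with the rendering left out: the label sits inside the empty input and jumps above it as soon as the field is focused or holds a value, so it never disappears while typing.

```javascript
// Where should the label of a form field sit right now?
// "inside": shown in the empty, unfocused input (like a placeholder).
// "above":  shown in a small panel above the input, always visible.
function labelPosition(isFocused, value) {
  return isFocused || value.length > 0 ? "above" : "inside";
}
```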
Some inputs, like the country selection, were enhanced with the technique described by Christian Holst: instead of a very long select box, we use an input field with smart auto-complete when JavaScript is available.
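A toy version of such an auto-complete filter (not Holst's or Classictic's actual code) could rank prefix matches first and fall back to substring matches, with the plain select box kept as the no-JavaScript fallback:

```javascript
// Suggest countries for a partial query: case-insensitive prefix
// matches come first, then substring matches, so "states" still finds
// "United States". An empty query suggests nothing.
function suggestCountries(countries, query) {
  const q = query.trim().toLowerCase();
  if (q === "") return [];
  const starts = countries.filter((c) => c.toLowerCase().startsWith(q));
  const contains = countries.filter(
    (c) => !c.toLowerCase().startsWith(q) && c.toLowerCase().includes(q)
  );
  return [...starts, ...contains];
}
```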
Language choice
The former website used images of countries' flags next to the language options, but to avoid giving offence, they have been removed, since the same language can be used in various countries.
Visual design
The main principles guiding the redesign were:
no changes to the logo, except the main color, which was redefined as a lighter gold
classy and classical, but also warm and not intimidating
different from the competitors
most customers are women
I chose to follow two main inspirations: the look and feel of paper sheets, as something everybody will recognize and understand, and the Material Design released by Google, for a more modern approach.
The result is what I consider to be a clean and modern design, colorful and dynamic but not confusing for the target audience with some elegant 3D effects brought by discreet shadows on blocks.
Sketch for the upcoming portal of classictic.com
Live style guide
Classictic’s style guide
I also delivered a live style guide to the Classictic team so they could maintain and expand the new portal. After some research, I chose the Hologram plug-in to compile the style guide. Despite its relative novelty, it was stable enough and very easy to fit into my Grunt workflow, all the more so because Hologram compiles comments written directly in the source files using Markdown syntax. So it's easy to read for the developers in the files AND in the resulting style guide.
| https://medium.com/ccoutzoukis/classictic-com-4c2dd94a7405 | [] | 2017-06-28 21:22:31.419000+00:00 | ['Design', 'Ux Por', 'Portfolio', 'Front End Development', 'UX'] |
Things I Learned After My First Year On Medium
It’s possible to use Medium earnings as a source of (side) income.
This is for those who are skeptical about earning money online. I’ve made significant money through this article that’s been read over 20,000 times. January was the month I earned the most.
I put all of my Medium earnings into my savings account and reinvest them into mutual funds that will keep earning.
I also invested in building my first ever website which is intended for gaining writing clients.
Focus on writing, not on views, reads, claps, and earnings.
I hope you are on the Medium Partner Program because you love writing and seeing your work online gives you a sense of fulfillment.
If you’re here for the money, there’s a big chance that it’s not going to work. You’ll end up disappointed with earning something like this.
Screenshot by the author
Write and publish as often as you can.
In this interview with Medium top writer Sinem Günel, Sinem shares how she stays consistent with writing daily.
Others let drafts sit for 2–3 days before submitting them to a publication. Others prefer not to submit to publications all the time.
There are different methods and ways to get a good writing rhythm but the main goal is to publish articles — good ones, of course.
Interact with your fellow Medium writers.
People in Medium support groups on Facebook such as Womxn On Medium want to learn and succeed as much as you do. Most of them are supportive and positive. Ask questions, respond to polls, and share ideas and energy.
Be a Medium reader.
Read other Medium writers’ work, highlight the best lines, clap and leave a comment on every article you read. This is the best way to support a writer. I wrote about this here.
Learn how to restock your energies.
Writing can be exhausting, especially when done daily. A few ways to address this, according to Julia Cameron, author of The Artist’s Way, are writing Morning Pages, going on solo nature dates, and filling the well by doing something physically repetitive. She says:
Any piece of work draws heavily on our artistic well. Overtapping the well, like overfishing the pond, leaves us with diminished resources.
I wrote about my experience in reading this amazing book here.
| https://medium.com/the-innovation/things-i-learned-after-my-first-year-on-medium-8f105105f1f9 | [] | 2020-12-05 01:43:16.774000+00:00 | ['Writers Life', 'Writing', 'Writers On Writing', 'Writer', 'Médium'] |
How To Respond When That Moment Has Gone
The Encyclopedia
This phrase didn’t make Denis Diderot famous. It was his ground-breaking work on an Encyclopedia that would make him a household name. His revolutionary idea was to ask his peers to contribute entries, which he would edit, unlike every other encyclopedia before his, all of which contained knowledge from a single author.
Some of the entries, like the expanded idea of allowing contributions from multiple authors, were deemed revolutionary. The French authorities were outraged at some of the suggestions posed in the book. One of them, that governments should be concerned with the welfare of their citizens, drew particular attention. Mired in controversy, the project was suspended by the courts in 1752. Just as the second volume was completed, accusations arose regarding seditious content concerning the editor’s entries on religion and natural law.
Denis was a pessimist. He believed human beings lacked free will and that our characters and behaviors were completely determined by our genetic inheritance. We cannot change who we are. Nature over nurture. A man associated with such a positive project, the Encyclopedia, that would bring enlightenment to so many, didn’t believe people could be enlightened. In effect, he was a progressive thinker who didn’t believe in the possibility of progress!
“Diderot wanted the Encyclopédie to give all the knowledge of the world to the people of France. However, the Encyclopédie threatened the governing social classes of France (aristocracy) because it took for granted the justice of religious tolerance, freedom of thought, and the value of science and industry. It asserted the doctrine that the main concern of the nation’s government ought to be the nation’s common people. It was believed that the Encyclopédie was the work of an organized band of conspirators against society, and that the dangerous ideas they held were made truly formidable by their open publication. In 1759, the Encyclopédie was formally suppressed.” — Wikipedia
The Encyclopedia was banned by the Church in 1758. One year later, the government followed suit. Many of his contributors were thrown in jail.
It would be twelve years from finishing the final chapters of the manuscript to full publication. By then, the publishing house had got cold feet. Not wanting to court controversy, the editors heavily censored Denis’s words, removing anything that expressed an opinion against the church or the state.
Pessimist Denis would later express his concerns to his friends that the twenty-five years he had spent on the project had been wasted. Yet the Encyclopedia was considered one of the forerunners of the French Revolution, such was its impact.
| https://medium.com/lessons-from-history/how-to-respond-when-that-moment-has-gone-93a12f438330 | ['Reuben Salsa'] | 2020-10-25 17:08:29.130000+00:00 | ['Ideas', 'Writing', 'History', 'Salsa', 'Philosophy'] |
Machine Learning and Artificial Intelligence in StackAdapt
The rapid growth of technology has connected and empowered consumers more than ever before. As consumers, our behaviour has evolved — we have multiple devices to consume content and we want to know everything instantly, which makes us more knowledgeable, curious, demanding and impatient. Marketers are embracing this evolution, and are constantly looking for ways to make instant changes to their digital strategy in response to these shifts. This is where machine learning and artificial intelligence, AI for short, comes in.
Machine learning and AI helps marketers streamline their advertising efforts by making decisions at scale and rapidly responding to pivots in how people consume information. Before we dive into how StackAdapt uses these capabilities to help advertisers reach their campaign goals, let’s quickly review what machine learning and AI are in the world of digital advertising and how they work.
A Breakdown of Machine Learning and AI
To understand machine learning, we need to start with artificial intelligence. AI is the ability of machines to digest large amounts of information and make (potentially a ton of) decisions in a short amount of time, a task that would overwhelm a person doing it manually. The growth of AI in advertising is powered by data signals generated by when and how users interact with their devices, as well as the type of online content being consumed. This ability can be used to help advertising platforms deliver more relevant marketing messages to users, whenever and wherever they are.
Machine learning, on the other hand, is a method by which AI can be achieved. It involves algorithms that ingest vast amounts of data, identifying and categorizing it before computing useful and informative analysis. Some advertising platforms use machine learning as an added feature to help identify patterns in massive volumes of real-time data. This enables them to predict the outcomes of campaigns and determine what steps need to be taken to improve campaign efficiency. For example, machine learning helps marketers pinpoint where users are in the purchasing funnel, based on the actions they have completed most recently. It can also be used to find similar users, known as lookalike audiences, who have never been exposed to the brand before, and introduce them into the funnel.
When used together, machine learning and AI are incredibly powerful because of the speed and scale of data processed. Understanding if AI is used effectively by your DSP is critical to keeping up with an ever-changing advertising landscape and ensuring your campaigns run as efficiently as possible.
Machine Learning and AI in StackAdapt
Machine learning and AI are the foundation on which the StackAdapt demand-side platform (DSP) is built. They are the basis of how every single campaign is executed on the platform. While human insights are needed in the initial set-up of a campaign, human interventions can be made throughout the campaign flight to account for any external factors the AI may need to consider, such as a change in business goals. The StackAdapt DSP processes over 2 billion auctions each day and nearly 6 billion decisions per second, which is why the heavy lifting is left to AI. Here are the three main aspects of how the StackAdapt platform makes AI happen:
Collecting data: Data is gathered from the moment a request is received from our supply partners to the real-time bidding we provide and how users interact with the ads being viewed.
Learning from the data: Machine learning is applied at the time of a bid request to analyze and identify patterns that can be used in real-time. The patterns identified feed features across the platform, such as bidding and pacing, fraud detection, targeting parameters, and much more. For example, let’s say we have set up a campaign with “pace evenly” enabled. In this scenario, the algorithm will calculate the budget and the number of days within the flight dates to ensure the campaign not only paces well throughout its duration, but also accounts for the fact that not all times within a day are the same. With that being true, you can rest assured your budget is spent efficiently and not wasted during downtimes in the day. The more data available for the AI to ingest, the better it can learn, identify patterns and predict possible outcomes.
Applying the learnings: The StackAdapt AI applies machine learning to every single campaign on the platform. Some examples include its ability to proactively predict the likelihood of fraudulent inventory, determine a user’s interest level before bidding or to optimize a campaign towards any KPI set. It is important to note that when any optimizations are made to a campaign, it could take anywhere from 2 to 3 days for the machine to re-learn and identify the most efficient method to reach your KPIs with the adjustment implemented.
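As an illustration of what even pacing with intraday weighting might involve, here is a simplified sketch (invented names and numbers, not StackAdapt's real algorithm): the campaign budget is first split evenly across flight days, then split across each day's hours in proportion to expected traffic, so quiet hours receive a smaller share of spend.

```javascript
// Spread a campaign budget evenly across its flight days, then across
// each day's hours by traffic weight. Returns the per-hour budgets for
// one day; their sum equals the daily budget.
function hourlyBudgets(totalBudget, flightDays, hourWeights) {
  const daily = totalBudget / flightDays;
  const weightSum = hourWeights.reduce((a, b) => a + b, 0);
  return hourWeights.map((w) => (daily * w) / weightSum);
}
```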
Practice Makes Perfect
Algorithms need to be constantly trained in order for them to adapt to real-time changes in traffic and inventory sources. With all the features and capabilities offered on the platform, StackAdapt’s algorithms are constantly learning when it comes to making decisions.
Marketers who are aware of how AI is applied in their marketing efforts are one step closer to finding the audience that has an affinity for their brand. With attention spans decreasing and ads getting lost in all the content noise, machine learning and AI can work in your favour to increase precision in reaching your audience at the right time, in the right place and with the right message. Having this workhorse on your team ensures you are effectively spending your media dollars and getting the most rewarding result for your investment. Most importantly, it will enable you to better strategize and plan for future campaigns.
| https://medium.com/stackadapt/machine-learning-and-artificial-intelligence-in-stackadapt-b7de153aa8cb | ['Christiana Marouchos'] | 2020-09-09 14:11:24.940000+00:00 | ['Machine Learning', 'Programmatic Advertising', 'Artificial Intelligence', 'Stackadapt', 'Programmatic'] |
5 real reasons your work husband might be mad at you.
Photo by engin akyurt on Unsplash
Your polyamorous inferences are not helping.
Hannah Rothblatt gave us a pretty good list of reasons that a “work husband” might be angry about a relationship with a “work wife.”
Her article is helpful only in this sense: when a critical collaborator at work resorts to passive-aggressive manipulations, you’re going to take a productivity hit until you figure out why and fix it.
However, I reject familial metaphors for work relationships. The inferences that “work spouse” cliches invite are distasteful to me, cheapen the institution of marriage, and titillate the imagination in ways that have little to do with improving worker productivity. Although Rothblatt stops short of recommending that women “offer sex” to their male co-workers, her use of the marriage metaphor is nonetheless salacious — at least in the sense that she suggests that women should “not be afraid to use their feminine wiles” in the workplace.
Rather than tackling real issues of sexism and intersexual relationships in professional relationships, Rothblatt suggests that women leverage their sexuality to reduce their own workloads — by teasing their male co-workers without ever sleeping with them.
Although Rothblatt’s bio says she’s a comedy writer, I don’t share her sense of humor.
For those who are interested, here’s some real (unfunny) reasons men might be angry at their female coworkers:
1. Your work husband has been accused of contributing to a work environment hostile to women.
Not accused by you, of course. More likely he has been accused by a third party. It could be a jealous work mistress, or it could be your home husband (perhaps the father of your children) who simply doesn’t understand the metaphor of work marriages and all the inferences that should not be made about them. Because your work husband is prohibited by company policy from discussing the ongoing investigation into your flirty text messages, he seems distant to you (which he is). Also, he is trying to figure out whether his career is finished and if he’ll still be allowed to see his kids after his wife divorces him.
2. Your work husband was told “Congratulations, Jeff. You’re the last white man we’re allowed to hire!…
… and by the way, your opportunities for advancement within the company are nil, because we don’t have room for any more white men three levels above you.” It leaves your work husband in an awkward position, doesn’t it? I mean, you could say, “Well, la-dee-da… so some men get a little bit of the glass ceiling treatment that women have had to deal with for decades!” and you’d be right. But that’s a general point, and your work husband is struggling with an issue specific to him. Whether you think it’s fair or not doesn’t change his feelings about it, because he probably didn’t sign up to martyr his career prospects on the altar of correcting past injustices.
3. You shared salary info, and your work husband figured out that you make 15% more than he does, despite having 8 fewer years of experience — because you’re a woman, and women are in demand in your industry.
What makes it a little harder for your work husband right now is that he remembers he helped get you the job. Although he wasn’t involved in your salary negotiations, he sent you the opening, wrote you a letter of recommendation, advocated for your hire within the company, and coached you on the interview. He’s glad you’re raising the salary expectations at the same level of responsibility at which he works… he just can’t figure out why his salary hasn’t been adjusted upwards in light of the new market information.
Oh, wait… he did figure it out, after all.
4. Your work husband was passed over for a prestigious assignment, a raise, and a promotion that went to you instead… because you are not a white man, and he is.
He’s pleased to see your career advance so much faster than his, because in a way it’s a validation of many of the things that the two of you have been collaborating on, and the good ideas and projects that you’ve brought to fruition. Besides, there’s some compersion he can’t help but feel towards someone whom he recommended for your position in the first place, and it boosts his credibility within the company as a good judge of talent. But… he’s still mad that he’s got no opportunities for advancement, and he knows you’re about to dump him. Now that you’re one level above him, you’ll need a new work husband at a higher position.
5. Sex.
Your work relationship has never been the same since he declined your amorous advances, and he has distanced himself because he doesn’t want you to think that he’s leading you on.
| https://medium.com/storygarden/work-husband-problems-ec66c4f04db3 | ['Thomas P Seager'] | 2020-12-13 17:13:24.460000+00:00 | ['Startup', 'Workplace', 'Sexism', 'Red Pill', 'Feminism'] |
Five Reasons Hillbilly Elegy Became A Runaway Bestseller
…and is now becoming a high-profile Hollywood feature film.
Photo by Wes Hicks on Unsplash
One measure of how widely a book is being read is to note how many folks have rated it on Goodreads.com or have left reviews. By this measure, it’s quite apparent that J.D. Vance’s Hillbilly Elegy has been widely consumed: more than 225,000 people have given it a rating, and more than 23,000 have left a written record of what the book was about or meant to them.
Public domain.
According to the inside cover flap, “Hillbilly Elegy is a passionate and personal analysis of a culture in crisis-that of poor, white Americans. The disintegration of this group, a process that has been slowly occurring now for over forty years, has been reported with growing frequency and alarm, but has never before been written about as searingly from the inside. In Hillbilly Elegy, J.D. Vance tells the true story of what a social, regional, and class decline feels like when you were born with it hanging around your neck.”
The drumbeat of praise for the book includes this endorsement from The Economist: “You will not read a more important book this year.” To which I would say, “Poppycock.”
When the book was published, J.D. Vance was a 31-year-old who grew up in a supremely dysfunctional home in Southern Ohio. I picked it up to read because the narrator’s life path in some ways is an echo of my father’s story, whose roots are in Eastern Kentucky, who grew up in Hamilton, Ohio (a few miles from Vance’s beleaguered Middletown) and escaped by way of the military and college on the G.I. bill. (My father was army, and went to college at Hiram in Ohio; Vance joined the marines and found himself at Yale.) My father’s trek along that path diverged at a few key points, however, foremost of these being that his story took place a half century earlier and that though his parents, my Grandpa and Grandma Newman, were dirt poor, they remained married to one another for fifty years.
As I near the end of Vance’s memoir I can’t say I agree that it’s a “must read.” It’s been an interesting read for me personally because I do know the social terrain from which this story emerges.
Here are five reasons I believe the book caught on and has been widely circulated.
Curiosity
Anton Chekhov wrote a short story once about a couple of people who stopped to watch something — it may have been the behavior of a couple of birds on a rooftop — and a crowd begins to form in the street to see what they are staring at. When the birds fly away and the two people leave, there is a crowd still standing there, looking up and wondering what everyone else is looking at. Can this be one reason people read books that are on bestseller lists? I don’t doubt it.
I think, too, that there is a curiosity about the lives of people who are very different from our own. This is why shows like Lifestyles of the Rich and Famous gain a following. Or stories about Mafia families.
Authority
He writes with authority because it’s his story. He essentially lays down in lines the experiences of his life.
Transparency
There have always been kiss-and-tell books on bestseller lists, but social media has elevated voyeuristic reading to a new level. This book operates on the assumption that by zeroing in on one messed up story we can draw conclusions about all kinds of people in this particular social set, the transplanted “hillbillies” of Eastern Kentucky and Tennessee who migrated a generation or two earlier to the Rust Belt, a region economically challenged now with few ways out for many.
Apparent Authenticity
The story rings true because he just tells it like it was. He is exceedingly candid, possibly urged on by the publisher, who sensed that there are profits to be mined from stories like this. I think here of Thomas Wolfe’s You Can’t Go Home Again, which was a sequel to Look Homeward, Angel. Though Wolfe recast his rural life experiences as a fictional narrative, there were too many places where real-life people had been reflected unfavorably, and recognizably, in the story. I can’t help but wonder how Vance’s friends, family and neighbors reacted to this mucky tale.
American Tragedy
The gap between haves and have-nots has grown significantly over the past fifty years. Vance presents his story as a microcosm of the broader issues facing this population demographic.
| https://ennyman.medium.com/five-reasons-hillbilly-elegy-became-a-runaway-bestseller-c55635dc714c | ['Ed Newman'] | 2019-05-01 12:59:04.001000+00:00 | ['Book Review', 'Books', 'Appalachia', 'American Culture', 'Poverty'] |
The road ahead: beyond self-driving
Autonomous vehicles will trigger new services and changes that transform the way we work, travel and live
Many autonomous vehicle discussions don’t go far enough in describing the impact on behavior over time. We are concerned mostly with “hands on” or “hands off” the steering wheel, but at some point, confidence will grow and we won’t have to pay attention to the road or other cars at all as we ride. Then we can begin to consider how other areas of life, work and travel can be supported by these evolving vehicles.
For example, when all riders are focused inward and the driving is handled by a sensor network, indicators like road signs, brake lights and lane separators become unnecessary. If there are no drivers, we won’t have a need for these visual guides.
By dividing the roll-out of autonomous vehicles into stages, breaking down the component parts and connecting to other trends, we can reveal the most likely areas of impact.
The Launch with Trucks, Rides and Safety
Examples of self-driving trucks are already appearing — a primary suggested benefit is that autonomous trucks will make roads safer. Rides for the elderly and others who are unable to drive is another clear early benefit of driverless vehicles.
Possible outcomes:
Rides for kids going to after-school activities with in-vehicle monitoring for parents — an autonomous ride becomes preferable to a stranger in the driver’s seat
Rides for homebound elderly and vision-impaired people with in-vehicle monitoring and voice services — Amazon Alexa is already joining this part of the trend.
In-vehicle ‘Meals on the Way’ services for riders become extensions of food service and delivery — this also prompts in-vehicle packaging and storage innovation for ‘on the way’ services
Seat-surround airbag systems protect passengers independent of orientation
Highways institute dedicated night-time hours and lanes for self-driving trucks
Continuous shipping, battery-swapping stations and mobile-charging vehicles keep autonomous trucks on the road at all times
New autonomous and manual vehicles transmit location automatically to provide system awareness to all cars on the road — this improves the flow of traffic overall, but presents some inherent security concerns
Improved solar panel efficiency enables roof charging for trucks and cars — this extends travel time and reduces the need for charging stations
The Evolution of Work and Roads
As riding becomes the preferred way to travel, larger ‘travel pods’ become a natural extension of the growing shared-workspace trend. All visual indicators can be removed from the road and new elements move inside the vehicle when the act of driving is handled by sensors.
Possible Outcomes:
Self-driving working pods for small-team domestic travel — this can reduce travel costs and increase continuity of work
Mobile workspaces connect with shared workspaces
“Sleep cars” become the new, less expensive way to travel short distances, reducing short-range air and train trips
In-car video calling is standard for new self-driving vehicles
Interior brand elements and lighting become more important than exterior as visitors and viewers are focused inward
With awareness of approaching vehicles and traffic, intersection traffic lights become less necessary
Night sensor driving reduces the need for streetlights on highways
Road signs and lanes disappear with roadway intelligence built into vehicles
Highway lanes expand and contract automatically for high-traffic times
Autonomous-only highways allow for much higher rates of speed
Mobile and Wi-Fi networks installed in vehicles allow for dynamic moving networks
The Shifting of Ownership, Homes and Recreation
As people focus more on rides and less on cars, this will start to shift how we design and use areas of our homes and could start a shift toward ‘manual driving’ as a recreational activity.
Possible outcomes:
Garages are hired out as self-driving car charging and storage stations
Personal car insurance becomes less common — insurance handled by driving services
Recreational driving services appear for manual driving — fewer car dealerships
Specialized recreation areas appear for manual driving
Street pick-up area indentations at the curb in front of homes become the new driveway — driveways and garages are no longer standard in home construction
Self-driving tiny homes merge two growing trends
The Merging of Transportation
As these vehicles begin to look less like cars and more like transport pods, they can easily be seen as modular plug-in points for other modes of travel.
Possible outcomes:
Modular self-driving pods appear, which can drop into Hyperloop tubes for traveling longer distances
Modular vehicles appear, which can ‘dock’ into homes, making travel easier
Aircraft with docking bays for the seating pods from mobile driving units become available, increasing efficiency of ticketing, boarding and air travel
Autonomous cars are not the only area that can be broken down into component parts and sequenced over time and trends. This kind of service and product decomposition can be a good way to look at strategic areas of focus in general, and this technique can reveal many unexpected new products and services for companies to explore in any industry.

Source: https://medium.com/design-voices/the-road-ahead-beyond-self-driving-12f996dd8ac3 (John Jones, 2018-11-12)
The Ultimate Guide to Linting
An essential guide to linting for making your code more readable and life easier
Background
Code quality has been a big topic of discussion in the last decade, and more people are talking about it now that more of us have to read code on a daily basis. Reading other people's code is hard. We all know that! A lot of tools and IDEs have made our lives easier by providing a standard framework to work within. These tools enforce some level of sanity, but a lot still needs to be done by the developer.
Good, readable code saves more time than we think. The first step, I think, to writing good, readable code is to be stylistically consistent — be it indentation, the spaces between two functions, the way you declare variables, even the way you import libraries. All this has to be taken care of while you’re writing your code for the first time. After that, there are tools to help you out.
A linter is one such tool: it helps identify potential issues with code based on style guides and naming standards, and it catches typos and bugs too. Linting is part of the larger picture of code quality. Most languages, based on their stylistic grammar and syntactical structure, have found standard ways of writing code. Yes, there are debates over tabs and spaces, but most of the other debates are, more or less, settled. Based on these set patterns, tools that enforce them are available in the market.
A very short introduction to Linting by Ahsan Zahid.
As engineers have realized the importance of writing readable code, the use of fancy IDE features that help you write code better has spiked. It could be as simple a feature as autocomplete (e.g., IntelliSense) or a query beautifier. Software developers, data engineers, data scientists, infrastructure engineers, data analysts: all these people spend a large chunk of their time writing code that will be maintained, and that other people will read. An average data engineer uses at least two IDEs: one for writing Python or Scala scripts for ETL, orchestration, etc., and another for writing good old SQL.
Good, readable code saves more time than we think.
Linters for Python
As Python is, more or less, the de facto language for data engineers and data scientists, let's talk about it. One of the first proposals to use a style guide for uniformity and consistency came from the original authors of Python themselves: PEP 8. Since PEP 8, a couple of other major linters have evolved and taken over the market, such as Pylint and Flake8.
Flake8 is essentially a wrapper around the following: PyFlakes, pycodestyle, and Ned Batchelder's McCabe script. Every major IDE and code editor supports it. For instance, if you're using VS Code, you can follow this link to understand how to set up or disable linting there. Google's Python style guide is based on Pylint.
If you write code in Java, Scala, or any other language, there are linters available for them too. Here's a repository from Google which contains style guides for all the languages they use.
Linters for SQL
Almost all major SQL IDEs have an option to enable syntax checkers, query beautifiers, and linters. SQL is also, for some reason, one of the most abused languages when it comes to style, consistency, and readability. Try reading someone else's queries. I have. Bothered by that issue, a while ago I wrote about the benefits of using a style guide while writing SQL.
Similar to Python, there are a number of style guides for writing SQL too. But there are different challenges when it comes to styling SQL. Different databases and data warehouses support different features of SQL, some of them specific to the platform being used. It is, therefore, hard to come up with a single style guide that works for Snowflake, Redshift, SparkSQL, MySQL, PostgreSQL, MS SQL Server, and Oracle at the same time. Not only that, but the style guides also have to be version-specific.
Try reading someone else’s queries.
The most comprehensive and sensible style guide I could find was written by Simon Holywell. There are a couple of other style guides, like GitLab's SQL Style Guide and Mozilla's own SQL Style Guide. Both of these are also well thought through. That's all good, but how do you check whether your SQL code follows a given style guide? Where are the linters?
There are two well-maintained linters for SQL: sql-lint and sqlfluff. You can choose one of them or write your own and integrate it into your CI/CD pipeline. Let's now talk about that last bit: the CI/CD pipeline.
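As a sketch of what "writing your own" can look like, here is a minimal, hypothetical checker in Python. It flags overly long lines and trailing whitespace, the same kind of rules pycodestyle ships. The rule table, rule codes, and the `lint_source` function are invented for illustration, not taken from any real linter's API.

```python
import re

# A "rule" is a (rule_id, message, predicate) triple; each predicate sees one line.
RULES = [
    ("E501", "line too long (> 79 characters)", lambda line: len(line) > 79),
    ("W291", "trailing whitespace", lambda line: bool(re.search(r"[ \t]+$", line))),
]

def lint_source(source: str):
    """Run every rule against every line; return (lineno, rule_id, message) tuples."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, message, predicate in RULES:
            if predicate(line):
                problems.append((lineno, rule_id, message))
    return problems

if __name__ == "__main__":
    sample = "x = 1   \ny = " + "'a' + " * 20 + "'b'\n"
    for lineno, rule_id, message in lint_source(sample):
        print(f"{lineno}: {rule_id} {message}")
```

Real linters like Pylint parse the syntax tree rather than raw lines, but the shape is the same: a set of rules, a pass over the code, and a list of findings.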
Conclusion
Finally, linting should be a part of your CI/CD pipeline. Executing the linter manually on your code is, again, moving in the wrong direction. The idea of linting is to reduce the possibility of human error, so running the linter in the IDE shouldn't be the only place where linting happens. Just like automated tests, linting has to become a part of the pipeline. Before your code is merged to the next level, it should be linted automatically, and if the lint fails, the pipeline should fail.
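To make the gating idea concrete, here is a small, hypothetical sketch of a pipeline step in Python: it translates lint findings into a process exit code, and optionally shells out to Flake8 if it is installed. The function names are my own invention, not part of any CI product.

```python
import subprocess
import sys

def gate(problems):
    """Translate lint findings into a CI exit code: non-zero fails the pipeline."""
    for lineno, message in problems:
        print(f"lint: line {lineno}: {message}", file=sys.stderr)
    return 1 if problems else 0

def run_flake8(paths):
    """Run Flake8 on the given paths (requires flake8 to be installed)."""
    result = subprocess.run([sys.executable, "-m", "flake8", *paths])
    return result.returncode

if __name__ == "__main__":
    # Fail the build whenever the linter reports anything.
    sys.exit(run_flake8(sys.argv[1:]))
```

A CI job would run a script like this before the merge step and abort on a non-zero exit code.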
Linting is about more than just checking for style; it can be about whatever you want it to be under the purview of code quality. You can write your custom checks inside the linter. A linter can contain any kind of logic that you want. The idea is that there's a piece of code that essentially goes through your piece of code to check whether it follows the rules you have defined.

Source: https://medium.com/dataseries/the-ultimate-guide-to-linting-edc55fc88b9b (Kovid Rathee, 2020-08-30)
How to Create an Instance in AWS

Thanks for reading! I'm Alfredo Barrón. Feel free to connect with me via Twitter.

Source: https://medium.com/modulr/how-to-create-an-instance-in-aws-7882ed4c557e (Alfredo Barron, 2019-07-31)
Do We Need Next-Gen Consoles?
All the trends suggest that dedicated home game consoles are on the way out
The game console is becoming obsolete. It’s a controversial statement, I know. But let’s step back from the truckload of hype surrounding the imminent launches of the PlayStation 5 and Xbox Series X. Let’s explore this question more seriously: consoles have been around a long time. Where are they going, and are they becoming redundant?
Many of us won’t forget the wonderful moment we acquired our first game console. Whether it was sitting nestled under a Christmas tree, or whether you were old enough to purchase it from a store yourself, few things in life can compare to that moment when you first unbox your brand new machine. It wasn’t just about accessing the games; it was about the ritual of unboxing the thing, plugging it in, and firing it up for the first time.
But times are changing. People’s needs and desires are shifting. In particular, free time is becoming more and more limited by the day. Technology is responding to our shifting habits, too, in an effort to better fit in with our daily routines. It’s worth remembering that way back in the ’90s and ’00s, most gaming experiences were delivered either through a dedicated console or a PC. Sure, you could also pick up a Game Boy — but it could never match a home console experience in terms of fidelity. The full fat experience always demanded that you plonk yourself down in front of a TV or monitor for a period of time.
But technology marched forward. Smart phones drove a lot of the change. And it can be argued that smart phones, as a product category, represented the single-largest expansion of the gaming market since its inception. In fact, smart phones account for more than 45% of the video game market share. And if that number doesn’t surprise you, consider that at least one-third of the entire world’s population has played or downloaded a game on a mobile device of some kind.
Of course, dedicated game consoles are still highly popular. The PlayStation 4 is one of the highest-selling consoles of all time. But the trends aren't in favour of consoles. According to the Entertainment Software Association's (ESA) recent survey "2020 Essential Facts About The Video Game Industry", women of all age groups and men aged between 55 and 64 prefer to play games on their phone. Consoles seem to be the gaming medium of choice for men aged 18–54. Sure, it's still a sizeable demographic, but the stats speak to a far broader market that increasingly prefers to experience games in different contexts.
There are other data points that are relevant here. For example, there’s this chart from the Statista Research Department from 2016. It shows that according to data gathered from video game companies around the world, only 29% of their customers prefer to play on console. This compares to a whopping 76% who play on mobile devices.
These numbers should surprise exactly no one. Mobile devices have increasingly become a necessity in the modern world — not the luxury they once were. They offer the broadest possible range of functionality, having subsumed a range of devices or products over time (from everything to physical calendars and diaries to dedicated music players). Consoles, despite offering additional entertainment options (like Netflix and YouTube), still remain a largely gaming-exclusive affair. In addition, smart phone technology is rapidly advancing year-on-year; so much so that the latest smart phones are now able to compete with — and sometimes even surpass — the most advanced dedicated handheld gaming machines. From a raw game fidelity standpoint, smart phones are already starting to shadow what we see on home consoles. The gap will only grow smaller in the coming years.
PlayStation 5 DualSense controller. Source: Sony.
And as the technological gap shrinks, developer opportunities expand. Sure, many popular smart phone games are more akin to Candy Crush or Angry Birds. They are, by definition, super-quick pick-up-and-play experiences that can be enjoyed for only a few minutes at a time. But we’re seeing more and more examples of larger, deeper, richer game experiences arriving on mobile devices as well. It’s not inconceivable that we’ll soon see the latest triple-A experiences from Naughty Dog, CD Projekt Red, or Rockstar launching on certain mobile devices as well as console and PC.
The existence of the Nintendo Switch is a good example of what this could look like. More and more developers are porting the latest games to the platform, albeit with some limitations. There is also the emergence of services like Google Stadia and Microsoft’s xCloud, which promise to completely circumvent the hardware limitations by directly streaming ultra high fidelity experiences to even the smallest mobile devices.
The big question seems to be whether or not all of these advancements and market trends are sending us hurtling towards a post-console future. Will the mere idea of a dedicated box sitting under the TV become a thing of the past? Will it become so easy to experience games in a more ubiquitous way (on potentially any screen), that the focus will shift to games and services and entirely away from dedicated boxes?
Microsoft already seem to have a clear view on this, given their approach to a more iterative upgrade cycle for the Xbox (consider the Xbox One X, Xbox Series S, and Xbox Series X), as well as the increasingly-rapid move towards an Xbox “ecosystem” rather than a single, dedicated piece of hardware. If Google Stadia is on one end of the spectrum, then PS5 and Switch are on the opposite end, with Microsoft having a bet both ways somewhere in the middle.
Whatever happens, the next few years are going to be fascinating. If we revisit this topic in five years' time, we may find ourselves looking at a completely different gaming landscape.

Source: https://medium.com/super-jump/do-we-need-next-gen-consoles-677b7c86ffaf (Alex Anyfantis, 2020-08-28)
Interesting musings…

During the last few days, I stumbled on two interesting threads.
One is from Frank Scavo, where he summed up Nick Carr's presentation on cloud computing and how it is changing the IT landscape. Not much new here from a cloud computing perspective, but I like how Nick uses historical analogy and compares it to past trends in the power industry. I have included Nick's talk on YouTube here.
[youtube=http://www.youtube.com/watch?v=BYP3uMOobqk&feature=player_embedded]
The second one is from Dave Kellogg, who summed up a recent Tom Siebel talk and some of the initiatives that he is working on. It is a good summary of some of the larger trends affecting our society in the current day and age. More and more folks are jumping in to help monitor and mitigate the carbon footprint we leave behind on a daily basis. This influx of both capital and intellectual power will help make the world a better place. It is also becoming good business, as more and more bright people jump on the bandwagon to improve energy utilization and work towards providing better alternative sources.
In particular, I like one of the links, which points to Zerofootprint's product offering. It has a simple calculator that people can use to analyze how much we contribute to CO2 emissions and how we can be more conscious in our daily lives, reducing the CO2 footprint we generate. It also shares up-and-coming enterprise applications for companies to measure and manage their own carbon footprint. That is not only good from a social-consciousness perspective, but also good business.
Enjoy!

Source: https://medium.com/aloktyagi/interesting-musings-bf6ff3a9d1f4 (Alok Tyagi, 2017-03-08)
How is Machine Learning used in the LinkedIn Recruiter Recommendation System

Photo by inlytics on Unsplash
Let’s find out!
The primary reason LinkedIn users are active on the platform is job recruitment. With more than 20 million companies listed on the site and 14 million open jobs, it's no surprise that 90% of recruiters regularly use LinkedIn.
In fact, a study found that 122 million people received an interview through LinkedIn, with 35.5 million having been hired by a person they connected with on the site.
Heavy Usage of ML and DS:
In addition to nurturing one of the richest datasets in the world, LinkedIn has been constantly experimenting with cutting edge machine learning techniques and pushing the boundaries of research and development.
Recruiter Recommendation:
Specifically, LinkedIn Recruiter is the product that helps recruiters build and manage a talent pool that optimizes the chance of a successful hire.
This product by LinkedIn needs to handle arbitrarily complex queries and filters and deliver results that are relevant to specific criteria.
The Architecture:
LinkedIn has built a search stack on top of Lucene called Galene, and contributed various plug-ins, including the capability to live-update the search index. The search index consists of two types of fields:
The inverted field: a mapping from search terms to the list of entities (members) that contain them.
The forward field: a mapping from entities (members) to metadata about them.
These search index fields contribute to the evaluation of machine learning feature values in search ranking. The freshness of data in the search index fields is also of high importance for machine learning features.
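As a rough illustration of the two field types (a toy sketch, not LinkedIn's actual implementation — the member schema here is invented for the example), they can be modeled in Python like this:

```python
from collections import defaultdict

def build_index(members):
    """Build a toy inverted field (term -> member ids) and forward field
    (member id -> metadata) from {member_id: {"title": ..., "skills": [...]}}."""
    inverted = defaultdict(set)   # inverted field: search term -> entities
    forward = {}                  # forward field: entity -> metadata
    for member_id, metadata in members.items():
        forward[member_id] = metadata
        terms = metadata["title"].lower().split() + [s.lower() for s in metadata["skills"]]
        for term in terms:
            inverted[term].add(member_id)
    return inverted, forward

members = {
    1: {"title": "Data Scientist", "skills": ["python", "statistics"]},
    2: {"title": "Machine Learning Engineer", "skills": ["python", "tensorflow"]},
}
inverted, forward = build_index(members)
print(sorted(inverted["python"]))   # -> [1, 2]
print(forward[1]["title"])          # -> Data Scientist
```

The inverted field answers "which members match this term?" while the forward field answers "what do we know about this member?", which is why both matter for computing feature values at ranking time.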
The Ranking Model:
The Recruiter search experience is based on an architecture with two fundamental layers.
L1: Scoops into the talent pool and scores/ranks candidates. In this layer, candidate retrieval and ranking are done in a distributed fashion.
L2: Refines the short-listed talent to apply more dynamic features using external caches.
The Details:
The Galene broker system fans out the search query request to multiple search index partitions.
Each partition retrieves the matched documents and applies the machine learning model to the retrieved candidates.
Each partition ranks a subset of candidates, then the broker gathers the ranked candidates and returns them to the federator.
The federator further ranks the retrieved candidates using additional ranking features that are dynamic or referred to from the cache — this is the L2 ranking layer.
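The fan-out/gather flow described above can be sketched as a toy scatter-gather ranker. This is an illustrative sketch only: the scoring functions, candidate schema, and cache are invented stand-ins for the real per-partition L1 models and external caches.

```python
import heapq

def l1_score(candidate):
    # Stand-in for the per-partition machine learning model.
    return candidate["static_score"]

def l2_score(candidate, cache):
    # Stand-in for dynamic features fetched from external caches.
    return l1_score(candidate) + cache.get(candidate["id"], 0.0)

def search(partitions, cache, k=3):
    """Fan out to partitions (L1), gather top-k per partition at the broker,
    then re-rank the short list with dynamic features (L2) at the federator."""
    gathered = []
    for partition in partitions:                         # broker fan-out
        top = heapq.nlargest(k, partition, key=l1_score) # per-partition L1 ranking
        gathered.extend(top)                             # broker gathers results
    return sorted(gathered, key=lambda c: l2_score(c, cache), reverse=True)[:k]

partitions = [
    [{"id": "a", "static_score": 0.9}, {"id": "b", "static_score": 0.2}],
    [{"id": "c", "static_score": 0.8}, {"id": "d", "static_score": 0.7}],
]
cache = {"d": 0.5}   # e.g. recent activity boosts candidate "d" at L2
print([c["id"] for c in search(partitions, cache, k=2)])   # -> ['d', 'a']
```

Note how the L2 re-rank can change the final order: candidate "d" overtakes "a" once its dynamic feature is applied, which is exactly why the second, cache-backed layer exists.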
Finding A Good Fit:
Another challenge of the LinkedIn Recruiter experience is to match candidates with related titles, such as "Data Scientist" and "Machine Learning Engineer". This type of correlation is hard to achieve by just using Gradient Boosted Decision Trees (GBDT). To address that, LinkedIn introduced representation learning techniques based on network-embedding semantic similarity features. In this model, search results are complemented with candidates who have similar titles, based on the relevance of the query.
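A minimal sketch of how embedding similarity can surface related titles, using invented 3-dimensional vectors and plain cosine similarity (real embeddings are learned from the member graph and have far more dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical embeddings: similar roles sit close together in vector space.
embeddings = {
    "data scientist":            [0.90, 0.80, 0.10],
    "machine learning engineer": [0.85, 0.75, 0.20],
    "accountant":                [0.10, 0.20, 0.90],
}

query = embeddings["data scientist"]
ranked = sorted(
    (title for title in embeddings if title != "data scientist"),
    key=lambda title: cosine(query, embeddings[title]),
    reverse=True,
)
print(ranked[0])   # -> machine learning engineer
```

Because "machine learning engineer" lies close to "data scientist" in the embedding space, it ranks ahead of unrelated titles even though the raw strings share no words — which is precisely what a keyword-only GBDT model struggles with.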
Machine Learning Methodologies used:

Source: https://medium.com/dataseries/how-is-ml-used-in-the-linkedin-recruiter-recommendation-system-a3aef6d1566b (Jitendra Singh Balla, 2020-10-30)
Who Really Makes the Purchase in B2B Buying? — Know the Decision Makers

Buying and selling a product or service is not as straightforward as it seems. There are several emotional as well as psychological aspects that play a significant part in confirming the purchase, and they usually go unnoticed. It's vital that companies take stock of the less-perceived factors in purchasing decisions to deepen their understanding of buyer roles and expand engagement across their target audiences. To start with, learn more about the various steps involved before customers make the final decision, and find out how that can be applied to develop an exceptional sales methodology.
The Levels of Buying Decisions for Business Purchases
A recent study by the Australia Post observed that there are four major channels contributing to around 92% of an individual’s purchase decision. These include effective direct mail campaigns, television advertising, flyers or catalogues and attractive websites. In order to ensure that the buying decision process goes smoothly, accurate information is required for interested clients; that means there should be a clear process for connecting them with the relevant team. Decision making is significantly increased if you provide additional resources like guidance manuals or FAQ sections on the website to readily resolve their concerns. A proactive customer service team will further benefit from the consultative selling aspect of business decision making thereby having major influences on business buyers.
Source: shanepatrickjones.com
Problem Recognition
The initial stage of problem recognition occurs when a consumer recognizes that there is a disparity between their ideal state of affairs versus the current state, i.e. they have a requirement that must be fulfilled to experience fulfilment. It is at this point that a business can maximize engagement, once they have a clear understanding of their target demographic.
Some of the more popular strategies used by companies include penetration pricing for new products in the market or an advertising campaign to draw attention to a specific brand. For example, a college student needs more laundry detergent, and decides to make a run to the nearby supermarket for a discounted deal on the product. Typically, the problem recognition happens when a consumer faces internal stimuli (e.g. stress, anxiety, hunger) or external stimuli (e.g. peer pressure, glamourous social media ads, word of mouth).
Information Search Process
The next step is when consumers start searching for the best possible solution to tackle their problem. Here’s where your sales team can accelerate the process by sharing helpful information to sway the potential client in your favour. Gaining a deeper understanding of the specifications required by the buyer is also valuable, in order to build tailored strategies via content marketing. For instance, your website can include pertinent keywords in line with potential consumers’ expectations. In the age of social media and convenient access to digital resources, it is essential that your website and social media channels possess a unique visual appeal.
Evaluating Available Alternatives
Consumers have an idea about the kind of product or service they are seeking, and will actively hunt for a great deal. This selection process is based on the pricing of a product, individual tastes, cultural background, the quality promised, etc. Consumers will take the help of immediate networks such as friends and family, read through online reviews about the concerned brand and ultimately, pick the commodity that best meets their criteria. At the business side, you can explore means of promoting the outstanding product or service you offer with evidence such as customer testimonials, internal surveys and independent reviews of the brand. For instance, this graph demonstrates organic growth in website traffic over 3 months on 500 sites after reviews were added.
Purchase Decision and Purchase
Before the buyer can come to an arrangement with their shortlisted supplier, there are multiple negotiations to narrow down the best price, method of payment and any available discount. There is also a confirmation of the preferred date of delivery and other contractual obligations, especially if this is a bulk order.
Once the order is completed and delivered on time, the client may explore additional stages in the process including a review or evaluation procedure to assess the overall performance and value of the product, and decide whether to remain with their supplier for future requirements. In case the quality of the product or service does not match what was promised, there may be certain fines or penalties imposed on the company.
Source: McKinsey & Company
Evaluation of Decision Post-Purchase
When the business purchase is executed, the client will have questions like whether it addresses the need, and whether it exceeds the expectations. Ultimately, a business aims to build life-long loyalty from the consumer’s perspective, and create a lasting relationship. If the product was defective or failed to impress, it may damage the brand reputation permanently in the eyes of said client. On the flip side, a stellar product and impeccable delivery can assure you of a repeat customer who will spread the word about your business!
The B2B Decision Making Process
Source: B2B International
Compared to more traditional consumer markets, a business-to-business (B2B) market differs in two major ways. Firstly, the process of decision making is more complicated, as the purchase decision is arrived at by a team instead of just one or two persons. Secondly, there is a need to address both an individual's as well as a company's requirements during a B2B purchase. This means that as part of the buying journey, decision makers have to factor in both rationally and emotionally driven motivations before selection.
In a survey conducted by B2B Marketing and gyro, out of the 113 B2B global marketers that participated, two-thirds mentioned that quality of content, level of research, and expertise are the top priorities when deciding on the vendor they want. To put it simply, during a standard B2B purchase decision, multiple needs might come into play (as seen in the above figure). The degree of significance given to the interplay of each quadrant is based on the type of culture in the company, the type of people, and the strategic advantages presented by the product or service in question.

Source: https://abbeyhouston490.medium.com/who-really-makes-the-purchase-in-b2b-buying-know-the-decision-makers-94cf29dc29a6 (Abbey Houston, 2019-07-02)
Living With Body Dysmorphia

Eventually I left home. Away from my mother's cooking.
I settled in a village in the heartland of Wales. About as far as I could possibly get from London. It was here that I discovered drugs. Proper Class A drugs. And rave music. Hardcore bass-thumping rave music.
I swapped my 4XL, Indie, long-sleeve and hide tee-shirt for bright and tight tees. I ate very little. I danced a lot. I lived life. I managed to reinvent myself in a place where nobody knew who I was previously. I came out of my shell.
But still, that nagging fat hung off my stomach. Every time I looked down, all I saw was this huge round belly that refused to be tamed. I still weighed myself and hated what I saw. I wasn’t thin enough. I wasn’t cool enough. I wasn’t me enough.
Confidence grew with the drug dependency. Grades suffered. There’d be a couple of arrests and twice I had to work over summer to convince the teachers to let me stay for another term.
I moved from Wales to Portsmouth. I grew my hair long. I smoked faster than a French fashion model. I devoured noodles as my main food source. I had even managed to get my weight down to a respectable 55kg (121lbs)*. I still didn’t feel good.
I lacked energy without the drugs. It was probably the lack of food. I would pat my belly and reassure myself that I’m nearly there. I had lost sight of where there was supposed to be.
Pretty soon I crashed.
Paranoia crept in. Dizzy spells. I wasn’t coping with life.
Somehow I managed to get through University. My grades were embarrassingly low compared to my peers. My drug habit had spiraled. I was consuming daily.
Even now, looking back, I'll tell you I wasn't an addict. I wasn't shooting up. There was no chasing dragons for me. This wasn't Trainspotting. I knew I could stop at any time. But I didn't.
The paranoia became too much.
I remember one evening clearly. I had lost my rhythm. My love for dance was in tatters. I could feel eyes staring as I struggled to capture my old joy. My coordination was shot. My legs were no longer nimble. I heard every voice talk about me and point at me. I needed to get out. Get off the dance floor and away from the crowd.
I stumbled out of the venue in a panic. My apartment was only across the road. Literally a two minute walk to safety. But I had somehow convinced myself that I was being followed and I couldn’t let anyone know where I lived. I took a circular path. A long-winded route that led to the seaside. A brazen forty minute hike in the wrong direction.
I sat alone on that beach, huddled as tight as I could get. The voices had finally stopped. I didn’t want to go on. I didn’t like being me.
In the distance, the fishermen had begun their night shift. Red lights blinked across the water. And then they began to blink in time. It appeared to me like they were getting closer. I imagined sirens. I could hear the low rumble of the coastguard. Hear the squawk of their radios. Overhead a chopper circled, possibly looking for a weirdo who’d left the club an hour into the night. I looked around and saw couples making out on the beach. I swear they all stopped and looked at me, whispering, asking why I was the only one on the beach without a partner.
It was this embarrassment that saved me. The perception of onlookers mocking my attempt at suicide that stopped me from walking out into the water.
It all sounds ridiculous. Far-fetched. This was the highlight of my paranoid days. Even beats meeting my future-self at a party that began my whole downward spiral.
I raced home. Again, taking the long way.
Back in my loft, four stories below, I could still hear a crowd talking about me as they passed by.
I knew I had to get away. As far as possible. After-all, this was exactly my pattern of behavior.
An opportunity arose for me to travel with a 'friend' to Australia. I leapt at the chance. A clean break from an environment that was slowly destroying me.

Source: https://medium.com/the-bad-influence/my-body-dysmorphia-ef999edb221 (Reuben Salsa, 2020-02-26)
Now we’re just negotiating on a price. | Hello,
My name is Olivia XXXX and I’m an accounts manager at [demonic marketing site]. I found your site http://alexandrasamuel.com recently on the web and
was impressed by its layout and content I feel that it could be suitable
for my client.
We are interested in publishing an article (which I can supply) on
your website,
The article will have a link to my client’s site in it. The link must
be do follow and we cant have any disclaimers\advertising tags.
Let me know if this is something you offer, and if so, what do you
charge for it?
I look forward to hearing from you soon.
Thanks,
Olivia XXXX

Source: https://medium.com/i-reply-to-spam/now-were-just-negotiating-on-a-price-ab30617eb8f2 (Alexandra Samuel, 2017-10-24)
Developers Need SDKMAN, Not Super-Man

Every developer knows the pain of setting up a development environment on their machine, with all its many steps. Sometimes the pain goes further still, when we need to test the same application on multiple versions of SDKs or virtual machines.
If you are a Mac user, you have the best option: the Homebrew installer.

But if you are a Linux user, your pain is unpredictable.
We are JVM-stack developers and Linux users, and we have the same pain of setting up development environments with lots of configuration and different versions of virtual machines.

For the sake of innocent developers, and for the sake of time, we are going to introduce our superhero, called SDKMAN, which saves us from the cruel world of setting up development tools.
Technical Introduction:
SDKMAN! is a tool for managing parallel versions of multiple Software Development Kits on most Unix-based systems. It provides a convenient Command Line Interface (CLI) and API for installing, switching, removing and listing Candidates. SDKMAN is primarily used for JVM-based languages and frameworks; in the future, the team plans to extend SDKMAN to other environments as well. Currently, SDKMAN supports a huge list of SDKs, which you can browse here.
Install SDKMAN
$ curl -s "https://get.sdkman.io" | bash
$ source "$HOME/.sdkman/bin/sdkman-init.sh"
$ sdk version

Install Java

For installing Java, SDKMAN provides a simple and easy command:

$ sdk install java
Downloading: java 8u152-zulu
In progress...
######################################################################## 100.0%
Repackaging Java 8u152-zulu...
Done repackaging...
Installing: java 8u152-zulu
Done installing! Setting java 8u152-zulu as default.
root@a33316a976d9:~/.sdkman# java -version
openjdk version "1.8.0_152"
OpenJDK Runtime Environment (Zulu 8.25.0.1-linux64) (build 1.8.0_152-b16)
OpenJDK 64-Bit Server VM (Zulu 8.25.0.1-linux64) (build 25.152-b16, mixed mode)

By default, SDKMAN downloads Zulu, an open-source build of the JDK. But what if we need to install a specific JDK version, or specifically the Oracle JDK?
SDKMAN gives us a way to download SDKs at specific versions as well. We can easily list the SDKs that SDKMAN supports and install them as required:

$ sdk list
================================================================================
Available Candidates
================================================================================
q-quit /-search down
j-down ?-search up
k-up h-help
Ant (1.10.1)
--------------------------------------------------------------------------------
Ant (1.10.1)                                             https://ant.apache.org/

Apache Ant is a Java library and command-line tool whose mission is to drive
processes described in build files as targets and extension points dependent
upon each other. The main known usage of Ant is the build of Java applications.
Ant supplies a number of built-in tasks allowing to compile, assemble, test and
run Java applications. Ant can also be used effectively to build non-Java applications, and so on.

$ sdk list java
================================================================================
Available Java Versions
================================================================================
9.0.1-zulu
9.0.1-oracle
9.0.0-zulu
> * 8u152-zulu
8u151-oracle
8u144-zulu
8u131-zulu
7u141-zulu
6u93-zulu
$ sdk install java 8u151-oracle

Oracle requires that you agree with the Oracle Binary Code License Agreement
prior to installation. The license agreement can be found at:
http://www.oracle.com/technetwork/java/javase/terms/license/index.html

Do you agree to the terms of this agreement? (Y/n): y

Downloading: java 8u151-oracle
In progress...
######################################################################## 100.0%
Repackaging Java 8u151-oracle...
Done repackaging...
Installing: java 8u151-oracle
Done installing!

Do you want java 8u151-oracle to be set as default? (Y/n): y
Setting java 8u151-oracle as default.

$ java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
As shown, we installed Oracle Java successfully. But as we discussed at the start of this blog, we can also install multiple versions of the same SDK on a single machine and manage them easily. Recapping: first we installed OpenJDK, then Oracle JDK, and we set Oracle JDK as the default. So how can we switch back to OpenJDK when we need it? Below are the simple but powerful SDKMAN commands that achieve this:

$ sdk list java
================================================================================
Available Java Versions
================================================================================
9.0.1-zulu
9.0.1-oracle
9.0.0-zulu
* 8u152-zulu
> * 8u151-oracle
8u144-zulu
8u131-zulu
7u141-zulu
6u93-zulu ================================================================================
+ - local version
* - installed
> - currently in use
================================================================================

$ java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)

$ sdk use java 8u152-zulu
$ java -version
openjdk version "1.8.0_152"
OpenJDK Runtime Environment (Zulu 8.25.0.1-linux64) (build 1.8.0_152-b16)
OpenJDK 64-Bit Server VM (Zulu 8.25.0.1-linux64) (build 25.152-b16, mixed mode)
I am sure you can now feel the power of SDKMAN and how easy this tool is to use. It makes developers' lives happier and safer.
References: | https://medium.com/knoldus/developers-needs-sdkman-not-super-man-15fe48b7ddff | ['Knoldus Inc.'] | 2018-02-08 22:59:15.483000+00:00 | ['Java', 'Buildtools', 'Spark', 'Apache Spark', 'Scala'] |
Zippie Summer Product Update | Zippie wallet is now live in Africa with the first partners and with a refreshed user experience. Read below for more.
Live with first customers in Zambia
As mentioned in our previous update in June, we launched with our first customers AfriDelivery, Tigmoo and Musanga in Zambia. The launch event attracted a full house, and Zippie was covered by the top local press, including the largest TV station, ZNBC.
Since the launch, things have moved forward as planned, and AfriDelivery's and Tigmoo's users can now get prepaid airtime rewards directly to their Zippie-powered wallets by referring friends. In other words, the Zippie solution is live in the market with real customers and users :)
Best of all, users don’t need bank accounts, credit cards or app downloads to earn, send and receive rewards — just a basic smartphone with a browser is enough.
Tigmoo ecommerce shop promoting Zippie-powered airtime rewards in their main page
New wallet UX
Previously, we also mentioned that we’re working on a more streamlined and dynamic user experience to make it easy for people to onboard and use our wallet and get airtime.
We’re happy to say that the new UX is now released to our Zambian partners and the initial feedback has been very encouraging. Our partners appreciate the clean and simple UI and smooth onboarding of new users. | https://medium.com/zippie/zippie-summer-product-update-270d2918de36 | ['Pasi Rusila'] | 2019-08-08 10:32:52.136000+00:00 | ['Marketing', 'Smartphones', 'Loyalty', 'Blockchain', 'Africa'] |
What Would a Nondual Practice Look Like? Advaita Vedanta Sadhana | Dear Reader,
This essay (blog post) is borrowed from Advaita Academy, on which there will be an archive of my spiritual writings since 2011 (when they finish updating their site). This one is offered here on Medium because the idea of Advaita Vedanta Sadhana comes up in a series of current stories entitled, Going the Distance in Spiritual Life (specifically, part 2).
Since this is my first blog, I want to acknowledge my teachers; for whatever is found useful in these writings is due to them — their realization, their love and training. Reverent salutations to Swami Aseshananda of the Ramakrishna Order who gave me mantra diksha and ushered me into the great lineage of Sri Ramakrishna, Sri Sarada Devi, and Swami Vivekananda; to Lex Hixon, who exemplified the transcendent joy of spiritual life and introduced me to Swami; and to Babaji Bob Kindler, also a disciple of Swami, who took me under wing and continues to weave the living wisdom of the Indian darshanas into my heart and mind. Om
Early on in SRV Associations, Babaji put forward the ideal of Advaita Vedanta Sadhana for his students. At first hearing this might sound like a contradiction in terms. Advaita is beyond doing, beyond thinking — beyond the duality of bondage and liberation, or the pure-impure mind that sadhana (spiritual practice) is supposed to address. What kind of sadhana would be advaitic?
All of us inclined or devoted to Advaita have heard that we are ever-free and never-bound by our very nature, and that we do not need to do anything to create that state (and in fact, we can’t). This is definitely true from the standpoint of Atman-Brahman, our formless nature as Pure Consciousness. But, it is also true that “we are not God until we realize we are God.” This realization has everything to do with our mind. In last weekend’s classes, Babaji stated, quoting Sri Ramakrishna and Lord Buddha, “Pure mind is God, and pure mind is Buddha Mind.” He went on to add, “if pure mind is God, then impure mind isn’t, and it is only the impure mind that stands in our way.”
So, what is an impure mind? Essentially, it is a mind that cannot set aside the various dualities (pain/pleasure; good/bad; life/death, etc) and concentrate on the Self/God. This is where we see the value of that adage concerning how a single fiber sticking out from a thread will keep it from passing through the eye of the needle. The eye of the needle is always there — its existence doesn’t depend on whether the thread is well-licked or not; but that fiber keeps the thread from going through. Like that, a mind that cannot concentrate its powers and turn them inward will have no access to the Self.
Thus, the noble practice of Advaita Vedanta Sadhana requires that one truly understands this distinction between the Self and what is not the Self, with special attention on the mind, since every thing arises from it. Through this discrimination (viveka) we realize, as is stated in the Avadhuta Gita, that the Self is not made pure by bowing at the Guru’s feet. The Self is not made pure by destruction of the mind’s waves. Nor is the Self made pure by engaging in Yoga. The Self is already pure. Therefore, we give up the notion that our sadhana will give us Self-realization. How can something that we “do” in space and time, that has a beginning, a middle, and an end, cause That which is unchanging, unborn, and transcendent of time and space? The finite can never be a cause of the Infinite. As Swami Aseshananda frequently stated, “Realization is not the effect of a cause.” Sadhana, then, ceases to be goal-oriented, and becomes a divine preoccupation that qualifies the mind to behold its true essence and source. And there is great joy in this way of living. It makes sense of the Absolute and relativity, and frees us from all kinds of intellectual knots.
Viveka, discrimination, is a core practice and key concept. There are many ways to define it, but the first one we usually come across is, as translated into English, “discrimination between the Real and the unreal.” And that may be all we get at first. I personally puzzled over this for years despite the fact that a spiritual friend helped me with another version that translates as, “discrimination between the Eternal and the noneternal.” That made immediate sense to my intellect and felt like something I could carry out. But the profundity in both of those translations escaped me until I came to understand the Sanksrit word “Sat,” as in Satchitananda (Existence, Knowledge and Bliss), and in “asato ma sadgamaya” (lead us from untruth to Truth). I was fascinated by how Existence, Truth, and Real were all meanings for Sat. Somewhere during this time I also learned of the Yamas in the eight-limbed system of Patanjala Yoga, wherein one must practice satyam, truthfulness in thought, word, and deed. I also read and heard from my teachers that Sri Ramakrishna extols satyam as the austerity of the Age — by truthfulness alone the mind could be purified and the Self revealed. “How could merely telling the truth be this powerful?” I pondered.
Yet, all these ideas eventually coalesced most beautifully and disarmingly in the light of the scriptures, my teachers, and personal reflection on Truth, truthfulness, Existence, Eternal, the Real. What is That which is always and ever True? — that nothing in time, space, and causation can make false or destroy? — that waking, dream, and deep sleep states cannot hide, distort, or make void? — that depends upon nothing else for its Existence? This kind of inquiry belongs to viveka, discrimination, the first requirement in what is called the sadhanachatushtaya, the four fold practice/qualifications of the student.
In classical times, one needed some attainment in all four qualifications before a teacher would give out the nondual teachings. Nowadays, given the general poverty of authentic spiritual instruction in the home and its support in society overall, it falls to the teacher to help the sincere student become qualified in order to hear these teachings properly. Thus, and also in tune with the idea of Advaita Vedanta Sadhana, we are given the Truth up front and immediately, and then work on qualifying the mind to understand. | https://medium.com/vedanta-teachings-for-the-west/what-would-a-nondual-practice-look-like-advaita-vedanta-sadhana-92a615d31e8e | ['Annapurna Sarada'] | 2017-10-07 16:03:57.401000+00:00 | ['Spiritual Practice', 'Self-awareness', 'Guru', 'Nonduality', 'Spirituality'] |
The Surprised Writer | The Google assures me that it was Robert Frost who dropped this pearl: “No surprise for the writer, no surprise for the reader”.
Actually, the whole quote is “No tears for the writer, no tears for the reader; no surprise for the writer, no surprise for the reader”. I can recall only one story that I wrote with tears streaming down my face. That story got savaged in workshop, so I’m hesitant to ascribe too much credence to that part of Robert’s maxim. In fact, the only time I’m in a position to confirm the truth of the second bit is from the other side of the keyboard and then only if there’s some way to find out if the writer was in fact surprised.
That said, a lot of what I write seems to swerve into territory I did not intend.
When I first got this whacked-out notion to write stories, I was under the impression that I needed to have that baby mapped out before I stuck the first two words together. This resulted in very little writing. Plenty of pacing, not much writing.
The piece that had me dripping tears onto the keyboard? I had it choreographed to the final period. I knew exactly where to put each word, each punctuation mark, each pause, and each paragraph break (of course I’m exaggerating; I’m a writer, darling). However, I did plan the life and breath out of that story. That wasn’t its only problem but it sure was what those readers disliked the most. Or at least what they told me loudly and repeatedly.
The writer teaching that class, Steve Lattimore, told me that my next story would be my break-out story.
Why did I abandon my careful mapping on that next story? Who knows. Steve may have had something to do with it. Or maybe I was sick of pacing around the apartment trying to plan the whole story. At any rate, I started that story with the image of a homeless guy I’d seen sleeping on a steam grate in the snow. I built the main story about someone not unlike myself at that time, younger and prettier, but not all that different. I didn’t know where I was going. I wrote six sentences and deleted four. I wrote two sentences and rewrote both. I kept at it, going in whatever direction the story seemed to want to go. And as I was coming to the end of the story one perfect closing sentence floated up from whatever mysterious place where perfect sentences wait to be stumbled upon.
I was high for days after that. I was hooked.
Now I simply wait for the ideas to float up, the opening sentences, the what-if scenarios. I just go on about my business and let those puppies come to me. And they do.
But the puppy that comes yipping up to me, eager to get his feet onto a page, isn’t always the puppy that you read about. In fact, sorry puppies, but that initial idea that floats up — even when it’s a very good idea — isn’t usually what survives the process of writing the actual story.
For example
My family of origin may not talk to me (it sure takes the stress out of the holidays, let me tell you) but generations of aunts, cousins, grandparents, great-grandparents, uncles, and Mom and Dad have left a treasure trove of ideas. Often it’s things they said that we, the kids, weren’t necessarily supposed to hear.
I was making the bed several days ago when Mom paid a visit — you know, in my head — and left me with this gem: Remingtons don’t cry.
Some PBS documentary I’d seen years ago included Rose Kennedy spouting that same nonsense about Kennedys. Know what? I bet they do cry. The gods know they’ve got plenty to cry about. I can tell you that no matter how often Mom whipped that one out, well, Remingtons did and do cry.
The piece started out light-hearted and easy on the snark (for me). I’m just tooling along, enjoying the ride when someone gave the wheel a hard turn, and the next thing I knew I was going down some very old, dimly lit streets. Streets I tend to avoid. Suffice to say that by the final couple of paragraphs this Remington was close to crying.
Not what Remington expected. I wonder — and you can tell me — was it what you expected?
The surprise isn’t always good, though
This type of seat-of-the-pants writing has its downside. The start of each piece, including this one, has me feeling as if I’m standing at the edge of a vast field with a shovel. Somewhere out there, I think over to the left a bit near that row of trees, is where I plan to emerge. I start digging. Yep, that’s me tunneling underground, blind and nearly directionless, but still valiantly aiming for that row of trees.
My morgue file is filled with wrong turns. I have been known to toss out four, five, even eight pages of — to me — really great writing that went in the wrong direction (but nearly everything that does land in the morgue file gets another chance so it’s never wasted effort).
After all that, do I emerge anywhere near my row of trees? Occasionally. Even fairly often. I did this time and with next to no pacing around the apartment first. Of course, this time I didn’t have far to go and was pretty confident of my destination. I suppose that means that I’m not surprised and so, according to our friend Mr. Frost, you aren’t either (although the number of metaphors I’ve managed to throw into this mix is surprising).
But often enough I emerge from my dark tunnel, look around and discover that I’m nowhere near those trees. Instead, I climb out of my hole, brush the dirt off my hands, and discover I’ve somehow found my way to Shangri-la or some other mythical paradise. Because the best writing — of which I still aspire to — is mythical. It is mysterious and defies the gravity that holds other, lesser writing down. It floats. And as mystified as we readers are by these feats of literary derring-do, I bet the writer is even more amazed.
At least I hope so.
© Remington Write 2020. All Rights Reserved. | https://medium.com/illumination-curated/the-surprised-writer-3c33138012b5 | ['Remington Write'] | 2020-12-22 18:07:25.401000+00:00 | ['Ideas', 'Metaphor', 'Writing Tips', 'Reading', 'Writing'] |
Can a Convolutional Neural Network diagnose COVID-19 from lungs CT scans? | Unfortunately, we have not gotten a significant improvement; our model still overfits after 10–15 epochs. We can see that the training loss starts to decrease while the validation loss starts to increase. Another problem is that since our validation loss is low from the start, we can assume that we simply got lucky with the initial parameters, and so our model is not robust (remember that since we don't have a test set, we want to re-evaluate our model with a new split). If we check the model summary, we will see that our model has 4,987,361 parameters, a huge number for such a small dataset. Let's try to reduce them by adding more convolutional layers with max-pooling (we will also add several dense layers to see whether this improves performance):
def create_model():
model = Sequential([
Conv2D(16, 1, padding='same', activation='relu', input_shape=(img_height, img_width, 1)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 5, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 5, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 5, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.4),
Dense(64, activation='relu'),
Dropout(0.5),
Dense(8, activation='relu'),
Dropout(0.3),
Dense(1, activation='sigmoid')
])
model.compile(optimizer=OPTIMIZER,
loss='binary_crossentropy',
metrics=['accuracy', 'Precision', 'Recall'])
return model
Now our model has 671,185 parameters, a significantly smaller number.
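As a sanity check on the parameter counts quoted in this section, the standard formulas are easy to apply by hand: a Conv2D layer has filters × (k × k × in_channels + 1) parameters (weights plus biases), and a Dense layer has units × (inputs + 1). The sketch below (a standalone illustration, not part of the training script) reproduces a few of the layer sizes that model.summary() would report:

```python
# Hand-computing layer parameter counts, as Keras' model.summary() would report.
# Formulas: Conv2D -> filters * (k * k * in_ch + 1); Dense -> units * (inputs + 1).

def conv2d_params(filters, kernel, in_channels):
    """Weights plus biases of a Conv2D layer."""
    return filters * (kernel * kernel * in_channels + 1)

def dense_params(units, inputs):
    """Weights plus biases of a Dense layer."""
    return units * (inputs + 1)

# A few layers from the model above (the input has 1 channel):
print(conv2d_params(16, 1, 1))    # Conv2D(16, 1)            -> 32
print(conv2d_params(32, 3, 16))   # Conv2D(32, 3)            -> 4640
print(conv2d_params(64, 5, 32))   # Conv2D(64, 5)            -> 51264
print(dense_params(64, 128))      # Dense(64) after Dense(128) -> 8256
```

The overall totals (671,185 and so on) additionally depend on the Flatten output size, which is set by img_height and img_width, so they cannot be reproduced here without those values.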
However, if we try to train this model, we will see the following: our model has become too "pessimistic" and predicts COVID for every patient. It appears that we made our model too simple.
Let’s reduce the structure to the following:
def create_model():
model = Sequential([
Conv2D(16, 1, padding='same', activation='relu', input_shape=(img_height, img_width, 1)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 5, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 5, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.4),
Dense(64, activation='relu'),
Dropout(0.5),
Dense(8, activation='relu'),
Dropout(0.3),
Dense(1, activation='sigmoid')
])
model.compile(optimizer=OPTIMIZER,
loss='binary_crossentropy',
metrics=['accuracy', 'Precision', 'Recall'])
return model
This model has 2,010,513 parameters: several times more than the "not complex enough" model, but several times fewer than the "too complex" model. It is therefore computationally cheaper and easier to train.
Now we start seeing quite good results. During training, the model goes through a "predict positive for everyone" stage, but overcomes it and returns to proper predictions. It still starts to overfit after around 40 epochs (and we see the same picture with every re-split of our data), so we will let our model train for 40 epochs and evaluate it afterwards.
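Rather than hard-coding 40 epochs, the usual fix is an early-stopping callback that halts training once the validation loss stops improving; in Keras this is the EarlyStopping callback with a patience argument. The patience logic itself is simple enough to sketch in plain Python (an illustrative toy, not part of the training script above):

```python
def early_stop_epoch(val_losses, patience=5):
    """Return the 1-based epoch at which training would stop: the first
    epoch after which validation loss has not improved for `patience`
    consecutive epochs. Returns len(val_losses) if training runs to the end."""
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            since_best = 0
        else:
            since_best += 1
        if since_best >= patience:
            return epoch
    return len(val_losses)

# Synthetic curve: improves for 40 epochs, then overfits (loss rises).
curve = [1.0 - 0.02 * e for e in range(40)] + [0.25 + 0.01 * e for e in range(20)]
print(early_stop_epoch(curve, patience=5))  # -> 45: stops 5 epochs past the minimum
```

With patience=5 the run stops at epoch 45, five epochs after the minimum at epoch 40; in Keras one would also set restore_best_weights=True to keep the epoch-40 weights.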
There Are a Lot of Problems with Sex Robots | Dr. Kate Devlin has as good an idea of what draws men to sexbots as anyone. Devlin, the author of the upcoming book Turned On: Science, Sex and Robots, says, “There’s the group that want that ‘Pygmalion Experience’ where they wish it was a real woman. Then there’s the people who have the robo fetish.” These two groups find common ground in sexbots’ hyper-perfected realness, and this design alone presents myriad problems.
The logistical complications of creating a human-like, talking, humping sexbot are huge. There is the weight, for one thing, because a metal skeleton is hefty. There is the energy source, for another, because batteries are hot, heavy and short-lived. Honda’s Asimo, recently retired, weighed 115 pounds, and ran for an hour on his lithium ion battery. Boston Dynamics Atlas, the backflip robot, weighs 330 pounds, and runs for less than an hour. Neither this weight nor this energy level makes for a full night of passion, and Atlas is currently the best robotics can do.
“Right now, a humanoid robot is basically impossible to make true-to-human,” says Xavier, who asked not to be identified by his real name in this story. A graduate of the MIT Media Lab who specializes in robotics, he says, “We’re just now maybe able to build humanoid robots that are realistic. But in terms of sex robots, they’re sort of like stuffed animals with motors in them — I’d consider them movie props.”
The hard fact is that much as some people want a functioning gynoid sexbot, these machines don’t exist, and they probably won’t in our lifetime.
But technological stumbling blocks are only one part of why today’s sexbots are bad and wrong. There’s also an essential moral queasiness about the endeavor to build one. The concept that there could be ethical issues swirling around sexbots is nothing new — in fact, concern about sexbot ethics is behind The Campaign Against Sex Robots, a three-year-old organizations that has looked to legally ban sex robots even before they hit the market.
Experts caution that beating, raping, or harming a gynoid robot could encourage that same behavior toward women.
Robot ethics is easy to comprehend when you recognize that the concept has less to do with what humans do to the bots, and more about how the ways human act towards bots could affect human interactions.
Dr. Kate Darling, robot ethicist and researcher at the MIT Media Lab, observes, “Ethical issues arise from concern that we might behave certain ways towards very realistic sex robots that look like a real woman,” because “that behavior might translate to our interactions with real women.” In other words, some experts caution, beating, raping, or harming a gynoid robot could encourage that same behavior to women.
This concern isn’t as mind-bendingly weird as it immediately seems. Sexbot designers are already imbuing their bots a kind of “consent.” Realbotix builds its Harmony sexbot (currently, a disembodied head sold separately from — but able to work with — existing RealDoll bodies) to recognize when she’s being ignored or disrespected, both in her daily mode and in her “X-Mode” or sexual simulation. Likewise, Santos is programming his Samantha sexbot to be able to say no, although what this means in real-life terms is unclear. Men who don’t want to take Samantha’s no for an answer may not.
These programming choices look a lot like conciliatory gestures to people who value women’s humanity — especially when you look at the totality of these bots’ A.I. Realbotix’s Harmony is physically a head, but her “soul” resides in an app. Users can tweak Harmony’s personality to suit their tastes (there’s even a capability for second personality, which the company has named Solana) by adjusting the app’s 10 “person points” and 18 “personality traits.” You can make your Harmony affectionate, happy, kind, and sexual — but you can also make her insecure, quiet, jealous and intense.
Harmony’s preset personality modes mean that Realbotix created a talking bot who can shut up, a beautiful bot who can express self-doubt, a sexual bot who can convey jealousy, and a bot whose intellectualism, imagination, and unpredictability can be turned down — or off — and this is telling. Realbotix designed Harmony to appeal to men who want to control their sex partner’s emotional, intellectual, and psychological states. And, because we live in a world where men aim to control real women, this is disturbing.
Harmony is a capitalist product, but Harmony is also a bot that invites you to see her as a human woman, and that makes absolute control an unsettling selling point. Realbotix is poised to release Harmony’s brother bot, Henry, next year, and while we don’t yet know what Henry’s preset personalities will include, it’s fair to assume that “annoying,” “insecure,” or “innocent” won’t be among his preset attributes. “Confident” and “assertive” likely will be, however.
Henry, its makers seem to think, is the ideal solution to all the ethical, gendered, and sexist dilemmas because, hey, it’s for women. Like Harmony, Henry will be a head equipped with an app you can personalize that will tell you jokes, give you compliments and recite poetry — because of course, all women want are bots to tell them they’re beautiful. Speaking on Sveriges Radio P4, Dr. David Levy, author of Love and Sex with Robots, said, “Imagine if women could have a bot that tells them, ‘Darling, you are so beautiful’ in addition to having a nice vibrating penis. Who wouldn’t like that?”
Me, for one. I wouldn’t like that. The idea that women need their sex toys to tell them they’re beautiful is misbegotten logic modeled on men’s needs. Women have been doing just fine with inarticulate dildos for millennia, and we’re not looking to our vibrators for conversation. Henry may be a nice conversation piece. He may act like Siri on steroids. He may provide companionship to people who feel lonely. But he is not a solution to the technologically fraught, ethically wrong, and easily fixed problems of today’s sexbots.
So what is? | https://onezero.medium.com/there-are-a-lot-of-problems-with-sex-robots-38ea0c17b7db | ['Chelsea G. Summers'] | 2019-08-09 18:27:17.153000+00:00 | ['Technology', 'Artificial Intelligence', 'Future Human', 'Robotics', 'Feminism'] |
[Paper] Backprop: Visualising Image Classification Models and Saliency Maps (Weakly Supervised Object Localization) | [Paper] Backprop: Visualising Image Classification Models and Saliency Maps (Weakly Supervised Object Localization)
Weakly Supervised Object Localization (WSOL) Using AlexNet
Visual Geometry Group, University of Oxford
In this story, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (Backprop), by Visual Geometry Group, University of Oxford, is shortly presented. You may already know, this is a paper from the famous VGG research group. It is called Backprop since the latter papers call it Backprop when mentioning it.
Weakly supervised object localization (WSOL) aims to find the bounding box of the main object within an image using only the image-level label, without a bounding-box label.
In this paper:
Two visualizing methods are proposed: one is a gradient-based method and one is a saliency-based method.
For the saliency-based method, GraphCut is utilized for weakly supervised object localization (WSOL).
This is a paper in 2014 ICLR Workshop with over 2200 citations. (Sik-Ho Tsang @ Medium) | https://sh-tsang.medium.com/backprop-visualising-image-classification-models-and-saliency-maps-weakly-supervised-94392011b34a | ['Sik-Ho Tsang'] | 2020-11-29 06:29:43.996000+00:00 | ['Artificial Intelligence', 'Object Localization', 'Convolutional Network', 'Deep Learning', 'Weakly Supervised'] |
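To make the gradient-based idea concrete: the saliency of a pixel is the magnitude of the class score's gradient with respect to that pixel. The toy sketch below estimates those gradients with central finite differences instead of a backprop pass; the scoring function and the weights are invented purely for illustration and are not from the paper:

```python
# Toy gradient-based saliency for a flat, single-channel "image":
# saliency[i] = |d score / d pixel_i|, estimated by central finite differences.

def relu(v):
    return v if v > 0 else 0.0

def score(pixels, weights):
    """A made-up scalar 'class score' for a flat image."""
    return sum(relu(w * p) for w, p in zip(weights, pixels))

def saliency_map(pixels, weights, eps=1e-4):
    out = []
    for i in range(len(pixels)):
        hi = list(pixels); hi[i] += eps
        lo = list(pixels); lo[i] -= eps
        out.append(abs(score(hi, weights) - score(lo, weights)) / (2 * eps))
    return out

img = [0.2, 0.9, 0.4, 0.7]
w = [0.0, 2.0, -1.0, 0.5]
print(saliency_map(img, w))  # pixels with larger active |weight| get higher saliency
```

In the paper the gradient is computed with a single backprop pass through the trained network and, for RGB images, the maximum magnitude across channels is taken per pixel.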
The Case for Letting Go of Pandemic Shaming (Even Now) | The Case for Letting Go of Pandemic Shaming (Even Now)
I don’t agree with Farhad Manjoo’s conclusion. I can see how someone would reach it.
Photo by Mika Baumeister/Unsplash
Last week, I cried for the first time in… well, I guess about two years, since I saw that wrenching documentary about Mister Rogers’ Neighborhood. These were big, gummy sobs, the kind where your sinuses plug up and you yawp for air.
I had just gotten off the phone with my mom, whom I haven’t seen in a year, explaining that my wife and I would not be flying from New York to Chicago for Thanksgiving as we had planned. When we made the arrangements, Covid numbers were down, and it seemed — with proper distancing, quarantine measures, testing, and high-quality PPE — that we could travel reasonably safely to a gathering that would exclusively involve my mom, her partner, my wife, and me. With cases now surging, especially in Illinois, we decided this was no longer responsible.
And yet, I can find empathy for those who would make the opposite choice. Enter Farhad Manjoo.
In what has already become an infamous column for the New York Times Friday, Manjoo traces through an animated infographic the size of his Covid-19 bubble. He concludes that it’s “enormous.”
“Once I had counted everyone, I realized that visiting my parents for Thanksgiving would be like asking them to sit down to dinner with more than 100 people,” he writes. Then, in a twist that has prompted fierce criticism from members of the Twitter commentariat, he decides to travel for Thanksgiving anyway.
“With a practically nonexistent federal response and extremely inconsistent strategies from local officials, an individual is forced to make a complex moral calculus with every decision.”
Shaming is easy. Research shows it’s also probably not effective. (For more on this and its specific relationship to the public health crisis represented by Covid-19, I recommend this thread by Julia Marcus, PhD, an infectious disease epidemiologist at Harvard Medical School.) And the backlash misses two points. First, Manjoo’s piece, despite his ultimate decision, actually illustrates infection risk better than anything else I’ve seen. (Actually, two people I know fessed up to sharing the story without having read the last part, simply thinking the data visualization was a powerful way to dissuade people from traveling.) Second, the situation Manjoo describes in Northern California does represent a different potential for exposure compared to, say, a family gathering that includes someone who’s been bar-hopping in North Dakota. The risk is far beyond non-zero, but it is also nuanced.
I’d like to be clear about one thing before I continue. The safest decision anyone can make right now, in the words of Zeynep Tufekci, is to “hunker down.” Isolating is what will slow the spread of the coronavirus, period. If you can, you should. This is why my family decided to cancel Thanksgiving plans, and why, ethically, you would do well to cancel yours.
But I think we must also understand that the issue is not exactly two-dimensional. In fact, with a practically nonexistent federal response and extremely inconsistent strategies from local officials, an individual is forced to make a complex moral calculus with every decision.
“ In an absurdly painful year, I’m not sure I’m ready to condemn someone who acknowledges that death is on the horizon and decides to spend some time with his loved ones.”
Here’s one example: Is it wrong to order food in a city where a delivery person might potentially be exposed to a dozen-plus customers every night? In a vacuum, of course it is: You are increasing the number of face-to-face interactions for a service worker who is forced to complete a now-risky job because you want the luxury of a meal that you did not cook yourself. How does the calculus change if you can tell the delivery person to leave the meal outside your door? How does it change if the restaurant, faced with months yet to go without a widespread Covid vaccine or a financial bailout from the government, is facing imminent closure for lack of business? How does it change if the city has maintained relatively low infection rates? How does it change if you are nervous about going to a grocery store, where a great many people gather every day? How does it change if you are physically unable to get to a grocery store?
To which a reasonable individual might respond that the issue of feeding yourself is very different from the issue of seeing family when you do not absolutely have to. But even here there are gradations: If the situation is truly as Manjoo describes, then his travel is about as safe as it could be. His children are educated in a “distance-learning pod” in an area with relatively low Covid rates; through conversations with other parents, he is confident that even his “indirect contacts are taking the virus seriously”; he is quarantining his family before their travel; he is planning to get tested before and after the trip; he is driving, not flying; and his Thanksgiving gathering will be extremely limited.
By this account, the risk of anyone getting sick is perhaps smaller than it might be in a relatively routine interaction elsewhere in this country. Yes, it would be even smaller if he simply sucked it up and did not travel. So many of us are making that painful decision — it’s natural to be upset at someone who understands the risks and chooses not to make the same sacrifice.
Then again, isn’t it that exact burden that should lead us to empathy? In an absurdly painful year, I’m not sure I’m ready to condemn someone who acknowledges that death is on the horizon and decides to spend some time with his loved ones.
A number of commentators seem to agree with this while reserving criticism for the platform that published Manjoo’s take. The Times is huge and influential, and so it should not endorse a perspective that may inspire others to take a dangerous action, the criticism goes. On this count, I have to admit I’m ambivalent as an editor myself. I believe in the power of the printed word and that the editors of the Times op-ed pages would, as a general rule, benefit from a bit more restraint. I also see a column that illustrates better than anything else I’ve encountered how risk can spiral out of control, and communicates in clear language a psychic toll that many of us are grappling with.
As I write this, I’m left wondering when I’ll see my mom again. If I was prone to pessimism, I might instead wonder if I will see my mom again. I carry this fear as someone who lost a father to disease as a teenager. I hope (seemingly against hope) that we can come together and see reason as an American community to fight this pandemic together. And I won’t insist that an individual — even one as frequently misguided as Manjoo — be shamed into shouldering this burden alone. | https://coronavirus.medium.com/the-case-for-letting-go-of-pandemic-shaming-even-now-9f5d4e913dc1 | ['Damon Beres'] | 2020-11-20 20:11:11.193000+00:00 | ['Shame', 'Risk', 'Covid 19', 'Pandemic', 'Coronavirus'] |
Digital Marketing skills that lead to prosperity and progress your growth

Digital marketing, the promotion of products or brands via one or more forms of electronic media, differs from traditional marketing in that it uses channels and methods that enable an organization to analyze marketing campaigns and understand what is working and what isn't, typically in real time.
“We don’t believe in digital marketing, we believe in marketing in digital world”.
Digital marketing is an umbrella term for all of your online marketing efforts. Businesses leverage digital channels such as Google search, social media, email, and their websites to connect with their current and prospective customers.
Digital marketing raises many questions.

Let's go through some of the challenges and skills in digital marketing and understand them briefly.
Immense growth in digital turnover
The greatest challenge is managing the immense growth in digital turnover.
“The secret of change is to focus all your energy not on fighting the old but building the new”.
Source: Digital turnover
One of the fastest-growing platforms is Facebook, which offers many options that attract users throughout the world.

Facebook's web and mobile apps help users connect, share, discover, and communicate with each other.

It built a better platform for users to connect with each other.
Number of active users in 2012: 1.1 billion.

Number of active users in 2016: 1.71 billion.

Stock price at IPO: $38; stock price as of yesterday: $128.
Investments from other companies facilitated the growth. The customer base is diverse: 70% of users come from outside of the U.S.

Managers were not pressured to produce immediate profits or revenues.
“More users first, revenues later”.
Email Commerce
Let's begin with a quote; it is quite interesting, right?
“Not enough talk about the importance of brand in an email. Customers don't sign up for email - they sign up for your brand”.
Email marketing is the art of sending commercial messages to a group of people; every email that we deliver to existing customers counts as email marketing.

It refers to sending email messages to stay in a relationship with present or former customers, to acquire new customers, and to convince existing customers to purchase something and share it with other people.
Source: Email Commerce
Compared to traditional mail, which must be labeled with the recipient's name, street address, city, state or province, and zip code, email is cheaper and faster.
“Great to meet you, let's keep in touch”

Business cards shared with people we meet once in a blue moon are easily misplaced; exchanging email contacts instead is far more reliable.
The main strategy is marketing your products and services through the email channel, with a greater possibility of earning profit and meeting your goals.
Paid social media advertising

To be successful in social media, publishing ads plays a superior role.

In today's technological world, all of us are adapting to social media: the viral sites we use daily.
Does anyone think marketing is about the stuff that you make? If so, I would say your assumption is wrong; the thing that matters most is the stories you tell.
“ Creating brand is not what we tell the customers, is exactly what the customers tell each other”.
Paid social media is a crucial factor for revenue growth and online business; a network like Facebook is growing faster than other media thanks to the ads it publishes.

On the other hand, Twitter is losing active users every month, and Google's users are largely inactive too.

Usually, we just use social media to impress people; make sure to make an impact on the world instead.
Source: Break on social media
Are you getting tired of the same old subject? Okay, so how do we make an impression and satisfy our customers?

To satisfy customers, you have to give your best in advertising. You also have to build an effective relationship with them.
The things a producer should do are:

1. Know the complete needs of the customers.

2. Check already existing products and decide what more you have to add.
Here are some skills to catch on:
How to use Facebook’s analytics tool “Facebook Insights”
Use “Power Editor” well
What can be done with “Look-alike” audiences?
The granular targeting of “Custom audiences”.
Search engine marketing
Search engine marketing (SEM) is a form of Internet marketing that involves the promotion of websites by increasing their visibility in search engine results pages (SERP’s) primarily through paid advertising.
There are some considerations here:

Optimizing your content, website, and blog. Creating an SEM account is easy, and it can build traffic quickly depending on the degree of competition.
Digital Marketing boards
There are abundant platforms available. Hubspot is the first inbound marketing software that provides all the tools you need to improve and manage your online marketing strategy.
ExactTarget
Marketo
Marin Software
Vocus
Email Marketing boards
There are lots of platforms, but here are a few.

1) Campaign Monitor

2) StreamSend

3) Mad Mimi

4) Constant Contact
Social Media Marketing
Social media marketing is becoming more popular for both practitioners and researchers.
To use social media effectively, firms should learn to allow customers and Internet users to post user-generated content, also known as “earned media,” rather than use marketer-prepared advertising copy.
Content Marketing
Source: Content Marketing
Do you believe great content is the best sales tool in the world? Of course, yeah…
Let’s take a quick look at content marketing.
Content marketing is the art of using different kinds of content such as blogs, webinars, videos, website pages to attract website visitors and eventually convert them into leads/customers.
“Content Marketing is a long-term relationship. It's not a one-night stand”
It is the act of putting content on the web.
Mobile Marketing
Mobile marketing is the art of marketing your business to appeal to mobile device users.
“If your plans don’t include mobile phones, your plans are not finished”
Source: Mobile Marketing
Mobile is the future of marketing, but really the era of mobile has already arrived. If you’re not implementing any kind of a mobile marketing strategy, you’re already trailing behind!
Usually, all are getting appealed to use mobile phones in these upcoming technologies!!!
More users are spending larger amounts of time engaged with mobile devices than ever before. New marketing consists of smartphones, SEO, mobile sites, geolocation, and social marketing. As marketers, we have to understand these new types of consumers and how best to reach them.
Viral Marketing
Source: Viral Marketing
Viral Marketing refers to marketing techniques that use pre-existing social networking services and other techniques to produce an increase in brand awareness or to achieve other marketing objectives through self-replicating viral processes, analogous to the spread of a virus.
A direct marketing technique in which a company persuades internet users to forward its publicity material in emails.
These days, viral videos are everywhere and everyone wants one. That’s because they’re the cheapest way to spread your message to the world.
Conclusion
There are many benefits of using digital marketing: your growth will accelerate, and it helps you make use of proven strategies and techniques that attract not necessarily more traffic, but highly targeted traffic that delivers results.

Targeting the right kind of people to deliver the right kind of results is what digital marketing is all about: ensuring the survival of your business. | https://medium.com/thoughtbees/digital-marketing-skills-that-lead-to-prosperity-and-progress-your-growth-850a97be86de | ['Ishwarya Sampath'] | 2017-12-15 01:13:52.751000+00:00 | ['Marketing', 'Digital Marketing', 'SEO', 'Marketing Strategies', 'Marketing Automation']
Why Bill Nye Is Not A Scientist - And Why It Matters

This piece is expanded from a previous article that I wrote, which can be found here:
First Off — Who is Bill Nye, and What Are His Qualifications?
Bill Nye the “science” guy was a favorite of many school children growing up in the 1990’s. He had a hit TV series where he taught scientific concepts to millions of children, and for that he should be applauded, even by those who disagree with him politically.
So it’s not unreasonable that many people are under the mistaken impression that Bill Nye is a scientist. But is Bill Nye an actual “scientist” and what qualifications does one have in order to be considered a scientist?
Let’s start with his formal education. Bill Nye has a mechanical engineering degree, so he would have taken his fair share of physics courses, however, he has not done the work typically required to be considered a “scientist”.
Time and time again, I see Bill Nye brought on national TV to speak as an expert on scientific issues. And amazingly Bill Nye views himself as being qualified to come on such shows! But Bill Nye has no scientific credentials whatsoever, besides being a comedian in a lab coat, and has never contributed any research to the scientific community!
The below clip is a key example of what I am talking about.
Princeton scientist with decades of research experience: Here is what the science says.
CNN Anchor (turns to the actor who played a scientist on a children’s show 20 years ago): Bill, Bill, what do you think?
Bill Nye, Actor/Comedian: He doesn’t know what he’s talking about, but take my word for it.
Some random person: Something something Trump something something
The entire thing was like an episode of the Twilight Zone. These people were dismissing an actual scientist in favor of an actor who once played a scientist on TV to protect their political ideologies.
What Qualifies Someone To Be Called A Scientist?
In order to earn the privilege of calling yourself a “scientist” one normally has to have an earned PhD (or at least a Master’s) in the natural sciences. But as one geneticist that I know told me, even after earning his PhD, he still felt hesitant using the word “scientist” to describe himself. To be able to call yourself a “scientist” is a very high honor, and not one that those in the scientific community use lightly.
[Author’s Note: In my original writing of this article, I failed to give credit to scientists who have their Master’s degrees who have done incredible work in the scientific community, and deserve that respect and recognition. And for that, I am deeply sorry. I thought about it, but then honestly failed to mention it, as it was not relevant to the case of Bill Nye. — That said, many of the replies insist that Bill Nye is a scientist, simply because he has spent a lot of time talking about science on TV, and trying to educate people, but this objection simply is not valid.]
I personally have two science degrees, an associate degree and a bachelor degree in biotechnology. (Don’t laugh, I am very proud of that associate degree, as my early college years were incredibly difficult.)
I have done research, in research labs, to further the cause of science. Yet even though I have my degree, and have done scientific research, I have not earned the coveted social privilege of calling myself a scientist, because I do not yet have my PhD.
In order to earn the honor of being called a scientist, by those of us in the scientific community, one usually has to have an earned PhD, or at least a Master’s in the natural sciences.
Of course, there are cases where someone can be described as a “scientist” even if they have no science degrees, such as in historical contexts. Many of the great scientists of past centuries had no scientific degree, in large part because they were the trail blazers. These were people who founded the modern sciences and contributed incredible insight into the natural world.
But keep in mind, that there was a time in history where most medical doctors never went to college. Someone living in a past century might be legitimately called “doctor”, even without a college education, but that would not be true today. Similarly, it would be exceedingly rare, and virtually unheard of for someone in the 21st Century to be legitimately called a “scientist” without an earned scientific PhD or a Master’s.
The one caveat to this would be that it is possible, although incredibly rare, that someone might teach themselves molecular biology through books, and lectures online, and come out with something like a cure for testicular cancer. In a case like that, someone has clearly earned the right to call themselves a scientist, having done an incredible amount of self-teaching, and an incredible amount of research, displaying knowledge and ability that could only be obtained by one with great understanding of molecular biology and related sciences.
So in very very rare instances, someone with incredible ability, and who has contributed incredible research might be considered a scientist by those of us in the scientific community, even without an earned PhD.
But how does this compare with Bill Nye? Does Bill Nye have any scientific degrees or scientific credentials?
The answer is simply no. However, that does not stop Bill Nye or CNN from pretending like he does.
Bill Nye in an interview with Tucker Carlson
Bill Nye’s “Honorary PhDs”
Not long ago, someone that I know called me on the phone to argue with me about Bill Nye’s qualifications. She kept insisting that Bill Nye IS a legitimate scientist because he has not one, but six honorary PhDs.
Originally this person told me that Bill Nye had six doctorates, and my face must have contorted in confusion listening to the phone, because no one — or at least no one that I know of — has six legitimate PhDs in science, let alone Bill Nye. Then a quick Google search revealed that these were honorary PhDs.
Once that little detail came out, things made sense.
I kept trying to explain to this person, ad nauseam, that an honorary degree doesn't mean anything. Honorary degrees are handed out like candy by universities, thanking people that they like for coming to a university to speak.
Honorary degrees, unlike degrees with honor, are handed out by universities to people that they like, and the person does not have to have ever done anything of significance. That’s why some universities (such as MIT, and the University of Virginia) don’t hand them out at all. They cheapen real degrees from people who have done actual work, and create confusion for those who don’t understand what an honorary degree is.
It’s unfortunate, because the original intent for an honorary PhD was to award people who did not have PhDs for contributing excellent work to a particular field. A possible example might be a computer engineer who figures out a new, faster, and better way to sequence DNA — This person would most likely be given at least one honorary PhD in molecular biology.
The practice of handing out such degrees has been criticized by many, because now we live in a world where universities commonly award PhDs to someone who has done no work in the field, simply for showing up to speak at an event, or for donating money to a university.
Oprah Winfrey, for example, has four honorary doctorates.
http://www.businessinsider.com/celebrities-who-have-honorary-degrees-2015-8/#jack-nicholson-has-an-honorary-doctorate-from-brown-26
This is why an “honorary degree” doesn’t mean anything. With an earned PhD, a student has often slaved away for five or six years in a molecular biology lab trying to find a genetic variable in rats that might help lead to a better understanding of why humans get lung cancer. But with an “honorary PhD” one merely has to be a comedian in a lab coat who enjoyed the filet mignon on his flight across the country one afternoon.
Again, don’t get me wrong here, I mean no disrespect to Oprah… but I do mean to disrespect Bill Nye, a man who goes on TV pretending to be a scientist even though he has no clue what he’s talking about, and millions of Liberals believe whatever nonsense falls out of his mouth, over people who have actual science degrees, and over actual scientists.
I pointed all of this out to said person that I mentioned before, but she kept insisting that in science an honorary PhD actually means something different (it doesn’t).
So, let’s take a look at the six honorary doctorates that Bill Nye has, and how he obtained them.
Rensselaer Polytechnic Institute — Speaker
As far as I can tell, Bill Nye was awarded his “PhD” here simply for showing up at the event to speak. The university website gives no details other than that. No papers had to be published in scientific journals, and no pesky dissertation had to be written. He simply had to show up, and talk to students who earned actual PhDs.
https://science.rpi.edu/itws/news/rensselaer-graduates-1613-208th-commencement
Johns Hopkins University — Speaker
Same as RPI, Bill Nye has never written a scientific paper, or done any research that would qualify him as a “scientist” or earn him a doctorate.
https://commencement.jhu.edu/our-history/honorary-degrees-awarded/
https://commencement.jhu.edu/our-history/commencement-speakers/
https://www.youtube.com/watch?v=aHLXRiMWLj4
Willamette University — According to the university website, Nye was awarded an honorary doctorate for his science TV shows. The website also notes his work in engineering. But again, nothing here that would qualify him as an actual scientist.
https://web.archive.org/web/20110312040839/http://www.willamette.edu/news/library/2011/03/commencement_2011.html
Rutgers University — Bill Nye was paid $35,000 to speak at Rutgers, and he was also given an honorary doctorate for showing up. During the speech, Bill Nye attacked man-made Global Warming/Climate Change skeptics. This is not surprising considering that Bill Nye has attacked actual scientists many times before simply for not agreeing with his preconceived ideas.
http://dailycaller.com/2015/05/18/bill-nye-paid-35k-to-tell-students-to-dismiss-global-warming-skeptics/
http://www.nj.com/education/2015/05/rutgers_graduates_class_of_2015.html
Lehigh University — Honorary doctorate in pedagogy. Pedagogy is not exactly science, it’s the art of teaching ideas to students in different ways so that they can learn the concepts. While I will criticize Bill Nye for falsely representing himself as a scientist, and while I will point out that his presentations are often false and misleading, there is a lot of room for praise here, and an honorary degree in pedagogy is definitely appropriate.
Millions of kids watched Bill Nye the science guy growing up, and became interested in science as a result. Personally, I was a fan of the other popular 90’s kid’s show, The Magic School bus, as well as the books. (Fo-shizzle-my-Frizzle)
Despite my criticism, when Bill Nye is teaching about topics that don’t deal with his preconceived political and social ideas, he does an amazing job, and shines like a superstar!
As someone with a passion for science, and for teaching, if I could reach the number of people that Bill Nye has, and get that many people interested in science, I would be very grateful for my life!
But being a comedian in a lab coat, even one with an educational TV show, does not a scientist make.
Simon Fraser University — Again, Bill Nye was given this award for his educational work, not because of any notable scientific contribution. The article on the SFU website does not specify whether Bill Nye spoke at the event, however that detail is not really important.
https://www.sfu.ca/sfunews/stories/2015/bill-nye-receives-honourary-degree-from-sfu.html
So, again, beating a dead horse that some people still will not get, Bill Nye is not a scientist, and has not done the work to earn the privilege of calling himself a scientist. He’s as much a scientist as Rachel Dolezal is an African American.
Bill Nye’s Work for NASA
But…but… Bill Nye did work for NASA!
Yes, Bill Nye has done work for NASA, but so has the person working in their cafeteria, that does not make them a scientist. While it is true that Bill Nye has done work for places such as NASA, his contribution was as an engineer, not a scientist.
What’s the difference? I personally went to a school that was big on engineering. Mechanical Engineering was probably the largest program there with hundreds and hundreds of students at least. My program, Biotechnology and Molecular Bioscience, was substantially smaller, with a few dozen students, and a number of closely related programs, such as Bioinformatics, and good old standard Biology.
While an engineer will use physics and calculus to design machines, it’s the job of the scientist to study the natural world. Both fields are extremely technical, and Bill Nye undoubtedly had to take a lot of physics courses, but that still does not make him a scientist.
If Bill Nye wants to go on TV explaining how a motor works, then great! If Bill Nye wants to go on a kids show and explain the mating habits of the black widow spider, then great! But Bill Nye is not a biologist, or a climate scientist. That does not mean that he cannot have an opinion, but he is not an expert in these areas.
Disturbingly, what we have is Bill Nye going on TV, dismissing actual scientists, and arrogantly accusing actual scientists of not understanding what they are talking about, simply because they don’t buy into Bill Nye’s nonsense.
My interview with Cody Libolt on this article
Conclusions
Over the past few years, Bill Nye has become a political activist, and out of nowhere became popular once again. While many people are under the mistaken impression that Bill Nye is a scientist, he does not have any scientific credentials. His six honorary doctorates were given to him for his work on educational TV shows, and for showing up as a guest speaker, not for any actual scientific research that he’s done.
Unfortunately millions of people do not understand what a scientist is, and will undoubtedly continue to take everything that Bill Nye says as if it were coming from a scientific authority, and not just a comedian with a popular TV show.
Thank you for reading my article. Be sure to subscribe to my electronic Newsletter where I publish exclusive content, and where I let my fans know when new publicly available content is published. I specialize in writing about science, economics, science fiction, and apologetics.
https://greenslugg-com.ck.page/48b7a894b1
Further Reading: Priceonomics-Why Do Colleges Give Out “Honorary” Degrees?
Be sure to check out some of my other articles!
I can also be found at:
GreenSlugg.com -My primary website where I blog and seek to promote Christian intellectual thought, seeking to teach Christians how to witness to others.
My YouTube channel: https://www.youtube.com/user/GreenSlugg
My Amazon author page, where I publish sci-fi and speculative fiction: https://www.amazon.com/G.S.-Muse/e/B074DPZ8PZ/ref=dp_byline_cont_ebooks_1
The Twitterz: https://twitter.com/GSMuse1
Patreon, where you can donate to my work: https://www.patreon.com/GreenSlugg
And be sure to check out other articles from authors at For The New Christian Intellectual
https://medium.com/christian-intellectual | https://medium.com/christian-intellectual/why-bill-nye-is-not-a-scientist-and-why-it-matters-20b6e3fc3fee | ['G.S. Muse'] | 2020-03-30 23:38:24.156000+00:00 | ['Ken Ham', 'Evolution', 'Bill Nye', 'Science', 'Creation'] |
Building stuff with the Kubernetes API (Part 4) — Using Go

This is part 4 of a multipart series which covers the programmability of the Kubernetes API using the official clients. This post covers the use of the Kubernetes Go client, or client-go, to implement a simple PVC watch tool, which has been implemented in Java and Python in my previous posts.
The Kubernetes Go Client Project (client-go)
Before jumping into code, it is beneficial to understand the Kubernetes Go client (or client-go) project. It is the oldest of the Kubernetes client frameworks and therefore comes with more knobs and features. Client-go does not use a Swagger generator, like the OpenAPI clients we have covered in previous posts. Instead, it uses source code generators, originated from the Kubernetes project, to create Kubernetes-style API objects and serializers.
The project is a collection of packages that can accommodate different programming needs from REST-style primitives to more sophisticated clients.
RESTClient is a foundational package that uses types from the api-machinery repository to provide access to the API as a set of REST primitives. Built as an abstraction above RESTClient, the Clientset will be your starting point when creating simple Kubernetes client tools. It exposes versioned API resources and their serializers.
There are several other packages in client-go including discovery, dynamic, and scale. While we are not going to cover these packages, it is important to be aware of their capabilities.
A simple client tool for Kubernetes
Again, let us do a quick review of the tool we are going to build to illustrate the usage of the Go client framework. pvcwatch is a simple CLI tool which watches the total claimed persistent storage capacity in a cluster. When the total reaches a specified threshold, it takes an action (in this example, a simple notification on the screen).
You can find the complete example on GitHub.
The example is designed to highlight several aspects of the Kubernetes Go client including:
connectivity
resource list retrieval and walk through
object watch
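Before looking at the client calls themselves, it helps to see the core logic pvcwatch needs: sum the claimed capacities and compare the total against a threshold. The sketch below shows that accumulation logic in plain Go. Note that `toMiB` is a deliberately simplified, hypothetical parser used only for illustration; real code should use `resource.Quantity` from k8s.io/apimachinery instead.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// toMiB converts a simplified capacity string such as "5Gi" or "512Mi"
// into mebibytes. It is a stand-in for resource.Quantity parsing.
func toMiB(capacity string) (int64, error) {
	switch {
	case strings.HasSuffix(capacity, "Gi"):
		n, err := strconv.ParseInt(strings.TrimSuffix(capacity, "Gi"), 10, 64)
		return n * 1024, err
	case strings.HasSuffix(capacity, "Mi"):
		return strconv.ParseInt(strings.TrimSuffix(capacity, "Mi"), 10, 64)
	default:
		return 0, fmt.Errorf("unsupported unit in %q", capacity)
	}
}

// exceedsThreshold sums the claims and reports whether the total
// has reached the given threshold, in MiB.
func exceedsThreshold(claims []string, thresholdMiB int64) (int64, bool) {
	var total int64
	for _, c := range claims {
		if mib, err := toMiB(c); err == nil {
			total += mib
		}
	}
	return total, total >= thresholdMiB
}

func main() {
	total, over := exceedsThreshold([]string{"5Gi", "512Mi", "2Gi"}, 7*1024)
	fmt.Println(total, over) // 7680 true
}
```

Keeping this accumulation logic separate from the client code makes it easy to test without a cluster; the client's job is only to feed it the capacity values.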
Setup
The client-go project supports both Godep and dep for vendoring management. I use dep for ease of use and continued adoption (yes, yes, I know vgo… I know). For instance, the following is the minimum Gopkg.toml config required to set up your code with a dependency on client-go version 6.0 and version 1.9 of the Kubernetes API:
[[constraint]]
  name = "k8s.io/api"
  version = "kubernetes-1.9.0"

[[constraint]]
  name = "k8s.io/apimachinery"
  version = "kubernetes-1.9.0"

[[constraint]]
  name = "k8s.io/client-go"
  version = "6.0.0"
Running dep ensure takes care of the rest.
Connecting to the API server
The first step in our Go client program will be to setup a connection to the API server. To do this, we will rely on utility package clientcmd as shown.
import (
    ...
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    kubeconfig := filepath.Join(
        os.Getenv("HOME"), ".kube", "config",
    )
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    ...
}
Client-go makes this a trivial task by providing utility functions to bootstrap your configuration from different contexts.
From a config file
As is done in the example above, you can bootstrap the configuration for connecting to the API server from a kubeconfig file. This is ideal when your code will run outside of a cluster.
clientcmd.BuildConfigFromFlags("", configFile)
From a cluster
If your code will be deployed in a Kubernetes cluster, you can use the previous function, with empty parameters, to configure your connection using cluster information when the client code is destined to run in a pod.
clientcmd.BuildConfigFromFlags("", "")
Or, use package rest to create the configuration from cluster information directly as follows:
import "k8s.io/client-go/rest"
...
rest.InClusterConfig()
Create a clientset
We need to create a serializer client to let us access API objects. Type Clientset, from package kubernetes, provides access to generated serializer clients to access versioned API objects as shown below.
type Clientset struct {
*authenticationv1beta1.AuthenticationV1beta1Client
*authorizationv1.AuthorizationV1Client
...
*corev1.CoreV1Client
}
Once we have a properly configured connection, we can use the configuration to initialize a clientset as shown in the next snippet.
func main() {
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
...
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
log.Fatal(err)
}
}
For our example, we will use version v1 API objects. So, next we use the clientset to access the core API resources via method CoreV1() as shown.
func main() {
...
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
log.Fatal(err)
}
api := clientset.CoreV1()
}
You can see the available clientsets here.
Listing cluster PVCs
One of the most basic operations we can do with the clientset is to retrieve resource lists of stored API objects. For our example, we are going to retrieve a namespaced list of PVCs as follows.
import (
...
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
var ns, label, field string
flag.StringVar(&ns, "namespace", "", "namespace")
flag.StringVar(&label, "l", "", "Label selector")
flag.StringVar(&field, "f", "", "Field selector")
...
api := clientset.CoreV1()
// setup list options
listOptions := metav1.ListOptions{
LabelSelector: label,
FieldSelector: field,
}
pvcs, err := api.PersistentVolumeClaims(ns).List(listOptions)
if err != nil {
log.Fatal(err)
    }

    printPVCs(pvcs)
    ...
}
In the snippet above, we use ListOptions to specify label and field selectors (as well as namespace) to narrow down the PVC resources returned as type v1.PersistentVolumeClaimList. The next snippet shows how we can walk and print the list of PVCs that was retrieved from the server.
func printPVCs(pvcs *v1.PersistentVolumeClaimList) {
    template := "%-32s%-8s%-8s\n"
fmt.Printf(template, "NAME", "STATUS", "CAPACITY")
for _, pvc := range pvcs.Items {
quant := pvc.Spec.Resources.Requests[v1.ResourceStorage]
fmt.Printf(
template,
pvc.Name,
string(pvc.Status.Phase),
quant.String())
}
}
Watching the cluster PVCs
The Kubernetes Go client framework supports the ability to watch the cluster for specified API object lifecycle events including ADDED , MODIFIED , DELETED generated when an object is created, updated, and removed respectively. For our simple CLI tool, we will use this watch capability to monitor the total capacity of claimed persistent storage against a running cluster.
When the total claim capacity, for a given namespace, reaches a certain threshold (say 200Gi), we will take an arbitrary action. For simplicity's sake, we will just print a notification on the screen. However, in a more sophisticated implementation, the same approach can be used to trigger some automated action.
Setup a watch
Now, let us create a watcher for PersistentVolumeClaim resources using method Watch . Then, use the watcher to gain access to the event notifications from a Go channel via method ResultChan .
func main() {
...
api := clientset.CoreV1()
listOptions := metav1.ListOptions{
LabelSelector: label,
FieldSelector: field,
}
    watcher, err := api.PersistentVolumeClaims(ns).Watch(listOptions)
if err != nil {
log.Fatal(err)
}
ch := watcher.ResultChan()
...
}
Loop through events
Next, we are ready to start processing resource events. Before we handle the events, however, we declare variables maxClaimedQuant and totalClaimedQuant of type resource.Quantity (the type Kubernetes uses to represent SI quantities) to set up our quantity threshold and running total.
import(
"k8s.io/apimachinery/pkg/api/resource"
...
)

func main() {
var maxClaims string
flag.StringVar(&maxClaims, "max-claims", "200Gi",
"Maximum total claims to watch")
var totalClaimedQuant resource.Quantity
    maxClaimedQuant := resource.MustParse(maxClaims)
    ...
ch := watcher.ResultChan()
for event := range ch {
pvc, ok := event.Object.(*v1.PersistentVolumeClaim)
if !ok {
log.Fatal("unexpected type")
}
...
}
}
The watcher’s channel, in the for-range loop above, is used to process incoming event notifications from the server. Each event is assigned to variable event, and event.Object is asserted to be of type PersistentVolumeClaim so we can extract the needed info.
Processing ADDED events
When a new PVC is added, event.Type is set to value watch.Added. We then use the following code to extract the capacity of the added claim (quant) and add it to the running total capacity (totalClaimedQuant). Lastly, we check whether the total capacity is greater than the established max capacity (maxClaimedQuant). If so, the program can trigger an action.
import(
"k8s.io/apimachinery/pkg/watch"
...
)

func main() {
...
for event := range ch {
pvc, ok := event.Object.(*v1.PersistentVolumeClaim)
if !ok {
log.Fatal("unexpected type")
}
        quant := pvc.Spec.Resources.Requests[v1.ResourceStorage]

        switch event.Type {
case watch.Added:
totalClaimedQuant.Add(quant)
            log.Printf("PVC %s added, claim size %s\n",
                pvc.Name, quant.String())

            if totalClaimedQuant.Cmp(maxClaimedQuant) == 1 {
log.Printf(
                    "\nClaim overage reached: max %s at %s",
maxClaimedQuant.String(),
totalClaimedQuant.String())
// trigger action
log.Println("*** Taking action ***")
}
}
...
}
}
}
Process DELETED events
The code also reacts when PVCs are removed. It applies the reverse logic and subtracts the deleted PVC's size from the running total.
func main() {
...
for event := range ch {
...
switch event.Type {
case watch.Deleted:
quant := pvc.Spec.Resources.Requests[v1.ResourceStorage]
totalClaimedQuant.Sub(quant)
            log.Printf("PVC %s removed, size %s\n",
                pvc.Name, quant.String())

            if totalClaimedQuant.Cmp(maxClaimedQuant) <= 0 {
log.Printf("Claim usage normal: max %s at %s",
maxClaimedQuant.String(),
totalClaimedQuant.String(),
)
// trigger action
log.Println("*** Taking action ***")
}
}
...
}
}
Run the program
When the program is executed against a running cluster, it first displays the list of existing PVCs. Then it starts watching the cluster for new PersistentVolumeClaim events.
$> ./pvcwatch
Using kubeconfig: /Users/vladimir/.kube/config
--- PVCs ----
NAME STATUS CAPACITY
my-redis-redis Bound 50Gi
my-redis2-redis Bound 100Gi
-----------------------------
Total capacity claimed: 150Gi
-----------------------------

--- PVC Watch (max claims 200Gi) ----
2018/02/13 21:55:03 PVC my-redis2-redis added, claim size 100Gi
2018/02/13 21:55:03
At 50.0% claim capacity (100Gi/200Gi)
2018/02/13 21:55:03 PVC my-redis-redis added, claim size 50Gi
2018/02/13 21:55:03
At 75.0% claim capacity (150Gi/200Gi)
Next, let us deploy another application onto the cluster that requests an additional 75Gi in storage claims (for our example, let us use Helm to deploy, say, an InfluxDB instance).
helm install --name my-influx \
--set persistence.enabled=true,persistence.size=75Gi stable/influxdb
As you can see below, our tool immediately reacts to the new claim and displays our alert because the total claims are more than the threshold.
--- PVC Watch (max claims 200Gi) ----
...
2018/02/13 21:55:03
At 75.0% claim capacity (150Gi/200Gi)
2018/02/13 22:01:29 PVC my-influx-influxdb added, claim size 75Gi
2018/02/13 22:01:29
Claim overage reached: max 200Gi at 225Gi
2018/02/13 22:01:29 *** Taking action ***
2018/02/13 22:01:29
At 112.5% claim capacity (225Gi/200Gi)
Conversely, when a PVC is deleted from the cluster, the tool reacts accordingly with an alert message.
...
At 112.5% claim capacity (225Gi/200Gi)
2018/02/14 11:30:36 PVC my-redis2-redis removed, size 100Gi
2018/02/14 11:30:36 Claim usage normal: max 200Gi at 125Gi
2018/02/14 11:30:36 *** Taking action ***
Summary
This post, part of an ongoing series, starts coverage of programmatic interaction with the API server using the official Kubernetes client framework for the Go programming language. As before, the code walks through implementing a CLI tool to watch the total PVC sizes for a given namespace. The code uses a simple watch to emit resource events from the server that are processed using an event loop.
What’s next
From this point on, the series will continue using the client-go framework. In the next write up, we will start exploring the use of the controller pattern to create more robust clients with the client-go tools.
As always, if you find this writeup useful, please let me know by clicking on the clapping hands 👏 icon to recommend this post.
References
Series table of content
Code — https://github.com/vladimirvivien/k8s-client-examples
Kubernetes clients — https://kubernetes.io/docs/reference/client-libraries/
Client-go — https://github.com/kubernetes/client-go
Kubernetes API reference — https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/ | https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-4-using-go-b1d0e3c1c899 | ['Vladimir Vivien'] | 2018-03-30 21:28:09.193000+00:00 | ['Kubernetes', 'Programming', 'Golang'] |
Tracking Brands: Emotions will tell us more than sentiments ever could about how consumers perceive brands (case in point Volkswagen) | Tracking Brands: Emotions will tell us more than sentiments ever could about how consumers perceive brands (case in point Volkswagen) Karan Verma May 22, 2017
If you were responsible for spearheading the communications mandate for Volkswagen after the dieselgate scandal then you’d be wondering some of these questions,
How does social media sentiment change as a consequence of a public relations crisis?
How does the public react to recovery efforts initiated by the company?
How do topics of conversation shift as a consequence of a brand scandal and subsequent recovery efforts?
And some of the input that your marketing teams will get from social media could be comments (sample only) of this nature.
“how could VW lie for years and not care about the environment, this is horrendous!” — Owner of a Passat since 4 years. “It’s just so sad to know that unknowingly there was so much harm done to the environment.” — Owner of a Polo since 8 years. “It is nice to see that they’re apologising now and owning up to their wrongdoing” — a potential customer.
Imagine the task is to create the first ad campaign for Volkswagen after the dieselgate scandal, what kind of analysis of the above listed comments be of more use to you.
Sentiment Analysis: which will tag two comments as negative and one as positive.
Or
Emotion Perception Analysis: which will tell you that customers are angry; sad and guilty; and accepting of the apologies.
No points for guessing it’s the second, emotion perception analysis. So why does the industry just stick to sentiment analysis, you may ask, well the answer is, it’s only a matter of time. All marketing communications teams would prefer emotions over sentiments but as of now it is more time consuming and has more manual work involved. But with natural language processing systems gaining pace it is going to be a reality soon.
Most people express their feelings by talking about their favourite aspects of a product/service. These emotions do not get captured by the binary approach of sentiments (positive or negative). In many cases a hatred towards an aspect of the product could express extensive use of the product. Which would typically be tagged as negative in the sentiment based approach.
There has to be another way to glean the finer nuances of these emotions and their intensity by analysing both the conversation and the reason it was mentioned. In fact, it's not very often that people voice an outright rejection or acceptance. Human emotion is usually complex and needs to be treated as such.
The task ahead is to just train machines to learn our analysis and repeat it over bigger data sets. Just like we trained machines to analyse sentiment, it is time for us to train the same machines to analyse emotions.
Picture this, “Arrghhh! How can <insert brand name> do this? Why can’t I get a refund?”. As a human we can understand the context and conclude that the customer is angry because of some issue with the product/service. However, according to current systems this is probably a neutral sentiment tagged comment. Which is not enough since it does not grab the actual emotion behind the comment.
We have to do better. The only way this is possible is to start feeding the system with human analysis of the comments and overtime the system would know how to tag the comments for emotions too. | https://medium.com/drizzlin/tracking-brands-emotions-will-tell-us-more-than-sentiments-ever-could-about-how-consumers-perceive-91a27acbfd6 | ['Karan Verma'] | 2017-05-22 09:20:18.335000+00:00 | ['Sentiment Analysis', 'Volkswagen', 'Online Reputation', 'Big Data', 'Brand Track'] |
As A Woman, I Am Tired Of Being Torn Down | Photo by Chema Photo on Unsplash
As A Woman, I Am Tired Of Being Torn Down!
Let’s lift each other up, and not bash each other.
There are so many things people disagree on everyday. Everyone is entitled to their own opinion.
Can’t we just be kind to each other? Why do people feel the need to shame others? It’s not like everyone is always going around asking for an opinion, so why must we shove our opinions down someone else’s throat?
A lot of the shame is women on women. I think as women we should stand together, help each other up, and not beat each other down. We are all fighting our own battles. I think it is time that we all come together to make the world a better place.
The world is bad enough as it is.
Men and women can be friends!
This one is really annoying to me. I have had male friends in the past and just because I am a woman does not mean we have slept together or we are going to. It does not mean we are in love, and it does not mean I have any desire to be with him.
I can’t tell you how many times a woman has called me or sent me a Facebook message about their boyfriend, husband, or whatever.
This gets really old, especially when you’re trying to build a career and you’re actively growing your network.
Yes, I am friends with your husband on Facebook. Why are you cursing me out because he liked three of my posts? How old are we? Or even better, Do you realize I live on the other side of the country? How big do you think his penis is? Does it reach that far? I didn’t think so!
I should not be shamed for my choices as a mother
I have three kids. I should not be told that my son was too old to still be breastfeeding.
In fact, you should not have anything to say about me breastfeeding or formula feeding. You should trust that I, as a mother, will make the best possible decision for all of my kids.
Women, in public or not, should not be shamed for the way they choose to feed their kids. It is none of your business. If you do not like it, don’t look.
I formula fed my oldest two and I breastfed my youngest.
When you formula feed your kids it’s bad because breast is best, but when you breastfeed your kids it’s gross and you need to cover up. No matter what people are going to have something to say.
Yes, I stay at home with my kids. I am doing this as a choice. It should not matter if I work or not. I do not have to go get a "real" job to make you happy. You are not the one paying my bills.
I hate that women who stay at home get shamed for not bringing money into the home and women who do work get shamed for not spending enough time with their kids and for not taking care of their home.
Is it your house? Do you live there? Why does it matter anyway?
I should not be shamed for what I wear.
I am old enough to know as a woman, if you wear certain things you will get a certain kind of attention.
They say you are asking for it. How am I asking for it by being comfortable in my own skin?
I know as a woman, we are expected to look a certain way. Don’t gain weight. Wear this makeup to look prettier or younger or wear this to make you look thinner.
I do not wear makeup as a choice. I get shamed for that too. People say I do not get ready enough and I need to try harder.
I bet if I tried harder and wore makeup someone would accuse me of trying to sleep with their significant other.
I was sexually assaulted in high school on a regular basis. I was told not to wear this or that. I wore sweat pants and hoodies damn near everyday and it still happened. Did they ever do anything to the boys? No, they did not. It is just not right.
I am very cautious about what I wear now, not because of men but for the simple fact that I have a teenage daughter. I make a point to wear clothes that reasonably cover me.
I don’t want her to think it’s okay for some men to act like they do, but I do want her to stay safe and dress appropriately at the same time.
I don’t do anything in public that I wouldn’t want her to do.
I am trying to lead by example.
I should not have to shame my daughter for what she wears.
Why is it that the girl’s dress code at my daughter’s school is two pages long, while the boy’s dress code is only one, maybe two paragraph(s)?
Why do I have to tell her that if you wear this they might think you are asking for it, like it’s a free pass to mistreat you or touch you?
On what planet is this okay? I should not have to fear for my daughter every day.
In conclusion
I understand that everyone is entitled to their own opinion. I just wish the opinion was not geared toward each other as much.
Why do we have to be against each other? Isn’t the world negative enough as it is?
We should be lifting each other up as a whole. I’m not just talking about men, or just talking about women, but everyone needs to do their part to make the world a better place. | https://medium.com/age-of-awareness/as-a-woman-i-am-tired-of-being-torn-down-46cbc6a94310 | ['Angela Welch'] | 2020-09-06 11:41:37.563000+00:00 | ['Self-awareness', 'Womanhood', 'Women', 'Self', 'Empowerment'] |
The Infuriating Truths behind France’s “Work to Live” Mentality | Years before moving abroad, I’d heard mythical tales about how lazy the French are:
They’re always on strike
They never work more than 35 hours in a week
They take coffee breaks every 15 minutes at work, for at least 30 minutes at a time
Then I moved to France and started an Executive MBA program where I was the only American in the class.
I walked in the door after my first day of class and my loving wife inquired “How was school today?”
“They sure take a lot of coffee breaks,” was my kneejerk response.
Four weeks vacation… only?
Finding myself surrounded by Franco slackers, I decided to take advantage and begin poking around to see if there was any truth to the lazy faire (translation: a play on the words laissez faire — which the French love).
The research experiment all started with a friendly exchange with a buddy back home in The States who was planning to come to visit us with his family.
As is often the case with ex-pats that move abroad to non-war torn countries, there is a tendency for friends, family, and random acquaintances to begin planning visits.
Most of the time, such visits are welcome, occasionally they’re just plain awkward. Given this buddy was a best friend, his visit would be more than welcome. The only thing standing in his way was time, more precisely vacation time.
So, as I’m working away one afternoon, I get a notification via Google Hangouts.
“Big news!!!” was all he said — more than effective bait to pull me away from whatever I was working on.
My buddy next informed me that he’s just negotiated additional PTO (paid time off) with his employer.
“Awesome!!!” Was my genuinely enthusiastic response, “How much do you have now?”
Knowing that I’m now living in the Land of the Free Time, my buddy immediately reigned in my expectations:
“First, know this is the most PTO I’ll ever get, at least with this employer,” he prefaced. “There’s no echelon higher than where I am now.”
“Okay,” I respond, genuinely impressed.
This sounded promising. I knew, after all, that he was a rising star in his company, but I had no idea he was talented enough to merit demigod-status. I brace myself, anticipating the unthinkable.
“Four weeks,” he reported.
Normally I would have considered four weeks an impressive number of vacation days, but for some reason, the number thudded unspectacularly in front of me.
Disturbed by my lackluster response to my buddy’s big announcement, I wondered what had changed in me. It’s not like I had any more vacation time than before.
True, I’m working for an American firm that offers “unlimited vacation time,” but what that translated into was working with my American colleagues on French holidays and working with my European counterparts on American holidays.
It’s hardly what I would have called the supposed French ideal of lazy-faire.
The French Response
The revelation of what has changed begins to take form less than twenty-four hours later, as I’m breaking French bread with a Parisian client over lunch.
“It’s just like August in the office,” she offhandedly remarked. “Nobody’s around and it’s impossible to get anything done.”
“Excuse-moi?” Marveling that she can refer to an entire month to visually represent an empty office, I inquired further.
She explained that French law requires its citizens to use their paid time off, otherwise, it’s lost forever.
“In rare cases, people can negotiate a carry-over,” she explained. “Most people, however, just have to use it up.”
Fortunately, using PTO for the French is as popular a national pastime as not taking PTO in the States.
Just how popular? Check out the nifty graphic below…
According to Statista, “American workers get a raw deal on vacation compared to other developed countries. The U.S. remains the only advanced economy that does not guarantee paid vacation while a statutory minimum is very much the norm everywhere else. …It might then come as a surprise to hear that U.S. workers managed to waste 768 million paid vacation days last year despite their miserable vacation allowance. That’s also a 9 percent increase on the amount wasted in 2017.”
Digging in a little deeper, I began finding differing reports on France’s official minimum annual leave ranging between 25 and 30 days and between 1 and 10 paid holidays.
Seeing as 26 to 40 days of mandatory vacation time seems mythical to my American state-of-mind, I ask my French client her take.
“It’s true,” she confirmed. “Many people have more than five weeks. I have a good friend for example that accumulates an additional two days per month.”
Doing the math quickly, that comes out to about 10 weeks of PTO per year.
I almost choked on the crusted sugar top of my creme brûlée. “Ten weeks of paid vacation?!” That’s two and a half months, which when you add in public holidays means that my French friend’s friend is only required to work 3/4 of the year.
This was more than mythical, it’s outright absurdity ringing in my “Made in the USA” ears.
This might be because, of the 35 member countries that form the OECD (The Organisation for Economic Co-operation and Development), which represents the far majority of the world’s most advanced economies, the United States is the only nation that does not guarantee its workers paid vacation.
This means that my best friend, with his MBA and C-level office accommodations, is guaranteed 30 times less paid vacation than the entry-level cashier at my village boulangerie in France.
It’s nuts, but true.
I checked with her just to be sure. She was a student that worked Fridays, Saturdays, and Sundays mornings yet was still guaranteed five weeks paid time off.
Unsurprisingly, this type of discrepancy has a major impact on work-life balance. France, with its “Oh la la!” PTO perks, offers its workers a “Top 3” work-life balance ranking within the OECD:
The United States, on the other hand, manages to pull up the backend of the index, sharing a “Worst 10” ranking with countries like Turkey, Mexico, and Latvia (notice Latvia scores higher, BTW):
With the ranking comes a host of other nasty side-effects, as noted by Jeffrey Pfeffer, a professor of organizational behavior at Stanford Graduate School of Business, in his new book Dying for a Paycheck:
“So many of these workplace practices, like work-family conflict and long work hours, are as harmful to health as secondhand smoke, a known and regulated carcinogen… We found that they account for about 120,000 excess deaths a year in the United States, which would make the workplace the fifth leading cause of death and costs about $190 billion dollars in excess health costs a year.”
It’s incredible to imagine that poor workplace policies can result in premature death, but that’s only because in the United States we’ve yet as a culture to develop a clear link between working days, stress, and longevity.
This is by no means a clear example of why the French model is universally superior.
It does, however, point to a willingness to focus on the well-being of the individual over the well-being of the organization, which is something I doubt will take hold in the States any time soon.
In the meantime, if you’re like my buddy Jim, you might consider taking an entry-level cashier position in my French village’s boulangerie.
Here’s my free guide to having a traveler’s mindset even when you’re at home | https://medium.com/mindtrip/the-infuriating-truths-behind-frances-work-to-live-mentality-a80fd13f2e4c | ['Dave Smurthwaite'] | 2020-02-19 17:09:30.815000+00:00 | ['Work Life Balance', 'Self', 'Vacation', 'Travel', 'Productivity'] |
Monitoring a server cluster using Grafana and InfluxDB | When a HTTP request is received, the load balancer will proxy (i.e forward) the request to the appropriate node to ensure that the load is equally divided among the cluster. Load balancers use different kind of techniques to decide which node to send to, but in our case, we will use an unweighted Round Robin configuration : a request will be sent to the first server, then the second and so on. No preference will be made regarding the node to choose.
Now that we have defined all the technical terms, we can start to implement our monitoring system.
II — Choosing The Right Stack
For this project, I will be using a Xubuntu 18.04 machine with a standard kernel.
In order to monitor our server cluster, we need to choose the right tools. For the real-time visualization, we are going to use Grafana v6.1.1
For monitoring, I have chosen InfluxDB 1.7 as the datasource for Grafana, as it is a reliable and well-supported option. To collect metrics for InfluxDB, I have chosen Telegraf, the plugin-driven server agent created by InfluxData. This tutorial does not cover the installation of the tools presented ahead, as their respective documentation explains it well enough.
Note : make sure that you are using the same versions as the ones used in this tutorial. Such tools are prone to frequent changes and may alter the validity of this article.
III — Setting Up A Simple HA Cluster
In order to monitor our HA cluster, we are going to build a simple version of it using NGINX v1.14.0 and 3 Node HTTP server instances. As shown in the diagram above, NGINX will be configured as a load balancer, proxying the requests to our Node instances.
If you already have a HA cluster setup on your infrastructure, feel free to skip this part.
a — Setting NGINX as a load balancer
NGINX is configured to run on port 80, running the default configuration, and proxying requests to services located on port 5000, 5001 and 5002.
NGINX configured as a simple load balancer
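A minimal configuration matching the description above might look like the following sketch — the upstream name and the access restrictions on the status endpoint are assumptions:

```nginx
upstream node_cluster {
    # Unweighted round robin across the three Node instances
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_cluster;
    }

    # Status endpoint scraped by Telegraf's NGINX input plugin
    location /nginx_status {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }
}
```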
Note the /nginx_status part of the server configuration. It is important not to miss it as it will be used by Telegraf to retrieve NGINX metrics later.
b — Setting simple Node HTTP instances.
For this part, I used a very simple Node HTTP server, built with Node's native http module and the httpdispatcher library.
A simple HTTP server written in Node
This server does not provide any special capabilities but it will be used as a basic web server for NGINX to proxy requests to.
In order to launch three instances of those web servers, I am using pm2 : the process manager utility for Node instances on Linux systems.
Now that NGINX is up and ready, let’s launch our three instances by running :
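Assuming the server file is named server.js (an assumption; use your own file name), launching one instance with pm2 might look like:

```shell
# Start one instance, passing the port through the environment
PORT=5000 pm2 start server.js --name node-5000
```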
Doing this to the two other instances of Node servers, we have a cluster of three Node nodes up and ready.
Our three node are up and running!
IV — Setting Up Telegraf For Monitoring
Now that our HA cluster is built and running, we need to setup Telegraf to bind to the different components of our architecture.
Telegraf will be monitoring our cluster using two different plugins :
NGINX plugin : used to retrieve metrics for NGINX servers such as the number of requests, as well as the waiting / active or handled requests on our load balancing server.
: used to retrieve metrics for NGINX servers such as the number of requests, as well as the / or requests on our load balancing server. HTTP_Response : used to periodically retrieve the response time of each node, as well as the HTTP code associated with the request. This plugin will be very useful for us to monitor peaks on our nodes as well as node crashes that may happen.
Before starting, make sure that telegraf is running with the following command : sudo systemctl status telegraf . If your service is marked as Active , you are ready to go!
Head to Telegraf's default configuration location ( /etc/telegraf ), edit the telegraf.conf file, and add the following input configurations to it.
Configuration for the NGINX plugin of Telegraf
Configuration for each individual node
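A sketch of what those two configurations might contain follows — the URLs mirror the NGINX status endpoint and the three Node ports described earlier, and the timeout values are assumptions:

```toml
# NGINX input plugin: scrapes the stub_status endpoint
[[inputs.nginx]]
  urls = ["http://localhost/nginx_status"]
  response_timeout = "5s"

# HTTP response plugin: one entry per Node instance
[[inputs.http_response]]
  address = "http://localhost:5000"
  response_timeout = "5s"
  method = "GET"

[[inputs.http_response]]
  address = "http://localhost:5001"
  response_timeout = "5s"
  method = "GET"

[[inputs.http_response]]
  address = "http://localhost:5002"
  response_timeout = "5s"
  method = "GET"
```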
When you're done modifying the configuration of Telegraf, make sure to restart the service for the modifications to be taken into account ( sudo systemctl restart telegraf ).
Once Telegraf is running, it should start sending periodically metrics to InfluxDB (running on port 8086) in the telegraf database, creating a metric called by the name of the plugin running it. (so either nginx or http_response ).
If such databases and measurements are not created on your InfluxDB instance, make sure that you don’t have any configuration problems and that the telegraf service is correctly running on the machine.
Now that our different tools are running, let’s have a look at our final architecture before jumping to Grafana. | https://medium.com/schkn/monitoring-a-server-cluster-using-grafana-and-influxdb-d5ff5f7151b2 | ['Antoine Solnichkin'] | 2019-04-08 18:08:01.832000+00:00 | ['Dashboard', 'Software Development', 'DevOps', 'Data Visualization', 'Programming'] |
“Amazon has tried to maximize profit at the expense of its own workers’ safety, skimping disastrously on workplace protections and paid time off.” | In April, OneZero’s Brian Merchant made the case that the pandemic put an end to the Amazon debate: Considering a long list of abuses at the company, he argues, shopping on its platform is unethical.
It’s a case worth keeping in mind during Amazon Prime Day, the company’s annual shopping holiday — actually two days long — that runs on October 13 and 14 after being pushed back from July this year.
Merchant writes that as Amazon stock hit record highs during the pandemic, the company slashed its affiliate sales fees, failed to inform workers of the risks of the virus, made decisions that hurt vendors, and fought its workers’ efforts to improve their working conditions — all on top of its already long record as a bad civic actor.
A Short Guide on How to Create Glassmorphic Elements in Pure CSS

Glassmorphism — The CSS Way
Glassmorphism is pretty easy for front-end developers to achieve. There is one main CSS property that we can use: backdrop-filter. This property allows you to apply effects such as blur, sepia, and grayscale to the area behind your component. Since it applies to everything behind the component, the element itself must be at least partially transparent for the effect to be visible.
To create the glassmorphism effect, you should use backdrop-filter: blur() .
<div class="basic">
  <div class="blur"></div>
</div>

.basic {
  width: 200px;
  height: 200px;
  background: rgba(255,255,255,0.4);
  position: relative;
}

.blur {
  position: absolute;
  bottom: 25px;
  right: 162px;
  width: 200px;
  height: 200px;
  background: rgba(255,255,255,0.4);
  backdrop-filter: blur(5px);
}
Basic component
The element in the back has a plain background: rgba(255,255,255,0.4). The element on top is a copy of the first one, but with an additional backdrop-filter: blur(5px) property.
This is the simplest example of a new trend. But we can go even further. You can add, as recommended by Michał Malewicz, a border radius, white border, and a little bit more blur.
The last thing you can try is to add a 1px inner border with some transparency to your shape. It simulates the glass edge and can make the shape stand out more from the background.
<div class="basic">
  <div class="blur"></div>
</div>

.basic {
  width: 200px;
  height: 200px;
  background: rgba(255,255,255,0.4);
  position: relative;
}

.blur {
  position: absolute;
  bottom: 25px;
  right: 162px;
  width: 200px;
  height: 200px;
  background: rgba(255,255,255,0.4);
  backdrop-filter: blur(10px);
  border-radius: 10px;
  border: 1px solid rgba(255,255,255,0.2);
}
Parallel Asynchronous API Call in Python

Here, we will implement an asynchronous program that executes 15 instances in parallel rather than waiting for each instance to be completed.
Let’s jump to the implementation.
First, we will see how the synchronous program behaves.
Sync Program Output
Here we are calling an API 15 times, one by one. Each API call starts only after the previous one finishes. If you look at the output, it takes 16.67 seconds to complete the 15 API calls. This is the synchronous approach, where the next task starts only when the previous one ends. This kind of approach can be very costly in a real-world project.
Now we will implement the same task using the async approach, making all 15 API calls in parallel.
Async Program Output
The total time taken to complete all the calls is 2.45 secs.
The main advantage of using parallel execution is that code runs more efficiently and saves time.
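The code gists embedded in the source article are not shown above; here is a minimal stdlib-only sketch of the same pattern. It uses asyncio.gather, with a simulated 0.2-second delay standing in for the real HTTP request (the endpoint and timings from the article are replaced here by a sleep):

```python
import asyncio
import time

async def call_api(i: int) -> str:
    # Stand-in for a network-bound API call; replace the sleep
    # with a real async HTTP request in practice.
    await asyncio.sleep(0.2)
    return f"response {i}"

async def main() -> list:
    # Launch all 15 calls at once and wait for them together.
    return await asyncio.gather(*(call_api(i) for i in range(15)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(len(results), round(elapsed, 1))  # 15 results in roughly 0.2 s
```

Run sequentially, the same 15 calls would take about 15 × 0.2 = 3 seconds; gathered, they finish in roughly the time of the slowest single call.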
This is it. If you have any doubts, leave a comment below.
You Do Not Have to Choose a Niche to Be a Good Writer
Just write your mind.
Photo by Nathan Dumlao on Unsplash
Why does it seem such a problem for some when one writer creates different types of content?
Ever since I was young, the idea of having preferences on music, literature, sports, etc., seemed more like an obligation than a choice.
“People define their likes at the age of 14.” — Some unknown source cited.
Well, *Earl*, I like a little bit of everything. I don’t have a favorite artist, musician or author. I simply like the art, the song or the book itself; and it can be any genre. So, what does that make me? A rebellious teen? An outsider? Did I come from Jupiter? (There’s already a lot of Martians out there.)
When searching for the formula to be successful on Medium (like it exists), the advice of choosing a niche would come up a lot. This is not the first time I have been advised to choose and follow a path in writing.
I won’t lie: having a specialization might help you build a loyal audience.
But.
A) You’ll end up finding yourself in a comfortable position that will be hard to get out of.
B) Once you branch out and write about something other than what your readers expect, the audience that you worked so hard to build will slowly start to vanish.
Writing about something other than my “niche” makes me anxious
I started freelancing four years ago, but I would only apply to creative writing jobs about romance. I did not think I was good at anything other than writing romance. That is what I wrote about on a daily basis. When I was offered jobs outside my comfort zone, I would not think twice before rejecting them.
In my mind, my specialty was romance. Because that was my niche, I considered myself an expert on it, but only on it. Just the thought of writing about anything else would make me nervous. When trying to put words together to create another kind of content, I would freeze.
To this day, I still overthink when posting content that my readers are not used to seeing from me. I usually write about mental health, and I feel they trust my articles more when that is the topic.
Having a niche could be an inhibition
I have put off creating a blog because of this mindset.
For a long time, I was looking for my specialty. I am good at writing romance, so that is all I would write about.
But then I started writing about mental health, and even though I am not an expert, I was good at making people feel better by sharing my stories, and I got stuck in it.
With time, I came to realize that I am also good at writing about politics, relationships, or even football.
I should be able to speak my mind without fearing people’s reaction to my content, just because it is not what they thought it was my niche.
I should be able to still sound professional even though I usually do not write about a certain subject.
The quality of my content should not be measured based on whether it is inside my area of specialty or not.
Allow yourself to explore your own thoughts
In today’s world, the notion that you have to be objective with what you want to do, especially at such a young age, obstructs the wonders that come with exploring your mind.
We lose so much because we fear what society will say if we are a lawyer and a dancer at the same time.
We should be able to explore ourselves further. For own happiness, our own well-being.
Let’s write about what we want. Write about what makes us feel good. Write about what goes through our mind.
Maybe I’m ignoring good advice. But that only helps me realize that I am writing for myself first, and then for others… | https://medium.com/the-4-elements-of-change/you-do-not-have-to-choose-a-niche-to-be-a-good-writer-18a4b9052846 | ['Rita Alexandra'] | 2019-08-06 20:28:32.469000+00:00 | ['Writing Life', 'Opinion', 'Writing', 'Writing Tips', 'Advice and Opinion'] |
Invest In Happiness
One way — 4 LINES
Photo by Denise Jones on Unsplash
Invest in the happiest mind
You have one way to adore yourself
Positive and loving thoughts
It can be your perfect antidote | https://medium.com/blueinsight/invest-in-happiness-c6dae0a1f7c3 | ['Alexandra Androne'] | 2020-12-14 12:03:59.531000+00:00 | ['Poetry', 'Motivation', 'Blue Insights', 'Happiness', 'Inspiration'] |
Event Recap: “Better Data, Better Tomorrow”

Any time we order a latte at the coffee shop, tap our phones to ride transit, or hail a car through a ride-sharing app, we are creating data. Isolated, these bits of information are useless. But the aggregation of millions of these tiny data points can paint an incredibly detailed picture of what people need and want, and when and where they want it. That is the core of location data analytics. Unleashing its power, and placing it in the hands of organisations, is Quadrant’s mission.
To explore the power and promise of location data, the Quadrant team hosted its first meetup of 2019 with the presentation “Better Data, Better Tomorrow” at its headquarters in Singapore. During the two-hour event, our CEO Mike Davie and Big Data & Blockchain Engineer Sharique Azam shared their insights on how location data analytics work, offering some real-world use cases of its vast potential across industries.
In case you missed it, we have selected some highlights of the event here.
You can watch the Facebook Live broadcast here. If you want to stay updated about our upcoming events, subscribe to our newsletter and join our Telegram community.
Better data, better insights
Mike started off the evening by recalling one of the first lessons he learned launching DataStreamX, the first high-quality real-time data platform, and working with thousands of data buyers and sellers.
Mike Davie speaking to the audience.
“What we found when transacting all these data between organizations was that the data space is murky. As soon as you start making money off data … there are always people trying to find out how to game the system,”
Mike Davie, CEO of Quadrant
DataStreamX went from transacting 5 billion data records per month in June 2017 to 57 billion processed in November 2018, as Mike explained. But there were still important problems to solve. How do you avoid having different data sellers putting the same datasets on the platform? How do you protect the integrity of data feeds? How do you ensure data has not been tampered with or altered? What was needed was a solution to trace the authenticity of every bit of data. Because the insights of any data analysis are only as good as the data that feed the analysis.
The answers to these questions are the genesis of Quadrant. We recognised the potential of the blockchain to verify and map data in a way that was not previously possible, offering solutions to the problems described above. That’s how Quadrant was born. Launched last year, our protocol is capable of verifying and mapping disparate data sets to bring transparency and trust to the data industry.
Why location data?
Why has Quadrant decided to focus on location data? As Mike told the audience, this sort of information, if properly gathered and analysed, offers the power to enhance services, address challenges and plan the future in almost every industry, from healthcare services to retailers to telecoms. Location data is produced all the time by each of the billions of mobile devices active globally, offering an enormous pool of data from which to draw insights. Its potential is almost limitless, and Quadrant is the tool that can unlock it.
Mike presenting the Uses of Location Data
Mike cited one of the most visible companies on earth in explaining the uses of location data, which include identifying new consumer and market segments, designing effective marketing strategies, improving customer service and managing risks. McDonald’s, the global fast food chain, was able to judge the effectiveness of every billboard promoting the Big Mac’s anniversary. It did this by analysing the pattern of people driving by the ad and seeing how many of them subsequently ordered a hamburger.
“Before location data analytics, we were just guessing,”
Mike Davie
Better Data, Better Tomorrow
The final presentation of the night was given by Quadrant’s Big Data and Blockchain Engineer Sharique Azam. Sharique provided a case study that offers an example of how Quadrant’s data platform functions and provides benefits in the real world. The customer in this case needed authentic, high-quality location data for their products and services. However, this was easier said than done. In order to achieve this goal, the customer needed to ensure the provenance and veracity of all the data it used, and to do this with information at a large enough scale to make the resulting insights meaningful.
Big Data and Blockchain Engineer Sharique Azam presenting a Use Case of Quadrant
Sharique then moved on to explain how Quadrant works for customers, walking through an overview and demonstrating a few steps for data stamping and verification on the Quadrant platform. If you have not done so already, we invite you to take a look through the presentation for detailed technical information via the Facebook Live video.
Sharique’s presentation demonstrated, through a case study, how Quadrant has expanded its capabilities and how it is able to meet the needs of customers. The technical analysis provided a more detailed explanation of the nuts and bolts of the Quadrant platform. Sharique drew a clear line from where we started, through how Quadrant has grown and matured as a technical offering for enterprises, to where we are headed as the leading platform for data analysis and authenticity.
The First of Many
Overall, it was a positive event that underscored and highlighted the ways Quadrant is building a leading data platform for enterprise customers. We want to thank everyone who attended the meetup and invite anyone who couldn’t make it to have a look through our video recap and the event presentation. If you weren’t able to make it, don’t worry — we have the presentation material available for download here.
We will be holding more events in the future. We hope to see you at upcoming events. | https://medium.com/quadrantprotocol/event-recap-better-data-better-tomorrow-350e7f987cb0 | ['Nikos', 'Quadrant Protocol'] | 2019-01-22 13:26:33.938000+00:00 | ['Analytics', 'Blockchain', 'Singapore', 'Big Data', 'Location Data'] |
5 years of TicketSwap
Fighting fraud but especially: creating a community of fans
2012

As festival-goers, we weren’t so pleased with the current state of buying and selling tickets.
It all started when Hans Ober (aged 24 back then, and still a student) had a spare ticket for the Dutch festival, Lowlands. He scoured the internet for a safe and convenient way to sell it, but there wasn’t anything out there. In the end, he took the risk and listed the ticket on Marktplaats (the Dutch Craigslist). As is customary on Marktplaats, the buyer came over to his apartment to buy and pick-up the ticket, so already had access to a worrying amount of personal information (name, address, phone number etc.). Hans reluctantly handed over the tickets, to which the buyer responded: “Ok thanks! I’ll transfer the money somewhere next week!” 🙊
Hans: “That didn’t really feel good. In the end it worked out, but in times where online banking and exchanging PDF files was already possible, there must be a much more convenient way. Right?”
Hans shared his idea with Ruud Kamphuis (25 back then) and Frank Roor (26 back then), who had the technical know-how to build a website. Frank worked on the designs, Ruud thought about the technical architecture and Hans asked around to see whether others shared the same enthusiasm for the idea. There was one thing that was set in stone from day one: this website would be easy, fair and transparent. And above all, unique to the market, as it would only be accessible to real fans.
Frank: “To keep the story short: In 2012, the first version of TicketSwap was live. First, only friends and family used it. But then suddenly, on December 22nd, it happened… the first conversion of a ‘stranger’ was a fact.”
The feedback of TicketSwap’s first users couldn’t have been more positive. It worked! Event-goers could sell their tickets with a profit of max 20% on top of the original ticket price. The team reasoned that this was enough to cover added costs (like service and transaction fees) but too little to be enticing to traders. From then on, every new user coming on to use TicketSwap meant another step closer to a safe, fair, and easy secondary ticketing world. | https://medium.com/ticketswap/5-years-of-ticketswap-b175f7c2624e | [] | 2018-07-25 09:12:43.623000+00:00 | ['Growth', 'News', 'Startup', 'Timeline'] |
Bible: Not the Best Schoolbook
By The Nib
Linear Regression using Python

Meaning of Regression
Regression attempts to predict a dependent variable (usually denoted by Y) from a series of other changing variables (known as independent variables, usually denoted by X).
Linear Regression
Linear Regression is a way of predicting a response Y on the basis of a single predictor variable X. It is assumed that there is approximately a linear relationship between X and Y. Mathematically, we can represent this relationship as:
Y ≈ α + βX + ε

where α and β are two unknown constants that represent the intercept and slope terms in the linear model, and ε is the error in the estimation.
Example
Let’s take the simplest possible example. Calculate the regression with only two data points.
Here we have 2 data points represented by two black points. All we are trying to do when we calculate our regression line is draw a line that is as close to every point as possible.
Here, we have a perfectly fitted line because we only have two points. Now, let's consider a case where there are more than two data points.
By applying linear regression we can take multiple X’s and predict the corresponding Y values. This is depicted in the plot below:
Our goal with linear regression is to minimise the vertical distance between all the data points and our line.
So now, I guess you have a basic idea of what Linear Regression aims to achieve.
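That best-fitting line can in fact be computed directly. A small illustrative sketch with made-up sample points (not the housing data used below):

```python
# Closed-form least squares for a single predictor:
#   slope = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)**2)
#   intercept = mean_y - slope * mean_x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.0]

mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # 1.98 0.05
```

This is exactly the line that minimises the summed squared vertical distances described above; scikit-learn does the same computation for us later in the post.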
Python codes
First, let’s import the libraries:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt #Data visualisation libraries
import seaborn as sns
%matplotlib inline
The next step is importing and checking out the data.
USAhousing = pd.read_csv('USA_Housing.csv')
USAhousing.head()
USAhousing.info()
USAhousing.describe()
USAhousing.columns
Here, I have used USA_Housing.csv as the example dataset. It is always a good practice to explore the dataset. Try using your own file and run the above code to get all possible information about the dataset.
Snapshot of the first five records of my dataset
Here, I’m considering Price as the dependent variable and the rest as independent variables. Which means I have to predict the Price given the independent variables.
Now its time to play around with the data and create some visualizations.
sns.pairplot(USAhousing)
The pairs plot builds on two basic figures, the histogram and the scatter plot. The histogram on the diagonal allows us to see the distribution of a single variable while the scatter plots on the upper and lower triangles show the relationship (or lack thereof) between two variables.
sns.distplot(USAhousing['Price'])
A great way to get started exploring a single variable is with the histogram. A histogram divides the variable into bins, counts the data points in each bin, and shows the bins on the x-axis and the counts on the y-axis.
Correlation
The correlation coefficient, or simply the correlation, is an index that ranges from -1 to 1. When the value is near zero, there is no linear relationship. As the correlation gets closer to plus or minus one, the relationship is stronger. A value of one (or negative one) indicates a perfect linear relationship between two variables.
Let’s find the correlation between the variables in the dataset.
USAhousing.corr()
And now, let’s plot the correlation using a heatmap:
The black colour represents that there is no linear relationship between the two variables. A lighter shade shows that the relationship between the variables is more linear.
Coefficient of determination
The coefficient of determination, R², is the fraction (percentage) of variation in the response variable Y that is explainable by the predictor variable X. It ranges from 0 (no predictability) to 1 (or 100%, complete predictability). A high R² indicates that the response variable can be predicted with less error.
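To make the definition concrete, R² can be computed by hand as 1 minus the ratio of the residual sum of squares to the total sum of squares. A tiny sketch with made-up numbers:

```python
# R^2 = 1 - (residual sum of squares / total sum of squares)
y_true = [3.0, 5.0, 7.0, 9.0]   # observed values
y_pred = [2.8, 5.1, 7.2, 8.9]   # values predicted by some fitted line

mean_y = sum(y_true) / len(y_true)
ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # 0.995 — the fit explains 99.5% of the variation
```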
Training a Linear Regression Model
Let’s now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only has text info that the linear regression model can’t use.
X = USAhousing[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
'Avg. Area Number of Bedrooms', 'Area Population']]
y = USAhousing['Price']
Train Test Split
Our goal is to create a model that generalises well to new data; our test set serves as a proxy for new data. The training data is the data on which we fit the linear regression algorithm, and finally we evaluate that model on the test data. The code for splitting is as follows:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
From the above code snippet we can infer that 40% of the data goes to the test data and the rest remains in the training set.
Creating and Training the Model
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,y_train)
The above code fits the linear regression model on the training data.
Predictions from our Model
Let’s grab predictions off the test set and see how well it did!
predictions = lm.predict(X_test)
Let’s visualise the prediction
plt.scatter(y_test,predictions)
A pretty good job has been done, a linear model has been obtained! | https://medium.com/analytics-vidhya/linear-regression-using-python-ce21aa90ade6 | ['Surya Remanan'] | 2019-07-23 07:16:26.104000+00:00 | ['Python Programming', 'Machine Learning', 'Linear Regression', 'Python', 'Data Science'] |
Machine Learning Concept behind Linear Regression
Applying Simple ML model on CO2 Prediction
Photo by Alexander Tsang on Unsplash
Introduction
In recent years, there has been a lot of hype around Artificial Intelligence (AI). You can find it almost anywhere, from turning on the lights with your voice to fully autonomous self-driving cars.
Most modern AI requires a lot of data. The more you give it, the better it learns. For instance, to train an AI to recognise an image of a cat, you need to give it many images of cats and non-cats so that it can distinguish between the two.
But how exactly does AI learn from the data?
In this post, we will look at a very simple model to get an idea of how AI learns. We will focus on the amount of global CO2 emission for the past 55 years and attempt to predict its amount in 2030.
CO2 Emission Data
The data we will be using is from WorldBank. Unfortunately, the data is not up to the current date of 2019 (at the time of writing this). It’s from 1960 to 2014, but this will do just fine for this experiment.
Fig 1: Amount of CO2 Emission annually from 1960 to 2014. Source: WorldBank Data
The x-axis corresponds to the year (assuming year 0 is 1960) and the y-axis corresponds to the amount of CO2 emission. Based on the chart, we might be able to make a rough estimation of what the value in 2030 should be.
Suppose we draw a straight line that best fits this data; we could then predict the future by extending this line further and further.
Fig 2: Fit a line and use it to make a rough estimation at year 70 (2030)
Keep in mind that we cannot accurately predict the future. Things might change, but for simplicity, we just assume that the rate of change is constant.
Linear Regression
Let’s bring in a little bit of math. The blue line above might be familiar to you. It’s a simple straight line that is represented by the equation below.
y = mx + b

Uh huh. Now you remember. Again, x is the year and y is the amount of emission. To get the line shown in Figure 2, m = 446,334 and b = 9,297,274. Don’t worry about m and b; I will explain them in detail later.
In case we want to know the value for the year 2030 (year 70 if we count from 1960), we can now use our equation above: y = 446,334 × 70 + 9,297,274 = 40,540,654.
Now as promised, let’s take a close look at what m and b are.
Fig 3: Behavior of m on the line
In Figure 3, as we change the value of m, the line rotates around. Hence, variable m controls the direction of the line. Whereas in Figure 4, the variable b controls the position of the line by moving it up or down.
Fig 4: Behavior of b on the line
Learning Process
With m and b, we can control the line and adjust it to best fit our data.
The question now is how do we find the value of variables m and b? The idea is the following:
1. Randomize the values of m and b.
2. Give the variables to a loss function to determine how bad the line is compared to the data, also known as the error rate.
3. Adjust the values of m and b based on the error rate.
4. Go back to Step 2. Repeat until the variables stop changing.
Loss Function
If our line is very bad, this loss function will give a very big error. At the same time, if the line fits the data well, the error will be small.
Fig 5: The difference between predicted line and actual data
The line has a predicted value y’ for every year. We can compare the predicted value y’ to the actual value y to find the difference, square it, compute it for every year, and take the average. This is known as the Mean Squared Error (MSE).
Fig 6: Closed form of MSE
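Since Figure 6 is an image, here is the same formula in code form, a small sketch with made-up numbers:

```python
def mse(y_true, y_pred):
    # Mean of the squared differences between actual and predicted values.
    return sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / len(y_true)

# Example: errors of 0.5, 0, and -0.5 give (0.25 + 0 + 0.25) / 3, about 0.167
print(mse([1.0, 2.0, 3.0], [1.5, 2.0, 3.5]))
```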
Now we are almost ready to update our variables. There is one small catch: the error rate we found earlier is always positive, so we do not know in which direction to update our line. Should it rotate clockwise or counter-clockwise? That is where Gradient Descent comes in.
Fig 7: Formula to update each variable (in this case, variable m)
In a nutshell, this tells us the direction in which, and how much, each variable affects the error. The more effect a variable has, the more its value should change. Then, we can use this information to update our variables. We won’t dive deep into the derivative, but if you are interested, you can check out this video on Coursera; I find the explanation there quite clear.
Note that alpha (α), also known as the learning rate, controls how much we update our variables at each step. Usually, we set it to a small value, like 0.001, so that the variables move slowly toward their optimal values.
Good news: in practice, we don’t manually do this derivation.
Popular frameworks like Tensorflow or PyTorch will compute this automatically. But I put it here to give a brief idea of how it can know which direction to change the value.
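To make the loop above concrete, here is a pure-Python sketch of steps 1 through 4 on toy data (not the CO2 series); the gradient expressions are the derivatives of the MSE described above:

```python
# Toy gradient descent for y = m*x + b on points near the line y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 8.9]

m, b = 0.0, 0.0        # step 1: start from arbitrary values
alpha = 0.02           # learning rate
n = len(xs)

for _ in range(5000):  # steps 2-4: repeat until the values settle
    # Partial derivatives of the MSE with respect to m and b.
    grad_m = (-2 / n) * sum(x * (y - (m * x + b)) for x, y in zip(xs, ys))
    grad_b = (-2 / n) * sum(y - (m * x + b) for x, y in zip(xs, ys))
    m -= alpha * grad_m
    b -= alpha * grad_b

print(round(m, 2), round(b, 2))  # converges to roughly 2 and 1
```

Each pass nudges m and b a little in the direction that reduces the error, exactly the learning process sketched earlier.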
Implementation
So, do you need to code everything as described above to get linear regression working? Fortunately, many existing libraries simplify everything for you. In this example, we will explore scikit-learn, a machine learning library built for Python, to predict CO2 emission.
With just a few lines of code, scikit-learn makes linear regression very accessible. In the embedded gist, a single line (the fit call) does all the steps needed to train the model. Phew, you were worried for nothing, weren’t you? And this applies not only to linear regression: many other ML algorithms can be implemented in just a few lines of code.
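The embedded gist is not reproduced here; the following is a hedged reconstruction of what those few lines typically look like, with a synthetic stand-in for the (year, emission) pairs (the real gist loaded the WorldBank CO2 series; the points below are placed exactly on the fitted line quoted earlier):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the data: points lying exactly on the line
# y = 446,334 * x + 9,297,274 that the article fits to the real series.
years = np.arange(55).reshape(-1, 1)             # years since 1960
emissions = 446_334 * years.ravel() + 9_297_274

model = LinearRegression()
model.fit(years, emissions)                      # the single line that trains

pred_2030 = model.predict([[70]])[0]             # year 70 = 2030
print(round(model.coef_[0]), round(model.intercept_))
print(round(pred_2030))                          # about 40,540,654
```

Because the synthetic points lie exactly on the line, the fitted slope and intercept recover m and b from the article; with the real data they are estimated from noisy points.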
Conclusion
That’s it. Not so complicated, right (except the derivation part)? Linear regression is a simple method, yet quite effective even though it is just a straight line. In this post, we only looked at the case where there is a single variable (year). This can be extended to handle multiple variables, but that is a topic for a later post.
Contact Tracing for COVID-19 Cannot Work

Herd immunity is the only path, if we believe the math.
One common argument that crops up over and over in the COVID-19 discussion, when it comes to whether and how to come off of government mandated lockdown, is contact tracing. This argument states that we should stay on lockdown until our case numbers are low enough, and our testing capacity is high enough, that we can trace all contacts of people who are confirmed to have the disease, and test them, and then have them quarantine, and then test their contacts, and so on, so we can leave the house and behave normally again. And we only have to sit in quarantine long enough that these two numbers get to the point where our cases are low enough, and our testing capacity is high enough, that we can pivot to this other method of disease management.
I really wish this could work, but it can’t.
Advocates of this argument point to South Korea, which is successfully managing COVID-19 in this manner. What they don’t seem to understand is that South Korea has a lot more tests than cases. What matters for this method is not how many tests you have, nor even how many tests you have per capita, but rather how many tests you have per infected person. South Korea, as of April 21st 2020, issues about 10,000 tests per day when running this method, with only around a tenth of a percent of tests coming back positive of late. They’ve only had around 10,000 total confirmed cases. They’ve tested around 500,000 of their 51 million population, but they had high testing capacity early and could test everyone who had contact with a confirmed case.
How Many Tests Do We Have?
The USA testing effort is plateauing at around 150,000 tests per day.
Only around 10,000 of these per day currently come from public sources and the CDC, the rest are private entities, many of which were prevented from entering the testing effort by the FDA until the middle of March.
Notably, our testing capacity is not climbing to meet that million tests per day number. It’s flattening out. And we are not using our tests in contact tracing. We are using them clinically for people who are symptomatic.
How Many Tests Do We Need?
This is harder to guess at. If our goal is simply to use our tests clinically, then we don’t need too much more than we have now. But if we want to use them in a contact tracing effort, to track down everyone who has this before they end up needing a doctor or infecting someone else, we need many, many more.
If we ballpark it using South Korea as an example, we’d need as many tests per day as we have total confirmed cases over the life of the virus. Around a million tests per day. Seven times as many.
If we presume many current cases aren’t counted, which is undoubtedly true, then that number could be higher. A recent Santa Clara County seroprevalence survey thinks that number may be 50 times higher, but it’s come under attack and may not be a good indicator. If we think they’re only overestimating by a factor of ten, then we’d still need five million tests a day.
If we back our way into the number of tests we need for contact tracing from scratch, as the Harvard Edmond J. Safra Center for Ethics did, we need 30 to 50 million tests per day. They estimate possibly 100 million. That’s an insane number in a country of 340 million people, but that’s the prediction, and the reason why is clear when you unpack what they propose:
Find a positive.
Test everyone they came in contact with in the last six days.
If any of them test positive, repeat.
In that scenario, you may be forced to get a test several times a week even though you exhibit no symptoms purely because you were at the bowling alley with someone else who tested positive. And so on.
So depending on who you believe, we need between seven and three hundred times more tests than we have to be able to do “contact tracing” for COVID-19. And according to one popular narrative, the country must stay in quarantine until this capacity is reached. And our testing capacity isn’t currently climbing.
Wait For The Vaccine!
A vaccine will take between 12 and 18 months.
After around a month and change of our current quarantining measures, we are dumping 5% of our nation’s milk on the ground (a number that is soon to double), destroying over 5% of total egg production, and destroying tens of millions of pounds of fresh food; 25% less beef is being produced, and the CO2 supply for drinking water treatment may be cut to 33% of normal. These are first order effects due to nothing more than supply chain disruptions. More first order effects are sure to materialize and combine into even more severe issues for the critical economic infrastructure of the country. Unemployment aside, and inability to pay for critical medical care aside, and inability to pay for a mortgage aside, we will end up with mortality related to food shortages under any plan to maintain our currently broken economy for a year or more. And that could lead to the 3rd order effect of civil unrest. Waiting for a vaccine is simply not on the table.
Herd Immunity is the Only Way
Many advocates of “flattening the curve” seem, in recent weeks, to have forgotten the whole point of flattening the curve. The point of the exercise is to run the hospitals as close to capacity as possible to ensure a specific subset of infected people get treatment: those who would die without it but would live with it, while making sure that the general population catches this disease at reasonable rates and moves on from it.
If you flatten the curve so flat that nobody gets infected, then you’ve failed to flatten it at all. You’ve just pushed the curve to the right, and it happens after quarantine is lifted. If you flatten the curve so that ICUs run at 50% capacity, then your quarantine lasts twice as long as if ICUs run at 100% capacity. If you know by your medical experience that 50% of the people you take in are going to die no matter what you do anyway, you can triage those (send them home) and run a 100% capacity ICU in a third of the time as the hypothetical 50% capacity case. Your goal is to save the people who can be saved. Anecdotes I’ve read seem to indicate that this decision is difficult to make, because the disease progresses quickly in those who die, and seems to not make that final jump in those who are effectively treated. We may not be able to make truly informed triage decisions unless we do them by age or comorbidity factors, which would be an ugly procedure.
Early discussions, including ones here on HWFO, were about how to get the curve flat enough so that we were reducing triage to acceptable levels. With states approaching COVID-19 death rate peaks without full ICUs, they are only prolonging quarantine by maintaining it without a full ICU. And curiously, given how “flat curves” are “longer curves,” they are prolonging the long term exposure of their healthcare workers to COVID-19 as well, increasing the death rates of healthcare workers in the long term.
This does not mean that the gates should be thrown open without a plan, nor that the plan should stress opening bowling alleys and salons over restaurants. But any state under ICU capacity with a relatively flat deaths per day rate should absolutely ease restrictions, continue to quarantine the old, infirm, or those with comorbidities, let healthier people get sick, and move towards the only possible answer, whether we like that answer or not.
The other answers aren’t answers at all, mathematically speaking. | https://medium.com/handwaving-freakoutery/contact-tracing-for-covid-19-cannot-work-d3f99dddadf1 | ['Bj Campbell'] | 2020-04-21 20:34:07.391000+00:00 | ['Economics', 'Medicine', 'Random', 'Covid 19', 'Coronavirus'] |
What 4,300 hours of meditation has taught me | I started meditating consistently in the summer of 2001. I had just finished developing the first Microsoft Xbox, and I literally sat with my feet on my desk. I should have felt proud and satisfied. I had a beautiful, big, brand-new house in Silicon Valley, and an Audi and a Porsche. I was a multi-millionaire and had more money than I knew what to do with.
Sitting there in my office, I noticed that I couldn’t relax. I felt dissatisfied and anxious, and I had no idea why. Shortly afterwards, at my 27th birthday party, I drunkenly walked from guest to guest asking people what the meaning of life was. One response that stands out in my memory was from a friend named Jing; I’m proud to tell you that Jing created Gmail. Jing said, “Never own a house that’s too big to hold all of your friends.” (see note at end)
Finally, I turned the camera on myself, and somewhere there is a DV tape-recording of me slurring the words, “The purpose of life is to become enlightened. There are fourteen steps to enlightenment. The first step is to realize that you want to be enlightened, and the last step is to become enlightened.”
A random bumper sticker I saw somewhere. I love how scratched and damaged it is.
I have been interested in introspection since I was a kid. I remember at about age seven picking up a ragged book on meditation that was in the stack of books next to our toilet. The book was about Transcendental Meditation. As I read it, I remember thinking that it seemed too complicated. I had the impression that meditation was supposed to be simple, but this book was filled with a very complex philosophy. One of the key points that the book stressed was that one needed a guru. So I went to my mum and asked her where I could find a guru. She suggested looking in the Yellow Pages under “G for guru.” I went and looked, but of course I found nothing.
At around age eight, I had the idea that if I could make one eye look directly into the other eye, and that if I did that for a while, the recursive cycle would cause me to enter an ecstatic state. I searched around the house for pieces of mirror and lenses that I could use to create such a device. I kept stopping and questioning myself: part of me believed that it would work, and another part of me was skeptical and thought that it was a silly idea. Not surprisingly, I didn’t find the materials, and I didn’t have the tools or knowledge to make the device. In various different contexts, the tension between these two parts of myself — the visionary and the skeptic — has resurfaced repeatedly in my life.
During my undergraduate degree, I trained in Tae Kwon Do, and I now understand that the training contained elements that were meditative, including focusing of the mind, and increased embodied awareness. I also learned to juggle, which requires a high level of mental focus and control.
When I graduated and started working, I was clearly searching for spiritual meaning, and I read the Bible from cover to cover while on a business trip to California. I remember forcing myself to read every mind-numbing word about who begat who in the Book of Genesis. I particularly enjoyed the Book of Proverbs, which is a book of wisdom, and parts of the Book of Psalms, which is a book of poetic devotion. I also forced myself to read a thick book on Tibetan Buddhism, which I found to be strangely complex and cognitive. I didn’t find what I was looking for — a practical technique — in either contemporary Christianity or in Tibetan Buddhism.
Nevertheless, I seem to have been meditating. I remember going running at lunchtime in Bristol in the UK. I learned to proactively control my breath by taking long, slow, deep breaths, which prevented me from getting out of breath and needing to pant. In hindsight, I realize that this was a form of breath control meditation, known as pranayama in Sanskrit. “Prana” means energy and “yama” means to control.
I also believe — although I don’t think that there is objective evidence for this yet — that running tends to help integrate trauma, leading to increased balance of the mind. It does this by activating and completing the flight response to traumatic experiences, and also by alternately stimulating the two sides of the thalamus (as in EMDR therapy), which helps to thoroughly integrate and finally store traumatic experiences as memories. Before integration, these traumatic experiences are captured as mostly disjoint neural configurations in different areas of the brain.
After running, I used to sit and meditate on the grass — with the ducks and geese — next to a lake in the business park where I worked. I witnessed the breath in my nostrils for fifteen minutes per day. I don’t understand how I knew to do that, because it’s something I learned later in life. This practice is called ānāpānasati in Pali. “Sati” means mindfulness and “ānāpāna” means inhalation and exhalation.
The meditation room at Esalen Institute in Big Sur, California. I have spent many hours meditating here.
Now let’s travel forward again to 2001, to the completion of Xbox, and to my feet on my desk and anxiety in my mind. Shortly after the birthday party where I drunkenly realized my need for enlightenment, I was at the gym where I worked-out when I saw an advert for a meditation seminar, a seminar which I attended. The seminar was taught by a man-woman couple. The woman taught a breath witnessing technique called hong-sau, which is similar to ānāpānasati except that you mentally say “hong” on the inhalation and “sau” on the exhalation. I found that this immediately calmed my mind. The man talked philosophy, and told us that stress is caused by the difference between the way things are and the way we want them to be. He said that we can reduce stress either by changing the way things are (externally) or by changing ourselves (internally). This made a lot of sense to me.
I started practicing this hong-sau meditation technique for fifteen minutes in the morning and fifteen minutes in the evening. My anxiety decreased and my contentment increased. I was promoted to management at work. My personality changed and I became more authentic and better able to advocate for myself, including becoming aware of when I felt hurt, and becoming more able to express that. This ultimately led to the relationship with my wife breaking down, and to divorce: I changed, and so we became incompatible.
After four months of practicing hong-sau, around Christmas of 2001, I had some life-changing experiences in meditation. My awareness became so focused that it was able to temporarily pierce the veil of delusion and directly experience the fundamental nature of reality. I’ve tried to write about this before, and it’s very hard to explain with words. In a nutshell, there is truly only one thing that exists. This is sometimes called non-duality, and also what some mystics call God. Everything that is experienced is actually an expression of this underlying non-dual reality, except that in our normal experience of duality opposites appear to be separate and contrasting; opposites appear to be in conflict with each other. In reality, everything cancels through time and space into an indescribable perfection, which can always be experienced here and now. This is the experience that my eight-year-old self was anticipating as he dreamed of making the eye-gazing contraption. It may be that the device was a metaphor for the process of awareness becoming aware of itself in a recursive loop.
After these experiences, I discovered that I had a profound and alternative understanding of many scriptural texts, because they often reference this non-dual reality, a reality that can only truly be understood through direct experience. I also found myself confused and trapped in this body, in this reality, unable to return at-will to the direct experience of our true nature. I have been going through a process of integration of these experiences over the past sixteen years.
I discovered the spiritual group that the seminar teachers were part of. They were devotees of Paramahansa Yogananda, one of the main teachers who, in the 1920s, brought meditation to the USA from India. I learned the technique of Kriya Yoga as taught by Yogananda, and I practiced it for up to four hours per day.
I had many intense experiences, which some people might consider spiritual. One time, I was driving home in my Porsche (I think it’s funny that I was driving a Porsche), while witnessing my natural breath. I looked to the side, and saw a piece of litter by the side of the road. Usually, seeing litter would bother me; I would internally fret about the inconsiderate nature of people. Instead, in this instance, I became overwhelmed by the perfection of not just this piece of litter, its qualities and its placement, but by the perfection of everything in the universe. I was instantly not only aware of the whole of reality but also perceiving it all as perfect.
Another time, I woke up inside a dream. In the dream, I was witnessing a galaxy when I realized that I was dreaming and became aware that the galaxy was inside of me. There was an overwhelming sense of enormousness and power. I was in awe of what it means to contain galaxies. To be such a small creature, yet to be made of everything; to contain countless universes. The power of the energy flowing through my spine felt so great that it could rip me to pieces, like I was being flossed by a galactic-sized pipe-cleaner. Reflecting on this now, this experience shares a quality with the fundamental nature of reality: that too is infinitely large, yet is contained within these small beings. Note that it’s one thing for the cognitive mind to try to conceive of infinity, and it’s another thing entirely to experience infinity directly by knowing it inside yourself.
I have a very complex and unusual life-story, which you can read about in other articles, articles already written, and articles that will be written. For now, I’m going to jump to 2011. At this point I had been practicing Kriya Yoga for ten years, and accrued at least 2,000 hours of meditation experience. I had been hearing about the 10-day Vipassana retreats taught by S. N. Goenka, and finally went to my first one.
My wife’s Facebook profile picture while she was on retreat.
Before going to my first 10-day, I thought that I was special, that I was some kind of mystic or guru. I thought that I had some great purpose in this life. Somewhere in the depth of my mind, I thought that I was some kind of messiah, and that my purpose was to somehow help other people. What I discovered on this first 10-day retreat was the depth and breadth of my suffering. I came to understand what it really means to be a human, and I understood experientially how and why I am trapped inside this body. I got a relatively clear perception of the moment-to-moment suffering that my mind inflicted upon itself in its struggle to have reality be other than it is. It was overwhelming to become conscious of the intensity, depth, and persistence of my unconscious mind’s arduous struggle with reality, its rejection of things evaluated as unfavorable and its grasping after things evaluated as favorable. Who am I to think I can help anyone when I am so totally lost in delusion and suffering?
I discovered that I’m just a regular human, and a pretty broken one at that. Like all humans, my mind is broken, but I’ve gradually come to appreciate how beautifully broken it is. This is actually part of the path: to increase equanimity not just for the favorable and unfavorable circumstances by increasing equanimity for the sensations that they invoke in the mind-body, but also to increase equanimity for the non-adaptive nature of the unconditioned mind and how adorably it struggles with reality.
So, by practicing what Buddha taught, I came to understand his first noble truth: that life is suffering. More specifically, the nature of the untrained mind is suffering, and suffering begets more suffering, creating an endless loop of experiencing the delusion of being, from moment-to-moment, in this life, and perpetuating it from life-to-life. Paradoxically, the way out of this trap is to develop equanimity not only for the trap itself, but for our foolishness in staying trapped. Also, paradoxically, freedom from this trap is annihilation of the very entity that seeks freedom.
The reality is that my wife Cindy and I are gurus (teachers), but we’re only gurus to the extent that we embrace our normal humanity, and dedicate our lives to the process of being regular humans and supporting others in doing the same. We are only teachers to the extent that we meditate and practice what we advocate. It’s so easy to fall into the trap of thinking that I’m some kind of perfected guru and then use that as a defense against further learning, growth, integration, and truth. We know of so many so-called teachers or gurus who are, in reality and secretly, far less functional than the average person; it’s the guru defense, and most gurus suffer from it.
I just got back from my fourth ten-day Vipassana retreat. For the first nine days of the retreat, you don’t talk with anyone, acting as if you are there alone, making no eye contact, and not even gesturing to others. On the tenth day, you can talk to others; and a lot of talking happens on day ten. The daily schedule looks like this (I now know this schedule by heart):
4:00 Wake
4:30–6:30 Meditate (2 hours)
6:30–8:00 Breakfast
8:00–11:00 Meditate (3 hours)
11:00–1:00 Lunch
1:00–5:00 Meditate (4 hours)
5:00–6:00 Break (no food for old students)
6:00–9:00 Meditate and discourse from teacher (1.5 hours of meditation)
9:30 Sleep
So you meditate for about 10.5 hours per day, which amounts to over 100 hours during the ten-day retreat. This is industrial-strength meditation training, and is apparently the format that Buddha used to teach Vipassana, and that has been used to teach it for thousands of years since. The process needs to be this intense in order to overcome the enormous amount of momentum we have in our everyday lives. It takes this much effort to become skilled and practiced enough in the technique to be able to bring it home and use it, day-to-day, in everyday life.
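As a quick arithmetic check on the totals quoted above (an illustrative calculation, nothing more):

```python
# Meditation blocks from the daily retreat schedule above, in hours.
daily_blocks = [2.0, 3.0, 4.0, 1.5]   # morning, mid-morning, afternoon, evening sits
hours_per_day = sum(daily_blocks)
retreat_hours = hours_per_day * 10     # ten-day retreat

print(hours_per_day)    # 10.5
print(retreat_hours)    # 105.0 -- "over 100 hours"
```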
I’ve been practicing Vipassana since late 2011, and meditating at home for between one and two hours per day since late 2015. I estimate that I’ve practiced Vipassana for over 2,300 hours. Overall I estimate that I’ve now meditated for over 4,300 hours in my lifetime. Here are some additional things that I have learned from all that meditation:
The only thing we can control is our attention
We can’t control the external circumstances of the world. We can’t control our bodies. We can’t control our thoughts. We can’t control our emotions. Everything that happens to us, and that we do, is the result of our unconscious mind reacting, and to circumstances arising in order to invoke reactions from our unconscious mind. The only thing we have control over is where we place our attention. This is because the only thing that really exists is our attention, and it’s what we are. By directing our attention to the core of our delusion, we can use it to untie the knots of delusion which bind us, and free ourselves from this self-imposed prison of suffering.
We are 100% responsible for our contentment
Whether we are happy or unhappy, content or discontent, is the result of a process inside our minds. The default program that our unconscious mind is running is designed to cause us to suffer. It does this by continually reacting to reality, to the sensations that reality invokes inside our bodies. Not only that, but our unconscious patterning tells us that we are unhappy and suffering, or needing to acquire something, because of external circumstances. There is in fact absolutely zero necessity for suffering, under any circumstances.
We are 100% responsible for our circumstances
Everything we experience is created by our past thoughts. To experience different circumstances, we must think different thoughts. Our thoughts are a result of the purity of our mind, which is a function of how we direct our attention. By skillful direction of attention, we can purify the mind, which will lead to more adaptive thinking, and therefore more favorable circumstances. Meanwhile, paradoxically, as the mind is purified — and retrained to not react to reality — whatever circumstances we find ourselves in are increasingly experienced as optimal.
Everyone else is suffering too
As we come to experientially understand our true nature, and the real cause of our suffering, it becomes very clear what drives people to behave the way they do. This leads to dysfunctional behavior from others being seen not as a personal attack but as an expression of delusion. They don’t know what they’re doing. They don’t yet understand how their actions are harmful to themselves. They’re choosing the best item on their menu of behavioral options.
It becomes much easier to feel compassion for even the most heinous of perpetrators. This doesn’t lead to allowing abuse; in fact, one is able to more powerfully prevent harm coming to the innocent, because the real issues can be faced head-on.
In situations where there is an abuser and a victim, compassion naturally arises for both the abuser and the victim, and sometimes even more so for the abuser; they are unwittingly harming both themselves and the victim. This change in perspective leads to the cycle of victim-perpetrator-savior being broken because we don’t automatically take on the role of savior, just as we don’t automatically take on the role of either the victim or the perpetrator.
Meditation is the most effective use of time
I have experienced many revelations during or after meditation sessions that led to massive reductions in effort in achieving goals. It’s one thing to drive to achieve something, but it’s a whole other level of effectiveness to spend that energy striving for the right thing. Effective and right use of effort is something that requires time for incubation and disconnection. Not only that, but the purification of the mind that comes from Vipassana practice, and the development of the ability to see things from multiple perspectives, leads to a clarity of vision that is an unparalleled asset in decision-making.
Relationships are extremely valuable to me
In my drive to be productive and effective, I often forget about the value of my social and family connections. Because meditation brings my awareness back to the reality of my human being, I become acutely aware of the true value of the people that I love. I am reminded of how important these “soft” assets are to me: the people and the relationships. I often reach out to and connect with many people following ten-day retreats, and the amount of time that I socialize seems to be correlated with the amount of time that I meditate.
My suffering is not what it seems to be
At this last retreat, I realized that there is a very similar pattern to my experiences at retreats. I have always gone through a period of feeling down, regretful, and anxious. This is in contrast to what many others experience: bouts of anger. At least until now, I don’t seem to have struggled with a large number of mental impurities related to anger. I realized that the water I swim in is colored with sadness, regret, and anxiety. I didn’t used to even think of those states as mental impurities. Like most people, I thought that my modes of suffering were ways that I was a victim to life, that these were externally imposed by my circumstances and history. This past retreat, I understood, even more deeply than before, that these are just non-adaptive mental habits that I have unconsciously perpetuated. By returning to the Vipassana technique, I was able to release these layers of impurities and come through into more clarity.
Conclusion
This article should have given you a glimpse into the mind of a long-term meditator. If you meditate a lot yourself, perhaps what I have written is validating or comforting, or perhaps even challenging. Thanks for reading, and please remember to give this article some claps, and to subscribe to my profile here on Medium, if you have not already done so. | https://medium.com/gethealthy/what-4-300-hours-of-meditation-has-taught-me-51ad3440149e | ['Duncan Riach'] | 2020-10-20 11:52:29.438000+00:00 | ['Happiness', 'Mindfulness', 'Vipassana', 'Productivity', 'Meditation'] |
Data Challenges Superiority of Manualized Psychotherapy | New data fails to support the promotion of manualized psychotherapy as superior to non-manualized forms of psychotherapy.
By Zenobia Morrill
Photo Credit: Flickr
A recent systematic review comparing manualized psychotherapy to non-manualized psychotherapy has challenged the ongoing promotion of psychotherapy manuals as a necessary part of evidence-based treatments (EBTs). Researchers Dr. Femke Truijens and colleagues in Europe found that manualized psychotherapy is not superior to psychotherapy delivered without a manual.
“Manualized treatment is not empirically supported as more effective than non-manualized treatment. While manual‐based treatment may be attractive as a research tool, it should not be promoted as being superior to non-manualized psychotherapy for clinical practice.”
Psychotherapy treatment manuals are intended to direct therapists in the application of their approach. Manualized treatments specify a theoretical basis, the number and sequencing of treatment sessions, the content and objectives of each session, and the procedures required to achieve the objective of each session. The use of manuals has been embraced, and at times required, by overseeing institutions such as the American Psychological Association (APA) and the National Institute for Health and Care Excellence (NICE).
“This requirement captures the assumption that it is more effective to apply manualized treatment than to provide treatment in a less or nonmanualized form. As this assumption seems vital to justify the dissemination of manual‐based EBTs to clinical practice, in this paper, we review the empirical evidence for this assumption,” Truijens and colleagues explain.
They note that in clinical practice, there has been pushback to manualized approaches and the utility of manuals has been critiqued. Scholars and psychotherapists have expressed concerns that manuals inhibit flexible application of approaches and impede one’s ability to tailor therapy to individual needs or adapt interventions to multiple, or “comorbid,” presentations of distress.
In addition, manuals tend to be constructed around diagnostic presentations such that specific approaches are delineated for specific “disorders.” Practitioners critique the feasibility of mastering each approach. One response to these concerns has been to encourage the flexible adaptation of treatment manuals. For example, through the use of “transdiagnostic” manuals.
Nevertheless, the research has been focused on how to apply manuals rather than on whether or not manualized approaches are more effective. To address this gap in the literature, the authors consider the following questions:
“Does the use of manuals actually increase therapy effectiveness? And should manuals, therefore, be embraced in clinical practice and training?”
Truijens and team sought to add to this discussion by reviewing the empirical evidence. They write, “Given the current requirement of manuals as the core of evidence-based psychotherapy, it seems crucial to substantiate this discussion with empirical evidence.”
In this systematic review, the research team evaluated whether or not manual-based psychotherapy was more effective than psychotherapy delivered without a manual. They also examined the efficacy of manualized and non-manualized psychotherapies as compared to no treatment, delayed-treatment, minimal treatment, or alternative treatment control groups. Lastly, they examined the effect of lower levels of therapist adherence to the manual. The hypothesis was that if manualized therapy is indeed more effective, then the extent to which the therapist adhered to the manual would be linked to effectiveness.
To explore these three hypotheses, Truijens and colleagues conducted a systematic review of the existing literature. For the first hypothesis, they examined six relevant empirical studies. Eight meta-analytic studies applied to the second hypothesis and one meta-analysis of 15 studies was used to explore the last hypothesis regarding manual adherence.
Their results did not support the superiority of manualized psychotherapy compared with non-manualized psychotherapy. The researchers review of the six articles comparing manualized and non-manualized therapy directly found that three studies yielded no significant difference between the two, two observed superiority of non-manualized therapy, and one supported manualized delivery. The one study that did support manualized psychotherapy was interpreted by the authors to have been started “from a single specific intervention that appeared to be exceptionally effective, regardless of the administration via a manual.”
When manualized and non-manualized psychotherapy was compared with no treatment, delayed treatment, minimal treatment, or alternatives, the superiority of manualized psychotherapy was also not conclusively supported. Out of the eight meta-analyses reviewed, three demonstrated an advantage of using manualized therapy, one indicated the superiority of non-manualized delivery, and four showed no significant difference. The authors interpreted these findings:
“Here, we have to remark that it is fairly complex to meaningfully compare effect sizes of treatments that are so different in nature, given their varied understanding and operationalization of treatment, control groups, diagnosis, and outcome. First and foremost, this underlines how the universal hypothesis of manual efficacy is in trouble with respect to empirical support, both as a direct and as a moderating factor.”
Finally, when therapist adherence to the manual was explored, results were similarly inconclusive. One meta-analysis found that the degree of therapist adherence to the manual did not affect outcomes. The remaining 15 studies provided unclear results. Truijens and team comment on these findings:
“As such, the suggestion that adherence and fidelity to treatment principles may impact a positive treatment outcome remains a worthwhile avenue for further research. However, as an indicator for the efficacy of the manual as a general principle for clinical practice, this conflicting body of evidence is insufficient.”
The findings of this study do not support the superiority of manual-use in psychotherapy. The authors write that the failure to corroborate this claim “points to a severe problem in the justification of EBT dissemination.” In their conclusion, Truijens and team encourages consideration beyond the question of “manual or no manual?” toward the components and steps of the therapy process required to attend to different people and different presentations.
“Based on this review, we are not inclined to call for more research to settle the dispute about manualization in general; rather, we urge both researchers and clinicians to go beyond the dichotomy, as the next step in understanding what works for whom in psychotherapy.”
****
Truijens, F., Zühlke‐van Hulzen, L., & Vanheule, S. (2018). To manualize, or not to manualize: Is that still the question? A systematic review of empirical evidence for manual superiority in psychological treatment. Journal of clinical psychology. DOI: 10.1002/jclp.22712 (Link) | https://medium.com/mad-in-america/data-challenges-superiority-of-manualized-psychotherapy-885805ba85c6 | ['Mad In America'] | 2018-12-20 21:47:03.670000+00:00 | ['Depression', 'Suicide', 'Medicine', 'Mental Health', 'Mental Illness'] |
Launch VS Code in online computing environments

I’m excited to share that Next Tech has released alpha support for Visual Studio Code!
VS Code is hands-down one of (if not) the best code editors out there. It’s trounced others (like Sublime and Atom) in popularity in recent years, thanks to the incredible features Microsoft has added and the vast library of extensions tens of thousands of developers have created.
Installing VS Code is pretty straightforward and you can do it yourself on your computer if you’d like. However, we’ve found that our hosted environments for Python, Node, Go, Haskell, and many other programming languages make it super easy to get started with a new programming project in just a few seconds.
So we thought, wouldn’t it be great if you could use these environments… and VS Code?!
Well now, you can. This includes the ability to install extensions, change your settings, debug programs, push to GitHub, deploy to Azure, and much more.
This guide walks you through how to get started with VS Code on Next Tech. If you just want to jump into a sandbox with VS Code, click here. In a few seconds you’ll have VS Code running in your browser:
Or, read on for the details!
Why VS Code?
Over the years we’ve received many requests for features like code collaboration, debuggers, version control integration, and much more, but our focus is increasingly on our infrastructure offerings. As such, we see VS Code as a way to address a number of these requests while we continue to develop our infrastructure products.
VS Code is also the most popular code editor by far. Here’s Stack Overflow’s 2019 developer survey results for the most popular development environments:
This is also coming at a cost to other editors. Here’s the Google Trends data for VS Code (blue) versus Sublime (red) and Atom (yellow):
So we feel that our investment into integrating VS Code will be well worth it as we’ll be able to provide a well-loved tool inside our infrastructure.
This feature is currently in a very early alpha state (current limitations are documented here). However, in the coming months we’ll be rolling out a more tightly integrated version and many other related improvements.
Here’s what’s currently supported:
Intelligent code completion (IntelliSense).
Powerful debugging functionality.
Numerous ways to configure your interface (multiple tabs, zen mode, etc.).
Integrated terminals.
Multiple themes.
VS Code’s command line interface.
…and many other extensions you can install.
Very soon, we’ll also be adding support for:
An integrated web browser.
VS Code’s Live Share feature, which allows you to collaborate with others in real-time.
For now, I hope you’ll take a look and share your feedback!
Getting Started
To get started, head over to the sandbox launchpad. Once you’re there, pick the language you’d like to use (this guide uses Go), then check Use Visual Studio Code, as shown below:
You’ll be shown a dialog that contains an explanation of the current limitations of this feature (also detailed at the end of this page). Just click the Sounds fun, let’s go! button and your sandbox will load with the VS Code interface:
You may notice that VS Code is just another tab type in the sandbox interface. If you click the + to create a new tab, you may notice that some options are now hidden:
(eventually these will all be hidden as we integrate them directly inside of the VS Code interface)
For the best experience, you can click the square in the top right corner of the VS Code interface to make VS Code full screen:
To get started, you can click File, then New File:
(note that Ctrl+N will not work in the browser)
Save your file as main.go, then put this code in it:
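The code itself only appeared as a screenshot in the original post. A minimal main.go consistent with the description further down (an array b with 5 ints in it) might look like the following; the exact contents are an assumption, not the original snippet:

```go
package main

import "fmt"

// sum adds up the values in a fixed-size array.
// This file is a plausible reconstruction; the original snippet
// was only shown as an image in the post.
func sum(b [5]int) int {
	total := 0
	for _, v := range b {
		total += v
	}
	return total
}

func main() {
	// b is an array with 5 ints in it, matching the autocomplete
	// example described below.
	b := [5]int{1, 2, 3, 4, 5}
	fmt.Println("sum:", sum(b)) // prints: sum: 15
}
```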
Now, head over to the extensions marketplace and install the Go extension:
(you may need to press Ctrl+Shift+P, type “reload”, then select Developer: Reload Window to get things working correctly)
You’ll be prompted in the bottom right to install several packages. Go for it!
You can try typing b in the code editor to see the intelligent autocomplete kicking in. Here, it sees that b is actually an array with 5 ints in it:
You can run code using the software normally installed in your sandbox. To run this file, click Terminal, then New Terminal:
Then you can use the already installed version of Go from your sandbox:
And there you have it! You’ve just used VS Code in the cloud to write a Go program.
Feedback
If you try this new feature, I’d love to hear what you think. Feel free to respond to this post or submit a ticket here.
Thanks for reading!
Shout-out
This integration uses an adapted version of this awesome open source project by Coder!

Source: https://medium.com/nexttech/launch-vs-code-in-online-computing-environments-dab98b35fd5 (Saul Costa, 2019-05-28)
Unconventional Ways to Motivate Your Team in 2021
Leadership when it’s not ‘business as usual.’
Photo credit: Mitchell Luo
Let’s face it: this year was tough. Next year might prove nearly as challenging. Times like these have fundamentally changed what it means to be a leader. It’s no longer just about driving short-term results — building sustainable teams and processes is more important.
All the while, those at the helm are charged with demonstrating authenticity, encouraging others to be themselves while maintaining the integrity of organizational culture.
The challenges are immense, but so are the opportunities. A long-term focus creates the need for better incentive alignment. Honesty allows us to get to the root of problems faster. And a world in which the individual is celebrated paves the way to more effective collaboration.
Ensuring that everyone has a seat at the table
It’s not enough to get it right once anymore. Winners are replicating success by facilitating the right teams, processes, and most importantly, incentives.
Have you ever heard of the “agency problem”? Economists define this as:
“A conflict of interest inherent in any relationship where one party is expected to act in another’s best interests.”
For example, let’s say Jon wants to start and own a consulting business. If he hires two employees to work on the business and only rewards them with minimum wage, they have no incentive to think like an owner.
However, this problem could be mostly solved if Jon gave the two employees a reason to work harder, such as stock options or performance-based bonuses. A classic case of incentive alignment.
Such a mechanism becomes even more important when we zoom out to examine companies with shareholders. CEOs are tasked with driving shareholder profits, but often act in their own interest to maximize personal wealth. Without a system in place, this should be expected, since it’s human nature.
Because it’s not the player, it’s the game.
The good news is, in a world where long-term visions are gradually overtaking short-term targets, this might become less of a problem. As leaders, employees, investors, customers, and the world begin to prioritize sustainable outcomes, incentive alignment is being spoken of more often.
The most tangible way I’ve seen this play out is the transition from customer acquisition to retention. When a business changes their key metric from reducing customer acquisition costs (CAC) to improving lifetime value (LTV), they are making a statement.
That keeping a customer matters more than the costs of acquiring them. As you might imagine, this can cause a ripple effect across an organization, shifting everyone’s focus from “bait and hook” to “surprise and delight”.
Having honest conversations about real problems
Live virtual events and meetings bring a certain sense of excitement.
Because they’re not always scripted or rehearsed, which creates an environment in which serendipity and transparency are possible. There’s only so much you can prepare, and that’s a good thing.
Real-time conversations force us to draw from experience, rely on intuition, and say how we truly feel. Instead of one-way dialogue, interactive experiences are about co-creation. You’re learning more about yourself as you speak, and hopefully imparting some lessons to others at the same time.
It’s a journey, not a destination.
And such gatherings are more important than ever before. After hearing “all things considered” during every call or witnessing the perils of miscommunication due to less time in-person, we know this to be true.
So instead of controlling the discussion, it behooves leaders to participate. By striking a balance between your personal and professional lives, prompting others to speak their mind, and fostering a community as opposed to a cult.
That’s how leaders harness the power of authenticity to solve problems faster.
Bringing the best out of individuals and their teams
This year has made it clear that at the end of the day, we’re on our own.
Which isn’t good or bad, it just is. We’re all responsible for our own health, financial security, upward mobility, and career trajectory. The only difference from years past is that the lesson came to us as less of a frog in boiling water and more of an emergency crash course.
Similar to how the Great Recession influenced Generation X’s perspectives, the pandemic will change our way of thinking about work and fulfillment. You might be thinking that this makes the job of a leader only harder.
I say the opposite.
Instead of having to be the source of motivation, leaders can and should ask teams to take matters into their own hands. Let individuals find their drive, searching across the landscape for hints of impact from customer testimonials, revenue growth, and user success stories. Then reward them for performance.
After all:
“True leaders don’t create followers. They create more leaders.” — Tom Peters
We’re better off designing systems as opposed to a patchwork of parts. Especially when the not-so-distant future looks so opaque and unpredictable. Instead of offering prescriptive advice, organizations will benefit from having the proper guardrails in place.
Structuring the right incentives, deploying radical honesty, and providing autonomy will empower happier, more effective teams come 2021.

Source: https://medium.com/swlh/unconventional-ways-to-motivate-your-team-in-2021-710bf46fefc5 (Sid Khaitan, 2020-12-28)
Why People Leave the Church and Never Come Back

I’d like to share with you my experience leaving the church for several years, and my recent decision to return.
Your instinct might be to get excited. You might hope that I’m about to share the secret sauce that will bring loved ones back into the fold. I’d like to tell you right up front that this is not my intention.
My goal is not to show you how to explain away someone’s doubts, or to rekindle their testimony. Rather, I want to share what I and others felt upon leaving the church — regardless of our reasons for leaving.
I invite you to maintain an open heart while reading, as there are parts that may stir up feelings of defensiveness or frustration. If this happens to you, I invite you to look inward and ask yourself, “Why am I feeling this?”
Why people leave
People leave the church for many reasons. Some have been deeply hurt or offended by other members of the church. Some feel deceived and betrayed by the church because of inaccuracies in church history. Some have been deeply wounded by the actions of leaders or official church policies that affect people they love. Others doubt because of the imperfect actions of our founding prophet, Joseph Smith. And many just plain don’t feel like they fit in.
Regardless of why people leave, it’s what they often experience upon leaving that makes them never want to come back.
If you’ve never left the church before, it’s hard to comprehend the experience. You must understand that it’s almost impossible to just “leave” the church. For most of us, the church is embedded in who we are. It’s part of our character, and our identity. Mormonism is enmeshed in our values, our morals, our family relationships, and our friendships.
Many of us have steeped in Mormon traditions and heritage, dating back generations, since birth. Which makes choosing to leave the church — regardless of the reason — difficult, complicated, and so incredibly painful.
People who leave experience feelings of extreme loneliness, betrayal, and a complete loss of identity. It’s as if the foundation they’ve built their life around is crumbling. They feel anger, devastating sadness, relief, and frustration. They feel a need to belong while at the same time feeling a need to be alone. They feel deceived, judged, looked-down-upon, and very confused.
It is a horrible experience that robs you of your ability to trust others — especially those who belong to the organization that has caused you so much profound pain and suffering.
Because of this, those who leave the church often become cynical, skeptical, and jaded.
Members make the pain worse
Now you have a small glimpse into the pain most people go through when they begin to leave the church. With that in mind, consider how much more painful it would be to go through these struggles, deal with the seemingly endless wave of emotions, and suffer through these trials while the people you’ve leaned on and trusted your entire life suddenly start distancing themselves from you, avoiding you, pitying you, judging you, or criticizing your character and your choices.
Not only are you struggling with a deep, personal spiritual battle that you didn’t choose, it also feels like you’re being punished for it by the people who proclaim to love you most.
I invite you to consider the words of Christ in D&C 81:5:
Wherefore, be faithful; stand in the office which I have appointed unto you; succor the weak, lift up the hands which hang down, and strengthen the feeble knees.
If anyone in our community needs succoring, compassion, support and empathy, it is the people undergoing a faith transition.
Just last week, I asked a Facebook group notorious for its community of ex-Mormons — many of whom are very vocal about their frustrations with the church — a question: What would you tell a member of the church to avoid doing if they don’t want to cause unnecessary pain for someone they love who has chosen to leave?
Here are some of the responses:
When people treat you with pity and say things like, “I’m praying for you.”
When you’re turned into a “project” and people only come to your house to fulfill a calling or “under assignment from the Bishop”
When your family is turned into a weapon and used against you, like getting grandkids excited about church with the agenda of motivating parents to take them in the future, or using a family member’s temple sealing to pressure a less-active member into getting a temple recommend when they’re not worthy or don’t desire one.
When you make assumptions about why someone leaves — like that they just want to sin, they don’t have enough faith, or they were never truly committed. I can’t tell you how many people said they’ve been gone from church for YEARS and nobody ever asked them why they left and sincerely listened to their answers.
When you don’t reach out as a friend. Once some people stop coming to church, you might feel uncomfortable and ignore them because you don’t know what to say.
Sending missionaries to visit them when they move to a new neighborhood despite constant requests to not be contacted by missionaries.
Assuming that leaving the church equates to a loss of morals.
Bearing your testimony to them with the assumption that it will somehow magically make their doubts and struggles disappear.
So the problem we’re running into is that people leave the church because they’re hurt… and then in an attempt to bring them back, we hurt them even more.
What you should do instead
So if you can’t do any of the above things to convince those you love to come back to church, what do you do?
The answer is simple: Stop trying to get them to come back to church!
Just love them.
It wasn’t until I had a bishop who invited me into his office — not to get me to come back, but to express his love for me and show empathy and understanding for what I was going through — that I even contemplated coming back.
We talked about my struggles with the crushing guilt that came from the pressure I felt to live up to a cultural standard of perfection all the time. We talked about my anger towards God and towards the church. We talked about my frustration with Joseph. We talked about the lack of compassion we often show to people who are different than us.
He listened. He empathized. He understood. He invited me to talk more.
He helped me grapple with my own faith, and showed me that it’s ok to be a different kind of Mormon… the kind of Mormon who wrestles with tough questions. One who doesn’t have all the answers. One who doesn’t “know” the church is true.

Source: https://humanparts.medium.com/why-people-leave-the-church-and-never-come-back-410e3e817a3a (Nate Bagley, 2019-06-18)
What a Sea Turtle Can Teach You About Finding your Purpose

Have you ever seen a sea turtle?
Laboriously pull herself onto the beach from the ocean?
Have you ever seen a sea turtle scooch across the sand?
Each inch of progress is a battle of wills
Nature tells the turtle no you can’t
The turtle tells Nature, but I am

Source: https://medium.com/weirdo-poetry/what-a-sea-turtle-can-teach-you-about-finding-your-purpose-666fba4d5523 (Jason Mcbride, 2020-10-04)
Spring vs. Spring Boot: A Comparison of These Java Frameworks
Want to learn more about these two popular Java frameworks? Check out this article on how they each solve a different type of problem.
What is Spring Boot? And what is the Spring Framework? What are their goals? How can we compare them? There must be a lot of questions running through your mind. By the end of this article, you will have the answers to all of these questions. In learning more about the Spring and Spring Boot frameworks, you will come to understand that each solves a different type of problem.
What Is Spring? What Are the Core Problems Spring Solves?
The Spring Framework is one of the most popular application development frameworks for Java. One of its best features is Dependency Injection (DI), also known as Inversion of Control (IoC), which allows us to develop loosely coupled applications. And loosely coupled applications can be easily unit-tested.
Example Without Dependency Injection
Consider the example below — MyController depends on MyService to perform a certain task. So, to get the instance of MyService, we will use:
MyService service = new MyService();
Now we have created the instance of MyService ourselves, and the two classes are tightly coupled. If I create a mock for MyService in a unit test for MyController, how do I make MyController use the mock? It’s a bit difficult, isn’t it?
@RestController
public class MyController {

    private MyService service = new MyService();

    @RequestMapping("/welcome")
    public String welcome() {
        return service.retrieveWelcomeMessage();
    }
}
Example With Dependency Injection
With the help of just two annotations, we can get an instance of MyService without tight coupling. The Spring Framework does all the hard work to make things simpler.
@Component marks a class as a bean that the Spring Framework should manage in its BeanFactory (an implementation of the Factory pattern).

@Autowired tells the Spring Framework to find the matching bean of that type and wire it in automatically.
So, the Spring Framework will create a bean for MyService and autowire it into MyController.
In a unit test, I can ask the Spring Framework to auto-wire the mock of MyService into MyController .
@Component
public class MyService {

    public String retrieveWelcomeMessage() {
        return "Welcome to InnovationM";
    }
}

@RestController
public class MyController {

    @Autowired
    private MyService service;

    @RequestMapping("/welcome")
    public String welcome() {
        return service.retrieveWelcomeMessage();
    }
}
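To make the idea concrete, here is a stripped-down, hand-rolled sketch in plain Java of what an IoC container does on your behalf: it builds the object graph in one place and hands each class its dependencies. The ToyContainer class and its wiring are invented for this illustration; Spring's real BeanFactory is far more sophisticated.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java versions of the article's two classes.
class MyService {
    public String retrieveWelcomeMessage() {
        return "Welcome to InnovationM";
    }
}

class MyController {
    private final MyService service;

    // The container supplies the dependency; MyController never calls "new".
    MyController(MyService service) {
        this.service = service;
    }

    public String welcome() {
        return service.retrieveWelcomeMessage();
    }
}

// A toy "container": creates beans once and wires them together,
// which is conceptually what @Component and @Autowired trigger in Spring.
class ToyContainer {
    private final Map<Class<?>, Object> beans = new HashMap<>();

    ToyContainer() {
        MyService service = new MyService();
        beans.put(MyService.class, service);
        beans.put(MyController.class, new MyController(service));
    }

    @SuppressWarnings("unchecked")
    <T> T getBean(Class<T> type) {
        return (T) beans.get(type);
    }
}

public class ToyContainerDemo {
    public static void main(String[] args) {
        ToyContainer container = new ToyContainer();
        MyController controller = container.getBean(MyController.class);
        System.out.println(controller.welcome()); // prints "Welcome to InnovationM"
    }
}
```

Because the wiring lives in the container, a unit test can hand MyController a mock MyService through the same constructor, which is exactly the testability benefit described above.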
The Spring Framework has many other features, which are divided into twenty modules to solve many common problems. Here are some of the more popular modules:
Spring JDBC
Spring MVC
Spring AOP
Spring ORM
Spring JMS
Spring Test
Spring Expression Language (SpEL)
Aspect-Oriented Programming (AOP) is another strength of the Spring Framework. The key unit in object-oriented programming is the class, whereas in AOP the key unit is the aspect. For example, if you want to add security, logging, etc. to your project, you can use AOP to keep these cross-cutting concerns away from your main business logic. You can perform an action before a method call, after a method call, after a method returns, or when an exception is thrown.
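Under the hood, Spring AOP typically applies such advice through dynamic proxies. The following plain-Java sketch shows the core idea of wrapping "before" and "after" behavior around a method without touching the business logic; the GreetingService and LoggingAspect names are invented for this illustration, and Spring generates equivalent proxies for you:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface GreetingService {
    String greet(String name);
}

class GreetingServiceImpl implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// A cross-cutting concern (logging) wrapped around every call to the target.
class LoggingAspect implements InvocationHandler {
    private final Object target;

    LoggingAspect(Object target) {
        this.target = target;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("before " + method.getName()); // "before" advice
        Object result = method.invoke(target, args);       // actual business logic
        System.out.println("after " + method.getName());  // "after" advice
        return result;
    }

    static GreetingService wrap(GreetingService target) {
        return (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[] {GreetingService.class},
                new LoggingAspect(target));
    }
}

public class AopSketch {
    public static void main(String[] args) {
        GreetingService service = LoggingAspect.wrap(new GreetingServiceImpl());
        System.out.println(service.greet("InnovationM"));
    }
}
```

Note that GreetingServiceImpl contains no logging code at all; the aspect adds it from the outside, which is the whole point of keeping cross-cutting concerns separate.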
The Spring Framework does not have its own ORM, but it integrates very well with ORM frameworks like Hibernate, Apache iBATIS, etc.
In short, we can say that the Spring Framework provides a decoupled way of developing web applications. Web application development becomes easy with the help of these concepts in Spring, like Dispatcher Servlet, ModelAndView, and View Resolver.
If Spring Can Solve so Many Problems, Why Do We Need Spring Boot?
Now, if you have already worked with Spring, think about the problems you faced while developing a full-fledged Spring application with all its functionality. Not able to come up with one? Let me tell you — it was quite difficult to set up the Hibernate datasource, entity manager, session factory, and transaction management. It takes a developer a lot of time to set up even a basic Spring MVC project with minimal functionality.
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix">
        <value>/WEB-INF/views/</value>
    </property>
    <property name="suffix">
        <value>.jsp</value>
    </property>
</bean>
<mvc:resources mapping="/webjars/**" location="/webjars/"/>
<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/my-servlet.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>
When we use Hibernate, we have to configure things like the datasource, EntityManager, transaction manager, etc.:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
    <property name="driverClass" value="${db.driver}" />
    <property name="jdbcUrl" value="${db.url}" />
    <property name="user" value="${db.username}" />
    <property name="password" value="${db.password}" />
</bean>

<jdbc:initialize-database data-source="dataSource">
    <jdbc:script location="classpath:config/schema.sql" />
    <jdbc:script location="classpath:config/data.sql" />
</jdbc:initialize-database>

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="hsql_pu" />
    <property name="dataSource" ref="dataSource" />
</bean>

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
    <property name="dataSource" ref="dataSource" />
</bean>

<tx:annotation-driven transaction-manager="transactionManager"/>
How Does Spring Boot Solve This Problem?
Spring Boot does all of this using auto-configuration and takes care of all the internal dependencies that your application needs — all you need to do is run your application. Spring Boot will auto-configure the Dispatcher Servlet if the Spring MVC jar is on the classpath, and it will auto-configure a datasource if the Hibernate jar is on the classpath. Spring Boot also gives us a pre-configured set of starter projects to add as dependencies in our project. During web-application development, we need to decide which jars we want to use, which versions of those jars, and how to connect them together. All web applications have similar needs, for example Spring MVC, Jackson Databind, Hibernate Core, and Log4j (for logging), so we had to choose compatible versions of all these jars. To reduce this complexity, Spring Boot introduced what we call Spring Boot Starters.
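For instance, the whole XML datasource block shown earlier typically shrinks to a few lines in application.properties, and Spring Boot's auto-configuration builds the DataSource, EntityManagerFactory, and transaction manager from them. The connection values below are placeholders:

```properties
# With spring-boot-starter-data-jpa on the classpath, these properties are
# enough; no explicit bean definitions are required.
spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=dbuser
spring.datasource.password=dbpass
spring.jpa.hibernate.ddl-auto=update
```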
Dependency for Spring Web Project
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>4.2.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.5.3</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.0.2.Final</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
Starters are a set of convenient dependencies that you can include in your Spring Boot application. To use Spring with Hibernate, we just have to include the spring-boot-starter-data-jpa dependency in our project.
Dependency for Spring Boot Starter Web
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
The following screenshot shows the different packages under a single dependency that are added into our application:
Once you add that starter dependency, Spring Boot Starter Web comes pre-packaged with all of these, plus other packages you will see. As developers, we no longer need to worry about these dependencies or their compatible versions.
Spring Boot Starter Project Options
These are a few starter projects to help us get started quickly with developing specific types of applications.

Source: https://medium.com/quick-code/spring-vs-spring-boot-a-comparison-of-these-java-frameworks-14a1b594657 (Priya Reddy, 2020-02-05)
What is a Data Scientist?
by Michael Watson
With the rise of big data and data in general (see here for a definition of big data), there is an increased need for people to analyze that data to turn it into information. Thomas Davenport and D.J. Patil recently wrote an article in Harvard Business Review about how “data scientist” is going to be the hot job of the 21st century.
A data scientist is:
“…a high-ranking professional with the training and curiosity to make discoveries in the world of big data. The title has been around for only a few years. (It was coined in 2008 by one of us, D.J. Patil, and Jeff Hammerbacher, then the respective leads of data and analytics efforts at LinkedIn and Facebook.)… ” “…More than anything, what data scientists do is make discoveries while swimming in data. It’s their preferred method of navigating the world around them. At ease in the digital realm, they are able to bring structure to large quantities of formless data and make analysis possible.”
The article is well-done and worth a read. I think you could extend the definition of “data scientist” to also include the field of operations research (which includes optimization). Besides just analyzing data, optimization can help you get even more value from the data.
___________________________________________________________________
If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.

Source: https://medium.com/opex-analytics/what-is-a-data-scientist-745c6a67e1b6 (Opex Analytics, 2019-04-25)
It’s not only about the roast: how your coffee shop nudges you into buying more specialty latte
Coffee is good and only an almost full stamp card is better. Is there a trick hiding behind your favourite coffee spot’s loyalty scheme?
The above picture was taken a couple weeks ago on a rarely sunny day amidst the rainy Budapest spring. I had always known that I had many coffee stamp cards but lining them up like this took even me aback.
You could tell I love good coffee. What you could not tell is that there are a few coffee shops I am more loyal to than any others, where I have already finished off quite a few stamp cards.
Do you think any of the cards work better?
While I do love to attribute this particular attraction to some well-considered factors such as the specific roast they use, the “intense floral and plum notes in the aftertaste on the back-left corner of your taste buds”, the atmosphere or the quality of service, I am also very well aware that there are other subtle factors influencing how frequently we visit a coffee shop and become regulars — and not only in the case of coffee shops.
Loyalty programs are a common tool in marketeers’ and businesses’ hands to drive more frequent purchases and build up a relationship, but the design of the loyalty program — in this case, of the coffee stamp — can have a significant impact on its success.
If you observe the stamp cards above, you might recognise that in the case of most coffee shops, after drinking 9 coffees, you get the 10th for free. What a bargain! (You spend 18 euros to get a free latte!)
Now, you might also recognise that some of the cards are pre-stamped: the stamp either represents the last, free latte in the row (such as the green card in the top right corner — cheers, Espresso Embassy!) or it is the very first one in the line, in which case your first purchase will actually give you the second stamp on the card (the brown circles right below the espresso cup — hi, Kelet Café!).
Some cards are just displaying 10 spots for the stamps, and you get the 10th stamp for the free coffee.
Then there are a couple where the 10th, free coffee is actually not indicated on the card, leaving only 9 places to mark your daily dose of caffeine.
Here comes the magic!
Science tells us that considering only the coffee shops where you need to collect 9 stamps to get 1 for free, you are more likely to do so when the card comes pre-stamped compared to when your card is empty.
Why?
The Endowed-progress effect
The Endowed-progress effect is the idea that by providing an artificial sense of progress toward a goal, a person will be more likely to complete that goal.
That is, you are more motivated to finish a task if you feel that it requires less effort due to some advancement — even if this advancement is completely fake. This effect, also called a head start, makes us believe that less effort is needed for a task and thus that it is easier to achieve.
In a famous study, researchers Joseph C. Nunes and Xavier Dreze coined this phrase by observing the following phenomenon in the case of loyalty cards for a car wash.
For the study, the researchers handed out two types of stamp cards: one that required 8 purchases and stamps for a free wash, and one that required 10 purchases and stamps but came with two of the spots already stamped. Despite the same effort being required from both groups to get a free car wash (8 visits), after 9 months 34% of the people in the 10-stamp group had redeemed their card versus 19% of the people in the 8-stamp group. Moreover, those who had had a head start took 3 days less between visits to the car wash, and this time between visits decreased further with each additional car wash purchased.
You haven’t finished this article yet but you are almost there!
When considering why the endowed-progress effect might work so well, we have to consider two related phenomena, which together explain how we perceive progress during a task and what prompts us to complete it.
1. The Zeigarnik-effect
The little red dots next to the app icons on your iPhone screen.
The last few minutes of any Avengers movie.
Your recall of the material while preparing for any exam vs right after the exam.
Being interrupted while working yourself through the levels of a stupid online or mobile game.
In 1927, Russian psychologist Bluma Zeigarnik found that waiters had a better recall of orders while they were being prepared, but once the bill was paid they had difficulty remembering the details.
That is, an uncompleted or interrupted task sticks with us much more, repeatedly popping up in our head and causing a task tension. Until we complete our goal, we will be bothered by a feeling of incompleteness — sometimes even to the point that it hinders our ability to focus on anything else.
This insight is actually used quite widely in UX: remember the last progress bar you have seen in a registration process, during an online course or while purchasing online? Congratulations, you have met an attempt to activate the Zeigarnik-effect!
Consequently, there is a range of options for businesses to help customers follow through with their intentions:
By implying that they have already started a process, you can get people to subscribe to a service or enable a demo feature they might be interested in.
In the physical world, helpdesks, bank branches or any locations with longer waiting periods pull customers into the process as soon as possible.
Give people an actual head start toward the goal: let them do the first necessary step for free or make it a lot easier (remember the newsletter subscription pages that seemingly ask only for an email address, just to make you input your name, etc. on the next page as well?).
2. The goal gradient effect
Have you ever glanced at the countdown while running on a treadmill, just to increase your tempo for the last 10 seconds or the last mile?
Have you ever read the last few pages of a book or paragraphs of an article quicker?
Have you ever felt, during a long night spent studying for the next day's exam, that the only thing keeping you motivated is the vanishing distance between you and the end of the material?
This is because, indeed,
the closer we are to a goal, the harder we work to complete the task at hand — that’s called the goal gradient effect.
This means that even the perception of getting closer to the end of a process might prompt us to put extra effort into it. This concept, originally established by psychologist Clark Hull in the 1930s, who observed rats running faster through a maze as they neared the reward at the end, highlights how our perception of progress, again, impacts our actual performance.
Don't worry, it works with humans as well: for example, when participants in one study had to hold a handgrip for 130 seconds, those shown a countdown clock squeezed harder than those who didn't know how much time was left.
In practice, the goal gradient effect boils down to a very common sentence that you might have heard from your trainer at the gym or from your manager during a busy week, or might have seen during several online processes:
“You are almost ready!”
It's an almost magical feeling, knowing that there are just a few steps between you and completing your goal. Even when hiking, it's better to have smaller, visible peaks on your route than one continuous ascent — it gives you more frequent goals to reach and more frequent rewards for your work.
To put this trick into effect, there are two fairly widespread practices:
You can visualise the goal as vividly as possible, because the easier it is to imagine, the closer we feel to it, and, consequently, the more effort we put into it. Whenever you create goals or design processes, it is important to make clear to your target group what that goal is and where they are in reaching it. Phrasing is important as well: highlighting what is left instead of what is behind activates the goal gradient effect — especially if it's close to the end. For example, people are more likely to give to a charity if they are told that they are two-thirds of the way toward the goal rather than less than one-third along the way (although the bandwagon effect might be in play here as well — but let's leave that for a later occasion).
Another option is to break down bigger goals into smaller ones, thereby actually bringing the reward of completion closer. In the case of hard-to-reach goals, such as savings, it might be demotivating and frustrating to work toward a yearly goal — instead, aim to save a given amount every month. When working on losing weight, a bigger goal spread over a longer time period might make you less motivated than smaller, biweekly targets. In a project, sub-goals keep employees motivated, and a to-do list with broken-down actions creates a perception of progress. Online account setups or on-boarding processes are often broken down into very small tasks, thus suggesting that you are progressing and bringing the finish line visually closer.
There are plenty of ways to get your target group to start working toward a goal and achieve it — whatever you are up to, how they perceive their progress will always matter.
Freelance your way into Data Science now | Freelance your way into Data Science now
How to start with Data Science if you want to be a freelancer?
Here’s my suggestion on how to get started with skills training in Data Science.
Become a Data Science Freelancer
Learn Python from the Very Start
There are three reasons why a first-time data scientist must learn Python:
Most experienced data scientists who speak publicly in this field swear by Python and its utility in ML and AI — and most already have a background in R.
A typical first-year data scientist will have a Data Science certification. If you're already current on it, you can begin with a quick exposure to Python, since its basics come easily to those who have learned a language before.
As a new data scientist, you need a good basis for building out your application and communicating the results of your code to those around you. With Python, you can research and prototype programs much faster, and at larger scale, than with other languages — especially if you're reading from Git, where you have access to the source code.
The first step to becoming a data scientist, then, is to understand how a Python program is used to handle data. There are hundreds of Python packages for various purposes, many of them open source. Picking up one will go a long way toward learning the entirety of Python.
From there, begin reading Data Science Books and Data Science blogs.
For thorough tutorials in the field, check out Courses in Data Science.
Earn an Income Over Time — Learn in the Short Run and Earn a Living in the Long Run
Data science is ideal for people who are self-employed. You may do better, and be happier, making one month's salary, $30,000, or $60,000 a year if you have strong skills, publish your coding prowess, and have the experience to provide valuable content for these job listings.
Even as a data scientist, you can make big money. If you work as a software engineer or product designer — traditionally the core roles for those who write the code — you can make $150,000 to $250,000 per year, no matter where you live, since you can always work remotely and use Fiverr and Upwork for gigs. It is definitely possible.
If you work for a startup that develops a technology or product, your salary will vary depending on the success of the company, because you will often get equity instead of a full payment. That's the case if, for example, you find a job through AngelList.
So it's really up to you which you prefer: a large company, a small one, or working as a one-person freelancer. All are viable, and all lead to success if you put in the work.
Good luck! | https://medium.com/data-science-rush/freelance-your-way-into-data-science-now-24eb3d2f2ac | ['Przemek Chojecki'] | 2019-12-09 18:38:31.365000+00:00 | ['Work', 'Python', 'Data Science', 'Freelancing', 'Freelance'] |
How to classify sounds using Pytorch | In this article, I am going to talk about how we can prepare a sound, or audio, dataset so that we are able to classify it. It may be a little long, but if you can be patient and read it thoroughly once, it will be highly beneficial.
First, we will load the audio file. From a directory containing n sound files, we will first try to load 2–3 of them using torchaudio.load. torchaudio supports sound files in the '.wav' and '.mp3' formats and gives us the waveform and sample rate of the sound file. The waveform consists of the sound's sample values per frame in an array format, whereas the sample rate determines how many samples per second the waveform is represented with.
import torchaudio
waveform, sample_rate = torchaudio.load("_PATH OF THE AUDIO FILE_")
If the audio file is in another format, then try googling how to extract the waveform and sample_rate of the audio file. For example, if you have a '.flac' audio file you can try this:
import soundfile as sf
waveform, sample_rate = sf.read("_PATH OF THE AUDIO FILE IN FLAC_")
2. Normalize the shape of all waveforms to one size. After loading the file, check the shape of the waveform using waveform.size()[0] . If its value is more than 1, then we will have to normalize it to a single channel using
from pydub import AudioSegment

waveform = AudioSegment.from_mp3("_PATH OF THE AUDIO FILE_")
waveform = waveform.set_channels(1)
waveform = waveform.get_array_of_samples()
waveform = torch.tensor(waveform, dtype = torch.float)
waveform = torch.reshape(waveform, (1, waveform.shape[0]))
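The channel collapse that set_channels(1) performs is, at its core, a per-sample downmix. As a rough illustration of the idea in plain Python (not pydub's actual implementation), interleaved stereo samples can be averaged into mono like this:

```python
# Illustration only: average interleaved stereo samples [L0, R0, L1, R1, ...]
# into a mono signal, which is roughly what a stereo-to-mono downmix does.
def stereo_to_mono(samples):
    assert len(samples) % 2 == 0, "expected interleaved stereo samples"
    return [(samples[i] + samples[i + 1]) / 2
            for i in range(0, len(samples), 2)]

stereo = [200, 400, -600, -200, 1000, 0]   # 3 frames of made-up stereo audio
mono = stereo_to_mono(stereo)
print(mono)  # [300.0, -400.0, 500.0]
```

In practice you would just let pydub (or a mean over the channel dimension in torch) do this for you; the point is that after this step every file has a single channel, so all waveforms share the same shape along that axis.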
3. Change the waveform to a Spectrogram, Mel Spectrogram, or MFCC. Now we will change the waveform into a Spectrogram (a visual representation of the spectrum of frequencies of a signal as it varies with time) using
Spectrogram = torchaudio.transforms.Spectrogram()(waveform)
or a mel spectrogram (a spectrogram whose frequency axis is converted to the mel scale, a nonlinear scale of pitch that better matches human hearing) using
Mel_Spectrogram = torchaudio.transforms.MelSpectrogram()(waveform)
or MFCC (mel-frequency cepstral coefficients — coefficients that collectively make up a mel-frequency cepstrum, a representation of the short-term power spectrum of a sound based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency) using
n_fft = 400.0
frame_length = n_fft / sample_rate * 1000.0
frame_shift = frame_length / 2.0
params = {
"channel": 0,
"dither": 0.0,
"window_type": "hanning",
"frame_length": frame_length,
"frame_shift": frame_shift,
"remove_dc_offset": False,
"round_to_power_of_two": False,
"sample_frequency": sample_rate,
}
mfcc = torchaudio.compliance.kaldi.mfcc(waveform, **params)
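The frame_length / frame_shift arithmetic in the params above just converts the FFT window size into milliseconds. Assuming a hypothetical 16 kHz sample rate (in the real code, sample_rate comes from the loaded file), the numbers work out like this:

```python
# Hypothetical sample rate for illustration; in the snippet above it
# comes from torchaudio.load / sf.read instead.
sample_rate = 16000
n_fft = 400.0

frame_length = n_fft / sample_rate * 1000.0  # window length in ms
frame_shift = frame_length / 2.0             # hop size: 50% overlap

print(frame_length, frame_shift)  # 25.0 12.5
```

So a 400-sample window at 16 kHz is a 25 ms frame, advanced 12.5 ms at a time — typical defaults for Kaldi-style speech features.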
4. Finally, we can create the dataset class using the above 3 points like this. #1# Define the dataset class name first.
class audio_classification(torch.utils.data.Dataset):
#2# Define the class constructor to store the audio_ids, their classification class_ids in a list, and the augmentations applied to them
    def __init__(self, ids, audio_ids, class_id, required_columns, is_valid = 0):
        self.ids = ids
        self.audio_ids = audio_ids
        self.class_id = class_id
        self.required_columns = required_columns
        self.is_valid = is_valid
        if self.is_valid == 1:
            self.aug = None  # TODO: transforms for validation images
        else:
            self.aug = None  # TODO: transforms for training images
#3# Define the __len__ function
    def __len__(self):
        return len(self.ids)
#4# Finally define the __getitem__ function
    def __getitem__(self, index):
        filename = "__PATH OF THE AUDIO__" + self.audio_ids[index] + "__EXTENSION OF THE AUDIO FILE__"
        waveform, _ = torchaudio.load(filename)
        # reshape to 1 channel if required
        waveform = torch.reshape(waveform, (1, waveform.shape[0]))
        # change the waveform to a Spectrogram, Mel-Spectrogram or MFCC
        specgram = torchaudio.transforms.Spectrogram()(waveform)
        # convert 1 channel to 3 channels to apply imagenet models
        specgram = specgram.repeat(3, 1, 1)
        # apply audio augmentations by converting to a numpy array
        specgram = np.transpose(specgram.numpy(), (1, 2, 0))
        specgram = self.aug(image = specgram)['image']
        # torch accepts channels first, so transpose back
        specgram = np.transpose(specgram, (2, 0, 1)).astype(np.float32)
        return {
            'specgram': torch.tensor(specgram, dtype = torch.float),
            'label': torch.tensor(self.class_id[index], dtype = torch.float)
        }
After these steps, the task behaves exactly like an image classification problem. We can then define dataloaders, models, a loss function, an optimizer, and the training & validation process, and start training our model just as we would for image classification.
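The dataloader step mentioned here is mostly about batching dataset indices before __getitem__ is called. As a simplified sketch of what torch.utils.data.DataLoader does under the hood (ignoring shuffling, workers, and tensor collation):

```python
# Simplified illustration of batching; the real DataLoader also shuffles,
# collates the returned dicts into batched tensors, and can parallelize I/O.
def make_batches(indices, batch_size):
    return [indices[i:i + batch_size]
            for i in range(0, len(indices), batch_size)]

batches = make_batches(list(range(7)), batch_size=3)
print(batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

Each inner list of indices would be fed through the dataset's __getitem__ to build one training batch of spectrograms and labels.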
I hope you have understood this article well. If there are any questions related to this article, then please feel free to comment on it — until then, enjoy learning.
Time blocking to the rescue | Nowadays, working from home, we can be very busy with a lot of tasks and no time to focus. Then we look for possible ways to solve this problem — task management here, task management there, and maybe some scheduling to help us with all the meetings we have all day.
Photo by Robert Bye on Unsplash
So, for the last few weeks, I've been testing the time blocking practice to schedule almost my entire day. To be honest, it's quite hard in the beginning, but over time it becomes quite simple to manage.
What is Time blocking?
As said before, time blocking is a practice of planning out every moment of your day in advance, dedicating specific time "blocks" to certain tasks and/or responsibilities.
When you start, your calendar seems messy, but it's quite the opposite once you are used to it: when you define all your blocks, it becomes harder for anyone to "steal" your time, so you can stay focused on your tasks.
How to start?
Basically, define a big chunk of time for all the basic stuff you have to do, like I did in the example below.
I defined a basic timeframe, like a work period. Then you can see how effective it is when you want to plan your time better. For sure, when you define time chunks like that you may fail a bit in some periods of the day, but at least you will be able to see better what you focus on most during your week.
One thing you can do is add a buffer time block; with that, you don't need to worry about the tasks you've failed to finish within their time block.
The most important thing to understand is to focus on the highest-priority tasks you need to do every day, because then you will put your highest effort into the most important tasks right away.
It takes time
For sure, this practice takes time to get used to, so in the beginning you can try defining your time blocks every day before you start your work; then over time you will be able to see some patterns in your behavior. After a few days of planning days separately, try to define the whole week and customize the blocks however you want until you have a picture that fits your daily life better.
What Marketers Can Learn From My Favorite Murder Podcast | My Favorite Murder is a successful comedy/true-crime hybrid podcast. It has been rated №1 in the iTunes Comedy podcast charts and №20 in overall podcasts. It has been running for almost a year and grew a highly-engaged fan base (“Murderinos”) largely through word-of-mouth tactics. True crime is a crowded category in podcasts, but My Favorite Murder hosts Karen and Georgia bring their own shock, outrage and witty, deadpan comedy spin to their storytelling.
Whether or not you’re a true crime fan, there are several lessons you can learn from the show:
1. My Favorite Murder is not highly produced, doesn't involve expert interviews and the hosts openly admit to sourcing materials from Wikipedia. They curse, show when the stories scare them, don't edit mistakes and spend time chatting about current events before they dig into the theme of their podcasts. By being true to their personalities and not trying to be like other true crime podcasts, the hosts have found a differentiating niche.
2. Part of what makes My Favorite Murder so compelling is the emotion with which the hosts tell stories and talk about murder. Their fear, anxiety and shock come into play and their feelings are contagious.
3. To extend the conversation past the podcast, My Favorite Murder has a closed group on Facebook with over 100k people who post their "favorites" and discuss newsworthy cases. This keeps the fan base engaged in between new episodes of the show.
4. Repeated phrases like "stay sexy, don't get murdered" and "stay out of the woods" have latched on among Murderinos. The hosts of the show have used this to their advantage by encouraging fan art around such phrases, and have created a coded language for fans to self-identify and congregate.
5. The hosts, and many fans of true crime in general, become interested in murder stories because of their "hometown murders": cases that occurred near them and were therefore particularly resonant. The hosts of the show encourage listener submission of "hometown murders" and dedicate minisodes of the show to sharing listener stories.
Amanda Kleinberg is a senior brand planner with the Digital practice in New York. | https://medium.com/edelman/what-marketers-can-learn-from-my-favorite-murder-podcast-e41302cfa086 | [] | 2017-03-03 15:42:09.707000+00:00 | ['Edelman', 'Marketing', 'Podcasts', 'Digital Marketing'] |
Reading Jack Temple Kirby’s “The Countercultural South” | In his 2009 obituary in The New York Times, Jack Temple Kirby was described as “a historian who decried stereotypes of the American South and traced the ways its people and landscapes have shaped one another.” Unfortunately, decrying stereotypes about the South can be a Sisyphean way to spend one’s life, but I’m glad he did it. I can’t remember when I read his book Rural Worlds Lost, the American South 1920–1960, but it has influenced my thinking about my home region, especially his thesis that what many people call “the South” ended around 1960 and was replaced by something else in the evolutionary chain of modern cultures. If the rural worlds were lost, then what came next?
So, when I was searching for books about the South that would coincide with this project's focus on beliefs, myths, and narratives, his 1995 book The Countercultural South stood out. The book, which was published by the University of Georgia Press, is barely a hundred pages long and contains three essays about working-class Southern men, both black and white. Its length — an academic history under 400 pages! — made it appealing for one reason, but its subject was more important to me for personal reasons. That Times obituary also shared, "In movies, for example, he said, the South has been trapped by clichés of racists, graceful landed gentry, poverty, homespun rural values, stock-car racers and moonshiners." As a kid, I grew up basking in the glow of Walking Tall, Smokey and the Bandit, and The Dukes of Hazzard, then watched as a barrage of Civil Rights movement dramatizations appeared in the 1980s. So, I couldn't agree more that we are often "trapped by clichés." In this book, published as the fin de siècle of Y2K approached, Kirby was discussing blue-collar men not through the lens of violence, machismo, cars, alcohol, or racism, but instead through union membership, land management, and the effects of capitalism.
Right away, on the first page of his “Introduction,” Kirby lays out his thesis “that a not-quite-measurable but substantial minority of southerners are countercultural. Some resist in peaceful and conventional ways, such as labor union activism. Others avoid contact so far as they can, sustaining existence on a shrinking margin of society.” He then adds a distinction about one key difference: “Black workers are progressive, historically evolved as it were; whites are not.” This, he posits, is why the biracial working-class can’t seem to get together politically. They may be in the same boat, but they’re rowing in different directions.
The term “counterculture” carries a lot of weight. Most people would think of hippies when they hear the word, but the two are not synonymous. A counterculture is a subculture that runs counter to the mainstream, usually through a set of fundamental values that can’t be reconciled. Some subcultures, like antique collectors and dog-show types, co-exist just fine within the mainstream. Others, like neo-Confederates, not so much. And in the South, we’ve got plenty of notions that don’t jive with common American ones. Kirby acknowledges that, too:
The bourgeois hegemony in the South is fairly recent. [ . . . ] For the antebellum South was primarily a civilization based upon noncapitalist — indeed anticapitalist — labor and social relations, and its people were shamelessly devoted to leisure and indiscipline, maddeningly indifferent to technology and growth.
If that’s where the roots are, you can’t make the branches grow where the trunk isn’t. You’ve got the makings of a counterculture. Over on the next page, he conceptualizes three powerful myths about Southern culture:
This countercultural South is widely acknowledged and almost totally (and perhaps willfully) misrepresented as superficial, curmudgeonly regional male style: southerners (read “white middle and upper classes”) are archconservative politically, dangerously aggressive in pursuit of violent sport, and excessively familiar in social relations. Southerners (read “rednecks” and “hillbillies”) are quaint premoderns, prone to taking the law into their own hands, but entertaining despite their doleful delinquencies of discipline and taste. And southerners (read “the black poor and working class”) are lazy and immoral, our principal criminal population.
About the counterculture, it wasn’t the first of those three that the nation worried about. It was the latter two. What do you do with people you can’t control or assimilate? They’re frightening and confusing, and our nation has pared both down to their worst qualities, making them into comic buffoons and irredeemable villains.
Here, Kirby does better than that, beginning with the tendency to “negotiate” within black working-class culture to seek solutions to their problems. In this first essay, he builds his discussion around the Mississippi John Hurt song “Stagolee” and a then-new book by Nathan McCall called Makes Me Wanna Holler. McCall’s autobiographical work confronted systemic racism in the mid-1990s, and Kirby uses it as a contrast to “my own white working-class neighborhood, about fifteen years earlier and hardly a mile and a half away, in the very same Portsmouth.” McCall had also come from a suburban background, but had made “choices” — an idea whose validity Kirby questions — that eventually put him in prison, yet he later ascended to become a truth-telling journalist and writer. Kirby asserts here that racism necessitated that African Americans, especially men, constantly had to “negotiate” social terrain full of near-impossible hurdles.
Kirby then does what historians do: analyze the past. Using this theme of negotiation, he moves quickly through slavery, the Civil War, and Reconstruction, and reminds his reader that “sharecropping, credit, and the expanding railway system” played major roles in how the working lives of Southerners were shaped. The factor of money changed everything. Previously, slaves had been paid no wages, the hands that produced the crops had no role in selling them, and later, wages to sharecroppers and millhands were often “paid” in company-store credit. Landlords and “furnish” merchants also managed to turn debt into wage slavery. Kirby uses the story of Ned Cobb and the almost-biracial Southern Tenant Farmers Union as his example for how those unjustly treated laborers negotiated the circumstances.
About two-thirds of the way through, Kirby takes us back to Nathan McCall and adds another now-familiar figure, Henry Louis Gates, Jr. For Kirby, Gates exemplifies something altogether different: an African-American man who has had "success in accommodating himself in mondo bianca." McCall's story shows one trajectory, and Gates' another. Both men had then-recently published memoirs, and Gates' book Colored People told of growing up in the Piedmont area of West Virginia. Taking a distinctly different approach, "Gates extends black experimentation with the language of identity with a view to reconcile black folks not only with themselves but also with all us whites who may be willing to read, to listen." Unlike McCall's surroundings in Virginia, where blacks were a significant group numerically, Kirby's explanation has the Gates family living in a tight-knit black community where their numbers were small enough not to be threatening to whites. This, of course, yielded differing courses for two young men who would become leading intellects.
Next are the white working-class men, who for obvious reasons took other routes into modern culture. In an essay titled “Retro-Frontiersmen,” Kirby begins with VS Naipaul’s A Turn in the South, published in 1989, and in particular, with a guy who Naipaul interviewed about rednecks. I had read Naipaul’s book fifteen or twenty years ago, so the passages were familiar to me. The guy had a lot to say, and most of it would make sense if you’ve known rednecks, but probably wouldn’t if you haven’t. Kirby uses the “colorful” descriptions to veer his way onto the subject of land management, more specifically forests, even more specifically the intentional setting of forest fires.
Carrying us backward to the time of Frederick Law Olmsted, Kirby threads a narrative that we don’t normally hear, one in which the white working poor were affected by and reacted to modern land-use practices. While modern people might look at rural Southern whites in the mid-nineteenth century and see only desperate poverty, Kirby reminds his reader that forests, left untended, provided many of their basics needs: wild hogs for meat, edible plants, wood for shelter. Initially, we learn, farmers fenced their crops to keep wild animals out. They handled farming with a method that mixed field rotation with slash-and-burn, and they grazed livestock on common land. Then, the paradigm changed to fencing in the animals, not the crops, while fertilizing and reusing existing fields. This shut many people out of the wild places that sustained their lives. Thus, a massive group of non-landowning, mostly white frontiersmen were turned into an antagonistic social force when they were shut out by “progress.” Then came deforestation by the paper and lumber industries, which was enabled by railroads. Later, when rural whites turned to setting forest fires as a means of revenge (or as they saw it, justice), county extension agents were put in place to see that “modern” agriculture methods were being used by all, instead of the old free-range ways that were being forcibly left behind.
Increasingly unable to scrape out a living, poor and working-class whites who didn’t own land had to join the wage-earning world where every option meant being controlled by the system. There was sharecropping and also alternatives that involved hard labor for low pay: factories, mills, lumber camps, and work crews. This way of life ran directly counter to the self-reliant freedom offered by subsistence farming, free-range grazing, hunting wild game, and foraging in forests. Here, Kirby returns once again to the habit of setting forest fires as a reaction to a system that forbade many from accessing the abundant resources. The way Kirby puts it in The Countercultural South, where African-Americans resorted to negotiating within their circumstances, for whites, there was nothing to negotiate. Powerful forces aligned against them to gather resources into a few hands, and there was little they could do- except burn it down.
The third essay in the collection, “‘Redneck’ Discourse,” offers Kirby’s culminating discussion with references to everybody from WJ Cash and VO Key to Ellen Glasgow, Flannery O’Connor, Walker Percy, and Erskine Caldwell. He also spends a few pages on country music, mentioning clean-and-acceptable singers like Randy Travis and Reba McEntire as well as David Allan Coe’s “Longhaired Redneck” and Jerry Jeff Walker’s “Redneck Mother.” Toward the end of the essay, Kirby shifts our attention to a series of lesser-known writers: Harry Crews, Harry Leland Mitchell, Linda Flowers, and Constance Pierce, so we can see how much the South has changed since the rural worlds were lost. By the mid-1990s, women were becoming prominent writers and were taking part in politics as delegates to conventions. Earlier in the chapter, he had given a page or two to the ways that Southern middle-class men were taking on the affectations of being a Bubba. By the end, Kirby is marveling at how even the Jaycees in the South had let women in. “So Dixie was a little behind, as usual, but not by much and not for long,” we read. For this new epoch, our trails wouldn’t be blazed by mythic noblesse oblige aristocrats. This time, it would be “realtors and developers, sellers of insurance (among many other things), members of learned professions, and striving occupants of the lower and middling levels of corporate infrastructure.”
As with the first two books I read for this fellowship — Rosenzweig's The Presence of the Past and Wilson's Judgment & Grace in Dixie — I finished The Countercultural South thinking, I should have read that years ago. The book was published when I was a junior in college, studying English, not history, but Kirby was writing about the world that I had experienced in the 1970s, '80s, and '90s. I've long been privy to sentiments that people with money and power don't do right by the working classes, many of those grumbling utterances coming from white men. These days, there's not much sympathy for the plight of white men, especially when the group turns Kirby's countercultural frustration into the notion that white men are victims of discrimination. However, it is also widely acknowledged that frustrated, white, working-class men identify heavily with the widely held belief that social and political forces align against them.
About Kirby’s discussion of black men, it is either incomplete or it is simply left to stand on its own. He acknowledges in the “Introduction” that the essays are not meant to make up a cohesive whole, but the second and third essays do go together. He makes some solid points about “negotiation,” but focuses a good deal of attention on McCall specifically, where the latter two essays cite wide-ranging examples from white culture.
For my part, Kirby’s book lends a bit of credence to the date I’ve chosen as a starting date for this project: 1970. My idea is that the Civil Rights movement of the 1950s and ’60s was a something like a mini-medieval period, with what came before being distinctly and obviously different from what came after. Kirby wrote in that third essay,
By about 1970, the “modernization” of the rural South was more or less complete, [Hank] Williams was long dead, and the country music audience was vastly transformed. The southern industrial working class was enlarged; and as we have already observed, the country population, while shrunken, still included the largest population of rural poor in the nation.
Basically, the region was different . . . but still the same, too.
In The Countercultural South, Kirby writes mainly about change, because that’s one of the marked features of the post-movement era: constant change. Kirby uses examples like country music to show that evolution. He also discusses land being rented to outsiders for hunting. The example I use, which fewer people think about, is air-conditioning, which changed the way houses were built, eliminated a main reason to be on the front porch, and caused people to start closing their windows. Even if the Civil Rights movement had never happened, air-conditioning by itself would still have altered the social and cultural fabric of the South indelibly. Change was coming, no matter what.
I want to end with the question I began with: If the rural worlds were lost, then what came next? I get tired of those who would say, “There’s no such thing as ‘the South’ anymore.” On the flip side, efforts to preserve in amber some mythic thing called “the South” seem just as futile as declaring it “dead.” The boosterism and consumerism of the new middle class seem, to me, to want to commodify the whole thing by discarding the unpleasant parts and accentuating the charming parts. That sanitized version might work for a feature film or a home design show, but in real life, it doesn’t. So what is it like, really, amid these changes? With any change, the people who get left behind and pushed aside aren’t going to like it, which does not make them villains, and if there’s a cohesive argument there, a counterculture will result. In the South, we can look at angry, disaffected people with chagrin, saying, “Why don’t they stop acting like that and saying those things?” Or we can be people “who may be willing to read, to listen” to understand what our systems and institutions are doing to their lives. | https://medium.com/nobodys-home/reading-jack-temple-kirbys-the-countercultural-south-58e1ddf28ab6 | ['Foster Dickson'] | 2020-12-20 17:22:06.590000+00:00 | ['Books', 'Reading', 'History', 'The South', 'Working Class']
Why Marketing Consistency Leads to Success | Put Up or Shut Up
If you’re starting a new job, workout program, diet, or relationship you need to realize that the key to success is always consistency. From a business perspective, the same is true for marketing.
Consistently promoting your business and its services/products is a way to drive growth quickly and efficiently.
Ways to market consistently
There are numerous ways to market your business today. Communication has never been easier, and with that we are at an all-time high in our ability to put our businesses and their brand messaging in front of consumers.
Free ways to market yourself or business
Social media
is without a doubt something you should be investing time on as a business owner. Being free, it’s a collection of networks that puts your content forward for people interested in whatever it is you have to offer.
Consistently sharing, broadcasting and interacting with your followers is going to mean big business in the future. We use five main social media networks to spread the word about Couple of Creatives. You can pick whatever networks you like on your own but it’s not necessary to cover them all. In the end, pick the networks you know you will use. If you neglect any, it will come off as unprofessional.
On these social media accounts, we post a number of times a week (3–5). To help with this, we use an application called Buffer which allows us to schedule and post on our behalf. I highly recommend spending a day or so a week scheduling content. It will seem like a lot of work at first but you will save more time in the long run.
Networking
Attending meetups, talks, and general network meetings are free outlets to learn more about other business and explain the benefits of your own to members of your local community.
Consistently making an effort to network keeps your name in the back of the minds of like-minded business owners. The time may arise when they need a good or service you provide, and they will think of you first.
Focus on making good impressions. Come up with an elevator pitch about what it is you provide and how you can benefit consumers.
Interact more online
Forums, blogs, comment boards, and more are free places to show your appreciation, opinion, and expertise in a given area. You’ll likely want to hone in on topics surrounding your own business. For example, I visit a website called designernews.co quite often. It is an open forum for designers to talk about news, inspiration, ask questions, and more. I like to comment on some topics as well as post my own thoughts on the forum.
Doing this means a few things:
My name gets spread a bit if the topic gains in popularity.
I can offer advice and appear as a professional, thus increasing my public perception.
The more I reply and offer my own thoughts, the more my name spreads.
All of these things combined can come from a simple interaction on a website. It’s pretty amazing to think that many people got to where they are by simply stating what’s on their mind.
Blogging/YouTube
Subscribe to our channel for more videos!
We are avid bloggers and now YouTubers. I myself dedicate time to Couple of Creatives’ YouTube channel as well as my own creation, Web-Crunch, which has its own YouTube channel and blog. (I recommend you go subscribe if you’re interested in design, development, and entrepreneurship).
Premium ways to market yourself or business
Ads
Ads are the obvious route when it comes to promoting your business by spending money. Depending on your business you may need to take out ads in specific areas. We are an online business primarily so we take out ads on Facebook, Instagram, and Google Display ads from time to time.
If your business is a storefront you can utilize the best of both worlds by doing a bit of both direct marketing and digital. Stumped about what those are? Read this for help.
Sponsorships
Radio spots, events, and printed collateral are great ways to spread the message about your business. These cost money but help accelerate your brand’s presence to many people of all types of backgrounds.
Radio sponsorships, for example, are run for a set amount of time. Business owners can pay for more spots as they please and in return, a radio station creates a radio reel about your business. Think of it as a dynamic testimonial and promotion in one that people hear quite often.
Signage
Billboards, flyers, storefront signs and much more are great ways to increase awareness of your business. These might not mean instant business but it creates an impression with people who see them. If they happen to need or want something you offer in the future, chances are they will think of your business first if they saw your sign.
So, Where to start?
If you’re new to marketing your own business you can do a lot these days on your own using the free and premium methods I mentioned prior. The main question to ask yourself is:
Do I have the time to market my own business?
We all want to be successful as well as have a personal life. Balancing those are forever a challenge but many find ways to make it work. Marketing takes a lot of work and again if not done consistently it can actually hinder business.
We actually talked very recently in a new video about how marketing incorrectly can also hinder business. Check it out
Let’s Rock n’ Roll
We recommend starting off by doing an audit of your business’s brand first and foremost.
Never rush into trying to advertise your business without first appearing professional, knowing what it is that you do and why. People need to be able to trust you and find the value in what you have to offer. Don’t give them any reason to feel uncomfortable.
If you’re not sure how to go about making your brand appear cohesive and professional you might consider hiring a professional. It just so happens this very challenge is something we are experts at. Let us give you a hand. 💡👫
If you’re not quite ready for help. Start with social media. Pick a couple of networks that you find yourself already on quite often. Facebook and Twitter are great places to create branded pages and promote your business for free. Start showing your followers you mean business by posting quality content often. Soon enough you’ll gain some attention and from there you can compound that attention into even more.
Have questions? We are here to help. Contact us today. | https://medium.com/couple-of-creatives/why-marketing-consistency-leads-to-success-74f688127042 | ['Andy Leverenz'] | 2017-03-24 19:38:37.348000+00:00 | ['Marketing', 'Practice', 'Marketing Strategies', 'Consistency', 'Social Media'] |
Why the Blackwater Pardons Are Far Worse Than Anyone Realizes | Why the Blackwater Pardons Are Far Worse Than Anyone Realizes
The world just watched the US disregard the value of human life, mock the rule of law, hand extremists recruiting material, and sabotage its foreign policy. It’s hard to appreciate the extent of the damage.
The Pardon of the Blackwater Contractors:
Nicholas Slatten, Paul Slough, Evan Liberty, and Dustin Heard
The White House included four names on its list of pardons who may be unfamiliar, but they should not go without notice. To pardon them is to set free the killers of innocent men, women, and children. The injustice will fuel the next generation of extremists, will sour relations with Iraq, and make others reluctant to partner with us.
This subversion of the rule of law and of the defense of human life sends the message that corruption is alive, that the US prizes some lives above others. In the US, if your sister is the US Secretary of Education, then your employees may shoot a child in the head and walk free.
This comes in the wake of a very public and embarrassing foreign assault that we completely missed, at the same time that the US has stunned the world with its nonexistent national pandemic response (97% of member states have a national response, per the WHO).
The pardon disproves the claim that the US is dedicated “to investigating violations of U.S. law no matter where they occur.” It tarnishes the “commitment of the American people to the rule of law, even in times of war,” as the Department of Justice claimed in 2014.
Instead, the US offers a half-hearted, incoherent justification for pardoning people guilty of killing civilians.
To all who were present around noon on the day the “Blackwater contractors unleashed powerful sniper fire, machine guns, and grenade launchers on innocent men, women, and children,” you will be forgiven for doubting that this is a nation that respects the rule of law or human life. | https://medium.com/discourse/why-the-blackwater-pardons-are-far-worse-than-anyone-realizes-99a5b23872 | ['E. Rosalie'] | 2020-12-25 21:10:38.440000+00:00 | ['Leadership', 'Politics', 'Government', 'World', 'Science'] |
Google Place Autocomplete API With Retrofit, Dagger, and Coroutines | I thought that many developers could face the same situation that I have just gone through. The problem was that there is no article available on the internet related to Place Autocomplete API with MVVM and Retrofit.
If you are here and you got stuck while implementing the Place Autocomplete API with Retrofit and MVVM, you are in the right place.
This going to be a very long tutorial because there is so much learning related to Android development. As you can see, I am using Dagger/Retrofit/Data Binding Library, and much more. So let’s begin. If you get stuck anywhere, feel free to comment.
“The more that you read, the more things you will know. The more that you learn, the more places you’ll go.” — Dr. Seuss
Let’s get started. | https://medium.com/codechai/google-place-autocomplete-api-with-retrofit-dagger-and-coroutines-6e24dfc26a7 | ['Mustufa Ansari'] | 2020-12-23 11:57:57.495000+00:00 | ['Java', 'Kotlin', 'Android', 'AndroidDev', 'Android App Development'] |
13 Pointless Things You Do Everyday Because You Still Believe it Works | Chasing your dreams without ever chasing the actions required
Photo by Yohann Lc on Unsplash
2021 is just around the corner. You tell your coworkers this is THE year you’ll finally stick to your New Year’s resolutions. Because this year is different, you explain. But you’re really just trying to convince yourself more than them.
Because you believe deep down that this isn’t the year. Underneath the surface you believe you won’t change. You can’t change. You’re a failure.
You tell your friends you want to hit the gym every day. You’re going to write a book. You’re going to complete that diet you’ve tried 45 times but never really stuck to for longer than 3.5 days. That diet you bragged to your friends about, claiming it completely changed your life after only 2 days.
You tell everyone this because it gives you a sense of false-responsibility. By spreading the word of how great you WILL be, you feel obligated to achieve that task. Because then you couldn’t possibly face your friends ever again.
But it never works like this, does it? Your coworkers will still like you. Your friends will still be there. It doesn’t fail because you’ve told everyone and anyone willing to listen about your goals. It never works because you don’t want it to work, on some level.
Because you don’t believe it can work.
You haven’t found a reason to believe it can work. To believe in the goal. In yourself.
Movie stars don’t lose 70 lbs and get jacked because they set New Year’s resolutions. They do it because they have to. Because they believe they have to. And they believe they can.
And they have a reason to believe.
Just because that reason might be $5 million dollars for them, doesn’t mean you can’t find an equally important reason for yourself.
When you set out to make your goals this year, don’t tell anyone.
Hell, don’t even make a New Year’s resolution.
Find a reason to achieve the goals you’ve always wanted to accomplish.
Make that reason everything you focus on. What you wake up to. What you dream about. What you think about when you glance at your cell phone 300 times a day.
You need a reason to go to the gym every day. You need a reason to stop eating a bag of chips every other night. You need a reason to feel proud enough of yourself to look in the mirror and say “I believe.”
When your coworkers, friends, and family inevitably bring up the exciting topic of resolutions over the next batch of Christmas cheer, tell them simply:
“Fuck resolutions.”
Find a reason to believe you can do something instead.
The rest will follow.
Believe me.
And then believe yourself. | https://medium.com/illumination/13-pointless-things-you-do-everyday-because-you-still-believe-it-works-cde4c1197f49 | ['J.J. Pryor'] | 2020-12-03 13:13:46.516000+00:00 | ['Motivation', 'Resolutions', 'Self Improvement', 'Habits', 'Life Lessons'] |
Functional Programming With Java: map, filter, and reduce | map
Stream#map(Function<T, R> mapper) is an intermediate stream operation that transforms each element. It applies its argument, a Function<T, R>, and returns a Stream<R>:
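As a quick, runnable sketch (the words and the collector here are just illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

class MapExample {
    public static void main(String[] args) {
        List<String> words = List.of("hello", "world");

        // map transforms each element: Stream<String> -> Stream<Integer>
        List<Integer> lengths = words.stream()
                                     .map(String::length)
                                     .collect(Collectors.toList());

        System.out.println(lengths); // [5, 5]
    }
}
```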
That’s the gist; map is pretty straightforward to use. But there are specialized map functions depending on the types.
flatMap
Stream#flatMap(Function<T, Stream<R>> mapper) is the often-misunderstood sibling of map.
Sometimes the mapping function will return an arbitrary number of results, wrapped in another type, like java.util.List:
var identifier = List.of(1L, 5L);
Function<Long, List<String>> mapper = (id) -> ...;

identifier.stream()    // Stream<Long>
          .map(mapper) // Stream<List<String>>
          ???
Most likely, we want to work on the list’s content, not the list itself. By using flatMap , we can map the Stream<List<String>> to a Stream<String> :
var identifier = List.of(1L, 5L);
Function<Long, List<String>> mapper = (id) -> ...;

identifier.stream()                   // Stream<Long>
          .map(mapper)                // Stream<List<String>>
          .flatMap(Collection::stream) // Stream<String>
          ...
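To make the whole pipeline concrete, here is a runnable sketch with a hypothetical mapper that expands each id into two strings:

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

class FlatMapExample {
    public static void main(String[] args) {
        List<Long> identifier = List.of(1L, 5L);

        List<String> result = identifier.stream()              // Stream<Long>
            .map(id -> List.of("id-" + id, "name-" + id))      // Stream<List<String>>
            .flatMap(Collection::stream)                       // Stream<String>
            .collect(Collectors.toList());

        System.out.println(result); // [id-1, name-1, id-5, name-5]
    }
}
```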
Optional<T>#flatMap
In the case of java.util.Optional<T>, the flatMap method is used to flatten the Optional back to its content:
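A small runnable sketch of the difference (the value is illustrative):

```java
import java.util.Optional;

class OptionalFlatMapExample {
    public static void main(String[] args) {
        Optional<String> word = Optional.of("stream");

        // map wraps the mapper's Optional result again: Optional<Optional<Integer>>
        Optional<Optional<Integer>> nested = word.map(s -> Optional.of(s.length()));

        // flatMap flattens it back to the content: Optional<Integer>
        Optional<Integer> flat = word.flatMap(s -> Optional.of(s.length()));

        System.out.println(flat.get()); // 6
    }
}
```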
Actually, the implementation of flatMap does even less than map, by not repackaging the mapper’s returned value into a new Optional.
Value-type map / flatMap
Until Project Valhalla with generic specialization arrives, handling value types together with generics is always a special case.
We could rely on auto-boxing, but we can’t deny that there’s an added overhead. The JDK includes the specialized Stream types IntStream, LongStream, and DoubleStream to improve dealing with value types.
If our mapping function returns one of the related value types, we could use the corresponding mapTo...(mapper) / flatMapTo...(mapper) to create a value-type-based Stream.
This way, we can get a real array of long , without intermediate boxing:
long[] hashCodes = List.of("hello", "world")
                       .stream()
                       .mapToLong(String::hashCode)
                       .toArray();
forEach
As mentioned before, map is an intermediate operation. In many other languages, map is also used just to perform an action on every element, with the return values simply discarded.
We can use map just like that too, but there's a better way.
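A runnable sketch of that better way (the values are illustrative):

```java
import java.util.List;

class ForEachExample {
    public static void main(String[] args) {
        List<String> words = List.of("map", "filter", "reduce");

        // forEach is a terminal operation: it consumes each element and returns nothing
        words.stream()
             .map(String::toUpperCase)
             .forEach(System.out::println);
    }
}
```

Because forEach is terminal, the stream is consumed afterwards; for transformations that should yield a value, prefer map plus a collector.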
By utilizing the terminal operation Stream#forEach(Consumer<T>), we apply the consumer on every element of the stream. | https://medium.com/better-programming/functional-programming-with-java-map-filter-and-reduce-d0df1092d6ee | ['Ben Weidig'] | 2020-09-30 18:46:46.956000+00:00 | ['Software Development', 'Coding', 'Functional Programming', 'Java', 'Programming']
Human-Centered Design in Real Time | In March, COVID-19 reached pandemic status. Hospitals faced a sharp increase in patients and an equally sharp decrease in resources. Supply chains were significantly delayed, and hospitals overrun with COVID patients were creating bed capacity by any means necessary, from repurposing existing units to creating new ones in tents or ships.
Like hospitals around the world, Swedish Health Services — the largest nonprofit provider in Seattle — had a plethora of doctors, nurses, admins, and other hospital decision makers who needed to track and optimize available beds and clinical supplies like masks and ventilators. They also needed to track patient status and staffing needs. This kind of on-hand and in-the-moment information creates situational awareness to help workers make fast, informed decisions.
Working with healthcare experts, a task force of Microsoft designers, engineers, product managers, researchers, and content designers came together. From end to end, this collective discussed user needs and outcomes, data modeled the business workflows, and designed UX mockups, visual assets, and user guides. They held two-hour design sprints on Microsoft Teams, using mockups in PowerPoint to get everyone on the same page. Design and engineering then used those PowerPoints to rapidly prototype a solution and start another iterative cycle.
Members of the Swedish staff provided expertise that the team folded into the designs in nearly real-time. This insight about necessary levels of granularity and timeliness helped create designs to accurately capture staffing needs. For example, there might be Registered Nurses (RNs) without assigned primary care patients who have more availability. Using a solution that marks nurses as Assigned RN or Unassigned RN, a hospital could quickly re-assign work as it received more patients or other needs shifted.
This also gave frontline workers a voice to request additional support from supply managers, respiratory therapists, or any other clinical or non-clinical roles. Tracking these workers’ requests alongside things like burn rates of masks and other resources enhances situational awareness among management decision makers and helps analyze trends so organizations can best care for patients and keep workers safe.
Working together, an emergency response solution for individual hospital systems was ultimately created. While this is very much a first version that the team is continuing to evolve, several major Washington hospitals are already using it. Now that it’s also globally available in 11 languages, hundreds of organizations worldwide are adopting and exploring it, too. You can learn more about it here and it’s available here.
For government: coordinating regional and statewide healthcare
With the first solution, the team aimed to meet the needs of single hospital systems. With the government, however, they had a different design problem — they needed to coordinate multiple hospital systems. State governments must track and plan logistics across all healthcare providers for their region to allocate critical resources where they’re needed most.
To help solve this, our task force held another multi-day HCD sprint with government workers at the Washington State Health Department to create a separate solution that focused on the state’s most pressing needs. Forget what the tech should be, forget what the screens should look like — what outcomes does the state ultimately want to achieve? And based on that, what capabilities do they need, and what are the corresponding personas and user journeys?
Starting with an empty mind map, the design team began identifying and internalizing the needs of medical professionals and government workers. They sketched workflows, held design critiques, and created end-to-end user journeys. Then came rapid cycles of user feedback and design iteration, something open-source design processes easily facilitate. | https://medium.com/microsoft-design/human-centered-design-in-real-time-9dff578a0fba | ['Rachel Romano'] | 2020-05-01 16:27:23.188000+00:00 | ['Design', 'User Experience', 'Microsoft', 'Apps', 'Healthcare'] |
Stop Using Your Left Hand | Stop Using Your Left Hand
Sometimes it can hold you back
Most people are really good at something. And if they are not, then they most likely could get really good at something, with some time, practice, and initiative.
Most of us have some type of strength. Not always physical strength, but something else.
Some of us are smart. Some of us are great with people. Others can work well with their hands. Or are born to lead. Or are responsible.
And if you don’t have any apparent strengths — you’re well-rounded (which is a strength).
The number of strengths is too great to completely list.
But I have noticed that most people don’t use their strengths very often.
They focus more on weaknesses — on what they lack — rather than what they are really good at. | https://johnmashni.medium.com/stop-using-your-left-hand-2b48bff8d8b4 | ['John Mashni'] | 2019-06-04 13:58:04.873000+00:00 | ['Leadership', 'Self Improvement', 'Life Lessons', 'Personal Development', 'Entrepreneurship'] |
Man With Machete Thwarted By Kindergarten Nurse | Teddy-bear’s picnic
On 8 July 1996, just four months after the Dunblane school massacre in Scotland, another attack took place.
Dunblane had witnessed sixteen children and their teacher murdered by a crazed gunman. This time, one brave, selfless nurse stood in the way of a machete wielding lunatic.
Nobody had seen the man run along the fence line. A small grey bag slung over his shoulder. The bag contained a sixteen-inch machete, a smaller knife, a bottle of petrol and two iron bars.
He was intent on causing harm. When he reached the fence, he swiftly pulled out his machete and slammed it down upon the head of Wendy Willington. No warning. Time froze as Wendy dropped her carrier bag and slumped to the ground. Blood gushing from the wound.
Teacher Dorothy Hawes shouted the order for everyone to run inside. The children of St. Luke’s School began to scream. The end-of-term teddy bear’s picnic was officially over.
Surinder, young Reena’s mum, who was happily chatting away to her friends and fellow mothers, Azra and Wendy, had turned away when the machete came smashing down. The blow struck Surinder in the head and she too collapsed to the floor.
The man, tall, black and wearing a trilby with a chin strap, then brought the blade down upon Azra. Three women now lay bloodied on the ground from head wounds. The man lurched over the fence trying to grab a child, but she was too fast.
All around Lisa children clustered. The bubbly 21 year old nursery nurse remained surprisingly calm despite all the screams around her. She saw Dorothy push as many children as she could back inside the school.
Lisa did the same, shoving the children through the open doorway only to realize that not all the children were there. She did a quick head count and ran back outside.
The man immediately noticed Lisa and rushed to get her, machete raised and ready to swing. As the blade came down, Lisa lifted both arms to protect her head from the descending knife.
The blade struck. Her left arm and hand were cut through. Her ulna bone protruding through her skin with the hand hanging from its tendons. Adrenaline surging through her body, Lisa felt no pain as her cardigan quickly became soaked in blood.
“The man’s lips were drawn back and there was the most frightening, angry grimace on his face,” Lisa recalled. “It was as if he was laughing.”
Angered by her intervention, the man slashed once more at Lisa, this time aiming his blade at the young Francesca. The little girl was clutching Lisa’s skirt tightly as she was being herded to safety.
Lisa threw out her injured left hand to divert the blow but not in time. The blade skimmed past her fingers and slashed poor Francesca across the face. Her whole left cheek was brutally exposed from ear to mouth. The child’s eyes glazed over as she collapsed.
The man turned his attention to the remaining children outside.
In the toy shed, Philippa Parlor was in a desperate battle to keep the man out. Her full weight behind the door with three children crying and screaming behind her. The man tried to wrestle the door open, pushing and shoving, but Philippa held on.
Lisa had now managed to get almost all the children into the nursery building. She could see the man at the far side trying to breach the toy shed. And there, in the playground, were two remaining children, Marium and little Ahmed.
Lisa had a decision to make. A pivotal moment. Should she remain and lock the door ensuring the safety of the children behind her? Or should she risk all trying to reach the remaining kids stranded and helpless in the playground. She didn’t hesitate. Lisa ran back out to the playground.
Marium and Ahmed raced towards Lisa, the man right behind them raising his machete to strike again. Ahmed tripped over.
The machete came down. Lisa swooped on Ahmed, scooping him up as the blade crashed down. Lisa had lifted her uninjured right hand to stop the blow. The machete sliced into her hand and cut open the top of Ahmed’s head.
Lisa ran. Ahmed in her arms, Marium at her feet, the vicious lunatic one step behind. Lisa reached the door and slammed it shut only for the man to wedge his foot in the door.
Blood was running down Ahmed’s head. He was still conscious but very quiet. Again, Lisa had a moment of calm. Despite her wounds and the screams, she was able to assess the situation.
Across where she was, at the reception classroom, sat 25 terrified children, temporary teacher Linda, injured mother Wendy and Francesca with the cheek wound. The door was slowly being forced open by the man.
Lisa’s back firmly pushing but her strength wasn’t holding up. In front were six or seven children including Reena and Surinder’s daughter whose right cheek had also been sliced open. Lisa knew she couldn’t lead the man towards the other children at the reception classroom.
Placing Ahmed on the floor, covering him in clothes so the man wouldn’t be able to find him, Lisa gathered the children towards her. She moved away from the door and bent over the children, trying her best to protect them. She thought this would be it. This would be the moment she dies and here would be her final resting place.
The man burst into the room and struck the machete down onto Lisa’s back. Once, twice he hacked before turning back to the open door. He must’ve seen another target. Adrenaline coursing through her body, Lisa was able to stand up and guide the children down the hallway.
Unbelievably she felt nothing but a deep throbbing from her wounds. Ahmed remained hidden under the clothes with only his feet protruding.
The man saw the movement and turned back once more.
The machete again sliced into Lisa. Her back now suffering more damage as she continued to run, herding the children to the nursery. Blood was pouring from her head wound. Blood was pouring out of her arm wounds. Blood was pouring down her back. She was a bloody mess…and yet she still fought. Grabbing a tray, she flung it back at her assailant as she raced to a far door which led to the main entrance of the school.
Her blonde hair matted red, Lisa stumbled through the door where she was greeted by the Head Teacher. The police were on their way along with an ambulance. Teaching was still being conducted in the other classes.
The attack had lasted less than five minutes and nobody except the nursery school were aware of what had just happened. | https://medium.com/lessons-from-history/man-with-machete-thwarted-by-kindergarten-nurse-a6003b62499f | ['Reuben Salsa'] | 2020-08-06 23:13:23.036000+00:00 | ['Schools', 'Salsa', 'Heroes', 'History', 'Writing'] |
13 Clever Money Lessons My Pain in the Ass Boss Taught Me | 13 Clever Money Lessons My Pain in the Ass Boss Taught Me
This guy was the devil’s uncle and he can teach you a lot.
Picture by OzgeCebeci
If the devil had an uncle then this guy would be it.
Despite the obvious leadership lessons he taught me about how not to lead humans, he taught me a lot about money. The moment he became my boss I knew it would be over quickly. Over the short time I spent working for him, I took note of all the money lessons he imparted to me before I became the dearly departed.
The cleverest money lessons come from financially rich people who are poor in all other areas of life. Money can show you the ugly side of life which you can learn from. Here’s what the devil’s uncle taught me.
Invest in yourself above all else.
That’s why he was an ass. All the money he made from holding various senior leadership roles was invested in meat pies, beer and real estate. Not a dollar was spent on self-education.
He had the self-awareness of a 5-year-old. He was emotionally unintelligent. His jokes were uncomfortable. His communication style was that of a little boy playing in a sandpit and wanting to destroy all the other children’s sandcastles.
All he would have had to do was spend some of his money on understanding himself. He had childhood trauma; it was written all over his face. He clearly had lots of romantic relationship issues over the years.
You can’t prevent tragedy, but you can learn to deal with it by investing your money in yourself.
You can be rich in one area of your life and poor in others.
He was rich in the work he did. He found a way to “step over dead bodies” (as he called it) and get financially rich off a corporate fat cat salary.
To his credit he was fairly rich in the family area of his life too and prioritized his wife and children. In the areas of friendship, health, fitness, entertainment and travel, he was incredibly poor.
When you meet a financially wealthy person, take a look at what areas of their life they had to neglect in order to get the money. It’s rare someone is a billionaire and is 10/10 in all other areas of their life.
The high-net-worth individual title isn’t impressive.
This is a title he loved to drop. He wanted everyone in the office to know that his bank deemed him to be a high-net-worth individual. This was the first time I had come across this strange version of bragging.
Your financial status is nobody’s business.
Shouting from the rooftops how much money you have is a great way to get robbed and cheated by an email from a Nigerian Prince.
It’s not how much money you have. It’s what you do with it. You can do meaningful stuff with money. This is what people remember. This is what attracts people into your life that can help take you to the highest of highs and experience a sense of joy you didn’t know existed.
Titles are for the factory worker industrial age. In the 2020s it’s all about how your work contributes to the evolution of society. The meaning of your work outweighs the money it places in your bank account.
Bank digit competitions don’t beat the Olympics.
He was a competitive son of a gun. Everything was a game with a guaranteed loser — this is how he thought about money. You were either a rich winner or a poor loser. There was no in-between.
The number of zeroes at the end of your bank balance is meaningless. You can have lots of zeroes in your bank account and sit at home alone, while battling divorce, with estranged children who never want to speak to you again, and still become incredibly unhappy.
Life isn’t a competition.
A competitive mindset works against you. Why? You can’t always be the money-making winner. At some point it will be your turn to lose. I learned that when this terrible boss fired me. I had to spend my time eating shit for a while and looking for a new job.
Replace competition with the art of collaboration.
You’ll make a lot more money.
A Porsche doesn’t make you interesting.
One day I came into the office early. We went out for coffee which was never fun. He told me a story about how he was so rich and successful and threw money at a Porsche like it was nothing.
I was supposed to be interested by his Porsche porn. It didn’t make sense. It was a piece of metal with four wheels exactly like my moderately priced Honda Civic. No matter how many times he shared his Porsche story to impress people, nobody was interested.
He ended up going back to a Toyota and never talking about cars again. You don’t need to be a slave to a luxury car and get into debt.
Stick with the Toyota.
Stay away from mortgage motivation.
The moment this boss went from sweetheart to the devil’s uncle was when he casually dropped the phrase “mortgage motivation.” I’d never heard this phrase before. He explained to me before I started hiring people that I must only choose candidates who have a mortgage.
I wasn’t familiar with this interview question. That’s when I saw his true red devil colors.
“You need to select people who have a mortgage so if we need extra revenue on a particular month we can put our fingers into their backs and drive them harder. A mortgage is a form of motivation a person can’t ignore. They either comply or they can’t pay their mortgage and their family will be disappointed with them. Their partner may even leave them.”
He then told the story of how he was going to bring across his mate from another company to help with managing. Apparently, his mate was an expert at sticking fingers into people’s backs as they made phone calls to customers.
Mortgage motivation is modern-day human slavery. Don’t fall into the trap.
Avoid debt if you can.
Take on a level of debt you can easily repay (if you must).
Avoid employers who promote mortgage motivation.
The best money motivation is to earn money to invest into things that give you meaning in your life (and in the lives of others).
Your job title doesn’t change your salary.
Job title circus made no sense to me. It looked as though he was getting pay rises every six months. His job title kept getting fancier. That’s when a colleague alerted me to the game: you can change your job title on LinkedIn as many times as you want, to whatever you want.
Your job title doesn’t equal the amount on your paycheck.
If you buy books then actually read them.
A financial education can make you clever with money. Finance books are a great way to learn about money — from hedge fund managers, Wall Street tycoons, CEOs, tech company founders, etc.
Every day I would come to work and see books on my boss’s desk. The books seemed to randomly change a lot. I learned that leaving books on your desk is supposed to make you look smart. What made me laugh was when I found out he hadn’t actually read any of the books. Use your money to buy books and then actually read them.
Every dollar you spend learning about money is the equivalent of roughly ten dollars you don’t have to work for in the future.
Nobody cares how many homes you’ve got.
The number of homes you have doesn’t make you rich. My bad boss had lots. The funny thing was the homes were all in terrible suburbs.
Location, location, location is what counts with property. A good property in a terrible suburb with a high crime rate won't go up as fast as a modest property in a leafy green suburb with no bank robbers.
You can have a lot of homes and still be an asshole.
The way you treat people can make you a lot more money than investing in lots of properties and treating people like human slaves.
Don’t live on a golf course.
If you live on a golf course you’re not rich. You’re stupid. Who wants golf balls hitting their house?
Who wants to give directions to a kid's birthday party on a golf course? Who wants to have the stripy pants club hanging around the front of your house?
My bad boss lived on a golf course. It was situated in one of the most dangerous suburbs in my hometown. He paid top dollar to live in a dangerous area and brag about golf course life. Everybody at work knew about his home because he made it a point of telling everybody. It made no sense.
You can live on a golf course, but if the suburb is more dangerous than Compton, California, where Snoop Dogg grew up, how is that a good financial decision? It isn't. It's also the reason why his tin pot convertible Porsche kept getting broken into, forcing him to sell his ego on a car website for $100K.
Amenities to your home don’t make you rich. The family in the home do.
Throw wads of cash at your health.
He looked like a grandpa with grey hair, even though he wasn’t old. His eyes were devil red. The white part of his eyes was pineapple yellow. His skin looked like a shriveled up paper bag you put mushrooms in.
His beer gut would hang over his belt, where all the unreleased emotion he was too afraid to let go of was stored. The financial lesson he taught me was simple: look after yourself.
Spend money on your health.
Buy lots of fruit and vegetables.
Drink water so you flush your system.
Lower your alcohol consumption or give it up entirely.
Spend money on going to a gym or doing group exercise in the sun.
The state of your body determines your energy levels. Low energy produces angry bad bosses. You can be far more productive and make a helluva lot more money when your brain isn’t foggy.
Energy is life. Energy attracts the people to you who can help you earn a decent living, so you can stress less and work less.
Replace hate with love and you’ll be filthy stinking rich.
My bad boss was an angry grandpa with a one-way ticket to the mental hospital. His financial problem was hate.
Hate makes you poor. You seek revenge and treat others badly when you’re a hater with unresolved psychological issues. Love makes you a lot of money. People who live their life with love may not have as much money in their bank account, but love makes their life worth living.
Love conquers all — even your financial goals.
Help other people make money.
It’s way more fun. Seeing people hit their financial goals and provide for their family, thanks to your help, is one of the best feelings in the world. That’s what my bad boss inspired me to do. Every bad boss can learn to become a human again.
How much money you make depends on how committed you are to working on yourself. | https://medium.com/the-ascent/13-clever-money-lessons-my-pain-in-the-ass-boss-taught-me-156be5a04172 | ['Tim Denning'] | 2020-12-12 15:02:23.480000+00:00 | ['Money', 'Leadership', 'Books', 'Life Lessons', 'Work'] |
6 of the Latest UX Trends in Web and Mobile Design | The user experience (UX) of your website, web application or mobile app are absolutely vital. Good UX tells the user whether your company is trustworthy, whether you have their best interests in mind and how hard you’re going to work to cater to their needs.
When it comes to UX design, there is no room for cutting corners. You need to get it right or you risk turning customers away. That’s why we’ve put together this list of the latest web and mobile design trends, so you can find out what your users are expecting from your UX.
UX Design
Different Types of Media
One of the biggest challenges that UX designers have today is figuring out a way to keep users engaged and on the page/in the app. Luckily enough, there is one simple way of doing this: use different types of media.
The most shared types of content across the web are pictures and videos. Users love the visual medium as it can help to illustrate a point or just give them something else to look at other than giant walls of text.
That’s why your UX should accommodate all different sorts of media. You can include videos and images that will keep users on your page and not send them to someone else’s.
Micro Interactions
Micro interactions also keep users engaged. Microinteractions.com explains that “Every time you change a setting, sync your data or devices, set an alarm, pick a password, log in, set a status message, or favorite or “like” something, you are engaging with a microinteraction.”
Micro interactions can be as minuscule as the pull-to-refresh feature in the Twitter mobile app, or the ability to ‘like’ a post on Facebook. Although they are small and barely noticeable, they can make a huge difference to the UX.
One reason that we love micro interactions so much is because they allow us to have a tangible effect on the apps themselves. We feel like we are in control and in some cases they can make an app more practical or more fun to use.
Simplicity
Another way that you can boost user engagement is by keeping your UX simple. This is easier said than done but now that users have shorter attention spans than goldfish, it’s important that you don’t distract them with unnecessary design elements.
By keeping things simple, you can draw the user’s eyes to where they really need to be. For example, one way of keeping things simple is to use a lot of white space. In using white space, you can use bolder colors or fonts to capture their attention.
On top of that, making your UX simple also keeps it ‘skim friendly’. Users don’t like it when websites and mobile apps are cluttered — mostly because they don’t have time to look at everything. If you keep your UX simple, not only does it allow them to get to the need-to-know information quicker, but it will also show them that your website or app is a trusted resource.
A win-win by all accounts.
Mobile Friendly Sites
One of the biggest things that website owners overlook is whether or not their site is mobile friendly. It might look fantastic on a desktop but how does it look on a smartphone or tablet?
Website owners will need to find an answer to that soon, as Google’s upcoming update will actively punish sites that aren’t mobile friendly. The search engine giant already adds a helpful ‘mobile friendly’ banner to sites in their search results, but soon websites will find themselves pushed down the rankings if they don’t look good on mobile. For many websites, this will cause a huge dent in traffic and could seriously harm revenue.
Google’s mobile friendly changes went live on the 21st of April, 2015. You still have a chance to take Google’s mobile friendly test and see if your site is designed for mobile devices.
Material Design
Material Design is a design language created by Google and used by the developers of Android apps. Its core principles are bold typography, bright colours and physical surfaces and edges.
Google already uses Material Design in most of its own apps such as YouTube, Gmail, Google Drive, Google Maps and all Google Play-branded apps. It has proven to be a success in these apps as users like the way that it resembles real materials (Google based Material Design on paper and ink). Users like that everything is clear and they enjoy the fact that the visual experience feels sleek.
Another big benefit of using material design for your UX is that it also scales tremendously well too. Google has revealed that one of their biggest goals with Material Design was to create “a system for design that would work across all our platforms” and with the mobile and desktop versions of their apps using Material Design, they seem to have done just that.
A/B Testing
However, while these trends may be popular right now, it’s also important that you test them out. One effective method of testing is A/B testing in which you test two different versions of a product (or in this case, UX design) and see which ones your users like best.
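As a quick, hedged illustration of how such a comparison can be evaluated (the traffic numbers below are made up), a two-proportion z-test tells you whether variant B's conversion rate is meaningfully better than variant A's:

```python
# Two-proportion z-test for an A/B test (illustrative numbers only).
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# variant A: 120 conversions out of 2400 visitors; variant B: 160 of 2400
z = two_proportion_z(conv_a=120, n_a=2400, conv_b=160, n_b=2400)
print(round(z, 2))  # |z| above ~1.96 suggests significance at the 5% level
```

A result like this only suggests variant B wins; in practice you would also check sample size and practical significance before shipping the change.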
You can use surveys, landing pages or feedback forms to do this. The gathered data can help you implement changes and tweaks that suit your users’ needs. | https://medium.com/elpassion/6-of-the-latest-ux-trends-in-web-and-mobile-design-9aed98fcc507 | ['Michał Ptaszyński'] | 2016-12-15 13:23:52.187000+00:00 | ['Design', 'Mobile App Design', 'UX Design', 'Mobile Design', 'Web Design'] |
Brown Paper Packages | The brown paper wrapping would be wrinkled and sueded by the time our parcel made it across the country to my eager hands, which immediately went to work solving the overzealous tape-job my oma would perform to secure packages from Northern Ontario to her grandkids out West.
There was always the same tissue paper with a scent of something sweet and new, like the petrochemical sheen of new synthetic fabric. Invariably, the parcel would have several smaller packages inside, and for Christmas, a thrice-foiled stollen with a marzipan centre, made and preserved at least 6 weeks prior. I would spend $50 for the elaborate ingredients to make a stollen 20 years later only to make something that tasted of her absence.
The inner packages always had some give to them and it was satisfying to feel the contents yield under my hands, plush in a plastic bag under the crackle of paper. Was it pyjamas this time? Maybe it was a new dress or hand-knitted doll clothes for my Cabbage Patch doll, which she drove all the way to Toronto in the winter to get, when most kids were not as lucky that year. The tears come readily when I picture her driving through the treacherous snow. I am filled with the aching humanity of what love drives us to do. I only hope to live every day to deserve the generosity.
Christmas does not feel the same without brown paper packages, the dry warm texture like a skin. But packages from oma had a special resonance coming from her hands, so gentle the way they cupped my chin in old photos. Her hands cradled the heart and textures of tradition that I live to give back. | https://medium.com/scribe/brown-paper-packages-a7d65cb1b63d | ['Jessica Lee Mcmillan'] | 2020-12-05 22:15:13.554000+00:00 | ['Nonfiction', 'Memoir', 'Christmas', 'Prose', 'Nostalgia'] |
Time Series Anomaly Detection With LSTM Autoencoders- an Unsupervised ML Approach | Artificial Intelligence and Anomaly Detection
Time Series Anomaly Detection With LSTM Autoencoders- an Unsupervised ML Approach
How to set-up an anomaly detection model
Image by Author
Anomaly detection here means spotting when actual results differ from predicted results in price prediction. Real-life data is often streaming, time-series data, where anomalies carry significant information in critical situations. In anomaly detection, we are interested in discovering abnormal, unusual or unexpected records; in the time-series context, an anomaly can be detected within the scope of a single record or as a subsequence/pattern.
A time-series predictive model, estimated on historical data, helps us predict future prices from current data. Once we have the prediction, we can detect anomalies by comparing it with the actuals.
Let’s implement it and look at its pros and cons. Hence, our objective here is to develop an anomaly detection model for Time Series data. We will use neural-network architecture for this use case.
Let us load Henry Hub Spot Price data from EIA. We have to remember that the order of the data here is important and should be chronological, as we are going to forecast the next point.
import os
import math
import warnings
import eia
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import DataFrame
from sklearn.preprocessing import RobustScaler
from sklearn.metrics import mean_squared_error

warnings.filterwarnings('ignore')

print(os.listdir("../input"))
print("....Data loading...."); print()
print('\033[4mHenry Hub Natural Gas Spot Price, Daily (Dollars per Million Btu)\033[0m')
def retrieve_time_series(api, series_ID):
    series_search = api.data_by_series(series=series_ID)
    spot_price = DataFrame(series_search)
    return spot_price

def main():
    try:
        api_key = "....API KEY..."
        api = eia.API(api_key)
        series_ID = 'xxxxxx'
        spot_price = retrieve_time_series(api, series_ID)
        print(type(spot_price))
        return spot_price
    except Exception as e:
        print("error", e)
        return DataFrame(columns=None)

spot_price = main()
spot_price = spot_price.rename({'Henry Hub Natural Gas Spot Price, Daily (Dollars per Million Btu)': 'price'}, axis = 'columns')
spot_price = spot_price.reset_index()
spot_price['index'] = pd.to_datetime(spot_price['index'].str[:-3], format='%Y %m%d')
spot_price['Date']= pd.to_datetime(spot_price['index'])
spot_price.set_index('Date', inplace=True)
spot_price = spot_price.loc['2000-01-01':,['price']]
spot_price = spot_price.astype(float)
print(spot_price)
Raw data visualization
print('Historical Spot price visualization:')
plt.figure(figsize = (15,5))
plt.plot(spot_price)
plt.title('Henry Hub Spot Price (Daily frequency)')
plt.xlabel ('Date_time')
plt.ylabel ('Price ($/Mbtu)')
plt.show()
print('Missing values:', spot_price.isnull().sum())
# checking missing values
spot_price = spot_price.dropna()
# dropping missing values
print('....Dropped Missing value row....')
print('Rechecking Missing values:', spot_price.isnull().sum())
# checking missing values
The common characteristic of different types of market manipulation is an unexpected pattern or behavior in the data.
# Generate Boxplot
print('Box plot visualization:')
spot_price.plot(kind='box', figsize = (10,4))
plt.show()
# Generate Histogram
print('Histogram visualization:')
spot_price.plot(kind='hist', figsize = (10,4) )
plt.show()
Detecting anomalous subsequence
Here, the goal is identifying an anomalous subsequence within a given long time series (sequence).
Anomaly detection is based on the fundamental concept of modeling what is normal in order to discover what is not….Dunning & Friedman
Pre-processing
We’ll use 95% of the data and train our model on it:
Next, we’ll re-scale the data using the training data and apply the same transformation to the test data. I have used Robust scaler as shown below:
# data standardization
robust = RobustScaler(quantile_range=(25, 75)).fit(train[['price']])
train['price'] = robust.transform(train[['price']])
test['price'] = robust.transform(test[['price']])
Finally, we’ll split the data into sub-sequences with the help of a helper function.
# helper function
def create_dataset(X, y, time_steps=1):
    a, b = [], []
    for i in range(len(X) - time_steps):
        v = X.iloc[i:(i + time_steps)].values
        a.append(v)
        b.append(y.iloc[i + time_steps])
    return np.array(a), np.array(b)

# We'll create sequences with 30 days of historical data
n_steps = 30

# reshape to 3D [n_samples, n_steps, n_features]
X_train, y_train = create_dataset(train[['price']], train['price'], n_steps)
X_test, y_test = create_dataset(test[['price']], test['price'], n_steps)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
LSTM Autoencoder in Keras
The sequence autoencoder is similar to sequence to sequence learning. It employs a recurrent network as an encoder to read in an input sequence into a hidden representation. Then, the representation is fed to a decoder recurrent network to reconstruct the input sequence itself.
Here, our Autoencoder should take a sequence as input and output a sequence of the same shape. We have a total of 5219 data points in the sequence and our goal is to find anomalies. We are trying to find out when data points are abnormal.
If we can predict a data point at time ‘t’ based on the historical data until ‘t-1’, then we have a way of looking at an expected value compared to an actual value to see if we are within the expected range of values for time ‘t’.
We can compare y_pred with the actual value (y_test). The difference between y_pred and y_test gives the error, and when we compute the errors of all the points in the sequence, we end up with a distribution of errors. To accomplish this, we will use a sequential model in Keras. The model consists of an LSTM layer and a dense layer. The LSTM layer takes the time-series data as input and learns the values with respect to time. The next layer is the dense (fully connected) layer, which takes the output from the LSTM layer as input and applies a fully connected transformation. We then apply a sigmoid activation on the dense layer so that the final output is between 0 and 1.
We also use the ‘adam’ optimizer and the ‘mean squared error’ as the loss function.
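The model-definition code doesn't appear in the article. One sketch that is consistent with the reconstruction step used later (the layer sizes and training settings below are my assumptions, not values from the original):

```python
# A minimal LSTM autoencoder sketch in Keras (assumed layer sizes).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

n_steps, n_features = 30, 1  # 30-day windows of a single feature (price)

model = Sequential([
    # encoder: compress the window into a fixed-size latent vector
    LSTM(64, input_shape=(n_steps, n_features)),
    # repeat the latent vector once per time step for the decoder
    RepeatVector(n_steps),
    # decoder: unroll the latent vector back into a sequence
    LSTM(64, return_sequences=True),
    # reconstruct the feature value at every time step
    TimeDistributed(Dense(n_features)),
])
model.compile(optimizer='adam', loss='mse')

# training then reconstructs the inputs, e.g.:
# history = model.fit(X_train, X_train, epochs=20, batch_size=32,
#                     validation_split=0.1, shuffle=False)
```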
Issue with Sequences
ML algorithms and neural networks are designed to work with fixed-length inputs.
Temporal ordering of the observations can make it challenging to extract features suitable for use as input to supervised learning models.
# history for loss
plt.figure(figsize = (10,5))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
Evaluation
Once the model is trained, we can predict on the test data set and compute the error (MAE). Let's start with calculating the Mean Absolute Error (MAE) on the training data.
MAE on train data:
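A sketch of the per-window MAE computation (the arrays below are synthetic stand-ins for X_train and for the model's reconstruction, which would really come from model.predict(X_train)):

```python
# Per-window reconstruction error (MAE) on synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(1)
windows = rng.normal(size=(100, 30, 1))                     # like X_train
windows_pred = windows + rng.normal(scale=0.05, size=windows.shape)

train_mae = np.mean(np.abs(windows_pred - windows), axis=(1, 2))
print(train_mae.shape)  # (100,) - one error value per 30-day window
```

Plotting a histogram of train_mae is a common way to eyeball where a sensible threshold lies.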
Accuracy metrics on test data:
# MAE on the test data:
y_pred = model.predict(X_test)
print('Predict shape:', y_pred.shape); print();
mae = np.mean(np.abs(y_pred - X_test), axis=1)
# reshaping prediction
pred = y_pred.reshape((y_pred.shape[0] * y_pred.shape[1]), y_pred.shape[2])
print('Prediction:', pred.shape); print();
print('Test data shape:', X_test.shape); print();
# reshaping test data
X_test = X_test.reshape((X_test.shape[0] * X_test.shape[1]), X_test.shape[2])
print('Test data:', X_test.shape); print();
# error computation
errors = X_test - pred
print('Error:', errors.shape); print();
# rmse on test data
RMSE = math.sqrt(mean_squared_error(X_test, pred))
print('Test RMSE: %.3f' % RMSE);
RMSE is 0.099, which is low, and this is also evident from the low loss in the training phase after 20 epochs: loss: 0.0749 - val_loss: 0.0382. Though the error is low and the prediction looks good, the anomalous behavior in the actuals can't be identified from the prediction alone.
Threshold computation:
The objective is that an anomaly will be detected when the error is larger than a selected threshold value.
Looks like we’re thresholding extreme values quite well. Let’s create a dataframe using only those:
Anomalies report format:
Inverse test data
Finally, let’s look at the anomalies found in the testing data:
The red dots are the anomalies here and are covering most of the points with abrupt changes to the existing spot price. The threshold values can be changed as per the parameters we choose, especially the cutoff value. If we play around with some of the parameters we used, such as number of time steps, threshold cutoffs, epochs of the neural network, batch size, hidden layer etc., we can expect a different set of results.
With this we conclude a brief overview of finding anomalies in time series with respect to stock trading.
Conclusion
Though the stock market is highly efficient, it is impossible to prevent historical and long-term anomalies. Using anomalies to earn superior returns is a risk for investors, since the anomalies may or may not persist in the future. However, every reported metric needs to be validated and its parameters fine-tuned when predictions are used to detect anomalies. Also, for metrics with a different distribution of data, a different approach to identifying anomalies needs to be followed.
Connect me here.
Note: The programs described here are experimental and should be used with caution for any commercial purpose. All such use at your own risk….by Author. | https://medium.com/swlh/time-series-anomaly-detection-with-lstm-autoencoders-7bac1305e713 | ['Sarit Maitra'] | 2020-12-14 04:22:09.827000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Neural Network Algorithm', 'Anomaly Detection', 'Stock Market'] |
How retirement was meant to be | There we were, two couples sitting around a table at 10 o’clock on a beautiful but sultry Monday morning playing cards. Our only objective was to win the game.
Nana Neva and I had taken an extended weekend break from our part-time grand-parenting duties to explore a less-familiar area of Virginia with another retired couple.
We had worked all of our lives to reach this point. Playing cards followed by a round of dominoes seemed like the perfect way to begin a new week, especially on a hot and muggy morning.
We played until lunch and then walked down the slanting limestone driveway to a cozy eatery in a marina for some fabulous homemade ice cream. Choosing which flavor became the toughest decision we made all day.
The location had much to do with our buoyant attitude. We had rented a cottage situated on a point overlooking a human-made lake where the dam generated hydroelectricity. The lake was long and narrow, the product of a few creeks dammed up to fill steep valleys in southern Virginia.
Such a project brought more natural benefits than producing power. Wildlife thrived.
Each morning and evening a resident bald eagle perched on a favorite snag, often on the same limb a quarter of a mile across the bay from us. We had a perfect view from our deck that faced the water, made murky by a series of recent heavy rains.
Before breakfast, I spotted an osprey perched on a dead pine farther up the narrow bay. The “fish hawk” stood tall and stately in the morning mist.
Pileated woodpeckers called and flew back and forth across the water, too, landing if only briefly in the sizable wild cherry tree in our front yard along the shoreline. An eastern kingbird, a much smaller species, chased the much larger woodpecker upon every approach. Fierceness is the kingbird’s nature.
The ripe fruit of the lakeside tree drew songbirds, too. The kingbird didn’t seem to be as bothered by the Carolina chickadees, tufted titmice, red-bellied woodpeckers, and even young redheaded woodpeckers. I could have stayed there all day to watch that show.
The previous day we ventured to Rocky Mount, the county seat where my maternal grandparents were born. We researched family records in the historical society. The lilt and soft, southern accent of our hostess could have been my grandmother’s.
In the process, I was a boy again, standing in the hot Virginia sun inserting a nickel into a parking meter for my father. Dad had to finish the task because I wasn’t strong enough to turn the knob so the coin would activate the meter. The street meters have long disappeared, just like the department store where a relative had worked.
We visited the Booker T. Washington National Monument where the famous educator was born and freed as a slave. The sweltering heat and humidity made it easy to envision the slaves toiling in the parched fields.
Back at the cottage, boats rippled the reflected sunset as they headed in for the evening. Spiders devoured gnats trapped in the delicate webs on the deck just as a young eagle glided across the dusk’s burnished light.
This is what retirement was meant to be. We are grateful to be at this phase of our lives.
That said a palpable quietude subdued any thought of celebration. Too many others would not know the same joy and appreciation. Empathy should temper our golden years. Compassion must rule the way to ensure a purposeful retirement. | https://brucestambaugh.medium.com/how-retirement-was-meant-to-be-ac4216b716b5 | ['Bruce Stambaugh'] | 2018-12-12 16:54:13.041000+00:00 | ['Writing', 'Essay', 'Travel', 'Retirement'] |
How to Actually Deploy Docker Images Built on M1 Macs With Apple Silicon | Use buildx
Buildx comes with a default builder that you can observe by typing docker buildx ls into your terminal. However, we want to make and use a builder ourselves.
Make a builder
You will need a new builder, so make one with docker buildx create --name m1_builder . Now you should see the new builder when running docker buildx ls .
Use and bootstrap the builder
The next steps are to “use” the new builder and bootstrap it. Start with docker buildx use m1_builder and then docker buildx inspect --bootstrap , which will inspect and bootstrap the builder instance you just started using. You should see something like this:
computer@computer-m1 ~ % docker buildx inspect --bootstrap
[+] Building 5.3s (1/1) FINISHED
 => [internal] booting buildkit                         5.3s
 => => pulling image moby/buildkit:buildx-stable-1      3.1s
 => => creating container buildx_buildkit_m1_builder0   2.3s
Name:   m1_builder
Driver: docker-container
Nodes:
Name:      m1_builder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/arm64, linux/amd64, linux/riscv64, linux/ppc64le, linux/s390x, linux/arm/v7, linux/arm/v6
You can see that this builder has a whole host of platforms it will build for!
Build with the builder
Now, cd your terminal into a place where you want to build and push an image so you can run this pseudo-command:
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t <remote image repository> --push .
You should see the “in use” builder go crazy and build the architectures you specified.
Manifest
Once your image has been built and pushed, you can inspect the manifest with:
docker buildx imagetools inspect <remote image repository>
You should see a manifest printout with all your different architectures. Most services that pull an image are smart enough to know which architecture to go get.
Clean up
Finally, as you might have experienced, Docker can start to take up a bit of disk space. If you did not use buildx , you probably ran docker system prune --all ; luckily, the buildx equivalent is semantically similar:
docker buildx prune --all
That’s it! Happy container development! | https://medium.com/better-programming/how-to-actually-deploy-docker-images-built-on-a-m1-macs-with-apple-silicon-a35e39318e97 | ['Jon Vogel'] | 2020-12-22 16:49:09.470000+00:00 | ['Apple', 'Docker', 'Apple Silicon', 'Programming', 'Containers'] |
Building a CI/CD on GCP with Kubernetes | Last year I have given a talk at Nexus User Conference 2018 on how to build a CI/CD pipeline from scratch on AWS to deploy Dockerized Microservices and Serverless Functions. You can read my previous Medium post for step by step guide:
CI/CD Workflow on AWS with Swarm
In the 2019 edition of the Nexus User Conference, I presented how to build a CI/CD workflow on GCP with GKE, Cloud Build, and Infrastructure as Code tools such as Terraform & Packer. This post will walk you through how to create an automated end-to-end process to package a Go based web application in a Docker container image, and deploy that container image on a Google Kubernetes Engine cluster.
CI/CD Workflow on GCP with K8S
Google Cloud Build allows you to define your pipeline as code in a template file called cloudbuild.yaml (this definition file must be committed to the application's code repository). The continuous integration pipeline is divided into multiple stages or steps:
Quality Test: check whether our code is well formatted and follows Go best practices.
Unit Test: launch unit tests. You could also output your coverage and validate that you’re meeting your code coverage requirements.
Security Test: inspect the source code for common security vulnerabilities.
Build: build a Docker image based on Docker’s multi-stage build feature.
Push: tag and store the artifact (Docker image) in a private Docker registry.
Now we have to connect the dots. We are going to add a build trigger to initiate our pipeline. To do this, you have to navigate to Cloud Build console and create a new Trigger. Fill the details as shown in the screenshot below and create the trigger.
Notice the usage of variables instead of hardcoding Nexus Registry credentials for security purposes.
A new Webhook will be created automatically in your GitHub repository to watch for changes:
All good! Now everything is configured; you can push your features to your repository and the pipeline will jump into action.
Once the CI finishes, the Docker image will be pushed to the hosted Docker registry. If we jump back to Nexus Repository Manager, the image should be available:
Now that the Docker image is stored in a registry, we will deploy it to a Kubernetes cluster. Similarly, we will create a Kubernetes cluster based on GKE using Terraform:
Once the cluster is created, we will provision a new shell machine, and issue the below command to configure kubectl command-line tool to communicate with the cluster:
Our image is stored in a private Docker repository. Hence, we need to generate credentials for the K8s nodes to be able to pull the image from the private registry. Authenticate with the registry using the docker login command. Then, create a Secret based on the Docker credentials stored in the config.json file (this file holds the authorization token).
Now we are ready to deploy our container:
To pull the image from the private registry, Kubernetes needs credentials. The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named nexus.
Run the following command to deploy your application, listening on port 3000:
By default, the containers you run on GKE are not accessible from the Internet, because they do not have external IP addresses. You must explicitly expose your application to traffic from the Internet. I’m going to use the LoadBalancer type service for this demo. But you are free to use whatever you like.
Once you’ve determined the external IP address for your application, copy the IP address.
Point your browser to that URL to check if your application is accessible:
Finally, to automatically deploy our changes to the K8s cluster, we need to update the cloudbuild.yaml file to add continuous deployment steps. We will apply a rolling update to the existing deployment with an image update:
Test it out by pushing some changes to your repository, within a minute or two, it should get pushed to your live infrastructure.
That’s it! You’ve just managed to build a solid CI/CD pipeline in GCP for whatever your application code may be.
You can take this workflow further and use GitFlow branching model to separate your deployment environments to test new changes and features without breaking your production:
GitFlow Branching
Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy. | https://medium.com/foxintelligence-inside/building-a-ci-cd-on-gcp-with-kubernetes-db8455d7286e | ['Mohamed Labouardy'] | 2019-06-16 12:16:00.949000+00:00 | ['Serverless', 'Docker', 'Kubernetes', 'DevOps', 'Google Cloud Platform'] |
A Guide to Spirituality and the Pain of Longing
Creating a playbook for a life of love, kindness, compassion, empathy, and wisdom
Photo by Ashley Batz on Unsplash
There’s a key element in the lives of all serious spiritual seekers that I call the pain of longing. It is a gnawing feeling that says, there’s more to life than I’m experiencing or that I’ve been told to expect. There is more, and yet I do not know what it is.
Most people go through their younger lives thinking about essential questions concerning the nature of the universe, their reason for being, and so on. These are often college conversations, conducted in a dorm room at 2:00 am after a couple of joints while listening to Pink Floyd’s Dark Side of the Moon.
After college, most of us get a job, create a family, watch football games, play video games, and just go through life without thinking about very much other than knee-jerk survival. However, some people are stricken with the pain-of-longing syndrome. Eventually, they reach a state of depression, hopelessness, and confusion, for their life lacks meaning. I was one of these people. For us, the pain of longing never diminishes; it just gets worse, and the only way to relieve the pain is to do contemplation meditation and ruthless introspection. This can be a lifelong process. In time, some people find Jesus or Buddhism. Others numb the pain of longing with drugs or alcohol, and to avoid self-destruction become a Friend of Bill W. and attend 12-step meetings. Others, including this author, spend their whole lives walking on the Path, grabbing sound bites of wisdom here and there to sustain our souls, emotions, and minds.
During my personal journey, I had written a book called Spiritual not Religious: Sacred Tools for Modern Times. I brought the manuscript to various spiritual teachers for their opinions. The responses were quite interesting. This is the story of what happened. | https://medium.com/change-your-mind/a-guide-to-spirituality-and-the-pain-of-longing-335d17465306 | [] | 2020-12-28 14:10:45.401000+00:00 | ['Personal Development', 'Life', 'Mental Health', 'Spiritual Growth', 'Self Improvement'] |
Unittesting Apache Spark Applications
A PySpark case
Unit testing Spark applications is not that straightforward. In most cases you’ll probably need an active spark session, which means your test cases will take a long time to run, and that perhaps we’re tiptoeing around the boundaries of what can be called a unit test. But it is definitely worth doing.
So, should I?
Well, yes! Testing your software is always a good thing, and it will most likely save you from many headaches. Plus, you’ll be forced to implement your code in smaller bits and pieces that are easier to test, and thus gain in readability and simplicity.
Okay, then what do I need to do this?
Well, I’d say we can start with pip install spark-testing-base and work our way from there. We’ll also need pyspark (of course), unittest (unittest2), and pytest for this, even though pytest is a personal preference.
Spark testing base is a collection of base classes to help with spark testing. For this example we’ll be using the base SQLTestCase, which inherits from SparkTestingBaseReuse, which creates and reuses a SparkContext.
On SparkSession and SparkContext:
From personal experience (using the latest spark version at the time of writing, 2.4.+), I’ve found that I needed to make some minor adjustments to the SQLTestCase, which is a test case I use quite a lot in my current project. So, here’s an example of the adjustments I’ve made to suit my needs:
Example of a base spark test case, based on Spark testing base’s SQLTestCase
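The gist itself doesn’t render here, so below is a rough, self-contained sketch of the kind of adjusted base test case described in this post. The class and method names are mine, and it assumes pyspark is installed when the tests actually run:

```python
import unittest


class SparkSQLTestCase(unittest.TestCase):
    """Rough sketch of an adjusted SQLTestCase; names here are illustrative."""

    @classmethod
    def setUpClass(cls):
        # Imported inside the method so this module loads even without pyspark.
        from pyspark.sql import SparkSession
        cls.spark = (
            SparkSession.builder
            .master("local[2]")
            .appName("spark-unit-tests")
            # Keep timestamps consistent across machines.
            .config("spark.sql.session.timeZone", "UTC")
            # The default of 200 shuffle partitions is far too many for tiny test data.
            .config("spark.sql.shuffle.partitions", "4")
            .getOrCreate()
        )

    @classmethod
    def tearDownClass(cls):
        cls.spark.stop()

    def assertDataFrameEqual(self, expected, actual, order_by):
        # Sort both sides first so that row order cannot fail the comparison.
        self.assertEqual(
            expected.orderBy(*order_by).collect(),
            actual.orderBy(*order_by).collect(),
        )
```

The adjustments explained below (UTC timezone, shuffle partitions, sorted comparison) each map to one piece of this sketch.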
To sum up the changes I’ve made:
I added a configuration to have the timezone set to UTC, for consistency. Timezone consistency is something very basic to have throughout your code, so please make sure you always set spark.sql.session.timeZone.
Another important thing to set in the configuration is spark.sql.shuffle.partitions, to something reasonable for the machine that will be running the tests, like <= cores * 2. If we don’t do that, spark will use the default value, which is 200 partitions, and it will unnecessarily and inevitably slow down the whole process. <= cores * 2 is a good general rule, not only for the tests.
I also added a method to sort the dataframes to be compared before the comparison. There is a compareRDDWithOrder method in one of the base classes, but I think it is easier to work with dataframes.
The schema_nullable_helper method should be used with caution, as it may end up sabotaging your test case, depending on what you need to test. Its use case is when you create dataframes without specifying a schema (which is currently deprecated): because spark tries to infer the data types, you sometimes get inconsistencies in the Nullable flag between the two dataframes being compared, depending on the data used to create them. This method updates the schema of one of the two dataframes to match the other’s, regarding nullables only.
And lastly, I added a slightly adjusted version of the setUp for the appName and the config. The session instantiation is also different in the latest pyspark version. (There is a pending release for the support of 2.2.+ and 2.3.+ spark versions still open here and here, so we’ll be subclassing to work around this.)
Wildfire Area Prediction: An AI Approach to a GIS Problem

Performing Exploratory Data Analysis
There are 39 features in the dataset we are using, most of which are unique identification numbers used by different agencies and do not provide any information about the fire. Therefore, we remove those features right away, after which feature analysis is performed on every remaining feature to assess its relevance to our model.
Analyzing our class label: FIRE_SIZE_CLASS feature
The feature is first encoded manually to ‘int’ data type to facilitate machine learning model computation. Here different classes of fire size indicate different fire area sizes (in acres) as given below:
1/A: 0–0.25 acres
2/B: 0.26–9.9 acres
3/C: 10.0–99.9 acres
4/D: 100–299 acres
5/E: 300–999 acres
6/F: 1000–4999 acres
7/G: 5000+ acres
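A minimal sketch of that manual encoding — the mapping below mirrors the 1–7 numbering above, and the toy data frame is mine:

```python
import pandas as pd

# Letter class -> integer class, matching the 1-7 numbering above.
size_class_map = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7}

fires = pd.DataFrame({'FIRE_SIZE_CLASS': ['B', 'G', 'A']})  # toy data
fires['FIRE_SIZE_CLASS'] = fires['FIRE_SIZE_CLASS'].map(size_class_map)
print(fires['FIRE_SIZE_CLASS'].tolist())  # -> [2, 7, 1]
```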
As seen in the graph, the dataset is highly imbalanced, with the maximum number of reported incidents lying in Class 2.
Features such as OBJECTID, FOD_ID, and FPA_ID are removed, as they do not add any value to the model.
Analyzing feature: SOURCE_SYSTEM_TYPE
Code snippet for individual feature analysis is as follows:
Feature Analysis Code
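The referenced snippet isn’t rendered here; a helper along these lines (the function name and toy data are mine) covers the basics of the per-feature analysis:

```python
import pandas as pd

def analyze_feature(df, feature, label='FIRE_SIZE_CLASS'):
    # Frequency of each category, then per-category stats of the class label.
    print(df[feature].value_counts())
    return df.groupby(feature)[label].describe()

toy = pd.DataFrame({
    'SOURCE_SYSTEM_TYPE': [1, 2, 2, 1],
    'FIRE_SIZE_CLASS':    [1, 2, 3, 2],
})
stats = analyze_feature(toy, 'SOURCE_SYSTEM_TYPE')
```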
The feature is then encoded to a numeric value using ‘Label Encoders’ after which graphical analysis is performed.
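For reference, here is what label encoding looks like on a toy column. The category strings below are illustrative; scikit-learn’s LabelEncoder assigns codes to classes in sorted order:

```python
from sklearn.preprocessing import LabelEncoder

source = ['FED', 'INTERAGCY', 'NONFED', 'INTERAGCY']  # made-up sample values
encoded = LabelEncoder().fit_transform(source)
print(list(encoded))  # -> [0, 1, 2, 1]  (FED=0, INTERAGCY=1, NONFED=2)
```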
Observation:
For each fire size class, the maximum number of fires is associated with source system type = 2, which is ‘INTERAGCY’, and the minimum number in each class with source system type = 1, which is ‘NONFED’.
Therefore, this feature would be useful for our model.
Analyzing feature: SOURCE_SYSTEM
Similar statistical analysis is performed for each feature as shown in the above code snippet. The graphical analysis of this feature is as follows:
Observation: The X-axis values of this plot are overlapping, but we can clearly observe that most of the box plots have their 25–75% values lying between fire classes 1 and 2. This observation can be an effect of the huge data imbalance, which can be mitigated by adding a weight to each fire class during modeling.
Analyzing feature: NWCG_REPORTING_AGENCY
Observation: In most of the box plots, the median line overlaps with the 25th/75th percentile lines. Most agencies have been reporting fires of sizes lying in classes 1, 2, and 3, but Agency 3 has 25–75% of its reported incidents lying in fire size classes 5 and 6, and Agency 4 between classes 6 and 7.
Features such as ICS_209_INCIDENT_NUMBER, MTBS_ID, and MTBS_FIRE_NAME are again removed, as they are not important for the prediction.
Analyzing feature: FIRE_YEAR
Observation: The year 2006 has seen the maximum number of forest fires: not just Class 2 fires, but also Classes 1 and 3 are visibly higher than in any other year. More recent years, i.e. 2014 and 2015, show similar numbers, which are not as high as in 2006.
Since for every year most of the counts are of fires lying in classes 1, 2, and 3, to analyze fires in the other classes we can use a dataset that excludes fires of classes 1, 2, and 3. Therefore we filter our data; the code for this is given below:
Code for Filtering out Majority classes from Imbalanced Data
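The filtering itself is essentially a one-liner; a sketch with a toy frame (column name as in the dataset, data made up):

```python
import pandas as pd

fires = pd.DataFrame({'FIRE_SIZE_CLASS': [1, 2, 3, 4, 5, 6, 7, 2]})  # toy data
# Keep only the larger fires (classes 4-7) so the minority classes become visible.
big_fires = fires[fires['FIRE_SIZE_CLASS'] > 3]
print(len(big_fires))  # -> 4
```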
Observation: This is a graph of the filtered data, containing only fires of classes 4, 5, 6, and 7. After analyzing the filtered data, we observe that in 2006 even the fires of bigger sizes are much more numerous than in other years. Fires of class 7 (the biggest size) are the most numerous in the years 2006, 2007, 2011, 2012, and 2015.
The feature DISCOVERY_DATE is present in Julian format. It can be converted into YYYY-MM-DD format for better understanding, but we already have the discovery year and the discovery day of year, from which we can get the month of the forest fire; after that, we have the discovery time, which can be used to find out in which interval of the day the fire occurred. Therefore this feature would not add any value to our dataset and can be discarded.
Analyzing feature: STAT_CAUSE_CODE
Observation: When the fire cause code is 1, we have the maximum number of fires of size class 1. In fact, fires from all 7 size classes can be present in it; on looking at the STAT_CAUSE_DESC feature, we realized that cause 1 is actually ‘Miscellaneous’, so this makes sense. Very few fires are caused by cause 10 (Powerline), as its count is very small, and even when such fires occur, they are only of size class 1 or 2, i.e. small fires. Similarly, cause 12 (Fireworks) has the fewest instances, and only in classes 1 and 2. For cause 9 (Smoking), we can see fires of classes 3, 4, 5, and 6 along with 1 and 2, i.e. smoking can cause huge fires and is a much more dangerous cause.
Containment features such as CONT_DATE cannot be available for ongoing fires, so we remove all containment information.
Analyzing features: LATITUDE and LONGITUDE
Geographical analysis of forest fires based on coordinate values is done by plotting heatmaps using the code below:
Code for creating heatmap of Forest Fires
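The original heatmap code isn’t shown here; one common way to build such a plot is a 2-D histogram over the coordinates. In this sketch the coordinates are randomly generated stand-ins, not the real dataset:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
lon = rng.uniform(-124, -70, 1000)  # stand-in longitudes
lat = rng.uniform(25, 49, 1000)     # stand-in latitudes

# Bin the points into a 50x50 grid, then render the counts as a heatmap.
counts, xedges, yedges = np.histogram2d(lon, lat, bins=50)
plt.imshow(counts.T, origin='lower', cmap='hot',
           extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
plt.colorbar(label='wildfire count')
```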
Observation: A lighter color means more wildfires (as given in the legend). This view covers all the states in the dataset, including those outside the contiguous mainland. To get a better visualization of wildfires in the mainland U.S., we can remove the non-contiguous data from our data frame, i.e. the states Hawaii and Alaska and the territory Puerto Rico (reference taken from the world map).
Heatmap for data filtered for U.S. mainland
Observation: This is the forest fire visualization for just the U.S. mainland. Taking reference from a U.S. map, we can see a lot of forest fires in California (left corner).
Regions like Montana, Nebraska, Colorado, Indiana, and Illinois have very few cases of forest fires, and most of them are darker in shade, i.e. the count of forest fires there is very low. Then again, regions in the bottom-right corner, i.e. Virginia, North and South Carolina, Georgia, Florida, and Alabama, have many forest fires, as the color there is very bright and dense.
We can also represent forest fire counts throughout the years using Folium for better representation through a map (Folium image present in article title). Code for Folium representation is as follows:
Folium Data presentation Code
Similar analysis is performed on features such as COUNTY, OWNER_CODE etc. | https://medium.com/swlh/wildfire-area-prediction-an-ai-approach-to-a-gis-problem-2a4e8d97d7e8 | ['Kirti Girdhar'] | 2020-11-13 21:51:29.535000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'GIS', 'Data Science', 'Forest Fires'] |
Improve Your Debugging Experience in Visual Studio Code
With this useful feature that you might not be aware of
Previously, if we wanted to debug Node.js applications, we had to create a launch.json file to specify the debug configurations.
To make debugging easier, VS Code has added a feature to easily debug Node.js apps.
Now, with this feature enabled, VS Code automatically detects your Node.js code and starts a debugging session, so you don’t need to specify your environment and other configurations in launch.json
Start debugging in VS code
Add a breakpoint at any line in your code by right-clicking the area just before the line number and selecting the Add Breakpoint option
Adding breakpoint
2. If you have a package.json file, then add a debug script inside it
"scripts": {
"debug": "node --inspect-brk index.js"
}
package.json
3. Press Control + Shift + P or Command + Shift + P (Mac) to open the command palette, type attach, and then select the “Debug: Toggle Auto Attach” option
Toggle Auto Attach option
4. Once done, open a terminal in VS Code from the Terminal -> New Terminal menu (Control + Backtick shortcut)
5. In the opened terminal, execute the following command
npm run debug OR yarn run debug
6. If you don’t have a package.json , you can directly run
node --inspect-brk index.js
from the terminal. Here, index.js is the name of the file that you want to debug.
7. Once executed, you will see the debugger start, and you can begin watching variables and their values
8. Awesome! Enjoy
Debugging session
So, using the Toggle Auto Attach option, VS Code makes it very easy to debug Node.js applications.
That’s it for today. I hope you learned something new.
Don’t forget to subscribe to get my weekly newsletter with amazing tips, tricks, and articles directly in your inbox here. | https://medium.com/javascript-in-plain-english/useful-feature-added-in-visual-studio-code-that-you-might-not-be-aware-of-284c237daf3 | ['Yogesh Chavan'] | 2020-09-29 13:40:41.960000+00:00 | ['Development', 'Debugging', 'Programming', 'Nodejs', 'JavaScript'] |
Data cleaning and feature engineering in Python
Building better machine learning models for predicting San Francisco housing prices
Housing price data provides a great introduction to machine learning. Anybody who has bought a house or even rented an apartment can easily understand the features: more space, and more rooms, generally lead to a higher price.
So it ought to be easy to develop a model — but sometimes it isn’t, not because machine learning is hard but because data is messy. Also, the exact same house in different neighborhoods of the same city, even only a mile apart, may have a significantly different price. The best way to deal with this is to engineer the data so that the model can better handle the situation.
Since finding data can be the hardest problem in machine learning, we will use a great sample set from another data science project on Github which is a set of housing prices in San Francisco, mostly over the last few years, scraped from San Francisco Chronicle home sale listings. This data set can be found here: https://github.com/RuiChang123/Regression_for_house_price_estimation/blob/master/final_data.csv
First, we’ll load the data from a local copy of the file.
import pandas as pd
housing = pd.read_csv("final_data.csv")
Now, let’s take a look at a few graphs of this data set, graphing the total number of rooms by the last sold price.
import matplotlib.pyplot as plt
x = housing['totalrooms']
y = housing['lastsoldprice']
plt.scatter(x,y)
plt.show()
That single point in the far lower-right corner is an outlier. Its value is so extreme that it skews the entire graph, so much that we cannot even see any variation on the main set of data. This will distort any attempt to train a machine learning algorithm on this data set. We need to look more closely at this data point and consider what to do with it. If we sort the data by the total number of rooms, one of our axes above, it should stick out.
housing['totalrooms'].sort_values()
Here are the results:
7524 1.0
11223 1.0
3579 1.0
2132 1.0
5453 1.0
2827 1.0
... 2765 23.0
8288 24.0
9201 24.0
6860 24.0
4802 26.0
8087 26.0
11083 27.0
2601 28.0
2750 28.0
10727 28.0
11175 33.0
8300 94.0
8967 1264.0
Name: totalrooms, Length: 11330, dtype: float64
Indeed, that data point does stick out. It is the very last value in the list, which is a house that has 1,264 rooms! That is very suspicious, especially since the plot shows it having a pretty low price. At the very least it is wildly inconsistent with the rest of the data. The same may be the case with the previous value showing 94 rooms. We can take a closer look at these two houses with the following commands, pulling them up by their numeric identifier.
First let’s look at the house which supposedly has 1,264 rooms:
df = pd.DataFrame(housing)
df.iloc[[8967]]
This query shows something even more suspicious, which is that the “finishedsqft” field is also 1264.0. In other words, this is clearly just an error, probably on data entry — when the original data set was created, somebody accidentally used the same value for both finishedsqft and totalrooms.
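We can also hunt for other rows with this symptom programmatically; a sketch assuming the same column names, with a toy stand-in for the real data:

```python
import pandas as pd

housing = pd.DataFrame({  # toy stand-in for the real dataset
    'finishedsqft': [1264.0, 2100.0, 950.0],
    'totalrooms':   [1264.0, 8.0, 4.0],
})
# Rows where the two fields are identical are likely data-entry errors.
suspect = housing[housing['finishedsqft'] == housing['totalrooms']]
print(suspect.index.tolist())  # -> [0]
```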
Now, let’s take a look at the value just preceding it, with 94 rooms:
df.iloc[[8300]]
This home, which supposedly has 94.0 rooms, has only two bedrooms and two bathrooms! Again, this is an error. It is not clear how this crept in, but we can be pretty certain that if we go to this house, it does not have 94 rooms, but only two bedrooms and two bathrooms. We will need to eliminate these two data points, but first let’s take another look at a graph of finishedsqft:
x = housing['finishedsqft']
y = housing['lastsoldprice']
plt.scatter(x,y)
plt.show()
There is another outlier in the lower-right. Let’s take a closer look at this data:
housing['finishedsqft'].sort_values()
Here are the results
1618 1.0
3405 1.0
10652 1.0
954 1.0
11136 1.0
5103 1.0
916 1.0
10967 1.0
7383 1.0
1465 1.0
8134 243.0
7300 244.0
...
9650 9699.0
8087 10000.0
2750 10236.0
4997 27275.0
Name: finishedsqft, Length: 11330, dtype: float64
First, unexpectedly, there are ten houses listed at 1.0 square feet. This is clearly wrong. Note that these were impossible to see in the graph, we had to look at the actual values. Additionally, the above results show the largest house at 27,275.0 square feet. It turns out, this is a house with only 2.0 bedrooms and 2.0 bathrooms, even though it is listed at 27,275 square feet, so this is almost certainly a mistake, or at least an extreme outlier. Let’s eliminate all of these outliers and take another look at the graph.
housing = housing.drop([1618, 3405, 10652, 954, 11136, 5103, 916, 10967, 7383, 1465, 8967, 8300, 4997])
x = housing['finishedsqft']
y = housing['lastsoldprice']
plt.scatter(x,y)
plt.show()
This is looking much better. There still may be some outliers in here, and we could investigate them more closely if we really wanted to, but there are no single data points in this view that are distorting the graph, and probably none (that we can see) that would distort a machine learning model.
Now that we have cleaned the data, we need to do some feature engineering. This involves transforming the values in the data set into numeric values that machine learning algorithms can use.
Take the “lastsolddate” value, for example. In the current data set, this is a string in the form of “mm/dd/yyyy.” We need to change this into a numeric value, which we can do with the following Pandas command:
housing['lastsolddateint'] = pd.to_datetime(housing['lastsolddate'], format='%m/%d/%Y').astype('int64')
# to_datetime yields nanoseconds since the Unix epoch; divide to get seconds
housing['lastsolddateint'] = housing['lastsolddateint']/1000000000
housing = housing[housing['lastsolddateint'].notnull()]
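To see what this conversion does, here it is on a single (made-up) date: casting the parsed datetime to an integer yields nanoseconds since the Unix epoch, and dividing by 1,000,000,000 converts that to seconds:

```python
import pandas as pd

dates = pd.Series(['02/17/2016'])  # made-up example date
ns = pd.to_datetime(dates, format='%m/%d/%Y').astype('int64')  # ns since epoch
seconds = ns / 1_000_000_000
print(seconds.iloc[0])  # -> 1455667200.0
```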
Now let’s create a checkpoint for our data so that we can refer back to it later.
clean_data = housing.copy()
Additionally, there are a number of fields that we cannot or should not use, so we will eliminate them.
I prefer to create functions to do this sort of work that we might do again and again, as we will see below, in order to simplify the code as we try out different hypotheses.
We remove the columns in remove_list for a number of reasons. Some of them are text values we just cannot do much with (info, address, z_address, zipcode, zpid). The latitude and longitude fields might be useful in some form but for this example it may just complicate things — no reason not to experiment with it in the future though. The zestimate and zindexvalue fields were actually produced by other data science techniques (probably from Zillow), so using them would be cheating! Finally, we will drop usecode (e.g. house, condo, mobile home) which could be quite useful but we will not use it for this example.
def drop_geog(data, keep = []):
    remove_list = ['info','address','z_address','longitude','latitude','neighborhood','lastsolddate','zipcode','zpid','usecode', 'zestimate','zindexvalue']
    for k in keep:
        remove_list.remove(k)
    data = data.drop(remove_list, axis=1)
    data = data.drop(data.columns[data.columns.str.contains('unnamed', case=False)], axis=1)
    return data

housing = drop_geog(housing)
Now that we have cleaned up the data, let’s take a look at how a few algorithms manage using it. We will use scikit-learn.
First, we need to split the data into testing and training sets, again using a function that we can reuse later. This ensures that when we test the data, we are actually testing the model on data it has never seen before.
from sklearn.model_selection import train_test_split
def split_data(data):
    y = data['lastsoldprice']
    X = data.drop('lastsoldprice', axis=1)
    # Returns (X_train, X_test, y_train, y_test)
    return train_test_split(X, y, test_size=0.2, random_state=30)

housing_split = split_data(housing)
Let’s try Linear Regression first.
import sys
from math import sqrt
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import GridSearchCV
import numpy as np
from sklearn.linear_model import LinearRegression
def train_eval(algorithm, grid_params, X_train, X_test, y_train, y_test):
    regression_model = GridSearchCV(algorithm, grid_params, cv=5, n_jobs=-1, verbose=1)
    regression_model.fit(X_train, y_train)
    y_pred = regression_model.predict(X_test)
    print("R2: \t", r2_score(y_test, y_pred))
    print("RMSE: \t", sqrt(mean_squared_error(y_test, y_pred)))
    print("MAE: \t", mean_absolute_error(y_test, y_pred))
    return regression_model
train_eval(LinearRegression(), {}, *housing_split)
This train_eval function can be used for any arbitrary scikit-learn algorithm, for both training and evaluation. This is one of the great benefits of scikit-learn. The first line of the function incorporates a set of hyperparameters that we want to evaluate against. In this case, we pass in {} so we can just use the default hyperparameters of the model. The second and third lines of this function do the actual work, fitting the model and then running a prediction on it. The print statements then show some stats that we can evaluate. Let’s see how we fared.
R2: 0.5366066917131977
RMSE: 750678.476479495
MAE: 433245.6519384096
The first score, R², also known as the Coefficient of Determination, is a general evaluation of the model showing the percentage of variation in the prediction that can be explained by the features. In general, a higher R² value is better than a lower one. The other two stats are root mean squared error and mean absolute error. These two can only be evaluated in relation to other evaluations of the same statistic on other models. Having said that, an R² of .53, and the other stats in the many hundreds of thousands (for houses probably costing one or two million) is not great. We can do better.
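All three metrics are easy to compute by hand on a toy example (the numbers below are made up), which makes their definitions concrete:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 4.0])

ss_res = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
ss_tot = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
r2 = 1 - ss_res / ss_tot                        # fraction of variance explained
rmse = np.sqrt(((y_true - y_pred) ** 2).mean())
mae = np.abs(y_true - y_pred).mean()
```

These formulas match what r2_score, mean_squared_error, and mean_absolute_error compute in scikit-learn.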
Let’s see how a few other algorithms perform. First, K-Nearest Neighbors (KNN).
from sklearn.neighbors import KNeighborsRegressor
knn_params = {'n_neighbors' : [1, 5, 10, 20, 30, 50, 75, 100, 200, 500]}
model = train_eval(KNeighborsRegressor(), knn_params, *housing_split)
If Linear Regression is mediocre, KNN is terrible!
R2: 0.15060023694456648
RMSE: 1016330.95341843
MAE: 540260.1489399293
Next we will try Decision Tree.
from sklearn.tree import DecisionTreeRegressor
tree_params = {}
train_eval(DecisionTreeRegressor(), tree_params, *housing_split)
This is even worse!
R2: 0.09635601667334437
RMSE: 1048281.1237086286
MAE: 479376.222614841
Finally, let’s look at Random Forrest.
from sklearn import ensemble
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
forest_params = {'n_estimators': [1000], 'max_depth': [None], 'min_samples_split': [2]}
forest = train_eval(RandomForestRegressor(), forest_params, *housing_split)
This one is a bit better, but we can still do better.
R2: 0.6071295620858653
RMSE: 691200.04921061
MAE: 367126.8614028794
How do we improve on these results? One option is to try other algorithms, and there are many, and some will do better. But we can actually fine-tune our results by getting our hands dirty in the data with feature engineering.
Let’s reconsider some of the features that we have in our data. Neighborhood is an interesting field. The values are things like “Potrero Hill” and “South Beach.” These cannot simply be ordered (from most expensive to least expensive neighborhood), or at least, doing so would not necessarily produce better results. But we all know that the same house in two different neighborhoods will have two different prices. So we want this data. How do we use it?
Python’s Pandas library gives us a simple tool for creating a “one-hot encoding” of these values. This takes the single “neighborhood” column and creates a new column for each value found in it. For each of these new columns (with header names like “Potrero Hill” and “South Beach”), if a row of data has that value for the neighborhood in the original column, the new column is set to 1; otherwise it is set to 0. The machine learning algorithms can now build a weight associated with that neighborhood, which is either applied if the data point is in that neighborhood (if the value for that column is 1) or not (if it is 0).
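Here is get_dummies on a toy column (neighborhood names chosen for illustration; dtype=int keeps the output as 0/1 integers rather than booleans on recent pandas versions):

```python
import pandas as pd

df = pd.DataFrame({'neighborhood': ['Potrero Hill', 'South Beach', 'Potrero Hill']})
one_hot = pd.get_dummies(df['neighborhood'], dtype=int)
print(one_hot['Potrero Hill'].tolist())  # -> [1, 0, 1]
```

Each distinct neighborhood becomes its own 0/1 column, which is exactly what we will join back onto the feature matrix below.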
First, we need to retrieve our check-pointed data, this time keeping the “neighborhood” field.
housing_cleaned = drop_geog(clean_data.copy(), ['neighborhood'])
Now we can create a one-hot encoding for the “neighborhood” field.
one_hot = pd.get_dummies(housing_cleaned['neighborhood'])
housing_cleaned = housing_cleaned.drop('neighborhood',axis = 1)
We will hold onto the “one_hot” value and add it later. But first, we have to do two more things. We need to split the data into a training set and a test set.
(X_train, X_test, y_train, y_test) = split_data(housing_cleaned)
For our final step, we need to scale and center the data.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train[X_train.columns] = scaler.transform(X_train[X_train.columns])
X_train = X_train.join(one_hot)
X_test[X_test.columns] = scaler.transform(X_test[X_test.columns])
X_test = X_test.join(one_hot)
housing_split_cleaned = (X_train, X_test, y_train, y_test)
Let’s unpack this step a bit.
First, we apply StandardScaler(). This function scales and centers the data by subtracting the mean of the column and dividing by the standard deviation of the column, for all data points in each column. This standardizes the data, giving each column zero mean and unit standard deviation. It also puts all features on a common scale: some fields vary from 0 to 10,000, such as “finishedsqft,” while others vary only from 0 to 30, such as the number of rooms. Scaling puts them all on the same footing, so that one feature does not arbitrarily play a bigger role than others just because it has a higher maximum value. For some machine learning algorithms, as we will see below, this is critical to getting even a half-decent result.
Second, it is important to note that we have to “fit” the scaler on the training features, X_train. That is, we take the mean and standard deviation of the training data, fit the scaler object with these values, then transform the training data AND the test data using that fitted scaler. We do not want to fit the scaler on the test data, as that would leak information from the test data set into the trained algorithm. We could end up with results that appear better than they are (because the scaling has already seen the test data) or appear worse (because the test data would be scaled by its own statistics rather than by the training set’s).
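A quick numeric check of that rule, with made-up toy arrays; the point is only that the test row is transformed using the training set's mean and standard deviation:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
X_te = np.array([[2.0, 250.0]])

scaler = StandardScaler().fit(X_tr)   # statistics come from the training data only
X_tr_scaled = scaler.transform(X_tr)
X_te_scaled = scaler.transform(X_te)  # the test row reuses those training statistics
```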
Now, let’s rebuild our models with the newly engineered features.
model = train_eval(LinearRegression(), {}, *housing_split_cleaned)
Now, under Linear Regression, the simplest algorithm we have, the results are already better than anything we saw previously.
R2: 0.6328566983301503
RMSE: 668185.25771193
MAE: 371451.9425795053
Next is KNN.
model = train_eval(KNeighborsRegressor(), knn_params, *housing_split_cleaned)
This is a huge improvement.
R2: 0.6938710004544473
RMSE: 610142.5615480896
MAE: 303699.6739399293
Decision Tree:
model = train_eval(DecisionTreeRegressor(), tree_params,*housing_split_cleaned)
Still pretty bad, but better than before.
R2: 0.39542277744197274
RMSE: 857442.439825675
MAE: 383743.4403710247
Finally, Random Forest.
model = train_eval(RandomForestRegressor(), forest_params, *housing_split_cleaned)
Again, a decent improvement.
R2: 0.677028227379022
RMSE: 626702.4153226872
MAE: 294772.5044353021
There is certainly far more that can be done with this data, from additional feature engineering to trying additional algorithms. But the lesson from this short tutorial is that seeking more data or poring over the literature for better algorithms may not always be the right next step. It may be better to get the absolute most you can out of a simpler algorithm first, not only for comparison but because data cleaning may pay dividends down the road.
Finally, in spite of its simplicity, K-Nearest Neighbors can be quite effective, so long as we treat it with the proper care. | https://towardsdatascience.com/data-cleaning-and-feature-engineering-in-python-b4d448366022 | ['Scott Johnson'] | 2019-04-09 13:23:07.182000+00:00 | ['Scikit Learn', 'Machine Learning', 'Python', 'Feature Engineering'] |
How to Teach AI and ML to Middle Schoolers | Teaching Philosophy
The most effective way to keep young students engaged is to use positive reinforcement and encouragement to help them feel like they are understanding the concepts.
To introduce middle schoolers to AI, it is imperative to start with the basics. This begins with correcting the perception that AI means terrifying robots that can operate completely independently. In our experience, most students instantly thought of AI as something out of Avengers: Age of Ultron. We needed to dispel their theories, first by clarifying exactly what AI is and then by giving precise, accurate examples. We showed how AI is all around them, from Amazon Alexa to Netflix recommendations, which helped them gain a better understanding of what AI really is.
What most middle schoolers think AI is (Image source — Avengers: Age of Ultron)
Most importantly, to keep students engaged, they must understand why AI is important for them to learn. They should understand that most job opportunities in the future will require AI, and learning the concepts now will give them an advantage in high school and college. Additionally, providing some examples of fields that are shifting to the use of AI, such as medicine and economics, will appeal to students with all sorts of different interests.
Artificial intelligence is the simulation of human intelligence in machines that are programmed to think and act like humans.
To help middle schoolers understand AI/ML, the topics should be defined very simply and intuitively. We explained that machines will learn from their own results and run thousands of tests to improve. One great example for middle schoolers is as follows: an AI model is like a small infant that is shown squares and triangles over and over again until it can distinguish between a square and a triangle. This explains the basic concept of providing data and labels for a model to learn from in a commonly understandable way.
This is what you shouldn’t teach (Image source: KDnuggets)
Instead of trying to distinguish AI from ML, it is much easier to just say that they are very similar and essentially the same thing. Trying to differentiate the two will just confuse the already confused middle schoolers even more. This is a similar strategy to teaching elementary kids that 0 is the smallest number but then later teaching them about the existence of negative numbers. | https://medium.com/better-programming/how-to-teach-ai-and-ml-to-middle-schoolers-34bf59262ea8 | ['Ayaan Haque'] | 2020-08-20 15:47:41.080000+00:00 | ['Machine Learning', 'Tutorial', 'Artificial Intelligence', 'Education', 'Programming'] |
On Libraries | If you have a garden and a library, you have everything you need.
— Marcus Tullius Cicero
The school year has ended and that can only mean one thing: library season is upon us. If you are of the persuasion that libraries are dying out in our digital age you might be surprised to know that the opposite is in fact true. According to one librarian, though funding libraries is always an issue, library use has actually been on an upward swing over the last decade. And while libraries are changing, a fact that is the source of worthwhile debate on what the role of libraries in society should be, they are still a popular institution that 94% of people think improves the quality of life in a community.
An article in the Atlantic even highlighted the important role libraries play in the wake of tragedy, likening them to what the writer called “second responders.” For example, “In Orlando, after the nightclub shootings, the library hosted an art gallery for those who made art as a way to express and share their reactions.” While first responders provide necessary help in an emergency, second responders, i.e. libraries, provide a space for people to work through the pain of experiencing trauma.
For me, when summer comes around, I get nostalgic for libraries. To be sure, my mom would take issue with this point, arguing that I didn’t like them as a child. Perhaps it was the awkward programs the local library put on for kids that we frequented, or being that I was the oldest child in our family, my problem was that I had outgrown them. But in my memory, the library has always been a refuge, almost a sacred space. In high school I would often hop on my bike and ride to our local branch to check out titles I knew were important; books by Heller, Baldwin, and Bradbury, among others.
A friend of mine and his family had a particular library tradition in the summertime. Each week they would go to the library with a tub and fill it to the brim with books. At the end of the week they would return all the books and fill the tub again.
I’ll never forget the day, when another friend of mine, my best friend, shared with me one of his most personal secrets: he loved to read. For an athlete that was an astonishing, almost shameful idea. But for me it was freedom, knowing that it was ok that I loved reading too.
America has over 100,000 libraries, more than 16,000 of those being of the public variety. Each and every one of them is a blessing to the community it serves, something not to be taken for granted. Unfortunately, many do. I once had a disheartening conversation with a family member who saw the public library as a waste of real estate and tax-payer dollars.
I now live in a country that only has 1100 libraries, 1 for every 70,000 people. The lack of libraries is sorely felt by our family on a weekly basis. Not having access to books changes a culture.
Christians are, to borrow an Islamic phrase, “people of the book” and I would add, not “people of the screen.” Of course this label is in reference to our commitment to the Bible, but because God’s message comes to us as a book and not a movie or tweet, we do well when we have a vibrant reading culture in our cities and homes. For us “people of the book,” the public library is a type of Noah’s ark. It preserves some of our culture’s most cherished artifacts, books, from the rising flood of a banal social media culture.
Though I won’t be able to bring my tub to the local library and fill it to the brim with books this summer, I hope you will. In the words of J.K. Rowling, “When in doubt, go to the library.” SDG
John Thomas is a freelance writer. His writing has appeared at Mere Orthodoxy, Christianity Today, and Desiring God. He writes regularly at medium.com/soli-deo-gloria.
His recent articles include: | https://medium.com/soli-deo-gloria/on-libraries-b4ec3d0db0c0 | ['John Thomas'] | 2019-06-09 19:27:18.579000+00:00 | ['Books', 'Reading', 'Libraries', 'Summer', 'Culture'] |
Control freaks and psychological safety — We bring Eric Olive on the podcast as a guest to talk… | Control freaks and psychological safety — We bring Eric Olive on the podcast as a guest to talk about the science of decisions, and we ended up talking about control and safety. How do you create an environment of psychological safety? And how does that encourage creative collaboration?
Eric has also offered a list of articles and books for more reading which we’ve added below.
You can reach Eric at:
uiuxtraining.com
[email protected]
Articles
A Leader’s Framework for Decision Making by David J. Snowden and Mary E. Boone HBR November 2007
Fooled by Experience by Emre Soyer and Robin M. Hogarth
Leaders as Decision Architects by John Beshears and Francesca Gino — Harvard Business Review. Structure your organization’s work to encourage wise choices.
“Organizing and the Process of Sensemaking”, Organization Science, vol. 16, no. 4, pp. 409–421.
Paul C. Nutt (1993), “The Identification of Solution Ideas During Organizational Decision Making,” Management Science 39: 1071–85.
Paul C. Nutt (1999), “Surprising but True: Half the Decisions in Organizations Fail,” Academy of Management Executive 13: 75–90.
Only for HBR (Harvard Business Review) Subscribers
Before You Make That Big Decision by Daniel Kahneman, Dan Lovallo, and Olivier Sibony. Harvard Business Review.
The Hidden Traps in Decision Making by John S. Hammond, Ralph L. Keeney, and Howard Raiffa. Harvard Business Review, January 2006.
Books
A More Beautiful Question by Warren Berger
Beyond Greed and Fear by Hersh Shefrin
Decisive by Dan and Chip Heath
Educating Intuition by Robin Hogarth
Focus by Daniel Goleman
How We Decide by Jonah Lehrer
Intuition at Work by Gary Klein
Nudge by Richard Thaler and Cass Sunstein
Seeing what Others Don’t by Gary Klein
The Art of Thinking Clearly by Rolf Dobelli
Winning Decisions by J. Edward Russo and Paul J.H. Schoemaker
Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.
You can subscribe to the HumanTech podcast through iTunes, Stitcher, or where ever you listen to podcasts. | https://medium.com/theteamw/control-freaks-and-psychological-saftey-we-bring-eric-olive-on-the-podcast-as-a-guest-to-talk-c630c9da6798 | ['The Team W'] | 2018-07-24 19:49:57.239000+00:00 | ['Trust', 'Eric Olive', 'Risk', 'Decisions'] |
The Hidden Costs of Firmware Flashing | First of all, the debugging/programming hardware. ST products are famously paired with the ST-Link series of debuggers; the most common ST-Link V2 comes for 20 € on Mouser, but a knock-off can be found on Aliexpress for prices as low as 1.50 $.
Cheap, but effective
For Microchip products, it’s a different story. Their go-to debugger is the relatively new ICD-4, and the toy costs up to 240 € — quite an investment.
They also offer a more affordable option in the form of the PICkit 4: just under 50 €. Still more than twice the price of the ST counterpart.
Recently, they also released a super-cheap alternative, the Microchip SNAP, but it’s so bare bones (not even a case) I wouldn’t recommend it for a professional setting.
When choosing between those two options, you already have an up-front difference of ~30 € in costs for every production unit, which can become even more if you are unlucky. In my experience Microchip programmers are very delicate constructs, and we break at least a couple per year just from normal usage. Also, their device support is often incomplete: it’s not uncommon to find that the newest processors are not supported by some versions of the debugging tools, forcing you to buy a second one.
For now I’ve only listed one-time kickstarting costs, which are easily covered by the first profits. It sucks, but it’s no big deal. The real problem kicks in when you start flashing your custom boards.
Time is Money
By looking at two similar projects I realized Microchip programmers are unbelievably slow at their supposed job. I compared the flashing time of two ELF files of about the same size (~100 KB each): one for the PIC24FJ128, the other for an STM32F030.
The results? While my ST-Link V2 is done in a mere 8 seconds, the ICD3 needs 25 seconds to finish. That’s a threefold difference. This means that anyone assigned to the task of preparing products to be delivered will cost you thrice as much, weighing heavily on the returns.
As a side note — the actual flashing time is more or less the same, about 5 seconds for both. The problem is MPLABX (Microchip’s development environment) takes 20 damn seconds to handshake with the device. I am unsure about what the hell they are chit chatting about, but apparently it’s so important you have to waste a third of a minute on it.
This means that it took me a minimum of one and a half hours to program 200 PIC24 boards, while I could have been done in just 30 minutes if the product mounted an STM32 counterpart.
This is already a sizeable increase, but it’s not over yet. When handling big orders it is appropriate to build a somewhat automated setup to save as much time as possible; while this is generally achievable for most manufacturers, Microchip seems insistent that you take things slowly.
Zero Flexibility
STMicroelectronics has the simplest possible approach to developing firmware for its products. Any ARM compiler will do to create a binary image, and they distribute a command-line tool to flash it. This is also paired with a complex code generator (STM32CubeMX) and IDE, but you are not forced to use those.
Unfortunately, MPLABX does not allow timing the flashing procedure.
In this way it is fairly easy to set up a flashing workstation from any small device running Linux (looking at you, Raspberry Pi) and a touch display. You can even build a continuous integration system with a headless Jenkins server connected to a protoboard: the changes pushed to the Git repository are automatically tested and validated on real hardware.
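The scripted side of such a station boils down to a few lines. This sketch assumes the open-source `st-flash` utility from the community stlink project (the vendor's own command-line tool is invoked analogously); the Python function names are made up for illustration:

```python
import subprocess

def build_flash_cmd(firmware_path, address=0x08000000):
    # Application flash on STM32 parts typically starts at 0x08000000.
    return ["st-flash", "--reset", "write", firmware_path, hex(address)]

def flash(firmware_path):
    # True when the flasher exits cleanly, False otherwise.
    return subprocess.run(build_flash_cmd(firmware_path)).returncode == 0
```

On a Raspberry Pi, an operator station is then just a loop that waits for a board, calls `flash()`, and shows pass/fail on the touch display.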
This is simply not possible with a PIC family microcontroller. The only way to develop firmware in Microchip’s world is to use their infamous MPLABX IDE, complete with obsolete GUI and long-running bugs. There is just no other option to flash a PIC MCU than to use a full-blown personal computer, start up a session, and painstakingly flash each device by hand.
If you really need an automated procedure you can implement a custom bootloader for your application, but guess what? | https://medium.com/swlh/the-hidden-costs-of-firmware-flashing-c2d3dde09628 | ['Mattia Maldini'] | 2020-02-05 21:52:17.943000+00:00 | ['Entrepreneurship', 'Comparison', 'Embedded Systems', 'Hidden Cost', 'Product Design'] |
15 Old Water Towers of Berlin | This water tower was part of the Charlottenburg gas plant but is also known as Wasserturm Gaußstraße. By the 2000s, you could also notice huge — higher than the tower — ball-shaped gas tanks nearby. However, nowadays, the tower stands alone among commercial and logistics facilities.
The tower seems to be in good shape, but broken windows and a welded door might indicate the structure is currently abandoned.
4. Wasserturm Prenzlauer Berg
District: Prenzlauer Berg. Height: 44 m. Built in 1875–1877. Info: Google Maps, Wikipedia.
This tower is the oldest and most well-known one and was functioning for over 70 years. Unlike many similar structures that are just elevated water tanks, Wasserturm Prenzlauer Berg also included apartments for machinery operators who used to work in the tower. And it’s probably the only Berlin’s tower that you can comfortably observe from the park on a hill — relaxing on a bench and sipping a refreshing drink.
5. Wasserturm Güterbahnhof Moabit
District: Moabit. Built in 1893. Info: Google Maps, Wikipedia (in German). | https://medium.com/5-a-m/berlin-water-towers-4d6fd66797d5 | ['Slava Shestopalov'] | 2020-10-20 09:43:08.519000+00:00 | ['Design', 'History', 'Berlin', 'Architecture', 'Photography'] |
Simple Psychological Hacks to Break Your Bad Money Mindset | Hide your money
The number one mistake most people make is to leave all their money in a single checking account. Instead, think of your checking account as a funnel: it’s where money from your job or business arrives, but it shouldn’t stop there.
Always keep about 1.5–2x of your monthly expenses in this main account, and transfer every dollar over that amount to a high-yield savings account. Be sure to do your research on which one to choose, but if you want a quick answer, I use Ally Bank.
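That rule of thumb is simple enough to express in code. In this sketch, only the 1.5–2x buffer comes from the text; the function name and the numbers used below are made up:

```python
def amount_to_move(checking_balance, monthly_expenses, buffer_months=2.0):
    # Keep `buffer_months` worth of expenses in checking; everything
    # above that is a candidate for the high-yield savings account.
    surplus = checking_balance - buffer_months * monthly_expenses
    return max(surplus, 0.0)
```

Run monthly (or on every paycheck), this keeps the checking account hovering at the buffer while the rest quietly accumulates elsewhere.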
Savings rates change all the time. Last year Ally offered 2.2% interest. As 2020 draws to a close, Ally now offers 0.6%. This is to be expected due to the economy and state of the world right now. A good rule of thumb: the cheaper money is to borrow (rates are hovering around 2.75% right now), the lower the interest you’ll earn on savings.
Besides earning a small interest rate, why should you funnel all your money into another bank account? So you don’t have easy access to your money, and you don’t see how “rich” you are on a daily basis.
If all you see is the 1.5–2x of monthly expenses amount in your main checking account, you won’t feel as compelled to go on an Amazon shopping spree and buy things you don’t need.
Of course, the funnel shouldn’t stop there either. You should also transfer some of that money into an investment account, but the point is to hide your money and forget about it, so your nest egg grows in the background.
Add tax to your personal expenses
Let’s say you’re thinking about upgrading to a new smartphone. An average iPhone is about $999 + tax for a total of ~ $1100. How many hours of work will it take you to pay for the new phone?
For this example, let’s say you make $40 per hour. $1100 ÷ $40 = 27.5 hours. But it doesn’t stop there. To pay for that $1100 phone, you would need to work more than 27.5 hours due to taxes.
At $40/hr, you’re making roughly $80,000 a year, which puts you in the 22% federal tax bracket. Depending on what state you live in, your state tax rate will vary. I live in California, which means that at $80k per year you add a 9.3% state tax, for a total of ~31%.
In actuality, your tax rate will probably be less since the US uses a progressive tax system, but this is just an example for argument’s sake.
We need to add that 31%, which is $341, on top of the $1100 purchase to earn enough money to pay for it. In total, you would need to work 36 hours to earn $1441 to pay for the new iPhone. Now it’s up to you to decide, are 36 hours of work worth the return of a new iPhone?
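The arithmetic above is easy to script. One caveat worth flagging: adding 31% on top of the price (the shortcut used here) slightly understates the true gross-up, since to have $1,100 left after a 31% tax you must earn price ÷ (1 − rate). Both versions are shown; the function name is made up:

```python
def hours_to_afford(price, hourly_wage, tax_rate, gross_up=False):
    if gross_up:
        gross_needed = price / (1 - tax_rate)   # exact: after-tax earnings equal the price
    else:
        gross_needed = price * (1 + tax_rate)   # the shortcut used in the text
    return gross_needed / hourly_wage

shortcut_hours = hours_to_afford(1100, 40, 0.31)              # ~36 hours, as above
exact_hours = hours_to_afford(1100, 40, 0.31, gross_up=True)  # ~39.9 hours
```

Either way, the habit is the same: price a purchase in hours of work, not dollars.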
Once you start thinking about your personal expenses this way, you might start to think twice about following through with them.
The 30-year rule
Whether it’s stocks, bonds, or real estate, how do you know if it’s a good investment?
Ask yourself this question:
In 30 years, will I still view this as a good investment?
If the answer is yes, buy it. If you’re not sure, then you might want to think about it more before investing.
Thinking about investments in the longterm will help you make the right choices. Don’t buy stocks in the short term just because everyone else is, then expect them to rise in value and cash out in a few months. Unless you’re a skilled investor or have been buying stocks for quite a while, this is a recipe for disaster. Why would you want to gamble with your hard-earned money?
Start early, contribute often
I’ve been investing in low-cost index funds like a target-date fund and an S&P 500 index fund ever since my first full-time job. I started with small contributions of 1–3% of my salary. As I’ve earned more through the years, I slowly increased my contributions.
I had no idea what I was doing at the time. I just followed the advice everyone gave me “contribute to your 401k”.
Looking back, I wish I contributed much more aggressively. You just can’t beat the advantage of compound interest. Those small contributions from 7 years ago have grown substantially.
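Compounding is the reason those early contributions matter. A toy illustration (the $2,400/year and 7% figures are invented, not the author's actual numbers):

```python
def future_value(yearly_contribution, annual_return, years):
    # Contribute at the start of each year; the balance then grows for the year.
    balance = 0.0
    for _ in range(years):
        balance = (balance + yearly_contribution) * (1 + annual_return)
    return balance

fv = future_value(2400, 0.07, 7)  # ~22,224 grown from 16,800 contributed
```

Stretch the horizon to 30 years and the same contributions pass $240,000, which is why starting early beats starting big.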
But instead of wishing I invested more in the past (and with after-tax dollars instead of pre-tax), I use this experience to make better investing decisions in the future. Now I prioritize investing. Every time I earn money, whether it’s from my full-time job or side business income, I funnel a set percentage into investments. | https://medium.com/the-post-grad-survival-guide/simple-psychological-hacks-to-break-your-bad-money-mindset-ebdb4739e643 | ['Monica Galvan'] | 2020-12-15 01:42:12.913000+00:00 | ['Finance', 'Psychology', 'Self Improvement', 'Money', 'Economics'] |
Why I Started with a Lower Paying Job | The hidden advantages of NOT maximizing for income early
The paradox of chasing high pay early
I think it’s a decent default rule to take the lowest pay you can for the first decade or so of your career.
If you ignore pay and focus on other things, I’m willing to bet you’ll be happier and earn more pay a decade in than someone who maximizes pay from day one.
I’ll walk you through my own example to explain some of the reasons I think this is true.
An early choice
It wasn’t a dramatic sell-your-soul-for-riches moment, but at age 19 I had a juicy job offer. I had just gotten married, had no job (nor did my wife), zero cash, and a mortgage to pay. I was working odd electrical and landscape gigs and sending out resumes.
I had a job opportunity that my friends were jealous of. It had a $45k base salary, company car, plus commission. They said I’d likely make around $50k in my first year.
I said no.
I wasn’t really excited about the job, but I could’ve been fine with it. Beyond looking for something more connected to my interests and skills, I also knew this job had a fairly low ceiling. I’ve never been primarily motivated by money, but I knew some middle aged people who had essentially the same job and I knew their life seemed pretty mediocre to me, financially and otherwise.
Instead, I got a job I was thrilled about that paid $25k. I excelled, and it led to several next steps with slightly higher pay. Here’s my first eight years in the professional world.
My actual pay trajectory:
Year 1: $25,000
Year 2: $28,000
Year 3: $35,000
Year 4: $35,000
Year 5: $40,000
Year 6: $40,000
Year 7: $45,000
Year 8: $75,000
Seven years in, I was still making less than that first offer I turned down. I never focused on pay. I don’t think I ever asked for a raise. That year seven job probably paid me too much too. I would’ve done it for $40k, but I wasn’t going to turn down the higher offer.
Let’s compare my pay trajectory to a very reasonable estimate of what I would’ve pulled in over the same eight years with the other job.
My forgone pay trajectory:
Year 1: $50,000
Year 2: $55,000
Year 3: $60,000
Year 4: $60,000-$65,000
Year 5: $60,000-$65,000
Year 6: $60,000-$65,000
Year 7: $60,000-$70,000
Year 8: $60,000-$70,000
The job had a pretty consistent ceiling. The best performers made somewhere between $60–70k. It wasn’t a role or industry that really had a clear path to something else within either. A jump to something totally new can always be made — I’ve done it myself more than once — but one of the dangers with high pay early is that it makes those jumps harder and less likely.
So for seven years, I looked like a poor sucker compared to my company car driving alternate self. But by year eight, I not only surpassed the ceiling of the previous trajectory, but had massive amounts of opportunity and social capital at my disposal, not to mention greater fulfillment in my work.
Oh, and each year after year eight saw quick and significant jumps in income as my years of learning where to focus and optimizing for value began to pay off. The years of non-income optimization also set the stage for me to launch a company in year 10 (and another in year 15!), something I’d never have been able to do without all the social capital I’d built.
Why does it work this way?
I think there are several reasons I was better off taking the lowest pay my wife and I could handle for nearly every job for the first several years. If I have any regrets, it’s that I didn’t find a way to live on even less and worry even less about pay.
For me, lower pay early on meant several things:
1. It was easier to be impressive
A great way to get ahead in your career is to always strive to be the best employee anywhere you work. Not all employees are equal, and this is where low pay can be a big advantage.
A young $35k worker gets noticed for being just a little above average. A young $60k worker, on the other hand, had better be pretty damn good to command that salary early in their career.
You can stand out to your colleagues and the broader world pretty easily when you’re a low paid employee. Soon, you develop a reputation. “Hey, have you met that young dude from that one place? What a hustler. Maybe he’d be a good fit for this…” Again, this works both internally at a company and externally.
I had little problem establishing myself as impressive because being obviously worth more than a low salary is doable with a little hustle. A great reputation built over those first several low pay years can catapult you to a higher pay in ten years than yearly cost of living increases at a fatter starting point.
2. It was easier to find what I loved and hated
$50k isn’t quite golden handcuffs, but it’s kind of like copper handcuffs. Early, that’s a lot of money. Once you’ve tasted it, it’s very hard to go back. One problem my wife and I have is that we’ve always managed to live right up to our level of income. Always. Once you have high pay, going back is brutal.
This means that if you discover two years in you never want to audit tax documents again, it might be too late. Not just to change, but to even see it. You actually become worse at knowing yourself and being honest with yourself if you are paid a lot. You weave stories about how much you kinda sorta like it, or how you’ll leave in five years. Lies. I saw many people do it.
Self-discovery is too important to play servant to your early income goals.
3. It was easier to find new opportunities
Opportunities travel through the grapevine. You have to be sending a frequency that others tune into in order to find them. People have a rough idea how much you make. If you are a young hotshot with a big salary, I can almost guarantee someone somewhere has said, “Hey, she’d be a good fit…but doesn’t she make like $50k right now? This role starts at $35k. Doubt she’d do it. Who else?”
I’ve said it about people many times. Sometimes, the person in question was too senior for the role, but sometimes they were perfect for it but their pay was just too senior for the role. Some of them will be making about the same in five years, where the role I decided not to consider them for could have blossomed into much more than that.
I got some very cool opportunities for jobs and side gigs that I never would have gotten had I started at the $50k job. Low opportunity cost was my secret weapon.
You don’t realize it, but a highly paid young person is like a red flag to people looking for hungry young talent. Salary maximizers often miss out on the best long-term opportunities.
4. It was easier to act on new opportunities
Even if you manage to gain self-knowledge, and you manage to find cool opportunities, if you’re earning bookoo bucks early, seizing them isn’t easy.
Again, the copper handcuffs begin to chafe. You are used to a certain standard of living afforded by your high pay. It can be hard to move to completely different areas and maintain it. I had little trouble early on saying no to bad fitting roles and yes to cool ones with low pay. I started at $25k and we learned to live on that (we had our first child while I was still making $28k for goodness sake!), so new opportunities weren’t big sacrifices.
Focus on what matters
Go into each job with a mission to be the best employee there. To create the most value, have the most fun, capture the company vision and help build on it, learn everything you can, help as many people as possible. Don’t turn down a job you like over a few thousand in salary. Don’t haggle over an offer to ratchet up the pay inch by inch. Ignore salary altogether if you can. Focus on building value for yourself and others.
I’m willing to bet the entire $50k I passed on in year one that you’ll be doing better a decade down the road than those who maximize pay above all.
Isaac is the CEO of the career launch platform crash.co, where he also writes about career stuff. | https://medium.com/the-mission/why-i-started-with-a-lower-paying-job-57a7c5c5cbc3 | ['Isaac Morehouse'] | 2019-02-14 16:05:28.013000+00:00 | ['Money', 'Entrepreneurship', 'Careers', 'Learning'] |
Our Little Home Cats Are Close of the Big Wild Felines | The Cat (Felis Catus) is a friend of humans for at least 10,000 years. These small mammal carnivores are members of millions of homes.
The species is the only domesticated member of the Felidae family. Cats have many similarities with wild felines: their habits and appearance are close.
The small ones are part of the Felinae. The big ones are part of the Pantherinae.
Studies show that there was a common ancestor about six to ten million years ago. A wild cat (Proailurus) lived about 30 million years ago in Eurasia.
Latin names are the standard of modern biological taxonomy, the discipline of biology that classifies the species of organisms.
The Swedish naturalist Carolus Linnaeus (1707–1778) transformed this discipline.
He proposed the binomial nomenclature for animals and plants: the first name indicates the genus, and the second the species.
The scientist organized names at a time when knowledge about nature was overflowing. About 4,400 species of animals and 7,700 of plants were named by Linnaeus.
This concept appeared in the Systema Naturæ. The work went through several editions; the tenth, from 1758, is the starting point of this terminology.
Cover of the 1758 edition of the Systema Naturæ.
On Internet Archive
System of Nature, through the Three Kingdoms of Nature, according to Classes, Orders, Genera and Species, with Characters, Differences, Synonyms, Places
These pages describe seven species: Leo, Tigris, Pardus, Onca, Pardalis, Catus, and Lynx. | https://medium.com/4devs/our-little-home-cats-are-close-of-the-big-wild-felines-f700eb12fe70 | ['Daniel Roncaglia'] | 2020-06-19 18:21:21.606000+00:00 | ['Cats', 'Animals', 'Biology', 'Science', 'Taxonomy'] |
10 Tips for Writing Scientific Papers | #1: Adhere to the Macro
Form matters. Writing in a pre-defined form eases reading, as readers have a sense of what’s next and can easily relate your work to others. Most papers stick to the following meta-structure, and you should do so as well:
Abstract: One-paragraph synthesis of the entire document.
Introduction: Field, problems, and contributions.
Related work: To-date solutions and how they are lacking.
Body: Free zone. Describe your innovative work here.
Results: Comparison to the state of the art; proof of usefulness.
Conclusions: Sum-up and avenues for future investigation.
While there are some exceptions, this is the structure you will see the most when reading other people’s work. These sections might mix or swap, but they are always there. What usually changes, from paper to paper, is the internal structure within each section.
#2: Pay Attention to the Micro
Inside each paragraph, each sentence should have a reason to exist. A simple exercise is to name each sentence based on its role to the whole. If you are unable to name or label it, remove it. Another way to put it is: all sentences must add up to the paragraph conclusion. This tip is about finding and eliminating text that is not aiding or building towards anything.
As an example, picking a paper at random, see the first paragraph of the Ray tracing on programmable graphics hardware paper:
Ray tracing on programmable graphics hardware first paragraph.
Here, I color-coded each sentence. Try for yourself to label each one based on their role/task. Moreover, which sentences depend on others?
In order: the topic introduction (yellow), computing power has increased (blue), real-time ray-tracing is real (green), special chips are being made (pink), and conclusion (teal). Yellow states ray-tracing, which is mentioned in all other sentences. Blue and green do not depend on each other. Pink adds to the information of blue and green. Teal gathers everything to conclude the paragraph. Therefore, every sentence has a role in the conclusion.
If the authors were having trouble fitting the page count needed for a specific submission, this rationale could be used to find which are the least essential sentences. In this example, between blue, green, and pink, pink is the least informative argument and would be my candidate for removal.
#3: Write like J. S. Bach
In the previous example, the microstructure of the paragraph includes an introduction (yellow) and ends in a conclusion (teal). At the section scope, there is an introductory paragraph and a conclusive paragraph. The same goes for the paper scope, with the introduction and conclusion sections. As in any Bach’s piece, your text should be a constant stream of arguments and conclusions. Paragraphs must deliver something. Sections must deliver something. The paper has to deliver something. All levels matter.
There is no need to be a trained musician to understand this. Close your eyes and listen to the first minute of the Chaconne. Pay attention to how it constantly argues, counter-argues, and concludes the musical phrases.
Bach’s Greatest Work: The Chaconne, played by Hilary Hahn
The exercise is: for each paragraph, ask yourself: what it brings to the table? How does this add to the section narrative? Inconclusive paragraphs should be reviewed, merged, or removed. The same goes for conclusive paragraphs that do not add to the section message. The previous tip was about removing unnecessary text. This one is about tying everything up into a narrative.
#4: A Strong Related Work is a Strong Work
Out of all sections, the related work is the one that is most often lacking. By the time a reader reaches this section, he/she should be aware of the problems you are targeting, but not exactly sure what has been done to tackle them. The job of the related work is two-fold: to highlight the important literature and to delineate a common issue with most to-date solutions.
A weak related work merely states what has been done so far. It does not pose a question or shows a common weakness. In a sense, it does not give a reason for your paper to exist. Why should there be another paper if there is no lingering problem to be solved?
An excellent related work values the superb work that has been done so far, while elegantly stating that there is still more to be done; and one or more of those gaps is what you are tackling in the article. This takes a good scientist to write, not just a good writer. A researcher’s eye sees the gap in the literature. This is where a good adviser comes into play. Use his/her eyes until you can use yours.
The Focal Loss paper gives a good example of this. The field is deep learning, and the specific problem is object detection. I recommend reading the entire related work. However, for brevity, consider just the third and fourth paragraphs, given below:
The paragraph begins enumerating the seminal one-stage detection architectures. Then, in yellow, it states the “gap”: one-stage detectors are faster but less accurate than two-stage approaches. In contrast, their work investigates: is it possible to be as accurate while retaining the speed?. Finally, they emphasize that their approach is based on a novel loss, not an innovative architecture. In sum, we have been presented the methods, their common issue, and how the authors addressed it.
It doesn’t hurt to mention how the microstructure of these paragraphs is outstanding and how it constantly poses arguments and ties them together. The first paragraph is special, as it even has a mid-paragraph resolution. There is no limit to how micro you can get.
#5: The Reordering Game
For each paragraph, consider all of its sentences independently. Can you reorder them without compromising the meaning? If almost no scrambling is possible, you have a clear and growing argument. Else, you might be just throwing one argument after another, until the last sentence comes to round up everything. Sentence and paragraph order matters; some orders are more effective at delivering a message than others.
In the Focal Loss example, put the first highlighted sentence just before the second and read it all. The message is more or less retained. However, this creates a long stream of facts about detectors. Putting that sentence in the middle gives the readers a pause to breathe and to understand why these facts were given, which will ease digesting the next set of facts.
Consider the tip #4 text, second, third, and fourth paragraphs. The first two cover what a weak/strong related work is and the third gives a strong example. The second and third could be swapped without changing the meaning. However, we would be talking about weak related work and go to a strong example. This is sub-optimal. If you are giving a strong example, put it near the definition of what strong means.
This whole article is made of ten individual tips that could be said in any order. However, some orders place similar ideas closer and distinct ideas apart. In many cases, much can be improved by just aligning ideas properly. If you will compare weak with strong, start with the weak. If you will talk about A and B, and give an example of A, put B first. Ideas should always be building up. Write towards a climax.
#6: Do Not Repeat Yourself
Another common mistake is to repeat concepts said in previous sections to “remind the reader.” Do not do this. Write for readers that pay attention. Do not cue the reader as if he had just forgotten what you said.
If you do need to recall previous ideas, consider (1) naming them or, (2) assigning numbers. An example of (1) is Freud’s concepts of ego, super-ego, and id. In his theories, he extensively relied on these names on many occasions, instead of restating their definitions all the time. An example of (2) is given here and now, using the numbers in parenthesis. No need to repeat.
Keep in mind, however, that nothing said in the abstract should be relied upon on the text. Consider the abstract as not being part of the manuscript at all. Repeat anything you need at the introduction.
#7: Describe the Problem, not the Field
In most papers, you are within an ample field, and you are targeting a specific issue. For instance, you might be studying children’s development and focusing on their performance on puzzles. You can expect familiarity with the field, but not as much for the problem.
In more detail, “Children Development” is the field. You may describe it, but assume your readers are familiar. “Performance on Puzzles” is a specific problem. Explain this thoroughly; do not expect your readers to be familiar with this or to be aware of the typical puzzles used to assess development in kids. For instance, consider the Focal Loss paper once again. It does not try to overview the field: what neural networks are. Instead, it outlines the problem of object detection.
#8: Papers Are Not Tiny Books
Book authors expect you to read it front to back. No section skipping. Articles, on the other hand, are usually read in crazy ways. Of all parts, the “body” is the less often read. As a rule of thumb, your text should be ready for readers that jump from any section to any other section.
This happens all the time. After a year on a specific problem, you may have a set of “favorite papers,” in which you regularly consult the “body” and “results” section only. Whenever a new paper on the topic arises, you go instantly to the conclusions to see what it did. On new problems, we quickly skim hundreds of titles, of which some we read the abstract, and so on.
Imagine a master student who has two days to judge 20 papers as relevant to some question. This smart kid won’t read any of them thoroughly. He will read the 20 abstracts and filter his pile to 12 papers. Then, he will read 12 introductions, and, for some of them, he might read the conclusions, arriving at eight papers. Job’s done. Later on, a Ph.D. receives the eight documents and reads their abstracts. Two of them he saved to read entirely next week, the other six he skimmed the “body” section to get a feeling for it and selected one more to understand fully. So far, no one followed the actual order of any paper.
Edit: I just read an article that illustrates my point. The author calls it “three pass approach”. Literature reviews are done in passes, and they purposely focus on articles non-linearly. Make sure your paper is ready to be read like that.
#9: Good Writers Borrow; Great Writers Steal
Sometimes you hit writer’s block. Nothing is coming out, but the deadlines are coming closer. Writing is hard; filling is easier. Find a paper you like, from another topic, then color-code the sentences or paragraphs of the section you are in right now, same as before with the other tips. Now, your task is just to fill these colors with your own content. You stole the structure, not the text.
Consider the following text. It steals the structure from the color-coded paragraph used in tip #2 and sets it in a totally different field.
It is valid to consider if this could be seen as copyright infringement or some form of plagiarism. However, this is not the case. This idea is not new and certainly is not rare. This is a standard procedure and an authentic way of getting the juices flowing. Most of the time, you only need the first paragraph to get going, then it is all yours.
#10: A Table is Worth a Thousand Plots
Plots are nice; plots are beautiful. Include as many as you can. However, keep in mind that you should have at least one table that sums up the data on most of the plots. Tables are the truth. They allow reviewers to do their own visualizations, to bypass log-scales, to compare your results to others, etc. Feel free to use several tables or a single master one, as long as there is a numerical version of your main results available to readers.
Consider the Focal Loss paper once more. Data science is full of experiments, and each experiment yields a ton of data. It is hard to judge whether a technique is really good by just reading about it and seeing some plots. Now, take a look at table 1:
Given the table, you might still object to the work or question the proposed loss function. However, one thing you cannot deny: tests were made. Tables show the true amount of effort that has been put to prove the effectiveness of the proposed method. The saying “a picture is worth a thousand words” does not generalize to plotting: no set of plots can ever make for a good table. | https://medium.com/swlh/10-tips-for-writing-scientific-papers-8c60ae18fed2 | ['Ygor Rebouças Serpa'] | 2020-05-02 21:13:18.911000+00:00 | ['Writing', 'Education', 'Data Science', 'Writing Tips', 'Academia'] |
Getting started with ReactJS | In this tutorial, you are going to learn what ReactJS is and how you can use this JS framework to write maintainable web apps in JavaScript.
The project we are going to build will be a clone of Coinmarketcap, a famous website that lists the latest price of Cryptocurrencies. CMC provides JSON APIs to access their data.
Before I get into the main project, let’s talk a bit about ReactJS itself.
What is ReactJS?
From the official website:
A JavaScript library for building user interfaces.
ReactJS was initially developed by Facebook and now maintained by the community.
ReactJS is:
Declarative:- You don’t spell out all the nitty-gritty details; you just declare what you want, and the framework takes care of the rest. In short, no direct DOM manipulation like it’s done via jQuery.
Component Oriented:- In ReactJS, a page is divided into components, and some components are children of other components. For instance, if there is a page with a search bar, the search bar will be treated as a component.
Learn Once, Write Anywhere:- ReactJS can be used as a server-side language with NodeJS and for mobile apps with the help of React Native.
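To make “declarative” concrete, here is a toy sketch that renders a search bar as a pure function of state. This is only an illustration of the idea under an assumption of my own: I render to an HTML string, whereas React renders to the real DOM.

```javascript
// Toy illustration of "declarative": the UI is described as a pure
// function of state. Instead of mutating the old output in place
// (the jQuery style), we simply re-run the render with new state.
function renderSearchBar(state) {
  return '<input value="' + state.query + '" placeholder="Search...">';
}

const before = renderSearchBar({ query: '' });
const after = renderSearchBar({ query: 'bitcoin' });

console.log(before); // <input value="" placeholder="Search...">
console.log(after);  // <input value="bitcoin" placeholder="Search...">
```

The point is that nothing in this code reaches into the previous markup to patch it; each render describes the whole widget from scratch, and (in real React) the framework diffs and applies the minimal DOM changes.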
Thinking in React
The team behind ReactJS shared a very useful article that talks about the philosophy behind the framework itself and how could you design your apps efficiently by collaborating with other team members. I will be following it to write our CoinmarketCap clone.
The mockup is done so is component labeling. We will be making our components accordingly.
Creating ReactApp
Assuming you have up-to-date Node and NPM installed, we will use the npx tool to install and run the React app locally.
npx create-react-app cmclite
You should also install React Developer tools, a Chrome extension to inspect DOM.
Once the app is created, run npm start to run the server. If all goes well, you should see something like:
React App in Action
At the first level, it creates public and src folder and a few other files.
➜ cmclite git:(master) tree -L 1
.
├── README.md
├── node_modules
├── package-lock.json
├── package.json
├── public
└── src
In public folder you find an index.html file. The file contains the following line:
<div id="root"></div>
This is the main container div of the entire app.
Notice that no .js file is included within index.html . Create React App does some magic behind the scenes to take care of all such dependencies. If you want to dig deeper, read this excellent answer.
Next, go to src/App.js file in which you find the following code:
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <p>
            Edit <code>src/App.js</code> and save to reload.
          </p>
          <a
            className="App-link"
            href="https://reactjs.org"
            target="_blank"
            rel="noopener noreferrer"
          >
            Learn React
          </a>
        </header>
      </div>
    );
  }
}

export default App;
We will remove the unnecessary code lines and the file now looks like:
import React, { Component } from 'react';
import './App.css';

class CMCLiteApp extends Component {
  render() {
    return (
      <div className="App">
        <h1>Welcome to CMC Lite</h1>
      </div>
    );
  }
}

export default CMCLiteApp;
I am not going to discuss what this import means since this is beyond the scope of this article. You might want to google to learn about ES6 javascript. If you are too lazy, visit here.
Alright. the very outer component that is CMCLiteApp is ready and now we have to work on inner components.
Did you notice the weird mixture of JavaScript and HTML? It is called JSX (JavaScript XML).
What is JSX?
ReactJS uses JSX for templating purpose, though it is not necessary. There are a few advantages to using it:
Faster because it performs optimization while compiling.
Type-safe, most of the errors can be caught during compilation.
Productivity is increased as it is easier to write templates.
Babel is used as a pre-processor to compile JSX based syntax to JS objects. Let’s discuss a simple example:
const element = <h1>Hello, world!</h1>
You can see this weird syntax. Weird because you see HTML and JavaScript mixed up. If you paste this line into the online Babel REPL, you will see the native JS code:
"use strict"; var element = React.createElement("h1", null, "Hello, world!");
Another:
const myId = 'test'
const element = <h1 id={myId}>Hello, world!</h1>
Here you assign a variable and then use it. The JS version of it is generated as:
var myId = 'test';
var element = React.createElement("h1", {
id: myId
}, "Hello, world!");
As you can see, the JSX version is shorter and more readable.
<div>
<Article />
<LeftBar />
</div>
The JS version is:
"use strict"; React.createElement("div", null,
React.createElement(Article, null),
React.createElement(LeftBar, null));
As you can see, <div> has nested <Article> and <LeftBar> tags, and the JS version creates the corresponding nested elements.
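To make the compiled output concrete, here is a minimal stand-in for React.createElement. This is a sketch under my own assumptions: real React elements carry more fields than just type, props, and children, and component types would be functions or classes rather than the placeholder strings used here.

```javascript
// Minimal stand-in for React.createElement: it just records the
// element's type, its props, and any child elements.
function createElement(type, props, ...children) {
  return { type: type, props: props || {}, children: children };
}

// Placeholder "components" (real ones would be functions/classes).
const Article = 'Article';
const LeftBar = 'LeftBar';

// The JSX <div><Article /><LeftBar /></div> compiles to nested calls:
const tree = createElement('div', null,
  createElement(Article, null),
  createElement(LeftBar, null));

console.log(tree.type);            // div
console.log(tree.children.length); // 2
```

Running this shows why JSX is just sugar: the nested tags become a nested tree of plain objects that the library can later turn into DOM nodes.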
Alright, let’s get back to our project. We will now create the CurrencyList component, which for the moment does nothing but display a <table></table>.
I created a folder under src and named it components . Under this folder I created a file called CurrencyList.js . Refer to the diagram/mockup I shared above; the name is used there as well.
import React, { Component } from 'react';

class CurrencyList extends Component {
  render() {
    return (
      <table></table>
    );
  }
}

export default CurrencyList;
The CurrencyList.js file contains the <table> tag. App.js now looks like this:
import React, { Component } from 'react';
import CurrencyList from './components/CurrencyList';

class CMCLiteApp extends Component {
  render() {
    return (
      <div className="App">
        <h1>Welcome to CMC Lite</h1>
        <CurrencyList />
      </div>
    );
  }
}

export default CMCLiteApp;
The <table> tag is visible here. The typical Chrome inspector shows the following markup:
You can definitely see the difference between the two markups here.
OK, the next component is the Currency component. A single row of the table. Create a new file Currency.js in src/components .
import React, { Component } from 'react';

class Currency extends Component {
  render() {
    return (
      <tr>
        <td>Bitcoin</td>
        <td>$68,501,264,485</td>
        <td>$3,897.57</td>
        <td>$9,419,160,206</td>
        <td>17,575,400 BTC</td>
      </tr>
    );
  }
}

export default Currency;
As expected, it contains nothing but a row of entries. Right now it is hard-coded, but it will soon be linked to the CMC API. The CurrencyList.js file now looks like this:
import React, { Component } from 'react';
import Currency from './Currency';

class CurrencyList extends Component {
  render() {
    return (
      <table className="table margin-top">
        <tr>
          <th>Name</th>
          <th>MarketCap</th>
          <th>Price</th>
          <th>Volume(24h)</th>
          <th>Circulating Supply(24h)</th>
        </tr>
        <Currency />
      </table>
    );
  }
}

export default CurrencyList;
After adding the static <th> cells, I called the <Currency /> component. I also made Bootstrap-related changes in App.js, which now looks like:
class CMCLiteApp extends Component {
  render() {
    return (
      <div className="container">
        <div className="row">
          <div className="col-md-12">
            <h2>© CoinMarketCap Lite</h2>
          </div>
        </div>
        <div className="row margin-top">
          <CurrencyList />
        </div>
      </div>
    );
  }
}
If everything works fine you should see something like below:
Before I move further and connect the API, let’s talk a bit about Props and State.
Prop vs State
A prop is a read-only object passed to child object(s) by a parent object. Props can’t be modified; a state object, on the other hand, can be. Typically, a state object becomes a prop when it is passed to a child object. From the CoinMarketCap API, I picked a couple of JSON entries and set a state variable in the App.js file.
state = {
currencies: [
{
"id": 1,
"name": "Bitcoin",
"symbol": "BTC",
"slug": "bitcoin",
"circulating_supply": 17578950,
"total_supply": 17578950,
"max_supply": 21000000,
"date_added": "2013-04-28T00:00:00.000Z",
"num_market_pairs": 6700,
"tags": [
"mineable"
],
"platform": null,
"cmc_rank": 1,
"last_updated": "2019-03-09T12:07:27.000Z",
"quote": {
"USD": {
"price": 3943.74146337,
"volume_24h": 10641968561.724,
"percent_change_1h": 0.0142697,
"percent_change_24h": 0.477704,
"percent_change_7d": 2.24831,
"market_cap": 69326833997.50806,
"last_updated": "2019-03-09T12:07:27.000Z"
}
}
},
{
"id": 1027,
"name": "Ethereum",
"symbol": "ETH",
"slug": "ethereum",
"circulating_supply": 105163658.5616,
"total_supply": 105163658.5616,
"max_supply": null,
"date_added": "2015-08-07T00:00:00.000Z",
"num_market_pairs": 4770,
"tags": [
"mineable"
],
"platform": null,
"cmc_rank": 2,
"last_updated": "2019-03-09T12:07:20.000Z",
"quote": {
"USD": {
"price": 138.471887904,
"volume_24h": 4958392663.27933,
"percent_change_1h": 0.105328,
"percent_change_24h": 0.747307,
"percent_change_7d": 2.97593,
"market_cap": 14562210339.916405,
"last_updated": "2019-03-09T12:07:20.000Z"
}
}
},
]
}
And then in render() I did the following:
<div className="row margin-top">
<CurrencyList currencies={this.state.currencies} />
</div>
And then in the CurrencyList component, I accessed it via a prop variable:
render() {
console.log(this.props.currencies);
return (...
As you can see, I passed a currencies parameter in the CurrencyList tag, which I then access within the CurrencyList component. So far so good; our data is available. What is left is to iterate and create multiple <Currency> objects. I am going to make a few changes in CurrencyList.js, so pay attention as they are important.
return (
<table className="table margin-top">
<thead>
<tr>
<th>Name</th>
<th>MarketCap</th>
<th>Price</th>
<th>Volume(24h)</th>
<th>Circulating Supply(24h)</th>
</tr>
</thead>
<tbody>
{
this.props.currencies.map(( currency ) => {
return (
<Currency key={currency.id} currency={currency} />
);
})
}
</tbody>
</table>
First I surrounded the rows with <thead> and <tbody> because the compiler was giving me warnings. After that, I added an evaluation block {} and within it called map to iterate the currencies array, passing each value to the <Currency> tag. Notice the key parameter: React wants to make sure there are unique DOM entries. Arrow functions and map are beyond the scope of this post; you may find good resources online. You can also use a plain old for loop. The Currency component now looks like below:
<tr>
<td>{this.props.currency.name}</td>
<td>{this.props.currency.quote.USD.market_cap.toLocaleString('en-US', {style: 'currency',currency: 'USD',})}</td>
<td>{this.props.currency.quote.USD.price.toLocaleString('en-US', {style: 'currency',currency: 'USD',})}</td>
<td>{this.props.currency.quote.USD.volume_24h}</td>
<td>{this.props.currency.circulating_supply}</td>
</tr>
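The toLocaleString('en-US', { style: 'currency', currency: 'USD' }) calls in those cells do the currency formatting. A quick standalone check of that formatting, using a price value from the sample state data:

```javascript
// Format a raw price number the same way the <Currency> cells do,
// using the Intl-backed Number.prototype.toLocaleString.
const price = 3943.74146337;
const formatted = price.toLocaleString('en-US', {
  style: 'currency',
  currency: 'USD',
});

console.log(formatted); // "$3,943.74"
```

This rounds to two decimal places and inserts thousands separators, which is why the rendered table shows clean dollar amounts instead of long floats.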
So far so good. The state object is integrated and the data is being displayed. What is now left is to access the real-time data.
Accessing Remote API
I am going to use Axios library for that purpose. Install it by running npm i axios . You might need to install dependencies manually.
CoinMarketCap’s API does not allow access directly from browser JavaScript, so I created a simple PHP script that fetches the CMC API and returns the data with CORS enabled in the response header. Once Axios is installed, import it: import axios from 'axios';
class CMCLiteApp extends Component {
  state = {
    currencies: []
  }

  componentDidMount() {
    axios.get('http://localhost:8080/fetch.php')
      .then(res => this.setState({ currencies: res.data.data }))
  }
  // ... render() as before
}
After being emptied in the initial state, currencies is set in componentDidMount(), which is invoked immediately after the component is inserted into the DOM tree. Once the data is fetched, setState() assigns the currencies key of the state object.
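React merges the partial object passed to setState() into the existing state and then re-renders. A plain-JS sketch of that merge behavior (my own illustration only, not React's actual implementation, which also batches updates and schedules rendering):

```javascript
// Sketch of setState's merge semantics: the given keys are merged
// into the existing state; untouched keys survive.
function makeComponent() {
  const component = {
    state: { currencies: [], loading: true },
    setState(partial) {
      component.state = Object.assign({}, component.state, partial);
    },
  };
  return component;
}

const app = makeComponent();

// What componentDidMount effectively does once the API responds:
app.setState({ currencies: [{ id: 1, name: 'Bitcoin' }] });

console.log(app.state.currencies.length); // 1
console.log(app.state.loading);           // true (untouched key survives)
```

This is why the code above can set only the currencies key without wiping out any other state the component might hold.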
Conclusion
So this was the very basic ReactJS tutorial which should help you to kick start. There are more things that can be explored and I recommend you to visit official docs and other online resources. The code is available on Github.
This blog post was originally published here. | https://pknerd.medium.com/getting-started-with-reactjs-13e4ceb47d08 | ['Adnan Siddiqi'] | 2019-03-10 10:04:31.223000+00:00 | ['Front End Development', 'React', 'Reactjs', 'Web Development'] |
How Rough Sex Can be Healing | How Rough Sex Can be Healing
When you’re a control freak, sometimes the best way to get off is to let go.
Photo by Artem Labunsky on Unsplash
I am the epitome of a control freak; I’m tightly wound, always stressed, constantly working, my house is immaculately clean, and my brain basically never shuts down. I am anxious all the time, even when I dream I have stress dreams (don’t worry, I’ve been medicated for my anxiety for over half of my life). For me, sex is a huge stress reliever, and it’s pretty much the only time in my life that I can let go and be out of control of my sexual experience.
The ways in which somebody enjoys sex largely depends on who they are and their life experiences. For me, my life had been constantly out of control in my younger years which forced me to be who I am today. I’ve always been consistent, organized, and overly focused on what I can control. Sex, for me, has been the only time that I can truly let go. Of course, with the kind of sex I am talking about, you have to be with somebody whom you trust.
Rough sex, kinks, and sexual prowess are different for everybody; my husband and I, for example, have different sexual desires. The main difference is that he’s just down for whatever, be it vanilla sex or extremely rough sex, he just likes to have sex. I, on the other hand, have very specific things I am into, like choking, spanking, being tied up, being slapped in the face, dirty talk, etc etc etc. The list goes on and on, I’m not just into your generic, little spanks here, some light choking, hair pulling rough sex; I like to be dominated, I like to be completely out of control.
This may seem like two completely different things: a control freak who likes to be out of control. Maybe that’s my kink? Being out of control. | https://medium.com/fearless-she-wrote/how-rough-sex-can-be-healing-ceddaae98c37 | ['Alexandra Tsuneta'] | 2020-02-11 13:01:01.979000+00:00 | ['Women', 'Relationships', 'Psychology', 'Sexuality', 'Sex'] |
Is America’s Veneer of Civility About to Crumble Alongside Its Leadership? | Is America’s Veneer of Civility About to Crumble Alongside Its Leadership?
Public Response to the Full-Blown COVID-19 Crisis Could Make Black Friday at Walmart Seem Like a Walk in the Park
Albert Einstein famously said, “Everybody is a genius, but if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.” Einstein did not foresee our “stable genius” in the White House. That “fish” is not only completely out of his element — more so than any president in the history of the United States — he also refuses to own his unwavering stupidity.
“WE CANNOT LET THE CURE BE WORSE THAN THE PROBLEM ITSELF,” Donald Trump shout-tweeted two Sundays ago, shortly before midnight. “AT THE END OF THE 15 DAY PERIOD, WE WILL MAKE A DECISION AS TO WHICH WAY WE WANT TO GO!”
Trump, seemingly, cannot discipline himself from plagiarizing boneheaded policy directions and cues from Fox News talking heads. The Next Revolution host, Steve Hilton, had said as much during his 9 p.m. show.
“You know that famous phrase, ‘The cure is worse than the disease?’” Hilton declared. “That is exactly the territory we are hurtling towards. You think it is just the coronavirus that kills people? This total economic shutdown will kill people.”
Now everyone has a right to their opinion, full of “dumbfuckery” as it may be. But Donald Trump is — although it pains me to say it — president of the United States. That he repeats with alacrity such breathtakingly mindless hysteria should not be his special skill. But it is.
Maybe I’m social distancing too much with Trevor Noah. But I can picture Trump after Hilton uttered that bit of twaddle — a bloated puffer fish, a “Jabba the Hutt of privilege,” armed with double-barreled, gold-plated squirt sanitizer, pacing the Oval erratically, eyes peering left and right for signs of the unseen enemy. Coronavirus seems destined to take him down, if not out, from his White House perch. And he’s not having it. Indictments — and prison — await him. So it’s war. And he is the wartime president, goddammit. Goddamn Xi and his fucking Chinese virus. [Idea bubble pops.] But only I can fix it! It will be the biggest war, the best war, ever fought by any president in the history of the United States — with the best ratings ever. The economy is so last week anyway. It’s about the war now, Stupid — Keep up, Pence! And if 2.2 million Americans die — people die in a war, right? We are overdue for a purge anyway — the elderly, those hangers-on with preexisting conditions, the dumb… My God, they don’t deserve to live. And the poor, pfft, their time is up, anyway. [Whips out phone — tap, tap…] How the fuck do you spell “decision”? Ivankaaa!
By Monday, Trump had doubled down on his — because, until he denies it next week, it is his — divinely inspired thought. Scaling back efforts to constrain the spread of coronavirus was now the harebrained scheme.
“Our country wasn’t built to be shut down,” Trump declared with the kind of bombast only he can muster. “America will again, and soon, be open for business…If it were up to the doctors, they’d say let’s shut down the entire world, [but] this could create a much bigger problem than the problem that you started out with.”
“I’m not looking at months, I can tell you right now,” he added defiantly, as if to knowledge-shame the world’s top scientists and epidemiologists, the mere minions who had been unanimous in their advice to curb the virus. Common Sense had long fled the White House. But now, Expressed Warnings were being bullied to defect.
By Tuesday, Trump pumped up the rhetoric, committing “to have the country opened up, and just raring to go, by Easter” — then, less than three weeks away — because, well, “it’s such an important day for other reasons, but I’ll make it an important day for this too.”
That Trump did not think it perverse to link the untimely deaths of large swaths of the population with the Christian celebration of Jesus Christ rising from the dead should have been ringing every alarm bell across this country. Except, it didn’t. Two pastors, Tony Spell of Life Tabernacle Church in Baton Rouge, La., and Rodney Howard-Browne of The River at Tampa Bay Church in Florida continued with business as usual. Spell had been busing in about 1,000 people for Sunday service because the coronavirus pandemic is “politically motivated” — a reminder that Trump’s disinformation propaganda has dangerous consequences. Howard-Browne was arrested.
By Thursday, the United States had surpassed China, which is four times more densely populated despite being roughly the same size, to become the country with the highest number of COVID-19 cases. Trump’s “America First” credo had taken on new meaning. But more importantly, our numbers have not yet peaked. Testing limitations prevail. Backlogs at private labs slow down efficient patient care and the delivery of critical data to evaluate how many Americans are actually infected.
By Friday, confirmed cases of coronavirus in the United States crossed the 100,000 mark. The death rate, already exponential, was doubling every 3.03 days and trending toward less time (2.57 days). Deaths already topped 2,000. At that rate, in 30 days, without social interventions working, or a cure, there would be 512 times (2 raised to the power 9) that number — 1,024,000 — “a lot of fucking dead people.” Compare this with the swine flu (H1N1) pandemic, which the Centers for Disease Control and Prevention (CDC) estimated had caused 60.8 million illnesses, 273,304 hospitalizations and 12,469 deaths.
By last Sunday, with body bags beginning to pile up, the rich and famous included, Trump had toned down the bombast. Federal social distancing guidelines would extend until April 30. But, then, he accused New York City’s healthcare workers of “worse than hoarding” surgical masks and other personal protective equipment.
“[Trump] cannot conceive a facility going from 20,000 masks to 200,000 with this pandemic,” Assistant Nurse Manager Sandra Brissett wrote on her Facebook page last Sunday. “He has not entered a hospital, skilled nursing facility or assisted living facility to make this assessment. No wonder staff were told to improvise and have ultimately paid the price with their very lives.”
In his “inspirational” Sunday message, Trump had not a word of gratitude for the sacrifices being made by healthcare workers. No apology for the supply chain problems that continue to endanger their lives. No explanation for the 17.8 tons of personal protective equipment shipped out by his administration to China in February, despite warnings from January about the looming crisis. No expression of condolences to the families of the dead.
Trump was quite vocal, however, about the “very good job” his administration will have done if the death toll is kept to 100,000. The White House task force now projects 100,000 to 240,000 deaths from COVID-19, even with full mitigation efforts like social distancing — more loss than in the Korean, Vietnam, Afghanistan and Iraq wars combined, in which there were 143,858 American casualties.
A very good job?
This is the magnitude of the coronavirus crisis, made worse by a disastrous lack of leadership. Trump brazenly declared recently that he takes no responsibility for the incompetent national response of his administration and the federal government. But Trump has blood on his hands! There will be no rewrite of history on my watch. No rewrite of the missing six weeks in which Trump failed to act after being made aware of the looming danger. Albeit in my zoomorphic visions, he is the epitome of the imbecilic ostrich with his orange-coiffed head digging deeper into the rabbit hole of fraudulent Norman Vincent Peale Power of Positive Thinking theology.
“It’s going to disappear,” Trump famously said, lording it over facts, science and critical thought. “One day — it’s like a miracle — it will disappear.”
A week later, April 5, the number of people infected has more than tripled to 311,637 cases; 8,802 people have died. Only 1.6 million people have reportedly been tested — 0.484 percent of the 330 million U.S. population.
COVID-19 is a wake-up call to the world. But for America, it is a reminder that a tiny but well-placed stone in the slingshot of the universe can bring even the mightiest giant to its knees — overnight. In 2016, we could identify Russia. This time, the force is unseen. The state-of-the-art weapons and military technology we have been financing for decades at the expense of healthcare and social services are no damn good in this “war.”
Never in recent decades has America witnessed as swift and shocking a deterioration in its economic outlook as within the past seven weeks. The 30-percent drop in the S&P 500 since its all-time high in mid-February is the quickest decline in its history — $10 trillion in shareholder wealth vanished. The first wave of unemployment claims are already at unprecedented levels and about to get worse. Federal Reserve Chairman Jerome Powell has attempted to convince the public that the central bank will not lack resources to support the economy, and that present conditions do not constitute a real economic recession. However, such voices are not bastions of trust. People are concerned now about their very survival.
Indeed, Americans are understanding what destabilization looks like, that it’s not some nebulous thing to commiserate about “over there.” These conditions — in which businesses are shuttered, there’s no income, basic commodities disappear from the shelves even as buyers line up to see what’s left, all the while worrying about how they will feed their families — are normal to many who show up at our borders. In fact, many come seeking asylum from countries in which America has been the “virus” of destabilization.
In many aspects, immigrants are ahead of the curve of most Americans. None of this is “unprecedented” where they come from. They understand that economic recovery will not be overnight. And they understand what’s coming down the pike when people line up around the block outside hunting shops in a massive surge to buy guns; it’s not just about protection from looters. Asian-Americans, for example, were buying weapons to protect themselves from potential racist attacks because, in Donald Trump’s America, immigrants — and Obama — are the scapegoats for all our ills.
This is the unspoken commentary of this crisis — how the veneer of civility may well disappear in the competition for scarce resources, how America is no longer isolated from the realities of the rest of the world. Anarchy could prevail — law enforcement and the National Guard are not exempt from COVID-19. Many of us in America who have lived through catastrophic events like hurricanes and earthquakes understand Carl Jung’s observation: “The psychology of a large crowd inevitably sinks to the level of mob psychology.” Americans see this every year at Walmart on Black Friday.
Doctors and nurses are already deciding who lives or dies. So are Trump and his “tremendous” team, according to how caches of emergency supplies are being dispatched. On March 27, Trump shamelessly said that states like Washington and Michigan whose governors were not “appreciative” enough of his efforts were on his “do not call” list.
This has consequences!
Meanwhile, renegade Republican governors in states such as Texas and Missouri still have no apparent urgency to enact statewide lockdowns despite a study, which revealed that social distancing would be “an unproductive measure” if adopted by less than 70 percent of the population. This all but guarantees a rollercoaster effect of containment and epidemic.
New data from Gallup suggests that, while 92 percent of adults are avoiding “events with large crowds, such as concerts, festivals or sporting events,” only 59 percent had adequately “stocked up on food, medical supplies or cleaning supplies,” which will undoubtedly require trips into public spaces.
The daily updates provided by Gov. Andrew Cuomo, D-N.Y., constitute a prescient reminder to other states of the tsunami on the way. On March 25, New York City EMS reportedly received 6,406 medical 911 calls — the highest volume ever recorded in the city — surpassing the record set on 9/11. At Elmhurst Hospital Center, a 545-bed public hospital in Queens, where the emergency room is filling up with more than 200 people at times and a refrigerated truck is stationed outside to help accommodate the dead, the scene has been described as “apocalyptic.”
And when we think that the axis of social evil around which far-right politics revolves couldn’t possibly steer any further into the fascistic abyss, somehow, it finds the gear. Last Monday, skepticism about the numbers of reported dead began circulating the Republican nuttery. The week before, it was a call for social Darwinism — the sacrificial deaths of seniors over 50 as a preferable state to an economic shutdown. Especially when cloaked in religious-speak by quacks like R. R. Reno, who asserts that the desire to save lives over the economy “reflects a disastrous sentimentalism,” it is a reminder that the Kool-Aid-drinking cults of death like that of Jim Jones only shape-shift.
This fuels consequences.
Can America Survive COVID-19 with Trump at the Helm?
In the recent bestseller, A Very Stable Genius: Donald J. Trump’s Testing of America, the short-lived former White House Communications Director Anthony Scaramucci reportedly asked Trump, “Are you an act?” Trump replied, “I am a total act, and I don’t understand why people don’t get it.”
Which, actually, isn’t true. The majority of American voters “got it” — from 2016. Trump exposed himself to be a capricious, vengeful little man who normalized hatred and lies. Former First Lady Michelle Obama famously warned, alluding to Trump, “The presidency doesn’t change who you are; it reveals who you are.” Mental health professionals also expressed grave concerns about Trump’s temperament; how he’d handle a genuine crisis threatening the nation. Now we know. His character flaws are a matter of life and death.
Donald Trump is accustomed to chaos of his own creation with human adversaries he can intimidate and confront. He’s not accustomed to riding shotgun with real experts in the driver’s seat, not accustomed to being unable to buy his way out of trouble, not accustomed to being unable to bully his way toward his ends, not accustomed to being judged solely by the content of his character. But he has been weighed, he has been measured, and he has been found wanting.
It is time for Donald Trump to resign. | https://donnakassin.medium.com/is-americas-veneer-of-civility-about-to-crumble-alongside-its-leadership-b04a9c73ced | ['Donna Kassin'] | 2020-05-20 19:18:53.898000+00:00 | ['Leadership', 'Election 2020', 'Politics', 'Covid 19', 'Coronavirus'] |
How Tweets Are Analyzed By Twitter With The Help Of Pig? | Pig Tutorial - Edureka
As I mentioned in my Hadoop Ecosystem article, Apache Pig is an essential part of our Hadoop ecosystem. So, I would like to take you through this Apache Pig tutorial, which is a part of our Hadoop Tutorial Series. In this Apache Pig Tutorial article, I will talk about:
Apache Pig vs MapReduce
Introduction to Apache Pig
Where to use Apache Pig?
Twitter Case Study
Apache Pig Architecture
Pig Latin Data Model
Apache Pig Schema
Before starting with the Apache Pig tutorial, I would like you to ask yourself a question: “While MapReduce was already there for Big Data analytics, why did Apache Pig come into the picture?”
The sweet and simple answer to this is:
approximately 10 lines of Pig code is equal to 200 lines of MapReduce code.
Writing MapReduce jobs in Java is not an easy task for everyone. Thus, Apache Pig emerged as a boon for programmers who were not good with Java or Python. Even someone who knows Java and is good with MapReduce will often prefer Apache Pig because of the ease of working with it. Let us take a look now.
Apache Pig vs MapReduce
Programmers face difficulty writing MapReduce tasks as it requires Java or Python programming knowledge. For them, Apache Pig is a savior.
Pig Latin is a high-level data flow language, whereas MapReduce is a low-level data processing paradigm.
Without writing complex Java implementations in MapReduce, programmers can achieve the same implementations very easily using Pig Latin.
Apache Pig uses a multi-query approach (i.e. using a single query of Pig Latin we can accomplish multiple MapReduce tasks), which reduces the length of the code by 20 times. Hence, this reduces the development period by almost 16 times.
Pig provides many built-in operators to support data operations like joins, filters, ordering, sorting etc. Whereas to perform the same function in MapReduce is a humongous task.
Performing a Join operation in Apache Pig is simple. Whereas it is difficult in MapReduce to perform a Join operation between the data sets, as it requires multiple MapReduce tasks to be executed sequentially to fulfill the job.
In addition, Pig provides nested data types like tuples, bags, and maps that are missing from MapReduce. I will explain these data types in a while.
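As an illustration of the brevity claim above, a join in Pig Latin takes only a few lines, where plain MapReduce would need multiple chained Java jobs. A minimal hedged sketch (the file paths and field names here are illustrative assumptions, not from the article):

```pig
-- Load two hypothetical data sets from HDFS, declaring a schema for each
users  = LOAD '/data/users'  USING PigStorage(',') AS (id:int, name:chararray);
tweets = LOAD '/data/tweets' USING PigStorage(',') AS (user_id:int, text:chararray);

-- A join that would require several sequential MapReduce jobs in plain Java
joined = JOIN users BY id, tweets BY user_id;
DUMP joined;
```

Pig's compiler turns this into the equivalent MapReduce jobs automatically, which is where the roughly 20x reduction in code length comes from.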
Now that we know why Apache Pig came into the picture, you would be curious to know what is Apache Pig? Let us move ahead in this article and go through the introduction and features of Apache Pig.
Introduction to Apache Pig
Apache Pig is a platform, used to analyze large data sets representing them as data flows. It is designed to provide an abstraction over MapReduce, reducing the complexities of writing a MapReduce program. We can perform data manipulation operations very easily in Hadoop using Apache Pig.
The features of Apache pig are:
Pig enables programmers to write complex data transformations without knowing Java.
Apache Pig has two main components: the Pig Latin language and the Pig run-time environment, in which Pig Latin programs are executed.
For Big Data analytics, Pig gives a simple data flow language known as Pig Latin, which has functionalities similar to SQL like join, filter, limit etc.
Developers who are already working with scripting languages and SQL can leverage Pig Latin easily, which gives them ease of programming with Apache Pig. Pig Latin provides various built-in operators like join, sort, filter, etc. to read, write, and process large data sets. Thus, Pig has a rich set of operators.
Programmers write scripts using Pig Latin to analyze data, and these scripts are internally converted to Map and Reduce tasks by the Pig MapReduce engine. Before Pig, writing MapReduce tasks was the only way to process the data stored in HDFS.
If a programmer wants to write custom functions which are unavailable in Pig, Pig allows them to write User Defined Functions (UDFs) in any language of their choice like Java, Python, Ruby, Jython, JRuby etc. and embed them in a Pig script. This provides extensibility to Apache Pig.
Pig can process any kind of data, i.e. structured, semi-structured or unstructured data, coming from various sources. Apache Pig handles all kinds of data.
Approximately, 10 lines of Pig code are equal to 200 lines of MapReduce code.
It can handle an inconsistent schema (in the case of unstructured data).
Apache Pig extracts the data, performs operations on that data and dumps the data in the required format in HDFS, i.e. ETL (Extract, Transform, Load).
Apache Pig automatically optimizes the tasks before execution, i.e. automatic optimization.
It allows programmers and developers to concentrate on the whole operation instead of creating mapper and reducer functions separately.
After knowing what is Apache Pig, now let us understand where we can use Apache Pig and what are the use cases which suits Apache Pig the most?
Where to use Apache Pig?
Apache Pig is used for analyzing and performing tasks involving ad-hoc processing. Apache Pig is used:
Where we need to process huge data sets like Web logs, streaming online data, etc.
Where we need data processing for search platforms (different types of data need to be processed), e.g. Yahoo uses Pig for 40% of its jobs, including news feeds and search.
Where we need to process time-sensitive data loads. Here, data needs to be extracted and analyzed quickly, e.g. machine learning algorithms that require time-sensitive data loads: Twitter needs to quickly extract data about customer activities (i.e. tweets, re-tweets, and likes), analyze the data to find patterns in customer behavior, and make recommendations immediately, like trending tweets.
Now, in our Apache Pig Tutorial, let us go through the Twitter case study to better understand how Apache Pig helps in analyzing data and makes business understanding easier.
Twitter Case Study
I will take you through a case study of Twitter where Twitter adopted Apache Pig.
Twitter’s data was growing at an accelerating rate (i.e. 10 TB data/day). Thus, Twitter decided to move the archived data to HDFS and adopt Hadoop for extracting the business values out of it.
Their major aim was to analyze data stored in Hadoop to come up with the following insights on a daily, weekly or monthly basis.
Counting operations:
How many requests does twitter serve in a day?
What is the average latency of the requests?
How many searches happen each day on Twitter?
How many unique queries are received?
How many unique users come to visit?
What is the geographic distribution of the users?
Correlating Big Data:
How does usage differ for mobile users?
Cohort analysis: analyzing data by categorizing user, based on their behavior.
What goes wrong when a site problem occurs?
Which features do users use often?
Search correction and search suggestions.
Research on Big Data & produce better outcomes like:
What can Twitter analyze about users from their tweets?
Who follows whom and on what basis?
What is the ratio of followers to following?
What is the reputation of the user?
and many more…
So, for analyzing data, Twitter used MapReduce initially, which is parallel computing over HDFS (i.e. Hadoop Distributed File system).
For example, they wanted to analyze how many tweets are stored per user, in the given tweet table?
Using MapReduce, this problem will be solved sequentially as shown in the below image:
Twitter MapReduce Example - Pig Tutorial
The MapReduce program first inputs the key as rows and sends the tweet table information to the Mapper function. The Mapper function then selects the user id and associates a unit value (i.e. 1) with every user id. The Shuffle function sorts the same user ids together. At last, the Reduce function adds up all the tweets belonging to the same user. The output is the user id, combined with the username and the number of tweets per user.
But while using MapReduce, they faced some limitations:
Analysis needs to be typically done in Java.
Joins need to be written in Java, which makes them longer and more error-prone.
For projections and filters, custom code needs to be written, which makes the whole process slower.
The job is divided into many stages while using MapReduce, which makes it difficult to manage.
So, Twitter moved to Apache Pig for analysis. Now, joining data sets, grouping them, sorting them and retrieving data becomes easier and simpler. You can see in the below image how twitter used Apache Pig to analyze their large data set.
Twitter had both semi-structured data like Twitter Apache logs, Twitter search logs, Twitter MySQL query logs, application logs and structured data like tweets, users, block notifications, phones, favorites, saved searches, re-tweets, authentications, SMS usage, user followings, etc. which can be easily processed by Apache Pig.
Twitter dumps all its archived data on HDFS. It has two tables i.e. user data and tweets data. User data contains information about the users like username, followers, followings, number of tweets etc. While Tweet data contains tweet, its owner, number of re-tweets, number of likes etc. Now, Twitter uses this data to analyze their customer’s behaviors and improve their past experiences.
We will see how Apache Pig solves the same problem which was solved by MapReduce:
Question: Analyzing how many tweets are stored per user, in the given tweet tables?
The below image shows the approach of Apache Pig to solve the problem:
Twitter Solution - Pig Tutorial
The step by step solution of this problem is shown in the above image.
STEP 1: First of all, Twitter imports the twitter tables (i.e. the user table and the tweet table) into HDFS.
STEP 2: Then Apache Pig loads (LOAD) the tables into the Apache Pig framework.
STEP 3: Then it joins and groups the tweet table and user table using the COGROUP command as shown in the above image. This results in the inner bag data type, which we will discuss later in this article. Example of inner bags produced (refer to the above image):
(1,{(1,Jay,xyz),(1,Jay,pqr),(1,Jay,lmn)})
(2,{(2,Ellie,abc),(2,Ellie,vxy)})
(3,{(3,Sam,stu)})
STEP 4: Then the tweets are counted per user using the COUNT command, so that the total number of tweets per user can be easily calculated. Example of tuples produced as (id, tweet count) (refer to the above image):
(1, 3) (2, 2) (3, 1)
STEP 5: At last, the result is joined with the user table to extract the user name along with the produced result. Example of tuples produced as (id, name, tweet count) (refer to the above image):
(1, Jay, 3) (2, Ellie, 2) (3, Sam, 1)
STEP 6: Finally, this result is stored back in HDFS.
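The whole pipeline can be sketched as a short Pig Latin script. This is a hedged reconstruction: the article does not give the exact script, so the HDFS paths and field names below are illustrative assumptions.

```pig
-- STEP 2: load the imported tables from HDFS
users  = LOAD '/twitter/users'  AS (id:int, name:chararray);
tweets = LOAD '/twitter/tweets' AS (user_id:int, tweet:chararray);

-- STEP 3: group tweets with users; each group carries inner bags of tuples
grouped = COGROUP tweets BY user_id, users BY id;

-- STEP 4: count the tweets inside each user's inner bag
counts = FOREACH grouped GENERATE group AS id, COUNT(tweets) AS tweet_count;

-- STEP 5: join back with the user table to attach user names
result = JOIN counts BY id, users BY id;
named  = FOREACH result GENERATE counts::id, users::name, counts::tweet_count;

-- STEP 6: store the (id, name, tweet count) tuples back in HDFS
STORE named INTO '/twitter/tweet_counts';
```

Note how the five MapReduce concerns (map, shuffle, count, join, output) collapse into one declarative data flow that Pig compiles into MapReduce jobs for you.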
Pig is not only limited to this operation. It can perform various other operations which I mentioned earlier in this use case.
These insights help Twitter to perform sentiment analysis and develop machine learning algorithms based on user behaviors and patterns.
Now, after knowing the Twitter case study, in this Apache Pig tutorial, let us take a deep dive and understand the architecture of Apache Pig and Pig Latin’s data model. This will help us understand how pig works internally. Apache Pig draws its strength from its architecture.
Architecture
For writing a Pig script, we need the Pig Latin language, and to execute it, we need an execution environment. The architecture of Apache Pig is shown in the below image.
Architecture of Pig - Pig Tutorial
Pig Latin Scripts
Initially as illustrated in the above image, we submit Pig scripts to the Apache Pig execution environment which can be written in Pig Latin using built-in operators.
There are three ways to execute the Pig script:
1. Grunt shell: This is Pig’s interactive shell, provided to execute all Pig scripts.
2. Script file: Write all the Pig commands in a script file and execute the Pig script file. This is executed by the Pig server.
3. Embedded script: If some functions are unavailable in built-in operators, we can programmatically create User Defined Functions in other languages like Java, Python, Ruby, etc. to bring in that functionality, embed them in a Pig Latin script file, and then execute that script file.
Parser
From the above image you can see, after passing through Grunt or Pig Server, Pig Scripts are passed to the Parser. The Parser does type checking and checks the syntax of the script. The parser outputs a DAG (directed acyclic graph). DAG represents the Pig Latin statements and logical operators. The logical operators are represented as the nodes and the data flows are represented as edges.
Optimizer
Then the DAG is submitted to the optimizer. The Optimizer performs the optimization activities like split, merge, transform, and reorder operators etc. This optimizer provides the automatic optimization feature to Apache Pig. The optimizer basically aims to reduce the amount of data in the pipeline at any instance of time while processing the extracted data, and for that, it performs functions like:
PushUpFilter: If there are multiple conditions in the filter and the filter can be split, Pig splits the conditions and pushes up each condition separately. Selecting these conditions earlier helps in reducing the number of records remaining in the pipeline.
If there are multiple conditions in the filter and the filter can be split, Pig splits the conditions and pushes up each condition separately. Selecting these conditions earlier helps in reducing the number of records remaining in the pipeline. PushDownForEachFlatten: Applying flattens, which produces a cross product between a complex type such as a tuple or a bag and the other fields in the record, as late as possible in the plan. This keeps the number of records low in the pipeline.
Applying flattens, which produces a cross product between a complex type such as a tuple or a bag and the other fields in the record, as late as possible in the plan. This keeps the number of records low in the pipeline. ColumnPruner: Omitting columns that are never used or no longer needed, reducing the size of the record. This can be applied after each operator so that fields can be pruned as aggressively as possible.
Omitting columns that are never used or no longer needed, reducing the size of the record. This can be applied after each operator so that fields can be pruned as aggressively as possible. MapKeyPruner: Omitting map keys that are never used, reducing the size of the record.
Omitting map keys that are never used, reducing the size of the record. LimitOptimizer: If the limit operator is immediately applied after a load or sort operator, Pig converts the load or sort operator into a limit-sensitive implementation, which does not require processing the whole data set. Applying the limit earlier reduces the number of records.
This is just a flavor of the optimization process. Over that, it also performs Join, Order By and Group By functions.
To turn off automatic optimization, you can execute this command:
pig -optimizer_off [opt_rule | all ]
Compiler
After the optimization process, the compiler compiles the optimized code into a series of MapReduce jobs. The compiler is the one who is responsible for converting Pig jobs automatically into MapReduce jobs.
Execution engine
Finally, as shown in the figure, these MapReduce jobs are submitted for execution to the execution engine. Then the MapReduce jobs are executed and give the required result. The result can be displayed on the screen using a “DUMP” statement and can be stored in the HDFS using “STORE” statement.
After understanding the architecture, now in this Apache Pig tutorial, I will explain Pig Latin’s data model.
Pig Latin Data Model
The data model of Pig Latin enables Pig to handle all types of data. Pig Latin can handle both atomic data types like int, float, long, double etc. and complex data types like tuple, bag, and map. I will explain them individually. The below image shows the data types and their corresponding classes using which we can implement them:
Pig Data Types - Pig Tutorial
Atomic /Scalar Datatype
Atomic or scalar data types are the basic data types which are used in all the languages like string, int, float, long, double, char[], byte[]. These are also called the primitive data types. The value of each cell in a field (column) is an atomic data type as shown in the below image.
For fields, positional indexes are generated by the system automatically (also known as positional notation), which is represented by ‘$’ and starts from $0, growing as $1, $2, and so on. Comparing with the below image: $0 = S.No., $1 = Bands, $2 = Members, $3 = Origin.
Scalar data types are − ‘1’, ‘Linkin Park’, ‘7’, ‘California’ etc.
Pig Latin Data Model - Pig Tutorial
Now we will talk about complex data types in Pig Latin i.e. Tuple, Bag and Map.
Tuple
The tuple is an ordered set of fields which may contain different data types for each field. You can understand it as the records stored in a row in a relational database. A tuple is a set of cells from a single row as shown in the above image. The elements inside a tuple do not necessarily need to have a schema attached to them.
A tuple is represented by ‘()’ symbol.
Example of a tuple − (1, Linkin Park, 7, California)
Since tuples are ordered, we can access fields in each tuple using indexes of the fields, so $1 from the above tuple will return the value ‘Linkin Park’. You can notice that the above tuple doesn’t have any schema attached to it.
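A small hedged sketch of positional access in a script (the file path and delimiter are assumptions for illustration):

```pig
-- Load band tuples shaped like the table above: (S.No., Band, Members, Origin)
bands = LOAD '/data/bands' USING PigStorage(',')
        AS (sno:int, band:chararray, members:int, origin:chararray);

-- $1 refers to the second field of each tuple, here the band name;
-- the named equivalent would be: GENERATE band;
names = FOREACH bands GENERATE $1;
DUMP names;
```

With the sample data above, each output tuple would hold just the band name, e.g. (Linkin Park).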
Bag
A bag is a collection of a set of tuples and these tuples are a subset of rows or entire rows of a table. A bag can contain duplicate tuples, and it is not mandatory that they need to be unique.
The bag has a flexible schema i.e. tuples within the bag can have a different number of fields. A bag can also have tuples with different data types.
A bag is represented by ‘{}’ symbol.
Example of a bag − {(Linkin Park, 7, California), (Metallica, 8), (Mega Death, Los Angeles)}
But for Apache Pig to effectively process bags, the fields and their respective data types need to be in the same sequence.
Set of bags −
{(Linkin Park, 7, California), (Metallica, 8), (Mega Death, Los Angeles)},
{(Metallica, 8, Los Angeles), (Mega Death, 8), (Linkin Park, California)}
There are two types of Bag, i.e. Outer Bag or relations and Inner Bag.
Outer bag or relation is nothing but a bag of tuples. Here relations are similar to relations in relational databases. To understand it better let us take an example:
{(Linkin Park, California), (Metallica, Los Angeles), (Mega Death, Los Angeles)}
This above bag explains the relation between the Band and their place of Origin.
On the other hand, an inner bag contains a bag inside a tuple. For Example, if we sort Band tuples based on Band’s Origin, we will get:
(Los Angeles, {(Metallica, Los Angeles), (Mega Death, Los Angeles)})
(California,{(Linkin Park, California)})
Here, the first field type is a string, while the second field type is a bag, which is an inner bag within a tuple.
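Inner bags are exactly what GROUP produces. A hedged sketch of how the sorted-by-origin relation above could arise (path and delimiter are assumptions):

```pig
bands = LOAD '/data/bands' USING PigStorage(',')
        AS (band:chararray, origin:chararray);

-- GROUP collects all tuples sharing an origin into an inner bag;
-- the first field is the group key, the second is the bag of matching tuples
by_origin = GROUP bands BY origin;
DUMP by_origin;
-- e.g. (California,{(Linkin Park,California)})
--      (Los Angeles,{(Metallica,Los Angeles),(Mega Death,Los Angeles)})
```

This is also why COUNT in the Twitter example works: it runs over the inner bag belonging to each group.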
Map
Map Example - Pig Tutorial
A map is a set of key-value pairs used to represent data elements. The key must be a chararray and should be unique, like a column name, so it can be indexed and the value associated with it can be accessed on the basis of the key. The value can be of any data type.
Maps are represented by ‘[]’ symbol and key-value are separated by ‘#’ symbol, as you can see in the above image.
Example of maps− [band#Linkin Park, members#7 ], [band#Metallica, members#8 ]
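A hedged sketch of loading and dereferencing a map field (the path is an assumption; the ‘#’ operator is Pig Latin’s map lookup syntax):

```pig
-- Each record here is assumed to be a single map field
bands = LOAD '/data/band_maps' AS (info:map[]);

-- '#' dereferences a key inside the map, returning the associated value
names = FOREACH bands GENERATE info#'band';
DUMP names;
-- e.g. (Linkin Park)
--      (Metallica)
```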
Now we have learned Pig Latin’s data model. We will understand how Apache Pig handles schemas as well as how it works with schema-less data.
Schema
Schema assigns a name to the field and declares the data type of the field. The schema is optional in Pig Latin but Pig encourages you to use them whenever possible, as the error checking becomes efficient while parsing the script which results in efficient execution of the program. The schema can be declared as both simple and complex data types. During LOAD function, if the schema is declared it is also attached to the data.
Few Points on Schema in Pig:
If the schema only includes the field name, the data type of field is considered as a byte array.
If you assign a name to the field you can access the field by both, the field name and the positional notation. Whereas if the field name is missing we can only access it by the positional notation i.e. $ followed by the index number.
If you perform any operation which is a combination of relations (like JOIN, COGROUP, etc.) and if any of the relations is missing schema, the resulting relation will have a null schema.
If the schema is null, Pig will treat the field as a bytearray, and its real data type will be determined dynamically.
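To make these points concrete, here is a hedged Pig Latin sketch (the file name and fields are illustrative assumptions):

```pig
-- With a schema: fields are typed and can be accessed by name or by position
a = LOAD 'bands.txt' AS (band:chararray, members:int);
b = FOREACH a GENERATE band;   -- access by field name
c = FOREACH a GENERATE $0;     -- access by positional notation

-- Without a schema: fields default to bytearray and only positional
-- notation ($ followed by the index number) is available
d = LOAD 'bands.txt';
e = FOREACH d GENERATE $1;
```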
I hope this Apache Pig tutorial article is informative and you liked it. In this article, you got to know the basics of Apache Pig, its data model, and its architecture. The Twitter case study would have helped you to connect better.
With this, we come to an end to this article on Pig.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Big data. | https://medium.com/edureka/pig-tutorial-2baab2f0a5b0 | ['Shubham Sinha'] | 2020-09-10 09:41:14.839000+00:00 | ['Mapreduce', 'Apache Pig', 'Big Data', 'Hdfs', 'Hadoop'] |
Fat Joe’s Story About R. Kelly Beating People Up In An Underground Chicago Fight Club | “As part of a promotional campaign for his The Darkside Vol. 1 album, which was released last year, the PR/Cuban rapper shot a series of webisodes called ‘Fat Joe’s Tales From the Darkside.’ For Part 3, he tells about the time he was visiting R&B; sensation R. Kelly in Chicago and how he didn’t believe Kellz when the crooner told him he was a real bona fide thug.”
— Awl pals Ego Trip being Ego Trip, they have once again unearthed an amazing bit of hip-hop to share with the world. | https://medium.com/the-awl/fat-joes-story-about-r-kelly-beating-people-up-in-an-underground-chicago-fight-club-a982d45fbc13 | ['Dave Bry'] | 2016-05-13 12:02:52.087000+00:00 | ['Storytelling', 'Boxing', 'Ego Trip'] |
Flutter — BoxDecoration Cheat Sheet | centerSlice Property
centerSlice is the same idea as a 9-patch PNG in Android Studio. It is a technique used to scale an image in such a way that the four corners remain unscaled, the four edges are scaled along one axis, and the middle is scaled along both axes.
The value of the centerSlice property is a Rect. We need to construct a rectangle from its left and top edges, its width, and its height. Let’s start by finding the size of our picture.
Width = 320 & the Height = 190
The class we need to use is
Rect.fromLTWH(double left, double top, double width, double height)
Our centerSlice is the green rectangle in the middle of the picture. To create it, we need to know the width of the orange rectangle, which becomes the left value, and the height of the purple rectangle, which becomes the top value:
Rect.fromLTWH(50.0, 50.0, double width, double height)
So we have told the Rect class to move 50 from the left and 50 from the top of the picture and to start drawing the rectangle from the yellow point marked in the picture above.
In the picture above, the width of the rectangle is 220 and the height is 90, so the final value should be:
Rect.fromLTWH(50.0, 50.0, 220.0, 90.0)
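As a quick sanity check, the rectangle’s values can be derived from the image size and the size of the unscaled corners (a sketch in Python; the 50-pixel corner inset matches the value used above):

```python
# Derive the centerSlice rectangle for a 320x190 image
# whose 50px corners must stay unscaled.
image_width, image_height = 320, 190
inset = 50  # width/height of the unscaled corner regions

left = inset                         # skip past the left corners
top = inset                          # skip past the top corners
width = image_width - 2 * inset      # 320 - 100 = 220
height = image_height - 2 * inset    # 190 - 100 = 90

print(left, top, width, height)      # 50 50 220 90
```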
The final code is
new Center(
child: new Container(
decoration: new BoxDecoration(
image: new DecorationImage(
image: new AssetImage('assets/images/9_patch_scaled_320x190.png'),
centerSlice: new Rect.fromLTWH(50.0, 50.0, 220.0, 90.0),
fit: BoxFit.fill,
)
),
child: new Container(
//color: Colors.yellow,
width: 110.0,
height: 110.0,
)
),
);
You can see that the four red squares are unscaled. Let’s give the child of the container more width and height.
Top 3 Prompts Addressing the Grief and Anger of Racism in America | Some of us have no trouble expressing ourselves on current events. For others of us, we may need some prompting and encouragement. These writers have made space for us writers to find our voices and share our hearts.
Marla Bishop’s prompt is part of The Bad Influence pub’s prompt series called Ideastream. Typically, the author gives a reflective introduction before sharing the prompts. Her piece is titled, Fighting for Our Lives, and the three prompts contained therein are all related to that theme.
She lives in London, England, and shares her heart about the racialized police killings on both sides of the pond. At a rally she attended, it took ten minutes to read out all the names of people of color killed by police — and that was not even a complete list.
According to Marla, we are all doing what we can — we are writers, so our task is to write. It may not seem like much, but as Kim McKinney reminded me this week, the pen is mighty.
That being said, the post contains three prompts to get our creative juices flowing — poetry, short prose/fiction, and finally, a limerick prompt that evokes the tragic killing of Breonna Taylor as she lay sleeping. Not your typical limerick topic. But then, these are not your typical times.
If you’ve had a lot on your heart, but have not been able to figure out how to say it, try this IdeaStream. It just might be the catalyst you’ve been looking for. Thank you, Marla.
David S. is the editor, curator, educator extraordinaire of a publication serious writers and poets should know about — Dead Poets Live. Even if poetry is not your thing, every writer can learn something from David’s jam-packed posts.
His prompts are more than a topic with words of encouragement. They’re a classroom, and often an art gallery rolled into one. He does his homework and brings treasures to us.
In his prompts, he lifts up a poet and one of their poems that speaks to him. He researches and explores — the poet, the poetry, their life and times. The posts are aptly illustrated, often with David’s own eye-catching paintings.
In this one, Prompt: My Country?, he’s chosen America Never was America to Me by Langston Hughes and included photos of Harlem in the 1930s.
We also get a lesson in dramatic storytelling from author Donald Miller. Three clear pointers David extracts from his reading and invites us to keep in mind as we read Hughes and explore whose country this is through the vehicle of poetry.
This post is listed as an 8-minute read. It doesn’t take 8 minutes to read the whole thing, given the photos. But you will want to spend at least that, likely more, poring over and absorbing the rich content here. Below is just one of the provocative questions David gives us to ponder while writing and reflecting. Thank you, David.
What does a poem “cost” me to write? How much am I willing to risk emotionally?
I hesitate to put one of my own posts here, but since it is a call for submissions to the publication Middle-Pause on this topic, I feel it qualifies. Like the other two, Calling all Writers, introduces the what and why — in this case, the creation of a safe place for women of all races to discuss their feelings and experiences.
There are some suggested questions for reflection and writing and a commitment to read, honor, and comment on what we receive. While we are off to a decent start, we want to hear from more people. We especially want to hear about the impact of the police killings and the Black Lives Matter protests on families. This includes, but is far from limited to, mixed-race families.
What has it been like to watch events unfold as a family? How are the kids and/or grandkids dealing with it? Have you had conversations about race, and what has that been like for them and for you? How are you supporting the young ones?
So there you have it — three posts inviting us to put words to our anger, our fears, our visions, and our nightmares on the topics of race, justice, police brutality vs. public protection, exclusion, and family. Of course, you choose or let your heart choose the format — prose, poetry, even limerick. We look forward to reading the results! | https://medium.com/top-3/top-3-prompts-addressing-the-grief-and-anger-of-racism-in-america-7e4bd944e234 | ['Marilyn Flower'] | 2020-06-26 00:18:53.284000+00:00 | ['Poetry', 'Writing Prompts', 'BlackLivesMatter', 'Race', 'Writing'] |
Dramatically Improve Any College Essay With These 5 Simple Tips | Are you just about to submit your academic essay, but the deadline is still a few hours away? Don’t submit yet!
Photo by Green Chameleon on Unsplash
Take a look at these five tips and see whether you can improve your essay within only a few hours.
I have been teaching philosophy at university for more than ten years now. I’ve been grading undergraduate essays for even longer. Writing an essay is hard work. Most students find it difficult to mold their ideas into an academic paper, especially during their first years when they are trying to figure out what the hell it is the professor wants.
You might have understood the material, done your research, even come up with thoughts of your own about the topic (woohoo!).
But is your essay actually an essay — or is it just an accumulation of related claims?
Here are some tips that can improve any essay. Yes. Any! Essay! Even the one about that topic you have never heard of before, because you slept through every single lecture! Here is what you need to do.
Photo by Victor Garcia on Unsplash
1. Arguments, it’s all about arguments
An argument is a unit consisting of a conclusion and reasons that are offered as support for the conclusion. You might think arguments are only relevant to philosophers, but I cannot think of an academic subject that requires students to write argument-free essays.
In fact, your textbooks and suggested readings are full of arguments. “The economy crash happened, because…”, “Ethical Relativism is an implausible view, because…”, “Shakespeare should be interpreted as being a feminist, because…” Look out for arguments in your reading material. Then present these arguments as clearly as possible in your essay. What is the conclusion and what reasons are given for believing it?
Presenting others’ arguments in your essay is tremendously important. But it is not enough.
Your professor will want to see that you can do more than reproduce the learned material. They also want to see that you can evaluate it; that is, critically engage with it.
Now that you established that and why Buddhists believe that there is no self, do you agree? Why do you agree? Why not? Your evaluation of arguments drawn from the expert literature is the heart of your essay.
Don’t skip it, don’t be vague, don’t stay superficial, don’t try to guess what your professor wants to read. Take a risk and argue your brains out.
Pretend you are a lawyer in a courtroom.
You don’t want the jury to just blindly believe your conclusion. You want them to arrive at the same conclusion. That’s why you talk them through the evidence. In your essay, the “evidence” are the reasons for why any maximally rational reader should agree with your conclusion.
One word of warning: be cautious with appeals to authority. The fact that Nietzsche/Durkheim/Trump/Oprah/50Cent/your mom says so, is usually not a good reason for why a maximally rational reader should subscribe to the conclusion. Unless it’s a conclusion about them. Then maybe.
Photo by Stock Photography on Unsplash
2. Avoid Dogmatism
This assumes that you took the first tip to heart and you are trying to critically engage with the material you have read.
Believe me when I, as a philosopher, say that nothing is ever obvious. Do not dogmatically state your conclusion.
Acknowledge reasons that may lead sensible people to believe the opposite.
Acknowledge experiences and observations we all make that might warrant a different conclusion. Be charitable towards your “opponent” and assume that they are rational and well-informed.
Present the most plausible version of their arguments and objections. If you’re playing with a weak opponent, then winning isn’t really an achievement, is it? Reply to these arguments and objections by explaining why they are not as convincing as they might appear at first. Stay humble and be respectful.
Photo by Priscilla Du Preez on Unsplash
3. Explain technical terms
Sure, your professor spent the last semester talking about “experiential avoidance”, “possible worlds”, or “accentual verses”, but these are technical terms.
They are terms that are defined in a particular way within your discipline and this definition might differ from the meanings these terms have in an everyday context.
In your essay, you will have to explain these terms rather than simply assume that your reader knows what they mean.
Of course, since your reader is your professor or a teaching assistant, they will know what the terms mean. But they want to see that you, too, know how these crucial terms are defined within your discipline.
And don’t underestimate the power of giving one or two examples to clarify a concept. Here is what I mean:
Experiential avoidance is “the attempt to escape or avoid certain private experiences, such as particular feelings, memories, behavioral predispositions, or thoughts”. Thus, you are engaging in experiential avoidance if you avoid writing your essay because doing so makes you doubt yourself. Or assume that whenever you try to argue for a particular position, thoughts and feelings of inadequacy arise in you. If this leads you to avoid arguing for your view, then you are displaying experiential avoidance.
By the way, since these are technical terms within your subject, you should never refer to Wikipedia, Merriam-Webster, the Oxford English Dictionary, or the Urban Dictionary in order to define them.
How many technical terms should you explain in your essay? It depends on the subject, topic, whether there are different definitions of the term floating around in your discipline, and whether you are writing an undergraduate or a graduate essay. One rule I would advise not to ignore is this:
you should at least explain any technical term that is mentioned in your title or in your essay question.
Being able to define technical terms also avoids one nasty pitfall: Jargonism. If you don’t know how to explain a word without using that same word, don’t use the word.
An essay is essentially a dialogue.
If your conversation partner doesn’t understand the language you are using, they will cut out. Save your monologue for your Youtube channel.
Photo by Stefan Cosma on Unsplash
4. Keep the focus
Yes, I know, you read all these articles and books and blogs and comments to blogs and now you have tonnes of thoughts in your head that just want out. But an essay is not the right place for it. (Maybe start your own blog?) You need to rigidly stay on topic. And I mean Rigidly! Capital “R”! Always ask yourself:
does this paragraph or even sentence directly support my answer to the essay question?
Do I need it to establish my conclusion or is it just interesting side noise? If the former, include it. If the latter, delete it. Right now!
If you are asked to set your own essay topic, think of your essay topic as a single question. For example, “Can I ever know that I am not in the Matrix?” Let your working title be that single question and let your writing be guided by the focus it creates. Avoid compound questions such as “Was Shakespeare a feminist and did he inspire other writers to become feminists?” Such a question will most likely result in an essay that lacks focus.
Don’t play detective in your essay, play lawyer. Tell your readers for what you are going to argue (introduction), then argue for it (main part), then tell them for what you have argued (conclusion).
Photo by Jordan Ladikos on Unsplash
5. Locate the question within the debate
You are writing an academic essay, not an opinion piece for your local newspaper. You are contributing to an already existing debate. Help the reader understand this debate.
Present the main positions that people in your discipline hold when it comes to this debate. Doing so will show your reader that you are familiar with the current state of this debate within your subject, but it also gives them a better sense of where in your own view falls.
Explain to your reader why the question you are attempting to answer in your essay is important —
not just to you, but to a wider audience, the academic discipline, maybe even the whole of society.
You don’t need to spend one-third of your essay on justifying, well, your essay, but adding a few sentences on motivation suggests to the reader that you are not just writing this essay because you are “told to do so”, but because you can see a real value in engaging with the topic.
Just to be clear, you should not explain why you are interested in the topic or why you are keen to find an answer to a particular question. The task is to highlight why it is in your discipline’s/your community’s/society’s/the world’s interest to come up with answers. | https://medium.com/swlh/dramatically-improve-any-college-essay-with-these-5-simple-tips-8d309a24a5e | ['Alexandra Sol'] | 2020-04-28 15:50:53.950000+00:00 | ['Education', 'Self Improvement', 'Essay', 'College', 'Writing'] |
Gangster or an Imposter. | The city of Mumbai had been reeling under a massive spurt in crime. Police had lost all the will power to enforce law and order. There were regular cases of shootouts between rival gangs. One of the gangs stood out in ruthlessness and sophistication to commit crime. It was the black cobra gang.
Rarely was any member of the gang ever convicted. There were rumors floating around the city that it had spread its tentacles into the government machinery and the judiciary and could influence any decision.
It had become more of a mafia providing protection to the industry and collecting money from them in return. This money was used to bribe judges and the government officials to get their work done.
There were reports in newspapers that one of the police officers had been raided by the Central bureau of investigation and crores of rupees were unearthed from his residence. He had properties in every part of the country.
On investigation, it was revealed that he was on the payroll of the black cobra gang and used to act as a conduit for their messages, often passing department secrets to them. Whenever a plan was made to arrest any gang member, he used to tip them off and they in turn would be alerted; hence they were never caught, and their ill-gotten wealth stayed hidden.
Every industry felt the presence of the gang, the film industry in the financing of movies, real estate, corporate houses, dance bars, drugs, kidnapping, illegal migration; you name any illegal work and its hand was involved. But they were so smart that they never used to be caught due to their influence.
Soon a black day arrived in the annals of the city. All the newspapers carried the headlines that the commissioner of police had been shot at and was fighting for his life in a city hospital.
He was considered to be one of the most honest and upright officers and since his transfer to Mumbai a year back; the gang had begun to feel the heat due to his persistent crackdown on their business and many of their gang members were being eliminated in encounters.
After a day it came to light that the younger brother of the police commissioner had gone missing and their father had died due to the resulting shock.
A meeting was being held on the outskirts of Mumbai in a big farmhouse. People had assembled around a table, all of them in black dress with the tattoo mark of a black cobra on their right arm.
Twenty persons were present in the room. They were all waiting for their boss, a man in his fifties. Soon a bald man arrived; he was in a black coat and tie. All the gangsters stood up as a mark of respect. He raised his hands and gestured for them to sit.
Quickly he took a seat at one end of the table. All the gangsters were quiet, eagerly waiting for him to begin. He was playing with a paperweight lying on the table, spinning it again and again. All of a sudden, he looked at one of the gangsters and spoke:
“Peter you were negotiating for the land of empire mills. Has the owner agreed to sell?”
“We were willing to pay Rs 200 million. But he is not willing to settle for anything less than 400 million. No amount of threats is working,” Peter replied.
“Anyone who does not listen to us has no right to live in this world. Just send our best trigger man to bump him off.” Christopher Sharma, the boss authoritatively ordered and Peter nodded in agreement.
He now turned towards Michael. “Has the film producer agreed to sell us the rights to the territories of his next movie? After all, we will gain a lot, as he is the best producer in town.”

“He was refusing, but when we took out the bean shooter he got the shock of his life and agreed to our terms,” Michael replied.
“And what about the heroine for the movie? I want that babe Sandhya to star in the movie.”
“He has agreed to all of our terms.” Michael replied.
“Very good Michael. We require more men like you in our gang.”
The boss now turned towards Cyril. He had joined the gang three years back and had become the blue eyed boy of the boss. His competence lay in dealing with the police and the government department. Any graft or dough that had to be passed onto the police or the government officials; he used to come in handy, as he had built good relationship with them.
“This police commissioner Vijay Kumar has become a pain in our ass. I don’t know how to deal with him. He doesn’t accept any bribe and doesn’t leave us in peace. He survived the attack on his life by our chopper squad. This has made him all the more determined, and he is after our skin.

The government has increased his security. His actions against our gang members have made him the darling of the public. I want peace in my business. Cyril, I wonder if you can work out a deal with the commissioner.”
“He is believed to be a man of principles. They say his father brought him up in such an atmosphere. He would prefer to die rather than bend before anyone. But your wishes are my command. I will try everything to bring him to the negotiating table,” Cyril replied, raising Christopher’s confidence.
“This is what I expect of you. I know you have never failed in your dealings. Let’s see how you deal with the situation. You can have all the money you want, but do try to bring him to the negotiating table. In the end, if he does not agree, we will have to eliminate him at any cost. This time there will be no mistake from our side. He will straightaway meet the gods, either in heaven or in hell. The only thing is, we don’t want enmity with the law; if we can turn them in our favor, so much the better.” Christopher completed his sentence.
“I understand boss” Cyril replied.
“I have not heard you play the flute for a long time. When I hear you play, it brings peace to my soul,” Christopher demanded with a smile.
Cyril took out a flute from his pocket and played the instrument for five minutes. All were spellbound and clapped after he stopped playing.
Soon the meeting was over and the gangsters dispersed after discussing all the things on their agenda.
Cyril now got busy with the work. He had to arrange a meeting of his boss with the police commissioner or either try to thaw the relationship between the police and the gangsters. All the resources were there at his disposal. He could use any amount of money that he required, but since the commissioner was considered to be incorruptible man money was of no use.
He began to try to reach the commissioner through a network of informers and began to impress on the police that a meeting would be good for the peace of the city.
Vijay Kumar agreed to meet Christopher, and a meeting was arranged at a five-star hotel. Being a punctual man, he was there 15 minutes before the agreed time. He had come with two of his trusted policemen. They were now waiting for Christopher, the gang boss. Music played in the background, and a dancer was performing to its beats.
At sharp 8:00 PM, Christopher entered the room and proceeded towards a table at the corner of the room. Reaching there he shook hands with the commissioner of police.
“It is my good luck to have met the commissioner of Mumbai police. I have become a great fan of yours after reading of your honesty and straightforwardness.”
“How can you be my fan? If you were my fan, you would not have ordered my shooting.” Vijay Kumar was blunt in his talk.
The blunt and honest remark hurt the boss where it mattered the most “Let bygones be bygones. We can become good friends. We have always had friendly relationship with police in the past. You can ask for any amount of money. It would be not a constraint. We can instantly make you a multimillionaire.
If you don’t like money we can always donate the money for noble cause; for the policemen who have laid their lives for the good of the society. But please don’t interfere with our work. All the previous police officers have had good relationship with us.”
“See Christopher, you can’t buy my honesty. I have come here to meet you and to warn you that you had better leave the city for good. Your nefarious activities have harmed the city. If you don’t want to land in jail, you have to give up your activities.” Vijay Kumar’s face had become very stern, and he displayed no emotions.
“What use is all the enmity? It will not benefit you in any way.” Christopher was taken aback with the sudden warning.
“Till the time I am here as a commissioner you will not be able to carry on with your nefarious activities. It is the last time I am warning you. Otherwise the next time you will be in jail.”
Christopher looked at the five men who had accompanied him. They suddenly took out guns, and one of them placed his at Vijay Kumar’s head. The other men placed their guns at the heads of the two policemen, even before they could react.
“I had come here to negotiate with you. But it appears that you don’t want to live in this world any longer. We have killed many policemen, and you would be an addition to my list. My friend! I am really sorry that we have to kill you. Anyone who disagrees with us has no right to live in this world.” Christopher now displayed a crooked smile.
But before the gangsters could do anything, Christopher found a gun pointed at his temple. He turned around to see; it was Cyril.
“Put the gun down. Why are you doing this? You are my most trusted man. Have I not done enough for you?” Some tension appeared on Christopher’s face as he talked.
“Boss, I have high regard for you. You are the best boss anyone can dream of. But you see, money is not everything in life. Relationships also have value. Ask the men to put down their guns, or you will not be alive to talk any further.”
Everyone now dropped their guns on getting the signal from Christopher.
“You would be wondering of what relationship I am talking about?” Cyril began to talk again.
“The police commissioner is my elder brother. After your men shot him, we made a plan to reach you, and I infiltrated your gang. I was keeping a watch on all your activities, and arranging this meeting was the last part of our agenda. Now you will spend the rest of your life in jail. If you want, I will visit often and play the flute for you.”
Suddenly one of the gangsters took out his gun, but Cyril was alert and shot him. He was wounded and fell to the floor. In all the commotion the boss saw a chance and tried to run away. Cyril shot at his legs, and he fell to the floor. The police soon rounded up all the gangsters.
Cyril had achieved success in his mission. Crime in Mumbai dropped sharply, and it became a peaceful city. Cyril, as usual, enjoyed playing the flute, with nothing to bother him.
What is your sponsorship department the best in the world at? | In sponsorship, we are constantly competing for advertising dollars. Every day not only do we go head to head with other sponsorship advertising options, but we battle against other forms of advertising.
Digital ads are a big part of this competition. Every time brands look at your package, they ask themselves, “Why is this package better than a targeted Facebook ads campaign?”
With this competition, unless we are able to massively differentiate ourselves, we will become commoditized. Our assets will be caught in a race to the bottom on price.
There is an answer here. And it comes down to focusing on a few items that will propel us to the next level of sponsorship value & revenue.
Using a model from the book Good To Great, we can position ourselves to drive maximum revenue and avoid becoming commoditized with our brands. I dive into how below.
The Hedgehog concept from the book Good To Great
First, if you haven’t read Good To Great by Jim Collins, it is a must-read. It will change your entire perspective on business.
In particular, though, there is a very important idea called the Hedgehog concept. At its core, it comes down to this description based on a Greek poem.
In this poem, a fox is trying to capture a hedgehog. The fox exhausts many options, constantly changing its core strategy to capture the hedgehog, without success.
The hedgehog holds a simple core strategy and rolls into a protected ball with each of the fox’s attacks.
Each time the fox stops its strategy and bounds back into the forest to strategize on a new way to capture the hedgehog.
At the end of the day, the fox’s strategies are scattered and diffused, moving on many levels and unsuccessful each time. The fox sees the problem in all of its complexity, pursuing many solutions at the same time.
The hedgehog, on the other hand, simplifies a complex world into one single organizing idea. A basic principle or concept that unifies & guides everything. It doesn’t matter how complex the world is…the hedgehog reduces all challenges and dilemmas to simple hedgehog ideas.
Ultimately, the hedgehog is successful in its goal (survival) and the fox is unsuccessful in its goal.
The Greek poet Archilochus wrote, “the fox knows many things, but the hedgehog knows one big thing.”
A core idea to guide your decisions is in essence the hedgehog’s strategy.
But how do we choose our north star, our hedgehog concept? Luckily, the book gives us a way to answer this.
Behold, the Three Circles of the Hedgehog concept.
Ok, so can we implement this in sponsorship? Let’s dive into these concepts individually.
What are you (your department) deeply passionate about?
I like to start with this concept because it comes back down a lot to culture. No matter what north star we choose, we need to have unwavering buy-in on that focus. This part of the equation is the heart of your department.
For your team, what are you deeply passionate about?
What idea can you get your whole department behind?
Does your department have a mission beyond increasing sponsorship revenue for your organization?
For many teams it’s community. Our fans & the community are our lifeblood; many times our team is the pulse & heart of our communities & cities.
Are we deeply passionate about our fans & the community they live in? If so, how? How do we show we are?
Pushing further beyond just our city and fans, what are the sub-sets that we are passionate about?
A focus on our Veterans? LGBTQ initiatives? Families? Early Education? Local Business?
The reason why passion comes into play here is if we know what drives our passions as an organization, we can funnel & focus that passion into action.
It helps us make decisions on our sponsorship prospecting strategy. If we have a passion for building up our local and small businesses, we can focus our attention on packages that serve them.
It helps us define the niche in which we will become fanatical about as we grow. If we can focus on that package, we can build assets and packages that are the best in the world for that customer.
And as a department, it helps us know why we wake up each morning. What overarching goal are we moving toward? How will this focus and passion build energy to come to work every morning around that goal?
If you don’t define this passion, the focus will turn to revenue.
In the long run, it is very hard to get a department to rally around “Our passion is driving revenue to increase the value of the club.”
That should be the result of our passion.
As we think about our department, we must come to a collective mission. Why does our sponsorship department exist? What mark will we leave? What results in 10 years from now will we be proud that we achieved?
When we interview a candidate for a sponsorship position, how do we know they will fit our culture if we don’t have a passion to rally around?
When we hire them, how do we know what direction to point them in?
It is a must to define what we are passionate about as a department. It gives our department a reason to wake up each morning. It gives us a tangible mark to hit (most local business accounts in the league and most dollars driven to them).
Most important, it gives us something that we can stand out on. We can be known for. This will help us compete against other advertising platforms.
What can we be the best in the world at?
This is a massive part that I think we fail at in sponsorship.
As competition for ad dollars heats up, if you can’t answer this question you will lose out on dollars. If we can’t answer this question…we become commoditized in the industry. We put ourselves into a place where we become good at everything…but not the best in the world.
The answer shouldn’t be ‘We are the best at sports sponsorship’. That definition is too broad. I would even argue that “we are the best in the world at reaching our fans” is an issue…
Do you reach your fans better than a third party like Barstool does? Can you prove that?
Do you reach your fans better than if the brand targeted your fans on social media?
This is why it is so important to define this as a department. It will open your vulnerabilities and blind spots.
By asking the question to your team “what are we the best in the world at?”, you start to see where your vulnerabilities are.
You also can understand where your strengths are. How do you reach your fans better than Barstool? What specific metrics, tangible and intangible, can you point to that make your sponsorship a better buy than other outlets?
Overall, it helps define the top reasons a sponsor should go with us. It creates a place where you can say “If you want to do [what you are the best in the world at], we are the place for you.”
For example, when I was selling restaurant ads in a traveler's hotel book, I was hit constantly with “What about this publication or Yelp ads?” from prospects.
And honestly, Yelp was better than us at reach and trackability.
But when I sold, I focused on what we were the best in the world at: Reaching visitors.
I would respond with “Look, the other publications are great for what they do, but there are over 3.2 Million visitors to Portland per year who spend over $15Bn on food. It’s a market you can’t afford to miss out on.
We’re the best in the world at reaching those travelers through our Portland Hotel Book. It’s in over 30,000 rooms across the city and a guide for visitors looking for where to eat.
So if you want to umbrella reach with some of the other publications…great. Absolutely spend there. But if you see the value in reaching the visitor market and want to drive those dollars to your restaurant…our publication is the best option to do so.”
If a restaurant told me they weren’t interested in reaching the travelers market (which most didn’t)…I moved on.
The beauty of knowing what you’re the best in the world at is the ability to understand and convey why it is a no-brainer to buy with you. There is no competition. We are the best. You can go with a competitor with marketing dollars…but if you do you are doing so because you don’t believe my reach and influence can help you drive sales.
It also saves your team and department a TON of time in the sales market. It allows you to build a prospect list of clients that are much more likely to close.
It helps us focus our energy on clients that we can overproduce for. When we get a “We’re not interested in reaching the visitor market,” you can move on to clients who are.
Understanding what you are the best in the world at helps you stand out, focus, and become more efficient as a company.
Ask your department, what are we the best in the world at?
If answered truthfully, this is the gateway to breaking through your plateau. No matter the size of your team, if you can find what you are best at, you will win more deals.
Answering this question as a department is the key…but it won’t be easy.
If you asked your department this…you’d probably get blank stares.
You might get some mumbles about creating great experiences for brands and their clients…but is that really what you are best in the world at? I would argue that most sponsorship departments would say the above.
What this question is really doing is asking us to define our total value. In all of the services that we offer…what do we do the best?
It is vital that we ask ourselves that question. The answer is the foundation of our growth.
Let’s run through how you may be able to funnel this down to an answer.
Say you start with “We are the best in the world at reaching sports fans in the (city area) area.”
If I were a brand manager, I would ask you to prove it. Are you the best at reaching sports fans in the area? Over ESPN? Over targeted Facebook ads to sports fans?
Can you prove that your in-stadium sign is better at reaching and influencing fan purchases than running an Instagram ad campaign?
Can we be the best in the world when put up against Instagram’s targeted ads?
As you can see, this exercise asks us to define why we are the best. We are defining how we stack up against other options in the same way our customers do when evaluating us.
As we define this, we can understand exactly what we are the best at, why/how we are the best at it, and this allows us to double down on our strengths.
It helps us stand out among other options. It also helps us understand where we take deals and how we structure and value our assets.
If we can define what we are best in the world at, we can build a plan to even more growth with focus.
I can’t stress this enough. Ask your department this, ask yourself this. It will help drive your value.
“What is this department the best in the world at?”
Last, what drives our economic engine?
This is the 3rd question to ask yourself because many of the answers to the above will drive this piece.
How do we make money? What is our business model? How do we generate revenue?
Can our team survive on selling signage? Will that asset drive enough revenue? Can we command a high price for it?
What is our portfolio of assets we offer? Can the team sustain with 90% physical assets that we offer?
Can we shave down to 4 BIG sponsors instead of 100 smaller sponsors?
As we understand what we are the best in the world at, we can understand how to value the assets we do the best.
If we are a small team, and we can’t drive enough revenue through physical assets….we need to look to other avenues to generate it.
Look at your product offerings, double down on the ones that are the most profitable, and understand what drives your economic engine. But don’t forget your passion and what you are the best in the world at.
As you see in the diagram, your strategy comes in the middle of all three. If your economic engine doesn’t sync up with what you are the best in the world at…it is doomed to fail.
If you can understand these three, you can then build a successful plan for your team.
Flip-flopping over multiple growth plans becomes a tiring task in the off-season. With so many opportunities in advertising, so many new competitors, how do we know which are the best for our team?
By answering these 3 questions, we can make more informed decisions on the opportunities and threats posed.
We can also understand what aspects of new platforms we can take advantage of.
For example, let’s say Tik Tok comes out and is pushing through the industry like wildfire. How do we understand where our sponsorship team fits?
If we are passionate about small business in our city, and we are the best in the world at connecting community members to those small businesses, then the strategy all of a sudden becomes a no-brainer.
Can we be the first team on Tik Tok to promote small businesses with a series of videos? Can we pioneer a series on the best tacos in town?
Imagine being able to go to restaurants in the city and selling them on a platform that showcases their food trackably to fans in the state as well as out. What if that video gets 100,000 views in a week? What value has that brought the sponsor?
My point here is we know that Tik Tok can get you 100,000 views on a video….but do we know how we want to harness that reach and turn it into value?
If we know what our passion is & what we are the best in the world at…it is YES every time. Instead of strategizing for weeks, we can have the platform fit our thesis.
But most importantly, we can instantly identify the benefits of a new platform or opportunity and execute on it quickly.
When we thoughtfully execute quickly on an opportunity, we win. This is the super-power result of understanding these 3 questions.
So, what is your department’s passion, what are you the best in the world at, and what drives your economic engine?
In my mind, any strategy not built on these pillars is one doomed to fail. We will be in a constant state of ups and downs…but more importantly, we will miss out on executing quickly on the opportunity.
If Blockbuster had defined this, they would have probably gone to home delivery earlier. Netflix knew what they could be best in the world at and positioned themselves to take over the market.
What’s best is they didn’t even have to compete directly with Blockbuster. They doubled down on what they were the best in the world at.
In sponsorship, we are in the same position. We can build a place where we don’t compete with digital ads because we understand what we are the best in the world at. We can create a strategy around utilizing our digital reach to put our assets into a category that doesn’t compare.
As I said before, the first step comes with asking the hard question to our departments and defining each.
It may take some time…but it will be a foundation for our growth. It will help drive our ship and help us make decisions faster.
More importantly, it will help us define why we exist as a department in a better way than “we drive revenue for our team”.
So what is your department the best in the world at? I’m excited to hear the different answers from our readers.
— — — —
Want more sponsorship insights delivered directly to your inbox weekly? Sign up for the Digitally Sponsored email newsletter by CLICKING HERE. | https://medium.com/sqwadblog/what-is-your-sponsorship-department-the-best-in-the-world-at-a5f0d7e30f47 | ['Nick Lawson'] | 2020-11-12 15:54:12.307000+00:00 | ['Sponsorship', 'Marketing', 'Sports Business', 'Sportsbiz', 'Sponsorship Marketing'] |
Mapping Classical Antiquity: Exploring the Ancient World through the maps created by ancient peoples
Lewis D’Ambra · Jul 11
Possible Invasions and Migrations during the Bronze Age Collapse
The aim of this publication is to explore, through maps, texts and historical accounts of expeditions, what the people of what we can loosely call Classical Antiquity knew of the world and how their knowledge of the world’s geography evolved, changed and grew over time. Through this we will see how antique civilisations interpreted this knowledge and how this knowledge in turn influenced their world view. The series of articles following this one will trace the story of knowledge of the world from the dawning of the classical age in ancient Mesopotamia, through the Greeks, Phoenicians and Romans, to the end of the classical world where so much of that accumulated knowledge was lost and forgotten. However, before we can dive into the detail of the evidence, we need to set the context and parameters of our period of exploration.
Dating Classical Antiquity is a difficult business; the period roughly covers the millennium from the re-emergence of civilisation after the Bronze Age collapse to the collapse of antique civilisation itself about 1500 years ago. However, the exact definition of the boundaries of this period is open to dispute.
Some historians place the start of Classical Antiquity at the end of the Greek Dark Ages[1], using the date of the first Olympic Games[2] in 776 BCE as a starting point. Other historians use the founding of Rome in 753 BCE[3] as its start, and others still will pick out different events to place its beginning. The only consensus is a start point in roughly the 8th Century BCE.
Romulus and Remus suckling a she-wolf
The end of the period is even more disputed: some historians use the fall of the Western Roman Empire in 476 CE[4] to mark the end of Classical Antiquity. Others place the end with the death of the Emperor Justinian in 565 CE[5], and yet others with the coming of Islam in the 7th Century CE.
For the sake of clarity, for this series we are going to use the first Olympic games in 776 BCE as the start point for our period and the fall of the Western Roman Empire in 476 CE as the end of Classical Antiquity. This allows a broad scope but with definitive boundaries not tied to the lives of individuals but geopolitical events with a clear start and end point.
In between these epoch marking dates, the world would see the rise and fall of Assyria[6], Babylon[7] and Persia[8], the conquests of Alexander[9] and the domination of Rome[10].
Alexander the Great
Throughout the period, ideas, knowledge, and religions would shift and change, fundamentally altering the face of civilisation. However, there would be a consistent and continuous streak running throughout as knowledge of the world was passed down and developed, linking Babylon to Greece and Persia to Rome.
Through the maps these civilisations drew and the explorations of important characters, events, texts, and cultures we can track this development and see the impact it had throughout this thousand-year period and beyond. Each of the following articles will focus on a map, a character or an important event or account aimed at following the story of knowledge of the world and its geography through Classical Antiquity. However, to begin to understand Classical Antiquity, we must go back to its beginning.
From about 1200 BCE the world of the Bronze Age entered a period that was violent, sudden and culturally disruptive; the great civilisations of the western world all but collapsed[11]. The empire of the Hittites vanished[12], the fractious Mycenaeans disappeared, taking their palace-based culture with them[13], Babylon was sacked[14], the Assyrians retreated[15], and a diminished Egypt[16] became isolated and wary of the outside world. Almost every city between Pylos and Gaza was violently destroyed and many were abandoned.
The Lion gate of Mycenae
What followed has been termed a dark age[17]. For several centuries, the world had to piece itself back together and adapt to the new realities that the Bronze age collapse brought with it. By the 8th Century BCE civilisation was on the rise once more, rediscovering and building on top of the half-forgotten knowledge of a glorious past, defining what they knew of the wider world and what they discovered in their own terms. Through the next thousand years this process would see civilisation grow to new heights and change beyond recognition, forging ideas and knowledge which still influence our own.
This publication is going to explore how Classical Antique civilisations understood their world, what they knew of far off lands and people, and what the way they depicted this knowledge can tell us about these societies. The first stop is ancient Mesopotamia, the heart of Classical Antiquity at the beginning of the period and the city of Babylon, a name which still resonates 2500 years later.
The ruins of Babylon
[1] Violatti, Cristian. “Greek Dark Age.” Ancient History Encyclopedia. Ancient History Encyclopedia, 30 Jan 2015. Web. 05 Jul 2020.
[2] Cartwright, Mark. “Ancient Olympic Games.” Ancient History Encyclopedia. Ancient History Encyclopedia, 13 Mar 2018. Web. 12 Jun 2019.
[3] Mark, Joshua J. “Ancient Rome.” Ancient History Encyclopedia. Ancient History Encyclopedia, 02 Sep 2009. Web. 12 Jun 2019.
[4] Wasson, Donald L. “Fall of the Western Roman Empire.” Ancient History Encyclopedia. Ancient History Encyclopedia, 12 Apr 2018. Web. 12 Jun 2019.
[5] Wyeth, Will. “Justinian I.” Ancient History Encyclopedia. Ancient History Encyclopedia, 28 Sep 2012. Web. 12 Jun 2019.
[6] Crabben, Jan V. D. “History of Assyria.” Ancient History Encyclopedia. Ancient History Encyclopedia, 18 Jan 2012. Web. 12 Jun 2019.
[7] Mark, Joshua J. “Babylon.” Ancient History Encyclopedia. Ancient History Encyclopedia, 28 Apr 2011. Web. 12 Jun 2019.
[8] Davidson, Peter. “Achaemenid Empire.” Ancient History Encyclopedia. Ancient History Encyclopedia, 11 Feb 2011. Web. 12 Jun 2019.
[9] Mark, Joshua J. “Alexander the Great.” Ancient History Encyclopedia. Ancient History Encyclopedia, 14 Nov 2013. Web. 12 Jun 2019.
[10] Mark, Joshua J. “Roman Empire.” Ancient History Encyclopedia. Ancient History Encyclopedia, 22 Mar 2018. Web. 12 Jun 2019.
[11] Podcasts, BBC. “The Bronze Age Collapse (In Our Time) — BBC.” Ancient History Encyclopedia. Ancient History Encyclopedia, 25 May 2019. Web. 12 Jun 2019.
Studies, Luwian. “The End of the Bronze Age.” Ancient History Encyclopedia. Ancient History Encyclopedia, 01 Jun 2016. Web. 12 Jun 2019.
[12] Mark, Joshua J. “The Hittites.” Ancient History Encyclopedia. Ancient History Encyclopedia, 01 May 2018. Web. 12 Jun 2019.
[13] Cartwright, Mark. “Mycenaean Civilization.” Ancient History Encyclopedia. Ancient History Encyclopedia, 24 May 2013. Web. 12 Jun 2019.
[14] Mark, Joshua J. “Babylon.” Ancient History Encyclopedia. Ancient History Encyclopedia, 28 Apr 2011. Web. 12 Jun 2019.
[15] Crabben, Jan V. D. “History of Assyria.” Ancient History Encyclopedia. Ancient History Encyclopedia, 18 Jan 2012. Web. 12 Jun 2019.
[16] Mark, Joshua J. “New Kingdom of Egypt.” Ancient History Encyclopedia. Ancient History Encyclopedia, 07 Oct 2016. Web. 12 Jun 2019.
[17] Violatti, Cristian. “Greek Dark Age.” Ancient History Encyclopedia. Ancient History Encyclopedia, 30 Jan 2015. Web. 12 Jun 2019. | https://medium.com/mapping-civilisation/mapping-classical-antiquity-29031058069f | ["Lewis D'Ambra"] | 2020-07-14 16:07:06.540000+00:00 | ['Maps', 'Mapping', 'History', 'Ancient History'] |
Side Gigs That Will Make You Money as a Programmer in 2020 | 1. Freelancing
Freelancing is a great way of making some side income while still holding down your day job. Or perhaps, if conditions are right, it could potentially become a full-time job. Though freelancing sounds great on paper, it requires a lot of discipline and effort to find clients and projects. One thing you need to keep in mind is freelancing only works if you possess the ability to self-regulate. If you lack this ability, then honestly, there is no point in starting down the freelance path. Instead, a nine-to-five job would be best suited for you as it provides order and structure.
Without digressing and regurgitating the same content you’ll find online, I want to give you a different take on what has worked for me. One thing I learned early on is that you shouldn’t follow the herd. Carve your own unique path that sets you apart from the rest of the pack.
Platforms like Upwork or Fiverr offer a lot of opportunities but can easily turn into a perpetual rabbit hole you might not want to go down. The rates are pretty low if you don't have a name, so I would only recommend them if you just want to dip your toes into the water for the first time or are satisfied with a little bit of additional income. This, of course, doesn't mean you can't be successful on them.
A better strategy would be to work on your LinkedIn profile, contact recruiters and past clients from your network, go to conferences and meetups, and look out for platforms that match up remote workers with companies.
Take advantage of and leverage Facebook groups. There are dozens of Facebook groups that are designed specifically for freelancers. Programming-based groups are other places that teem with opportunities for freelance work. These are probably a better option than sourcing leads in places like Upwork and competing with C-grade, $5 programmers. Facebook groups also allow you to showcase your work and garner support, make friends, and expand your network and connections. Below is a list of groups to get you started in your pursuit of finding that extra dollar.
There is no shortage of Facebook groups you can join and explore. Go ahead and type “remote jobs” or “programming jobs” — or be creative about your search terms. You’ll be surprised at how many groups there are.
Here is some professional advice from one software developer to another: Do not — I repeat, do not — burn your bridges when you leave your current day job! There are exceptions, of course, to this rule. Sometimes it’s ethical to burn bridges, e.g., when your boss is a horrible manager who treats employees like dirt and rips off your customers or deliberately acts in ways that go against your ideals and principles.
In the event that you have nothing against your employer, I would urge you to keep in contact and plant little seeds to keep doors open once you resign. Seed planting is the idea that you periodically reach out, not seeking work but to maintain that ongoing rapport. Let your former employer know once in a while that you’re open to new creative pursuits. As long as they have that idea, it's easier to ask for freelance work in the future. It does help if prior to leaving your previous job, you excelled and built a reputation as the go-to guy.
I encourage you to take some time out to read my other piece on hard truths about being a freelance programmer. It’ll help you navigate this uncharted territory. | https://medium.com/better-programming/side-gigs-that-will-make-you-money-as-a-programmer-in-2020-9124760f3c8 | ['Timothy Mugayi'] | 2020-02-12 19:21:21.367000+00:00 | ['Python', 'Freelancing', 'Programming', 'Software Development', 'JavaScript'] |
The One Where We Stay in Touch
Agnes · Feb 15
Photo by KaLisa Veer on Unsplash
There’s a version of us where we stay in touch.
There’s a version of us where your voice notes are my favorite podcasts: long, winding, whimsical, nonsensical recounts of your day, your take on Netflix series, Matcha smoothies, people who don’t know how to walk on the street.
There’s a version of us where you text me right after I write a blog post with your favorite lines, the typos I missed and the Grammarly corrections I ignored.
There’s a version of us where every Instagram story sparks a conversation, where you share screenshots of your Bumble matches and your boss’ emails and the list of terribly touristy things you are preparing for when I come to visit.
There’s a version of us where we visit each other.
There’s a version of us where we have 5 AM conversations: I eat breakfast and you eat lunch and we pretend that we are still chatting about work over avocado toast at our favorite cafe.
I’m sure of it.
Every birthday I don’t hear from you, I think of our other selves. Our better selves, going out on a Monday night because you insisted on starting your birthday on the dance floor.
Every promotion you don’t tell me about, I think of our better selves clinking glasses at the fancy sky bar we save for special occasions. Cheers.
Every holiday I go on, I see the version of us who could never agree on what to pack except for comfy shoes and a sexy black dress. I always pack too little, you always pack too much. I see a version of us where we are still sharing suitcase space.
Every failed date I go on, I imagine the version of us where I tell you over lunch and you blush at everything I say because you can tell the waiter is listening.
Every successful date I go on, I think of what you’d make of him. Would you laugh at his geeky jokes? Would you roll your eyes? Would you like him? What would you talk about when you met? what would he say? what would you say?
There’s a version of us where we don’t talk anymore.
There’s a version of us where you never leave.
There’s a version of us where I go with you and another where I leave and you stay.
There’s a version of us where we stay in touch.
That’s the one I like. | https://medium.com/medusas-musings/the-one-where-we-stay-in-touch-12e42f9938ed | [] | 2020-02-18 17:11:38.795000+00:00 | ['Friends', 'Writing', 'Friendship', 'Relationships'] |
Weekly Writing Prompt: Resistance | Weekly Writing Prompt: Resistance
All change faces Resistance, it’s part of the battle
Photo by Kevin Hellhake on Unsplash, cropped by author.
Resistance comes in many forms. These inner battles on occupied land take many shapes.
From Steven Pressfield’s artistic resistance, where we are our own worst enemies pushing back against our own dreams.
To the famous WWII French resistance immortalised in film as well as one of my favourites Allo Allo.
Then the most famous resistances of all — the Rebel Alliance. Fighting against the oppressive, Third Reich inspired, Empire in Star Wars.
Even retreating from war and coming back down to Earth, there’s plenty of resistance too. In our companies, there’s enough resistance to spawn a whole industry of change managers.
In our societies, we resist tyranny from within. From terrorists to freedom fighters. It’s usually victory that ensures the historical upgrade.
Resistance often appears regardless of the change itself. Inertia is ever-present in people, societies and the physical world. The universe itself resists all change to a degree.
But Resistance can be a good teacher. Disagreement is often healthy, and the resulting diversity is fertile ground for creativity.
You have 50 words to make us feel this inner battle
It’s up to you if the change is inside the person, in their society, organisation or the physical world.
Let’s make it more challenging this week, you cannot use the word resistance in the title or the text.
Remember the first story accepted gets featured for the week.
Viva La Résistance! | https://medium.com/microcosm/weekly-writing-prompt-resistance-1599e70b206c | ['Zane Dickens'] | 2020-12-25 16:12:42.653000+00:00 | ['Writing Prompts', 'Flash Fiction', 'Writing', 'Resistance', 'Microcosm'] |
Anxiety is a Smoldering Flame That Can Easily Ignite | Anxiety is a Smoldering Flame That Can Easily Ignite
How can we control the worries of the unknown and manage our anxiety?
Anxiety is no joke. It is the one emotional challenge that probably everyone has at one time or another, and at varying levels. Every individual who has an anxiety disorder has their own unique triggers that cause a flame to ignite, sparking the anxious path to an inferno. I have been dealing with anxiety since my childhood, without ever realizing it. As an adult I know it’s there, and am working on managing it.
Photo by Adam Wilson on Unsplash
The Silent Fire
The thing about anxiety that most people have in common, is that it’s a silent affliction. Anxiety doesn’t allow you to voice your feelings or the words that match what your body is going through. In many cases, no one else knows if someone is having an anxiety attack, because it is a subtle, paralyzing struggle that typically creates an inability to speak out.
During the highest points of my anxiety, and my worst days, my meltdowns were in private. I tend to go to my car, or to the washroom, and try to compose myself before anyone notices that I am struggling. This can be helpful if you are someone who is able to manage the illness; however, if you are unable to cope, it can make the anxiety elevate to new levels.
Think of a time when you were in a situation that created your mind to spiral into anxious thoughts. What triggered it? Could you manage it, or did it take a dangerously worse turn?
I have had both happen, personally.
During a visit to Walmart a few years ago, I had a full-on breakdown. I had been quietly shopping mid-morning, with a basket on my arm. I had been filling it with necessities, like deodorant, toothpaste, and a few odds and ends. I needed some large containers to sort my pantry out at home, and found myself staring at multiple shelves of various types of bins.
Before I knew what was happening, the lights in the store seemed blinding, the music from the speakers sounded distorted and overbearing, and the voices of the few people around me seemed like they were in my ears, echoing.
Without thinking, I put the basket on the floor, and ran out of the store, as fast as I could. I sat in the driver's seat of my car and shook, before breaking down into bone wracking sobs. This was not my finest moment. My flame of anxiety burst into an inferno that I lost control of.
No one would have guessed what was going on with me. Even passersby, had they seen my crying, would never have known what just transpired. Hell, I didn’t even know what had happened. It was all just too much for me to handle.
Photo by Priscilla Du Preez on Unsplash
In Canada, it is estimated that 1 in 5 people suffer from anxiety.
In the US, it affects 40 million people per year.
These are extremely alarming statistics, and throughout 2020, more and more people are developing high anxiety due to the pandemic and the state of our economies.
The 6 Types of Anxiety
According to this website, as well as Beyondblue.org, there are 6 major types of anxiety disorders:
Maybe one or more of these seem familiar:
Separation Anxiety Disorder- Often children have this disorder when they are away from their parents. When adults have this type of anxiety it is typically because they fear being alone, and they rely heavily on their spouse, partner, or family members to function normally. Pets will often show signs of this type of anxiety, particularly when they are new to their homes, or if they have suffered trauma.
Often children have this disorder when they are away from their parents. When adults have this type of anxiety it is typically because they fear being alone, and they rely heavily on their spouse, partner, or family members to function normally. Pets will often show signs of this type of anxiety, particularly when they are new to their homes, or if they have suffered trauma. Specific Phobia- This is the type of anxiety that horror movies base their premises on. This type of anxiety is the fear of encountering a specific fear. Examples of this are spiders, snakes, hurricanes, clowns, or even people you deem as a threat or an enemy. Many people have specific phobias of a variety of things, like enclosed spaces, or heights, but when it is mixed with an anxious mind, it can become paralyzing.
This is the type of anxiety that horror movies base their premises on. This type of anxiety is the fear of encountering a specific fear. Examples of this are spiders, snakes, hurricanes, clowns, or even people you deem as a threat or an enemy. Many people have specific phobias of a variety of things, like enclosed spaces, or heights, but when it is mixed with an anxious mind, it can become paralyzing. Social Anxiety Disorder ( Social Phobia )- This is one of the most common types of anxiety. Having Social Anxiety is what causes you to feel awkward, or even invisible around other people. When you have social anxiety, the fear is that people will judge you, dislike you or not notice you. Most people with anxiety disorders struggle with social situations often. The fear of this type of anxiety is the perception of how others think of you. More often than not, a socially anxious person will cancel plans and gatherings, for fear of being uncomfortable.
( )- This is one of the most common types of anxiety. Having Social Anxiety is what causes you to feel awkward, or even invisible around other people. When you have social anxiety, the fear is that people will judge you, dislike you or not notice you. Most people with anxiety disorders struggle with social situations often. The fear of this type of anxiety is the perception of how others think of you. More often than not, a socially anxious person will cancel plans and gatherings, for fear of being uncomfortable. Panic Disorder- Panic disorder has an ironic twist. This is the fear of having panic attacks. People who suffer with this, base their fears on their experiences of having panic attacks in the past. They fear that they will begin to have a meltdown because they become embarrassed by the vulnerability, and what happens to their appearance if an attack should occur. In some cases, a panic attack could cause extreme shaking, sweating, or even wetting or soiling your pants. When someone with anxiety has endured panic attacks that cause themselves embarrassment, it becomes one of their greatest fears.
Panic disorder has an ironic twist. This is the fear of having panic attacks. People who suffer with this, base their fears on their experiences of having panic attacks in the past. They fear that they will begin to have a meltdown because they become embarrassed by the vulnerability, and what happens to their appearance if an attack should occur. In some cases, a panic attack could cause extreme shaking, sweating, or even wetting or soiling your pants. When someone with anxiety has endured panic attacks that cause themselves embarrassment, it becomes one of their greatest fears. Agoraphobia- This could very well be the most common disorder of 2020. This is the fear of going outside the safety of your home. With COVID-19 fears, this is one of the more understandable and relevant disorders. I have suffered from this myself, being told to stay home and fearing the germs of others, in the outside world. This disorder can cause further complications such as hoarding, weight gain, eating disorders, and other health issues. Agoraphobia tells the anxious person that it is safer and better to stay in your home because the world is a scary place, which could lead to panic attacks.
This could very well be the most common disorder of 2020. This is the fear of going outside the safety of your home. With COVID-19 fears, this is one of the more understandable and relevant disorders. I have suffered from this myself, being told to stay home and fearing the germs of others, in the outside world. This disorder can cause further complications such as hoarding, weight gain, eating disorders, and other health issues. Agoraphobia tells the anxious person that it is safer and better to stay in your home because the world is a scary place, which could lead to panic attacks. Generalized Anxiety Disorder- This disorder is a combination pack of any or all of the above. The predominant component of GAD is WORRY. You worry all the time, about things like money, disasters, family members, friends, what people think of you, and other fears that are triggered by random day to day events or situations. It is extremely common for GAD to cause or be accompanied by depression, because the worrying strips you of your ability to be positive or happy.
Photo by Melanie Wasser on Unsplash
What Anxiety Feels Like
When you are in a situation that causes anxiety, a few things happen to your mind and body, most likely before you even realize it. At first, you just feel annoyed or irked by something that has triggered you. It could be noise, an odor, a word, an environment, or simply a thought. It could be chaos in an environment, or silence at home that sends you into an anxiety attack.
You begin to feel agitated, and you may feel trembling or an increased heart rate, or both. Often this is followed by a feeling of being too hot, or sometimes too cold. You lose the feeling of comfort before you begin to understand what is happening. It is also important to note that not everyone feels the same during an anxiety attack, or experiences an anxiety disorder in the same way.
Internally, your brain begins to match the racing of your heart, and thoughts begin to process rapidly through your brain. You feel the Flight, Fight or Freeze response ignite, before you look around to see if anyone else is noticing that you have been triggered. Sometimes you become short of breath and feel like you are panting or suffocating, and you struggle to calm yourself down. You might begin to feel dizzy, and your palms sweat as you feel panic trying to seep in.
Some people feel weak or tired, while others feel like a surge of energy is zapping them from within. You are unable to focus or concentrate, everything around you seems distorted, too light or even too dark, and you begin to worry. There may be feelings of impending doom, or in some cases, you fear everything around you. This is when you make the decision to stay, frozen, and deal with it, or feel the need to run. Some people who have intense anxiety will fight their way out of situations if people are too close to them or impede their escape path.
Once the anxiety has sparked the fire, it becomes critical to determine the trigger and the emotions that have flared. Is it fear? Worry? Stress? Judgement? The environment? What was the moment that caused this feeling?
Before the fire rages out of control inside you, try to take some deep breaths, sitting in a quiet spot, and remind yourself that you are safe. If you need to cry, then cry. If you feel the urge to yell and scream, I encourage you to try to let it out, but go somewhere private, or be around trustworthy people. As you breathe through it, try visualization, like imagining a safe, beautiful, peaceful place. Sip some water to stay hydrated and to let your mouth feel something; this will distract your brain. There are more tips and tricks for working through the worry and fear that inflame your inner fire. You can read more about these strategies here.
Some people with anxiety have even advised eating a salt packet, or sugar packet, or drinking something fizzy, like soda, to engage your taste buds into distraction. Most people find this effective. Others will wear an elastic band around their wrist and snap it to trigger themselves back to the present with a bit of pain. I don't recommend this technique, personally, but others find it works for them.
Once you begin to calm down, take a moment to look around you, and intentionally absorb where you are to become grounded and present again. This will help your focus come back, and keep your breathing steady. Look at the cars or trees near you. Watch people walk by, and feel the seat beneath you. Take the time you need to recompose.
After you have had time to regroup, try to talk it out with a trusted person or a therapist, or at least make a point of journaling what occurred. It is super important to get a grasp on what triggers your anxiety in order to understand and manage it. If you are unable to comprehend what is happening, the coals that smolder away inside of you can grow into a blazing, out-of-control fire, spiraling you into depression, addiction, or even suicidal thoughts.
Photo by JJ Ying on Unsplash
My Anxiety is Like an Invisible Chain
I cannot tell you how many events I have missed in my lifetime due to my anxiety. I have missed family and high school reunions, trips, concerts, shows, dates, dinners with friends, and on and on. It is like I am chained to the safety of my house.
Why? Because of fear and worry, that’s why. Because of made-up scenarios in my anxious head, and because of the crippling impending doom of my own self-esteem and value. It is ridiculous how soul-crushing this illness can be.
One very valuable lesson I have learned with mental health is this:
Anxiety is rooted in your future. Depression is rooted in your past.
Read that again, and think about it.
Depression comes from trauma or struggles that you have experienced and carry with you. If you are unable to let go of the emotional side effects of trauma, you cannot fight through depression in a healthy manner.
With anxiety, you fear what the future holds. Often, this is partnered with the experiences from your past, that have caused depression. Anxiety is the thought process of predicting the future.
For example, “I cannot attend that party because everyone will laugh at me”.
“I am unable to go to the store because the last time I went, I had a meltdown. I know it will happen again”.
This is a future prediction, and we don’t actually know how true it is, or how things will unfold for us, until we do it. Anxiety, however, has other plans for us. Anxiety will keep us from doing things that are uncomfortable or potentially damaging to our value or self-esteem. Anxiety draws pictures in our minds of what situations or events will look like, making worst-case scenarios come alive, so that we avoid doing things in order to feel emotionally, or in some cases physically, safe. We honestly learn to use anxiety as a crutch to keep us in comfort zones that pose no risks.
The thing about fear and worry, however, is that it’s not REAL. Just because we think it “might” happen doesn’t mean it will. In fact, because we come up with worst-case scenarios in order to avoid taking part in events and situations, we “should” be prepared for anything. Instead, the scenarios scare us into freezing and avoiding plans and commitments.
Photo by Kunj Parekh on Unsplash
F.E.A.R
In working with my therapist, I have learned an acronym for fear, and it fits anxiety perfectly:
False Evidence Acting as Reality
Or… False Evidence Appearing Real
How accurate is that?
Isn’t that what anxiety is, in a nutshell? It is false evidence that we believe is the reality of situations. We get the evidence from the worries in our brains, based on experiences or on what we imagine will happen, and it becomes the reality that keeps us from partaking in uncomfortable environments. But it’s false.
Knowing this has helped me tremendously, as far as going out and being uncomfortable in situations goes. I remind myself, if I have a visual of the worst-case scenarios, or the false evidence, that it’s not real. It’s not fact.
That typically helps me get out the door.
Photo by Vladimir Fedotov on Unsplash
My Fire Still Burns
I still have anxiety daily. This world has maxed out my anxious brain with all the new fears and the continual pressure to avoid people. You would think that someone like me would be relieved not to have to go out into stores and businesses, and that being encouraged NOT to go out would help, yet it fuels the anxiety flames even more.
I am medicated now, after discussions with my therapist and physician, and it “seems” to help somewhat. However, the thought of going out to stores with restrictions, and the potential for standing in lines to get in, along with the fear of COVID itself, breathes life into the fire that is my anxiety.
The best advice I can offer is to find ways to keep your anxiety from igniting into a full-blown blaze. Learn to take deep, cleansing breaths and count backward from 100 if you have to. Take the time to understand what triggers your worries and fears, and try to find solutions for worst-case scenarios before you step out the door. Meditate, work out, and get enough sleep at night. It all helps.
Each step is a step. Each day is a new day. Stay present as much as you possibly can by purposefully absorbing your surrounding environment. Don’t use your past or false evidence as a guide to what you “might” encounter.
One day at a time, one step at a time, and one new uncomfortable moment at a time.
Snuff out the flames of your anxiety, and hopefully, over time, it will stop smoldering in the pit of your gut.
Source: https://medium.com/publishous/anxiety-is-a-smoldering-flame-that-can-easily-ignite-50cbefcc16b3 (Kristina H, 2020-12-06; tags: Stress, Anxiety, Self, Mental Health, Life)
Just-in-Time vs. Just-in-Case Learning
Are you still learning like you were taught in school?
Just over a year ago I got the idea to catalog some of my thoughts on self-development and learning via this blog. I had never written anything publicly before and the last time I wrote something longer than a thank-you note or email was during my high school English class.
I was unsure how to start a blog or what it should look like. I didn’t even know if I had anything worthwhile to say. So, naturally, to kick-start this process, I began reading up on the subject. I read numerous blogs to pick apart what I liked and didn’t like about their style and content. I read books on writing and how to start an online business. I watched YouTube videos on how to build a WordPress site. I consumed any piece of relevant information I could get my hands on.
Over the next few months, I read tens of thousands of words related to this subject. Many hours were spent trying to learn how to start a blog. But can you guess how much time was spent on writing my first post? You guessed it — zero. All of this time spent consuming information to provide myself with a false sense of understanding of how to write a blog when all I needed to do was create a WordPress site and just start writing.
Just-in-Time vs. Just-in-Case Learning
Just-in-Time
There is a common term in the manufacturing world known as just-in-time manufacturing. Developed by Toyota in Japan in the 1960s and 1970s, it became a major competitive advantage in the automotive industry. Instead of following the industry standard of just-in-case manufacturing, whereby automotive manufacturers maintained large inventories of materials requiring expensive warehousing and additional labor, Toyota built its cars using a made-to-order process. This process allowed them to save millions of dollars in inventory costs which were then pumped into their manufacturing processes to speed up production.
By solely focusing on building cars in demand, Toyota was able to dominate the market by having low overhead, short lead times and the ability to adapt quickly to new market trends. Meanwhile, the competition was left behind with inventory they couldn’t sell.
This just-in-time model can also be applied effectively to how we learn. This involves putting all of your resources (in this case, your time and attention) into learning a skillset in demand. By focusing on developing one skill at a time, you speed up your learning process giving you an eventual competitive advantage in the skill-driven economy. However, this type of learning does not come naturally since most of us have been conditioned our entire upbringing to learn via the just-in-case model.
Just-in-Case*
The just-in-case model is how most of us were taught in school. You’re fed a wide variety of information spanning multiple subjects with the intention of some of it proving useful in the future. This type of learning worked well in a classroom environment where the information taught was soon reflected on the test to gauge understanding. Of course, most of the information taught is soon forgotten unless you constantly refresh yourself on what was learned.
This classroom model for learning often incentivizes students to learn by rote memorization of the material instead of trying to master the fundamentals. The ability to accurately recall dates, names, and formulas doesn't equate to an understanding of why an event occurred or how a formula represents a law of nature.
This model may have helped you excel in the education system but often falls short when trying to advance in the skills-centric workforce. To better understand why just-in-case learning doesn’t translate well to the real world, let’s look at some of the pitfalls of adhering to this model.
*Please note I am not trying to devalue the education system for providing us with a broad knowledge base. For the sake of this article, I am focused on skills-based learning needed to excel in the real world. We will table the discussion of the importance of philosophical knowledge for a later post.
Pitfalls of the Just-in-Case Model
Information Overload
If there was one thing the education system taught me, it was to always go back and re-read the chapter and any notes if you were unsure about a particular topic. While this technique might have worked in a structured school environment where the information is provided and organized, it is inefficient for trying to learn something new. You end up spending most of your time reading and re-reading the entire material in search of the one core concept you missed.
A more efficient way to determine where you are weak in your understanding is to use the Feynman Technique, made famous by the Nobel Prize-winning physicist Richard Feynman. This technique involves writing out what you know about a particular subject in a clear and simple manner, as if you were going to teach it to a child. It will reveal any gaps in your understanding, since we often mask our confusion with complex vocabulary and jargon. Recognizing where the knowledge gaps lie helps you pinpoint where to focus your time and attention during your studies.
False Sense of Progress
When my most important goal is unclear or difficult to complete, I will often binge on "just-in-case" information found in articles and videos related to the topic. While it feels like I am making progress toward my goal, in reality I am procrastinating on the hard work. It is easy to fall into the feel-good trap of telling yourself you're being productive by learning more about the subject matter when all you're doing is spinning your wheels.
It would be a much better use of time to break down the goal into more manageable chunks to determine where the hangups are hiding. This will free you up to fully focus on the problem at hand to determine the best course of action for reaching your goal. Correctly defining the problem will allow you to start consuming the necessary information to help you reach a solution.
“Neomania”
We tend to obsess over the shiny and new. Whether it's breaking news, the latest tip or trick, or even the latest NYT bestseller, we can't seem to get enough. Our urge to constantly be in the loop is driven by our desire for social status, which is further fueled by well-budgeted marketing campaigns and social media shares. Our fear of being left out of conversations about current events compels us to read up on the newest information, leading to this "neomania" phenomenon.
This innate desire to signal to our peers how well-informed we are will, ironically, lead to more uncertainty in our understanding. We feast on a steady diet of new information not yet vetted by the course of time to determine its credibility. This leads to constant "truth-seeking" as we rapidly update our worldview with each new piece of information presented. It would be a much better use of our time to study information that has withstood the test of time.
Changing Your Model
Perhaps you're like me, obsessed with consuming information for the sake of learning. The just-in-case model from the education system has been implanted so deep into your subconscious that when you hit a roadblock in your work, you revert to reading more information on the subject. If you relate to this and wish to fight the infomania urge, I encourage you to follow some simple steps that have helped me be more intentional with my learning.
First, determine what skills you are interested in developing. If you wish to be successful in the future, pick a skill that others find valuable and are willing to pay for. Have a goal to master only one skill at this time. Be concrete about what you want to accomplish with learning this skill. Not having a clear understanding of what you are trying to accomplish will lead to procrastination and wasted effort.
Once you have made a decision, only search for answers to the question keeping you from further developing your skill. Any information that doesn’t help you progress at this time should be viewed as a distraction trying to diffuse your attention.
If you wish to succeed with this method, you must choose to remove yourself from the noise. This includes distancing yourself from social media where you are constantly bombarded with new information you are supposed to care about. You can’t keep a focused mind if you continually allow yourself to be fed random information in exchange for a cheap dopamine fix. Your ability to focus is what will separate you from the pack during the skill development phase.
With enough practice and patience, you can re-wire your brain to follow the just-in-time model for learning. Making this shift in your learning habits will allow you to develop new skills at a much quicker rate. So if you wish to stay ahead of the curve or even stay relevant in this new economy, this model for learning has never been more important.
Source: https://blakereichmann.medium.com/just-in-time-learning-983ee971bfff (Blake Reichmann, 2020-01-14; tags: Productivity, Learning)