We’re Off to Never-Neverland

And then last week, we watched the two-part, four-hour documentary, Leaving Neverland, on HBO. To me, it devastatingly proved the guilt of Michael Jackson, beyond a Reasonable Doubt. #JayZ
In fairness, whenever I watch true crime, I generally think the person did it. I’m like a police officer that Sarah Koenig describes in Serial:
~“Cops pretty much think everybody is lying to them all of the time.”
We watched Steve Avery in Making a Murderer on Netflix. He did it. We started The Case Against Adnan Syed on HBO. He did it. We watched The Ted Bundy Tapes on Netflix. He did it.
Serial. Murderer. Bundy. We live in a joyous household.
Obviously, Bundy’s a safe bet. The difference, of course, is that Syed and Avery were charged with one count, whereas Bundy and Jackson (and R. Kelly and Bill Cosby and Harvey Weinstein and the Catholic Church) are charged with multiple counts. They used to say that, due to the enormous pension and benefit obligations it owed to its workers, General Motors was basically a healthcare company that happened to make cars. In that way, the Catholic Church is basically a scandal that happens to hand out wafers.
Harsh? Well, I cannot think of a single case where an institution or a celebrity implicated by multiple victims has ever turned out to be innocent. Can you?
In the long run, the abuse of women and children is the consistent theme here. (I’m generally hesitant to use the term “rape culture,” but I was stunned to learn that 17 states in America have no minimum marital age requirement. Between 2000 and 2015, over 200,000 minors were legally married in the United States. If that’s not rape culture, then what is?) What is abundantly clear in Leaving Neverland is that Jackson was a strong misogynist. There are only oblique references made to this fact, but he clearly hated women. Of all of the themes, this struck me as one of the most chilling. The fact that Jackson was effeminate only serves to make him more of a Buffalo Bill type of character.
The MJ documentary begins with a disclaimer: “The following film contains explicit descriptions of sexual abuse that may be disturbing to viewers.” They should add a line: “And if you don’t, yeah, get that checked out.”
The two men who have come forward against MJ are James Safechuck and Wade Robson. (Ya gotta love those names, by the way. Straight out of Central Casting. They’re a little on the nose, huh? Jackson chucked their safety and robbed their son.) Safechuck was born in the States and Robson was born in Australia. (MJ really does like to go Down Under.)
Did he really have to rub it in and claim he has two?
In excruciating detail, the boys (now men) describe how Jackson destroyed their lives and their families’ lives. The Jackson Estate is alleging they’re only doing this for the money. OK, then how come there are no allegations against Prince? Or Bruce Springsteen? Or Tom Petty? They’re all global ’80s superstars with wealth.
Yet another defense is: “Well, there were other kids, including Macaulay Culkin and Corey Feldman, who went to Neverland and have no stories of abuse.” C’mon. That’s flimsy and you know it. First off, Jackson wouldn’t target fellow celebrities; they’re too empowered and have a network of powerful people looking after them. Predators prey on the weak. (And btw, Feldman finally came out to say he can no longer defend Jackson.) Moreover, just because Michael liked boys doesn’t mean he liked all boys. That’s like the lame concern straight men have about gay men: “I don’t want them hitting on me.” Relax, douche. With that kind of homophobia, you’re lucky if anybody hits on you.
“Where were the parents at?” — “The Way I Am,” Eminem
Criticism of the parents is justified. That said, you have to put yourselves in their place. They’re middle-class families with no extraordinary means… child abuse wasn’t as well known as it is today… and Michael Jackson was the biggest star in the world. The documentary shows the parents tried to stop it, but eventually, Jackson manipulated them and won. In acting class, we learned that comedy derives from a sane person in an insane world (Arrested Development) or an insane person in a sane world (Borat). A sane person in a sane world is a documentary. An insane person in an insane world is reality TV. The parents had entered an insane world. Was it strange that MJ asked to sleep in the same room with their kids? Yes, but Michael Jackson is calling you on your home line and coming over to your house to play. THIS WHOLE THING IS INSANE. How do you make decisions about what your kids can and cannot do at Neverland?
I saw firsthand how crazy celebrities can make us. As I entered adolescence, two men dominated my bedroom walls: Jordan (the other Michael) and Andre Agassi. I can’t remember exactly when I met him. OK, it was April 3, 1991. Seriously. I don’t even need to look it up. Spring Break. In Orlando, Florida, with my family. (Perhaps to search for my lost glove.) Perkins Restaurant. 8:30 am. There he was, with his girlfriend at the time, Wendy Stewart (not to be confused with Wendy Darling from Peter Pan). I got a picture with him, and I spent the next few hours lying down in the back of our van, staring up at the sky. I couldn’t believe I’d met my idol; I was the very definition of starstruck. Years later, I met him several more times, as I volunteered as a ballboy and Player Locker Room worker at the ATP Tournament in Cincinnati. I thought I was obsessed with him until I learned what a ballgirl did. Agassi threw up in a towel and she took it home. I’m not joking. How in the world could anybody carry somebody else’s vomit to her house? I’m telling you — celebrities make people do crazy things.
When We Both Had Hair.

Source: https://funnyindian.medium.com/were-off-to-never-neverland-b8b2e14b0c52 (Rajiv Satyal, 2019-03-30). Tags: Michael Jackson, Leaving Neverland, Abuse, Pop Culture, Music
3 Key Metrics to Measure Developer Productivity

Measuring developer productivity is a tough puzzle to solve. With so many variables to consider, it’s hard to identify a workable metric. While creating a measurement system for developers is no easy feat, it’s also not impossible. The task becomes more approachable when managers consider these three metrics:
1. Value of Code: Identifying the value of code involves more than just counting the number of lines. Static Object, a code management software, uses a “Line Impact” algorithm to assess each line of code and assign a value to it. This allows the software to quantify the impact on the code base and measure changes over time. Line Impact attempts to measure code in the same way a manager would — by considering many variables. While this is more complex, it also leads to a more accurate picture of how effectively individual programmers and teams are performing.

2. Speed: Productivity can’t be measured by code value alone. On most teams, it’s also critical to get work done quickly. That’s where speed comes in. Static Object includes a velocity measurement that allows managers to see each developer’s average Line Impact per day and observe how contributions trend over time. While speed is a great indicator of productivity, it’s not a complete view. It should be weighed carefully against the value of the work being done as well as any refactoring that happens later.

3. Tech Debt: Perhaps the most elusive measurement, technical debt can be described as extra development work that arises when code is implemented. A quick solution today might solve an immediate problem but create other problems that need to be solved in the future. Code duplication and complexity, lack of documentation, and programming rule violations are all factors that contribute to tech debt. There is no single formula to calculate tech debt. However, identifying the debt sources and estimating the amount of added development time is a good place to start. As Static Object continues to add new features and integrations, the ability to measure tech debt is becoming a reality. The tool will identify where time is spent refactoring and revising code after a project is considered “done”. The work done in this time is roughly equivalent to tech debt.
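Static Object’s actual Line Impact algorithm is proprietary, but the velocity idea above — averaging a per-commit impact score into a per-day figure — can be sketched in a few lines. The sketch below is a hypothetical illustration only: the `daily_velocity` function, the impact scores, and the author names are assumptions, not the real formula.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

def daily_velocity(commits):
    """Average impact landed per active day, per author.

    `commits` is an iterable of (author, day, impact) tuples, where
    `impact` is whatever score your model assigns to a commit — a
    stand-in for a metric like Line Impact, not the real algorithm.
    """
    per_author_day = defaultdict(lambda: defaultdict(float))
    for author, day, impact in commits:
        per_author_day[author][day] += impact
    # Velocity = mean of each author's daily totals over the days they committed.
    return {author: mean(days.values()) for author, days in per_author_day.items()}

commits = [
    ("ada", date(2018, 7, 2), 12.0),
    ("ada", date(2018, 7, 2), 3.0),
    ("ada", date(2018, 7, 3), 9.0),
    ("lin", date(2018, 7, 2), 20.0),
]
velocity = daily_velocity(commits)
print(velocity)  # ada: (15.0 + 9.0) / 2 = 12.0, lin: 20.0
```

Tracking how each author’s number trends week over week is what gives the metric meaning; a single day’s value says little on its own.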
There are countless ways to measure software developers. These factors range from personality and culture fit to coding speed and years of experience. When measuring developers’ productivity, every organization has a different set of variables. Using the right tools, managers can get better insight into the work being done and measure performance more objectively.
Interested in exploring how Static Object could help your organization? Try a 30-day free trial today!

Source: https://medium.com/static-object/3-key-metrics-to-measure-developer-productivity-c7cec44f0f67 (Gwen Schlefer, 2018-07-10). Tags: Management, Productivity, Coding, Product Management, Software Development
A story of recycling, privilege and social class in behaviour change

Photo by Lisa Fotios from Pexels
This is a story about a kitchen bin, privilege and how our unconscious class blinkers can bias the way we think about behaviour change.
Privilege as the absence of inconvenience has been on my mind recently as I’ve immersed myself in Stephanie Land’s Maid and her experience of living in poverty: poorly paid work, ruthless working conditions, humiliating requirements for government aid with a side dish of domestic violence and homelessness. One of Barack Obama’s favourite books of 2019, it’s a deeply insightful glimpse into a vastly different life to mine and an important perspective for my professional work as an applied behavioural science practitioner.
But first, let’s talk about recycling bins.
A rubbish start
As I folded away a small cardboard box and put it into the recycling drawer of our new kitchen bin, I made a comment to my husband about how much easier recycling was now that we had this miraculous thing, and how much tidier our kitchen was. Before buying it, we used bags-for-life to store recycling which made our kitchen look messy — although we did mostly recycle and collect kitchen waste, the friction created by the “solution” made it tempting to just bin things.
The catch is that the solution to removing this friction cost us 160 euros (GBP150/USD190). We hesitated to buy the bin for a year, because who spends that much money on a bin? There was always something better to spend it on than, well, a rubbish storage solution.
The fancy bin
Even at this stage of our lives when my husband and I are finally reasonably comfortable, it was a big spend. I thought back to the early days of my former company when I was always worried about money and the first few years of my career when my salary was barely enough to cover London living costs… spending half of my monthly food budget on a bin was simply unimaginable.
As I sipped my morning coffee, I started to think about the role of money in sustainable behaviour change. I wondered just how much class bias there might be in our profession, so I did what comes naturally to me: research.
Money, money, money
The average salary in 2019 was £30,420 a year with a weekly take-home pay of £585. Age and location create variation and the average is pulled up by those who are doing much better financially: the average of the Top 3 occupation groups is 30% higher than the national average, and 52% higher than the average of the Bottom 3 groups.
Notes: the groups were averaged to simplify the discussion — the middle and bottom groups are more homogeneous, and although not all of us in this field will be in the A group, the average should be enough for illustration.
As I dug deeper into the Office of National Statistics data, I learned that most of us working in applied behavioural science or related fields are in that Top 3 group, and more specifically in occupation group B which makes up 21% of UK workers which leaves 68% of the workforce earning less (charts at the end of the post). Most importantly, on average the group our various professions belong to earn 32% more than the national average.
Source: ONS, as above
That’s more than enough to buy a fancy bin for recycling, with some left over for buying organic produce and eco-friendly laundry detergent.
Back to the bin
Our lived experience gives us a certain set of lenses to view the world with, and even if we’ve been through less fortunate times in the past, our memory is patchy, so it’s only human to forget about it once the situation improves.
I thought about three of my former rental homes in London, all of which were tiny with nowhere to store recycling. In many areas you need to put your recycling out in a plastic bag, which in practice means you have a bright orange bag full of rubbish adorning the corner of your kitchen at all times.
Some people have a SMEG fridge in their kitchen; I had this beauty.
A lot of behaviour change discourse especially in articles about sustainability* revolves around overcoming the intention-action gap and framing the challenge in psychological terms. This morning reminded me of the invisible yet powerful barriers that can exist in that gap, ones that we’d rather not even think about let alone admit when someone asks us about our green intentions. The lack of those barriers in our own lives can also lead us to subtly judge others who are not living up to the same standards, maybe even perceiving them as… irrational.
When our own living circumstances lack certain inconveniences, it’s much harder to imagine them as potential barriers, let alone consider them when we think about solutions — of course, the problem must be psychological and we need to create an intervention. Many behaviour change frameworks and mnemonics reflect the tacit assumptions of their creators, and depending on which one you use, you might forget to consider what the COM-B model defines as “Physical Opportunity” provided by the environment (e.g. time, location and resources).
One recent example of how our class blinkers can bias our assumptions of barriers to “better” behaviours has been evident in the discussion about behaviour change in times of COVID-19.
Corona and the “opportunity from disruption”
I have read numerous articles on how the disruption from COVID-19 is an opportunity to “nudge” consumers towards a more sustainable future, many of them from a very middle-class perspective. Here’s one to illustrate the “opportunity in disruption of habits” content stream:
Discontinuity is abundant in times of coronavirus. Many of the cues in our environment that usually prompt our habitual responses are missing. The largely automatic process of getting ready to head out for work, initially cued by our alarm, is displaced. Similarly, a colleague who would usually pop past for a chat in the morning is only around virtually, eliminating the usual catch-up and tea break. This is both stressful and tiring, but it is also an opportunity to reimagine ourselves. (Source)
Life has indeed changed but not for everyone — and certainly not to the same degree. Some people’s experience resembles the above passage but it’s mostly about those of us who would normally work in an office because many others have continued to work as usual. A lot of people have also lost their jobs — sustainability will be the last thing on their minds because, much like our new kitchen bin, it has suddenly become a luxury they can’t afford.
Always start by checking your privilege
“Privilege is a hard concept for people to understand because normally when we talk about privilege we imagine unearned riches and tangible benefits for anyone who has it. Privilege is actually more about the absence of inconvenience, the absence of an impediment or challenge, and as such, when you have it, you really don’t notice it. But when it’s absent, it affects everything you do.” (John Amaechi, BBC Bitesize)
When I lived in London, it was difficult to recycle food waste and recycling was another hassle in an already busy, stressful life — especially in a small flat and without a car.
In contrast, my near-frictionless recycling luxury now includes a fancy kitchen bin, a garden compost and a car that allows me to dispose of recycling as often as I want to — it is merely a matter of self-discipline to do it. It might have only been a few years since I shared my small London flat with an orange recycling bag, but it’s a memory I keenly purged as soon as I could. Unlike my younger self, I can now also afford more eco-friendly products and generally close the intention-action gap more effectively by reducing friction for behaviours that align with my values.
Stephanie Land’s stories of living on a budget that could break from even the slightest unexpected cost were eye-opening and, to be honest, humbling. The beautiful houses in our town with sports cars in their driveways encourage upwards comparison and sometimes fool me into thinking my life is actually very modest when in fact I live a life full of invisible privileges.
It’s hard to imagine a life you do not live which is why it’s crucial to start any behaviour change project with a thorough analysis of the context with a framework like COM-B to reduce the blind spots arising from our personal experience — even if they are barriers you cannot influence.
Most importantly, before we attempt to change anyone else’s behaviour we need to start by checking our own privilege.
*N.B. I’m not an expert in sustainability — I’m simply using this topic as an illustration.
Additional charts:

Source: https://squarepegmind.medium.com/a-story-of-recycling-privilege-and-social-class-in-behaviour-change-f8a81be84a79 (Elina Halonen, 2020-09-14). Tags: Behavior, Sustainability, Behavioral Science, Behavior Change, Recycling
New Publication — Life’s Funny

and not always in a “haha” way.
Photo by Mark Daynes on Unsplash
I started this new publication — life’s funny. But what does it mean?
It’s open to interpretation, and can mean many things. It can mean different things to different people.
I started this publication for all these reasons.

Source: https://medium.com/lifes-funny/lifes-funny-213a35bd72e6 (Linda Horton, 2019-05-05). Tags: Life Lessons, Publication, Writing, Life, Humor
When Isolation Turns to Solitude
When you’d rather be alone on a stranded island than be around another mindless person, group, or situation.
Photo by Joshua Rawson-Harris on Unsplash
When you finally realize your own mind, your own energy, your own time is the most precious thing you have in this world.
When you can see people for who they really are, shed the toxic ones, and cherish the real ones.
When the faint of heart is long gone, and you are okay with that. When you see, it wasn’t a plague that ruined anything other than what was not for you.
When you have truly accepted and appreciate life’s outcomes as blessings.
I didn’t see it at first.
I was waiting to be rescued by a man who promised me a whole new life of love in paradise. Like a lusty, naïve princess of a modern fairytale, I started dreaming of the day of rescue. With that mentality, I despised all that life was by agonizing and mentally withdrawing from everything. I mentally checked out of my career, and I even stopped bonding with my pup, Clyde.
I was either floating on a dream of entirely leaving this life and everything in it, or in complete disgust and frustration of everything that currently was. Dramatizing about the awful plague this, and the terrible virus that; I needed to be rescued.
As a young girl, I was told that being born on a Saturday comes with a special force. With strong enough will, this force can change the course of time and the direction of destiny.
The Reiki Master I visited last week even said I have a special, gentle force protecting me. I have tons of stories of this force that’s changed some situations, but this time the force formed a disease and locked me up at home.
I didn’t see it, and I definitely didn’t accept it. I thought I had finally met my soul mate, and now life was playing a cruel joke. But the force is wiser, of course.
I really needed a leap to the next level of my personal evolution; I thought that’s where I was headed. Instead, I was blindly sidetracked by a distant dream full of lies and emotional exhaustion, to sum it all up. It wasn’t the plague that ruined things.
We blame the plague for everything that’s now broken, politics, healthcare, financial crisis, relationships, but we don’t want to admit that the things were unstable, weak, or broken already.
One night I woke up standing up outside my bedroom surrounded by broken glass. I had completely trashed my apartment, and I didn’t even remember.
It was the new anxiety pills that were helping me deal with life plus some alcohol. Should I blame the pills, though? The alcohol, the plague, my subpar lifestyle, or that guy who was out sipping cocktails with his young female friends while I’m still cleaning up the broken glass 8k miles away.
It was a rude awakening.
If you never trashed your apartment, I can tell you; even though it was probably enormously liberating at the time of the blackout, I seriously don’t recommend it. Broken glass can never be completely swept up, at least not for months and the cuts on your feet take forever to heal.
Some people choose minimalism as a cute weekend project, but some people, maybe just me, need to trash their apartment because they have been staring at all these freaking things they had checked out from but were still seeing. Things from the past or things given, acquired, piled up, stuffed in, or stuck around me that honestly had to go somehow.
Reality can be so harsh.
Especially when you live an artificial fantasy life, avoiding the truth and what life really is, not dealing with issues, running around with mindless people, doing and buying mindless things, and surely when, like me, you create a whole new wishful life to escape the real one.
I had to literally pick up the pieces and make some changes since then. I had to cleanse and detox from everyone and everything that wasn’t adding love and value to my life.
“Evolving Involves Eliminating.” — Erykah Badu
Staying in isolated loneliness is detrimental to your life and health. Psychology Today says it can create depression and addiction, as lonely people eat and drink more. It can also contribute to cancer and disease. It’s something that has to be dealt with properly. We are social creatures; we need to be around other people, but solitude reconnects us with ourselves and humanity.
If we work through those emotions that are making us lonely, “we free ourselves up for problem-solving, creativity, and spirituality. If we can embrace it, this opportunity to adjust and refine our perspectives creates the strength and security for still greater solitude and, in time, the substance and meaning that guards against loneliness.”
Photo by Grace Madeline on Unsplash
By definition, isolation is staying away from others because they have a disease, or because you have the disease. But maybe for us reading this, it’s a force calling for our own transformation, to a self much more powerful and wiser than ever before.
Solitude is the enjoyment of yourself, by yourself.
Solitude is constant gratefulness for all that life is.
Solitude is tough and unconditional love, care, and preservation for yourself.
Now is the time to become true to ourselves. To preserve, evolve, and love ourselves.
That is solitude.

Source: https://medium.com/illumination/when-isolation-turns-to-solitude-cd8866db4207 (Georgia Dimitrious, 2020-12-27). Tags: Self Love, Self-awareness, Solitude, Isolation, Covid 19
Developing a Customised Assistive Solution: Philip’s Engineering Internship Experience

My internship with Thought-Wired started in May 2017, after I met Dmitry and James at the AUT STEMpreneur event, where they shared their experience starting their tech start-up.
My Background: From Psychology to Engineering
In 2011, I graduated from the University of Auckland with a Bachelor of Science, majoring in Psychology. I thoroughly enjoyed the course, but the classes were theoretical and I didn’t feel that they would help me in developing a career outside of research in an academic environment.
Therefore, in 2012 I enrolled into AUT’s Bachelor of Engineering programme and graduated with 2nd class honours last year. Currently, I’m pursuing a Master of Engineering degree with AUT. My research topic focuses on developing a software robotic controller.
Combining my background in Psychology and Engineering, I have always had an interest in the fields of brain-computer interfaces, machine learning and biomedical devices that help individuals who live with physical impairments.
Focus of my internship: Developing a customised add-on solution for nous™
In June, I started my internship with Thought-Wired and I was given the project of developing an add-on for nous that is customised for a patient with a rare, neurodegenerative disease called “Multiple System Atrophy” (MSA).
At first, I was tasked with building a standalone graphical user interface (GUI) with the functionality of capturing and analysing data from a sensory device. The original purpose of capturing and analysing these signals alongside nous™ (which captures and analyses focused attention-based EEG signals) was to utilise the remaining physical and cognitive capabilities that the patient still had, to maximise her ability to communicate with her family and the outside world.
When I first joined the project, I was informed that the only physical capabilities the patient had were tilting her neck and partially lifting her forearms. However, due to her condition, while I was still learning and experimenting on ways to capture input from her, I was informed that the patient had lost the ability to lift her lower arms, thus eliminating this movement as a possible communication method.
So I shifted my research focus to building the GUI for real-time data acquisition and analysis of signals captured from her remaining physical capability, while keeping in mind that the code should be kept as modular as possible, such that it can be easily implemented into nous’ main architecture.
Challenges with building the add-on
Building the basic structure of the GUI itself wasn’t too much of a challenge because I’ve built several throughout my studies. One of the biggest challenges when designing the GUI was trying to debug the code when there were no apparent faults in the logic or design of the code! I was baffled for several days when the GUI didn’t display the captured data in the manner that I designed it to do. The design and implementation of the code didn’t suggest any flaws or bugs when it was being checked repeatedly.
This is when I asked Thought-Wired’s senior software engineer, Sean Carmichael, for advice on how to debug the problem I was experiencing. After he sat down with me and listened to how I had designed and implemented my code, he suggested the bug may not be due to a design or implementation flaw, but something to do with the hardware design or the inherent capabilities of the compiler in which the GUI was written.
After our discussions, we went over some data-sheets, experimentation and documentation, where we found that this was indeed the case! The compiler’s fastest data-processing rate was significantly slower than the fastest rate at which the hardware could deliver data. When we ran the GUI and hardware at the fastest sampling rate, the GUI was unable to process all the data acquired from the sensory device.
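The internship’s GUI code isn’t shown here, but the standard workaround for this class of problem — a device producing samples faster than the display loop can consume them — is to decouple the two sides with a bounded buffer that drops the oldest samples rather than blocking acquisition. A minimal sketch follows; the buffer size, sample values, and function names are arbitrary assumptions for illustration, not the project’s actual code.

```python
import queue

def acquire(buf, samples):
    """Simulate a sensor pushing samples faster than the GUI can draw them."""
    for s in samples:
        try:
            buf.put_nowait(s)        # never block the acquisition side
        except queue.Full:
            buf.get_nowait()         # buffer full: drop the oldest sample
            buf.put_nowait(s)

def gui_tick(buf):
    """Drain whatever is currently buffered, as one GUI refresh would."""
    batch = []
    while True:
        try:
            batch.append(buf.get_nowait())
        except queue.Empty:
            return batch

buf = queue.Queue(maxsize=4)     # assumed capacity: 4 samples
acquire(buf, range(10))          # 10 samples arrive before the GUI refreshes
batch = gui_tick(buf)
print(batch)  # -> [6, 7, 8, 9]: the oldest samples were dropped, not queued forever
```

Dropping (or batching) samples keeps the display responsive at the cost of resolution — an acceptable trade-off when the UI only needs to show the most recent signal, not every raw reading.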
Takeaways and lessons learned
From there I found a way to work around this problem, but the biggest learning for me was the difference between applying knowledge in a real-life situation and applying knowledge learnt in classroom/lecture settings. This is especially apparent when it comes to fault-finding and understanding the underlying situation when you have little information to go off of.
Another important lesson I’ve learnt is that it’s difficult for universities to teach the practical skills required for real-world situations, even for AUT, which is renowned for taking a practical approach in educating its students. A lot of the material that universities teach (especially in STEM-related fields) becomes outdated quickly, is inapplicable in real-life situations, or is not taught extensively enough for students to immediately apply it to their job roles. For example, throughout the internship there were several algorithms that I wanted to implement, but I quickly found that they were either too impractical to implement or had a learning curve so steep that implementing them was improbable, especially given the time constraints of an internship.
Before I sign off, I’d like to thank everyone at Thought-Wired for giving me this chance to do an internship with them. It has provided me with valuable experiences beyond measure. Most importantly, I realised the areas that I need to work on the most, and it strengthened my determination to continue pursuing a professional career in the fields of BCI and machine learning.

Source: https://medium.com/thoughtwired/developing-a-customised-assistive-solution-philips-engineering-internship-experience-593545560143 (Sarvnaz Taherian Ph.D, 2017-11-02). Tags: Disability, Assistive Technology, Biomedical Engineering, Internships, Design
The one and only Figma plugin you need to improve your graphic design skills.

As a beginner, it often happens that you feel your design is dull and there is no way to make it look better. It’s normal. So an important question is: “what’s the best way to improve?” Well, everyone is different, but among artists there is one common approach.
Quantity over quality.
Pablo Picasso is known for his masterpieces, but I guess you can’t name more than ten artworks he made, or even five. But do you know how many he made? Well, around 147,800.

That’s nearly 150k. But why? Because doing a lot of practice makes you learn, and by doing a lot of different things, you learn how to do a lot of different things. That’s it. In fact, a lot of artists are extremely prolific and learned their art by doing, doing, doing.

Also, you cannot get quality as a beginner, so it is a huge waste of time trying to make a few masterpieces.
But how to apply this “contradictory” principle to Figma? well by designing a lot of different things. And in this case, a very famous plugin comes in handy: Unsplash.
The Plugin: Unsplash.
Thousands of designers use Unsplash, but we’ll discover a new way to use it. Unsplash has an interesting feature called Insert Random: it lets you bring a random image onto the artboard.
To apply the quantity over quality principle, get a random image, and start designing around it. No cheating. The first image you get, the first you use. In this way, you are forced to learn something new, or at least to think in different ways.
I’ll now show four examples of how using random images allows you to design completely different websites.
Case N.1: the Basketball Hoop.
The first image I got was a basketball hoop. Let’s analyze it.
The image has three main colors (orange, dark blue, and white), and the topic is sports.
This already solves two issues: which colors to use and which mood the artboard needs.

Source: https://uxplanet.org/the-one-and-only-figma-plugin-you-need-to-improve-your-graphic-design-skills-79d8c02b02f4 (Lorenzo Doremi, 2020-12-15). Tags: Design, Graphic Design, Figma, Design Process
Summer Is Trash & You’re Lying To Yourselves | @ me all you want.
Photo by Heather Barnes on Unsplash
Listen to me. Summer is not good. I’m tired of accepting everyone’s lies. I’m tired of being pegged as the grumpy or weird one because I don’t enjoy needing a shower 26 minutes after I’ve taken a shower.
“OMG summerrrrrrrrrr! It’s so much fun and it’s fun and omg summer is the best time for having fun!”
This delusional attitude hasn’t made sense since grade school. Summer as an adult is just a continuation of entirely normal life but with more bugs and you’re always thirsty.
Unlike when we were kids, summer makes life harder. It’s hotter to do literally everything, and therefore more inescapably uncomfortable. There’s no fix for being hot the way a coat solves problems in winter. There is a limit to the degree of naked we can be in public, and I venture to guess they need air conditioning in nudist colonies, too. Peddle your joys of summer nonsense elsewhere and pass me a damp towel please.
This inescapable heat and often arrogantly abundant sunshine is misery most high and you can talk to me until you’re blue (sorry, red—it’s hot) in the face about the beach (it’s far) and pools (I don’t have one and they’re expensive to go to) and ice cream (I’m lactose intolerant) and whatever else you want. Summer makes living a normal, grown up, earning-a-living life harder and I think it’s time my opinions had their day in the sun. Heaven knows everything else is out there baking already.
Hey, I know what let’s do—let’s make a season that only feels comfortable when the majority of one’s body is submerged in water. That sounds like productivity won’t be a problem. Also, let’s make wearing clothes during this time feel like being suffocated by fabric. I’m sure HR won’t take issue with whatever Ace bandage I’m wearing as a garment today because it’s the only thing that won’t contribute to heat stroke. I’m also sure I smell just fine in this meeting.
It’s not just summer by the way, I hate all manner of entirely normal things that most people enjoy. Take skirts for example. No seriously, take the skirts away because I hate skirts. They’re (always) too tight around the waist and likely to reveal your secrets at the other end. No thank you. A skirt is just a dress that gave up halfway toward its goals and I don’t like quitters.
But mostly, it’s summer I can’t abide. I also can’t breathe or stay physically dry for any respectable length of time. And yet all around me are grown people frolicking about with giant inflatable unicorns swilling glasses of pink crap and acting like life is one big Instagram. But I see you. I see the sunburn you got setting up the perfect photo for 45 minutes. I see the phone you dropped in the shallow end. I see the magnified rosé hangover you got because your body is sweating out its water at thrice the normal rate. I see how much more you spend on Lyfts half the year because the subway is the cruelest summer mistress of all.
I shall no longer be made to feel like a meteorological outcast by a population that routinely covers its eyes to the truth about summer with popsicle sticky hands. I will proudly express my distaste and distrust and don’t come cryin’ to me when you find sand in your bed.
There’s room in this life for all sorts of seasonal opinions. It’s just that when it comes to societal takes on summer, you’re all extremely delusional, dehydrated, and wrong. | https://shanisilver.medium.com/summer-is-trash-youre-lying-to-yourselves-930a7831429a | ['Shani Silver'] | 2019-07-16 18:21:06.762000+00:00 | ['Summer', 'Writing', 'Life', 'Culture', 'Humor'] |
8 Surprising Tips To Help You Read a Book Every Week | Photo by Daria Nepriakhina on Unsplash
For as long as I can remember, reading has been my favorite hobby. From third grade to my present 32-year-old self, it’s rare to find me without a book beside my bed or in my purse.
In fact, keeping a book with me at all times was a habit ingrained in me by my parents at an early age. Growing up in the ’90s and early 2000s, we didn’t have our own cellphones. This meant that whenever we left the house, our parents would ask my siblings and me if we had some “reading material.”
The expectation was that you should always have a book, magazine, etc. because you never knew when you’d have a few minutes to kill running errands, in a waiting room, etc.
From that point on, reading became a habit, and one that I truly love. Now, I know that this might not be a habit for you, but you’d like it to be. No worries, I can help.
8 Tips To Add More Reading Time Into Your Daily Routine: | https://medium.com/illumination/8-surprising-tips-to-help-you-read-a-book-every-week-9a38a137f4c9 | ['Betsy Ramser Jaime'] | 2020-07-08 19:53:46.165000+00:00 | ['Books', 'Self Improvement', 'Personal Growth', 'Self', 'Personal Development'] |
RL — Model-based Reinforcement Learning | Photo by Jonny Caspari
Reinforcement learning (RL) maximizes rewards for our actions. From the equations below, rewards depend on the policy and the system dynamics (the model).
In Model-free RL, we ignore the model. We depend on sampling and simulation to estimate rewards, so we don’t need to know the inner workings of the system. In Model-based RL, if we can define a cost function ourselves, we can compute the optimal actions using the model directly.
RL can be roughly divided into Model-free and Model-based methods. In this article, we will discuss how to establish a model and use it to make the best decisions.
Terms
Control theory has a strong influence on Model-based RL. Therefore, let’s go through some of the terms first.
In reinforcement learning, we find an optimal policy to decide actions. In control theory, we optimize a controller.
Control is just another term for action in RL. An action is often written as a or u with states as s or x. A controller uses a model (the system dynamics) to decide the controls in an optimal trajectory which is expressed as a sequence of states and controls.
In model-based RL, we optimize the trajectory for the least cost instead of the maximum rewards.
Model-free RL v.s. Model-based RL
As mentioned before, Model-free RL ignores the model and cares less about the inner workings. We fall back on sampling to estimate rewards.
We use Policy Gradients, Value Learning or other Model-free RL to find a policy that maximizes rewards.
On the contrary, Model-based RL focuses on the model.
With a cost function, we find an optimal trajectory with the lowest cost.
Known models
In many games, like GO, the rule of the game is the model.
AlphaGO
In other cases, it can be the law of Physics. Sometimes, we know how to model it and build simulators for it.
Mathematically, the model predicts the next state.
We can define this model with rules or equations. Or, we can model it, like using the Gaussian Process, Gaussian Mixture Model (GMM) or deep networks. To fit these models, we run a controller to collect sample trajectories and train the models with supervised learning.
Motivation
Model-based RL has a strong advantage: it is sample efficient. Many models behave linearly, at least in a local neighborhood, so very few samples are needed to learn them. Once the model and the cost function are known, we can plan the optimal controls without further sampling. As shown below, on-policy Policy Gradient methods can take 10M training iterations while Model-based RL needs only hundreds.
To train a physical robot on a simple task, a Model-based method may take about 20 minutes while a Policy Gradient method may take weeks. However, this advantage shrinks when physical simulations can be replaced by computer simulations. Since the trajectory optimization in Model-based methods is far more complex, Model-free RL becomes more favorable when computer simulations are accurate enough. Also, to simplify the computation, Model-based methods make more assumptions and approximations, which limits the trained models to fewer tasks.
Learn the model
In Model-based RL, the model may be known or learned. In the latter case, we run a base policy, like a random or any educated policy, and observe the trajectory. Then, we fit a model using this sampled data.
In step 2 above, we use supervised learning to train a model that minimizes the least-squares error on the sampled trajectory. In step 3, we use a trajectory optimization method, like iLQR, to compute the optimal trajectory using the model and a cost function that, say, measures how far we are from the target location and how much effort we spend.
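As an illustrative sketch of steps 1 and 2, here is a minimal numpy example that runs a random base policy on a toy linear system (the ground-truth dynamics here are an assumption for the demo, not anything from the article) and fits the model by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth dynamics (unknown to the learner): s' = A s + B a
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.0], [0.1]])

# Step 1: run a base (random) policy and record transitions (s, a, s')
states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(200):
    a = rng.normal(size=1)
    s_next = A_true @ s + B_true @ a
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next

# Step 2: supervised learning -- least-squares fit of s' ~ W^T [s; a]
X = np.hstack([np.array(states), np.array(actions)])   # (N, 3)
Y = np.array(next_states)                              # (N, 2)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

mse = float(np.mean((X @ W - Y) ** 2))
print(f"model fit MSE: {mse:.2e}")  # essentially zero for a linear system
```

For a real system, the regression target would be a more expressive function class (GP, GMM, deep network), but the supervised-learning structure is the same.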
Learn the model iteratively
However, it is vulnerable to drifting. Tiny errors accumulate fast along the trajectory. The search space is too big for any base policy to cover fully. We may land in areas where the model has not been learned yet. Without a proper model around these areas, we cannot plan the optimal controls.
To address that, instead of learning the model at the beginning, we continue to sample and fit the model as we move along the path.
So we repeat step 2 and step 4 and continue collecting samples and fitting the model around the searched space.
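A hedged sketch of this iterative loop (the toy nonlinear dynamics and random controller below are assumptions): each round appends fresh samples from the current rollout and refits the model on everything collected so far.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_step(s, a):
    # Hypothetical nonlinear dynamics, unknown to the learner
    return 0.9 * s + 0.5 * np.tanh(a)

data_s, data_a, data_sn = [], [], []

for iteration in range(5):
    # Run the current controller (here: random) and append the samples
    s = 0.0
    for _ in range(50):
        a = rng.normal()
        sn = true_step(s, a)
        data_s.append(s); data_a.append(a); data_sn.append(sn)
        s = sn
    # Refit the linear model s' ~ w0*s + w1*a on ALL data collected so far
    X = np.column_stack([data_s, data_a])
    W, *_ = np.linalg.lstsq(X, np.array(data_sn), rcond=None)
    mse = float(np.mean((X @ W - np.array(data_sn)) ** 2))
    print(f"iter {iteration}: samples={len(data_s)}, fit MSE={mse:.4f}")
```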
MPC (Model Predictive Control)
Nevertheless, the previous method executes all planned actions before fitting the model again. We may already be too far off course.
In MPC, we optimize the whole trajectory but take only the first action. We then observe and replan. Replanning gives us a chance to take corrective action after observing the current state again. For a stochastic model, this is particularly helpful.
By constantly changing plans, we are less vulnerable to problems in the model. Hence, MPC allows us to have models that are far less accurate. Here is a video on how a model is trained under this concept.
(source)
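The MPC loop can be sketched with random-shooting planning. The toy point-mass model, cost function, horizon, and candidate count below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def model_step(s, a):
    # Assumed learned model: a simple point mass we want to drive to 0
    return s + 0.1 * a

def cost(s, a):
    return s ** 2 + 0.01 * a ** 2

def plan_first_action(s, horizon=10, n_candidates=100):
    """Random-shooting MPC: sample action sequences, score them with the
    model, and return only the FIRST action of the best sequence."""
    best_cost, best_first = np.inf, 0.0
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, size=horizon)
        s_sim, total = s, 0.0
        for a in seq:
            total += cost(s_sim, a)
            s_sim = model_step(s_sim, a)
        if total < best_cost:
            best_cost, best_first = total, seq[0]
    return best_first

# Closed loop: plan, take one action, observe, replan
s = 1.0
for t in range(30):
    a = plan_first_action(s)
    s = model_step(s, a)  # in reality: the true environment, not the model
print(f"final state after MPC: {s:.3f}")
```

Because only the first action of each plan is executed, small model errors get corrected at the next replanning step instead of accumulating along the whole trajectory.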
Backpropagate to policy
The controls produced by a controller are calculated using a model and a cost function using the trajectory optimization methods like iLQR.
However, we can also model a policy π directly using a deep network or a Gaussian Process. For example, we can use the model to predict the next state given an action. Then, we use the policy to decide the next action, and use the state and action to compute the cost. Finally, we backpropagate the cost to train the policy.
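A minimal numeric sketch of this idea, using a one-parameter linear policy and a toy differentiable model (both assumptions), with the chain-rule gradient written out by hand:

```python
import numpy as np

rng = np.random.default_rng(3)

theta = 0.0   # single parameter of a linear policy: a = theta * s
lr = 0.5

for _ in range(2000):
    s = rng.normal()              # sampled state
    a = theta * s                 # policy output
    s_next = s + 0.1 * a          # differentiable (assumed known) model
    # cost = s_next**2; chain rule through the model and the policy:
    # d(cost)/d(theta) = 2 * s_next * d(s_next)/d(a) * d(a)/d(theta)
    grad = 2.0 * s_next * 0.1 * s
    theta -= lr * grad

print(f"learned policy gain: {theta:.3f}")  # approaches -10, which cancels the dynamics
```

In practice an autodiff framework would compute this gradient through many time steps, which is exactly where the correlation problem discussed below appears.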
PILCO
Here is the PILCO algorithm of training a policy directly through backpropagation.
Simplified from source
However, consecutive states in a trajectory are highly correlated. Backpropagating through long chains of correlated values often leads to vanishing or exploding gradients, so the promise of this approach is limited.
Global Model
What kind of models can we use to represent the system dynamics?
Gaussian Process
One possibility is the Gaussian Process. The intuition is simple: if two inputs are similar, their outputs should be similar too. And of two data points, the one closer to known training data has the more certain prediction.
Say we sampled two data points x1 and x2 with observed values f(x1)=150 and f(x2)=200 respectively. Can we determine the likely values of f(x) for x1<x<x2? The plot above shows these possible values of f(x) within one standard deviation. As shown, the middle point between x1 and x2 should have the highest uncertainty about its value. The outputs of data points, like f¹ and f², can be modeled as a Gaussian distribution of the following form.
where 175 is the mean of f. K is the covariance matrix, with each element Kᵢⱼ measuring the similarity between inputs xᵢ and xⱼ. For details on how to calculate K, please refer to here.
As another example, the right figure below samples the output of 5 data points. The graph plots the predictions using the Gaussian process. The blue line represents the means and the shaded area represents values within one standard deviation (SD). So for input x=5, the prediction within one SD is between -1.1 to -0.4 with mean about -0.7.
Source: Wikipedia
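To make the earlier two-point example concrete, here is a minimal GP posterior in numpy. The squared-exponential kernel, unit signal variance, length-scale of 1, and a prior mean equal to the observation average (175) are all assumptions:

```python
import numpy as np

def rbf(xa, xb, length=1.0):
    # Squared-exponential kernel: nearby inputs get similarity close to 1
    return np.exp(-0.5 * (xa[:, None] - xb[None, :]) ** 2 / length ** 2)

X_train = np.array([1.0, 3.0])        # x1, x2
y_train = np.array([150.0, 200.0])    # f(x1), f(x2)
X_test = np.array([1.0, 2.0, 3.0])    # includes the midpoint

K = rbf(X_train, X_train) + 1e-8 * np.eye(2)   # jitter for numerical stability
K_s = rbf(X_test, X_train)

alpha = np.linalg.solve(K, y_train - y_train.mean())
mean = y_train.mean() + K_s @ alpha
var = 1.0 - np.einsum("ij,ji->i", K_s, np.linalg.solve(K, K_s.T))

for x, m, v in zip(X_test, mean, var):
    print(f"x={x:.1f}: mean={m:6.1f}, std={np.sqrt(max(v, 0.0)):.3f}")
```

Note how the midpoint x=2 recovers the prior mean 175 and carries the largest standard deviation, exactly as described above.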
Here is the algorithm of the policy search using a Gaussian Process (GP) model:
Gaussian Mixture Model (GMM)
Another possibility is the GMM. A Gaussian Mixture Model is a mixture of K Gaussian distributions. We assume the model has k most likely outcomes and weight each possibility accordingly:
To identify those k Gaussian distributions, we use Expectation-Maximization (EM) to cluster the sample data into k clusters (modes) that each represented by a mean and variance.
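A compact 1-D sketch of EM for a k=2 GMM (the synthetic two-mode data and the initialization are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data with two modes, which a GMM with k=2 should discover
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 0.8, 300)])

# Initialize k=2 components (means, variances, weights)
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = w * np.exp(-0.5 * (data[:, None] - mu) ** 2 / var) \
             / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    n_k = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / n_k
    var = (resp * (data[:, None] - mu) ** 2).sum(axis=0) / n_k
    w = n_k / len(data)

print("recovered means:", np.round(np.sort(mu), 2))  # close to the true modes -2 and 3
```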
Deep network
Finally, we can also use a deep network.
Global model
Previously, we modeled the dynamics with a global model. If the dynamics are complex, we need a more expressive model, like a deep network, but that requires a lot of samples to train. If we land in a region of the state space that has not been trained properly, the model can be erroneous and lead us to bad actions that destroy the progress of the training.
Local Model
Alternatively, we can adopt an on-demand approach in developing models locally when we need it.
The local model is Gaussian distributed with linear dynamics.
The parameters of the linear dynamics are computed as:
Next, we will see how we train and use the controller to take action.
Controller
Modified from source
We run the controller p on the robot and observe the trajectory. With the collected samples, we can fit the model locally by estimating the derivative using samples.
However, the potential error increases as we move away from the trajectory around which the local model was fit. Hence, we add a constraint to make sure the trajectory optimization stays within a trust region. In short, we will not take actions outside this region, where we cannot trust the optimization result.
This trust region is determined by the KL-divergence which measures the difference between the new controller and the old controller. If the trajectory is too different from the sampled one, we may get too aggressive and the calculated value can be too far away from the real value.
Image source
So what controller should we use? It would be wasteful if the controller explored the same action over and over again for the same state during training; repeated samples provide no new information to improve the model. Hence, we use a linear Gaussian controller (linear dynamics with Gaussian-distributed output actions) to explore actions better.
and Q is the cost function:
Σ is large when the cost Q is small, so we allow the exploration to go off-course more when the estimated cost is low. So how can we find a new controller that minimizes the cost, given that the change in the controller must stay within a trust region?
Before solving that, we need to check out some of the properties of the KL-divergence between the new and the old controller. This will involve some maths but it should be easy to follow or feel free to browse through them quickly.
KL-divergence
We want to establish:
First, we expand the distribution of the trajectory using the new and the old controller:
(Note: both trajectories have the same initial and final state.)
Let’s compute the log term below first which is part of the KL-divergence definition:
KL-divergence is defined as:
Therefore, the corresponding KL-divergence is:
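As a sanity check on this kind of KL computation, the sketch below compares the closed-form KL between two 1-D Gaussian action distributions with a Monte-Carlo estimate of E_p[log p(x) − log q(x)]; the particular means and standard deviations are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

mu_p, s_p = 0.0, 1.0   # new controller's action distribution p
mu_q, s_q = 0.5, 1.5   # old controller's action distribution q

# Closed form: KL(p || q) for 1-D Gaussians
kl_exact = np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5

# Monte-Carlo estimate of E_p[log p(x) - log q(x)]
# (the constant -0.5*log(2*pi) cancels in the difference)
x = rng.normal(mu_p, s_p, 200_000)
log_p = -0.5 * ((x - mu_p) / s_p) ** 2 - np.log(s_p)
log_q = -0.5 * ((x - mu_q) / s_q) ** 2 - np.log(s_q)
kl_mc = float(np.mean(log_p - log_q))

print(f"exact KL = {kl_exact:.4f}, Monte-Carlo KL = {kl_mc:.4f}")
```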
Next, we need an optimization method that works with constraints.
DGD is our choice. Let’s have a quick summary here. For those who want more details, you can read about DGD later. First, we need to find its Lagrangian 𝓛 and the Lagrange dual function g, which is defined as:
Then the optimal x* of our objective can be solved by:
Intuitively, we start with some random value of λ and find the corresponding optimal value of 𝓛 using an optimization method. In our context, this will be a trajectory optimization method like LQR. Without proof here, g (i.e. 𝓛(x*, λ)) is a lower bound of our objective. So we want to move along g so that it gets higher and closer to our objective function. That is what steps 2 & 3 do: they use gradient ascent to move λ in the steepest direction of increasing g. Then, with the new λ, we compute a new lower bound g. As we keep iterating, we find the λ at which the maximum of g touches the minimum value of the objective function.
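Here is a toy DGD iteration on the problem "minimize x² subject to x ≥ 1" (a stand-in for the real trajectory problem, where the inner minimization would be LQR rather than a closed form):

```python
lam = 0.0   # dual variable; must stay >= 0 for an inequality constraint
eta = 0.5   # dual step size

# Problem: minimize x^2  subject to  x >= 1
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x)
for _ in range(100):
    # Step 1: minimize L over x at the current lam.
    # Here it has a closed form (dL/dx = 2x - lam = 0);
    # in model-based RL this inner step is LQR.
    x_star = lam / 2.0
    # Steps 2-3: gradient ascent on the dual g(lam) = L(x_star, lam);
    # dg/dlam is just the constraint violation (1 - x_star).
    lam = max(0.0, lam + eta * (1.0 - x_star))

x_star = lam / 2.0
print(f"lam = {lam:.3f}, x* = {x_star:.3f}")  # converges to lam = 2, x* = 1
```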
Model-based RL with local model
Again, our objective is:
We will use LQR to optimize step 1. LQR is quite complicated but for this context, you just need to know LQR is a trajectory optimization method. You can read LQR later if you want more.
The Lagrangian for our objective is:
We divide the objective by λ and create a new surrogate cost function
Here is the objective with the Lagrangian:
As explained in a previous article, LQR solves:
With a Linear Gaussian controller,
Then, we can apply LQR to minimize
This looks almost the same as our objective but the equation above uses the original cost function c. So to solve our objective, we can follow the same procedure but use our surrogate cost function instead of c.
Optimize trajectory with a local model
This is the final algorithm:
Planning
We can also use planning to generate simulated experience and use them to fit the value functions or the policy better.
The difference between learning and planning is that one uses real experience generated by the environment while the other uses simulated experience generated by a model. We use the real experience to fit the value function. We also build a model of the transitions and sample experience from it. Later, we can fit the value function again with this sampled experience. This improves sample efficiency because sample data are reused, and it produces a more accurate value for V.
Model-based RL using sample-based planning (source)
Dyna-Q Algorithm
Here is the Dyna-Q algorithm, which uses both the sampled data and the model to fit the Q-value.
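For reference, a minimal tabular Dyna-Q sketch on a toy 5-state chain (the environment, hyperparameters, and tie-breaking rule are assumptions): each real step triggers (a) a direct Q update, (b) a model update, and (c) n extra planning updates from simulated experience.

```python
import random

random.seed(6)

# Toy deterministic chain: states 0..4, actions 0 = left, 1 = right.
# Reward 1 only for arriving at state 4.
def env_step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
    return s_next, (1.0 if s_next == 4 else 0.0)

Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
model = {}                                # (s, a) -> (s', r), learned on the fly
alpha, gamma, eps, n_planning = 0.1, 0.9, 0.1, 20

def q_update(s, a, r, s_next):
    target = r + gamma * max(Q[(s_next, 0)], Q[(s_next, 1)])
    Q[(s, a)] += alpha * (target - Q[(s, a)])

for episode in range(200):
    s = 0
    while s != 4:
        # epsilon-greedy action selection with random tie-breaking
        if random.random() < eps or Q[(s, 0)] == Q[(s, 1)]:
            a = random.choice([0, 1])
        else:
            a = 0 if Q[(s, 0)] > Q[(s, 1)] else 1
        s_next, r = env_step(s, a)
        q_update(s, a, r, s_next)          # (a) direct RL from real experience
        model[(s, a)] = (s_next, r)        # (b) model learning
        for _ in range(n_planning):        # (c) planning from simulated experience
            (ps, pa), (pn, pr) = random.choice(list(model.items()))
            q_update(ps, pa, pr, pn)
        s = s_next

greedy = [max((0, 1), key=lambda act: Q[(st, act)]) for st in range(4)]
print("greedy actions in states 0-3:", greedy)  # should all be 1 (move right)
```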
Let’s discuss another example of using a model within a Model-free method. The variance of Policy Gradient estimates is usually high.
It can be reduced by averaging over a large number of samples, but this is expensive if physical simulations are needed. If we have a model, however, we may be able to generate the needed data from the model instead.
Overfit
In a Model-based method, we use a relatively small number of training samples because of the computational complexity. In addition, we can use a deep network to model the system dynamics. Nevertheless, we need to keep the capacity of the model small; otherwise, it will overfit, and an overfitted model makes bad decisions. However, such a simple model also limits the maximum rewards. For example, in the Cheetah task below (teaching a simulated Cheetah to run), a Model-based method cannot reach a total reward beyond 500. To go higher, we need a more complex model with more training data. A more feasible solution is to use a Model-based method to train a simple model, then use this model to initialize a Model-free learner and let Model-free training push the expected rewards further up.
Model-based RL with ensembles
Another alternative is using an ensemble method based on the idea that independently trained models do not make the same mistakes and many weak learners can be combined to make good decisions. So instead of training one model, we can use different seeds to train many different models.
In planning, given a sequence of actions, we can determine the state transitions from each model and find out the reward. The reward of such an action sequence equals the average rewards using all these models.
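A sketch of this ensemble-averaged planning on a toy one-step problem (the reward function, quadratic model class, and ensemble size are assumptions):

```python
import numpy as np

# True (unknown) one-step reward of action a; the planner never sees it directly
def true_reward(a):
    return -(a - 0.3) ** 2

# "Ensemble": k quadratic models, each fit on its own noisy dataset
ensemble = []
for seed in range(5):
    r = np.random.default_rng(seed)
    xs = r.uniform(-1, 1, 30)
    ys = true_reward(xs) + r.normal(0, 0.05, 30)
    ensemble.append(np.polyfit(xs, ys, 2))   # independently trained weak model

# Planning: a candidate action's score is the AVERAGE prediction of all models
candidates = np.linspace(-1, 1, 201)
scores = np.mean([np.polyval(c, candidates) for c in ensemble], axis=0)
best = candidates[np.argmax(scores)]
print(f"chosen action: {best:.2f}")  # close to the true optimum 0.3
```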
Distillation
Overconfidence always kills. When we label our data, we treat each label as a definite answer. But in reality it is often less certain. For example, the digit below could be a 1 or a 7.
One problem with ensemble methods is that the computational cost increases k-fold with k models. In distillation, we aim to match the ensemble’s performance while using only one model during inference. We still train the ensemble models, but we use the ensemble’s prediction as a soft label.
where T is the tunable Temperature parameter and zᵢ is the logit output of the ensemble. We then train another model to match the value of this soft label instead. Intuitively, we create a new model that mimics what the ensemble would predict.
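The temperature-scaled softmax above can be sketched directly (the example logits are assumptions):

```python
import numpy as np

def soft_labels(logits, T):
    """Temperature-scaled softmax: p_i = exp(z_i / T) / sum_j exp(z_j / T)."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([5.0, 2.0, 0.5])   # hypothetical ensemble logits for 3 classes
print("T=1:", np.round(soft_labels(logits, 1.0), 3))
print("T=5:", np.round(soft_labels(logits, 5.0), 3))
# A higher T softens the distribution, exposing how plausible the
# non-argmax classes are (e.g. a digit that could be a 1 or a 7).
```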
This strategy can be applied to create a single policy for multiple tasks. For example, we can train a policy for each Atari game below. Once these policies are trained, we will use supervised learning to match what these policies may predict. The objective function is simply a weighted log probability of the unified policy and weights derived from different individual policies.
Thoughts
Instead of fitting a policy or a value function, we develop a model to understand the inner workings better. Knowing how things work, we end up needing fewer samples, which is the key selling point when physical simulation is expensive. Here, we discussed the basics of model learning. But I wish it were that simple. Model learning can be very hard and may not generalize well to other tasks. Learning directly from raw images is difficult when the information is all tangled together. Can we deploy solutions that are economically feasible? Stay tuned: later articles will cover more in-depth Model-based RL methods that address these questions.
Credit and references
UC Berkeley Reinforcement Learning Class
UCL Course on RL | https://jonathan-hui.medium.com/rl-model-based-reinforcement-learning-3c2b6f0aa323 | ['Jonathan Hui'] | 2019-11-02 03:22:48.636000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'Data Science', 'Reinforcement Learning', 'Deep Learning'] |
3 Tips and 1 Analogy for Keeping Your Writing Focused and On Point | Sean Myers · Dec 5
Photo by Stefan Cosma on Unsplash
As a professional legal blogger, I’ve written a lot of blog posts.
A lot. Of blog posts.
Over the course of that time, I have stumbled into most of the pitfalls that litter the path of a 500-word piece.
By far the most common problem is a lack of focus: I’m writing about Subject A, but Issue B keeps sneaking in.
The solution is simple — write two blog posts or, if that’s not feasible, figure out whether Issue B is relevant and, if it really is, quickly cover it so you can move on to the main topic.
But the solution is only obvious when you can see the problem.
Recognizing that you’re trying to write two different articles at the same time is the hard part.
Here are a few techniques that I’ve come to use over the past 6 years.
The problem of prerequisite knowledge
Sometimes, you’re writing an article and find yourself going back and explaining something else that needs to be dealt with first. You’re trying to write the article in 500 words, but you’ve already got 350 down before you can even begin.
An example was when I wrote a piece for a DUI-defense lawyer about problems with different breathalyzers (fun story: there are dozens of models of breathalyzers). Before I could get into the different problems that each model had, I found myself explaining the difference between the two types of breath testing machine (there are preliminary breath tests that happen during the traffic stop with a handheld device, and then there are evidentiary breath tests that happen on a bigger machine in the police station). I was also explaining the three chemical processes that preliminary breath tests use to detect alcohol (there are breathalyzers, alcosensors, and intoxilyzers).
Fun stuff, I know.
So fun that I hit my word count before I could even get to the point where I could talk about the dozens of makes and models of handheld breathalyzers. I hadn’t even dropped in the “did you know” factoid that I wanted to include (did you know that a “breathalyzer” is an example of a generic trademarked product like a Kleenex? Just like a Kleenex is a brand of facial tissue, a Breathalyzer is a brand of breath testing device).
You see the problem: Before I could talk about the dozens of different models, I had to discuss the three types of preliminary breath tests, and before I could tackle that issue, I had to differentiate between preliminary and evidentiary testing devices.
Writing a 500-word article usually takes me around 30 minutes. I was an hour in, with 750 words on my plate, and getting pissed at myself when I swore and groaned.
“I’m writing three different articles,” I grumbled.
Sure, I probably could’ve turned the 500-word article into a 1,000-word piece. But sometimes you can’t, like if your editor is expecting you to stick to the targeted word count. Regardless, you often shouldn’t — if you’re getting paid for 500 words, delivering 1,000 isn’t just a waste of time; it means you’re also burning through a potential blog topic that you could get paid for, and it creates expectations that will hurt you, later on.
I chopped the article up and wrote three pieces and, through the magic of the internet, simply linked to that which came before:
“There are dozens of preliminary breath tests [link], each of which works in one of three different ways [link].”
Don’t distract the reader
In some cases, the best writer is a good editor.
Good editors are constantly asking, “Why is this here?”
Writers should be asking that about their work, as well. Sometimes, when Issue B creeps into your article about Subject A, Issue B is either completely irrelevant or is such a minor detail that it isn’t worth taking the time to cover it, at all.
An example is this story I wrote, about poll monitoring during the 2020 election:
There’s a scene in the story where I get a text, right before polls close, to tell anyone in line that they have a right to cast a ballot if they’re already in line. However, because so many people voted absentee, there hadn’t been a line at the precinct in over 12 hours.
In real life, I read the text, looked around at the empty parking lot, and chuckled because there were, quite literally, crickets chirping in the night.
The editor told me to take out the part about literally hearing crickets in the grass. She didn’t believe that I could hear them on a 40-degree November evening in Pennsylvania.
Until then, it hadn’t occurred to me just how strange it was to hear crickets at that moment. But I had. They were everywhere at the polling precinct. When I walked through the grass, several would jump out of the way of every foot fall. They were even thicker in the mulch. They were hiding in the gap between the sidewalk and the concrete curb. I was talking with another poll worker when I stepped backwards and struggled to keep my train of thought — I’d felt one crunch under my heel.
But the crickets weren’t the story. They were just a tiny detail. Including that detail — cool as it was — would raise questions that I would then have to either leave open-ended or spend time developing.
When the editor told me I could leave the crickets in, if they really happened, I refused.
“Cool details that distract the reader,” I said, “still distract the reader.”
Keep them focused on what you’re really saying. Keep the writing tight.
Pick a feasible subject
The best way to notice that you’re trying to write two articles at once is to avoid it, in the first place.
Easier said than done, obviously. But when you write enough articles of a given length, you start to see how long it will take you to cover a topic. At this point, I can go months of legal blog writing without grumbling from the realization that I have two pieces in the same document.
The key is to simply have the capacity and the experience to foresee what you need to write about, before you write it. Unfortunately, given the wide variety of topics there are to write about and everyone’s different writing styles, this is nearly impossible to teach.
The point, though, is that if you’re a new writer — or an experienced writer, branching into an unfamiliar field — and you’re getting frustrated with yourself over how often you suddenly realize you’re writing multiple articles at the same time, just know: It gets better.
The f-stop analogy
At the risk of explaining a complicated concept with a far more complicated analogy, the article writing process is similar to a key component of photography: The f-stop, which describes the aperture or the focal length of a camera’s lens.
We’re talking about the hole in the camera lens that allows light to pass through to the inside of the camera — the hole that can be shrunk or widened by adjusting the metal blades that form its edges.
Photo by Alex Rhee on Unsplash
The size of this hole is measured in “f-stops.” Confusingly, the smaller the f-stop number, the larger the hole in the lens.
The bigger the hole in the lens, the more light gets in.
So, adjusting a camera lens to f/2 will let in a lot of light. From there, if you adjust it to f/22, the metal blades will tighten, the aperture and the hole in the lens will shrink, and less light will get into the camera.
But that’s not all that will happen.
The aperture of a camera also determines the picture’s depth of field.
Low f-stop numbers, like f/2, have a small depth of field. When you set the camera’s focus, only the objects at that distance will be clear and sharp. Everything else will be blurry and out of focus. High f-stop numbers, like f/22, have a large depth of field with everything in focus.
On left: An image with a low f-stop, with the focal point in the middle of the rope and the rest of the image out of focus | Photo by Nadine Shaabana on Unsplash. On right: An image with a high f-stop, with everything in focus | Photo by Luca Bravo on Unsplash.
Photographers use high f-stops for landscape shots because the subject is, well, everything. But they use low f-stops for portrait photography, headshots, or pictures where they want to focus on one particular object. By putting the rest of the picture out of focus, they eliminate any unwanted distractions.
Now you see where this analogy is going.
Writing isn’t the only medium where something can creep into, and distract from, the intended subject.
Just take a look at this photo:
Photo by Carly Gerlach on Medium
It’s still a pretty good photo. But what’s the subject, here? Based on the f-stop and blurry background, it’s intended to be the woman up front. But the colors of the houses attract the eye, in spite of their blurriness and the focal point.
Even in the foreground, there’s a distraction that pulls the viewer away from the true subject: The padlocks on the bridge. It’s one of those “love bridges,” where couples write their initials on locks and then fasten them to the rails.
It’s a pretty neat picture, but there are three photos, here: The portrait, the bridge, and the colorful river houses in the background. Put together, it’s difficult to know where to look.
That feeling is what a reader experiences when you write an article that’s trying to do too much.
And now you see what I mean
Hopefully, this very article made you experience that feeling, as well.
The focus of this article was to help people notice when they were writing articles that struggled to stay on point. Its aim is to help writers whose face is in their hands, or who are scowling at their screen and wondering, “Why isn’t my article coming together?”
The goal was to trigger that epiphany that widens the eyes with the thought, “I’m writing two different articles! I wanted to write about this, but this other thing keeps getting in the way.”
This article offered three tips for seeing the problem: Be on the lookout for prerequisite knowledge. Don’t distract the reader with details that aren’t worth the explanation. Pick a subject that can be covered in your targeted word count.
Then it used a photography analogy, using f-stops and pictures to make the lessons visual.
But was that analogy really necessary? Analogies are good, especially when they describe one thing in terms of something completely different — like explaining a writing technique in a visual way.
But was that particular analogy necessary? F-stops are a very technical term. Explaining them is quite difficult, even for people who are legitimately interested in photography.
As a writer, you need to weigh those interests. On the one hand, I weighed the value of the f-stop analogy and how it could explain, in visual terms, the task of focusing your writing on one subject. On the other hand, I weighed the time that it would take to give the reader the information they would need to understand the analogy and how it fit into the point that I had set out to make.
After weighing those interests, I opted to include the analogy. Explaining a writing problem in a visual way was important enough, in my eyes, to justify covering all of that prerequisite knowledge.
Prerequisite knowledge?
Yes, prerequisite knowledge.
What was the distraction in this article, then? What detail didn’t have to be included, at all?
That was also in the f-stop analogy. It was the detail about how the f-stop measurement impacted the amount of light that enters the camera.
Adjusting the aperture on a camera changes two things:

- The amount of light, and
- The depth of field.
Only the depth of field mattered for the analogy. Yet I also mentioned how the aperture impacted the amount of light let into the camera, which was irrelevant for my topic. Those four paragraphs beneath the image of the camera don’t just fail to move the article forward: They distract you with details that you thought you would need to understand Subject A, but didn’t.
As for picking a feasible subject, length requirements are always the great limitation. The longer you can write, the more you can cover. On Medium, for example, a good rule of thumb is to keep things under a 10-minute read. I could have broken this article into two:
- 3 Ways to Notice That Your Article Has Lost Its Focus, and
- Keeping Your Article On Point is Like Taking a Photo
But I didn’t have to.

Source: https://medium.com/writers-blokke/3-tips-and-1-analogy-for-keeping-your-writing-focused-and-on-point-77a12f550d6 — Sean Myers, December 5, 2020
Finding Opportunities in The COVID Crisis

DataSeries · Dec 17 · 10 min read
Source: Shutterstock
Thimble, a U.S. startup, offers flexible and short-term business insurance to SMEs and micro-entrepreneurs. And business couldn’t be better. “Demand has skyrocketed during the pandemic,” says Thimble CEO Jay Bregman.
Employers in the U.S. have shed millions of permanent, full-time jobs and an increasing number of these laid-off workers are starting their own companies. They need insurance but due to the uncertainty caused by the pandemic prefer not to take out the annual policies offered by traditional insurers. Thimble provides liability coverage for customers in more than 130 professions, including handymen, landscapers, cleaning people, and dog walkers. Policies can be purchased directly from the Thimble website or app by the hour, day, week, month, or year. Bregman says Thimble has seen an “incredible” rise in demand not only from micro-entrepreneurs and small businesses but also from bigger businesses, who, due to the pandemic, can’t forecast what’s going to happen. “They can’t buy annual policies or don’t want to buy new policies because they don’t want to be stuck overpaying or paying for something they don’t need and so they’re buying our products,” says Bregman. “We think the time has come for New Age products that allow people to buy only what they need.”
Thimble’s flexible offer is just one example of why agile startups are seeing business boom during the pandemic while insurers and some corporates in other sectors are finding themselves on the back foot. The creative ways that startups are responding to existing or emerging needs that are being insufficiently addressed by large corporates and governments during the COVID pandemic were the topic of a December 8 roundtable organized by DataSeries, a global network of data leaders led by venture capital firm OpenOcean (an investor in Thimble), and moderated by The Innovator Editor-in-Chief Jennifer L. Schenker.
Panelist Austin McChord, Ex-CEO & Founder of Datto, a provider of cloud-based software and technology solutions delivered by managed service providers, says he sees lots of opportunity in servicing the needs of SMEs and newly minted entrepreneurs. “A lot of this pandemic stuff is not that dissimilar to when a forest fire comes through and clears out a whole lot of things in an area but then creates new open space for new plants to show up and strive to survive and grow and flourish,” says McChord. “In a huge way, COVID is just making the future come faster.”
People who had considered launching their own businesses but were waiting for the right time are making the leap during the pandemic out of choice or necessity. “If you are in the business of serving these people then you had better be ready to reach out to them and onboard them,” says McChord. “There are so many new offshoots, so many new seeds being planted during this forest fire of a pandemic. I think it’s actually a really exciting time to be in this space.”
Enabling Micro-Entrepreneurs
Panelist Allan Martinson, a co-founder of Estonia-based startup Xolo, couldn’t agree more. “There are millions of people in the U.S. and Europe who are involved in independent work and the crisis is actually creating more opportunities and more growth in that sector,” says Martinson. Xolo offers a self-service-based, highly automated, and location-independent management solution to more than 57,000 micro-entrepreneurs in 119 countries. The company wasn’t sure what to expect when the pandemic started. “We were quite afraid when COVID first hit that many of our customers would end up going bankrupt or close down their businesses,” says Martinson. Xolo sent out an email to its customer base and asked them how they thought the pandemic might impact them. “We got about 400 replies and about 25% of them said they were worried that the pandemic would destroy their businesses,” he says. “We braced ourselves for the worst but nothing like that happened. Freelancing has been steaming ahead and actually accelerating during the last six to nine months and, for us, that is a very welcome development.”
There are around 22 million self-employed people in Europe, according to a December report compiled by Boston Consulting Group and Malt, a French scale-up that helps corporates recruit freelancers. Salaried work peaked around the year 2000, and since then, the number of independent workers has been on the rise across Europe, according to the report. Among them, freelancers are spearheading the growth of independent work: they are the fastest-growing segment of the European labor market. France has seen a 92% increase in the number of freelancers in the past 11 years while Spain has seen a 40% increase. The exception is Germany, where the number of freelancers has remained steady at around 1.3 million. Though the digital freelancer movement was initially driven by the rise of the IT sector and software developers, it is now continuing to grow thanks to workers from a wide variety of industries.
Servicing SMEs
It is not just the number of freelancers that is experiencing explosive growth. The creation of new small businesses is also on the rise, with the U.S. reporting a higher than usual number of new business applications during the pandemic. In fact, in the third quarter of 2020, the U.S. experienced the highest quarter of new business applications since it began recording data about them in 2004, according to an NPR story.
These new businesses, along with existing SMEs, are in need of all kinds of IT support services. Datto works with managed service providers that serve as the outsourced chief technology officers or chief information officers for small businesses. “That model has been really successful, and we don’t see that changing, as it enables person-to-person face-to-face relationships and that is something that the large players in the industry just can’t provide,” says McChord.
Beyond IT troubleshooting, cybersecurity is among the top priorities for companies both big and small in a time of COVID, says McChord. “On the cybersecurity side the pandemic has really challenged a lot of paradigms,” he says. “And a big piece of it is that both small and large businesses regarded the secure place to work as inside the corporate network and then they built this big expensive wall around it to ensure that only good things happen there. Well, guess what? No one’s in the corporate network anymore. Workers are taking these machines home, out into the wild, and they’re all going through VPNs. And so it has caused a lot of rethinking around how a lot of that security is done, and where the security infrastructure gets deployed.”
In the new work-from-home environment, time management is more important than ever, says panelist Fred Krieger, Founder and CEO at Scoro, a work management platform focused on SMEs.
“What we offer to companies, regardless of their size, is a simple promise: we automate their workflow in a way that allows them to cut five hours per week per person,” says Krieger. “It is all about being efficient and working on things in the right way.”
Work management software is “actually drastically growing, it’s kind of exploding,” says Krieger. “Culturally, a lot of companies have been completely dependent on different kinds of in-person activities. The pandemic has pushed them out of their comfort zone. They finally need to let go of measuring when and how and where people show up. Instead, they need to actually measure the outcomes.”
Krieger says he expects market consolidation both in terms of tools and the number of players in the work management space. While it can be tougher to sell horizontal solutions during a pandemic when many SMEs are pinching pennies, Krieger says he doesn’t see it as a handicap. “From a customer’s perspective, these nice time management tools are kind of like patches, quick fixes to problems. But when you have a lot of patches and quick fixes that is where you get this drive to start consolidating,” he says. “The pandemic has pushed people to adopt a lot of these quick-fixes so in the medium-term we think COVID will be a huge contributor to our success.”
Building a successful business during the pandemic has been less straightforward for Booksy, which specializes in helping book appointments online, mainly for hair and beauty salons, across different geographies. The salons were hard hit during the pandemic, with many being forced to close their shops during lockdowns. “The crisis had a lot of unexpected consequences for us,” says panelist Marcin Borowiecki, General Manager of Booksy’s U.S. operations. Rather than focusing primarily on adding new clients, Booksy shifted its attention to helping existing clients survive by helping them introduce gift cards or manage consent forms and making sure the timing and volume of appointments adhered to social distancing requirements.
The pandemic also opened a new line of business for Booksy in an area that was completely unexpected. Banks and telecommunication companies, for example, started worrying about controlling — and limiting — the number of people visiting their branches in order to keep their employees safe and abide by government rules. “We have deals with BNP Paribas, with Credit Agricole, with one of the major telco companies and a couple of other retail players who basically are utilizing Booksy to manage appointments and manage traffic in their outlets,” says Borowiecki. “For us it adds a completely new leg of business and opened up our marketplace to different types of services that would have been impossible before COVID.” Banks’ willingness to get back to business quickly while providing a safe environment for their clients and employees opened the door for a start-up to provide an important solution for their day-to-day operations.
Targeting Gaps In The Market
The panelists said they believe there are still plenty of untapped opportunities to serve SMEs post-COVID, including the need for improved knowledge sharing, automation, and convenience for the end user; the rising expectations of customers; and the importance of streamlining processes and design.
Datto’s McChord sees opportunity in knowledge sharing between colleagues. Tools already exist, such as business communication platform Slack, work tools startup Notion, and Confluence, a web-based corporate wiki developed by Australian software company Atlassian. “But nobody has nailed it yet,” he says. “There is definitely a better way out there. If information can be shared more easily and more quickly, you could get a lot more done with less people and that would be huge,” he says.
Ease of use is another area that offers opportunity. SMEs need and expect turnkey solutions, says Booksy’s Borowiecki. The goal is to come up with solutions that are customized, automated, and designed to be super easy for customers to use, he says.
Services that help SMEs become more efficient will also be in high demand, predicts Scoro’s Krieger. “There is a lot of demand for simplifying processes for companies, both big and small. Simplifying starts with creating a proper structure and understanding what actually is moving the needle and then becoming more proactive about time management,” he says. “Companies need to decide what are their long-term goals. Being efficient doesn’t mean you need to squeeze out more from your team or from yourself. It means getting to the same result in less time.”
Thimble’s Bregman sees opportunities in helping businesses cope with uncertainty. “One of the things that we think about is non-essential businesses here in the United States, and just how difficult it has become, and how uncertain it’s become, to run a non-essential business,” he says. “We naturally think about insurance, if their businesses shut down, as being part of the solution to that and we’re developing a product for that,” he says. “But even outside of that, we think there are probably other things, other tools, that can be used for when non-essential businesses are inevitably going to be closed again.”
As the world adjusts to the new normal, Datto’s McChord sees an opportunity to develop entirely new types of services for SMEs that adopt hybrid solutions for workers. “What will small businesses look like six to twelve months from now, when we’re not government-mandated to stay home and where people do their work becomes more of a choice?” he asks. “What does that hybrid workplace look like? Workers will still probably want to go to the office some of the time to get out of the house. But on the other hand, it’s now a proven fact that workers can be productive working from home. We need to figure out what the tools and services needed to serve those businesses will look like,” he says.
There is no shortage of spaces requiring innovation, says McChord. “If I was going to start a startup, I would look at what are the tools that these big enterprises have that SMEs don’t and how do we make this available to small businesses at a price point that makes sense to them and give them that same capability so that they can compete with the biggest of the big,” he says. “Entrepreneurs need to come into work thinking about ‘how do we empower these small businesses to compete and take on large enterprise?’ If they do that then they’re going to end up building the right tools and the right products. I think what’s really exciting is there’s room for so many people to step up and empower these SMEs to grow and take on their large entrenched competitors.”

Source: https://medium.com/dataseries/finding-opportunities-in-the-covid-crisis-94c28da22a3c — December 17, 2020
How to Make Someone Feel Extraordinary by Saying Very Little
A three-step approach
Photo: 10'000 Hours/Getty Images
I walked into his office, a defeated salesman on the verge of quitting. Two hours later, I joined his mentorship program and signed a loan commitment for $12,500.
Nobody had ever made me feel like I was a man of such importance and stature. I strutted out the door, feeling like I owned the world.
But it didn’t start out that way.
I had stepped into his office hesitantly, intimidated by his display of success. The walls displayed photographs of his skiing exploits from all over the world. His desk featured the requisite “Man with Mercedes” photo. All of the trinkets on his shelves looked like museum pieces.
This guy had seemingly achieved it all; I was barely able to pay my bills. I hadn’t earned the right to occupy his airspace, I thought to myself. What the hell was I doing there?
It took him one minute to relieve my unease.
He invited me to sit.
“Just one sec,” he said.
Then he turned off his cellphone and placed it in his desk drawer. He called his assistant and asked her not to interrupt him unless there was a family emergency.
He took out a fresh notepad and pen and sat across from me. After exchanging brief introductions, he opened with, “Tell me about your struggles.”
He scribbled notes furiously as I spoke. When I stopped, he prompted, “That’s interesting. Can you tell me more?”
He’d throw in other questions (what he’d call “reversals”), not to frame the conversation but to keep me talking. The one or two times he wanted to change the direction of the discussion, he’d first ask permission.
By the time we had finished, he had filled up four pages of handwritten notes. He summarized his conclusions on a giant whiteboard labeled “Barry’s Action Plan to Hit $250K.” I told him my goal was $100K, but he responded, “After hearing your story, you’re capable of so much more.”
Fourteen years later, I consider his mentorship program the smartest investment of my life. It was an insane amount of money for me, and I’m still amazed at how he sold me: He had barely said a word, but I had never felt so important, so revered, by someone outside my family.
A few weeks after my mentorship began, he broke down his three-step approach to relationship building:
Demonstrate your interest in the person
The most important lesson he taught me was that demonstrations persuade more than words.
When I walked into his office for that first meeting, he never said: “I’m going to give you my undivided attention.”
Instead, he communicated with his actions. He wanted me to see him turn off his phone and place it in a drawer. His assistant already knew not to interrupt him, but the demonstration mattered. Even the selection of a clean notepad was by design.
I saw all this and concluded: “He’s treating me like I’m a dignitary.”
But your actions alone only make up half the demonstration equation. You also show interest through your body language. Slumped shoulders and lethargic movement signal disinterest. Sharp movements, good posture, and smiles show a sincere desire to hear what someone has to say.
Imagine meeting a friend for coffee. She greets you with a frown. She checks her phone every time you speak and breaks eye contact to stare at the table next to you. But she tells you she’s super interested in what you’re saying. Hard to believe, right?
If you want someone to know you’re interested in them, don’t tell them. Let them make the conclusion themselves from the actions you take and the body language you exhibit.
Use “reversals” to keep them talking about themselves
The Dale Carnegie disciples recommend asking questions. My mentor had his own take on the power of questions. He preferred reversals: brief statements and questions that keep the focus on the other person. Think of it as a game of tennis in which one player does just enough to keep the rally going. A reversal moves the conversation along without forcing it into directions your counterpart might resist. It gives them a feeling of control, which makes them more comfortable and more likely to open up to you.
Some examples of reversals:
“How so? I didn’t see that coming.”
“Really? Tell me more—if you’re comfortable.”
“Curious, how did that make you feel?”
“That’s interesting, and then what?”
“Why is that? If you don’t mind me asking.”
“And? Don’t stop now. I need to hear the rest.”
“That makes sense. What else?”
Notice how there’s a transition statement before or after each question. When you shoot back with just a question, it can come across as harsh or abrupt. The transition statement buffers the question and makes the conversation feel natural.
If you need to move the conversation in a new direction, always ask permission and give the other person the freedom to decline.
Show someone they’re capable of more than they believe
I once had a friend who tried to raise the spirits of everyone she met. She’d profess her confidence in their abilities and praise them for their excellence. She was so kind, but the problem was that she often had no basis for making these assertions. The praise seemed disingenuous.
My mentor had made his praise feel sincere because he did his homework. He learned about me, pointed out my strengths, and explained how he would help me hone them. Only then did he tell me I was capable of more than I believed. “You should quadruple your income goal, perhaps more.” The argument was right there on the whiteboard. I had no choice but to believe it.
To convince someone they’re capable of achieving more than they believe, you must do the hard work of discovery first. Without learning the truth, your praise is just bullshit.

Source: https://forge.medium.com/how-to-make-someone-feel-extraordinary-by-saying-very-little-887811246bae — Barry Davret, August 10, 2020
A Critique Of Your WIP From A Badger Who Hates To Read | A Critique Of Your WIP From A Badger Who Hates To Read
Ahem.
Before we begin, a quick question. The world must be full of humans eager to provide feedback in exchange for a little attention or cup of earl gray tea. There are so many humans in the world. Why didn’t you ask one of them to critique your WIP?
Instead of me, a badger who hates to read.
Some humans like reading. Upwards of 20%. Though, if the internet is to be believed, they mostly prefer taking pictures of themselves and arguing about who should pay more taxes.
And, yet, you’ve handed me this, a thick sheaf of papers completely unsuited for an underground environment. Did you know that I am nocturnal? Did you know I do not keep a lamp inside my sett? And that reading in the dark gives me headaches? (A sett is a cozy underground burrow where I can raise my cubs far away from pop music, social media and “writers”.)
Perhaps you could have given me a lamp instead of an unfinished book I do not want to read. That would have been thoughtful.
The only parts of this story I liked were the ones I accidentally spilled earthworms on, because I was able to lick them later and taste some of the nice earthworm flavor still lingering on the page.
Your hero was boring, mostly because he was a human and humans are boring. Always talking about other humans. Who cares?
Did you know your novel doesn’t pass the Badgel Test? There wasn’t a single moment when two humans had a conversation about a badger. I KNOW! I couldn’t believe it, either. The story was also completely devoid of otters, beavers, sandy soil, tasty bird eggs, jenga, public urination, slugs, tree bones, bowties, acorns, nice smooth rocks, or anything else a small furry mammal might find interesting. It’s almost like you went out of your way to make it a tiresome slog.
So my first note would be to work on that.
My second note would be to think about including an actual badger in the story. I understand this could be difficult for publishers logistically speaking, but it would be nice to open the pages and have a living badger jump out and say hello. Maybe a sexy one who is into the idea of helping me raise a litter of cubs and agrees with my hard stance on claw grooming.
Then the two of us could have a conversation together and forget about your story.
When I was about 3 pages through your novel, (the longest five hours of my life) I realized I should probably take a break and build another toilet so the idiots in the sett next door don’t get hopped up on pig butt worms (yes, a real thing) and forget where one sett sits and the other sett lies. Did you know that badgers use toilets to clarify the borders of their setts? Badger poop is a great deterrent.
That’s an interesting fact that would not be out of place in your novel.
Also, I noticed that in the novel the hero drives a car. You are going to have to change this. Cars kill upwards of 50,000 badgers a year. And here you are promoting them as an innocent mode of transportation instead of the soulless murder beasts they are? Remove the car from your novel or one night you’ll hear a scratching at your window, then you’ll see hundreds of pairs of beady eyes shining in the moonlight, then your children will be dragged away screaming.
Two human children for 50,000 badgers seems more than fair.
There was one part I did like. It was the scene when the hero’s mom got drunk, because it reminded me of this time in 2016 when the other badgers in my sett found a bunch of rotting fruit and we got so wasted. It was hilarious, because Bob put on a tiny pair of pants and started rhyming “fur” with “brrr”. Badgers can get a little crazy sometimes. Not everyone knows that.
There were a few things I didn’t understand.
What is “pavement pizza”? I don’t think I’ve ever heard that term before.
I was also somewhat confused by the word “anthropocentrism”. Not sure what you were trying to say with that one.
I’d look it up in the dictionary, except, due to some unavoidably messy snacking, sections “A”, “B”, “C”, and all the other sections are currently unreadable.
Which reminds me — hopefully you won’t be wanting your manuscript back.
Not because of messiness or snacking. Of course not.
That would be rude.
I gave it the respect it deserves. I left it out by the new latrine and Steve feels it’s far better than any of the other toilet papers we’ve tried. He even wrote you a blurb! Feel free to use it when you publish your book.
“This novel was there for me when I needed it. It’s completely transformed my life and is far superior to any other book I’ve ever used to wipe my butt with. Thanks so much. You are a brilliant author.” — Steve the Badger.
I suggest you go with this blurb, since you probably wouldn’t like the one I was planning on writing.

Source: https://sarah-lofgren.medium.com/a-critique-of-your-wip-from-a-badger-who-hates-to-read-a76e7422fa0a — Sarah Lofgren, June 4, 2019
Summoning Your Senses to Tell Your Story

Did you know that the olfactory (smell) center is highly connected to the memory center in the brain? Or that music can help unlock information in the mind that would otherwise be difficult to access? When writing your personal story or memoir, it isn’t enough to simply sit and outline the crucial events in your life. Delving into the senses can help peel back the layers of intellect and time to get to a deeper, more intensely emotional space. Folding vivid sensory details into your story also draws your reader in, helping her feel as if she is there with you. Here are some examples from literature, as well as some tips for accessing your own sense memories and descriptions.
Taste: In Remembrance of Things Past, Marcel Proust famously recounted how tasting a particular kind of cake (a madeleine) had an immediate, transporting effect to a distant memory: “No sooner had the warm liquid mixed with the crumbs touched my palate than a shudder ran through me and I stopped, intent upon the extraordinary thing that was happening to me. An exquisite pleasure had invaded my senses, something isolated, detached, with no suggestion of its origin… And suddenly the memory revealed itself. The taste was that of the little piece of madeleine which on Sunday mornings at Combray (because on those mornings I did not go out before mass), when I went to say good morning to her in her bedroom, my aunt Léonie used to give me, dipping it first in her own cup of tea or tisane.” More than 100 years later, readers are still talking about Proust’s madeleines and memories.
To evoke your own taste-related memories, here are a few things you can try:
-Page through family recipe collections or cookbooks to help conjure childhood mealtimes and holidays. Look for notes in the margins that tell a story.
-List favorite meals and foods from different time periods in your life, then write a few lines for each that describe the tastes but also delve beyond them to the cook, the guests at the table, your emotions and life events, etc.
-You might try making a particularly memory-laden recipe. The physical act of breaking the eggs, stirring the batter, etc., may very well bring back some sensations and insights about days past. Have a pen or recorder at the ready.
Touch: There are as many descriptions of tactile sensation as there are sensations — and they can be used to describe pain, pleasure, apprehension, excitement, tenderness — you name it. Here is a description of pain in Ray Bradbury’s Fahrenheit 451: “The pains were spikes driven in the kneecap and then only darning needles and then only common ordinary safety pins, and after he had shagged along fifty more hops and jumps, filling his hand with slivers from the board fence, the prickling was like someone blowing a spray of scalding water on that leg.”
To get at tactile-related memories, try these tips:
-Choose a scene from your life and focus solely on the tactile sensation. For example, if your scene is in a childhood tree house, maybe you recall the rough feel of the boards you sat on or the nail that stuck up on one of the ladder rungs. Perhaps the chill and the goosebumps on your legs when you climbed in after being out in the rain. Focus on listing adjectives that get at these kinds of sensations.
-Look back at old photos, again focusing on touch. Was your first outfit for school too tight around the collar? The prom dress exceptionally silky? What about that first time holding hands, or going to bed with your significant other?
-Try listing the three most comforting touch sensations from a selected period of your life, or, conversely, the three most uncomfortable.
Smell: Unlike the quotes offered for taste and touch above, this first quote on smell is about the genuine need for more smell descriptions in literature, from a piece by Jill McCabe Johnson in Brevity (the piece also has a fun “Does My Writing Stink” test): “Given the power of smell, you’d think authors would cram their work with scents, but we don’t. Open any literary journal and compare the instances of visual imagery with the number of references to smell. In fact, leaf through your favorite literary journals and see if you can find a reference to smell at all. Most ‘creative’ writing is oriented toward the visual — what the setting looks like, what the characters look like, what the objects at hand look like — which is important. Sight is a key tool for recognition and navigating space. Yet smell informs the very basics of our survival — eating, mating, and safety from predators — and it does so on the brain’s most fundamental level.” Something to think about, for sure.
The tricky thing about describing smell is that it often calls for a comparison. With taste, if you write, “she tasted the briny salt of the oyster,” anyone who has had an oyster will get the comparison. If you say, “the carpet felt bristly against her bare skin,” you accomplish a similar shorthand. Of course, there are many scents that are universally recognized — the scent of a lemon, for example, or that of a post-workout armpit without the benefit of deodorant. But when you get specific about scents, especially if they are not widely familiar, you will need some well thought-out words. Here is a brief example from Bruce Barcott’s Weed the People: “The smell of a grow room is the scent of transpiration, of fecund exertion. It’s the trapped sweat of a high school locker room, the funk of a hockey jersey steaming on a radiator.” Find more examples of scent descriptions in writing here.
Try these smelly exercises:
-If some scent calls powerfully to you from the halls of memory and you want to write about its meaning, see what you can do to reproduce the experience — maybe find a bottle of Jean Naté (yes, they still make it!) that you can keep open on your desk as you write about your mom, or a hydrangea plant to evoke your grandparents’ backyard. This strategy can also do a great job of evoking memory neurologically. Dare to play with less conventional descriptions, like the “funk of a hockey jersey steaming on a radiator” in the example above.
-List key memories you want to represent, and then across from each memory name an associated smell or two. As the start of this section notes, smell descriptions are underused. Stepping back to reframe descriptions to include scent can add a great layer for the reader.
-Try a smell quiz: ask family or friends what scents they recall from a specific event or era. You may uncover some surprises, like someone recalling the smell of diesel on a road you used to walk, or the smell of a ubiquitous brand of hair spray in the ladies’ bathroom at a favorite club.
Hearing: These words from Carson McCullers (in The Heart Is a Lonely Hunter) are a great example of what music can conjure: “One of those horn kind of instruments played a sad and silver tune. Then the music rose up angry and with excitement underneath. And finally the black march again.
But maybe the last part of the symphony was the music she loved the best — glad and like the greatest people in the world running and springing up in a hard, free way. Wonderful music like this was the worst hurt there could be. The whole world was this symphony, and there was not enough of her to listen.”
If vision is the most common sense invoked in literature, hearing has got to be the runner-up. For more examples of the auditory in writing (in this case specifically music in writing), see this piece in Bustle.
See what you hear (well, hear what you hear) using these exercises:
-List key players in your life and work on descriptions of their voices and any audio quirks (like a super-loud sneeze or peculiar pronunciations).
-If you were designing a soundtrack for a particular era of your life, what would the songs be and why? Work on conjuring the melody and beat for the reader, as well as the feelings the songs evoked.
-If you are writing about a local setting, spend some time there — whether parking yourself at a familiar coffee shop or taking a walk through the neighborhood. Make it a mission to focus on the sounds of the place; make notes about volume, rhythm, interjections, key mechanical sounds, etc.
Vision: The eyes have it. We humans rely on vision so very much, and you don’t have to look far for a rich visual description. How’s this for compelling copy (from All the Light We Cannot See, by Anthony Doerr)? “Marie-Laure LeBlanc is a tall and freckled six-year-old in Paris with rapidly deteriorating eyesight when her father sends her on a children’s tour of the museum where he works. The guide is a hunchbacked old warder hardly taller than a child himself. He raps the tip of his cane against the floor for attention, then leads his dozen charges across the gardens to the galleries. The children watch engineers use pulleys to lift a fossilized dinosaur femur. They see a stuffed giraffe in a closet, patches of hide wearing off its back. They peer into taxidermists’ drawers full of feathers and talons and glass eyeballs; they flip through two-hundred-year-old herbarium sheets bedecked with orchids and daisies and herbs. Eventually they climb sixteen steps into the Gallery of Mineralogy. The guide shows them agate from Brazil and violet amethysts and a meteorite on a pedestal that he claims is as ancient as the solar system itself. Then he leads them single file down two twisting staircases and along several corridors and stops outside an iron door with a single keyhole.”
Can you duplicate that lush visual aura in your memoir? Here are some ideas to get your juices flowing:
-Peer more. No doubt there are old family photographs you have seen dozens, maybe hundreds of times. But this exercise is about peering — looking super closely and intently. Maybe in that front yard picture, you are looking past your family to the siding on the house and recalling how hard Mom saved to make that happen. Maybe you are seeing the un-mown lawn that recalls how difficult that summer was. Is the clothing freshly pressed, rumpled, mismatched? Take notes!
-Do some free association matching key events or situations to colors. Maybe your first impulse is that a particular summer was “blue.” What’s behind the choice — a seaside vacation? The literal blues? What you favored in your wardrobe? Those piercing eyes of your crush?
-If you had to design the ultimate tattoo to represent a person or event, what would it be and why? Put lots of thought into it, as if you are really going to get the tattoo!
Ideally, a good story weaves all of the senses in, but of course how and where they are represented makes all of the difference. Be selective, and be sure you aim for something that can be universally understood and appreciated because your word choice is stellar. | https://katherinehauswirth.medium.com/summoning-your-senses-to-tell-your-story-165c02bf0230 | ['Katherine Hauswirth'] | 2019-02-17 15:30:49.070000+00:00 | ['Writing Advice', 'Senses', 'Memoir', 'Storytelling', 'Writers On Writing'] |
IBM Watson SDK for Go | IBM Watson + Go
Have your first go at Go with the IBM Watson Go SDK, a wrapper for the IBM Watson APIs. The SDK includes ready-to-use data structures and takes care of all the underlying HTTP requests, including authentication. Using this SDK, developers can now easily integrate Watson services into their applications.
How to Install
Use the Go command on the terminal.
go get -u github.com/watson-developer-cloud/go-sdk/...
This will fetch the package, along with its sub-packages and dependencies.
How to Use
First, provision a Watson service by following the steps here. Once you provision a service, you will get credentials, which you will use in the following general steps.
Service Dashboard
Now, follow these general steps in your code:
1. Import the service package
2. Instantiate the Watson service using the API key obtained above
3. Invoke the API endpoint using the service instance. A successful response will contain the HTTP status code, response headers, and the API result
4. Handle responses and errors
Ready, Set, Go
Go ahead and explore the various capabilities of IBM Watson services with Go — from quickly building a simple chatbot to tagging and classifying visual content to unlocking hidden value in data to find answers and surface patterns.
There are also a bunch of examples for various services to help you get started. Play with the Go SDK and let us know what you think. | https://medium.com/ibm-watson/ibm-watson-sdk-for-go-1841d7aa0bcf | ['Erika Dsouza'] | 2019-12-24 09:09:18.901000+00:00 | ['Artificial Intelligence', 'Go', 'Announcements', 'IBM', 'Developer Tools'] |
6 Ways Startups Can Design for Accessibility | Why Does Accessibility Matter to a Startup?
It’s understandable to say, “it’s not a priority for us, yet but we’ll get there,” when you are a startup without product-market fit. You have a lot on your plate.
My goal isn’t to convince you to solve for all the accessibility problems but be aware of the problems so you don’t make stupid design decisions. A general rule to live by is making your product accessible, likely, makes it more usable to end users and modular to your team. Also, many accessibility features for web, like “alt” tags improves your rank with Google.
If you build for an accessible future, like preparing for internationalization, it lays the foundational flexibility so you can easily implement it when you’re ready.
Acting like it doesn’t matter until you’re successful is a recipe for disaster and, more importantly, is pretty heartless.
How Many People Are Impacted?
Accessibility has many facets: visual, hearing, and physical disabilities. Each impacts how a person may or may not be able to interact with the app. If you want to think in business terms, the market size that's impacted is non-trivial.
6.7 million people from 16–75 in the US alone are reported to have a visual disability.
1/12 men and 1/200 women have some type of color-blindness, Red-Green being the most prevalent.
According to WHO, over 5% of the world's population is profoundly deaf.
12.0 million people above the age of 14 in the US alone required the assistance of others in order to perform one or more activities of daily living or instrumental activities of daily living. This includes bathing, dressing, doing housework, and/or preparing meals.
Basics of Accessibility
There are some basic steps that you can take to be prepared to be accessible.
Don’t assume color vision — test your templates and designs in grayscale to ensure that all your elements are distinct without color. Don’t use calls to action like “click the red box to…” UXMag goes a bit deeper on some strategies to handle color deficiencies.
Contrast and sizing are important — zoom out 25% on your design to verify that all the major calls-to-action, core messaging, and interactions remain visible and readable. Don't put key text on top of busy images without a mask. Use the squint test to verify that your key elements don't fade away. Make sure to abide by Fitts's Law in designing buttons and menus.
Use alt tags and watch out for micro-formats — alt and title attributes in HTML are used to describe images. They're used by Google and by screen readers. If you're describing an image, use well-known colors rather than an obscure proprietary name that means nothing to the average person. Also, micro-formats can be dangerous for screen readers, as they have been known to misuse tags that are read aloud, causing confusion by reading machine instructions to the listener.
Sound shouldn’t be required — sometimes error feedback is only given as a sound. Don’t use sound as the only feedback mechanism even in the case where something good happens. A popup dialog, growl, or inline feedback could help accompany a the error or success ding.
Separate static templates from UI code — Although this is generally a best practice, you'll be grateful you do this as you build out multiple front-ends for different form factors. Consider how your web interface reads in a screen reader. For bonus points, separate all your static text into language files so they can be internationalized as well.
Separate event handler from initiator — there is a movement toward the IndieUI standard, which separates how the "scroll," "zoom," or any UI action was fired from the app handler's reaction to it.
For example, if a user wants to scroll down a page, they might use their finger on a touch screen, or click a scroll bar with a mouse, or use a scroll wheel, or press Page Down on a keyboard, or say “scroll down” with a voice command. All those different user actions can be translated into a simple IndieUI scroll event. IndieUI will allow web application developers to get these events from different devices without having to recognize how the user performed the action. With IndieUI, AT will have a simple set of events to control web applications, and web application developers will have a uniform way to design applications that work for multiple devices and contexts.
Getting Started Resources | https://medium.com/the-entrepreneurial-journey/6-ways-startups-can-be-more-accessible-1d51f0f6805 | ['Cyrus B. Radfar'] | 2015-10-23 17:47:07.019000+00:00 | ['Accessibility', 'Startup', 'Technology'] |
How to pick the best R&D Tax Consultant | When it comes to getting the most out of your R&D tax credit, hiring an R&D advisor is usually the best strategy.
In the UK, there are hundreds of companies offering R&D tax credit services. They range from one man/woman outfits to companies employing hundreds of tax relief specialists. If you count the big accountancy firms, that number increases by another order of magnitude. Overall, thousands of financial specialists can help you make sense of the research and development tax credit. The question is: how can you choose who is best prepared to help you?
In this short guide, we’re highlighting the most important things to consider when deciding to work with an R&D tax consultant.
Types of R&D Tax Consultants
In the world of R&D tax relief, there are broadly three types of companies that cover most of the market in London and the UK and offer services that range from forget-about-it to just-a-once-over.
Specialists
For the specialist, R&D is their bread and butter. They usually have a complete, hands-on service and try to take the whole process off your hands. The service typically includes an interview with your tech lead, writing the technical narrative, and creating the financial calculations. Comprehensive enquiry support is usually part of the package as well, so if HMRC wants to question any part of the claim, the specialist is there to handle it.
Accountants
Accountants are all-rounders, and you are probably very familiar with their services. Most accountants can, and usually do, offer R&D tax advice as part of their offering, sometimes bundled with other services, sometimes for an additional fixed or percentage charge. The main difference between an accountant and a specialist is that the service an accountant can provide is usually limited (though not in all cases). Accountants will typically create the financial calculations for corporation tax and review a technical narrative that your team has put together. In most cases, they will offer only limited enquiry support, with most of the email and phone communication with the inspectors left to your employees.
Simplified Service Specialists
Given that many a business has acquired some form of experience with R&D tax claims, a new generation of service providers has sprung up to help the companies that just need a second opinion and some technical support. Most of these providers offer platforms into which the client inputs information; most computations are then generated automatically.
What to look out for when choosing the best R&D tax consultant
R&D spending
How much you’ve spent is an important question, as the more substantial the claim, the more carefully an HMRC inspector may look at it. If your credit is still on the small side, you can probably get away with filing a simplified claim or letting a non-specialist accountant create the claim. As your spending increases and the technology gets complex, a Research & Development advisor becomes critical. If your R&D spending is up to £30,000 — £40,000, which could translate into a credit of up to £10,000 — £12,000, it’s very probable that HMRC will not bat an eye, and you’ll be fine with a simplified filing. Your accountant can do the filing, or you can decide to self-file. A platform solution could be an easy way to solve this problem, as well.
Your time
To file R&D tax credits, there are a few laborious steps that take a bit of time from you and your team. The filing preparation also often happens in moments when your tech team's energies are better spent coding or designing, rather than writing pages of legalese about the eligibility of the technology they've created.
The tradeoff is not hard to guess — if you want to create at least part of the R&D claim, and have no problem being more hands-on in the process, then a lower service option, like letting your accountant check it, may be a great idea. Nobody knows your product like you, so writing the tech narrative yourself is often a good idea.
The difference a specialist makes in writing the claim is not necessarily a better understanding of the technology, but a better understanding of how to present it to an inspector, while ticking all the boxes for eligibility.
Financing your claim
If you are looking to finance your upcoming claim through Advance Funding, having a full-service provider is paramount. Most lenders will work with the consultant to understand the claim and get an accurate estimate of its future value. Often, without a specialist, it can be hard for a lender to extend a term sheet as there is a lot of additional uncertainty. Having enquiry support at your side in case the taxman needs more clarification is also crucial if you are expecting the claim to repay a loan. This helps speed things up, maximises your chances of claiming the full amount and puts the financing company at ease. Chat to us at Fundsquire if this is interesting to you, we can help.
Dealing with an enquiry
An enquiry is a process where an inspector will ask questions as to the eligibility of certain technology projects or certain expenditures included in your business' claim. The probability of an enquiry is low if your spending is low, but as you start investing more in eligible technology, this probability increases. As you get into the high hundreds of thousands or millions in claim value, it becomes very likely that an inspector may want to take a second look at your claim. This makes sense; it is taxpayer money, after all. Overall, the higher your spending amount, the higher the risk.
The cost
Last but certainly not least, an essential factor in assessing the best provider is how much of your claim value you are willing to part with.
A full-service R&D tax consultant can charge from 12–25% of the final claim value, depending on the size, complexity, and timeline of the claim submission. There is a bit of flexibility in the fees, depending on contract length, and in some cases fixed-fee deals are on the table as well.
An accountant may file the submission as part of a package, charge by the hour or charge a fixed price of a few thousand pounds to review and submit the technical narrative created by your team. Additional enquiry support may be on a per-hour basis.
Simplified service platforms can cost between 5%-10% or, alternatively, charge a fixed cost of a few thousand pounds.
The Wrap Up
Who is the best provider for you? It depends a lot on your situation: early-stage or more mature, spending a trickle or a tonne, having time on your hands or pouring every second into your primary business.
If you’d like a bit more information on choosing the perfect research and development advisor, we’re here to guide you. We have a panel of partners that range from full-service consultants to platforms and we can at least point you in the right direction. | https://medium.com/fundsquire/how-to-pick-the-best-r-d-tax-consultant-1545a9798211 | ['Alex Kepka'] | 2020-11-03 10:56:02.646000+00:00 | ['Funding Round', 'Startup Lessons', 'Funding', 'Startup', 'Venture Capital'] |
Design Systems Are Bullsh*t | Addressing the Hype
There’s no denying the design community has fallen hard for design systems. It’s graduated from popular trend to fully-fledged movement. We’re at a point where you can apply for specific design systems jobs and statements like, “in the future, every brand and every product will use a Design System”, appear perfectly reasonable.
I feel the same way about design systems as I did about design thinking before Natasha Jen gave a talk in 2018 entitled, Design Thinking Is Bullsh*t. Namely, why is no one talking about the downsides?
It turns out that isn’t entirely true. In searching over 200 articles on Medium and other blogs online, I picked up on an undercurrent beneath all the praise that is beginning to check the design systems movement. But it still feels like the criticism has been nervously muted, unnecessarily qualified and caveated. In the name of critical thinking and a healthy debate, I believe it’s time to make the case in no uncertain terms that design systems are, in fact, bullshi*t.
Defining Design Systems
The root of the problem with design systems can be found in the definition. Here’s a simple description from a popular article on the subject:
“A Design System is the single source of truth which groups all the elements that will allow the teams to design, realize and develop a product.”
~ Everything you need to know about Design Systems
It’s this attempt to take existing tools and practices, like style guides and patterns, and add other less tangible assets such as values and ways or working that, taken together, present a design system as a complete process from start to finish. This is a mistake. The very phrase “design system” should have alarm bells ringing.
In this quote from Brad Frost defining design systems, we can see further problematic thinking:
“A kit of UI components without accompanying philosophy, principles, guidelines, processes, and documentation is like dumping a bunch of IKEA components on the floor and saying ‘Here, build a dresser!’ The guidelines and documentation accompanying the UI components serve as the instruction manual that come with the IKEA components to help the user properly and successfully build furniture.”
~ Design Systems
It’s a neat sounding analogy, but it’s like saying we need a design system so our customers know how to assemble their own app from our react components. It reveals the suggestion that anyone should be able to build a design regardless of expertise. It also surfaces the idea that while some designers (presumably a minority) will have the responsibility for creating the principles and processes, the others will be handed the design system Allen key and charged with the robotic, rote assembly. Hardly an inspiring vision.
Design Systems Turn Design Into a Check Box
“In today’s world design has become this box that people just want to check off.”
~ Natasha Jen: Design Thinking Is Bullsh*t
There’s a fair amount of crossover in Jen’s critique of design thinking and the issues we can find with design systems. The biggest crossover is at the core of this idea that the design process should be simplified, sped up and accessible to non-designers.
By attempting to catalogue and rationalise everything, the result of the design system is to atomise and codify a designer’s process in a way that makes it appear understandable and, worryingly, actionable by anyone regardless of their design competence.
This devalues design and undermines designers.
“You cannot hold design in high regard while relegating so much of it to a centrally controlled system.”
~ Design Systems Create Bad Designers
We create systems to automate low-level tasks where the pursuit of efficiency is the driving objective. The underlying perspective in the thinking behind design systems, whether deliberate or not, is that design is in the way of more critical work.
Design Systems Waste Time
“We’ve been developing the contribution model for the GOV.UK Design System for the best part of 2 years, and we’re not done yet. Not even close.”
~ The myth that design systems solve easy problems
Design systems take a massive amount of time and effort to create, and require constant work to maintain, update and evolve. The bigger and more comprehensive a design system is, the harder it is to reference quickly and keep relevant. Time must be invested to not only update the system but communicate any changes across teams. The deeper you go down the design system rabbit hole, the more quicksand you lay out for your company to wade through in the future.
Once the design system is up and running, design is now entangled in development. Making changes to the design system will not be straightforward, potentially requiring sign off from people who may not share the same concerns as design. When design is fused with code, it slows down, and even discourages, changes to the product, as engineering is now needed to maintain and evolve the system.
Many product designers work for startups who undergo rebranding, product pivots and overhauls of their codebase on a dizzying frequency. I’ve worked at a startup that went through multiple rebrands and product pivots in its first three years. Why bother with a design system in this context? No one will thank you for wasting time on an internal product when you could have been working on the actual product, getting feature validation, finding product market fit or helping marketing and sales achieve broader business objectives.
It’s true that design systems allow you to go faster in one context — when you’re heading in the wrong direction. Once you finally have your design system ready to go, you can have that design request wrapped up before the day’s out. But by truncating the design process and placing too much emphasis on speed, design systems remove the space needed to question the assumptions behind a task, conceive of new possibilities, or even question if that new feature or screen is needed in the first place.
Design Systems Don’t Work
“The challenge with any design system is they normally don’t work, don’t get adopted, don’t grow or get used if they are imposed top-down without an awful lot of consultation.”
~ Design systems in difficult places
Design systems are a single point of failure, as they deliver the same components to multiple product and service touch points across an organisation. One bug or unintended error is now multiplied across every interface. Thanks to the nature of software development, a bug, while easy to deploy, is much harder to debug. The only solution for engineering is to invest even more effort in testing and maintenance.
The reality is that the only people who truly adopt design systems are the designers who create them. Design systems are the direct descendants of their equally tedious and ignored forefather, the corporate brand guidelines. In a fast-paced work environment, other team members will drop in and find just what they need in order to move on. They will assume that by any kind of lazy reference to the design system they are now good to go. That has serious consequences for communication and collaboration between teams.
Design Systems Ignore Context
“Designers are trained and expected to make good decisions based on judgment… To bake this judgment into a system manifested through groupthink is submitting the individual designer’s expertise to an outside force.”
~ Design Systems Create Bad Designers
A design system detracts from what should be any designer’s primary reference: context.
Now there’s a system, the pressure will be for designers to parse everything through its rules and piece together solutions from the available assets within the system. Coupled with the knowledge that the design system exists in order to sped up their work, contextual research and experimentation within those grey areas early on in the process will be squeezed under an excessively performance-focused culture.
The very objective of a system is to cover all bases. But this approach is destined to produce mediocre results within design, as any pre-defined system will always lack the critical context ingredient. Taking a component out of context, setting rules and a multitude of variables, creates the false illusion that this component can now be applied without reference to the new context. On top of this, by trying to systematically cover all eventualities the design system creates unnecessary bloat, documenting styles and components that may never even be needed.
The widely held opinion that a design system has to constantly evolve, is a recognition in itself that it will never be adequate enough to deal with the real world problems it faces. This is often presented as a caveat, but it illustrates flawed and misplaced thinking. If something is always in need of being updated whenever it is put into practice, how can it be relied upon?
Design Systems Straitjacket Creativity and Kill Craft
“Design systems can make designers lazy, driving them to think only in terms of the components that are available in the design system.”
~ The Hidden Trap in Design Systems
Design systems choose consistency and convenience over creativity and craft. When you overvalue consistency in the pursuit of uniformity, you create too many unnecessary rules that straitjackets creative thinking before it can even get going. When you overvalue convenience in the pursuit of speed, you kill the holistic working process necessary to develop a designer’s craft.
Design systems are well meaning, and no one can fault their ambition. But by dismembering the design process with prescriptive step-by-step rules, it restricts a designer’s freedom to work creatively.
One of the main reasons cited for implementing a design system is to achieve consistency. A slavish devotion to consistency kills creative thinking. Building a design system encourages designers to lose themselves in the details of components, while losing sight of larger product design issues. Users don’t care if every border-radius on your buttons is the same. They will, on the other hand, quickly ditch your product if you’ve failed to resolve core interaction problems.
What we should really care about is coherence. Coherence requires a degree of consistency, but not within predefined rules. There’s space for contextual deviations and greater experimentation, leaving the definition of where and how that coherence is achieved to the designer’s judgment.
Design systems remove what Nassim Taleb would call “soul in the game”. Constantly trying to optimise your work and squeeze more efficiency out of it is destined to leave you in a place where nothing of the artisanal craft can exist, leading you to eventually dislike your own work.
Only someone who has mastered their craft is in a position to innovate. Because through experience they understand that innovation requires, and results from, trial and error. Design systems have no place for trial and error by enforcing and spreading a uniformity in design. An approach that will not be able to successfully adapt and evolve — to innovate — within a constantly changing technological landscape.
Final Thought
I believe we need to understand design systems as the result of a way of thinking about design. When we choose our way of working, the tools we use, our mindset and focus, we need to consider the balance of certain key values. Contrary to the thinking behind design systems, I would choose to favour the following: communication over documentation, craft over convenience, creativity over uniformity, coherence over consistency, and context over everything. | https://uxplanet.org/design-systems-are-bullsh-t-7ecdb795cc62 | ['Pascal Barry'] | 2020-12-07 11:07:08.209000+00:00 | ['Product Design', 'UI', 'Design Systems', 'Design', 'UX'] |
Tools for Software Development Project while working with remote development teams | Decided to hire an external team for software development of your company? Wondering what are the tools you will need to have so that you can efficiently manage your remote development team? Read through this article and you will understand all that you require to make your business successful with
the excellent tools that you can use to administrate your remote development team.
Communication Tools — Skype, Hangout, Slack, WhatsApp
Communication is a basic need of any company, more so if you have a team to manage remotely. It is crucial that all the necessary information is communicated from headquarters to each and every member of the remote team so that there are no problems in functioning due to communication gaps.
There are a lot of ways or platforms you can use to communicate with your remote team these days. If you have the number of each team member, the easiest way would be to form a WhatsApp group and start chatting and making calls as often as needed. If the team needs to work by sharing computer screens as well as over video calls, Skype would be the best option. If video calls alone suffice, you can also consider Hangouts from Google. Slack is another communication tool that is slowly gaining popularity for the many third-party services and integrations built into it. It not only offers group chat but also allows you to create project-specific channels within it. File sharing and searching previous communication or shared assets is quite easy in Slack. All these communication tools ensure the smooth functioning of the remote team.
Project management Tools — Redmine, Asana, BaseCamp and JIRA
Project management is without question one of the most important areas for a company developing a software product. It needs even more attention when you have to manage a remote development team, because you have to ensure that all the facets of project management, i.e. planning/scheduling, collaboration, documentation, and evaluation, happen with ease. Just like the tools for communication, there are a variety of options available for project management as well. Some top picks are:
Redmine: https://www.redmine.org/
Asana: https://asana.com/
BaseCamp: https://basecamp.com/
JIRA: https://jira.atlassian.com/
Code Sharing Tools — Bitbucket, GitHub
The most important question for the development team and headquarters is how to share the code each developer produces. It is very important that this is done in a way that is secure yet easy to handle, so that the process doesn’t become cumbersome for either party. It is not just about code sharing: version control becomes essential when multiple developers work on a single project, and even more so when people work from multiple locations. Tools like GitHub and Bitbucket ensure that teams don’t waste time merely exchanging the code they write and can instead concentrate on actually developing it. Both tools are at the top of the charts for code sharing between teams.
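Under the hood, both GitHub and Bitbucket host the same Git version-control workflow. As a minimal sketch (the repository name, file, and branch here are placeholders for illustration), a developer's daily sharing routine might look like this:

```shell
# Minimal Git workflow sketch. Names below are placeholders, not a real project.
git init demo-project
cd demo-project
git config user.name "Remote Dev"       # local identity so commits are attributed
git config user.email "dev@example.com"

echo "print('hello')" > app.py
git add app.py
git commit -m "Add initial app script"  # a snapshot other developers can pull

git checkout -b feature/login           # each developer isolates work on a branch

# When the branch is ready, push it to the shared host for review:
#   git push origin feature/login       # 'origin' would point at GitHub or Bitbucket
```

Because every commit is a full snapshot with history, developers in different locations can work in parallel and merge their branches without manually emailing files back and forth.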
Reporting tools — Emails for daily status reporting
A development team, or for that matter any team, should have a reporting structure. Everyone should send progress reports to stakeholders and product owners. The best way to share such information is a daily email covering:
what the developer did during the day
any bottlenecks in the current task
what is planned for the next day
any queries and special notes
Deploying the suggested tools, or similar ones, will make project management and team coordination much easier while working remotely. These basic tools ensure that you can manage a remote development team efficiently.
We at Yugasa do our best to follow development and project management practices that bring transparency to project execution and increase team efficiency while working for clients across the globe. To discuss your project needs, feel free to reach us at [email protected]. We shall be more than happy to assist you with any of your product development needs.
Originally published at yugasa.com on June 4, 2018. | https://medium.com/yugasa/tools-for-software-development-project-while-working-with-remote-development-teams-b4e4de444450 | ['Yugasa Software Labs'] | 2018-06-27 11:08:45.827000+00:00 | ['Outsourcing', 'Freelancers', 'Remote Working', 'Startup', 'Outsourcing Company India'] |
Louis | Miltiadis for Unsplash
Louis
My grandfather was a bounty hunter.
Louis Wasserberger was born to Sadie and Moses Wasserberger on March 9, 1904 on Pitt St. in the Lower East Side of New York City.
A brother, Henry, followed in 1911, and a sister, Adele, in 1918.
Louis met Anna Weinstein at a Young Men’s Hebrew Association dance, and they were married in June, 1925.
Anna had a daughter, Arlene, in 1927. Arlene legally changed her name to Arlyne when she was in her forties because she wanted to be different. By the time Anna was expecting her second baby, my mother, in 1934, Louis had left to be with Stella Raphael.
Anna and Louis divorced in December, 1941, just as the country was being pulled into the second World War.
My mother has no memory of her parents ever being together. This was during the days before divorce was common.
What she remembers of her father was when he would take her and her sister with him for visits. He carried a gun because he was a bailbondsman and a bounty hunter.
He was also employed as a gambler at the horse racing track and in a “floating crap game” that was constantly on the move to evade authorities.
Lou briefly worked in the Brooklyn Navy Yard during the war. This was the only legitimate job my mother remembers him having.
My Mom remembers going with him to another of his favorite haunts, McGuries Pool Hall in New York City, next to the Roseland Dance Hall, when she was a very young child.
Louis was married to Stella for a short time before she died in the Rikers Island plane crash in February 1957 (Northeast Airlines Flight 823).
By then my mother was married to my father who remembers going to the morgue at Bellevue Hospital in New York to identify the body. Louis had missed the plane.
Louis was involved with another woman named Gloria after Stella died.
He later married Ruth, his third wife, and had many happy years until she died of natural causes.
He then met Jackie, a woman not far in age from my mother and her sister, and married her. His fourth wife was behind the wheel of the car he was getting into when her foot slipped off the brake causing the car to knock him to the ground. His injuries proved to be fatal.
This story was written during the coronavirus pandemic of 2020 during lockdown in New Jersey. I decided to use the time at home to write family history while my parents are still answering the phone.
My mother is going to be 86 next month and is a stage 3 ovarian cancer survivor. She has been in remission for over a year. My father turned 87 in April.
Family archive photo of Louis, ca. 1920 | https://medium.com/narrative/louis-d2d3ad3e2325 | ['Victoria Ponte'] | 2020-06-26 16:47:59.455000+00:00 | ['Pandemic', 'Writing', 'Family', 'History', 'Family History'] |
A community of life | All Americans should care about the wildlife on the coast plain of the Arctic National Wildlife Refuge, even if we never visit.
(Photo credit: Credit: Alan D. Wilson via Wikimedia, CC BY-SA 3.0)
Protecting the Arctic National Wildlife Refuge is a legacy of our country’s conservation movement. Originally established in 1960 under President Dwight D. Eisenhower, the refuge pre-dates the publication of Rachel Carson’s seminal book, Silent Spring (1962) and the Santa Barbara oil spill (1969), which horrified the organizers of the first Earth Day (1970).
I’ve never been to the refuge, and, I suspect you’ve never been there either. In fact, the vast majority of Americans are less likely to visit the north slope of Alaska than almost any other spot in the U.S. Of course, if you’re lucky enough to have access to a private plane or you’re up for hiking and backpacking there from Fairbanks — with all of the food and provisions you’ll need — it’s surely worth the effort. By all accounts, it’s spectacular. However, that’s beyond so many of us. Still, even if we never get there, that’s not the point of protecting this place.
The reason why this wild and amazing place must be protected was eloquently explained in the 1964 Wilderness Act, which was enacted four years after the refuge was officially designated.
Its words clearly apply to this remote Alaskan area: “A wilderness, in contrast with those areas where man and his own works dominate the landscape, is hereby recognized as an area where the Earth and its community of life are untrammeled by man, where man himself is a visitor who does not remain.” | https://medium.com/environment-america/a-community-of-life-252c46586e07 | ['Ellen', 'Len'] | 2020-12-23 15:15:36.219000+00:00 | ['Environment', 'Wildlife Conservation', 'Arctic', 'Polar Bears', 'Wildlife'] |
‘Clanlands’ Will Take You on a Scottish Travel Adventure | ‘Clanlands’ Will Take You on a Scottish Travel Adventure
Visit Scotland from your home with authors Sam Heughan and Graham McTavish
Photo by Author (Kristi Jacobsen)
Let’s face it, 2020 didn’t go to plan for anyone. Late last fall, I started to research trips to Scotland for summer vacation. As a history nerd, outdoor enthusiast, and travel fiend, I envisioned a two-week vacation horseback riding through the Highlands, hiking Munros, and exploring castle ruins.
But, it’s 2020, and the pandemic squashed those dreams before they could even take shape. Heartbroken at the thought of staying put for the unforeseen future, I lost myself in adventure books, traveling to locations of the authors’ imaginations.
It wasn’t the same, though, especially when the destination was 18th-century America or early-20th-century Africa, but it was enough for the time being.
But when Scottish actors Sam Heughan and Graham McTavish announced they wrote a book about their Scotland adventure for their upcoming Starz series Men in Kilts, I knew it was the stay-at-home adventure I needed. I immediately pre-ordered my copies and impatiently waited for the release date.
The book arrived, and I dug right in. It had all of my favorite subjects — history, travel, and entertainment — and I was ready to travel to Scotland, even if it was from the comfort of my Los Angeles living room.
I wasn’t disappointed. This book was the adventure I craved for eight months, and I highly recommend it to anyone looking for a brief escape.
Here’s why I recommend Clanlands, and why I would give it a 10 out of 10 every time:
Travel Adventures
Sam and Graham take the reader on a travel adventure around Scotland, stopping at some famous and unknown historical locations around the country. In Clanlands, they explore the beautiful countryside, macabre sites, and castles still used as homes.
Through their vivid descriptions and a few accompanying photos, they transport the reader to these stunning locations. I felt as though I was on this adventure with them — traveling in their camper, tasting whisky, and experiencing all that Scotland has to offer.
In a year of canceled travel plans and stay at home orders, traveling by book is a respite from the day-to-day monotony. Clanlands took me on a much-needed adventure from the confines of my small apartment and inspired plans for a future adventure of my own.
History Lessons
I’m a total history nerd and enjoyed how in-depth the authors went with their country's history. Sam and Graham delve into Scotland’s history, many of the original clans, and even their personal history — from the clans they descended from to their recent family history.
Through the book, my knowledge of Scotland’s history expanded from a brief understanding of the Jacobite uprisings in the 18th century to Scotland’s original inhabitants, the tensions between various clans, right down to the details of the Highland Charge. I learned the history of battles, castles, punishments, and how the Scottish people lived in past centuries.
It’s clear the authors did their research and didn’t rely on basic history lessons they learned while growing up. Clanlands is a history book I would gladly read again.
Laughter
Be ready to laugh out loud while reading or listening to Clanlands. The commentary between Sam and Graham, and Sam’s penchant for putting Graham in precarious situations, add a comedic break to Scotland’s sometimes grisly history.
While it helps to know a bit about the authors and their friendships to understand some of the humor, it’s not a prerequisite to enjoying the banter and comedic situations. I got a kick out of each author interrupting the other with one-liners or to add their hilarious interpretation of events to the storyline.
I can’t remember a book that had me laughing this much, and it helped take my mind off my worries and current events.
Authentic Stories
Authenticity is often missing from Hollywood. Honestly, after five years of living in Los Angeles, working in entertainment, and bearing witness to tourists and celebrity hunting, I don’t blame celebrities for wanting their privacy.
Despite being private and protecting their families from the spotlight, Sam and Graham share anecdotes about their families, upbringing, and their starts in entertainment in the book. Clanlands doesn’t read as a tell-all but feels like you’re getting to know a new friend through honest and personal stories. | https://medium.com/books-are-our-superpower/clanlands-will-take-you-on-a-scottish-travel-adventure-b55a99ba683 | ['Kristi Jacobsen'] | 2020-11-28 11:09:10.652000+00:00 | ['Travel', 'Reading', 'Entertainment', 'Books', 'Book Recommendations'] |
10 Seconds of Inspiration to Wrap Up Your Week | I’d love to connect with you! May I send a brief inspirational message every Saturday morning? Visit CreateTeachInspire.com/saturday to receive messages like the ones above.
Here’s a little more about me: | https://jacquelynlynn.medium.com/10-seconds-of-inspiration-to-wrap-up-your-week-1402014b1937 | ['Jacquelyn Lynn'] | 2020-12-06 21:58:13.441000+00:00 | ['Inspirational', 'Inspiration', 'Motivation', 'Motivational', 'Inspirational Quotes'] |
Am I the Only Person On the Planet Who Can’t Meditate? | I’ve heard from more than one person that meditation has many benefits for the mind and the body. On more than one occasion, I’ve sat down and made a concerted attempt at meditating, only to find that I can’t quiet my mind long enough to stick to it for more than a minute or two.
And even that’s a struggle.
What IS meditation? Why do so many people swear by it and why can’t I do it?
Meditation refers to a state in which your body and mind are relaxed and focused. People who practice meditation talk about having increased awareness, focus, and concentration, as well as a more positive outlook on life.
It’s most commonly associated with monks, mystics and other spiritual disciplines (like the suddenly trendy practice of yoga).
But you don’t have to be a monk or mystic to enjoy its benefits. And you don’t even have to be in a special place to practice it. They say that you can meditate almost anywhere, even in your own living room.
But not in my living room.
There are lots of different approaches to meditation, and though the fundamental principles are the same, it doesn’t matter, I still can’t do it.
One of the most important principles of meditation is supposed to be that of removing obstructive, negative, and wandering thoughts and calming the mind with a deep sense of focus.
This is supposed to clear the mind of clutter and prepare it for a higher quality of activity.
But what exactly IS a “higher quality of activity”?
The negative thoughts you have of people who drive you nuts at work, that parking ticket you got, and the fact that your email account has been hacked for the third time, are said to contribute to the ‘pollution’ of the mind, and shutting them out allows for ‘cleansing’ of the mind, so that it can focus on deeper, more meaningful thoughts.
It ain’t working for me. I’m busy trying to come up with a new email password that more closely resembles a nuclear weapons launch code.
Some people even manage to shut out all sensory stimulation; no sights, no sounds, and nothing to touch, and they try to completely detach themselves from the commotion around them. They’re somehow able to focus on deep, profound thought.
In my world, this is similar to something called “sleeping”. You might be familiar with it.
It’s as close as I get to meditating.
They say the silence can seem deafening at first, since we’re all so accustomed to constantly hearing and seeing things, but as I continue this exercise I’m supposed to find myself becoming more aware of everything around me.
And this part confuses the hell out of me: if I’m becoming “more aware of everything around me”, how am I supposed to shut it all out?
And then there are the ‘positions’.
WTF?
The principle is supposed to be to sit in a comfortable position that’s conducive to concentration. Suggested positions are sitting cross-legged, standing, lying down, and even walking.
Come on now; who meditates while walking? (A show of hands?)
And if you’re meditating while lying down, according to my dad, that’s called “resting your eyes”. Or as I mentioned before, “sleeping”.
The chosen position is supposed to allow you to relax and focus. But on what? How do you focus on nothing?
While sitting or standing, the back should be straight, but not tense or tight. In other positions, the only no-no is slouching and falling asleep. But again…
Loose, comfortable clothes are supposed to help a lot in the meditation process since tight fitting clothes have a tendency to be constrictive and make you feel anxious.
But “loose comfortable clothes” could also be oh…I dunno…pajamas?
Studies have shown that meditation actually does contribute beneficial physiological effects to the body. And there has been a growing consensus in the medical community to further study the effects of this.
If you’re like me and feel like you need help learning to meditate, here are some hacks for us all;
Count.
Close your eyes. Sit up straight (wherever you are is fine).
Take a deep breath in through your nose and silently count ONE. Slowly let that breath out through your nose and silently say TWO. Repeat this until you get to TEN.
Start over each time your mind wanders and you lose your count (which will probably happen a few times and that’s just fine.)
Get your pet.
Simply spending a few minutes petting an animal can be a very relaxing and calming activity, especially when it’s done mindfully.
Try Mindfulness Meditation.
With this type of meditation, the goal is not to completely clear your mind of all thoughts; instead, it’s to be fully aware of your thoughts and surroundings in the present moment.
As for falling asleep while meditating, Elisha Goldstein, PhD, co-founder of The Center for Mindful Living in Los Angeles, says that “The goal is, of course, to be awake. But if all you have time for is a meditation as you fall asleep, that’s totally fine.”
Peace :) | https://medium.com/the-ascent/am-i-the-only-person-on-the-planet-who-cant-meditate-24339ae9c4b2 | ['Sienna Clarke'] | 2019-09-19 14:47:33.369000+00:00 | ['Life Lessons', 'Self Improvement', 'Self-awareness', 'Meditation', 'Personal Development'] |
How to Find Stillness, Productivity, and Enjoyment Every Day | Success is nothing more than an accumulation of positive acts.
How can I succeed in business? What’s the secret to becoming a full-time writer? And where can I find the magic formula for learning new skills?
Those are common personal growth questions that many people ask themselves.
They believe that someone has a recipe for success and that they just need to find it. Those thinking patterns hold you back.
No matter if you’re building a business, learning a new language, or improving your physique, a combination of small habits will lead to success.
You need to become a little bit better every day and add a small piece to the puzzle.
That’s where stillness, productivity, and enjoyment come into play.
No matter what you are trying to accomplish, you’ll need those three elements daily.
First, stillness will help you remain calm, focused, and determined. Productivity, on the other hand, will help you achieve more in less time. In other words, you’ll use your time wisely. Finally, you need to enjoy your endeavor to stay motivated and retain your purpose.
Together, the three can help you attain any summit by creating a daily merger of calmness — ensuring that you do the work without distractions, productivity — boosting your time management, and fun — transforming arduous chores into playful challenges.
How do we combine the three?
There are various effective methods to incorporate these three positive states into your everyday life.
On this basis, here are five ways to find stillness, productivity, and enjoyment every day. | https://medium.com/the-innovation/how-to-find-stillness-productivity-and-enjoyment-every-day-d2f49920595c | ['Jack Krier'] | 2020-12-27 20:32:52.056000+00:00 | ['Self', 'Self Improvement', 'Lifestyle', 'Mindfulness', 'Productivity'] |
Timeless web design: Online portfolios today — and in the year 2000 | Timeless web design: Online portfolios today — and in the year 2000
A nostalgic trip through homepages of brand agencies and type foundries on the cusp of the new millennium.
Philippe Starck, a renowned industrial designer, once said:
“A designer has a duty to create timeless design. To be timeless you have to think really far into the future, not next year, not in two years but in 20 years minimum.”
Born in 1990, I witnessed the internet evolve from text to immersive audio-visual experiences. And throughout, I wondered—can design be timeless on such a fast-changing medium as the web?
Starck’s own website unfortunately wasn’t captured in 2000, but archive.org has preserved the landing pages of many other agencies of the day.
Are you ready to jump into the time machine, shed a tear of nostalgia, and see if their designs would speak to the audiences of today?
Design studios
Many of the top design agencies today were founded long before personal computers and the web. As such, they had to consciously transition from print, to the Internet.
Pentagram
London
Pentagram, founded in 1972, is the world’s largest independently-owned design studio. Its list of past and present partners includes such stars as Alan Fletcher, Bob Gill, Paula Scher, and Michael Bierut.
Pentagram’s 2000 website was written in ColdFusion Markup Language and, like many websites of the day, tried to replicate print design elements on the web.
The menu, drop cap, titles, and image captions are pixel-perfect low-resolution GIFs. The columns — hand-crafted with <td> tags. | https://uxdesign.cc/timeless-web-design-online-portfolios-today-and-in-year-2000-234ff5612bb9 | ['Philip Seifi'] | 2020-11-27 23:51:24.575000+00:00 | ['Web Design', 'UI', 'Branding', 'Design', 'UX'] |
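As an illustration of what that hand-crafted approach looked like (every file name, dimension, and attribute below is invented for the sketch, not taken from Pentagram’s actual source), a typical table-based layout of the era might read:

```html
<!-- Hypothetical reconstruction of a circa-2000 table-based layout.
     File names and dimensions are invented for illustration. -->
<table width="600" border="0" cellspacing="0" cellpadding="4">
  <tr>
    <td width="150" valign="top">
      <!-- Navigation rendered as pre-made GIFs, not live text -->
      <img src="nav_work.gif" alt="Work"><br>
      <img src="nav_about.gif" alt="About">
    </td>
    <td width="450" valign="top">
      <!-- The drop cap is an image floated against plain text -->
      <img src="dropcap_p.gif" alt="P" align="left">
      entagram is the world&rsquo;s largest independently-owned
      design studio&hellip;
      <br>
      <img src="caption_project.gif" alt="Project caption">
    </td>
  </tr>
</table>
```

Every visual refinement, from the drop cap to the captions to the navigation labels, was baked into images, which is why such pages rendered identically in every browser but could not be selected, searched, or resized.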
Ask Anything to the teacher who founded freeCodeCamp.org — via the Hacker Noon Community | Ask Anything to the teacher who founded freeCodeCamp.org — via the Hacker Noon Community
“There are so many aspects of civilization that can be improved through better systems — and software is just a way of telling computers how to enact those systems.” — Quincy Larson
Hey Hacker Noon community!
I was a 30-something teacher who learned to code and ultimately founded freeCodeCamp in 2014. I am now working on the nonprofit full-time to help expand these learning resources.
I’m doing an AMA here on Hacker Noon’s community forum. I’ll answer questions live at noon EST on Thursday, July 25. You can ask me questions in advance here on this thread, and heart other people’s questions to make sure I see them and answer them.
Excited to be here for this AMA on July 25th, 2019 at 9 am PST.
Feel free to ask a question. Talk soon!
And for additional reference, check out Hacker Noon’s past AMAs: | https://medium.com/hackernoon/ask-anything-to-the-teacher-who-founded-freecodecamp-org-via-the-hacker-noon-community-69a3a18dba40 | [] | 2019-07-25 17:05:51.535000+00:00 | ['Software Development', 'Coding', 'Freecodecamp', 'Development', 'Hackernoon Ama'] |
The What, Why, and How of Using a Skeleton Loading Screen | The What, Why, and How of Using a Skeleton Loading Screen
Skeleton loading screens will improve your application’s user experience and make it feel more performant
LinkedIn.com style skeleton loading screen example (Image source: Author)
What do Reddit, Discord, Medium, and LinkedIn have in common? They use what’s called a skeleton loading screen for their applications.
A skeleton screen is essentially a wireframe of the application. The wireframe is a placeholder until the application finally loads.
Here’s how skeleton loading screens usually look. Notice how they replace the traditional loading spinner.
Skeleton loading screen for the usual blog post
The skeleton loading screen essentially mimics the original layout.
This lets the user know what’s happening on the screen: they interpret it as a sign that the application is booting up and the content is loading.
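As a minimal sketch of the idea in plain JavaScript (the class names, markup, and pulse animation below are illustrative inventions, not the markup any of those sites actually ship), a skeleton placeholder can be generated like this:

```javascript
// Build placeholder markup shown while real data is still loading.
// Class names are illustrative; adapt them to your own stylesheet.
function skeletonCard({ lines = 3, avatar = true } = {}) {
  const avatarHtml = avatar
    ? '<div class="skeleton skeleton-avatar"></div>'
    : '';
  const lineHtml = Array.from({ length: lines }, (_, i) =>
    // Vary widths so the placeholder resembles real text, not uniform bars
    `<div class="skeleton skeleton-line" style="width:${90 - i * 15}%"></div>`
  ).join('');
  return `<div class="skeleton-card">${avatarHtml}${lineHtml}</div>`;
}

// Matching CSS gives the bars a gentle pulse until content replaces them.
const skeletonCss = `
.skeleton { background: #e2e2e2; border-radius: 4px;
            animation: pulse 1.5s ease-in-out infinite; }
.skeleton-avatar { width: 48px; height: 48px; border-radius: 50%; }
.skeleton-line { height: 12px; margin: 8px 0; }
@keyframes pulse { 50% { opacity: 0.4; } }
`;
```

Once the real data arrives, the skeleton markup is simply swapped out for the loaded content. Because the page’s shape is visible from the first paint, the application feels faster than it would behind a spinner.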
The 3 Top Performance Improvement Issues (AKA Excuses) That Almost Everyone Faces | Performance improvement issues aren’t just about working more productively. They are about living more effectively. If you want to perform at your best, you need to eat healthily, move regularly, and develop a well-balanced lifestyle.
Sounds good, right?
So what’s holding you back?
Excuses.
It’s easy to say you want to improve. But when push comes to shove, it is even easier to not change how you act. You can dream up endless excuses justifying why you can’t, shouldn’t, or don’t want to follow through on what you know is best.
All these excuses do one thing — release you from taking 100% responsibility for everything in your life. Three of them, in particular, show up, again and again, in every domain of life.
Here’s how you can identify these performance improvement issues and ultimately learn how to grow past your excuses and start living up to your potential.
Issue 1: “I’m Not Sure How”
We all would like a well-paved road to follow. Unfortunately, once you finish school, there’s no more syllabus to structure your learning. There’s no more curriculum to show you exactly what you need to know. And there’s no more teacher to answer your questions and guide you down the path. You need to design the course, the content, and the tests all for yourself.
This is tough work, and it can be overwhelming. It’s easy to get stuck in “If…then” excuses that sidestep personal responsibility:
If I just knew what steps to take to improve my diet, then I would eat better.
If I just had someone create an exercise plan, then I would workout more often.
If I just knew what I was passionate about, I would be able to find a job.
If I just knew how to scale my business, I would be more successful.
The reality is that most of these are technical problems. You can either look up the answers or buy yourself the solution. Unless you’re looking for advice or data about an extremely obscure topic, a quick internet search will reveal something relevant to your goals.
It is not a question of whether the information and opportunities are out there. Searching for more information can become an addiction in and of itself.
It’s a question of whether you are willing to do the work. Another Google search won’t change who you are. Actually immersing yourself in a new activity will.
You need to commit to the version of yourself you want to become, and you need to learn to trust yourself. The excuse, “I don’t know how” is often another way of saying “I don’t believe I have what it takes.”
This is why support from others is a vital part of improving your performance. A coach, mentor, or teacher can give you the technical know-how and make you feel capable and confident to take your next steps. The best coaches breakdown your issues into bite-sized actions that you can follow through on.
More importantly, coaches and mentors empower you to take action despite not knowing every step along the way.
Uncertainty should not inhibit action.
The way through the excuse of “not knowing how” is to look inside at what you do know. Start from there. Have other people cheer you along, and trust that the journey will provide you the answer you need.
Performance Improvement Issue 2: “Now Is Not A Good Time”
I had a client who was in the middle of a divorce, just lost one of his parents, and was trying to keep his business from going bankrupt. It was a messy and incredibly hard time. I cautioned him that such a stressful situation might not be the best time for a dramatic life overhaul.
He proved me wrong.
He said, “If not now, when? I’m already in the middle of dramatic change. What’re a few more changes going to do?”
So he turned his whole routine on its head and started taking better care of himself. He began cooking most of his meals and going for runs in the morning (previously, he had never run more than a mile). He even made a daily practice of journaling about his dreams for 10 minutes during breakfast.
His story illuminates one important fact of life-you are always in medias res: “In the middle of things.”
You will always be juggling your health, work, family, relationships, and the rest. These parts of life don’t pause just because you need some breathing space. More often, they only speed-up and become more complicated.
There might be some natural ebbs and flows to your overall busyness. Take advantage of these natural dips to double-down on what matters most. But never think that some magical future time awaits when you’ll suddenly feel ready. If you wait for all the pieces to be in place before you can begin, you’ll be waiting forever.
The real question is not whether this is the right time. The question is whether you are willing to tolerate your life as it is, indefinitely.
If the answer is no, then why wait to make a change?
If today isn’t the right time, what makes you think tomorrow will? The truth that arises here is that you might not be ready to let go of the way things are now. Even if it is unpleasant, at least it is familiar.
But sadly, staying put will not get you what you want.
“We must be willing to let go of the life we planned so as to have the life that is waiting for us.” ~ Joseph Campbell
Performance Improvement Issue 3: “I Know What To Do, But I Still Don’t Do It”
You could easily call this performance improvement issue “The king of self-sabotage,” or “The prince of procrastination.” It constitutes the royal family of excuses.
You understand what you can do to help yourself. You just don’t follow through. At least not regularly.
I’ve worked with clients who have started and stopped a healthy habit so many times it makes your head spin. They’ve got all the knowledge in the world. They just lack consistency and conviction.
This performance improvement issue has many flavors. Here are the most common:
1. The Program Hopper. You get bored easily and thrive on novelty. As a result, you can’t stick with one habit long enough to reap the benefits. After a few weeks, you ditch the whole program because you didn’t see dramatic results. You move on to the next shiny new solution until it begins to bore you. The cycle of unfulfilled expectations continues.
2. The Hopeless Heartbreak. You have tried to lose weight, exercise regularly, manage your emotional reactivity, and take better care of yourself. But after many attempts, you’re right back where you started, if not worse off. You’re filled with shame and dismay. Another failed attempt. The thought of letting yourself down once more is just too much to bear. You’d rather stay stuck in your unfulfilling ways than try again and fail.
3. The Visionless Victim. You think you want all these things: a bigger business, better relationships, a healthier body, etc. But you don’t have a clear vision of what you really want. It’s not the six-pack abs or a fatter bank account that really matter. It’s the feeling of confidence in your own skin. It’s the security for your family’s future that making money affords. The problem is that you haven’t fully articulated the impact you’re looking for.
4. The Cynical Complainer. You think the world is out to prevent you from having what you want. You’d rather complain about the perceived injustice than do something to change your situation. The problem is that you never see the good that is right in front of you. There’s a good chance that people would be willing to help if you weren’t always so distrustful of human sincerity. You need to set aside your contempt and work together with others to make progress on what you want.
5. The Undeserving Downer. You know what you want, but you don’t believe you deserve it. “Who am I to do all these things?” the voice in your head says. This lack of self-worth undermines all your motivation. The self-doubt can be paralyzing. You’d rather dither around, waiting for permission from others, than muster the self-respect necessary to step toward your goals.
These are a few of the dozens of reasons why you might not take action or make progress. A lot of them stem from unresolved emotional blocks and limiting beliefs. You not only need a clear idea of what you want; you also need to open your heart to the possibility that you can actually get it, and that you actually deserve it.
Closing that gap between what your head says (“You Can’t”) and what your heart believes (“You Can”) requires self-confidence, proper support, and a bit of faith. Putting these ingredients into place is the only way to prevent more excuses from sabotaging your progress.
What If Others Are Blocking My Performance?
Not all performance improvement issues are due to personal blocks. There are very real external factors that can hamper you from showing up as your best self.
Social factors like income, race, and gender systemically privilege some and disadvantage others. Not everyone has equal access to resources or equal amounts of support from friends or colleagues. Politics of an organization (or a family for that matter) can keep you from getting ahead even when you try.
Nonetheless, you must first overcome your internal excuses before you can even begin working through the social factors that might get in your way.
Performing Well Is Living Well
Improving your performance is not some corporate hack to squeeze more productivity out of you. It is about playing the game of life better.
Performance spans all domains of your life, from your relationships to your work, to how you talk to and treat yourself. When you live well, you will automatically perform well. Work on your own personal development. Round your edges, build awareness of your contradictions, and become the authority for your own life. If you don’t do it, no one else will do it for you.
~ Jeff Siegel
I’m Jeff Siegel, a wellness coach and mindfulness teacher, helping people upgrade their habits and improve their health.
Sign-up for the Healthy Habits Newsletter for free bi-monthly wisdom on how to eat, move, and live better.
If you’d like to explore working together, you can schedule a private 20-min consultation call with me. | https://medium.com/swlh/the-3-top-performance-improvement-issues-aka-excuses-that-almost-everyone-faces-86e87c4581e7 | ['Jeffrey Siegel'] | 2020-04-06 19:21:06.808000+00:00 | ['Performance Improvement', 'Self-awareness', 'Self Sabotage', 'Personal Growth', 'Personal Development'] |
Excel Addin Software Frameworks — Extending the Microsoft Office Platform | Although the desktop may be slowly dying as phones, tablets, and other devices encroach, many businesses and consumers remain loyal fans and active users of Microsoft Office and Excel. According to Microsoft, there were 1.2 Billion users of Office desktop and 60 million users of Office 365 in the cloud as of March 2016; the latter amount more than doubled to 135 million active users in April 2018. Excel’s core capabilities include a flexible tabular data representation indexed by rows and columns, a catalog of built-in functions, a charting module, a data import module, and a rich programming model using VBA. For heavy, data intensive commercial and academic users of Excel, there is a need to extends Excel’s features with custom calculations, new data sources, real-time refresh, and embedded GUI’s.
In this series, I will be covering several advanced topics for extending Excel’s core capabilities.
Excel Addin Framework Choices
UDF
Embedded GUI
RTD
Portability
Deployment
Troubleshooting
There are several criteria to consider when deciding which Excel add-in framework to use.
Excel Addin Software Frameworks
If you are looking to build an Excel product suite or family of add-ins, then I strongly encourage you to examine Add-in Express or Excel DNA. Both are fully featured and have broad language support (unlike PyXLL), and both are much easier to use than the native Microsoft VSTO libraries. Both have active user communities with thousands of developers each.
If you intend to support other Office products such as Word or Outlook and are also not comfortable with using open-source software, then Add-in Express is a wise choice from a long-term maintenance and TCO perspective.
If you intend to develop Excel add-ins which use native platform features that are idiosyncratic to a specific version of Office (which I do NOT recommend), then use VSTO.
If you are budget conscious and are developing a PoC or MVP for just Excel, then Excel DNA and NetOffice are a solid starting point that you can build upon.
Enjoy the article? Follow me on Medium and Twitter for more updates.
This story is published in The Startup, Medium’s largest entrepreneurship publication followed by +392,714 people.
How to Render Next.js with NestJS | How to Render Next.js with NestJS
Did I just make Next.js better?
If you’re reading this article, you probably have a little knowledge of these two frameworks. If not, I’d recommend taking a look at them first; they have very good documentation on their websites.
Storytime
So apparently I started a little GitHub repository where I created some web-app boilerplates, and I wanted a server-side rendering solution as well. While I was sinking into the depths of the monorepo magic world🎩, I faced a question regarding Next.js and NestJS. Do I really need two Node servers?🤔 One to render the React application and one for the backend APIs? NO! So here came the idea to just merge the two.
Next.js is amazing!
The guys at Next.js already thought about this: they have a description on their website of how you can use a custom server, so from here my job becomes 100% easier.
But there’s a catch; nothing comes without a price. The price is that we lose the Automatic Static Optimization, but I’m okay with it. In my projects, almost every page has “blocking data requirements”, so it’s not a big drawback in my situation.
Integrating with NestJS
In NestJS we wrap up controllers and services in modules, so in my case I think the best option is to create a module whose only purpose is to serve the UI side, basically to render our React pages.
Let’s call this module the View Module; it will contain the View Controller (route handling) and the View Service (providing the Next.js render).
If I have 6 hours to chop down a tree… | If I have 6 hours to chop down a tree…
…would I spend 4 hours sharpening the axe?
Should I even chop down the tree?
Will there be consequences — for me, for anyone else, or for anything nearby?
Is it the right axe?
Is there a more suitable axe for me, for the type of tree?
Am I the right person to chop down the tree?
I’ve never chopped down a tree before — should I find someone more experienced?
Do I need a brand new axe?
Is it time to look at the latest axes on the market?
Is it a team sport?
Is it a one, two, three, or even a four-person job?
Is it even the right tree to chop down?
Have I understood the instructions correctly?
Would it lead to the right outcome?
Has anyone considered the bigger picture?
Does the axe even need sharpening?
I’ll take a few practice swings to see how much it needs sharpening.
Is there a better way?
Could the tree be replanted elsewhere instead of chopping it down?
So, if you had 6 weeks to improve a product…
…How would you spend 4 weeks? | https://uxdesign.cc/if-i-have-6-hours-to-chop-down-a-tree-51dba58d9d68 | ['Pete Woodhouse'] | 2019-08-12 23:45:25.203000+00:00 | ['Design Thinking', 'Product Design', 'Product Management', 'Productivity', 'UX'] |
Why Hustle Culture Is Frowned Upon | Why Hustle Culture Is Frowned Upon
Isn’t ‘being productive’ good? Well…
Photo by Austin Distel on Unsplash
A few days ago, one of my favorite writers tweeted something that caught my attention. He had just finished speaking in a webinar, and remarked that during the Q&A session, he received many variants of this question: “How can we stay productive during quarantine?”
Afterward, he proceeded to rant, saying things like “Why are we so fixated on productivity?” and “At this rate, we’ll all die from productivity.”
I mean, this guy’s actually an amazing writer and I love his work, but I have to admit that these particular tweets of his are a little overdramatic.
They do have some merit, though. Productivity can be toxic sometimes, and this is more or less a byproduct of that infamous hustle culture. | https://medium.com/assemblage/why-hustle-culture-is-frowned-upon-45f1b1767c48 | ['Aushaf Widisto'] | 2020-11-13 12:39:18.125000+00:00 | ['Productivity', 'Self Improvement', 'Hustle', 'Advice', 'Work'] |
Book Review: “Philip and Alexander” | Book Review: “Philip and Alexander”
The new book from acclaimed historian Adrian Goldsworthy examines antiquity’s most famous father and son in a new light.
As everyone knows, I’m always on the lookout for a new book about antiquity, in particular anything having to do with Alexander the Great. This particular figure has always sort of cast a spell on me, perhaps because he’s become something of a queer icon.
So, when I saw that NetGalley had a copy of noted historian Adrian Goldsworthy’s new dual biography of Alexander the Great and his father Philip II, I knew that I had to get a copy for review.
As its title implies, this is a book that is very much about both Alexander and Philip. Indeed, the book’s most noteworthy contribution to the already vast library of books about Alexander the Great is its dual emphasis on the father and the son. Poor Philip always seems to get short shrift in the history books, the curse of having a son who would become one of the most famous men from antiquity. As Goldsworthy documents, however, were it not for Philip and his efforts to consolidate his power in both Macedonia and Greece, it’s very unlikely that Alexander would have been able to accomplish even a fraction of what he ultimately did.
In fact, one can’t help but admire Philip for his tenacity, his political skill, and his military abilities. Indeed, it’s something of a miracle that he managed to survive the cutthroat world of the Argead court for as long as he did, particularly since so many of his predecessors were cut down by those closest to them. Having managed to ascend to the throne, Philip set about forging Macedonia into a formidable force, and the book documents the way that he managed to first defeat the various tribes that sought to exploit Macedonia, before moving on to Greece. Philip’s genius was that he was able to take advantage of the Greek city-states’ chronic unwillingness to band together. By the time they realized just how much of a threat this “barbarian” was, it was far too late.
Philip’s great misfortune is that his perception in the present has been shaped by Hollywood’s representation of him. Two major films have so far been made about the life of Alexander, one in the 1950s and one in the early 2000s. In both cases, Philip comes across as a debauched sot unable to carry a sword or control what’s going on in his own home. By the time he’s cut down during a ceremony, it almost comes as a relief. Fortunately, Goldsworthy’s book goes a long way toward correcting this perception, allowing us to see the man in all of his complexity.
One of the book’s other great strengths is that it does give a pretty thorough examination of the broader world into which Philip was born, particularly Athens. Of all the city-states, this most famous one had arguably the most vexed relationship with the Macedonian king, due in no small part to the inveterate hatred borne toward Philip by the famed and influential orator Demosthenes. We also get a glimpse into the workings of Macedonian court life which, Goldsworthy points out, centered around the king and his nobles. Understanding these contexts is vital for grasping the type of ruler that Philip went on to be. Personally, I would have liked to see more of these parts of the book, particularly since such context is so important to understanding Philip’s and Alexander’s role on the geopolitical stage.
When it comes to Alexander, however, the book falls a bit flat. Goldsworthy seems to have an almost pathological avoidance of anything having to do with Alexander’s personal life, and so those looking for illumination about the great man’s relationships with his mother Olympias, his friend and lover Hephaistion, or his lover and confidant Bagoas the eunuch are certain to be disappointed. Unlike almost every other historian I’ve read on the subject, Goldsworthy refuses to accept that Alexander had deeply physical relationships with men, and even at the end he can only bring himself to refer to Hephaistion as his dearest friend (he has a similarly prudish attitude toward Achilles and Patroclus). While his rationale for doing so is sound as far as it goes — we simply can’t know for sure whether the relationship was sexual — it does at times seem as if Goldsworthy is letting his own prejudices regarding same-sex eroticism color his understanding of Alexander.
What the book lacks in exploration of Alexander’s personal life, it more than makes up for in discussions of battles, soldiers, and tactics. Quite frankly, I find those parts of the book the least enjoyable, mostly because I’m just not that much of a fan of military history. However, for those who want a detailed analysis of Alexander’s military exploits, particularly once he crosses into Persia and begins his conquest of that mighty, sprawling empire, this book more than fits the bill.
Throughout the book, Goldsworthy is very open about the fact that there is much that we don’t know about Alexander and his life. The sources are often written many years after his death. This is, of course, the danger in writing a narrative history about people who lived several millennia ago, but there were times when I began to wish that Goldsworthy would just move on from the constant uncertainty and tell the story that he wants to tell.
That being said, I found the concluding chapters to be the most compelling, in part because they make a convincing case for the need to continue exploring the lives of these two men. For better and worse, Philip and Alexander fundamentally reshaped the world that they found. Alexander in particular would cast a huge shadow over the rest of antiquity, and prominent men like Julius Caesar and Augustus would continue to yearn to achieve the same level of greatness (and, in Goldsworthy’s estimation, fail).
For the most part, I enjoyed this book. It’s a useful account of the lives of two of the ancient world’s most important rulers, men whose fame continues to shine down through the millennia. | https://medium.com/cliophilia/book-review-philip-and-alexander-6d0503ed20e9 | ['Dr. Thomas J. West Iii'] | 2020-09-04 16:09:45.669000+00:00 | ['Books', 'Biography', 'History', 'Greece', 'Classics'] |
I Deleted All My Dating Apps A Month Ago | Gone feels really good.
Photo by Alisa Anton on Unsplash
This year feels really different. I’ve changed my career, my self care, my weekly exposure to public transit contaminants—everything. It was as if after my underwear drawer was finished, my soul needed something else to Kondo.
I have been on dating apps for 11 years, consistently. At any given time, there were between two and five dating apps downloaded onto my phone, and I probably checked one or two of them with regularity. For more than a decade I swiped through faces, sent messages, and engaged in all the typical activities one participates in when you date a piece of glass and plastic. If you’re curious, that rounds out to about 4,000 days worth of dating apps.
In my mind, in this house, we refer to dating apps as the Bucket Of Nothing. A place where time, effort, and hope go in and spit out jack shit on the other side. Regardless of the apps I’ve tried, the methods I’ve tried, or the energy I’ve given dating apps for the whole of 11 years, I have never once met a partner as a result.
I used to tell myself, “it’s not like the apps are preventing me from meeting someone, so why not use them?” It was just a little swipe-swipe on the couch every evening or every morning from bed before my feet had touched the withering hardwood. And in truth maybe the time commitment wasn’t a big deal. Maybe it was as distracting and mechanical as the way I now open Candy Crush instead.
I also used to tell myself that dating apps were a part of how I earned my living, and one can’t really write about single life without engaging in a major part of its modern version, can she? The fodder the dating apps used to give me, lord—the fodder! Who can forget the three bald men, or the courteous canceller, or mustachio? The Bucket Of Nothing is, at the same time, a well of inspiration.
But this year is different. There’s a lot more truth to 2019, I’m more inclined to take care of myself than I was before. The truth about dating apps is that they actually were taking up space in me, they were subconsciously reinforcing to me that nobody wants me, and in general they were born from the flames of hell and they have to go.
At the time of the Great Cull I believe there were four. Tinder, Bumble, Hinge, and Raya—which I wasn’t even on by the way, I was never deemed worthy enough to upgrade to member from eternal waitlist peasant—all promising to be better and different and all containing the same people and same overall mechanics and same cesspool world where all that ever happens is nothing much. I match, I message, and I never hear from men, ever. Nothing. Over and over again, for years.
So they went. One evening (or was it morning, time blends together when you stop accepting bullshit) I both deleted my accounts and uninstalled all of my dating apps. I don’t know where I summoned the will, I don’t know how that moment itself was the inspired one, but enough. I had tried enough, I had received nothing enough. More than a decade of let-downs trying to tell me I was the problem. I’ll never let binary code speak to me like that again.
It took mere minutes, and when it was done, there was a small voice inside of me that had been trained to think, “wait—this is the only way you’ll ever meet someone.” But there was also a louder, smarter voice reminding me how much room I just made in my life for so much more to come in, only one of those things potentially being a partner.
I used to think of the Bucket Of Nothing as a void, of my singleness as a void, of everything I didn’t have as a terrifying chasm making me feel like I too was little more than empty space.
It is a new year and there is no longer room in my consciousness, my subconsciousness, or my iPhone memory for dating apps. If it doesn’t serve me, if it doesn’t light me up, if I approach it with cynical dread and over a decade’s worth of disappointment, goodbye—and don’t ever come back.
It’s been a month and I can palpably feel that the spaces dating apps leave behind are not voids, and they’re not empty. For the first time in a long time, maybe ever, I have clear space that’s free of nothing, and full of potential. | https://shanisilver.medium.com/i-deleted-all-my-dating-apps-a-month-ago-982d24418cc4 | ['Shani Silver'] | 2019-03-21 11:30:13.855000+00:00 | ['Writing', 'Life', 'Culture', 'Dating', 'Humor'] |
The Syrian researcher determined to invest in the future of a displaced generation | The Syrian researcher determined to invest in the future of a displaced generation
Millions of Syrian children are being educated in Jordan, Lebanon and Turkey. In 2014 Hiba Salem left her Damascus home to gain the qualifications needed to contribute to a generation facing formidable challenges within these host communities.
Hiba Salem in Cambridge (Nick Saffell)
My head is full of children’s voices. I’m writing up my PhD and beside me is a pile of children’s diaries and a heap of interview transcripts. As I tap away, I see the faces of the 80 boys and girls I spent three months with in 2017. These young people made a lasting impression and I feel a deep responsibility to them. They trusted me and I don’t want to let them down.
These children are refugees from my home country Syria. They live in Jordan. When I interviewed them they were aged 13 to 16 and were students at four different schools. Listening to their thoughts and opinions about their lives and futures was the fieldwork for my PhD in the Faculty of Education.
I was worried that teenagers wouldn’t want to talk. The opposite was the case. They really wanted to have their voices heard, and they often had many questions they wanted to ask me. What was it like in England, did it rain all the time, what did people eat, and what was a PhD? They need people to care about them and help them meet their potential.
Hiba Salem introduces her diary project to Syrian teenagers
Rain, I was able to tell them, is part of life in the UK. When I arrived in Cambridge nearly four years ago I came straight from boiling hot Damascus to a wet, grey city. It took me a couple of weeks to adjust and begin to understand how Cambridge and its collegiate system works. I’d lived in America for seven years but when I went to Sainsbury’s for the first time I couldn’t understand what the cashier was saying.
I was brought up in Syria. But I’ve been much more fortunate than the children whose experiences I research. When we moved as a family it wasn’t to flee war but for my dad’s job. I was nine when we went to live in Washington in the USA. My early life in Syria had been fun. Playing with friends and eating delicious Arabic ice cream with pistachios sprinkled on top.
That idyllic Damascus childhood took place before the present troubles. Even when the initial uprising took place in 2011 we had no idea about what was to come. Syrian culture is all about family. Syrian families are always visiting each other and eating specialities like the musakhan, shawerma and fatteh dishes I’ve been teaching myself to make in Cambridge.
Diaries devised by Hiba Salem to capture thoughts and opinions
I started at an American school without a word of English. I sat in the class for a few days completely silent. The first word I understood was ‘draw’. I couldn’t speak but I was encouraged to draw. A couple of years later, I wrote a set of poems for my English class. At first my teacher refused to believe that my language skills could have developed into creative literacy — but they had.
Syria was always home. We went back there every summer and when I was 16 we moved back to Damascus for good. I went to a private school and then to a private university where I studied computer science, graduating second in my class. I got a good job as a database programmer.
The uprising became a civil war. As the conflict took hold, the war became more and more complicated and entrenched. We’d wake up in the morning to the sound of bombs falling. The commute to work became terrifying.
I decided to leave my job. I’d already been thinking of changing career. I was interested in education and psychology and I wanted to do something people-centred. I applied to UNICEF for a role in their Child Protection programme and didn’t get in. Although I’d worked as a volunteer with children, I didn’t have enough experience.
To work in education, I needed a qualification. I began to explore Master’s courses and wrote a research proposal. Several institutions offered me a place for postgraduate study. Among them was Cambridge which became my dream. I was interviewed on Skype by two professors in the Education Faculty who were warmly encouraging.
Funding my research was the next big issue. My Master’s course was funded through the Cambridge Trust in partnership with the Said Foundation. My doctorate is funded by three separate bodies, including the Queen Rania Foundation.
Many refugee children have big ambitions — just like anyone else. They want to be doctors, footballers, radio broadcasters. They have a strong desire to be valued by the wider world. They’ve been traumatised by terrible events. Their fathers and brothers have been killed. They’ve seen people being blown up.
Text from a diary: My motto in life is to be free, proud, with my head held high forever.
We all need people who care about us. When I finished my fieldwork in Jordan, I had letters from children saying that I was like a big sister to them. Many refugee children drop out of secondary school — boys to support their families by working and girls to help their mothers at home or marry early.
In host communities, refugee children aren’t always welcome. In Jordan schools operate a double shift system. Jordanian children go to school in the morning and Syrian children, together with another set of teachers, attend the same school in the afternoon. A schism results. Attempts to integrate the two school groups show positive outcomes but are few and far between.
Segregation creates profound problems. I look particularly at the negative impacts of segregation on social cohesion between communities within refugee-hosting nations. My work demonstrates the importance of speaking to students and including their voices in research and planning.
Children are not naïve. Refugee children are well aware of the hurdles they face. Even if they are bright, and doing as much studying as they can, they know that without money they won’t be able to progress into higher education. We need to ensure they get the opportunities they deserve.
Envelopes containing students’ wishes — which include learning to swim, becoming a pharmacist and getting rid of worry
Resources are stretched. Lebanon, Jordan and Turkey have taken in more than five million Syrian refugees. Communities are under immense pressure. The influx of refugees has drained national resources and challenged security. Rising unemployment rates, reduced schooling spaces, and inflated housing costs are key stress points. Syrian refugees live within restrictive conditions, such as laws not permitting them to work.
My research is qualitative rather than quantitative. The questions that form the basis of my work are semi-structured, and I used a diary format to allow children to express their ideas freely. What I’ve learnt will help me and others to consider how policies can respond to the daily and contextualised challenges that refugee students experience.
Damascus remains close to my heart. My parents chose to remain there. We’re lucky to live in a relatively safe neighbourhood and they don’t want to leave — they’re determined to stand defiant. When I visit them, I fly to Lebanon and take cabs through a series of checkpoints to the Syrian capital.
I’m not sure what my next step will be. But what I’m certain about is that I want to remain in educational research and make a contribution to the field of forced migration.
This profile is part of our This Cambridge Life series. | https://medium.com/this-cambridge-life/the-syrian-researcher-determined-to-invest-in-the-future-of-a-displaced-generation-d508850c03a5 | ['University Of Cambridge'] | 2018-07-09 09:59:34.751000+00:00 | ['Education', 'Migration', 'Psychology', 'Refugees', 'Syria'] |
What is Predictive Analytics? | Source: Unsplash
Predictive analytics uses historical data to predict future events. Typically, historical data is used to build a mathematical model that captures important trends. That predictive model is then used on current data to predict what will happen next, or to suggest actions to take for optimal outcomes. Predictive analytics has received a lot of attention in recent years due to advances in supporting technology, particularly in the areas of big data and machine learning.
Why It Matters?
Big Data
Predictive analytics is often discussed in the context of big data. Engineering data, for example, comes from sensors, instruments, and connected systems out in the world. Business system data at a company might include transaction data, sales results, customer complaints, and marketing information. Increasingly, businesses make data-driven decisions based on this valuable trove of information.
Increased Competition
With increased competition, businesses seek an edge in bringing products and services to crowded markets. Data-driven predictive models can help companies solve long-standing problems in new ways. Companies use predictive analytics to create more accurate forecasts, such as forecasting the demand for electricity on the electrical grid. These forecasts enable resource planning (for example, scheduling of various power plants), to be done more effectively.
Cutting-Edge Technologies
To extract value from big data, businesses apply algorithms to large data sets using tools such as Hadoop and Spark. The data sources might consist of transactional databases, equipment log files, images, video, audio, sensor, or other types of data. Innovation often comes from combining data from several sources.
With all this data, tools are necessary to extract insights and trends. Machine learning techniques are used to find patterns in data and to build models that predict future outcomes. A variety of machine learning algorithms are available, including linear and nonlinear regression, neural networks, support vector machines, decision trees, and other algorithms.
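As a toy illustration of finding patterns in data, here is a 1-nearest-neighbour classifier written in plain Python. It is one of the simplest machine learning algorithms: a new point is given the label of the closest point seen in training. The sensor readings and labels below are invented for the example.

```python
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query` (Euclidean)."""
    best_label, best_dist = None, math.inf
    for (x, y), label in train:
        d = math.hypot(x - query[0], y - query[1])
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label

# Hypothetical two-feature sensor readings labelled "ok" / "faulty"
train = [((1.0, 1.1), "ok"), ((0.9, 1.0), "ok"),
         ((3.0, 3.2), "faulty"), ((3.1, 2.9), "faulty")]

print(nearest_neighbor(train, (1.05, 1.0)))  # → ok
print(nearest_neighbor(train, (2.9, 3.0)))   # → faulty
```

Real projects would reach for a library such as scikit-learn, but the underlying idea of every algorithm listed above is the same: generalise from labelled historical examples to new, unseen data.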
Predictive Analytics in Action
Predictive analytics helps teams in industries as diverse as finance, healthcare, pharmaceuticals, automotive, aerospace, and manufacturing.
Automotive — Breaking new ground with autonomous vehicles: Companies developing driver assistance technology and new autonomous vehicles use predictive analytics to analyze sensor data from connected vehicles and to build driver assistance algorithms.
Aerospace — Monitoring aircraft engine health: To improve aircraft up-time and reduce maintenance costs, an engine manufacturer created a real-time analytics application to predict subsystem performance for oil, fuel, liftoff, mechanical health, and controls.
Energy Production — Forecasting electricity price and demand: Sophisticated forecasting apps use models that monitor plant availability, historical trends, seasonality, and weather.
Financial Services — Developing credit risk models: Financial institutions use machine learning techniques and quantitative tools to predict credit risk.
Industrial Automation and Machinery — Predicting machine failures: A plastic and thin film producer saves 50,000 Euros monthly using a health monitoring and predictive maintenance application that reduces downtime and minimizes waste.
Medical Devices — Using pattern-detection algorithms to spot asthma and COPD: An asthma management device records and analyzes patients’ breathing sounds and provides instant feedback via a smart phone app to help patients manage asthma and COPD.
How It Works
Predictive analytics is the process of using data analytics to make predictions based on data. This process uses data along with analysis, statistics, and machine learning techniques to create a predictive model for forecasting future events.
The term “predictive analytics’’ describes the application of a statistical or machine learning technique to create a quantitative prediction about the future. Frequently, supervised machine learning techniques are used to predict a future value (How long can this machine run before requiring maintenance?) or to estimate a probability (How likely is this customer to default on a loan?).
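To make the probability case concrete, here is a toy sketch in plain Python: a one-feature logistic regression fitted with gradient descent to a handful of made-up loan records. The feature (a debt-to-income ratio), the outcomes, and the training settings are all illustrative assumptions, not real data.

```python
import math

def train_logistic(data, epochs=2000, lr=0.1):
    """Fit p(default) = sigmoid(w * x + b) on (feature, outcome) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x   # gradient step on the log-loss
            b -= lr * (p - y)
    return w, b

# Made-up history: debt-to-income ratio -> defaulted (1) or repaid (0)
history = [(0.1, 0), (0.2, 0), (0.3, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
w, b = train_logistic(history)

def default_probability(ratio):
    return 1 / (1 + math.exp(-(w * ratio + b)))

print(default_probability(0.15))  # low-risk borrower
print(default_probability(0.85))  # high-risk borrower
```

The same shape, a model trained on historical outcomes and then queried about new cases, underlies production credit-risk systems, just with many more features and more capable algorithms.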
Predictive analytics starts with a business goal: to use data to reduce waste, save time, or cut costs. The process harnesses heterogeneous, often massive, data sets into models that can generate clear, actionable outcomes to support achieving that goal, such as less material waste, less stocked inventory, and manufactured product that meets specifications.
Predictive Analytics Workflow
We are all familiar with predictive models for weather forecasting. A vital industry application of predictive models relates to energy load forecasting to predict energy demand. In this case, energy producers, grid operators, and traders need accurate forecasts of energy load to make decisions for managing loads in the electric grid. Vast amounts of data are available, and using predictive analytics, grid operators can turn this information into actionable insights.
Step-By-Step
1. Import data from varied sources, such as web archives, databases, and spreadsheets. Data sources include energy load data in a CSV file and national weather data showing temperature and dew point.
2. Clean the data by removing outliers and combining data sources.
Identify data spikes, missing data, or anomalous points to remove from the data. Then aggregate different data sources together — in this case, creating a single table including energy load, temperature, and dew point.
3. Develop an accurate predictive model based on the aggregated data using statistics, curve fitting tools, or machine learning.
4. Energy forecasting is a complex process with many variables, so you might choose to use neural networks to build and train a predictive model. Iterate through your training data set to try different approaches. When the training is complete, you can try the model against new data to see how well it performs.
5. Integrate the model into a load forecasting system in a production environment. Once you find a model that accurately forecasts the load, you can move it into your production system, making the analytics available to software programs or devices, including web apps, servers, or mobile devices.
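The workflow above can be sketched in miniature. The following plain-Python sketch runs steps 2 through 4 on a handful of made-up temperature/load readings; the values, the outlier thresholds, and the simple least-squares line standing in for the forecasting model are all illustrative assumptions, not part of the original workflow:

```python
from statistics import mean

# Hypothetical hourly records imported in step 1: (temperature in C, load in MW)
records = [(10, 320), (15, 340), (20, 380), (25, 450), (30, 540), (200, 99999)]

# Step 2: clean the data by dropping anomalous spikes
cleaned = [(t, l) for (t, l) in records if -30 <= t <= 50 and l < 10000]

# Step 3: fit a simple least-squares line, load = a * temp + b,
# as a stand-in for the curve-fitting / machine learning step
xs = [t for t, _ in cleaned]
ys = [l for _, l in cleaned]
xbar, ybar = mean(xs), mean(ys)
a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b = ybar - a * xbar

# Step 4: try the model against new data
def predict_load(temp):
    return a * temp + b

print(round(predict_load(22), 1))  # -> 428.0
```

A real energy forecast would of course use far richer features (dew point, seasonality, holidays) and a neural network or similar model, but the shape of the pipeline is the same.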
Five Tips for Surviving and Thriving During NaNoWriMo
and why it might be worth starting if you haven’t yet
Photo by Windows on Unsplash
I listened to a podcast recently where the host interviewed Carey Lohrenz, the first female F-14 Tomcat fighter pilot.¹ Lohrenz’s job was literally life-or-death, especially when it came to landing a high-speed fighter jet at night on a boat’s airstrip. She described it as pitch black with minimal visibility. The boat was likely pitching with the waves. There were often storms. The weather and darkness and lack of sensory input made it disorienting.
She consistently lived with the stress of a job where making a mistake or miscommunicating could kill her or her team members. As she spoke about how the fear showed itself in tangible physical and mental ways, she said something that has stuck with me for weeks.
She said, “Ironically, the only thing you can do to ever slightly mitigate that tension, or that anxiety, is to do it again.”
I understand that writing a novel is not in any way a life or death situation. Most of us write because we love it, and we hope to be successful with it, but we would probably keep doing it even if we never are. But the only way we’ll know if we’ll make it, the only way we’ll get better at doing it, is just to do it, and then do it again.
National Novel Writing Month, or NaNoWriMo², is a perfect way for authors to throw themselves into the writing lifestyle. Because it has specific parameters (write 50,000 words) and goals (write 1,667 words per day) within a finite amount of time (November 1–30), with a tangible reward at the end, it’s an excellent tool for would-be authors. It takes courage to take that first step, and for many people, NaNoWriMo can be that first step.
There is a lot of information about NaNoWriMo and its history on the Internet, so I won’t spend time on that here. Even though I am in no way affiliated with the program, I am a huge advocate of NaNoWriMo: when the focus is quantity over quality (more on that later), perfectionism must necessarily be pushed aside. As writers, we’re more able to focus on simply putting words on a page, and the month becomes an excellent motivator to finish an idea you’ve been putting off, start a new project, or simply try your hand, in a low-stakes way, at putting your creative work into the world.
Roman educator Marcus Fabius Quintilianus said, “Write quickly and you will never write well; write well, and you will soon write quickly.”³
Maybe we’re more efficient than the Romans were. Maybe having laptops instead of quill and ink or stone tablets makes a difference, but I believe the exact opposite. Writing quickly will help you write better because you’ll write more often, you’ll produce more content, and you’ll get better as you practice. So with that said, here are a few ways to start NaNoWriMo if you haven’t yet, or thrive during the month if you have.
Schedule your time
NaNoWriMo is a stressful month (throw in an election, and you might just be maxing out your anxiety levels) that will work out so much better if you’re disciplined. Put it on your calendar and know when it’s coming each day.
A lot of times we don’t prioritize writing. We feel like it’s not as important as our job or our family or cleaning the house or any number of other things, so we leave it to chance, hoping to squeeze it in. But if we schedule the time, it creates the mindset that it’s a priority and makes it easier to be disciplined.
New York Times bestselling author and prolific writer, Joanna Penn, says: “Schedule your writing time. Seriously, this could be a transformational step if you’ve not done this before. It’s not complicated. Get out your calendar or your smartphone app or however you schedule your time, and put in slots for writing.
“Then show up for that time to write just as you would show up for a business meeting or a gym class or anything else that is time-sensitive. Stop making your writing slot optional or showing up late as if it doesn’t matter.”⁴
Knowing when to show up each day will reduce stress levels and make sure your writing goals stay on target.
Plan ahead
NaNoWriMo is all about fast writing and not necessarily good writing, which is often expected when you’re writing quickly. There’s a famous Anne Lamott quote about not letting perfectionism get in the way of your shitty first draft. However, Lisa Cron makes the brilliant distinction that “there’s a massive difference between the shitty first draft on an actual story and a shitty first draft that randomly romps all over the damn place.”⁵
I’m a big advocate of at least knowing the main guideposts for your plot, so you’re not wasting time creating a nonsensical mess that will require massive rewrites, and then leaving the in-between moments open for creativity and discovering your characters as you go along.
I say if you:
1. Have a basic premise
2. Know what your main character wants/needs/is motivated by
3. Have guideposts for the beginning, the middle, and the end
Then you’re ready to roll!
One plotting method that I personally love is designed by Dan Harmon, creator of Community and Rick and Morty, where he uses a simple circle chart to map out the arc for each character, a circle for each episode of his shows, and a circle for each overall series arc. It’s a very clever and simple way to visualize the direction of a story, and I’ve found the method helpful in my own writing.
If you refuse to plot anything ahead of time, at least take a minute and write down a sentence or two about where you’re going the next day. You will save a lot of time and energy that you might otherwise spend staring at a blank screen and wondering where to go with your story next.
Put yourself in the zone
It’s a lot of fun to look at famous authors and their writing rituals. Stephen King grabs his tea and vitamins and sits in the same place every day at the same time; Maya Angelou would rent a hotel room and only use it for writing; Jack Kerouac once had a habit of lighting a candle to write by, then symbolically blowing it out when he finished. A lot of us, maybe even without realizing it, have rituals or spaces or specific writing quirks we do to put ourselves in the zone.
If you don’t have one yet, consider writing to a specific playlist that you only use when writing. Kick off your writing by reading a book with a tone or voice similar to what you’re going for. Write in the same spot or at the same time every day.
When you do the same things over and over, it rewires your brain to be triggered to do that thing subconsciously, instead of having to fight yourself all the time. Let your brain know it’s time to write by finding a consistent habit or place or sound to tie it to, then it’ll be much easier to get in the groove.
I personally love power poses. If you haven’t watched Amy Cuddy’s TED Talk⁶ about body language, do that. There are also some great yoga poses and breathing techniques that are incredibly effective at getting my energy levels high, and now those have become the triggers that automatically put me in the zone to sit down and be productive.
Write in bursts
There are many books out there about writing fast. Two of my favorites are The 8-minute Writing Habit by Monica Leonelle and 5,000 Words per Hour by Chris Fox. There’s also The Pomodoro Technique by Francesco Cirillo, which is about working more effectively in short, timed bursts. But the basic premise of all of them is the same: set a timer, don’t go back to correct yourself, and write as much as you can in that short amount of time.
The reason I think this is so effective is that our brain knows there’s an end in sight. We can do anything for 8–20 minutes, and it’s even fun to compete against ourselves to see if we can beat our numbers each time. This is the method I have used for drafting hundreds of pages. Write for 20 minutes, as fast and hard as you can, then take a break, and do another 20 minutes later in the day. You can also find a writing buddy and set the timer together and compete to see who finishes the most words. Writing sprints are fun and easy and keep you from getting stuck on perfection.
Most of us could complete our daily 1,667 NaNoWriMo words in three or four 20-minute blocks, so really, that’s only taking up an hour or so of our day.
Take the decision making out of it
This goes along with scheduling, planning, and putting yourself in the zone, but making the decisions about when to write and what to write and where to write, etc., will make a huge difference in you not burning out.
There are some really interesting stories about the wearing down of prisoners of war, and how the body becomes physically incapable of fighting back or making decisions when it is under too much stress or pressure.
Even outside of being a prisoner, some of the most successful people in the world consciously avoid having to make stressful decisions simply by making small choices ahead of time. For example, Mark Zuckerberg, Barack Obama, and formerly Steve Jobs wear basically the same outfits daily so they don’t have to waste mental energy making that choice every morning. Arianna Huffington doesn’t take electronics into her bedroom so she’s not tempted to waste time or play around instead of resting. Mark Wahlberg gets up at 2:30 a.m. to keep up with his workout schedule and everything he wants to do professionally.
None of these are huge decisions, but they all require planning and dedication in order to make the most out of each day and each opportunity for success. They are non-negotiable decisions that create a ripple effect for their larger achievements.
While writing in general, and during NaNoWriMo in particular, you’ll often already be mentally tired and stressed. Don’t add to that by forcing yourself to decide every day whether you’re going to write, how much you’re going to write, or even what you’re going to write. Give yourself as much chance for success as possible by deciding it once — and then sticking to it.
A Final Word
As previously mentioned, in the age-old debate over quantity versus quality, NaNoWriMo is all about quantity. And that’s okay because it doesn’t ask you to keep that pace forever and it doesn’t promise that your book will be 50,000 words of perfection. “Winning” NaNoWriMo does not mean your book is ready for publication. It’s after that point that you want to shape your rough draft into a quality novel. Maybe your December can be all about editing. Maybe the 50,000 words set by NaNoWriMo only take you halfway to the finish line of your book. Either way, November 30th is not the end of that book, whether you crossed the NaNoWriMo finish line or not. It’s simply an excellent first step in the right direction.
Just like landing a fighter jet in the dark of night with bad weather — though with admittedly much lower stakes — we can’t control every part of our publication journey, but we can face our fears and take that first step toward accomplishing our dreams.
Productivity expert James Clear states: “Maybe there are people who can achieve incredible success overnight. I don’t know any of them, and I’m certainly not one of them… the only way I made progress — the only choice I had — was to start small.”⁷ And NaNoWriMo is the perfect opportunity to get started.
A Dedicated Online Booking / Web Registration System for Hospitals and Clinics!
A free cross-platform online booking and scheduling system that supports both the web and mobile devices
Oh, Instagram, the source of inspiration for countless millennials, a visual gem and the perfect “look at how cool I…
Does Word Count Really Matter Or Are Other Writing Trends More Important For Success?
For the most part, size really doesn’t matter
Since Google’s John Mueller has clarified that word count isn’t a ranking factor, the best rule to follow about story length is to write until you’re done, no more, no less. Don’t aim for a particular number, or when writing on Medium, a particular number of minutes. Just make sure you cover your topic fully, whether that takes 500 words or 5,000.
As writers, we sometimes get a bit obsessed with rules. You’ll hear people say things like “Write what you know,” or “Show don’t tell” as if these things have been passed down by an oracle. Yet rules change and no rule applies 100 percent of the time.
One of the other rules we hear, especially now, is that length is crucial for articles to be successful. We see this emphasis even on Medium with the shift from focusing on fans to focusing on read times for calculating earnings. Common questions you’ll see in content related forums are things like, “How short is too short?” and “How many words minimum should an article be?”
In SEO-related stories, you’ll also see discussion of content length, and there’s a myth that there is an absolute word count that Google considers its “sweet spot,” this being about 2,500 words per post. Regardless, this number has been bandied around and many writers shoot for it as a minimum length for their stories.
How Much Does Word Count Matter in 2020?
Many writers continue to believe that an article’s performance is largely determined by word count. To determine the degree to which this is the case, internet marketing guru Neil Patel examined the ideal word count for different industries and determined it could be anywhere from 300 to 2700 words depending on the type of post.
Yikes. So what does that mean for those of us trying to produce content that will gain a good following?
After reading Patel’s comment above, it may feel like there is no one answer for the ideal length for a story. That’s because there isn’t. There is no magic word count to try to reach when it comes to the length of your posts. Yet, at the same time, longer blog posts do perform better than short ones in terms of search results and Google rankings. In fact, there are correlation studies that show that there is a positive association between page length and Google rank. So, what’s the deal?
Senior Webmaster Trends Analyst for Google, John Mueller, says not to bother trying to analyze search results to determine what word count Google prefers for different areas and types of writing. He has gone so far as to debunk the myth that word count matters in 2020, saying unequivocally that word count is not a ranking factor for Google.
Yet, he didn’t say that word count doesn’t affect other ranking factors. The higher the word count, the greater the chance to comply with the reader’s intent and the larger the opportunity to generate backlinks, both of which are ranking factors. Longer posts also are more likely to be perceived as more solid because of the breadth and depth of information included about a topic. This helps to establish authority, something readers look for when searching for information.
The Factors That Help Your Content Succeed
While Mueller has said that just trying to reach a certain length in order to match the word count of content found on top ranking sites won’t help your content’s performance, he has also said that there is an overlooked nuance to word count. This has to do with relevance.
Lengthy posts that are written in an attempt to reach some arbitrary word count tend to drift off topic. Sometimes this is due to trying to be comprehensive by including material other articles don’t or simply trying to reach a certain word count that seems optimal. Whatever the reason, the content becomes about something other than the topic intended. This leads people to stop reading, as the article no longer fulfills their needs or the expectations they had when they chose the article.
In regards to this, Mueller says it’s important to stay relevant and to understand what readers want when reading an article on the topic you are writing about and what they hope to gain based on your title. If you are going to try to analyze search results to help you write an article that performs better then focus on what readers mean when they type a search term. Google refers to this as need beneath or the latent question. These phrases refer to the hidden meaning of the search term a reader types.
For example, if you type in the phrase web traffic the results will be largely stories about ways to generate traffic to a site. From Google’s point of view, when you type that phrase, you might as well be asking, “How do I increase traffic to my website?”
In other words, if you want to write a story on the topic of web traffic, writing about how it’s defined is not likely to be of interest to many people. Discussing the three top ways to gain the most traffic to posts about psychology would. Mueller says this is the kind of thing that is more important than overall word count.
The goal is to write with an eye towards answering the reader’s question or intent. If your topic calls for more information you should provide it, writing a longer post. If it doesn’t call for a lot of information, then provide just what is called for and don’t add fluff in an effort to lengthen it to a particular number of words.
Take Away
The length of a post in and of itself doesn’t impact its effectiveness. That said, it is true overall that a story that is too short or too long can affect the number of readers it attracts and how it ranks. Yet other factors are just as important as word count, if not more so.
Remember, if your post isn’t useful to your reader in some way, then your readers will likely turn to other articles for what they are looking for. With Google’s new focus on what is communicated in a story over length, making sure your article is substantive is also important for ranking.
Don’t shortchange your topic but also don’t use more words than is necessary to fully cover the subject. If you say what needs to be said, no more, no less, you’ll find you don’t have to focus on word count. Your story will automatically be the right length, no matter what the word count is. And that won’t just please search engines. It will please your readers.
Review: PolyNet — 2nd Runner Up in ILSVRC 2016 (Image Classification)
By Using PolyInception Module, Better Than Inception-ResNet-v2
In this story, PolyNet, by CUHK and SenseTime, is reviewed. A building block called PolyInception module is introduced. A Very Deep PolyNet is composed based on the module. Compared to Inception-ResNet-v2, PolyNet reduces the Top-5 validation error on single crops from 4.9% to 4.25%, and that on multi-crops from 3.7% to 3.45%.
PolyNet, By using PolyInception module, better than Inception-ResNet-v2
As a result, PolyNet (with the team name CU-DeepLink) obtains 2nd Runner Up in ILSVRC 2016 classification task as below. And it is published as 2017 CVPR paper. (Sik-Ho Tsang @ Medium)
Compared with ResNet (the winner in ILSVRC 2015), which got 3.57%, PolyNet got 3.04% as shown below:
ILSVRC 2016 Classification Ranking (Team Name: CU-DeepLink, Model Name: PolyNet) http://image-net.org/challenges/LSVRC/2016/results#loc
This relative improvement is about 14%, which is not trivial!!!
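As a taste of what the PolyInception idea looks like, here is an abstract sketch. In the paper, a poly-2 unit augments the usual residual form I + F with a second-order term, giving I + F + F∘F (the two F terms sharing weights), while a 2-way unit combines two first-order paths, I + F + G. The toy numeric functions below stand in for real Inception blocks and are purely illustrative, not the paper's implementation:

```python
def residual_unit(x, F):
    # Standard first-order residual unit: I + F
    return x + F(x)

def poly2_unit(x, F):
    # poly-2 PolyInception: I + F + F∘F, reusing the same F (shared weights)
    return x + F(x) + F(F(x))

def two_way_unit(x, F, G):
    # 2-way PolyInception: I + F + G (two first-order paths in parallel)
    return x + F(x) + G(x)

# Toy stand-ins for Inception blocks
F = lambda x: 0.5 * x
G = lambda x: 0.25 * x

print(residual_unit(8.0, F))    # 8 + 4 = 12.0
print(poly2_unit(8.0, F))       # 8 + 4 + 2 = 14.0
print(two_way_unit(8.0, F, G))  # 8 + 4 + 2 = 14.0
```

The point of the sketch is structural: a poly-2 unit deepens the path through F without adding new parameters, which is what lets PolyNet trade diversity of paths against depth.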
To My Neighbor At WeWork
Hi, Derrick. This is Leanna. I am working near you this week at WeWork on Maple, and I just wanted to talk to you about something.
Are you just watching porn while you work or are you actually working on those porn videos which you blare so loudly on your laptop? I mean I can’t tell. You’re doing something with your hands on the machine, which actually is kind of comforting to me, because it at least communicates that you’re not jerking off, which would be even worse, of course.
But if you are editing or mixing or transcribing these porn videos, I have to ask you, geez, doesn’t it just seem kind of basic that you would use headphones?
I can avoid looking over at your laptop, which usually seems to be displaying an enormous erection in closeup entering some hideously large looking labia like some sort of hellish tunnel boring machine.
But the sound is a problem. I did come over and ask you to turn it down or put on headphones, and you just kind of shook your head and completely ignored me.
I mean, really.
I don’t know what it is about our generation. Was it really that we all got trophies? Is that our problem? Do we think that everything we do is just great and nobody can criticize us or make any kind of request of us?
Or is it that you have some kind of personal issue with me?
Listen Derrick, I’ve seen you looking over at me while I work as if there was some kind of problem with what I’m doing here at WeWork this week. Of all the nerve!
I have news for you mister. What I’m doing is a perfectly safe, natural and in fact very necessary job, and if you have a problem with it, you need to take it up with the State of California and Governor Gavin Newsom. Because the State makes it very clear that embalming dead bodies if you are using all natural products and no dangerous chemicals, does not in fact have to take place at a licensed morgue, funeral home or other facility.
In other words, it is perfectly within my rights to bring the bodies here this week. We are renovating our lab at the funeral home. I have only three more bodies and I will only be at WeWork until the end of the week.
So please, can we agree to work with each other in a spirit of cooperation and civility?
Or do I have to come over there and pump some stevia-based formaldehyde up your ass?
Thank you,
Leanna Adams
Licensed Embalmer,
Soft and Soothing Liquids
All natural, All Organic, All Artisanal Embalming Services and Products
Streaming Data from Apache Kafka Topic using Apache Spark 2.4.7 and Python
Press “CTRL + C” to end the Spark context.
Step 5: Running Your Own Functions on Output
While printing aggregated CDC data is interesting, it is hardly useful. If you want to run your own functions (whether to store the information on the Spark node or stream it elsewhere), changes need to be made to the completed file. One way to do it is to substitute “foreachRDD” for the “pprint()” function so that each reduced set of fruit and totals can have a function run on them.
# To program your own behavior, change this...
counts = dks.map(lambda x: json.loads(x[1])) \
    .flatMap(lambda dict: dict.items()) \
    .filter(lambda items: items[0] == "payload") \
    .map(lambda tupler: (tupler[1]["after"]["fruit_name"], tupler[1]["after"]["num_sold"])) \
    .reduceByKey(lambda a, b: a + b) \
    .pprint()

# To this...
counts = dks.map(lambda x: json.loads(x[1])) \
    .flatMap(lambda dict: dict.items()) \
    .filter(lambda items: items[0] == "payload") \
    .map(lambda tupler: (tupler[1]["after"]["fruit_name"], tupler[1]["after"]["num_sold"])) \
    .reduceByKey(lambda a, b: a + b) \
    .foreachRDD(somefunction)
Once this is done, custom functions can be run by replacing “somefunction” above with the function name. Here is an example function that will do the same behavior as “pprint()”, but, by virtue of the format the Kafka data is read into Spark, will leave out superfluous timestamps.
def printy(a, b):
    listy = b.collect()
    for l in listy:
        print(l)

counts = dks.map(lambda x: json.loads(x[1])) \
    .flatMap(lambda dict: dict.items()) \
    .filter(lambda items: items[0] == "payload") \
    .map(lambda tupler: (tupler[1]["after"]["fruit_name"], tupler[1]["after"]["num_sold"])) \
    .reduceByKey(lambda a, b: a + b) \
    .foreachRDD(printy)
Using a custom function to leave out timestamps
Notice that there are four different aggregation events with no timestamps between them, and that nothing is printed if no insertions happen. With a little bit of editing, this function can export these values to a separate program that can track the totals for each fruit over different spans of time. This will be covered in the final part of this tutorial.
Step 6: Changing the Spark Job to Filter out Deletes and Updates
Updates and deletes are not considered. If you require updates and deletes to be filtered out, it will take some work with Python logic and some extra filtering of the JSON data. This will be based on the “op” parameter found at the end of each JSON data string.
Operation parameter for inserting a new row
Operation parameter for updating a row
Operation parameter for deleting a row
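A sketch of that extra filtering is shown below, assuming Debezium-style payloads in which op is "c" for inserts, "u" for updates, and "d" for deletes. The sample messages are made up, and in the Spark job this check would become one more .filter(...) stage before the aggregation:

```python
import json

# Hypothetical simplified CDC messages, shaped like the payloads shown above
messages = [
    '{"payload": {"op": "c", "after": {"fruit_name": "apple", "num_sold": 5}}}',
    '{"payload": {"op": "u", "after": {"fruit_name": "apple", "num_sold": 9}}}',
    '{"payload": {"op": "d", "after": null}}',
]

def is_insert(raw):
    # Keep only rows whose op field marks a new insertion
    payload = json.loads(raw).get("payload", {})
    return payload.get("op") == "c"

inserts = [m for m in messages if is_insert(m)]
print(len(inserts))  # only the "c" (insert) event survives -> 1
```

Applied inside the streaming job, the same predicate would sit between the "payload" filter and the map that extracts fruit names, so updates and deletes never reach the reduceByKey step.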
Completed Python File
The below file, when submitted as a Spark job with /etc/spark/bin/spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.3,org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.3 readkafka.py , takes in all new CDC data from the Kafka topic every two seconds. In the case of the “fruit” table, every insertion of a fruit over that two second period will be aggregated such that the total number value for each unique fruit will be counted and displayed.
#Imports and running findspark
import findspark
findspark.init('/etc/spark')
import pyspark
from pyspark import RDD
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
import json

#Spark context details
sc = SparkContext(appName="PythonSparkStreamingKafka")
ssc = StreamingContext(sc, 2)

#Creating Kafka direct stream
dks = KafkaUtils.createDirectStream(ssc, ["testDB.dbo.fruit"], {"metadata.broker.list": "{replace with your Kafka private address}:9092"})

# Transforming CDC JSON data to sum fruit numbers
# based on fruit name
def printy(a, b):
    listy = b.collect()
    for l in listy:
        print(l)

counts = dks.map(lambda x: json.loads(x[1])) \
    .flatMap(lambda dict: dict.items()) \
    .filter(lambda items: items[0] == "payload") \
    .map(lambda tupler: (tupler[1]["after"]["fruit_name"], tupler[1]["after"]["num_sold"])) \
    .reduceByKey(lambda a, b: a + b) \
    .foreachRDD(printy)

#Starting Spark context
ssc.start()
ssc.awaitTermination()
Addendum
In the next part of this tutorial, we will install Grafana, Graphite Carbon, and Graphite Web onto an Ubuntu 18.04 EC2 instance to stream and plot the CDC data transformed by Spark. The Spark Python job from this tutorial will also be edited to use StatsD to interface with Graphite Carbon. A link will be added HERE when Part 3 is available.
Clean energy technology is taking cues from sunflower spirals, schooling fish and other natural phenomena
By observing how plants, animals and even mud behave, renewable energy innovators are uncovering new ideas for improving efficiency and output
Photo © iStockphoto.com/Mlenny
By Shweta Narayan for Ensia | @ensiamedia |
When we think about renewable energy, we think of rolling fields with windmills or industrial rooftops covered in silicon solar panels designed by human engineers in high-tech labs. As engineers work to make energy systems more efficient and affordable, some are finding inspiration in nature.
Organisms and natural systems have had some 3.8 billion years to evolve. Because energy is the currency of life, in the process they have come up with energy-efficient ways to function. From more productive wind turbines to strategic solar arrays, there’s a lot we can learn from nature about improving energy production and use.
For example, scientists at Cornell studying the movements insect wings make as the insects hover found that the wingtips trace out figure-eight patterns, minimizing power consumption. Such energy-saving kinematics could help improve the efficiency of miniature unmanned air vehicles (UAVs) used for surveillance.
The idea of imitating nature to design machines is not new. Leonardo da Vinci’s ornithopter was inspired by the flapping wings of birds, although it never actually took flight. From bridges and buildings to water management and food distribution, other examples of biomimicry abound in today’s world.
Now, as renewable energy grows in popularity, scientists and engineers are looking to nature for insights into designing wind, marine and solar energy devices in a way that increases efficiency and reduces environmental impact.
Solar Spirals
In July 2016, a solar-powered airplane flying over the desert region of Andalusia in Spain photographed breathtaking images of the Gemasolar concentrated solar power plant. The plant, operated by Torresol Energy, consists of 2,650 heliostats — mirrors that turn to track the motion of the sun, fanning out around, and reflecting sunlight toward, a 150-meter (490-foot)-high tower. The central tower houses molten salts that can store the energy of that light for extended periods of time.
In a fascinating article published in Solar Energy in 2012, researchers at Massachusetts Institute of Technology and RWTH Aachen University in Germany reported that the placement of heliostats for a concentrated solar plant like Gemasolar could be optimized by mimicking the spiral arrangement of florets in a sunflower. This pattern, called Fermat’s spiral, occurs commonly in the arrangement of leaves on stems and florets in flowers.
The spiral arrangement of seeds on a sunflower provides a model for the optimum arrangement of heliostats in a concentrated solar plant. Photo © iStockphoto.com/undefined_undefined
The researchers found that for a solar plant with a central tower, the efficiency of the heliostats closest to the tower was higher. Hence, arranging them in a Fermat’s spiral pattern would lead to smaller footprints and higher efficiencies for the power plant. The inspiration from sunflowers doesn’t stop there — the researchers also found that angling each heliostat at a “golden angle” of 137.5° with respect to its neighbor would result in less blocking and loss of solar radiation.
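As a rough illustration of the layout described above, the Fermat's-spiral placement can be sketched in a few lines of Python. This is only a sketch of the geometric idea, not the paper's actual optimization; the function and parameter names are my own.

```python
import math

GOLDEN_ANGLE_DEG = 137.5  # the "golden angle" cited in the study

def fermat_spiral_layout(n, scale=1.0):
    """Return (x, y) positions for n heliostats on a Fermat's spiral.

    r = scale * sqrt(k) keeps the packing density roughly uniform, and
    each point is rotated by the golden angle relative to its neighbor,
    producing the sunflower-style pattern described in the 2012 paper.
    """
    positions = []
    for k in range(1, n + 1):
        r = scale * math.sqrt(k)
        theta = math.radians(GOLDEN_ANGLE_DEG) * k
        positions.append((r * math.cos(theta), r * math.sin(theta)))
    return positions

layout = fermat_spiral_layout(2650)  # Gemasolar has 2,650 heliostats
print(len(layout))  # 2650
```

Plotting such a layout reproduces the familiar sunflower-head pattern; a real plant design would additionally weight each position by its optical efficiency.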
Alexander Mitsos, one of the lead researchers on the project, points out that although the biomimetic layout has seen a lot of interest, the Fermat’s spiral pattern has not yet been directly implemented in a commercial concentrated solar power plant. Some CSP plants like the Gemasolar plant do seem to have a spiral pattern. However, “as far as I know, these are not the biomimetic ones,” Mitsos says.
Tapping the Tides
Energy found in waves off the U.S. coast could theoretically supply the equivalent of about 66% of U.S. electricity generation in 2017, according to the U.S. Energy Information Administration. To tap into the vast potential of the oceans to provide energy, University of Wisconsin computational scientist Jennifer Franck draws inspiration from the flapping flight of insects, birds and bats to design “oscillating hydrofoils” that extract energy from tides.
Conventional devices for extracting energy from tidal currents rotate. An oscillating hydrofoil resembles an aircraft wing, but with a symmetrical elliptical cross section that allows for energy harvesting as the tide ebbs and flows. The hydrofoil heaves in response to tidal currents to turn the energy of tides into electrical current. Franck compares this pitching and heaving motion to the fluke of a large whale, except that the animal usually uses this motion for propulsion.
What is it about flapping motion that makes it a good source of power? Franck and her collaborators found that heaving at certain frequencies and pitching at certain amplitudes leads to the generation of a large amount of lift force. Not only that, but because the motion mimics natural movements of fish and aquatic mammals, “we think that it is more friendly for the environment,” Franck says.
The team has shown that this device can be scaled up and can also function well in shallow water. It is currently working to determine optimum placement of components.
“My sense is that if we can develop an optimum array configuration of these flapping foil devices, it would generate enough energy per square foot to make it competitive with wind and solar energy,” Franck says.
Inspired by Mud
Reza Alam, a professor of mechanical engineering at the University of California, Berkeley, found his inspiration for reducing the cost of marine energy in a rather unlikely place — mud.
“Mud can take up a huge amount of energy from ocean waves,” says Alam. In the coastal state of Kerala in southwest India, he notes, the rivers bring abundant mud to the shoreline during the monsoons. The mud absorbs energy from waves, calming the water, attracting fish and giving local fishermen a bountiful catch.
“If mud can do such a great job in harnessing energy from ocean waves, why don’t we design something that behaves like mud, and responds to the action of waves passing over it?” he asks.
In a laboratory testing facility, an artificial seafloor “carpet” whose design was inspired by the behavior of mud transforms wave energy into hydraulic pressure. Photo courtesy of the Theoretical and Applied Fluid Dynamics Laboratory at UC Berkeley
Taking inspiration from this phenomenon, Alam and his team designed an artificial seafloor “carpet” that absorbs energy as the mud does, then turns it into useful power. Potential applications include powering offshore aquaculture and seawater desalination.
“In California alone, an average of 35 kilowatts of energy per meter of coastline come towards the coast from the ocean,” Alam says. “This means that every meter of California coast can power seven houses with the device operating at 20% efficiency, which is conservative.”
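Alam's arithmetic here can be checked directly. The roughly 1-kilowatt-per-house figure below is inferred from his numbers rather than stated in the article:

```python
wave_power_kw_per_m = 35.0   # average wave power per meter of California coastline
efficiency = 0.20            # the "conservative" conversion efficiency Alam cites
house_load_kw = 1.0          # implied average household demand (assumption)

extracted_kw_per_m = wave_power_kw_per_m * efficiency
houses_per_meter = extracted_kw_per_m / house_load_kw
print(houses_per_meter)  # 7.0
```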
The team is currently testing different materials and configurations in a wave tank to figure out what works best in different environments, such as rocky or muddy shores. A former graduate student from Alam’s lab, Marcus Lehmann, started a company called CalWave Power Technologies that works on an anchored wave energy technology inspired by the seafloor carpet concept.
Fishy Turbines
At Stanford University, bioengineering professor John Dabiri and colleagues are testing vertical axis wind turbine farms inspired by fish schooling patterns.
Conventional wind farms employ horizontal axis wind turbines, which spin at right angles to the wind much as windmills did on the farms of yesteryear. While individual horizontal axis turbines operate at high efficiencies, the turbines need to be spaced far apart so that the airflow patterns generated by one turbine do not interfere with the performance of neighboring turbines. To tackle this issue, Dabiri’s team turned to vertical-axis wind turbines instead.
Swimming fish create patterns of water movement in their wake that resemble the patterns of airflow generated behind wind turbines. Rather than being inhibited by these flow patterns, neighboring fish actually utilize them to enhance and coordinate their swimming, as constructive interference of flows between neighbors minimizes the “drag,” or resistance to flow. (If you’ve ever drafted behind a truck while driving or behind another rider while bicycling, you’ve experienced the phenomenon yourself.)
Dabiri’s team used this fish-schooling pattern to inspire wind farm design for optimal energy harvesting. Rather than following the conventional horizontal-axis approach and spacing turbines far apart, they placed vertical-axis turbines in close proximity.
They found that if neighboring turbines are staggered and rotate in opposite directions, the alteration of wind speed and direction by adjacent turbines can actually be beneficial for collective performance of the wind farm. In fact, the team’s studies at the California Institute of Technology’s Field Laboratory for Optimized Wind Energy (FLOWE) found that the power generated per unit area can be almost 10 times greater at high wind speeds compared with that for modern horizontal axis turbine farms.
Commercialization Challenge
It certainly appears that biomimicry has plenty to offer efforts to improve the efficiency and economics of renewable energy. However, a significant impediment seems to be the slow pace of commercialization.
The reasons for this are complex and interwoven. In the case of marine energy, the lack of consolidated test facilities is a concern for scientists, especially because permits for testing in the ocean are hard to obtain. New technologies are tricky to assess without designated test sites and dedicated funding from the government and industry.
Survivability in harsh environments and environmental impact are also major concerns for any clean energy technology.
“The hardware development is inherently slow and expensive,” says Dabiri. “The idea of using biological inspiration is usually attractive, but the hard work is in developing a technology that can function successfully in the real world for a long time.”
In the case of concentrated solar power and wave energy, the limiting factor appears to be economic.
“The idea of using wave energy to generate electricity is not new, and there are thousands of patents with some brilliant ideas out there — and interestingly, for wave energy devices, most of these ideas work,” says Alam. “But the question is, can you generate power that can compete with fossil fuels?”
The jury is out over how many of these bio-inspired technologies will see the light of day. For the sake of the planet, many hope that at least some of them do.
Editor’s note: Shweta Narayan wrote this story as a participant in the . The mentor for the project was Nate Berg.
UPDATED 08.31.19: The focus of CalWave Power Technologies’ work was corrected.
Originally published at ensia.com on August 29, 2019.
This 6-Minute Method Will Help You Deal With Difficult Emotions
How to manage difficult emotions
I think I’ve gotten better at managing my emotions by implementing some healthy habits in my life.
But there are times when I’m in a certain situation that triggers some difficult emotions that overwhelm me completely.
I deal with them by doing a simple 6-minute exercise that helps me cope with them and feel calmer afterward.
A big problem is that we often ignore difficult emotions or bottle them up because we are afraid to face them. That only worsens the problem and makes you feel irritated even at the smallest things.
The exercise I’m about to teach you helps you deal with these difficult emotions by facing them and accepting them. It works because it forces you to pay attention to nothing but your emotions; the only way to get through them is right through the middle.
It’s an 8-step exercise
1. Sit with the feeling
Sit down in a chair or lie down in bed. Close your eyes; that way you can’t escape your emotions.
2. Put names to the feeling
By naming your emotions, you reduce their impact on you. So try to identify the exact words of what you’re feeling.
Usually, powerful emotions are a mixture of multiple ones, so give several names. For example, hurt, pessimistic, discouraged, rejected.
3. Remind yourself that it’s just a feeling
Don’t give it too much power, yet try to listen to its message.
What is this feeling trying to tell me?
Why am I feeling this way?
Was it triggered by a recent event or a past experience?
Take some time to reflect on the message of your feeling.
4. Let your tears out
Don’t hold back.
Now is the time to let everything out, and by no means diminish the importance of your emotions. There is a reason, or several reasons, why you are feeling this way, and they are all valid.
5. Recognize that no feeling lasts forever
Accept your feelings and accept that whatever you are feeling doesn’t determine who you are.
For me, this was a very important realization, because when you recognize that you are not your emotions, their power over you decreases.
6. Picture the feelings as a wave
This is the step I like the most. What you need to do is imagine water waves washing over you.
I visualize myself standing still while waves come from every direction and wash over me.
It’s like these water waves are your emotions and they try to hit you hard. The stronger the emotion, the harder the water waves will hit you.
And all you need to do is stay still, don’t do anything to stop the waves from hitting you.
I like this step because it helps me remember that my emotions are not me and they are just trying to provoke me to react aggressively.
7. Use your breathing to help you
While the water waves are still washing over you, focus on your breathing.
Every time you inhale, you are breathing in strength to fight this difficult emotion. Your ability to tolerate it gets stronger and stronger.
Repeat it over and over.
8. Put it aside and distract yourself from it
You’ve sat with the emotion. Once you feel it lessening, you can finish the exercise and distract yourself from it.
You’ll notice it’s easier to distract yourself from it than before you did the exercise.
The Great Divide
Storm On The Way?
[W]e have to rethink our attitudes toward one another and toward the pursuit of truth. It’s not simply recognizing that people who hold different views than we do aren’t by definition stupid, corrupt, wicked or malicious; it’s that we come to a place where we believe we might have something to learn — or at least something to consider — from those whose views and outlooks and life experiences are different than mine. — Peter Wehner
In a perfect world, or one at least much better than the one we’re mired in now, those sentiments would merit greater respect and thoughtful, fair-minded consideration. Without the level of understanding Mr. Wehner wisely suggests, Left and Right will remain political and cultural combatants fighting on turf neither can claim exclusively as their own, and for purposes more muddled than we realize.
Notwithstanding, each partisan side — determined to gain some nebulous advantage over the other — will continue to dig in that much more. (This is assuming, unfortunately, that we’ve not yet reached bottom.) There will be no winners. There will, however, be much more harm to follow.
We could consider the possibility that better options await.
This sounds simple enough, of course. Reality intrudes. As one planted firmly on the left side of the divide — and no doubt echoing a complaint from those on the Right — the explanations/rationales/justifications for viewpoints contrary to my own are often maddeningly nonsensical.
A recent USA Today opinion piece* about this week’s election, which I had at first glance quickly dismissed as yet another echo crisscrossing the great political and cultural divide that is America in 2020 — just one more offering in an endless parade of Trump-supporting observations I find both intolerable and baseless — nonetheless struck a chord.
* by Lauren DeBellis Appell [links to her own citations can be found in Ms. DeBellis Appell’s commentary.]
That initial reaction was predictable. It’s by now almost automatic when assessing support for an opposing candidate. The refrain usually sounds something like this: “What the &*%!? is this person thinking?” Most of the time, partisan assessments begin and end with variations of the same inquiry.
Some are kinder than others. These days, however, not many. Why waste even the slightest measure of whatever generous spirit remains on someone obviously delusional and so clueless at this point that he/she still supports/opposes Donald Trump/Joe Biden?
This time, I regrouped. I did so not to find more reasons or opportunities to mock yet another opinion I cannot pretend to understand, bolstering my bona fides as a credentialed opponent of Trump. I did try, however. Despite the explanations and comments offered by the author, support for Trump is for me incomprehensible still. But I knew that. Step One, first. Then, stop.
The intent here is not to persuade, accuse, or denigrate. It’s instead a small stone tossed with different purpose into our troubled waters. The ripples are not designed to gently re-direct Trump followers away from the currents that flow hard Right. Why I should explain that support for Trump is an awful decision would be just another opinion sinking quickly into the mud, lost amid countless others, all roughed-up in some fashion.
Shaping our rebuttals is a process. A separate, reactionary process quickly develops in response to what Trump supporters offer. Perhaps if they understand how and why we react as we do to their processes — precisely what and how we are assessing their views and reasons — closed doors may crack open just a bit.
Behind our many standard knee-jerk objections, assorted levels of consternation, and bewildered astonishment are a cascade of insults and condemnations waiting to be unleashed. Investing a bit of time to first consider the why which generates our criticisms, and then sharing them, might instead create some space for rational and respectful dialogue. There are matters much more important than filling our insult score cards.
Worth a try….
After establishing why she’s not a Biden supporter as her circumstances might otherwise suggest, Ms. DeBellis Appell planted her Trump flag. “I’m proudly voting for him based on the promises he’s kept and the leadership he’s shown over the last three and a half years.
President Trump has done more that I support than any president in my lifetime, and certainly more than Joe Biden in his decades in Washington.”
Two paths before me: mock/insult/dismiss, or approach it with a more open mind and explain first. Easier said than done, but Door 2 was the choice. Door 1 is, as ever, an option.
A brief foray into her views about the months of protest across the nation left no doubt that Ms. DeBellis Appell was not especially sympathetic as to why the protests took place, and certainly no fan of how some protesters — sadly — saw the demonstrations as an excuse and opportunity to let the less-than-honorable aspects of their personalities lead the way. Most of us who do support the protests draw the line when they cross the line.
She states:
The question is, which America do you want your kids to grow up in?
An America where the Orwellian mantra of “peaceful protests” reverberates in dismissing violence and anarchy? Where public safety is an afterthought? Where “mob rule” rules the day as we’ve seen in many Democrat-run cities across the country as rioting and looting have destroyed businesses and ended innocent lives.
When facts and contradictions get in the way, one strategy is easy enough to rely on: ignore them, and hope no one is paying that much attention. Generalizations help, too. The broader, the better. It’s commonly understood that the conservative id is not inclined to factor in nuances or too many considerations which might call into question the point being raised. But facts matter, including the ones calling into question their particular narratives.
By what standard is “public safety an afterthought,” or “mob rule” a common feature of [snarkily-labeled] “Democrat-run” cities? Trump rallies hardly qualify as peaceful, respectful gatherings! Far-right groups such as the Proud Boys or those Michigan extremists who plotted to kidnap, try, and apparently kill the Governor of an American state don’t suggest a gathering of the local church choir in a neighborhood park.
Cherry-picking examples to bolster a stand taken is hardly uncommon, one definitely not confined to Trump supporters. It’s easy, simple to incorporate, and does have many self-serving advantages. But what does an unwillingness to make a good-faith effort at examining the different viewpoints and considerations contribute to our hyper-partisan battles except to ratchet up the animosity? Why reference one apparently partisan-inspired killing, as the author did, but not mention, for example, Kyle Rittenhouse?
How can anyone in good faith debate who bears responsibility for failing to, as the author argued, “unequivocally condemn violence” yet mention not so much as a word of Trump’s repeated and often overt incitement/acceptance of violence as a tactic? He certainly is not the sole contributor, although it must be noted he does have a higher profile than … well, everyone else.
Real-life isn’t as easily persuaded by partisan assertions that one’s “team” offers the only valid perspectives. Ignoring or dismissing alternative viewpoints is a choice. Outcomes and consequences are never far behind.
How much more and for how much longer do we want to test the limits of partisanship before THE last line is crossed? Tuesday’s election and the threats bandied about, not to mention the many less-than-subtle hints Trump himself keeps dropping into conversation about unfavorable results, are closing whatever thin gap remains between here and that last line.
To what end? Does anyone think that once the bonfire is lit we can just as quickly and easily extinguish it and retreat to our side of the divide for yet another round of I’m Right and You’re An Idiot?
To wrap up the portion of her essay on violence, Ms. DeBellis Appell then offers this:
In an unprecedented move, police organizations all over the country, some of which have never endorsed a candidate before and some of which had previously endorsed Democrats, are endorsing Trump this year.
Why not explain that the source references exactly one [legitimate, albeit traditionally right-leaning] organization, instead treating it as if it is The Official Last Word on the matter? It took me no more than thirty seconds to discover, among many other endorsements offered to both Biden and Trump, this Fox News posting: “More than 175 current, former law enforcement officials” who were endorsing Joe Biden, with a sub-title indicating they “slam Trump as ‘lawless’ president.”
Tie game once again. What’s been accomplished except to enhance distrust and provide easy excuses to mock the other side for not being fair-minded and forthcoming about important particulars intentionally omitted? Gratifying to be sure — limited in value though it is — but at what cost?
Amplifying an assumed fear [“Suburban moms like me believe the safety of our families also depends on this election”] as reason to latch onto that endorsement doesn’t seem to resonate with the majority of suburban women, if most polls are to be believed. Despite Trump’s determined efforts, widespread hand-wringing at the prospect of an army of Cory Bookers moving into their neighborhoods and being … uh, neighbors and such has yet to be observed. The dog whistles are getting louder, however.
In the interests of not offending the author’s religious concerns, and more specifically avoiding commentary on both her admiration for and defense of Amy Coney Barrett, along with the Senate Republicans’ less than honorable actions regarding the appointments of the three most recent additions to the Supreme Court, I’ll leave that discussion for another time. But Ms. DeBellis Appell’s characterization of Justice Barrett as “a trailblazer” begs what seems like an obvious and legitimate question: In which direction?
But to that point — recognizing that extending cherry-picked arguments and nothing else has its drawbacks — I do want to note Ms. DeBellis Appell’s concerns that “Biden threatens to turn the Supreme Court into his own liberal mini-Congress. He wants to form a commission to explore options that ‘go well beyond’ adding to the court” seem a bit off the mark. Her pre-emptive conclusion that a possibly-expanded Supreme Court would “rubber-stamp his policies, including perhaps a national lockdown over COVID-19” suggests an admirable skill at predicting the future, but seems a bit heavy-handed as to the fears implied.
Aren’t Republicans suggesting the same expectations regarding their policy preferences? Trump hasn’t been shy about asserting “his” judges should decide election results in his favor.
Her final word on the imagined possibilities she herself suggested: “Now that is scarier than anything Trump has said or proposed” as being the most worrisome issue we face seems to overlook a few other pre-election episodes already on the books.
I can’t blame most right-leaning citizens from steering clear of defending the galling hypocrisy and unprincipled efforts of Senate Republicans to first deny President Obama a choice to nominate a SCOTUS appointee and then ram through Amy Barrett. Skipping right over their failure to instead prioritize relief to millions suffering as a result of Trump’s appalling non-management of the pandemic comes as no surprise, either.
The almost obligatory snide comment, while not uncommon from partisans on either side: “Stay tuned for the Supreme Court to start resembling game day on a football field with ‘Team Biden’ jerseys instead of black robes,” does call into question the sincerity and validity of the purported concerns. Why not instead consider the intent of the bipartisan commission Biden proposes as an honorable proposal, rather than stating an intention to simply pack the Court with four new left-leaning Justices on Day One?
Are we so numb to a lack of reasonable governance over the past four years that we’re incapable of even recognizing a good-faith effort? If nothing else, it is a welcome contrast to the countless behind-the-scenes trampling of norms and ethics by the current Administration, with accompanying misinformation if an explanation is called for.
It’s difficult to square right-wing concerns about what Biden might do next year when comparing Trump’s less-than-subtle efforts to throw the imminent election itself into chaos over his repeated — baseless — claims of voter fraud and the hypocrisy tagging along.
As for the prospects that right-wing judges might be called upon to weigh in on the election….? That’s a nightmare inside a conflagration no one should welcome. There will be no controlling what would quickly become uncontrollable. That situation cannot end well for any of us, regardless of who is declared the “winner” and on what basis, if it is on anything other than counting the votes.
Another concern of Ms. DeBellis Appell’s is reopening schools. As a parent of three, the youngest an Army veteran now pursuing a Master’s Degree on his way to a Ph.D., perhaps I’m not as qualified as she is to discuss the objectively understandable (if in application unimaginable) challenges parents of young school-age children are contending with each and every day.
Her position is quite clear. Some may quibble with the conclusion, but no thinking person would suggest this situation confronting tens of millions of households has been a picnic for parents, children, or teachers:
“The virus has taken a toll on our children as well as our economy and physical health. Keeping our kids isolated at home learning behind a computer screen is a disaster.”
One could add this perspective: A disaster, no doubt, but for many, not nearly as painful as are the consequences of a POTUS who knew from the get-go that the virus was quite deadly yet chose to do nothing; repeatedly misled the nation; undermined experts; disregarded the well-being of his own supporters by holding superspreader rallies despite the scientific, fact-based evidence of the dangers posed, and all because addressing the virus as any rational leader should have done immediately also risked bad publicity. As many others have observed, for a narcissist like Trump, there appears to be no greater threat than doing/saying anything — or nothing — that might in some way reflect poorly on him. Quite the trade-off….
Mindless defenses which skip right past these legitimate perspectives are exasperating, to say the least. Duly noted that dealing with a pandemic is yet another contentious public issue for which the easy and obvious answers have long since left the building. But having done nothing in preparation for these challenges, coupled with a complete absence of guidance and national policy to be expected from a President, it’s beyond debate that Trump’s efforts haven’t exactly been shining examples of leadership.
No one can yet say with any certainty what the long-term consequences might be to those who’ve contracted and survived/recovered. Some of the reports — small samples though they might be — provide legitimate and serious justifications to worry about needless exposure or foolish risk-taking. It is almost beyond rational comprehension to think that any other official would have so cavalierly ignored the responsibility to provide guidance and leadership. It is all the more confounding given that Donald Trump was well aware of the threat — and its potential — from the earliest days of the pandemic.
Leaving it to local communities, if not to individual groups and families, to formulate plans on their own is both impractical and unconscionable in a nation with the resources we possess. Ignorance and denial may be options of choice for some who prefer to think of this as a hoax or an exaggeration. Millions of others would gladly exchange places with those so confident about their thoughtless conclusions.
While there is no reason to doubt Ms. DeBellis Appell’s observations about the mental health consequences to children and families, we should not forget the impact on teachers, their families, and the communities themselves. No one should be surprised when frustrations boil over. But neither should we dismiss concerns and fears about the virus itself. Nearly a quarter of a million American families have very real and painful, unending grief to contend with.
Ms. DeBellis Appell states that “in Biden’s America he’ll always stand with the teachers, even when they are wrong.” Would any elected official from either party automatically offer blanket support to teachers or other constituencies “even when they are wrong”? What’s the point of making such an allegation?
“How do we know?”, she asks … and then answers by linking to a Wall Street Journal article. “Because he told them,” she explains. “In July he said, ‘You don’t just have a partner in the White House, you’ll have an (National Education Association) member in the White House,’ referring to his wife, Jill Biden.” I’ll confess that I’m not clear on how that translates into total, blind support for anything and everything teachers might propose.
Is it automatically a terrible thing to be looking out for teachers? Is anyone prepared to legitimately doubt Joe Biden’s commitment to children and families? And why cast support for teachers as somehow indicative of a bias against children?
The argument is all the more exasperating after taking a few minutes to examine the “leadership” of Betsy DeVos as Secretary of Education. For example:
Education Secretary Betsy DeVos said Tuesday that it’s not her responsibility or that of the federal government to track school districts, their coronavirus infection rates and how they’re reopening — the most direct response to education leaders across the country who have been urging the Trump administration for a comprehensive database to help them navigate the pandemic…. The statement comes as the country’s 13,000 school districts struggle with how to provide instruction for more than 50 million children during the pandemic. Educators have been especially critical of DeVos and the Trump administration for lack of guidance, especially given how much pressure the White House has put on them to reopen for in-person learning. “This is a significant missed opportunity,” Noelle Ellerson Ng, associate executive director of policy and advocacy at AASA, the Superintendents Association, says. “It represents a continued lack of leadership. And while we often talk about the importance of local control, when it comes to a pandemic, leadership should start at the top and go down.”
We’re now confronted with many issues of national impact, importance, and consequence. Scoring points for one team or the other is exciting for those few seconds, but hopes for a better future call for more than partisan nonsense, half-truths [if that], and juvenile insults.
We might all benefit from more honorable discussions about our disagreements and find ways to move all of us forward in the direction of a better future. What’s the alternative?
Regardless of Tuesday’s outcome, that will matter more than we know.
A Complete Guide to Python Lists
Use Python Lists Like a Pro
Photo by Corinne Kutz on Unsplash
In this article, I will explain Python lists, explore why and when to use them, and give you some hints about the correct usage of list methods.
Let’s understand the Python list data structure in detail with step by step explanations and examples.
What are Lists in Python?
Lists are one of the most frequently used built-in data structures in Python. You can create a list by placing all the items inside square brackets [ ], separated by commas. Lists can contain any type of object, and this makes them very useful and versatile.
Fundamental characteristics of Python lists are as follows: they are mutable, ordered, dynamic, array-type (sequence) data structures.
Let's explore each of these characteristics in further detail:
New elements can be added or removed at runtime. Lists are not fixed-size objects, so they are called dynamic data structures.
The order in which you specify the elements when you define a list is maintained, so lists are ordered.
Lists can be changed after they have been created. You can add or remove items, or modify the value of existing items in a list object, without making another copy of the list. So they are called mutable objects.
Although Python lists are not fixed-size and constrained like arrays in C++ or Java, they are still array-type data structures where the items are stored in memory sequentially and accessed by an index number representing the memory block of a specific element.
A list object can contain duplicate elements.
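To make these characteristics concrete, here is a small sketch of my own (not from the original article) showing mutability, dynamic sizing, ordering and duplicates in action:

```python
# a list keeps its identity while being mutated in place
colors = ["red", "green"]
original_id = id(colors)

colors.append("blue")   # dynamic: grows at runtime
colors[0] = "crimson"   # mutable: modify an item in place
colors.append("blue")   # duplicates are allowed

assert id(colors) == original_id  # still the very same object
assert colors == ["crimson", "green", "blue", "blue"]  # insertion order preserved
```

Because no copy is ever made, every name bound to the same list sees these changes, which is exactly the mutability discussed above.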
When to Use Lists?
Let's get a basic understanding of when to use lists over other data structures:
When we want to store a collection of heterogeneous objects in a single data structure. We can store any of the primitive data structures (integers, strings, etc) as well as compound data structures (lists, sets, tuples, dictionaries, etc) inside a single list object.
When we want to keep the order of data unchanged. The order we put the items into the lists is preserved, we can access the elements with the same order as we put them in.
As lists are mutable, it is not a good idea to use them to store data that shouldn’t be modified in the first place.
How to Create a List?
There are many ways of creating and initializing lists. You can use square brackets [ ] or the list(iterable) constructor, as shown below:
# empty list with square brackets
empty_list = []

# empty list with the list() constructor
empty_list = list()

# list of integers
integer_list = [1, 2, 3]

# list with mixed data types
mixed_list = [8, "Hi", 3.3]

# list from an iterable object
iterable_list = list("data-science")

# a nested list
nested_list = ["cat", [3, 2], ['d']]
Python Sequence Functions and Methods
Lists support sequence operations like Indexing, Slicing, Membership, Concatenation, Length, Iteration and some others as they are sequence type objects.
Indexing: Items in a list can be accessed by index using the indexing operator. Python indexing starts from zero. You can use a negative integer as an index to start from the end. If you use an integer beyond the length of the list then you get IndexError. If your index is not an integer then you get TypeError.
# list of integers
integer_list = [1, 2, 3, 4, 5, 6, 7]

# indexing
print(integer_list[0]) #prints '1'
print(integer_list[6]) #prints '7'
print(integer_list[-1]) #prints '7'
print(integer_list[-7]) #prints '1'
print(integer_list['a']) #TypeError: list indices must be integers
print(integer_list[7]) #IndexError: index is out of range
Slicing: Slicing is a very powerful tool, as it helps you create another list from a range of objects contained in the original list. Slicing seems unintuitive at first, but once you grasp the basics you can use it easily to enhance list operations.
# list of integers
integer_list = [1, 2, 3, 4, 5, 6, 7]

# slicing with [start:end] takes all elements from 'start' to 'end', not including the 'end' element
print(integer_list[1:3]) # prints [2, 3]

# slicing with [start:] includes all elements after 'start', including the 'start'th element
print(integer_list[2:]) # prints [3, 4, 5, 6, 7]

# slicing with [:end] includes all elements before 'end', not including the 'end'th element
print(integer_list[:5]) # prints [1, 2, 3, 4, 5]

# slicing with negative integers to start indexing from the end
print(integer_list[-2:]) # prints [6, 7]
print(integer_list[:-5]) # prints [1, 2]
Membership: You can check whether a specific object is a member of a list or not. To do this, you can use the 'in' or 'not in' membership operators.
# list of integers
integer_list = [1, 2, 3, 4, 5, 6, 7]

# membership
4 in integer_list # True
8 in integer_list # False
4 not in integer_list # False
8 not in integer_list # True
Concatenation: You can use ‘+’ or ‘*’ operators to concatenate the lists.
# list of integers
integer_list = [1, 2, 3]

# list of strings
string_list = ['data', 'science', 'rocks']

integer_list + string_list # [1, 2, 3, 'data', 'science', 'rocks']
string_list * 2 #['data', 'science', 'rocks', 'data', 'science', 'rocks']
Length: If you pass a list to the ‘len()’ function, you will get the length of the list in return.
# list of integers
integer_list = [1, 2, 3, 4, 5, 6, 7]

# len() with lists
len(integer_list) # 7
Iteration: Lists are iterable objects so you can iterate over a list with for loops.
# list of integers
integer_list = [1, 2, 3]

# iterate over list elements
for element in integer_list:
print(element) # 1 2 3
Other operations: There are some additional operations that apply to most of the sequences such as max(), min(), count(), index(). Let’s see how you can use them with list objects;
# list of integers
integer_list = [1, 2, 3, 1, 8, 1]

# max(), min()
max(integer_list) # 8
min(integer_list) # 1

# count(), index()
integer_list.index(3) # 2
integer_list.index(1) # 0
integer_list.count(1) # 3
integer_list.count(9) # 0
List Methods
Here are the remaining methods you can use with the list-objects.
append(element): Adds a single element to the end of the list. This method modifies the original list and does not return a new list;
# list of integers
integer_list = [1, 2, 3]

# append()
integer_list.append(5)
print(integer_list) # output
# [1, 2, 3, 5]
extend(other_list): Adds the elements in other_list to the end of the original list;
integer_list = [1, 2, 3]
string_list = ['data', 'science']

# extend()
integer_list.extend(string_list) #[1, 2, 3, 'data', 'science']
integer_list.extend('rocks') #[1, 2, 3, 'data', 'science', 'r', 'o', 'c', 'k', 's']
insert(index, element): Inserts a single element at the given index and shifts the elements after to the right.
string_list = ['data', 'science']

# insert()
string_list.insert(1,'-') #['data', '-', 'science']
string_list.insert(3,'rocks') #['data', '-', 'science', 'rocks']
sort(key=None, reverse=False): Sorts the given list in place and does not return a new list object. By default, sort() sorts the list in ascending order. You can provide reverse=True if you need to sort the list in descending order. You can also define a custom sorting order by creating a function that defines which part of each list item is to be used as the key for ordering. As you can see in the example below, the second_char(string_element) function returns the second character of each string element in the original list. The list is then sorted based on the return value of the second_char function.
integer_list = [1, 2, 3, 1, 8, 1]

# sort()
integer_list.sort()
print(integer_list) # [1, 1, 1, 2, 3, 8]

integer_list.sort(reverse=True)
print(integer_list) # [8, 3, 2, 1, 1, 1]

# custom sort
string_list = ['science', 'rocks', 'data']

def second_char(string_element):
    return string_element[1]

string_list.sort(key=second_char, reverse=False)
print(string_list) # ['data', 'science', 'rocks']
reverse(): Simply reverses the list order in place. This function does not return a new list.

remove(element): Finds the first instance of the given element and removes it. Throws ValueError if the given element is not present.
integer_list = [1, 2, 3, 1, 8, 1]

# remove()
integer_list.remove(1)
print(integer_list) # [2, 3, 1, 8, 1]

integer_list.remove(1)
print(integer_list) # [2, 3, 8, 1]

integer_list.remove(2)
print(integer_list) # [3, 8, 1]

integer_list.remove(9) # ValueError: list.remove(x): x not in list
pop(index): Removes and returns the element at the given index. Removes and returns the last element if the index is not provided.
integer_list = [1, 2, 3, 4, 5, 6, 7]

# pop()
integer_list.pop(3) # returns 4
print(integer_list) # [1, 2, 3, 5, 6, 7]

integer_list.pop() # returns 7
print(integer_list) # [1, 2, 3, 5, 6]
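To summarize the removal operations, here is a small comparison sketch of my own (the del statement is not covered above, but is closely related):

```python
integer_list = [10, 20, 30, 20]

integer_list.remove(20)       # removes the first matching VALUE, returns None
assert integer_list == [10, 30, 20]

popped = integer_list.pop(0)  # removes by INDEX and returns the element
assert popped == 10
assert integer_list == [30, 20]

del integer_list[0]           # the del statement also removes by index
assert integer_list == [20]
```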
List Comprehensions
List comprehensions provide an elegant and concise way of creating lists. The syntax for list comprehensions:

new_list = [expression for member in iterable (if conditional)]

integer_list = [x for x in range(40) if x % 2 == 0 if x % 5 == 0]
print(integer_list) # [0, 10, 20, 30]
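As a further illustration of my own (not from the original article), comprehensions can transform elements as well as filter them, and they can be nested:

```python
# transform and filter in one expression
squares_of_evens = [x * x for x in range(10) if x % 2 == 0]
print(squares_of_evens)  # [0, 4, 16, 36, 64]

# nested comprehension: flatten a list of lists
nested = [[1, 2], [3, 4], [5]]
flat = [item for sublist in nested for item in sublist]
print(flat)  # [1, 2, 3, 4, 5]
```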
Conclusion and Key Takeaways
As data structures are fundamental parts of our programs, it is really important to have a solid understanding of Python lists to create efficient programs.
I explained why and when to use lists; some of the key takeaways are listed below:

Fundamental characteristics of Python lists: they are mutable, ordered, dynamic, array-type (sequence) data structures.
We can use lists when we want to store a collection of heterogeneous objects in a single data structure.
New elements can be added or removed at runtime; lists are not fixed-size objects, so they are called dynamic data structures.
The order in which you specify the elements when you define a list is maintained, so lists are ordered.
Thank you for reading! | https://towardsdatascience.com/a-complete-guide-to-python-lists-6b592c8d5707 | ['Erdem Isbilen'] | 2020-11-22 23:12:15.859000+00:00 | ['Python', 'Python List', 'Python List Methods', 'Programming', 'Python List Comprehension'] |
The Power of Probability in AI | The Power of Probability in AI Shafi Follow Jun 14 · 6 min read
An Overview of Probability in AI, ML and NLP
This blog explains basic Probability Theory concepts which are applicable to major areas in Artificial Intelligence (AI), Machine Learning (ML) and Natural Language Processing (NLP). Probability is the heart of AI.
The following are the major topics discussed and applicable in the AI area; familiarity with these topics will make you comfortable in AI.
Please note that all of these topics come under Probability Theory; in the images I mention only Probability, Chain Rule and Bayes, but take all of these as Probability Theory.
1) Distributions
2) Probability Axioms, Random Variables, Types of Random Variables
3) Conditional Probability
4) Independence
5) Bayes Rule
6) Chain Rule
7) Maximum Likelihood
8) Maximum A Posteriori (MAP)
Probability and Information Theory in AI
Let's start with a small introduction to probability and information theory in Artificial Intelligence; both subjects deal with uncertainty. Probability theory allows us to make uncertain statements and reason in the presence of uncertainty, whereas information theory measures the disorder (or uncertainty) in a probability distribution.
Distribution: In simple terms, a distribution is a data source that provides various kinds of data to use in AI applications, so that we can draw samples from distributions (like Normal, Poisson, Bernoulli, Binomial, etc.). We can generate distributions by using functions and probability concepts, build our own distributions, and later draw samples for training and testing data sets.
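As a sketch of this idea (my own illustration), Python's standard random module can already draw samples from several common distributions:

```python
import random

random.seed(42)  # make the sampling reproducible

normal_sample = [random.gauss(0, 1) for _ in range(5)]      # Normal(0, 1)
uniform_sample = [random.uniform(0, 10) for _ in range(5)]  # Uniform(0, 10)
bernoulli_sample = [1 if random.random() < 0.3 else 0 for _ in range(5)]  # Bernoulli(0.3)

print(len(normal_sample), len(uniform_sample), len(bernoulli_sample))  # 5 5 5
```

Dedicated libraries (NumPy, SciPy) offer many more distributions, but the idea of drawing samples from a data source is the same.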
Probability Formula in Mathematics: P(E) = n(E) / n

where n is the total number of events and n(E) the number of favourable events.
Probability: The probability of a desired event E is the ratio of trials that result in E to the number of trials performed. It always lies in [0, 1].
Axioms of Probability:

1) 0 <= P(E) <= 1
2) P(S) = 1
3) P(success) = 1 - P(failure), therefore P(success) + P(failure) = 1

where E is an event and S is the sample space, i.e., the set of all possible outcomes of an experiment (the set of possible values of each trial).
2) Random Variable: A Random Variable (RV) is a variable that can take on different values randomly. For example, x1 and x2 are both possible values that the random variable X can take on: X = [x1, x2]. Random Variables are of 2 types: 1) Discrete Random Variable (DRV), which has a finite or countably infinite number of values/states; 2) Continuous Random Variable (CRV), which is associated with a real value. Note: all variables and events are expressed in terms of Random Variables.
3) Conditional Probability: This is the probability of some event, given that some other event has happened. We denote the probability that Y = y given X = x. It can be expressed with the formula below and is only defined when P(X = x) > 0. X and Y are Random Variables.
Conditional Probability Formula
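As a small illustration of my own (the observation counts are made up), P(Y=y | X=x) can be estimated from joint counts, following the formula above:

```python
from collections import Counter

# hypothetical (weather, activity) observations -- made-up data
observations = [
    ("sunny", "tennis"), ("sunny", "tennis"), ("sunny", "shopping"),
    ("rainy", "stay_in"), ("rainy", "stay_in"), ("rainy", "cinema"),
]

joint = Counter(observations)                     # counts of (x, y) pairs
marginal_x = Counter(x for x, _ in observations)  # counts of x alone

def conditional(y, x):
    """P(Y=y | X=x) = P(X=x, Y=y) / P(X=x), estimated from counts."""
    return joint[(x, y)] / marginal_x[x]

print(round(conditional("tennis", "sunny"), 3))   # 0.667
print(round(conditional("stay_in", "rainy"), 3))  # 0.667
```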
4) Independence: Two Random Variables X and Y are said to be independent if their joint distribution can be expressed as a product of two factors; X and Y are conditionally independent given Z if their conditional distribution factorizes once Z is given.
Conditional Independence Formulas
5) Chain Rule or Product Rule: A joint probability distribution over two or more Random Variables may be decomposed into conditional distributions.
Chain Rule for 2 ,3 and N Random Variables
6) Bayes Rule :
Bayes Rule formula
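A minimal sketch of my own (the spam-filter numbers are invented for illustration) of applying Bayes' rule, P(A|B) = P(B|A) P(A) / P(B):

```python
def bayes(prior, likelihood, evidence):
    """P(A|B) = P(B|A) * P(A) / P(B)"""
    return likelihood * prior / evidence

# hypothetical spam-filter numbers (assumptions, not real data)
p_spam = 0.2                     # P(spam)
p_word_given_spam = 0.5          # P("free" | spam)
p_word_given_ham = 0.05          # P("free" | ham)
p_word = p_spam * p_word_given_spam + (1 - p_spam) * p_word_given_ham  # total probability

posterior = bayes(p_spam, p_word_given_spam, p_word)
print(round(posterior, 3))  # 0.714
```

Seeing the word roughly triples the probability that the message is spam compared to the 0.2 prior.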
Parameter Estimation: estimating the value of a parameter.
7) Maximum Likelihood Estimator (MLE)
MLE Formula
It is used to estimate the parameter values that reduce the error between the training set and the prediction.
The Random Variables can be words (internally converted into vectors), vectors, numbers, etc.
8) Maximum A Posteriori (MAP)
MAP Formula
MAP is used for prediction via Bayes' rule. While the most principled way is to make predictions using the full Bayesian posterior distribution over the parameter, it is still often desirable to have a single point estimate.
Probability in Artificial Intelligence (AI)
AI subjects or fields can be categorised as Learning, Problem Solving, Uncertainty & Reasoning, Knowledge Representation and Communication.
This diagram shows where Probability Theory can be applied in the AI area. Learning (especially Machine Learning) and NLP are part of AI, but they are listed separately because they are widely used and necessary for understanding.
Probability Theory applying in different models in AI
In this diagram, Bayesian-network-based probabilistic programs are listed for reasoning, reasoning over time, and decision making. The models are listed with respect to their area: Learning-based for Machine Learning, State-based for Problem Solving, Logic-based for Bayesian Networks, Logical-based for First Order Logic, and Communication for NLP (which is not listed in the diagram).
Probability in Machine Learning (ML)
The following diagram shows where Probability Theory can be applied in the Machine Learning algorithms area; mostly this is in generative algorithms, classification algorithms and the estimation of parameters.
Reinforcement Learning (RL) is a branch of ML and works on an environment-and-reward basis; here we apply probability in the MDP and POMDP processes.
Generative Algorithms:
Algorithms that try to learn p(y/x) directly, as a mapping from the space of inputs to the labels, are called discriminative learning algorithms, where y is the label and x is the input space.
The algorithms that we instead try to model p(x/y) and p(y) are called Generative Learning Algorithms.
Using Bayes' rule: p(y/x) (posterior) = p(x/y) p(y) / p(x), i.e., likelihood times prior, normalized. The well-known example is e-mail classification: whether a message is ham or spam. These tasks are called text classification in NLP.
For example, in the Naive Bayes classification algorithm, both generative and discriminative aspects come into the picture.
Estimation of Parameters: The estimation of parameters comes up in almost all supervised algorithms, where parameter values must be chosen to reduce the error between the training and testing data sets. For example: regression, logistic regression, Naive Bayes, neural networks, SVM, etc.
Linear Regression as Maximum Likelihood:
The following example derives the MLE (Maximum Likelihood Estimation) view of the regression algorithm with the Mean Squared Error (MSE) loss function.
Input space: x; algorithm output: ^y. The mapping from x to ^y is chosen to minimize MSE. Instead of producing a single prediction ^y, we can think of the model as giving p(y/x); p(y/x) is then the distribution over all of the different y values that are compatible with x. Since the examples are i.i.d., the conditional log-likelihood is
MLE Equation
Expanding MLE equation
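Under this Gaussian-likelihood view, maximizing the conditional log-likelihood is equivalent to minimizing the MSE; a minimal sketch of my own (with made-up, noise-free data) recovers the least-squares line in closed form:

```python
# Closed-form least squares (the MLE under Gaussian noise), pure Python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 2x + 1 (no noise, for clarity)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope and intercept that maximize the Gaussian likelihood / minimize MSE
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
slope = num / den
intercept = mean_y - slope * mean_x

print(slope, intercept)  # 2.0 1.0
```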
Probability in Natural Language Processing (NLP)
This diagram shows where Probability Theory can be applied in NLP.
Probability Theory can be applied in NLP for n-grams, Language Modeling (LM), Conditional Language Modeling (CLM), text classification (e.g. whether an e-mail is spam or ham), part-of-speech tagging, speech recognition, machine translation, information extraction (by applying CRFs (Conditional Random Fields)), etc.
Knowing the concepts and the background is very important in AI; I hope this gives you a starting point for the internal workings of AI. Other topics also come into the picture, but these are the major ones; once you understand these, you can grasp the others easily.
Being an expert in Probability Theory will take you into many fields of AI. That's why Probability is the heart of AI.
Thanks for reading this article, and drop a note with comments, mistakes, etc. | https://medium.com/swlh/the-power-of-probability-in-ai-bfe07bbea061 | [] | 2020-06-16 11:18:05.679000+00:00 | ['Probability', 'Artificial Intelligence', 'Machine Learning', 'NLP', 'Bayesian Statistics'] |
Decision tree Intuition | What is a Decision Tree ?
I am sure you are using Decision Trees in your day to day life without knowing it. For example,
Imagine you only do four things at the weekend: go shopping, watch a movie, play tennis or just stay in. What you do depends on three things: the weather (windy, rainy or sunny); how much money you have (rich or poor) and whether your parents are visiting.
Decision Tree example
Interpretation: You say to yourself: if my parents are visiting, we'll go to the cinema. If they're not visiting and it's sunny, then I'll play tennis, but if it's windy and I'm rich, then I'll go shopping. If they're not visiting, it's windy and I'm poor, then I will go to the cinema. If they're not visiting and it's rainy, then I'll stay in.
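These weekend rules can be sketched directly as nested conditionals, i.e., a hand-built decision tree (my own illustration):

```python
def weekend_activity(parents_visiting, weather, money):
    """Hand-coded version of the weekend decision tree described above."""
    if parents_visiting:
        return "cinema"
    if weather == "sunny":
        return "tennis"
    if weather == "windy":
        return "shopping" if money == "rich" else "cinema"
    return "stay in"  # rainy

print(weekend_activity(True, "rainy", "poor"))   # cinema
print(weekend_activity(False, "windy", "rich"))  # shopping
print(weekend_activity(False, "rainy", "rich"))  # stay in
```

A decision tree learner's job is to recover exactly this kind of rule structure automatically from data.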
Thus, a decision tree is a type of supervised learning algorithm. It works for both categorical and continuous (regression) input and output variables. In this technique, we split the population or sample into two or more homogeneous sets (or sub-populations) based on the most significant splitter/differentiator among the input variables.
Decision trees use a criterion (there are multiple criteria available) to decide how to split a node into two or more sub-nodes.
The creation of sub-nodes increases the homogeneity of the resultant sub-nodes. In other words, we can say that the purity of the node increases with respect to the target variable.
A decision tree evaluates splits on all available variables and then selects the split which results in the most homogeneous sub-nodes.
Why Decision Trees?
Easy to Understand and Interpret.
Requires little data preparation. Other techniques often require data normalization, dummy variables need to be created and blank values to be removed.
Able to handle both numerical and categorical data.
Able to handle multi-class problems.
Uses a white box model. If a given situation is observable in a model, the explanation for the condition is easily explained by boolean logic. By contrast, in a black box model (e.g., in an artificial neural network), results may be more difficult to interpret
Possible to validate a model using statistical tests. That makes it possible to account for the reliability of the model.
Performs well even if its assumptions are somewhat violated by the true model from which the data were generated.
The final decision tree can explain exactly why a specific prediction was made, making it very attractive for operational use.
Important Terminology related to Decision Trees
Root Node: It represents the entire population or sample, and this further gets divided into two or more homogeneous sets.
Splitting: The process of dividing a node into two or more sub-nodes.
Decision Node: When a sub-node splits into further sub-nodes, it is called a decision node.
Leaf / Terminal Node: Nodes that do not split are called leaf or terminal nodes.
Pruning: When we remove sub-nodes of a decision node, this process is called pruning.
Branch / Sub-Tree: A sub-section of the entire tree is called a branch or sub-tree.
Parent and Child Node: A node which is divided into sub-nodes is called the parent node of those sub-nodes, whereas the sub-nodes are the children of the parent node.
How does Decision Tree work?
There are multiple algorithms written to build a decision tree, which can be used according to the characteristics of the problem you are trying to solve. Regression trees are used when the dependent variable is continuous, and classification trees are used when the dependent variable is categorical.
Few of the commonly used algorithms are listed below:
ID3
C4.5
CART
CHAID (Chi-squared Automatic Interaction Detector)
Though the methods are different for different decision tree building algorithms but all of them work on the principle of Greediness. Algorithms try to search for a variable which gives the maximum information gain or divides the data in the most homogeneous way.
If you want to dig deep into the above mentioned algorithms then check out this link http://www.shogun-toolbox.org/static/notebook/current/DecisionTrees.html
In-short..
There are multiple metrics used by decision trees in order to find out the best split variables.
Entropy
A decision tree is built top-down from a root node and involves partitioning the data into subsets that contain instances with similar values (homogeneous). ID3 algorithm uses entropy to calculate the homogeneity of a sample. If the sample is completely homogeneous the entropy is zero and if the sample is equally divided it has entropy of one.
Mathematically,
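The entropy formula referred to here is Entropy = -sum over classes of p_i * log2(p_i), taken over the class proportions p_i; a small sketch of my own in code:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy: -sum over classes of p * log2(p)."""
    n = len(labels)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(labels).values())

assert entropy(["yes"] * 5) == 0                 # completely homogeneous -> 0
assert entropy(["yes"] * 5 + ["no"] * 5) == 1.0  # equally divided -> 1

print(entropy(["yes"] * 7 + ["no"] * 3))  # ~0.881 (impure, but not maximally)
```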
Information Gain
Entropy gives a measure of impurity in a node. In the decision tree building process, two important decisions are to be made: what is the best split(s), and which is the best variable to split a node on.
The Information Gain criterion helps in making these decisions. Using independent variable value(s), the child nodes are created. We need to calculate the entropy of the parent and child nodes to compute the information gain due to the split. The variable with the highest information gain is selected for the split.
Check out this link for more information on entropy and information gain-http://www.saedsayad.com/decision_tree.htm
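To make the split-selection criterion concrete, here is a sketch of my own: information gain is the parent's entropy minus the weighted entropy of the children, and the split with the highest gain wins:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent_labels, child_groups):
    """Entropy(parent) minus the weighted average entropy of the child nodes."""
    n = len(parent_labels)
    weighted = sum(len(g) / n * entropy(g) for g in child_groups)
    return entropy(parent_labels) - weighted

# hypothetical 10 labels split two different ways
parent = ["yes"] * 6 + ["no"] * 4
split_a = [["yes"] * 6, ["no"] * 4]                             # perfect separation
split_b = [["yes"] * 3 + ["no"] * 2, ["yes"] * 3 + ["no"] * 2]  # no separation

print(round(information_gain(parent, split_a), 3))  # 0.971
print(round(information_gain(parent, split_b), 3))  # 0.0

# a greedy learner would pick split_a, the higher-gain split
```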
Limitations:
Over fitting: Over fitting is one of the most practical difficulties for decision tree models. This problem gets solved by setting constraints on model parameters and by pruning (discussed in detail below).
Not fit for continuous variables: While working with continuous numerical variables, the decision tree loses information when it categorizes variables into different bins.
How to avoid “Over fitting” ?
Pruning is a technique in machine learning that reduces the size of a decision tree by removing sections of the tree that provide little power to classify instances. Pruning reduces the complexity of the final classifier and hence improves predictive accuracy by reducing over-fitting.
In other words, it will check for the best split at each step and move forward until one of the specified stopping conditions is reached.
We first make the decision tree to a large depth.
Then we start at the bottom and start removing leaves which are giving us negative returns when compared from the top.
Suppose a split is giving us a gain of say -10 (loss of 10) and then the next split on that gives us a gain of 20. A simple decision tree will stop at step 1 but in pruning, we will see that the overall gain is +10 and keep both leaves.
I hope this post gives you a quick introduction to decision tree intuition in machine learning.
Thanks for reading. | https://medium.com/greyatom/decision-tree-intuition-a38669005cb7 | ['Biraj Parikh'] | 2017-08-09 18:20:59.130000+00:00 | ['Statistics', 'Business Intelligence', 'Decision Tree', 'Data Science', 'Machine Learning'] |
Adopting TypeScript Will Make You Suffer. | Immutability?
I think that large object-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be. — Rich Hickey, creator of Clojure.
Programming with immutable values nowadays is becoming more and more popular. Even modern UI libraries like React are intended to be used with immutable values. Immutability definitely eliminates a whole category of bugs from our code.
What is immutable state? Simply put, it is data that doesn’t change. Just like strings in most programming languages. For example, capitalizing a string will never change the original string — a new string will always be returned instead.
Immutability takes this idea further, and makes sure that nothing is ever changed. A new array will always be returned instead of changing the original one. Updating user’s name? A new user object will be returned with its name updated, while leaving the original one intact.
With immutable state, nothing is shared, therefore we no longer have to worry about the complexity of thread safety. Immutability makes our code easy to parallelize.
Functions that do not mutate(change) any state are called pure, and are significantly easier to test, and to reason about. When working with pure functions, we never have to worry about anything outside of the function. Simply focus on just this one function that you’re working with, while forgetting about everything else. You can probably imagine how much easier development becomes (in comparison to OOP, where an entire graph of objects has to be kept in mind).
Immutability in TypeScript?
Dealing with immutable data structures in TypeScript is significantly worse than in JavaScript. While JavaScript developers can use libraries that help with immutability, TypeScript developers typically have to rely on the native array/object spread operators (copy-on-write):
Unfortunately, the native spread operator doesn’t perform a deep copy, and manually spreading deep objects is cumbersome. Copying large arrays/objects is also not good for performance.
The readonly keyword in TypeScript is nice; it makes properties immutable. However, it is a long way from having support for proper immutable data structures.
JavaScript has good libraries for working with immutable data (like Rambda/Immutable.js). However, getting such libraries to work with the TypeScript type system can be very tricky.
JavaScript is a clear winner for immutability. Too bad, TypeScript. | https://medium.com/swlh/typescript-will-make-you-suffer-7cc6ca4b1233 | ['Ilya Suzdalnitski'] | 2020-12-06 17:28:20.687000+00:00 | ['Programming', 'JavaScript', 'Reasonml', 'Software Development', 'Typescript'] |
Astronomers Spot Comet Chury’s Ultraviolet Aurora | Astronomers Spot Comet Chury’s Ultraviolet Aurora
The first identification of an ultraviolet aurora around the comet Chury is further evidence that nearly half a decade after the end of the Rosetta mission, the ESA’s historic project is still delivering breakthrough science.
Earth’s aurora provides a stunning visual light show called the Northern Lights which has fascinated observers for centuries. But astronomers have discovered that other bodies such as other planets and moons have auroras too. Now, researchers at the University of Bern have for the first time discovered an ultraviolet wavelength aurora around the comet 67P/Churyumov-Gerasimenko — or Chury for short.
Comet 67P/Churyumov-Gerasimenko or Chury for short. ( © ESA/Rosetta/NAVCAM, CC BY-SA IGO 3.0)
The effect — discovered by the team from their analysis of data from the European Space Agency’s (ESA) Rosetta mission — is caused when charged particles from the Sun, carried in what is known as the solar wind, strike Chury’s coma — gas in situ around the comet. The phenomenon is similar in ways to that which causes the Northern Lights above Earth but strikingly different in others.
“Rosetta is the first mission to observe an ultraviolet aurora at a comet,” says Matt Taylor, ESA project scientist. “Auroras are inherently exciting — and this excitement is even greater when we see one somewhere new, or with new characteristics.”
Marina Galand of Imperial College London, lead author of the study, says: “The resulting glow is one of a kind. It’s caused by a mix of processes, some seen at Jupiter’s moons Ganymede and Europa and others at Earth and Mars.”
The study comes almost exactly four years after the ending of the Rosetta mission in 2016 and is published in the journal Nature Astronomy.
Chury’s Ultraviolet Aurora Identified for the First Time
The finding represents the first time that an ultraviolet aurora has been correctly identified around a comet, and it wouldn't have been possible without the data from the ESA's Rosetta mission. In fact, the UV glow around Chury (a Jupiter-family comet that was originally located in the Kuiper belt) had been spotted before, but had been wrongly attributed to photons from the Sun, much like the 'airglow' of the Earth, a faint emission of light by a planetary atmosphere.
This image shows the key stages of the mechanism by which this aurora is produced: as electrons stream out into space from the Sun and approach the comet, they are accelerated and break down molecules in the comet’s environment. Some of the atoms of hydrogen and oxygen are produced in an excited state and de-excite by producing ultraviolet emissions, the observed aurora. The auroral nature of the emissions has been revealed from the analysis of observations from a set of in situ and remote-sensing instruments aboard Rosetta (RPC, ROSINA, VIRTIS, MIRO and Alice). © ESA ( ESA/ATG medialab)
Thus the key question for the Bern team was: could they prove that the ultraviolet glow was actually caused by solar wind electrons accelerated towards the comet and striking the gas in the coma, and not 'airglow' (or 'nightglow', as it is often called)?
Martin Rubin from the University of Bern Physics Institute and co-author of the paper, explains: “Since this process is a very high energy one, the resulting glow is also highly energized and therefore in the ultraviolet range, which is invisible to the human eye.”
Galand continues, empathising the importance of Rosetta’s data: “By analyzing the Rosetta data though, it was revealed that solar wind electrons are the reason for the glow and not in fact photons, as previously assumed.”
Animation of ultraviolet aurora being produced at comet Chury. ©ESA (spacecraft: ESA/ATG medialab)
One interesting difference between the glow observed around Chury and Earth’s aurora is that the former doesn’t require a magnetic field around the comet. Therefore, this data could give astronomers and astrophysicists a new way of studying the Sun’s solar winds.
“The observation of cometary aurora phenomena definitely has an aesthetic value,” says Rubin. “Beyond that, the UV observations from Earth could one day also provide information about the solar wind at these comets — even without a space probe like Rosetta being on site.”
The analysis conducted by the team required data from various instruments, with the Rosetta Orbiter Spectrometer for Ion and Neutral Gas Analysis (ROSINA) mass spectrometer, in particular, providing vital data on Chury's composition and the density of the coma surrounding the comet.
The team combined this with data from other instruments on Rosetta such as the ALICE UV spectrograph, Rosetta Plasma Consortium (RPC) Ion and Electron Spectrometer (IES) and the Langmuir Probe (LAP), the Microwave Instrument for the Rosetta Orbiter (MIRO) and the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS).
The Rosetta Mission: Still Delivering the Goods
The controlled impact of the Rosetta spacecraft on comet 67P/Churyumov-Gerasimenko — Chury — that occurred on 30th September 2016 brought an end to the ESA’s historic mission. It followed two years of remote observations of the target comet, leaving the craft to rest in the comet’s Ma’at region.
After entering into an orbit around Chury on 6th August 2014 — following a ten-year journey that saw it also collect data on the asteroids 2867 Šteins (in 2008) and 21 Lutetia (in 2010) — it launched the Philae lander. The arrival of the lander at Chury’s surface in November that year marked the first ‘soft landing’ of a human-made spacecraft on a comet and the first scientific activities conducted on such a body.
Artist’s impression of the Rosetta orbiter deploying the Philae lander to comet 67P/Churyumov–Gerasimenko (not to scale) ( © ESA–C. Carreau/ATG medialab.)
Early in the Rosetta mission, data from the craft had allowed astrochemists to detect the presence of important chemical ingredients necessary for the early development of life, such as the amino acid glycine — a basic building block of proteins. The finding added credence to the theory that some of the ingredients for life arrived at the surface of Earth by comet collision.
And Rosetta has continued to make history.
In February this year, a study based on a synthesis of Rosetta data showed that during the two years it observed Chury, the comet’s nucleus changed colour, becoming progressively less red. Researchers found that Rosetta saw this shift as Chury approached the Sun and that it was reversed — with the comet once again reddening — as it moved away from our star.
Comet, comet, comet Chameleon: Colour changing comet. (ESA)
Such observations simply wouldn’t have been possible without Rosetta and the ESA’s long-term in situ dedication to tracking a comet as it orbits the Sun. No ‘snapshot’ of Chury could have delivered either of the findings described here, or the many more not mentioned. Further to that, this new research shows that secrets still lurk in the data Rosetta collected and that the mission is still capable of delivering surprises. It’s just up to researchers to tease them out.
As Kathrin Altwegg, head of ROSINA, the mass spectrometer at the University of Bern, points out: “The analysis was complicated and required data from various instruments. | https://medium.com/predict/astronomers-spot-comet-churys-ultraviolet-aurora-7abae99ce419 | ['Robert Lea'] | 2020-09-21 15:02:46.976000+00:00 | ['Space', 'Science', 'Astronomy', 'The Universe', 'Comet'] |
Roman and Jewel. An AAMBC Review. - The AAMBC Journal | We were very excited to receive Roman and Jewel by Dana L. Davis after having read the synopsis. Dana L. Davis is an accomplished actress and author. She has previously written two young adult novels, both boasting 4- and 5-star reviews.
Roman and Jewel is centered on 16-year-old actress Jerzei Jhames, who has auditioned for the lead female role in Roman and Jewel, a Broadway retelling of Romeo and Juliet. Even after she loses the role to the celebrity Cinny, her undeniable talent does not go unnoticed by the producers and the male lead, Zeppelin. The novel follows the self-proclaimed “Good Girl” through her journey of self-discovery and her navigation of the world of teen romance.
We rate this novel 4.5 out of 5 stars. It is very well written and captivating from the first few pages. The characters are well developed and seem to come alive off the page, and it is a plus that they are multicultural and come from different walks of life. Although the novel’s intended audience is teens, it is still very entertaining and age-appropriate. Because the story is centered on a theater production, there is a great deal of mention of shows, techniques, and well-known theater figures; for a reader who is not deeply involved in the theater world, these references could become cumbersome.
Confessions Of A $100,000 Copywriter | He was a referral, so I had to be a little bit polite. He booked time on my calendar, and I sent across some samples for him to review.
“Tim said you’re a great writer, good to work with. I was reading the samples you sent over, and I thought to myself: ‘I bet this guy wants to make a hundred thousand dollars this year!’”
Imagine a pause here. A pregnant one. Eight and a half months’ worth.
“I don’t bill by the year,” I said. I started in on my option for a monthly retainer.
“Nah, I’m serious. We’ll get a contract started, and you’ll be on your way to making $100k a year.”
I had heard this before. It was the same pitch I heard when I sold vacuum cleaners door-to-door. It’s what you hear when your friend really wants to tell you about how they’ve found financial freedom with a vitamin shake.
It’s not that I couldn’t use a hundred grand, who couldn’t?
There’s just a weird stipulation when one person offers you that much money to write.
You might get it as an advance for a novel. If that’s the case, you’ve already written five novels, three of which were turned into movies, one starring a bubbly Disney airhead.
You might also get that kind of money if you are handling a VERY large copy project. The type you have to hire other writers for. The kind that comes with TV commercials on national broadcast and writing in product placements for the movies.
A referral? And one who opens the conversation with cash?
“Imma send you a link,” he says to me. It pops up in my email, and I open it. It’s a three-page job description that reads like a sales pitch.
The third sentence in: I’m here to resolve all of the woes you have as a freelance writer.
He’d handle the prospecting; he’d close the sales. I’d put pen to paper.
“I need someone who can work fast,” he says. “I need copy turned around in like 24 hours, tops. Can you do that?” | https://medium.com/the-ascent/confessions-of-a-100-000-copywriter-cb076fce4ca9 | ['David Pennington'] | 2019-11-22 13:11:01.367000+00:00 | ['Copywriting', 'Lifestyle', 'Careers', 'Writing', 'Freelancing'] |
If I Had a Million Dollars (What to Do With all The Money Your Startup Raises?) | …after my first round of startup financing.
Equities.com
I love that song by the Barenaked Ladies. “If I Had a Million Dollars” does not, however, refer to personal wealth in this case. It refers to the successful conclusion of a startup’s first seed round of financing.
Securing a good capital infusion is a mixed bag of good and bad. Many founders have to fight the urge to upgrade their offices, lease that crazy wild sports car they figured they would get when they ‘made it’ and hire a bunch of staff, while others use the funds wisely to scale up their business.
I know the CEO of a failed startup; technically he still has the doors open, but he has no employees. He had a great idea and seemed to be on track to live the American dream until he secured his first round of money. He then went way over budget on a website and hired five C-level employees. As his mentor, I advised against it, but he ignored my advice. See ya later!
It was not only financially unfeasible, but the level of work was also not sufficient to warrant that caliber of staffing. Within six months his funds had dwindled to next to nothing, and when he looked to his investors, they looked the other way. They had been watching what he was doing with their money, and they were not amused. The staff, unable to work for shares, soon left.
Just as you, the founder, scaled everything when starting out, everything has to be scaled now. You need to weigh frugal versus efficient use of the funding. Planning can help with your burn rate and financial coaching can help you spend the money wisely, but you need to stay focused on your reasons for raising the money. Unlike the resource industry, which seems to raise money simply to raise money, you need to produce a tangible and measurable result.
From my experience, only about 25% of startups actually use their raised funds prudently. Founders who have bootstrapped the business from the beginning using their own money usually have a different focus when spending investors’ money. There are a lot of lessons learned from bootstrapping, the best of which is that money runs out sooner than expected.
Scaling a tech startup by hiring more programmers, or maybe a marketing person, makes good sense. The biggest expense of any startup should be talent, followed by marketing. The theory is that with the seed capital the founder can hire people to take some of the load off him and allow better use of his time. It makes sense to me to have professionals create the pitch decks, schedule meetings and handle all the surrounding activities anyway. Now would be a good time to get all the accounting in place so there is a solid footing for the future.
A good ‘best practice’ is to put the ‘use of funds’ description in any circular or business plan the company distributes so there is clarity after the money is raised. It’s a good reminder for the founder and will allow the investor to sleep a little better.
One of the problems I have encountered after Round A is investor hesitation. Well, let’s call it like it is — pressure! Investors in early rounds may not be that experienced. After all, the first rounds are usually just penny stocks, so entry is easy. Pressure from investors can make the founder grow too fast, make bad decisions or hire staff too quickly.
After the initial seed round, the founder has been getting used to the fact that bootstrapping is out the window. He’s created a company complete with working prototypes, staff, investors, meeting upon meeting, interviews and more. He is creating a buzz, but he needs more money. Round B is required.
One hopes that the first-round investors are happy with our founder and his results, so that they are willing to invest more money into the startup. But as my partner reminded me today, it’s easy to get financing when you can tell investors your company posted a 200% increase in each of the last two years. When you consider that the first year had $200k in revenues and the second year did $600k, those numbers make a good story. It gets difficult to keep the buzz going when revenues are at $2MM and the third year produces ‘only’ a 30% increase. Thirty percent sounds phenomenal, but the story to investors is that the company failed to increase by 200% — ahghg!
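To make the growth-rate arithmetic above concrete, here is a minimal Python sketch. The revenue figures are the hypothetical ones from this paragraph, with an assumed fourth-year figure of $2.34MM (30% growth on an assumed $1.8MM third year) — they are illustrative, not real company data:

```python
# Illustrative sketch of year-over-year growth rates.
# All revenue figures are hypothetical, taken from the paragraph's example.

def yoy_growth(prev: float, curr: float) -> float:
    """Year-over-year growth as a percentage of the previous year's revenue."""
    return (curr - prev) / prev * 100

# Assumed yearly revenues: $200k, $600k, $1.8MM, $2.34MM
revenues = [200_000, 600_000, 1_800_000, 2_340_000]

for year, (prev, curr) in enumerate(zip(revenues, revenues[1:]), start=2):
    print(f"Year {year}: {yoy_growth(prev, curr):.0f}% growth")
# Prints: Year 2: 200% growth, Year 3: 200% growth, Year 4: 30% growth
```

The same absolute gain looks less impressive as the base grows: $400k of new revenue on a $200k base is 200%, while $540k on a $1.8MM base is ‘only’ 30%, which is exactly the storytelling problem described above.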
Capital raised from Round B would probably be best used to increase production and staff. With this round, though, the staff won’t be the receptionist who came with the first round. We’re looking at PR staff and business development professionals. It’s now expansion time, with long-range planning and good people in place to make large-scale growth a reality.
After the second round, the CEO is still driving the bus but is certainly expected to be doing less of the operational stuff. He has a business that is getting noticed, and he can’t falter now, so those 10-hour days he was used to in the first round have increased to 12 hours just to keep up with developments.
If he’s reached Round C of financing, he is riding the wave. He has survived, probably in spite of himself, and the company has a cadre of investors who continue to capitalize his dream. The Board has now told him that he needs a professional CFO, and there may even have been suggestions that he replace himself with a senior guy.
When the financing pours in based on the faith of others, it’s time to take stock of everything and make sure the startup (now probably three or more years old) succeeds or moves to a liquidity event. The CEO has handed the daily operations of the startup to a President and has a carefully selected team behind him. His operations are covered, PR is handled and Investor Relations continues telling the story to the investors.
The founder has reached a pivotal point in the management of the startup. He is now a leader! The journey is only beginning.
Gary is a Partner at Equifaira Partners Inc. — Liquidity Event Planners in Vancouver
Originally published at www.equities.com. | https://garybizzo.medium.com/if-i-had-a-million-dollars-what-to-do-with-all-the-money-your-startup-raises-b98dd2e0762e | ['Gary C. Bizzo'] | 2018-03-19 22:29:49.581000+00:00 | ['Startup'] |
She Doesn’t Think She’s Beautiful
12 of the Best Children’s Books You Need to Get for Your Kids | Sharing the best in children’s literature
When I was in elementary school, we read books. So. Many. Books. I remember because that was what I loved to do most.
Much of what we read in class (and what I read outside of class thanks to guidance from my parents) came from the Newbery award winner and honor book lists.
When I became pregnant with my first daughter, one of the first things I did was hop on eBay and buy one of the many lots of past Newbery award winners available to share with her when she got older.
I’m not kidding. It really was one of the first things I did. I’m that much of a book nerd!
Two years later, I gave birth to a second daughter. So now I have two little humans with whom I can share my love of books and reading — assuming I can tear them away from the TV and tablets long enough.
This is a list of some of the books I grew up reading that I hope to share with my daughters very soon, now that they are both of an age at which they can better appreciate them.
I hope they love these books as much as I did.
Caddie Woodlawn (1936)
I think I was in 4th grade when I read this one. I had no idea it won the medal so long ago, though. For some reason, I thought it was newer than that. Then again, when you’re 10 years old and a book lover, what do you really care when a book was written? The only thing that matters is that the book exists, and there it is — just waiting for you to read it.
It turns out that this book, like the ever-popular Laura Ingalls Wilder books, is based on a true story. Caddie is based on author Carol Ryrie Brink’s own grandmother. As a young girl, I loved Caddie’s spirit, and I loved imagining myself in some of the same situations and predicaments.
Reading this book certainly did inspire my imagination, and I suppose it must be as inspiring for many little girls since the book has achieved such stature over time.
This is a must-read for any little girl who loves adventure.
Call It Courage (1941)
This is another one we read in 4th grade, and it’s another one whose age surprises me. Keep in mind, I was reading these in the mid-’80s.
I loved this book because a 15-year-old boy was the hero. Not that I was a 15-year-old boy when I read it. I’ve never been a boy. But I was a child, and I did have fears and insecurities, and I would never even consider taking an ocean voyage by myself at that age — not now, either, come to think of it.
Mafatu is a hero in every sense of the word, and this book is appropriately titled. It inspires us all — young and old, male and female. That’s what makes it a true classic and an absolute must for all to read.
Johnny Tremain (1944)
This is another book featuring a boy that I loved.
Anyway, this is a historical novel, set in the era of the American Revolution. Our hero, Johnny, is an indentured servant working as a silversmith’s apprentice. He significantly injures one of his hands while working (preventing him from doing any more silverwork), but he doesn’t let that handicap stop him. He gets a job at the local newspaper (The Boston Observer) and gradually becomes involved in some of the most crucial happenings of the Revolution.
This was a fabulous book because you could actually imagine yourself in Johnny’s position — meeting all these key figures in American history and getting right in the middle of the action. At least, that’s what I did when I read it. I guess that’s why it’s stayed with me for so long.
I highly recommend this book for any child who’s even remotely interested in history. It will not disappoint.
Rifles for Watie (1958)
This is another American history-inspired novel, although this one centers around the Civil War.
16-year-old Jeff Bussey joins the Kansas (Union) infantry, only to be captured by rebel commander Stand Watie and his men. The reader follows Jeff’s adventures as a prisoner of war with bated breath. And he meets a lot of interesting characters along the journey. My favorite, who I still remember to this day, was Heifer, the camp cook. I always thought it was kind of mean that he was named after a cow!
I really love that this book was written by an actual historian. I didn’t know that when I was reading it for the first time, of course (or, if I did know, I really didn’t care). But, looking back on it as an adult, it gives the whole story a much more realistic feel.
This is not just a work of fiction. It is a very well-researched work of fiction. And that makes the story even better.
The Witch of Blackbird Pond (1959)
This one is also historical fiction, set in the American colonial times — before the Revolution.
16-year-old Kit grew up in Barbados with her grandfather, so she’s not at all prepared for life with the New England Puritans. She doesn’t fit in with any of them, and she frequently irritates most of them. She befriends an old Quaker woman who has earned the nickname Witch of Blackbird Pond, and Kit soon finds herself in danger of being classified as a witch herself.
I was first drawn to this book by the word “witch” in the title. I’ve always been fascinated by anything remotely supernatural. So, I was a little disappointed when nothing supernatural occurred. But, looking back after 30+ years (and after having read other historical novels focusing on the same time period), I can appreciate the book even more for what it is — a commentary on how women in Puritan times were completely subjugated and even condemned to death for being the least little bit outspoken.
This is a book every little girl should have in her library — and one every mother should read with her little girl, to explain how life used to be for women and to celebrate how far we’ve come as a society since.
Photo by JIMMY ZHANG on Unsplash
Island of the Blue Dolphins (1961)
This is Robinson Crusoe for preteen girls. It’s shorter, easier to read, and much more interesting than that epic tale, in my opinion.
12-year-old Karana finds herself alone and stranded on an island in the Pacific for 18 years. This is the story of her struggle for survival and her journey of self-discovery.
This was another great adventure story, and it’s one I still fondly remember. I can’t say I felt any real kinship with Karana. After all, I’ve never been alone on a Pacific island, and I’ve never been that courageous or ingenious. Still, her story is inspiring even to mere homebodies like me.
A Wrinkle in Time (1963)
I’ve never really been much of a science fiction/fantasy fan, but I’ve always loved a good story, and Madeleine L’Engle certainly can tell one (despite the fact that she starts this novel with the one line every writing instructor tells his students not to use: “It was a dark and stormy night.”). I suppose she can get away with it, though, since she’s not just any writer. She’s a Newbery award-winning author.
On that “dark and stormy night,” the mysterious Mrs. Whatsit comes into the home of Meg Murry and explains to Meg, her mother, her brother, and their friend Calvin O’Keefe the reason for Meg’s father’s disappearance. Mrs. Whatsit then leads Meg and the two boys on a journey to rescue her father. They travel to different worlds, and they even venture outside of time as we know it.
This is a true sci-fi/fantasy classic, and even those of us who don’t normally enjoy those kinds of books will find something to love in this one.
From the Mixed-Up Files of Mrs. Basil E. Frankweiler (1968)
Imagine you run away from home. Okay, that’s not so hard for many of us to do, is it?
Now, imagine you run away to the Metropolitan Museum of Art in New York City and hide out there for a week without anyone finding you. This was a little harder for me to imagine, growing up in Kentucky, but I loved the idea of it. Being surrounded by all that beautiful art, and being able to stay indoors the whole time (I did thoroughly sympathize with Claudia, the main character, on that point)?
That’s my kind of running away!
Add to that the mystery of who Mrs. Basil E. Frankweiler is, and why she would sell a Michelangelo angel sculpture to the museum for a mere $250? Now we’re talking about my kind of book.
This was a captivating, lighthearted read that I loved as a child and look back fondly upon as an adult. It is one I will definitely be reading to my little girls — or maybe I’ll encourage them to read it themselves when they’re older.
After all, stories that are read to you are good, but stories you read and experience for yourself are sometimes even better!
Sounder (1970)
A boy and his dog. I wrote a story with that title once when I was little, and it may well have been inspired by all the boy and dog stories I’ve read over the years. I don’t exactly remember what my story was about, but I know it wasn’t anywhere near as good as this one by William H. Armstrong.
This is a beautiful story about a boy’s friendship with his dog. But it’s more than that. It’s the story of a boy who has to grow up prematurely to support his family because his father has been put in jail for stealing a pig. It’s a story of racial prejudice and injustice. And it’s all the more interesting to me because it was written by a white man. Armstrong writes with such clarity and feeling, as if he had actually experienced the injustices himself. There are not many white writers who could claim the same.
This is another historical novel, and it’s set in a time and place in U.S. history that is dark and tragic — the Jim Crow years. I would say that time period is better left forgotten. But is it, really?
We would do well as a society to remember that time and history so that we would never repeat it again. And that’s why I think Sounder’s story should always be read.
Summer of the Swans (1971)
I loved this book when I read it as a preteen. I totally identified with Sara’s feelings of being awkward and ugly. I felt exactly the same things when I was that age. Now that I’m older, I realize there are very few of us who aren’t awkward and/or ugly in some way, so I don’t feel so down on myself. Not all the time.
But back to Sara: She learns that her negative self-image is nothing more than just thinking about herself more than she thinks about other people. When her younger, mentally handicapped, brother Charlie goes missing one night, Sara joins in the search for him. In the process, she discovers she’s a much more likable person than she previously thought.
This really is a beautiful story, and it’s one every girl should read. I’m not sure boys would enjoy it that much, but it might be worth a try.
Photo by Piotr Makowski on Unsplash
The Westing Game (1979)
This is one of the best mystery stories I’ve ever read. Ever. Seriously. And I’m talking adult and children’s books both.
An eccentric millionaire who owned the Westing Paper Company leaves his fortune to 16 people who live in the same apartment complex — if they can figure out who killed him. The most amazing thing is that he leaves them all clues that they have to piece together in order to solve the murder.
To this day, I remember all the clues. I don’t actually remember the whole story, though, so I think I could read the book again without any major disappointments. In fact, now that I’m writing about it, I really do want to read the book again!
If your child is anything like I was and loves a good mystery, get this book today.
Bridge to Terabithia (1978)
I loved, loved, loved this book. Really, I did.
I loved the fact that the two main characters — Jess and Leslie — created a whole other world just using their imaginations, and it actually became real. At least, it became real to them, and that’s what matters when you’re talking about imagination.
I loved experiencing Terabithia with them and watching as their friendship developed. There’s nothing like having a best friend, and I loved the way this boy and girl were able to be best friends without anything being complicated by sex. Granted, they were only in 5th grade at the time, but still — it seems like kids grow up way too fast these days, and that sense of innocence and fun just doesn’t seem to exist much anymore.
I will have to say I hated Katherine Paterson’s ending, although I won’t spill it here because I don’t want to ruin the book for anyone who hasn’t read it (or anyone who hasn’t seen the movie). I just will never understand why she wrote the ending the way she did. But even that marks this as a wonderful book because it makes you think about it — even years later.
Definite classic. Definite must-read!
Newbery Honor Books I’ve Read and Loved
These are some other books I loved as a child — notice the Little House on the Prairie series held a special place in my heart — as did mysteries, fantasies, and just about anything with an animal in it. Come to think of it, my tastes haven’t really changed much over the years!
1938: On the Banks of Plum Creek — Laura Ingalls Wilder
1940: By the Shores of Silver Lake — Laura Ingalls Wilder
1941: The Long Winter — Laura Ingalls Wilder
1942: Little Town on the Prairie — Laura Ingalls Wilder
1944: These Happy Golden Years — Laura Ingalls Wilder
1953: Charlotte’s Web — E.B. White
1957: Old Yeller — Fred Gipson
1966: The Black Cauldron — Lloyd Alexander
1968: The Egypt Game — Zilpha Keatley Snyder
1972: The Headless Cupid — Zilpha Keatley Snyder
1974: The Dark Is Rising — Susan Cooper
Newbery winners — a little something for everyone
I hope this list of Newbery award winners and honor books has given you some new ideas for great books to add to your children’s reading lists.
There is something to be said for books that can stand the test of time, and these books certainly do prove their tremendous staying power over and over again. The American Library Association was right on the money when they selected these books as the best in children’s literature. They truly are just that. | https://medium.com/raise-a-lifelong-reader/12-of-the-best-childrens-books-you-need-to-get-for-your-kids-41586c90aae6 | ['Mishael Witty'] | 2019-10-19 14:37:40.888000+00:00 | ['Ideas', 'Books', 'Parenting', 'Life', 'Relationships'] |
Spiritual Emergency. By Julia Sellers | Crisis or Transformation?
By Julia Sellers
“Spiritual emergence” is a profound spiritual opening that takes place in the form of different spiritual experiences that usually don’t constitute a serious problem or impairment in the everyday lives of the individuals who experience them. According to a paper by British psychiatrist Nicki Crowley, this kind of emergence is an organic process within human development, during which individuals are able to experience transpersonal elements. By transpersonal, I mean experiences and perspectives that extend beyond the personal level of the psyche and ordinary life.
“Spiritual emergency,” a term first used by Czech psychiatrist and noted transpersonal psychology researcher Stanislav Grof, is closely related to spiritual emergence. Grof posits that the phenomenon of spiritual emergency can actually be helpful in easing many of the problems of today’s world if this phenomenon is supported and understood in the correct way. He was one of the first professionals to identify the spiritual awakenings that happen spontaneously to many individuals in the form of spiritual emergencies. During a spiritual emergency, individuals experience mild or severe distress resulting in impairments in their psychological and social functioning caused by spiritual experiences that may be too difficult for the person to handle. A spiritual emergency may thus be defined as a crisis during which experiences are so intense that they temporarily disrupt the sense of self. According to author Emma Bragdon, the phenomenon of spiritual emergency is quite broad and may be seen as the reason behind the different forms an individual’s struggle takes, including addiction. I further agree with transpersonal psychologist David Lukoff, who posits that spiritual emergency often involves altered states of consciousness.
As both an experiencer and a researcher of out-of-body experiences, I posit that spiritual emergencies constitute an integral part of the spiritual emergence phenomenon, which includes a range of extraordinary spiritual experiences that either happen spontaneously or are elicited by spiritually oriented practices as well as the use of other external elements, techniques, and agents. According to Lukoff, spiritual emergence encompasses phenomena such as mystical experiences, near-death experiences (NDE), meditation-related experiences, kundalini awakening, psychic openings, visionary experiences, purported alien encounters, and other spiritual problems. These spiritual experiences are also called spiritually transformative experiences (STEs), non-ordinary transcendence experiences (NOTEs), and exceptional human experiences (EHEs).
Rejecting the Pathology Label
According to Hungarian psychiatrist Szabolcs Kéri, such spiritual experiences may be accompanied by “pathological” symptoms such as hallucinations, odd behavior, depression, and/or odd thoughts. Therefore, individuals suffering from such symptoms may be misdiagnosed with mental illness. As Grof pointed out, spiritual and mystical experiences offer personal growth potential. They can trigger a powerful transformation and further personal development in individuals undergoing such experiences. Mislabeling them as pathological symptoms may be damaging to spiritual development as well as to the individual’s psychological and physiological well-being.
Kéri has pointed out that out-of-body experiences (OBEs) and other extraordinary human experiences, such as glossolalia (“speaking in tongues”) or possession, may be mistaken for psychoses if the cultural background of the individuals experiencing them is ignored. Psychologist Michael A. Persinger’s study of religious experience, for example, revealed intriguing electroencephalogram (EEG) activity in separate cases of glossolalia and transcendental meditation. During the transcendental meditation case, the EEG showed delta-wave activity in the temporal lobe that lasted about 10 seconds. The second case involved a spike in wave activity in the temporal lobe of an individual who performed glossolalia. Both cases represented healthy individuals with no history of pathology. Based on the study, Persinger hypothesized that experiences of a mystical or religious nature naturally occur in the temporal lobe and are of a transient nature.
Fortunately, there is new hope for people undergoing spiritual experiences that are too much for them to digest without appropriate professional help. The hope comes in the form of a new diagnostic category called “Religious or Spiritual Problem,” which in 1994 was officially entered into the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV).
The new category actually defines spiritual problems as distressing episodes in the life of individuals involving the questioning of spiritual values which are not necessarily related to an organized church or religious institution. Based on this diagnostic category, spiritual problems in the broad categories above may for the first time be officially treated as non-pathological rather than pathological problems.
Understanding Out-of-Body Experiences
Consider OBEs. During these experiences, people often see their own physical body from the elevated visuospatial perspective typical of these experiences. According to psychiatrist Stuart W. Twemlow, OBEs should not be treated as pathological or even as abnormal. He therefore posits that therapists should view OBEs as experiences with the potential for spiritual transcendence. Furthermore, psychologist Alexander De Foe suggests that since both NDEs and OBEs are life-changing experiences that may have a significant impact on an individual’s psychological well-being, those who undergo them should be encouraged to talk openly about their extraordinary experiences within a counseling setting.
Although different intensities of OBE may be distressing, experiencing one does not automatically mean that the person suffers from psychosis. I believe that spiritually-based OBEs may be considered profound transformational experiences and/or spiritual problems experienced by those who undergo spiritual emergence or its more intensive form, spiritual emergency. They may or may not have features resembling psychosis. A number of authors note the resemblance between paranormal experiences and pathological states such as psychosis. Kéri studied the relationship between religious conversion as a form of spiritual emergency and psychosis. The study found that of 53 individuals referred to a psychiatry center with a diagnosis of psychosis, 24 were not pathologically ill after all. Instead, they had had spiritual experiences, such as religious conversions, which represented a deep, transformative episode in their lives.
In addition, some therapists interested in the potential healing aspects of the OBE phenomenon have introduced techniques aimed at helping their clients to intentionally trigger an out-of-body-like experience during the therapy session using artificial means. The goal is to encourage the spiritual as well as personal-level development that a transformative OBE may offer. There has been little research done so far on the therapeutic utilization of OBEs artificially induced during the counseling session, although Paul W. Schenk, in his 2006 book The Hypnotic Use of Waking Dreams, suggests that waking dreams contain certain elements occurring during both NDEs and OBEs. Within the framework of therapy, he encourages clients to deliberately induce a waking-dream state with the help of different visualization- or imagination-based techniques. The aim is to attain an OBE/NDE-like experience that can be utilized for further personal and spiritual growth.
What the Research Shows
The transpersonal element of altered states of consciousness, including OBEs, has been recognized by many authors who study the subject. According to De Foe, the topic of OBEs deserves more attention, especially from the point of view of how therapy may aid those experiencing OBEs. The majority of the current OBE studies examine elicited OBEs in the clinical population rather than in the healthy population or look at OBEs induced artificially rather than at will or occurring spontaneously in the waking/active state.
To date, OBEs in people with pathological conditions such as epilepsy have been studied a fair amount. However, there is a severe lack of studies aimed at researching spontaneous OBEs within the non-pathological population, which have healing as well as transformative potential. Individuals who undergo spontaneous, naturally occurring OBEs may be hesitant to talk about them out of fear of being put down or ridiculed if they do. De Foe’s study argues there has been a severe lack of research into the therapeutic benefits of exploring OBEs. According to him, one of the reasons is the lack of a general agreement on how to approach the phenomenon of OBEs within the counseling framework.
A 2018 study conducted by psychologists Habib Nobakht and Karl Dale implied that dissociation and trauma are common features of both NDEs and mystical experiences. Psychiatrist Jerome Kroll and colleagues studied the relationships between different types of altered states of consciousness such as mysticism, absorption, dissociative episodes and childhood/adolescent trauma and neglect. Their study showed that the tendency to experience dissociative states of consciousness was not correlated with the tendency to undergo mystical experiences characterized by altered states of consciousness. According to the 2016 study conducted by psychologist Yochai Ataria, similarities exist between mystical and traumatic experience. The author posits that one of the most significant common elements of both is the subject’s encounter with nothingness. Interestingly, Greyson and Khanna’s 2014 study of near-death survivors found that NDEs were associated with greater post-traumatic spiritual growth. The study further revealed that NDEs have no influence on post-traumatic spiritual decline.
My Unique Perspective
I myself began having spontaneous out-of-body experiences many years ago. The first was back in 1994. I remember waking up in the middle of the night. Without checking where I was located, I knew I was in my bedroom but clearly out of my body, hovering above my bed. Suddenly, something pulled me very strongly toward the window. I tried intensely to resist but could not, as I was physically out of my body. I don’t remember why I was pulled to the window, as I could have just as easily been pulled through the wall or a closed door. (When out of body, you can easily pass through both.) I clearly heard my own breathing and my heart beating as if the sounds were coming from a nearby radio.
Furthermore, I was able to hear everything that was going on in the next room as if I were present. Suddenly, I was able to see a light coming from either the left or the right — I could not tell which, because when you are out of body, sides sometimes get reversed as through a mirror. The light was becoming more and more intense. I also remember that I tried to raise my hand; however, I could not see a hand. My hand was a part of my real, physical body lying on my bed at that time, but what I saw from above was only its contours. It looked cloudy, shadowy, and gaseous and I knew that this was not a hand made of flesh, muscle, and tissue. It was a phantom, an etheric double hand. At the point of looking at the phantom hand, I clearly thought to myself: “Get back to your body.” And so I did, right after I intended to, using my mind.
Based on the knowledge drawn from such OBE encounters, I came to believe that death does not exist, space and time are transcendent, and life itself is but a small portion of a physical dimension of a much greater, holographic, multidimensional existence or consciousness. These perceptions have convinced me that the transformative nature of OBEs holds great potential, whether it is a onetime event or gradually develops over the course of one’s life. OBEs have a high potential to heal both on the psychological as well as physical level. Further, they are transformative events that bring a host of potential benefits to their experiencers. The gifts of OBEs include unitive consciousness, visionary experiences, ineffability, mystical and contemplative states, etc. I know a man who has had more than 15,000 spontaneous, genuine OBEs. They have helped him greatly on his journey of spiritual emergence, leading to transformation and even transcendence.
Therefore I strongly believe that OBEs in healthy individuals are an essential part of the development of the human psyche, as are other extraordinary or transcendental states of consciousness. I further believe OBEs within the healthy population, where there is no history of clinical pathology, should be fully respected by society and treated as non- pathological. As of today, there is no substantiated scientific evidence proving that extraordinary or other anomalous experiences of a spiritual nature are dysfunctions, deviations, or pathologies.
It is too bad that in many cases people’s OBEs are deeply misunderstood by society and mistaken for a pathological condition by the medical community. Clearly, further scientific research on the effects of spontaneous OBEs and other extraordinary human experiences on the overall well-being of individuals — especially the potential to heal and transform spiritually — should be conducted. | https://medium.com/mad-in-america/spiritual-emergency-crisis-or-transformation-3004847ffcea | ['Mad In America'] | 2020-04-06 18:06:50.652000+00:00 | ['Out Of Body Experience', 'Transpersonal Psychology', 'Mental Health', 'Spiritual Growth', 'Altered States'] |
6 Peculiar Lessons From My 6th Month on Medium | In November, I’ve written 11 posts (one of which was a silly poetry challenge, so I’d count that out, as I didn’t put much effort into it). My curation rate was just over 60%.
I didn’t get accepted into the major publications I usually targeted — like The Ascent or P.S. I Love You. Overall, I felt uninspired and blah, which you can probably notice from the random topics I’ve chosen. But it was the best I could do, and there will always be months like that.
What encouraged me was that older work started to pay off — hence the $65 after hardly any good writing in November. Stats have stayed pretty much the same, while my income went up significantly. I can explain that through either more reading time or a different kind of audience reading my pieces.
Do I feel good about my stats & earnings?
Short answer: Yes. Progress is progress, and I’m still finding my voice on Medium. I don’t feel like I’ve yet given it my all, which could significantly boost my views over the months. Right now, I’m still learning consistency, and I suspect that quality only comes after first making writing a habit. Habit formation is tricky, you guys, and I’m grateful that I’m at least showing up.
Long answer: I feel good about my earnings, but I’m still battling self-doubt on whether I’ll actually make it on Medium. I’ll talk more about this in the lessons below, so read on if you feel the same way.
#1 No amount of courses will teach you virality
As I mentioned in my previous articles (which I’ll link at the very end), I enrolled in two popular courses on Medium writing. One is taught by Sinem Günel, and the other is taught by Tim Denning and Todd Brison.
Both courses are exceptional, and I love that I got to experience two very different teaching styles. However, they have left me a little bitter. And here’s why:
After trying to get my titles, images, research, and formatting on point, reaching a high curation rate, and being accepted in large publications, success is still elusive. The success marketed by these courses, that is. This is why I think they should stop doing that altogether — they are selling the exception, not the rule.
I’ve noticed that many of the writers who routinely accomplish virality (not one-time hits) are extraordinary people. People who either have successful businesses at a young age or have a vast and various amount of life experience. People who are experts in a popular field — like finance, I.T., martial arts even. People who always make it a massive hit when sharing their knowledge. (like Amardeep Parmar — check him out, he only wrote three pieces this month, and he’s still insanely successful.)
So, I guess that in order to monetize your life experience, you need to have some relatable and notable things to share.
#2 Community is very important for boosting morale
What I appreciate most about these courses is the incredibly supportive community they have around them.
We cheer for each other, share our victories, and clap on each other’s articles. We give and receive feedback and feel like we’re not alone in this.
You don’t have to be part of a private Slack group to benefit from this experience; you might already have your community right here on Medium, without even being aware of it.
It’s the people returning to your posts again and again and taking the time to respond. It’s the friends you eventually make in Medium Facebook groups. It’s the editors you get friendly with in small publications.
But there’s a cautionary side to community, too. There are people out there doing the work for years, pumping tons of good content out there, getting curated, getting into large pubs, and still not seeing results. It breaks my heart to see this. And occasionally, especially in F.B. groups, there will be salty people trying to bring you down.
Bottom line? We should be learning from our community, even if it’s sometimes disheartening. The fact that people share their process, the money they make, and what they’ve learned is already precious information.
#3 At the end of the day, it’s all about the reader
I love writing about my life. It’s even in my bio: “Processing life through stories.” This month, however, I’m trying to move away from narrating life experiences. I’ll be exploring more general topics and only insert my life experience where it’s needed.
I feel like I’m not relatable enough, but maybe you have a different opinion on that (and I’d love to hear feedback from you). Let me paint a small picture of my life real quick:
I’m an orthodontist in the process of moving away from my career. Talking about my current job is not a lot of fun, and I don’t think people would be interested to hear more about it. Talking about career change has been met with an equal number of rejections from pubs, and I don’t know why.
I live in Europe — Romania, Transylvania. The cultural difference is real, and I’d love to write more about what’s happening on this side of the world, but it doesn’t really resonate. I see writers outside of the U.S. trying to write like they’re Americans or hiding where they’re actually from like it’s a bad thing. Like we’re lesser than. This American-centric approach should really stop, or maybe it’s just my perception.
I struggle with anxiety. Now, that’s indeed relatable. But I can’t use it a lot in my topics because it’s triggering, and it’s a heavy topic sometimes.
I love gardening, but Medium isn’t into that. Just like I love hiking, pets, and knitting. Those are some quirky passions that people don’t seem to read much about on here.
So I revert to writing personal development stuff because I’m into that too, and people seem to like it. In the end, it’s all about bringing value to the reader by writing what they love to read. Medium is not my diary, and I’m not Shannon Ashley…yet. Hoping to be her someday, fingers crossed.
And three more bite-sized lessons for you:
#4 Writer’s block is essentially self-doubt
When you run out of ideas, analyze your self-talk. I bet it’s trash. There’s only one way to get out writer’s block, and it’s by writing what you perceive as crappy content. Ship it into the world and let go.
#5 If content is king, then research is queen
I’ve lost count of the number of times editors have asked me to include more research in my articles. Medium is promoting quality, well-researched pieces, and we should be grateful for that. It’s training us to become journalists of sorts. Real writers. So don’t skip this step, and don’t believe that one or two links will do. Back up your arguments with reputable sources.
#6 Publication guidelines are hidden gems
When we write a story, it goes something like this: we have an idea, craft a good title, and then write our little hearts out. We don’t care about paragraph length, sources, what’s in it for the reader, topic, etc.
Next, we try to find a publication that will accept our new story without even bothering to understand what that publication is all about.
How about you reverse engineer your process and start with a certain publication in mind? Respect their guidelines, adhere to their general style, thoroughly research your topic, bring tons of value. I bet they won’t reject it then…But let’s face it, we rarely do this. If it were easy, everyone would be doing it.
Final thoughts
Writing on Medium is rewarding in its own way. It opens so many doors; it connects me with so many people.
I have my doubts about ever becoming a serious writer as I navigate my way through the online world, trying to find opportunities. But even if this experiment doesn’t work out, I will come out victorious because of a writing habit, a huge portfolio, and a skill to produce content. In the end, those are valuable skills to apply in blogging, freelance writing, and other similar areas.
If you’re new to my journey, follow my progress below, and you can subscribe to my Newsletter here.
Until next time, my friends. Don’t doubt yourselves, and keep writing! | https://medium.com/illumination/6-peculiar-lessons-from-my-6th-month-on-medium-2adf1ddf8dcf | ['Adriana Sim'] | 2020-12-02 12:22:55.954000+00:00 | ['Writing Tips', 'Medium Writers', 'Mindset', 'Inspiration', 'Writing'] |
Research in Computational Biology and Bioinformatics | Research in Computational Biology and Bioinformatics
Subfields, research areas, data sources and where you can publish your work in the fields of computational biology and bioinformatics
Computational biology and bioinformatics are two popular fields in the scientific research community as more and more interdisciplinary fields emerge. Many who seek higher studies opportunities have asked me this question;
What are the areas one can study or do research in computational biology and bioinformatics?
So, I thought of sharing this article explaining a few subfields in computational biology and bioinformatics, possible research areas, data sources and where you can publish your work.
Image by Arek Socha from Pixabay
Main Research Areas
1. Genetics and Genomics
The study of inheritance based on DNA and how individuals vary is known as genetics, whereas the study of the structure, functions and mapping of genomes is known as genomics. Researchers make use of data obtained from DNA and RNA sequencing and microarrays to determine important nucleic acid patterns and structures.
Image by PublicDomainPictures from Pixabay
Metagenomics (also known as environmental genomics) is a subfield of genomics which studies the genomes of micro-organisms obtained from environmental samples.
Image by Gerd Altmann from Pixabay
A few research problems in genetics and genomics include,
Genome assembly Haplotype phasing Gene prediction Metagenomics binning Plasmid detection
2. Transcriptomics
Transcriptomics is the study of an organism’s transcriptome. The transcriptome is referred to as the sum of an organism’s RNA transcripts. The DNA information in the genome gets converted to RNA through a process called transcription. A segment of DNA that gets transcribed into an RNA molecule is called a transcription unit which encodes genes.
Image by Gerd Altmann from Pixabay
A few research problems in transcriptomics include,
Transcriptome assembly Transcriptome mapping Applications of transcriptomics in autoimmune diseases Differential expression of miRNAs
3. Proteomics
Proteomics is the study of proteins. Proteins play an important role in living organisms for growth, regulation and maintenance of the body’s tissues and organs. The process of transcription produces messenger RNA (mRNA) which serves as a template for the synthesis of protein through translation. Hence proteins produced depend on the genes that are transcribed from the mRNA.
Image by Gerd Altmann from Pixabay
A few research problems in proteomics include,
Applications of proteomics in drug discovery Protein folding Protein structure prediction Protein-protein interaction networks
4. Metabolomics
The study of metabolites, which are molecules produced by metabolism within tissues and cells is known as metabolomics. Researchers try to identify and quantify metabolites using different analytical methods and interpret data. There are difference subfields of metabolomics such as metabonomics and exometabolomics.
Image by Gerd Altmann from Pixabay
A few research problems in metabolomics include,
Metabolic reprogramming Mass spectrometry strategies Identification of biomarkers
5. Phylogenetics
Phylogenetics is the study of how species evolved and what relationships exist within groups of organisms. Relationships are determined using phylogenetic inference methods with DNA sequencing data or morphology. This produces a phylogenetic tree which shows the evolutionary history of a group of organisms.
Image by skeeze from Pixabay
A few research problems in phylogenetics include,
Inferring phylogenetic trees Phylogenetic networks Bayesian phylogenetics Phylogenetic model selection Evolutionary models
6. Systems biology
Systems biology attempts at understanding cells, tissues and organisms, and how they behave and function from the perspective of systems. Researchers try to understand biological processes such as cell growth and maintenance, metabolism and homeostasis, using mathematical models and simulations.
Image by Colin Behrens from Pixabay
A few research problems in systems biology include,
Gene regulatory networks Modelling metabolic interactions Model protective mechanisms induced by antibiotics Studying cell signalling pathways
Data Sources
There are many databases containing biological data available at present. Given below are a few popular databases.
DNA databases
RNA databases
Protein databases
Image by Gerd Altmann from Pixabay
Software and tools
Many open-source software and tools have been introduced to solve various problems in computational biology and bioinformatics. These tools range from simple command-line tools to sophisticated GUI-based applications. The scientific community is encouraged to publish source code publicly under open-source licensing so that others can reuse, modify and improve the code.
Image by Gerd Altmann from Pixabay
Many categories of tools can be found across the literature such as,
Where to Publish Your Work?
Among the possible conferences, you can submit your work in the area of computational biology and bioinformatics to,
Intelligent Systems for Molecular Biology (ISMB) European Conference on Computational Biology (ECCB) Research in Computational Molecular Biology (RECOMB) Workshop on Algorithms in Bioinformatics (WABI) Asia Pacific Bioinformatics Conference (APBC) Pacific Symposium on Biocomputing (PSB) International Conference on Bioinformatics & Biomedicine (BIBM)
Image by mohamed Hassan from Pixabay
Among the possible journals, you can submit your work in the area of computational biology and bioinformatics to,
Final Thoughts
I have explained only a few subfields and problems in the areas of computational biology and bioinformatics. There are many more, and you can find further information by doing a bit of Googling.
Hope you found this article informative. Feel free to share this article with your friends who are planning for higher studies in computational biology and bioinformatics.
Cheers! | https://medium.com/computational-biology/research-in-computational-biology-and-bioinformatics-121d92681aad | ['Vijini Mallawaarachchi'] | 2020-08-21 05:10:34.320000+00:00 | ['Research', 'Biology', 'Genomics', 'Science', 'Bioinformatics'] |
Is tech-veganism the trend of the future? | Avocado toast. Photo by Anna Pelzer.
Is tech-veganism the trend of the future?
How the debate over ethical technology will lead to an industry where there’s something for everyone
We all use technology even if we are not directly involved in the industry. While, of course, we have interactions with plenty of industries we do not have a role in ourselves, technology plays a proportionally larger role in a majority people’s lives in comparison with number of people who produce said technology. And because the risks involved with not understanding it are relatively high compared to other fields we don’t understand, technological awareness is becoming less and less and niche and more so common knowledge.
For example, we don’t have to understand the process of catalytic cracking to trust that the gasoline we pump into our cars will be consumed as fuel and allow us to drive. Perhaps the only other industry that poses the same level of risks as technology is food, and people have become intensely more aware of what goes on in that industry in recent years, seen in the rise of organic, vegan/vegetarianism, fair trade, and all other sorts of mindful food consumption trends.
I think there will be something like this in the tech industry very soon. Not to say that there’s anything inherently “evil” about the tech industry as it is, but people will soon realize the colossal effects the decisions of many have on the few, and then start to take a more active, and educated, approach when it comes to their technological lives. This will lead to something like an organic movement for technology, where people opt for a higher quality product at a higher price, not because of any different in the specific features it offer, but simply by virtue of the process in which it was developed. Namely, people will take a more active interest in the “mission” behind the companies whose products they use, and decide whether that company’s values align with their own.
The signs of this happening are already there, with several large tech companies being under scrutiny by government and consumers alike. But up until now it’s been very divisive and extreme. There are the tech vegans (“Facebook is bad, don’t use it”), and the tech normies, i.e. those who don’t care about where the food (software) is coming from, as long as it tastes good (functions well). But what’s lacking is the option to take a mindful approach to technology which does not entail eliminating certain companies from your life completely (or, attempting to, at least. Living without Google is harder than you might think).
What I see happening in the future is simply more options. Similar products but all coming from smaller players in the industry, rather than one of the big giants, and that those smaller players being able to subsist indefinitely without having to be acquired. Think Whole Foods (now a mainstream brand) vs. your local hipster grocery market. They can both co-exist without competing with each other. You can shop at either one according to price, social impact, convenience, and of course, quality. | https://medium.com/swlh/is-tech-veganism-the-trend-of-the-future-9768c7ba3dc1 | ['Nick Sukie'] | 2020-09-29 19:01:34.393000+00:00 | ['Digital Life', 'Software Industry', 'Startup', 'Mindfulness', 'Technology'] |
The Difference Between Healthy and Unhealthy Shame | The Difference Between Healthy and Unhealthy Shame
YES — there is a healthy type of shame and it’s not what you think.
Photo by Max Brown
The subject of “healthy shame” might be touchy for you (because no shame is healthy shame, gaddamit!), but please don’t jump to conclusions.
I’m only using this terminology because I honestly can’t find a better word for it. If you think of one — please let me know.
What I do know, is the “healthy shame” I’m about to describe is not guilt, it’s not embarrassment, and it’s not even something that can be used to “motivate” someone to do better or be different.
But it is something that could change the course of your entire life.
I know it did for me.
Healthy shame is our humanity, really, but that’s super vague so let’s unpack it together and you can tell me what you think.
The nature of your humanity is…
Vulnerability.
We’re all vulnerable as fuck, people. You’re kidding yourself if you think otherwise or if you think you can outrun or get rid of your vulnerability.
Case in point: One cell in your brain can randomly go haywire and you’re donesies, man.
Another case in point: People do weird things to you and you do weird things to other people and sometimes this breaks you.
You can’t protect yourself from death or heartbreak, darling. This is the epitome of vulnerability.
The difference between healthy shame and unhealthy shame lies in how we interpret our vulnerability.
Mkay, cool. Can we get to the difference between healthy and unhealthy shame already?
Healthy shame is believing your vulnerability is beautiful and needs love. Unhealthy shame is believing your vulnerability is a flaw that needs fixed.
Healthy shame is the humility, compassion, and tenderness you tap into when you get in touch with your vulnerability.
Healthy shame is the wisest version of yourself, wooing you back to your humanity and vulnerability despite your best efforts to escape it.
Healthy shame is the tender arms wrapped around you when you’re at your most broken, whispering into your hair, “You don’t need to be fixed, darling. You just need to be loved.”
This embrace of healthy shame is so tender, so full — so vulnerable — that we quite often can’t stand it.
This is where unhealthy shame comes in.
Unhealthy shame tells us we are a broken machine to be fixed rather than a human to be loved. It cracks the whip and tells us we’ll never be good enough if we don’t get our shit together.
We prefer this lie, because it offers the hidden promise that one day we’ll get rid of our vulnerability if we just try hard enough.
But of course, we can’t get rid of our vulnerability, so we keep trying and trying, frantically spiraling our way into burnout or suicide.
Don’t let this happen to you, love. You are far too precious.
Healthy shame and unhealthy shame are really just two different lenses looking at the same thing.
The nature of humanity is vulnerability, which means we are all broken in some way.
Whether we use healthy or unhealthy shame to frame this reality has the power to change the course of our entire life.
Do your best to ignore the lie of unhealthy shame that says your brokenness is evidence there is something intrinsically wrong with you that you must fix.
Instead, listen to the heartbeat of healthy shame; let the drum beat lead you back to your broken self in this broken moment where you can finally hear the truest whispers you’ve ever heard: | https://medium.com/just-jordin/the-difference-between-healthy-and-unhealthy-shame-9a46c2f6661f | ['Jordin James'] | 2019-08-27 03:00:29.701000+00:00 | ['Relationships', 'Love', 'Life Lessons', 'Mental Health', 'Life'] |
Building a Multiple Object Detection Model with TensorFlow’s Object Detection API | This post isn’t meant to be an in-depth explanation of machine or deep learning, but rather, provide a practical guide on setting up object detection for projects. This blog post will cover building a custom object detection system using TensorFlow’s Object Detection API. I have written another blog post on how to build a custom, single object detection model using Fast AI, which is linked here!
Multiple Object Detection on a Web Application running on Chrome
This is part one of two on building a custom object detection system for web-based and local applications. The second part is written by my coworker, Allison Youngdahl, and will illustrate how to implement this custom object detection system in a React web application and on Google Cloud Platform (GCP).
While there are a few examples of how to implement object detection models online, many are deprecated, do not provide clear documentation on troubleshooting, do not provide customization instructions, or do not provide instruction on exporting the machine learning model. For TensorFlow specifically, the Object Detection API is difficult to navigate, and the troubleshooting process took quite a bit of time. It is my hope that this blog post provides some troubleshooting tips and easy, step-by-step instructions for setting up a custom object detection system. Additionally, many existing tutorials or examples use an ML model that is very slow and would not be practical on mobile.
For this blog post, I ran everything with the following specs: macOS Catalina, Version 10.15.4, 16GB RA, 2.3 GHz, 8-Core Intel i9 .
Background
The impetus for this project is for use in an object-detection web application for detecting products in real-time. In the age of coronavirus, this application is useful as it allows customers to immediately gain information about products without requiring people to physically touch them. Additionally, the app is accessible as a web application rather than a smartphone-specific application and creates exciting opportunities for personalization due to its portability, ease of use, and detection capabilities.
References & Acknowledgements
Before beginning the post, I’d like to acknowledge the following people, of which the work is heavily based off of or referenced. Tanner Gilbert has done some great work with documenting how to use the Object Detection API on his YouTube channel. His work and code reference previous work done by Dat Tran and EdjeElectronics on using the Object Detection API. I found the following guide by Adrià Gil to be incredibly useful for troubleshooting and learning how to properly export TensorFlow models. Additionally, I’d like to thank and acknowledge Allison Youngdahl for her help with proofreading this article and for assistance with troubleshooting as well.
Decision Making
When building the web application, the team looked into several tools for object detection. The two tools that came to mind first were TensorFlow and Fast AI. However, the team also looked into MediaPipe as a potential solution. The team eventually chose TensorFlow because of available documentation on porting TensorFlow models to web applications. Fast AI doesn’t, at the time of writing this blog, have an explicit tutorial on multiple object detection — a desired feature of the web application. The team strayed away from MediaPipe due to a lack of available documentation as well. This blog post will walk through TensorFlow’s Object Detection API for multiple object detection, which was used to build a model for the web application.
TensorFlow’s Object Detection API
TensorFlow’s Object Detection API is an open-source framework that’s built on top of TensorFlow to construct, train, and deploy object detection models. There are new models being added even today, with the most recent addition in March 2020 at the time of writing this article. By employing transfer learning (repurposing a pre-trained model for use with items outside the original training data set), the Object Detection API powers multiple object detection for custom items provided you have an appropriately built/sized dataset.
Building a Custom Model with TensorFlow’s Object Detection API
Disclaimer: For the object detection API, I am writing the instructions assuming that you are using a Mac. If you are following along and use Windows, I cannot guarantee that the same steps will work for you. Thank you for your understanding.
Step 1) Clone the Repository and Install Dependencies
The first step is to clone the TensorFlow models repository and set up the Object Detection API. Tanner Gilbert has dockerized this process, which is available here. If doing it manually, you can begin by first cloning the TensorFlow models repository by typing: git clone https://github.com/tensorflow/models . After cloning the repository it is a good idea to install all the dependencies. But first, we should probably install Anaconda. Assuming that you have homebrew installed, you’re going to want to install Anaconda via brew cask install anaconda . Then, you need to insert the following line in your ~/.bash_profile :
export PATH=”/usr/local/anaconda3/bin:$PATH”
This will enable things to work. The terminal command to get this working:
echo ‘export PATH=/usr/local/bin:$PATH’ >>~/.bash_profile
Now, setting up a conda environment. I found a pretty helpful conda cheat sheet online. If you run conda info , you can check if anaconda was installed properly. To create a conda environment with a specific version of Python, run the following code (replace parenthesis as necessary):
# GENERAL
conda create — name (name of project) python=(version you want)
# EXAMPLE
conda create — name tf-object-detection python=3.7.4
Now, you need to activate the environment by doing the following: conda activate tf-object-detection .
Now, installing those pesky dependencies. I would suggest using conda to install but you can also use pip, both of which are shown below:
pip install — user Cython
pip install — user contextlib2
pip install — user pillow
pip install — user lxml
pip install — user jupyter
pip install — user matplotlib
OR
conda install Cython
conda install contextlib2
conda install pillow
conda install lxml
conda install jupyter
conda install matplotlib
conda install tensorflow=1
Installing the COCO API
COCO is a large image dataset designed for object detection, segmentation, person keypoints detection, stuff segmentation, and caption generation. If you want to use the dataset and evaluation metrics, you need to clone the cocoapi repository and copy the pycocotools subfolder to the tensorflow/models/research directory. Here’s what that looked like on my local machine:
cd cocoapi/PythonAPI
make
cp -r pycocotools <path_to_tensorflow>/models/research/
cp -r pycocotools /Users/**put username here**/Desktop/deep-learning-multiple-shoe-training-and-porting-model-tensorflow/Tensorflow-Object-Detection-API-Train-Model/models/research git clone https://github.com/cocodataset/cocoapi.git cd cocoapi/PythonAPImakecp -r pycocotools /models/research/cp -r pycocotools /Users/**put username here**/Desktop/deep-learning-multiple-shoe-training-and-porting-model-tensorflow/Tensorflow-Object-Detection-API-Train-Model/models/research
Using make won’t work on Windows. To install the cocoapi on Windows the following command can be used:
pip install “git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI"
Protobuf Installation & Compilation
The Tensorflow Object Detection API uses .proto files. These files need to be compiled into .py files in order for the Object Detection API to work properly. Google provides a program called Protobuf that can compile these files. Protobuf can be downloaded here. Place the downloaded file anywhere you want (for example in the Desktop folder). The specific file I needed to download was the following: protoc-3.11.4-osx-x86_64.zip . After extracting the folder, you need to go into models/research and use protobuf to extract python files from the proto files in the object_detection/protos directory.
The official installation guide uses protobuf like:
./bin/protoc object_detection/protos/*.proto — python_out=.
This script should work if you installed and did everything correctly. The steps below are if, for some reason, they aren’t working (which is mostly an issue if you’re using a Windows computer). Sometimes, the * , which stands for all files, doesn’t work for people so you can use this Python script to execute the command for each .proto file.
import os
import sys
args = sys.argv
directory = args[1]
protoc_path = args[2]
for file in os.listdir(directory):
if file.endswith(“.proto”):
os.system(protoc_path+” “+directory+”/”+file+” — python_out=.”)
This file needs to be saved inside the research folder and I named it use_protobuf.py. I had renamed the downloaded protoc folder from protoc_macosx_version to protoc and moved it to the research folder. Again, the python script command using use_protobuf is only if the original protoc script command isn’t working! Now we can use it by going into the console and typing:
python use_protobuf.py <path to directory> <path to protoc file>
In my case, I had to run the following commands:
xattr -d com.apple.quarantine protoc/bin/protoc
xattr -d com.apple.quarantine protoc/bin/
protoc/bin/protoc object_detection/protos/*.proto — python_out=.
Adding Necessary Environment Variables & Finishing the TensorFlow Object Detection API Installation
Lastly, we need to add the research and research slim folder to our environment variables and run the setup.py file. To add the paths to environment variables in Linux you need to type (in terminal):
export PYTHONPATH=$PYTHONPATH:<PATH_TO_TF>/TensorFlow/models/research export PYTHONPATH=$PYTHONPATH:<PATH_TO_TF>/TensorFlow/models/research/object_detection export PYTHONPATH=$PYTHONPATH:<PATH_TO_TF>/TensorFlow/models/research/slim
In my case, I would run the following:
export PYTHONPATH=$PYTHONPATH:/Users/**insert username here**/Desktop/deep-learning-multiple-shoe-training-and-porting-model-tensorflow/Tensorflow-Object-Detection-API-Train-Model/models/research export PYTHONPATH=$PYTHONPATH:/Users/**insert username here**/Desktop/deep-learning-multiple-shoe-training-and-porting-model-tensorflow/Tensorflow-Object-Detection-API-Train-Model/models/research/object_detection export PYTHONPATH=$PYTHONPATH:/Users/**insert username here**/Desktop/deep-learning-multiple-shoe-training-and-porting-model-tensorflow/Tensorflow-Object-Detection-API-Train-Model/models/research/slim
To run the setup.py file we need to navigate to ../models/research and run:
# From within /models/research/
python setup.py build
python setup.py install
Now, run the object_detection_tutorial.ipynb from the object_detection folder (Tanner Gilbert has created this helpful Jupyter Notebook, which is available here). You can also check everything is working by simply importing object_detection inside a python shell: import object_detection . If there’s no output, it’s likely working. If things go well, your jupyter notebook looks like the following:
Gathering Data for Transfer Learning
Now that the Tensorflow Object Detection API is ready to go, we need to gather the images needed for training. To train a robust model, we need lots of pictures (at least 50 for each item being trained with 50 images of various items in the same photo) that should vary as much as possible from each other. That means that they should have different lighting conditions, different backgrounds, and lots of random objects in them. You can either take the pictures yourself or you can download them from the internet. I’ve included a separate repository that walks through formatting images and exporting them here. The only difference is you should also run conda install pyqt .
Inside the Fix_Image folder, there is a folder called images which should be empty. Before training the model or creating the testing or training directories, it’s essential to reformat the images (at least, for the way I’m doing it in TensorFlow) to reduce the resolution of the images. This is crucial to prevent the training process from taking too long. Let’s say that you have taken your photos and you’ve added them to the images folder. The next step is to make sure you’re in the Fix_Image directory. Next, run the following command in terminal:
python transform_image_resolution.py -d images/ -s 800 600
This automatically changes the resolution of all the photos in the images folder via a Python script. The script has run properly if you get no output. You can check if the script has actually worked by going into the images folder and seeing the resized images — they will look different than the original photos!
After you have all the images move about 80% to the object_detection/images/train directory and the other 20% to the object_detection/images/test directory. Make sure that the images in both directories have a good variety of classes.
It’s important to note that having .png and .jpg copies with the same name may mess up things when generating XML files as noted by folks who have followed my instructions. For example, if you have picture1.jpg and picture1.png, this may create issues when generating XML files (next step). One way to prevent yourself from having duplicate photo names is the following method (which I haven’t tested):
Download the following script here. Place the downloaded files in a folder called google-images-download. Then navigate to the folder where the python script is and execute (where the item of interest is the object you’re interested in collecting images for):
python google_images_download.py — keywords “(item of interest)” — limit 100 — format jpg
Labeling Data
With all the pictures gathered, we come to the next step — labeling the data. Labeling is the process of drawing bounding boxes around the desired objects. LabelImg is a great tool for creating an object detection dataset.
LabelImg supports two formats, PascalVOC and Yolo. For this tutorial make sure to select PascalVOC. LabelImg saves a xml file containing the label data for each image. These files will be used to create a tfrecord file, which can be used to train the model. The code and documentation for this part is available through my Github in the following repository.
At the end of things, it should look something like the following for the training/test folders (I’m just showing a snippet of the test folder). | https://ronak-k-bhatia.medium.com/building-a-multiple-object-detection-model-with-tensorflows-object-detection-api-5a71eaaa5b96 | ['Ronak Bhatia'] | 2020-05-29 17:14:28.118000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'TensorFlow', 'Multiple Object Detection', 'Object Detection'] |
If You Failed to Get a Job, You Are Not Alone | If You Failed to Get a Job, You Are Not Alone
How industry titans have contextualized and dealt with failure
When you get rejected from a job, you get the impression that you are a failure, the only person who didn’t manage to get that, that you are doomed.
You beat yourself up.
However, if you look at the data, it is hard to find someone who never failed, or was never rejected.
In this category, we include people who also currently amaze us with the work that they have done. | https://medium.com/better-programming/if-you-failed-to-get-a-job-you-are-not-alone-66c074b211ed | ['Fatos Morina'] | 2020-12-16 15:04:54.705000+00:00 | ['Life Lessons', 'Programming', 'Startup', 'Life', 'Technology'] |
Take the boring out of business | Do you already follow any of these “boring” companies on Instagram?
When I started working with my colleagues at Imille, we together analyzed the way that corporations use Instagram in order to help us outline our Instagram strategy for Enel Group, one of the global leaders in energy production and distribution.
As you may know, posting on social media without having anything obviously trendy or fancy to show is such a big challenge, but I also think it’s possible to find the right approach for (almost) any activity. You just need to find the right point of view.
The first step is to take a look at the bigger picture, far from what services or products the business offers. And, more importantly, avoid publishing the company’s work environment. Although it’s important to tell your story and to say who you are, the audience cares more about your company’s impact on the outside world: if I follow a corporation on Instagram, I expect to see some kind of initiative, rather than just a pretty picture that shows you patting yourself on the back.
In this post, I will explain some challenges we faced while developing the project for Enel Group. It won’t be an in-depth analysis, but instead, a series of useful insights for how to build a model for corporate Instagram publications in a more human and ‘deinstitutionalized’ way (and less boring!)
1. Others do it. Should I?
This is the first question that arises, but it’s a mistake. To feel pressured to do something should always raise an alarm. Instead, “Why should I?” is the first question that you should ask yourself. Remember that there’s a crazy amount of content out there, but you need to find your own way on Instagram. Opening an account just because your competitors have one is a big mistake. Posting random stock pictures of happy employees and busy offices is not engaging. In fact, you could end up looking like this meme...
Not the image you are going for!
2. Have a plan! Build on your pillars
As usual, you need a game plan.
When the client asked how we would represent the world of Enel on Instagram, we found ourselves looking at one of the current top leaders in the global energy market. And yet, this company commonly carries an old-fashioned image as a public monopoly in Italy, especially among older generations, despite that it advocates for innovation through partnerships and new business lines. This means that Enel is not always recognized as the innovation leaders they truly are. But the scene has radically changed now: the activities of a modern utility go far beyond energy production. The sector is rapidly changing with renewables, climate action, and technology advancements, giving electricity a more important role every day and putting utilities at the very center of the innovation ecosystem. This brief description is enough to understand that there’s a whole world behind a multi-utility and a new context it can fit into with a clear vision.
Now we are in the center of the technology development scene, and this is something that should be portrayed also in our Instagram. It can be difficult to summarize the entire strategy of a multinational company, so once we define the framework in which we operate, we can choose some “pillars” on which we can start building our communication. It’s Instagram. A company can’t tell everything, so just make a choice. It can also be just one message, as long it’s important enough to engage and truly impact the audience.
An example: Allianz, one of the biggest insurance companies in the world, has a 100% sustainability-focused and ‘insurance policy free’ Instagram account.
On worldcleanupday 2019 the Allianz team is part of the 18m volunteers across 150+ countries to clean up waste in our cities and environment. We’re out on the streets of Munich and globally. Join the movement for a cleaner planet! 🌎🌍🌏
3. I want to be captured by your caption
So now that we understand that we shouldn’t put all of our universe and our operations into one account, we should focus on something that has value both for our strategy and our users. Let’s go back to our Instagram account. It goes without saying that Instagram is mainly visual: pictures catch the eye, but the captions are also key to telling your story and stress your point of view. There are also many interesting examples by other big players from other service sectors.
Let’s take a look, for instance, at what Deutsche Bank, UBS and Microsoft do:
Every year the Christmas tree that stands in the lobby of our HQ here in Frankfurt is not only beautifully decorated with lights and baubles, its branches are also decked with Christmas wishes from people in need. We’d like to thank all our colleagues who year after year make dreams come true for children, the elderly and the homeless in Frankfurt and the surrounding areas.
I like Deutsche’s Bank editorial approach: simple (yet not posed) pictures with informative captions describing their role and impact on society. It’s like a little magazine, nothing that drives you crazy, but still it radiates a reliability and loyalty feeling. They use a neutral tone of voice, they don’t “sell” anything: they’re just showing the positive results of their work, the impact they have on the people around them and how they shape society
Did you know the world wastes around a third of the food it produces? And yet each day there are 200,000 more people to feed. But, turning forests into farmland to allow for more livestock isn’t an acceptable solution anymore. If we fully or partially switch to plant based alternatives we can help save water, energy and land resources.Did you know the world wastes around a third of the food it produces? And yet each day there are 200,000 more people to feed.
UBS has a similar approach, choosing to focus on sustainability rather than directly advertising their services. On their profile, you can find green-tips and higher-level insights about international sustainability goals. This is their way to show how they are “passionate about the future” (as their bio reads). Yes, it does sound a little like marketing fluff, but by scrolling down their IG account, you find both great editorial and aesthetic coherence (I like this cold color grade!).
At Microsoft Quantum, our ambition is to help solve some of the world’s most complex challenges. We’re on the path to building the first topological qubit as we strive to bring general purpose quantum computing to reality.
Microsoft also avoids talking about how marvelous their technology is, showing instead how useful it is for people, companies and organizations.
Back to Enel, we chose to focus on sustainability because it is at the very core of the group’s strategy and it is actually the backbone of its entire communication. We have tied the concept of sustainability to the role of energy in this new stage, a stage in which a multi-utility evolves from being a simple energy supplier to become a platform for change. Sustainability and technological innovation have a very tight bond and together drive this strategic choice. To tackle the urbanization and climate change challenge, cities evolve and become green. People adapt their habits and lifestyle. Sustainability is a whole world that is changing. It is a paradigm shift.
What does all this mean in the context of Instagram? It means that a picture of a wind turbine is not enough to illustrate sustainability.
4. Engage and inform
The effect of telling how our life will change thanks to the development of smart cities or how electrification will be crucial to face climate change might not be immediate. Images without data to back them up lack substance and lose their effect.
In addition, in order not to get lost in the good-will and set phrases mare magnum, it helps to give relevant information to our followers: if you’re such a big, well known international organization it is expected that you give some genuine insights. I know it’s Instagram, so I don’t expect a scientific paper, but it is still possible to share some interesting data and ideas. Even Instagram stories are a perfect place to slide in relevant content.
An example: take a look at the short and well-focused videos made by The World Economic Forum:
5. Choosing photos. People in a context
By not focusing on selling business services or self praising, we are storytelling Enel’s role in the energy transition.
As we said before, the transformation of the energy sector will have an impact both on the environment, through the reduction of emissions, and also for the everyday life of people with smart counters and the electric car boom. Because of this, on the visual level, we constantly focus on people. They are not the main protagonists, but they are in a context: the context of change. From the photography point of view, it means that you should mainly to use portraits with a context and wide-angles, avoiding close-ups or narrow fields.
Are portraits a mistake? No, they are not, in fact, many competitors use portraits to tell stories, but from our point of view, it is more coherent to give relevance to the different backgrounds. In any case, the important thing is to make choices that help the account to have conceptual and aesthetic coherence.
People pay more attention especially if there is an organic story evolving through the posts. You can involve your employees or customers as long as you avoid being self-referential. And if you like the idea of involving people, you can do it by making them part of the big picture. | https://medium.com/redshirts/take-the-boring-out-of-business-ae00cbdfeba7 | ['Andrea Pontara'] | 2020-04-21 12:14:35.011000+00:00 | ['Corporate Communication', 'Social Media Strategy', 'Instagram', 'Content Marketing', 'Storytelling'] |
How to Protect Android App From Reverse Engineering | In my last article, I discussed how to reverse engineering an android app. If you did no reverse engineering before just do it to learn how easy it is to reverse an android app to the original source code.
In this article, I will let you know how to protect an Android app from reverse engineering.
If you want to protect your app from reverse engineering free, unfortunately, there is no such tool. What you can do is to make it difficult for the attacker.
There is a detailed article from android developer website where you can know how to do it. You just have to update your project-level build.gradle file. This feature is not enabled by default.
android {
buildTypes {
release {
minifyEnabled true
shrinkResources true
proguardFiles getDefaultProguardFile(
'proguard-android-optimize.txt'),
'proguard-rules.pro'
}
}
...
}
You can do more optimization by applying some proguard rules. But obfuscation will not protect your app — using some tools an attacker can still reverse engineering your android app.
Then what should you do?
You have to pay a yearly fee to buy some tools which will protect your android app from reverse engineering. In most cases, it is not required, if you’re a hobbyist or indie app developer. But if you think your app contains some intellectual property which should not be open to an attacker, you have no other choice but to purchase any of these tools.
1. What premium tools should you use
There may be more than this list. But I only used Dexprotector and Dexguard.
These tools are expensive. And they charge a different price for different developers.
Dexguard charges for per app, so since 2015, I no longer use that. You will not see the license fee on their website, as they charge a different price for different developers. So you have to request for a quote if you are interested.
I used Dexprotect for a few years and I used to pay over USD 400 yearly. The good thing is they don’t charge for per app. Lately, I no longer use the tool as my android apps don’t earn good money to support the price. There is no price mentioned on their website. So you have to request for a quote if you are interested.
Though I never used DashO but this is also similar like the above two. I don’t know what the license fee as they don’t publish the fee on their website.
2. How easy to use
If you use Proguard before, you can easily use Dexguard. They also have a guide here. But still, I found it is a bit complicated to use.
On the other hand, Dexprotector is easier to use compare to Dexguard. They also have a guide on their website.
As I used both, this is my unbiased opinion. As I never used DashO, so I couldn’t know about that tool.
3. Are these tools provide total protection
Yes, I think so. These tools are powerful, and how they encrypt the whole android application only they know. But what I noticed using Dexprotector, when a new major Android version releases, the app crash. So one time when I complained to them, they told me to update my Dexprotector tool.
4. Are these tools support Flutter
Dexprotector team told me, just follow how I used to protect native android apps by their tools. But I am not sure whether they do anything on the dart code or not. If you use Flutter, on their website they have a dedicated page related to obfuscation so you can read that.
Conclusion
If you are a hobbyist developer, I don’t think it is wise to pay money to protect your android app. But if you’re afraid of exposing your intellectual property right, or you don’t want your app to become pirated, then you may use these tools. Normally, Fintech companies use these tools to protect their apps as they have sensitive information. | https://medium.com/level-up-programming/how-to-protect-android-app-from-reverse-engineering-28cb7914c6f3 | ['Mahmud Ahsan'] | 2020-10-28 08:24:46.103000+00:00 | ['Technology', 'Data Science', 'AndroidDev', 'Software Engineering', 'Programming'] |
Gap Trading. An Introduction & Back-test in Python. | Gaps can occur due to fundamental and technical reasons, but we are mostly interested in identifying and trading them. In the currencies market, the visible gaps are the ones that occur during the weekend. Since it is traded all day long for 5 days a week, the presumed gaps would probably look like giant candles, but since we cannot know for sure, we will stick to the common definition of gaps.
“We call the act of trading based on gaps: Playing the gap.”
There are different types of gaps and distinguishing them can be quite tricky:
A common gap: It generally occurs in a sideways markets. It is likely to be filled because of the market’s mean-reversion dynamic.
It generally occurs in a sideways markets. It is likely to be filled because of the market’s mean-reversion dynamic. A breakaway gap: It generally resembles a common gap but the gap occurs above a graphical resistance or below a graphical support. It signals acceleration in the new trend.
It generally resembles a common gap but the gap occurs above a graphical resistance or below a graphical support. It signals acceleration in the new trend. A runaway gap: It generally occurs within the trend but it confirms it more, therefore, it is a continuation pattern.
It generally occurs within the trend but it confirms it more, therefore, it is a continuation pattern. An exhaustion gap: It generally occurs at the end of a trend and close to a support or resistance level. It is a reversal pattern.
Note that most of the above specificities come from personal experience as some sources state that common gaps are least likely to be filled. Also, the runaway and exhaustion gaps are so similar that it is almost impossible to know which is which at the moment they appear, therefore, they suffer from hindsight bias. | https://kaabar-sofien.medium.com/gap-trading-an-introduction-back-test-in-python-7d59ea39962f | ['Sofien Kaabar'] | 2020-12-30 05:55:34.979000+00:00 | ['Artificial Intelligence', 'Finance', 'Machine Learning', 'Data Science', 'Trading'] |
Bilingual Poem (SP and EN): Radio | Towers fell and up flew
metaphors, lamentations.
A drone, Herzog ersatz, shot the collapse.
No more blue pulsar impressions.
The staff chilled Hope in their icebox
(The most desired reply).
Instruments probed cold, vast, void;
Crashed and carved unknown pleasures.
We missed the cosmic bus,
Delayed by stale supremacies;
Old rotten tensions gave way
To anemic celebrations
Of bubbly-worthy star gazers…
In Another Time-Slice,
Arecibo collected aural debris —
Never San Salvador; always Guanahani…
We’ll weave neural simulacra,
And dull the pain of conquest.
This encryption is from a non-human source. | https://medium.com/polyglot-poetry/radio-f65a44f9df39 | ['Miguel Adrover'] | 2020-12-10 00:58:38.465000+00:00 | ['Puerto Rico', 'Poetry', 'Astronomy', 'Polyglot Poetry', 'Science'] |
Taking Questions from the Late Justice Ginsburg: Fine-Tuning Billion+ Parameter Transformers Using Model Parallelism | Token Types for GPT2: Implementing TransferTransfo
You can never go wrong by taking a cue from the 🤗HuggingFace team. We will follow the TransferTransfo approach outlined by Thomas Wolf, Victor Sanh, Julien Chaumond and Clement Delangue that won the Conversational Intelligence Challenge 2. transformers implements this easily as token_types . Note that token_types do not work with t5 , only gpt2 . Token types are more often associated with models like BERT, but Wolf et al. showed that creating a trainable embedding will make it easier for the model to distinguish between speakers. Think of this as a way of smuggling metadata about segments of the sequence into the problem space. There are lots of ways to structure token_types . When training gpt2-xl , we'll use a conventional approach where tokens uttered by Justice Ginsburg are coded as a 1 and everything else is coded as a 0 .
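As a rough sketch of that scheme (the build_token_type_ids helper and the segment format below are illustrative, not gtext's actual internals), the speaker-coding idea looks like this:

```python
# Hypothetical sketch of the 'SequenceTarget' token-type scheme: tokens
# uttered by Justice Ginsburg get type 1, everything else gets type 0.
def build_token_type_ids(segments, target_speaker='justice ginsburg'):
    # segments: list of (speaker, token_ids) pairs in sequence order
    type_ids = []
    for speaker, token_ids in segments:
        marker = 1 if speaker == target_speaker else 0
        type_ids.extend([marker] * len(token_ids))
    return type_ids

# Toy example: a question from an advocate, then Ginsburg's reply.
segments = [
    ('advocate', [464, 1772, 318]),        # fake token ids
    ('justice ginsburg', [1169, 2882]),
]
print(build_token_type_ids(segments))  # [0, 0, 0, 1, 1]
```

At training time a list like this is passed to the model alongside input_ids as token_type_ids , so the embedding for each type is learned jointly with the rest of the network.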
Creating an Experiment with gtext
All of the above steps can be implemented by using GenerativeText from gtext . There are several important training settings noted in the gtext documentation. Here’s our setup:
from gtext.generative_text import GenerativeText

experiment = GenerativeText(
csv_file = "../data/scotus_dialogue_qp.csv",
persona = 'justice ginsburg',
dataset_type = 'dialogue') experiment.training_settings(
model_name = 'gpt2-xl',
max_length = 1024,
model_parallel = True,
gradient_accumulation_steps = 8,
num_train_epochs = 3,
evals_per_epoch = 1,
sim_model_name = 'bleurt-large-512',
models_dir = '../models/',
save_total_limit = 1,
verbose = True) experiment.prepare_dataset(
use_context = True,
use_context_token = True,
token_types = 'SequenceTarget',
set_context_token = '<|context|>',
set_pad_token = '<|pad|>',
set_sep_token = '<|sep|>',
seed = 1) experiment.load_training_arguments()
experiment.save()
TrainingArguments
The experiment object above configures the TrainingArguments for you when you call experiment.load_training_arguments() . However, if you aren’t using gtext , you can configure the TrainingArguments as below. Crucially, set model_parallel=True or the TrainingArguments will default to data parallel behavior. You won’t be able to train large models and will get out-of-memory errors.
With gtext
from transformers import Trainer

train_args = experiment.train_args
Without gtext
from transformers import TrainingArguments

train_args = TrainingArguments(
    model_parallel=True, # Remember to do this!
    output_dir='../models/my_model_dir',
    do_train=True,
    gradient_accumulation_steps=8,
    logging_steps=338,
    evaluation_strategy='steps',
    eval_accumulation_steps=8,
    num_train_epochs=3,
    per_device_eval_batch_size=1,
    per_device_train_batch_size=1,
    save_steps=338,
    save_total_limit=1,
)
Loading the Model
With gtext
model = experiment.get_model()
Without gtext
from transformers import GPT2LMHeadModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('gpt2-xl')
tokenizer.add_tokens(['<|context|>', '<|pad|>', '<|sep|>'])
tokenizer_length = len(tokenizer)

model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
model.resize_token_embeddings(tokenizer_length)
Model Parallelization
Parallelizing transformer models involves distributing the trainable layers, including the attention blocks and the embeddings, across several devices. PyTorch comes with a built-in way to do this. For TensorFlow, Mesh TensorFlow mtf is often used. Unfortunately, there isn’t an easy solution like eisen for transformers. Until recently, you would’ve had to do a lot of groundwork to parallelize the layers of a transformer. Fortunately, model parallelization for gpt2 and t5 is supported in transformers 4.1.0 and later. These models can be parallelized and de-parallelized with methods on t5 and gpt2 models.
To parallelize a model in transformers:
model.parallelize() # with no device_map, distributes the model’s attention modules evenly across all devices
To deparallelize a model:
model.deparallelize() # moves the model back to CPU
If no device_map is passed to the parallelize method, then the attention modules are evenly distributed across all devices that can be detected. There are other modules like embedding layers and language modeling heads that will be automatically loaded onto the first GPU (there are esoteric reasons why it must be the first GPU). In many cases, inefficient distribution of the modules will mean that the first device runs out of memory while others have room to spare, meaning you won’t be able to train a large model that your machine should be able to handle. With gpt2-large , gpt2-xl , and t5–3b , it is best to use get_device_map from gtext to retrieve the right device_map . For t5–11b , the size of the other modules is so trivial compared to the size of the attention blocks that you do not need to provide a custom device_map . If the default distribution method and the get_device_map from gtext won’t work for you (for example because you have differently sized GPUs), you can create your own device_map and pass it to the parallelize method like this:
# Device map for gpt2-xl
device_map = {
    0: list(range(0, 9)),   # attention blocks 0–8 will be placed on the first device
    1: list(range(9, 22)),  # attention blocks 9–21 will be placed on the second device
    2: list(range(22, 35)), # attention blocks 22–34 will be placed on the third device
    3: list(range(35, 48)), # attention blocks 35–47 will be placed on the fourth device
}
model.parallelize(device_map)
In the above example, the device_map assigns 48 attention blocks across 4 GPUs. That’s because gpt2-xl has 48 layers and the machine being used is p3.8xlarge, which has 4 identical GPUs. The number of attention blocks differs by model and model size. See here for details.
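Before parallelizing, it can be worth sanity-checking that a device_map assigns every attention block exactly once, since a missing or duplicated block will fail at load time. This small helper is an illustration of that check, not part of transformers or gtext:

```python
# A quick sanity check (not part of transformers or gtext): confirm that a
# device_map assigns every attention block exactly once before parallelizing.
def validate_device_map(device_map, num_layers):
    assigned = sorted(block for blocks in device_map.values() for block in blocks)
    return assigned == list(range(num_layers))

gpt2_xl_map = {
    0: list(range(0, 9)),
    1: list(range(9, 22)),
    2: list(range(22, 35)),
    3: list(range(35, 48)),
}
print(validate_device_map(gpt2_xl_map, 48))              # True: all 48 blocks covered once
print(validate_device_map({0: [0, 1], 1: [1, 2]}, 4))    # False: block 1 duplicated, 3 missing
```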
Getting a Device Map with Gtext
from gtext.device_maps import get_device_map

device_map = get_device_map(4, experiment.model_name)
model.parallelize(device_map)
Creating a Custom Device Map
device_map = {
0: list(range(0, 9)),
1: list(range(9, 22)),
2: list(range(22, 35)),
3: list(range(35, 48)),
}

model.parallelize(device_map)
Optional: Similarity Metric with BLEURT
The transformers library enables you to pass custom metric calculators to the Trainer via compute_metrics . This is an important feature of the library, as there are a number of task-specific and general metrics in NLP that can be used to benchmark models. A function passed to compute_metrics takes an EvalPrediction object, though the function used in this example will not need EvalPrediction . GenerativeText has a method get_sim_calculator to create a bleurt calculator if you specify a sim_model_name . See here for a list of BLEURT models and token sizes. Similarity scores for BLEURT range from -inf to 1.0, where 1.0 represents perfect similarity. As such, the goal is to maximize the similarity metric.
Getting a BLEURT Similarity Score with gtext
sim_calculator = experiment.get_sim_calculator(model)
Custom Similarity Calculator
In practice, your metric may require generation, which is not compatible with what’s in EvalPrediction . In such situations, you can create a callable object, initialize it with the model and evaluation dataset, and ignore EvalPrediction . The way that Python and PyTorch deal with the model object ensures that you won't be making a copy of the model simply by saving it to the calculator. Instead, the object will continue to point to the same model being trained by the Trainer . Here's an example of a class for a custom metric calculator that ignores EvalPrediction :
class SimilarityCalculator():
    def __init__(self, model: Callable, metric_name: str, eval_: Callable, sim_collator: Callable,
                 tokenizer: Callable, other_settings: dict):
        self.model = model
        self.metric_name = metric_name
        self.eval_ = eval_
        self.sim_collator = sim_collator
        self.tokenizer = tokenizer
        self.other_settings = other_settings

    def __call__(self, eval_prediction):
        # Calculate similarity_metric
        similarity_metric = some_function()
        return {self.metric_name: similarity_metric}
Training
The training dataset consists of 2706 samples. With gradient_accumulation_steps = 8 , that means there are 338 steps in each epoch. The evaluation dataset consists of 310 samples. It takes about 1–2 hours per epoch to train. There will be a long pause in the training whenever the trainer saves the model, especially if the model is t5–3b or t5–11b . The model can be trained like this:
from transformers import Trainer

trainer = Trainer(model = model,
                  args = train_args,
                  train_dataset = experiment.train,
                  eval_dataset = experiment.eval_,
                  data_collator = experiment.train_collator,
                  compute_metrics = sim_calculator)
trainer.train()
The gpt2-xl model was trained on p3.8xlarge. The t5 model was trained on p4d.24xlarge. Total training time was about 4.5 hours for each. Both were trained with the maximum token length of 1024. Evaluation metrics in NLP tend to correlate, but they don’t always. Recalling that BLEURT scoring runs from negative infinity (bad) to 1.0 (perfect similarity), the results of this run show that categorical cross entropy calculated on the evaluation dataset is best after just the first epoch on gpt2-xl while the BLEURT similarity score (also calculated on the evaluation dataset) continues to improve in subsequent epochs. For t5, the model appears to have reached its maximum potential in a single epoch and begins to diverge:
Inference
T5 Example dialogue on Fulton v. City of Philadelphia with t5–11b, 1024 tokens, 3 epochs. Ginsburg’s text is generated by model.
It’s time to use our models. We need a context, so we’ll use the QUESTION PRESENTED for Fulton v. City of Philadelphia:
context = "<|context|>QUESTION PRESENTED: The City of Philadelphia chose to exclude a religious agency from the City's foster care system unless the agency agreed to act and speak in a manner inconsistent with its sincere religious beliefs about marriage. The Third Circuit upheld that action under Employment Division v. Smith. The questions presented are: 1. Whether free exercise plaintiffs can only succeed by proving a particular type of discrimination claim-namely that the government would allow the same conduct by someone who held different religious views-as two circuits have held, or whether courts must consider other evidence that a law is not neutral and generally applicable, as six circuits have held? 2. Whether Employment Division v. Smith should be revisited? 3. Whether a government violates the First Amendment by conditioning a religious agency's ability to participate in the foster care system on taking actions and making statements that directly contradict the agency's religious beliefs?"
Sampling Method
Sampling methods are an ongoing area of research in persona-based problems, but there is some indication that nucleus sampling is as applicable in this domain as it is in other types of generative text problems (Li et al., 2019). The relevant parameters here are top_k , top_p and temperature . You may get better results by adjusting these. The parameters can be set for both the BLEURT calculator and for inference noted below.
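For intuition, here’s a toy, from-scratch illustration of what nucleus (top-p) sampling does to a probability distribution. In practice you would simply pass top_p (along with top_k and temperature) to model.generate rather than doing this by hand:

```python
# Toy illustration of nucleus (top-p) sampling: keep the smallest set of
# highest-probability tokens whose cumulative probability reaches top_p,
# then renormalize so the kept tokens sum to 1. This is a teaching sketch,
# not how you'd implement it for real inference.
def nucleus_filter(probs, top_p):
    """probs: dict token -> probability. Returns the renormalized nucleus."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, p in ranked:
        nucleus.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in nucleus)
    return {token: p / total for token, p in nucleus}

probs = {"the": 0.5, "a": 0.3, "court": 0.15, "zebra": 0.05}
print(nucleus_filter(probs, 0.9))  # keeps 'the', 'a', 'court'; drops 'zebra'
```

Low-probability tail tokens like 'zebra' are pruned away, which is why nucleus sampling tends to produce fluent text without the repetitiveness of greedy decoding.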
An Opening
In SCOTUS oral arguments, the Petitioner starts with a statement. We’ll give this statement and the context to the model to begin.
history = "<|sep|><|petitioner|>Mr. Chief Justice, and may it please the Court: The courts below made a simple error. They failed to understand where Employment Division versus Smith controls and where it doesn't. Smith doesn't control when the government uses a system of individualized exemptions or when it makes other exceptions that undermine its rules or when it changes the rules to prohibit a religious practice. Philadelphia made all three of those errors here. The City still can't identify a neutral, generally applicable law, even after six attempts. And it now acknowledges its decisions are subjective and individualized. Yet, the courts below still
applied Smith. They even said Smith would be a dead letter if Petitioners prevailed. That demonstrates the confusion and instability Smith has caused. Respondents, rather than defend Smith, ask the Court for a newly minted constitutional standard that's even less protective of religious exercise. That approach has no basis in the text, history, or traditions of the Free Exercise Clause. The City has no compelling reason for excluding Catholic Social Services, which has exercised its faith by serving at-risk children in Philadelphia for two centuries. Nor does it have any interest in refusing to allow the agency to step aside and provide referrals elsewhere. Yet, Philadelphia is refusing to place children with loving mothers, like Sharonell Fulton and Toni Simms-Busch, just because they chose to partner with an agency who shares their faith. Respondents act as if this is a zero-sum game: Either LGBTQ couples can foster, or Fulton and CSS can. But the law and decades of experience say otherwise. The Free Exercise Clause is at the heart of our pluralistic society, and it protects Petitioners' vital work for the Philadelphia community. I welcome the Court's questions.<|sep|><|justice ginsburg|>"
Using gtext
DialogueInference is a tool for persona-based inference from gtext . Sampling parameters such as temperature , repetition_penalty , top_k and top_p can be added.
inference = DialogueInference(model = model,
context = context,
tokenizer = tokenizer,
model_type = experiment.model_type)
inference.chat(interlocutor = "Petitioner")
Manual Inference
Manual sampling requires some additional legwork. Since this model was trained with token_types, you will need something like the get_token_types function from gtext when sampling a gpt2 model. Here’s a simpler example with t5:
import torch

# Tokenizer
tokenizer = experiment.tokenizer

# Format the input
text = tokenizer.context_token + context + tokenizer.sep_token + '<|petitioner|>' + history + tokenizer.sep_token + "<|justice ginsburg|>"

# Tokenize
input_ids = tokenizer.encode(text)

# Send tokens to device
input_ids = torch.tensor([input_ids], device = model.device)

# Generate outputs
outputs = model.generate(input_ids, max_length = 1024)

# Decode
tokenizer.decode(outputs[0])
Conclusion
We have built a faint echo of the late Justice — something less than a ghost but more than a shadow. The model guesses that Ginsburg’s responses in this case focus on the nature of the partnership between the religious organization and the City, whether the same protections would apply to a secular organization seeking a similar exemption as the foster agency and the nature of the conditions placed by the City of Philadelphia on participation. Changing the sampling parameters enables more diverse utterances that may diverge from the persona. Now try building your own! | https://towardsdatascience.com/taking-questions-from-the-late-justice-ginsburg-fine-tuning-billion-parameter-transformers-using-cf1a85b92b0a | ['Alex Orona'] | 2020-12-22 13:18:56.860000+00:00 | ['NLP', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'Editors Pick'] |
RESTful API Documentation Made Easy with Swagger and OpenAPI | Swagger in Action
Now that we have understood what OpenAPI and Swagger are, let us see these in action. As part of this article, we will develop a REST application. We will then use Swagger UI to render our API documentation. Following that, we access the API document (available in JSON format) through Swagger Editor. Lastly, we will use Swagger Codegen CLI to generate a server and a client stub to demonstrate how one can use an OpenAPI document to mock the REST web services.
What are we building?
We will build a Spring Boot application that offers us to manage blood donors. It allows us to create, update, delete and view donor information.
Refer to this link for a step by step guide on how to set up the application in a development environment. Complete source code can be downloaded from this Github repository.
Following are the summary of steps:-
Create a new Spring boot application with JPA, H2 and Web dependencies
Create the model, service and the controllers
Run the application and try accessing various endpoints & its operations
Below is the application pom file:
pom.xml file
We have added the following two additional dependencies from io.springfox to enable Swagger 2 and Swagger UI:-
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger2</artifactId>
<version>2.9.2</version>
</dependency>
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger-ui</artifactId>
<version>2.9.2</version>
</dependency>
Swagger Configuration
Now that the project is up & running and we can access our REST endpoints, let us add the swagger configuration:-
Swagger Configuration
This is a Spring configuration with Swagger documentation information. We have added metadata information about the REST API such as API name, author, website, license and so on. We have also instructed Swagger to only generate documentation for the components present in the io.codefountain.swagger package.
Accessing Swagger UI
Since we have enabled Swagger, let us see the documentation of our API endpoints done by Swagger. This is rendered through Swagger UI in the following link:
http://localhost:8080/swagger-ui.html#/donor-controller
Swagger default documentation
Swagger has put together the following information:-
Document metadata (API name, license, website, contact and so on)
All REST endpoints with default information it can infer from code. Note that endpoint descriptions are method names
That is the default information. Let us now explicitly document our API with Swagger annotations to provide detailed descriptions and information about the endpoints and operations.
Documenting Rest Controller
As discussed, we will now document the REST controller explicitly. Swagger provides several annotations to add documentation metadata that it pulls while generating the documentation.
For each REST endpoint and its associated operations, we have provided an ApiOperation annotation and documented the various responses with ApiResponses annotations.
REST controller with explicit documentation
Restart the application and access the same URL:
Updated Swagger documentation
This time, Swagger has pulled the information supplied through the annotations. Not only this, it has now added explicit response information with HTTP response codes:-
API documentation with HTTP response code
Accessing Swagger Editor
So far we have accessed the API documentation locally. Swagger also generates the documentation in the JSON file format adhering to the OpenAPI specification. We can share this JSON file with the consumers and they can read the endpoint information, generate client and server stubs.
Our REST API documentation can be accessed through the following URL:-
http://localhost:8080/v2/api-docs
api-docs.json
This JSON document conforms to the OpenAPI specification and can be accessed through Swagger Editor as shown below:-
API Document in Swagger Editor
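As a quick illustration of what a consumer can do with this document, here’s a minimal Python sketch that lists every endpoint and operation. The JSON below is a trimmed, hypothetical stand-in for the real api-docs output, which contains many more fields (definitions, response schemas, license metadata and so on):

```python
import json

# A trimmed, hypothetical stand-in for the api-docs JSON described above
# (Swagger 2.0 format); the real document served at /v2/api-docs is larger.
api_docs = json.loads("""
{
  "swagger": "2.0",
  "info": {"title": "Donor API", "version": "1.0"},
  "paths": {
    "/donors": {"get": {"summary": "View all donors"},
                "post": {"summary": "Create a donor"}},
    "/donors/{id}": {"delete": {"summary": "Delete a donor"}}
  }
}
""")

# List every endpoint/operation pair a consumer could call.
operations = [(method.upper(), path)
              for path, methods in api_docs["paths"].items()
              for method in methods]
print(sorted(operations))
# [('DELETE', '/donors/{id}'), ('GET', '/donors'), ('POST', '/donors')]
```

This is essentially what tools like Swagger Editor and Swagger Codegen do under the hood when they render documentation or generate stubs from the same file.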
Anyone with access to this document can view the API endpoints and all other related metadata such as model structure, data types and so on. | https://medium.com/swlh/restful-api-documentation-made-easy-with-swagger-and-openapi-6df7f26dcad | ['Somnath Musib'] | 2019-11-17 20:58:27.076000+00:00 | ['JavaScript', 'Software Development', 'Technology', 'Software Engineering', 'Programming'] |
Best in Show: Fairs and Exhibitions Turn to Online 50/50 Fundraisers | Glencoe Agricultural Society
The Glencoe Agricultural Society in Ontario saw COVID cancel their 2020 events but that didn’t cancel their operational costs. They approached The Lotto Factory to do something a little different than other groups to help them through these tough times.
We’re operating a one-time Classic 50/50 fundraiser for them, drawing in late September. Unlike our Progressive draws which can see people playing months in advance, our Classic differs in that 100% of purchases go into the current draw. There are no rollovers. Somebody is walking away with half the jackpot when it’s drawn at the end of September.
The Classic 50/50 allows for ‘deals’ on tickets sales so for the Glencoe Agricultural Society we are offering 1 for $10, 10 for $40, 25 for $75, and 50 for $100. Each purchase pushes their jackpot higher and higher.
Their jackpot rolled over $2,000 in no time and we won’t be surprised if it’s five digits when it draws in a month.
Click here to play! | https://medium.com/the-lotto-factory/best-in-show-fairs-and-exhibitions-turn-to-online-50-50-fundraisers-dd14cb5ffc74 | ['Robb Clarke'] | 2020-08-24 10:59:32.043000+00:00 | ['Canada', 'Technology', 'Fundraising', 'Startup', 'Charity'] |
Feeling depressed now? Read this! | Are you feeling depressed now? Or do you know someone who’s going through it? It’s painful, it’s hard. I know, because I’ve been there too. Therefore I am here to try and give you something to anchor in, to look forward to. Trust me, I juiced out my years of experience in getting through my struggle to bring you this piece. The 3 ways to deal with a depressed me, and here they are neatly (and in sequential order). I also made a YouTube video regarding it. Check it out here.
1. When did you last take care of yourself?
Any self-care act is an act of self-love. That can be from buying a pair of shiny new shoes, down to brushing your teeth and drinking water. The good news is, most of us shouldn’t need to dig deep to find ONE self-care act that happened in the last 24-hours. Hold on to this one self-care act in your head. We will need it later further down this piece. Having a chaotic mind and can’t remember it? Write it down. It’s okay to feel overwhelmed.
2. Be nice, give yourself acceptance
Acceptance of our state is the first move to take control of our fate. It is especially important to give acceptance to our depressed self as it will do more good than harm.
To be fair, we are being too harsh on ourselves sometimes. It happens even to the best of us. Even Bill Gates gets self-doubts. With that, we tend to ignore just how much we have achieved in life. Besides, Pareto’s principle taught us that 80% of our happiness only comes from 20% of the happy memories we accumulated throughout life, showing just how easy it is for us to overlook our achievements and happy moments.
80% of our happiness only comes from 20% of the happy memories we accumulated throughout life.
Take control of your fate by accepting how you are feeling now. Stop allowing depression to continue taking away your immense potential and love you care to bring to this world.
3: Writing it all out
Pen and paper do wonders, from writing notes, spreading powerful messages (like this one) to alleviating psychology pain and reducing loneliness. Find a peaceful place with a pen and paper, follow me as you write.
You may start by writing that you love yourself (write your name!) and you accept how you are feeling now. For those who just can’t find their way to acceptance yet, please write it in the future tense. Will is a very powerful word, not only does it imply a commitment to change, but it also serves as a powerful reminder of how you wish to see your future.
Will is a very powerful word, not only does it imply a commitment for change, but it also serves as a powerful reminder of how you wish to see your future.
The following will now be more about you and less about my guidance. Write down the first negative thought you come up with, the one that is giving you an uncontrollable stream of sadness. Following on from that thought, write your second, third and fourth thoughts down. Maybe they are all negative; that’s okay and expected, as they are what is bothering you enough to take you into a depressed state.
I sometimes feel embarrassed by my thoughts, or worse, don’t know where to begin. If you’re like that, take your time and write down that very thought you are embarrassed about. Remember, no one else needs to know what it is. If you don’t know where to begin (then I have this imaginary tutorial for you that costs only £999 today. Sign up now for 10% off), start by writing how you feel. Looking back at my journal, ‘stupid’, ‘dumb’, ‘embarrassing’, ‘screwed up’ were common words I used. Write down the scenarios that make you feel this way, detailing them.
Where the magic happens
Assuming you followed all the steps in order, this is what I need you to do now.
Remember the thought you had to hold onto from your first exercise? Write that down on the same piece of paper. Remember that tiny window in the walled dark room from the article cover? This is what I want you to see on your paper.
Take a look at the whole page. You’ll discover that even though you may have 99% of negatives in your head, there is at least 1% of positivity from when you took care of yourself, from loving yourself. Amplify that positivity and ask yourself, what else did you do to show that you’ve loved yourself? Add more to the list until you feel more in control. You will be amazed how much positivity is being hindered under a depressive state.
Even though you may have 99% of negatives in your head, there is at least 1% of positivity from when you last took care of yourself
Summary
Following these 3 methods in this order has certainly made a massive impact on my life. When you are ready to go further, revisit the pessimistic thoughts you wrote to reveal the elements that are tormenting your deepest insecurities. The rest of the work is up to you (and your therapist if applicable) to work on. If you want to learn more about these 3 techniques visually, go to my YouTube video here. | https://medium.com/dreamer-do/feeling-depressed-now-read-this-45aa687df191 | [] | 2020-09-25 12:13:57.055000+00:00 | ['Mental', 'Depression', 'Mental Health'] |
Write What You Know (And What It Really Means) | Courtesy of Kapoompics (pexels.com)
‘Write what you know’ is old advice but one that comes with its fair share of debate. Some people say that it’s simply good sense. If you’re a single mother (or father), you’re going to be able to write authentically about your experiences in a way that many others can’t.
Others argue that by only writing about what you know, you’re restricting yourself and won’t have as many areas that you can explore. They say that we wouldn’t have any fantasy stories and that the fiction that we do have would be a lot less exciting as most people tend to lead rather dull lives.
Here’s the thing: both groups are right. You should include your life experiences, yes, though what many don’t appreciate is that we know a lot more than we realise.
The idea of ‘writing what you know’ can be broken down into two categories: the practical and the personal.
First: the practical. Take a minute to think about what you know.
Do you know what it’s like to be made bankrupt?
Do you know what it’s like to save a person’s life?
If we look at the first group’s argument, these are the areas you should write about, and why shouldn’t you? After all, it is these kinds of experiences that you can portray truthfully to provide a thrilling and engaging story. You know what it means to go through these challenges and to come out the other side in a way no-one else can.
In this category, writing what you know doesn’t have to be restricted to life-changing experiences, either. If you know what it’s like to drag yourself out of bed every morning to go to work, you have life experience. If you know what it’s like to trip over the dog’s favourite toy or deal with an annoying sibling, you have a range of life experiences that you can introduce into your character’s lives.
In a way, the idea of using practical experience does have the additional benefit of encouraging writers to go out and experience things. It’s easy as a writer to be so cooped up all day writing about others’ lives that you forget to go out and live your own. In this regard, it can give you a needed push to go out and do things so you can write about them in a way that feels genuine and truthful- essentially: ‘write what you know’.
However, writing what you know isn’t just down to physical experiences. It doesn’t just refer to experiences that you can taste or touch. The job of any writer- any artist, in fact- is to trigger a reaction within their audience and to do that, we also need to invoke emotion- we need to make a personal connection.
There are things in life that we all know- either through personal experience, the experience of others, or through the media and the world around us. There are emotions, worries and fears that we can all recognise and that, as a society, we empathise with.
We all know what it’s like to be disappointed. We all know what it’s like to be happy or afraid. We all know what it’s like to laugh until we feel like our sides are going to burst or have someone offer comfort- whether a complete stranger or a friend.
There are things we all know through personal growth, and it is important to recognise the need to represent these moments as well.
Write about your experiences, yes, but also recognise that you as a creator know more than you realise and express it in your project.
You don’t need to have climbed fifteen mountains to write about it, but you can use your experience of elation, frustration and dedication to write about your character’s journey.
If you’re a boat captain write about it but include the emotional impact of what it’s like so readers can experience it too.
Here are some more suggestions on areas you can explore to ‘write what you know’:
Embarrassing yourself in front of someone you’re attracted to
Picking yourself up and starting again
Feeling sick from being so nervous
Feeling incompetent/ child-like in a sea full of put-together adults
Being stuck in a conversation you desperately want to escape
Seeing a new-born child for the first time
Using your savings to buy something you’ve been after for a long time
Standing up for yourself/ others
Worrying over test results (car, medical, exams)
Mutual attraction/ flirting
Family events that inevitably end in arguments
Desperately needing the toilet
Being lost/ arriving at your destination after being lost
Sharing a secret/ hearing a silly rumour about yourself
Losing a loved one (pet, relative, friend)
No matter which angle you approach this advice from- whether practical or personal- our job as writers is to help our readers feel as though they are living through our characters. In the end, it doesn’t matter how we’re applying our experience, only that we are creating one for the reader to enjoy.
What are your thoughts on ‘writing what you know’? Share your thoughts. | https://medium.com/swlh/write-what-you-know-and-what-it-really-means-793262db0aa1 | ['Mary Fletcher'] | 2020-02-26 14:47:55.254000+00:00 | ['Writing Advice', 'Novel Writing', 'Writing', 'Writer', 'Short Story Writing'] |
Chrome Extensions to Boost Your Productivity | Chrome Extensions to Boost Your Productivity
10 must-have Chrome extensions for developers
BG Photo by Pierre Châtel-Innocenti on Unsplash
Over the past decade, Google Chrome has become the go-to application when anyone wants to browse the web on desktop and mobile. Many people don't even like to browse on their devices until they install Chrome. That being said, it is also infamous for consuming device memory and even slowing it down. But there is one more reason why it’s still the most popular browser: support for the latest web features and developer tools.
Chrome is the go-to browser for all developers due to its large user base, which no developer can ignore, and the tools it provides during the development stage of a website or even a mobile application. The Chrome Web Store also has some of the best apps and extensions to give developers a lot of add-on features during development and testing. In this article, we will go through a list of extensions that can come in handy during various stages of development.
Five Steps to Changing Your Life (they boil down to one thing) | Three years ago I knew something needed to change. A lot of somethings, really.
I’d been published, by Penguin, a couple of years before. But my books had failed, my publisher dropped me, and I wasn’t writing. At all. For the first time since I was about ten years old, I was seriously considering just doing something else with my life.
Only, I felt so sick, I wasn’t sure I could do something else with my life. I was in pain all of the time. I woke up every morning so exhausted that I just wanted to cry. I often did. I weighed 368 pounds and I was afraid for my mobility. I couldn’t stand long enough to make dinner. I couldn’t walk far enough to make it to the end of a parking lot.
This is me at the point where I felt the absolute worst. I’m in New York City here, with my daughter (who is taking the picture) and all I want is to go back to our Air BnB and sit down. I’m in so much pain. Only our Air BnB is in New Jersey — at the top of a steep hill — and the idea of just sleeping on a subway to avoid trying to climb it feels like a decent one.
I had a job that I earnestly hated. I was working as a teaching assistant in a high school special needs classroom (because I wouldn't let myself just go ahead and be a classroom teacher — that meant, in my mind, that I'd really given up on writing). I made less money, by quite a lot, than my son did working at Wal-Mart at the time, and the job was awful.
I loved the kids, but the teacher was a burnout who hated me because I brought her students books to read on my second day of work. (For real.)
We were also drowning in debt. Credit cards. A couple of loans. Braces. A car note. We were paying all of our bills, but we were standing on the edge with our toes hanging off. A layoff or an illness would have dumped us off the cliff.
My mother-in-law got sick around then. Sick enough to be in and out of hospitals and nursing homes for a year. Part of her illness included delirium, so I sat with her all day, every day, so that she wouldn't have to be physically restrained to her bed.
And while I was spending sixteen hours a day sitting with her, I realized that her illness was caused by something she could have avoided if she’d changed her life when she was my age. In her case, smoking. Her fifty-year, two-pack-a-day habit had caused her vascular dementia and severely high blood pressure.
I realized I could do something, at 43 years old, that would make my life better when I was in my seventies. Not just one thing. A whole bunch of things. I could change my life, if I just thought about the next steps and took them.
I was five years younger than my mother was when she died of breast cancer. I latched onto the idea of making some major changes over the next five years. (I called it 60 Months to Ironman, but it’s so much bigger than that.)
I went to my doctor. I was diagnosed with sleep apnea and given a CPAP, which made me look like Darth Vader and a vacuum cleaner had a baby, but after one night wearing it, was my new best friend.
And I was referred to a weight loss surgeon.
I had weight loss surgery six months later. I lost 120 pounds in the next six months after that (which was weird and not particularly fun, btw.) The pain went away. I didn’t need the CPAP anymore, because with weight loss my sleep apnea resolved itself.
I also started Ninja Writers. I knew I needed to write again. I wanted a community. I couldn’t find one, so I built one. That experiment has been the most amazing thing I’ve ever been a part of. Ninja Writers are my people. And I was writing again, for real.
Between Ninja Writers and my books, we were able to pay off all of our debt.
I went back to school, too. For an MFA. I’ll graduate in August. I hope Ninja Writers and writing are my work for the rest of my life, but if I ever need a job again, I won’t have to be a teaching assistant under a burned out teacher who hates me ever again.
Last year, I wrote a book. A middle-grade book called The Astonishing Maybe. I found another literary agent. She sold my book to MacMillan. And another one, too. I earned enough from them to make sure I won’t have to get a day job again for at least two years.
Long story short, three years ago, I decided to just start taking the next step to change my life. I did that, over and over and over. And it worked.
I lost 120 pounds. I wrote a book that sold in a two-book deal to a major publisher for two years' income. I earned an MFA (well, I'll graduate in August, but the course work is done!). I started a business that I flat-out adore. My husband and I paid off our consumer debt.
I had lunch with my best friend a couple of months ago and he said that he barely recognized me. I’m so different from when we met. And I thought to myself — I’m the same. I’ve just chipped away at the stuff that was masking my shine.
Here are some steps, in case you find yourself in a place where something has to give.
1. Evaluate where you are. Be honest. Now's the time to really look at every aspect of your life and notice what's working and what's not. As miserable as I was three years ago, I had a lot going for me, too. A supportive family. A safe place to live and enough income to live there (even if we were on the edge.) An education. I knew what I wanted to do with my life, which I know is a true gift. Despite how I felt, I was relatively healthy. No diabetes or heart disease or addiction.
2. Let your imagination free. Really think about where you want to be in five years. I love the five year timetable. It's far enough away to let you really make huge changes — and to have some space for forgiveness if you stumble. And you will. I did. We're only human.
3. Make some big, hairy goals. I mean, big. Don't worry right now about being reasonable or about managing your expectations. It's okay to dream. There is value in dreaming big, even if you don't end up exactly where you thought you would. But, try to make your goals something you have control over. For instance, a goal of writing a book and submitting it to agents (which is only up to you) is better than a goal of being a bestseller in five years (which depends on so many people who aren't you.)
4. Think about the next step for each goal. Just literally, the very next baby step. Let's stick with the goal of writing a book. The next baby step is to develop your idea. Then after that, plot your story. Don't worry about writing it. Just plan it for now. If 'plot your story' is too big — you'll know, because you don't actually do it — then narrow it. Plot one scene at a time. Then when that's done, your next step is to write for ten minutes. That's all. Don't worry about the step after that (the next many, many steps will look the same anyway — write for ten minutes.) Do that, for every goal.
5. Be brave. Change is hard. It's scary. Human beings are hardwired to seek out the status quo.
You don’t have to be brave enough for the end goal though. Not today. You only need to be brave enough for the very next little step. Before I had weight loss surgery, I was terrified. What if I died? What if I had the surgery, but didn’t lose weight? But it wasn’t so scary to set an appointment with my own doctor. Or to ask her for a referral. Or to go to the refferal. Or to call my insurance company. And on. And on.
Those steps all boil down to this one thing: Keep doing the next thing.
I have this thing — I call it the Secret Weapon. It's a bunch of printable tools I created to use myself. I still use it every day. Maybe it'll help you, too.
Where would you like to be in five years? Share it in a response, if you want to. Sometimes writing it down makes it real. | https://shauntagrimes.medium.com/how-i-changed-my-life-and-you-can-too-d6a0b352a2cd | ['Shaunta Grimes'] | 2019-07-02 19:08:21.636000+00:00 | ['Self Improvement', 'Writing', 'Life', 'Weight Loss', 'Self']
COVID Underdogs: Sri Lanka | COVID Underdogs: Sri Lanka
Like New Zealand except better
In lieu of a wedding, Darshana Kumara Wijenarayana and Pawani Rasanga gave to the needy
As a note, Sri Lanka didn’t test enough, had a resurgence and is absolutely struggling with its second wave. 🤦🏾♂️
Sri Lanka is used to disaster. Over the last 15 years, I’ve lived through a tsunami, war, more terrorism than I can count, floods, riots, and been hospitalized with dengue. Shit just happens every few years.
Sri Lanka, however, is not a disaster.
Throughout all of this, we remain a beautiful, friendly, and generally safe place to visit and be. Rather than making us weak, generations of hard experience have made us strong. We are used to collective sacrifice. We do not debate killing our elders. We are resilient, like many of the nations you only see on the bad news.
In fighting COVID-19, that resilience served us well.
We crushed it.
Sri Lanka reacted early (<100 cases), reacted hard (total lockdown), and has almost completely eliminated COVID-19 from our shores. Over 100 grueling days later, we have no community spread, and — masks on — have returned to life. I went to the beach. I saw my 96-year-old Achchi after three months. She was so happy and yet so short that she kissed my wife's boob.
Forget New Zealand. Jacinda Ardern is great, but Sri Lanka is an island with 4x the population that has crushed the curve harder and flatter than them.
This is what we did.
First Case Wedding
Sri Lanka’s first recovered case
For a long time, Sri Lanka only had one case, imported from China on January 27th. She was tested, treated, recovered. Getting out, she basically had a wedding with the Minister of Health and DG of Health Services. This was, in hindsight, incorrect, but sweet. We never blamed China or the WHO, we just worked with them and saved our own asses.
Being paranoid, many Sri Lankans immediately bought or improvised face masks in January. I scoffed, but I was wrong. That paranoia was wise.
After Patient 1 recovered, things were quiet for a month and a half. Then we got hit. It wasn’t from China, which had their shit together. Our epidemic arrived via Italy.
Next Case Dreading
The Navy rehearsing in March. Navy personnel would ultimately suffer more infections than the rest of the country combined.
On March 10th, a Sri Lankan tour guide was confirmed as being infected, likely via Italian tourists. He passed the infection on. We had local transmission. Our epidemic had truly begun.
The whole country tensed up, except for the literal old boys of our two most irresponsible schools — Royal and St. Thomas — who insisted on having a days-long drinking party. Predictably, an infected airline pilot was there, and the whole country was on edge.
At this point, things could have easily gone either way. Italy rapidly accelerated from less than 100 cases to tens of thousands and Sri Lanka was on the same trajectory. Everybody was, that’s just the curve.
And don’t say Sri Lanka just got lucky, or it’s the climate. Yes, it’s hotter than Satan’s taint here, but COVID-19 is so rabid and new that it spreads regardless. We saw it rapidly go from one person to hundreds in the Navy. So no, we didn’t just get lucky.
We just took the right action, at the right time. There was strong, largely military leadership from President Gotabaya Rajapaksa, our well-developed public health sector, and especially the Epidemiology Unit, themselves battle-hardened from fighting malaria and dengue.
In the public, there was widespread compliance and support. The doctors' union stopped meddling in politics and gave good medical advice, for once. Nurses, doctors, cops, and troops all showed up to work. Dengue labs converted to run PCR tests and sequenced the full genome of our local strain. The larger public just masked up, shut up, and stayed home.
We didn’t waste time debating whether disasters exist. Everyone in this country has experienced some disaster, we know. Nothing like COVID, but if someone says a weird word like tsunami we don’t ask, we just move.
Sri Lanka reacted fast, we reacted aggressively, and that made all the difference.
Two weeks to eternity
The airport before it closed
Every country had two weeks. From the minute you get your first confirmed case, the clock is ticking, and it runs out after 100. Once you hit 100 cases you’re already dead, you just don’t know it yet. What each country did in those first two weeks echoed through eternity.
Countries that acted fast — like Korea, Mongolia, or Trinidad & Tobago — survived. Countries that dithered — like the US or UK — were hammered and will never suppress the virus now. Those two weeks were a window and it closed.
Sri Lanka made the most of it.
Within five days of the first local case (March 15th), Sri Lanka banned travel from much of Europe, Iran, and Korea. For some reason, we exempted the UK, but it rapidly became clear that they were the worst. Within a day we banned flights from there as well.
During this time things were changing every hour. My wife and one child were in the UK and barely got back on one of the last flights in (we self-quarantined). I know people just stuck in Sweden or the US. Many more got stuck in the Middle East. By March 22nd the airport was completely closed.
This was a huge sacrifice. We cut off the entire tourism industry like a gangrenous limb. We left thousands of Sri Lankans stranded abroad. But it worked.
Closing borders does not stop a pandemic; it just limits the size of the problem you have to deal with. It makes test/trace/isolate possible. If we'd shut later, we'd have had thousands of cases to find. If we'd shut earlier, it could have been zero. As it was, I think we had a few hundred to start. It was tremendously difficult to find and contain everything, but we did (inshallah).
Because we had fewer cases, we had time to scale up our defenses before the virus went viral. That’s why we’re able to safely open up now.
Above all, it was those two weeks. Because the Sri Lankan government acted in those two weeks, we saved thousands of lives and our entire economy. Our health system is good, but we have zero flex in the ICUs. We would have gotten hammered. Even waiting 12 days cost us 11 lives. Waiting any longer would have cost hundreds or thousands more. It could have cost everything.
That’s why we shut everything down.
Total Curfew | https://indica.medium.com/covid-underdogs-sri-lanka-db6eca164a35 | ['Indi Samarajiva'] | 2020-12-01 10:15:44.021000+00:00 | ['World', 'Government', 'Sri Lanka', 'Coronavirus', 'Covid 19'] |
Six Technologies Getting us Through the Pandemic | With COVID-19 lockdown restrictions issued across the globe, millions of us have been forced to hunker down “in place”, or severely limit our movements outside of the home. On learning this, most will have reached reflexively for the nearest device — if we didn’t learn it from that device, to begin with. Yet mostly we are cinched in a love-hate relationship with the presiding artefacts of our time, and we often resent tech’s power over us.
Nevertheless, new circumstances can breed new attitudes. Despite having spent the last few years debating whether or not technology will destroy us, March 2020 could be the month that at least partially redeems our faith in technology by demonstrating how fortunate we are to have some incredibly sophisticated tools in our homes.
For many, they are currently the sole portal to the outside world.
In recognition of the critical role they’re playing right now, here are six technologies getting us through:
1. Health: Telemedicine
Remember back when you felt comfortable booking an appointment and going to the doctor? Simpler times. But, thanks to telemedicine, we can just as easily connect with a physician from the comfort of our own home — and due to the outbreak, millions of patients are doing exactly that.
Telemedicine may have been around for years, but need is forcing the public to familiarize themselves with it, and it is booming as a consequence. Doctors (and bots!) are triaging patients both from and in remote locations, opening up capacity at a time when in-person facilities are saturated with the seriously ill. We're even seeing veterinarians following suit.
2. Fitness: Webinar Workouts (& fitness apps)
One of the first big responses to a housebound population came from the fitness industry. Those of us who had planned to use isolation as an excuse to retreat into couch potato mode would’ve been dismayed to see the huge influx of social advertising centered around fitness apps, online classes, and live webinar workouts.
In the UK, celebrity fitness instructor Joe Wicks has even anointed himself the nation’s PE (physical education) teacher and is broadcasting live workouts for kids at 9am daily. In the US, big name gyms like Planet Fitness have frozen memberships and are offering free at-home workouts for all via Facebook Live.
It’s worth remembering that we’d be dusting off our early-2000s celebrity workout DVDs if it wasn’t for technologically nimble companies and vastly improved streaming.
3. Business: Video Conferencing
While many businesses are bracing for an economic hit (if they haven't already fallen…), some — like those already mentioned — have been given a real chance to shine. Chief among these must be the video-conferencing suites of Zoom, Teams, BlueJeans, Skype, and the like. Across the globe, newly remote workers are embracing these platforms, keen to scan the facial expressions of colleagues during tense/embarrassing/generally terrible work meetings (with the added bonus of a glimpse into their homes!).
Though undeniably less of a headache for men (many professional women are now agonizing: waste a full face of Estée Lauder, or go completely barefaced….😱?), these tools are vastly preferable to the constant verbal clash and white noise of old-fashioned conference lines.
From a business perspective, face-to-facing with clients and customers also gives a level of engagement that will likely be lacking over the next few months of social distancing.
4. Relationships: Social Media
Often the villain of the piece, in some respects social media has proved its worth during this crisis. Yes, there have been (well-founded) accusations that platforms have been stoking COVID pandemonium but — misinformation aside — for relatives and friends in isolation, sites like Facebook have become something of a lifeline.
It’s easy to forget that if you don’t live next door to loved ones (and even if you do!) social media can facilitate free-flowing communication and community expression. Though doubtless it can sometimes fuel dramatic and unhelpful rumors, also helps amplify important messages about caution and gratitude. Moreover, the platforms are making a valiant effort to pump out legitimate, shareable information and advice.
5. Entertainment: Tech-Driven #Quarantainment
Outside of health, business, and relationships one of the key roles for tech has been delivering us new forms of what we are now terming #quarantainment. Bafflingly, despite having the internet, handheld devices, music on-demand, algorithmically tailored television, food and wine delivery, and streaming access to every movie ever made, it seems that we are all completely bored.
Enter: entertainment innovation, and a staggering array of technologically enabled new options. Do you want to see a theater production? Visit a museum? Listen to a sports-style commentary of the utterly mundane? Request a tune from a musical impresario? Or just watch this hot mess unfold? Talented internet folks have stepped forward to fill in the void that a few extra hours at home has produced.
Will life ever be the same again, one might ask…?
There are different kinds of entertainment…
6. Information: Virtual Assistants
Love them or loathe them, our affectionless friends are here to help — and especially now that staying informed has taken on a critical purpose. According to this article, more people are turning to virtual assistants to stay up-to-date with the ever-changing global picture, as well as locally-issued orders. So far, efforts have been made to keep them reliable by prioritizing official health sources and reputable outlets.
We can’t pretend that these AI platforms are beautifully frictionless, but companies like Google and Amazon have encouraged developers to submit coronavirus specific voice-apps to help cope with the demand for up-to-date facts, figures, and announcements. Could you get these from any other device? Quite possibly; but this way you can avoid touching those grubby smartphones.
— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Of course, as well as "getting us through", technology is being deployed to find vaccines and assess risk as part of a global push to stymie the spread of the virus. In China, it's even being used to police public adherence to new health and safety rules.
While it’s absolutely right to scrutinize tech and evaluate its capacity for harm (now as much as ever) we can, without falling into complete tech solutionism, be thankful for our connectivity and engagement at this time of unfamiliar distance. | https://medium.com/swlh/six-technologies-getting-us-through-the-pandemic-40eeae97be48 | ['Fiona J Mcevoy'] | 2020-04-18 11:21:18.682000+00:00 | ['Technology', 'Coronavirus', 'Community', 'Apps', 'Connection'] |
Worship My Feet! | You’re in good company, you see, but I doubt the majority are as spellbound by my digits as you are.
I can control you with them, torture you even, and while I usually let you be in charge, I’m the boss now. With the smallest of movements, I pull you towards me, hooked by an intangible thread.
—Want a foot rub? you suggest, also playing blasé, not wanting to reveal just how defenseless you are.
I answer by swinging my body sideways, landing my pods in your open lap to recline back on the sofa. You start with stroking, lightly running your fingers alongside and underneath, tracing toes. Your touch sends tingles up my spine that makes my brain buzz. I begin to feel something else buzzing too. You push your crotch against me. I push back. It’s a tug of war. Who controls who? I’m losing track.
I used to think only submissives worship feet, that the action defined our roles in the bedroom and vice versa. I’ve since been proven wrong. Partially by you.
A person can be dominant on their knees; in charge with their tongue coiled around toes. You can find submissives on top, even with a flogger in hand. Neither tools nor body placement determines who's in control. Power is more fluid than that; it's in our words, spoken with our mouths or with our bodies alone. It's in our minds.
As you lift my foot to your lips and let my big toe part them we’re both in charge, and equally at each other’s mercy. Watching my eyes you slowly swallow the whole thing, suck and twirl it around, then, slide it out.
Funnily, I feel this in my gut more than anywhere else. Is this explained by reflexology? Where feet are thought to be fitted with pressure points that correspond to parts of the body. They say our toes have meridians that connect to our organs; the big toe sends signals to the liver and spleen. Is that what I’m feeling?
My crotch floods with warmth in response as I rock back and forth in concert with your movements. You have your way with each of my toes, tantalizing my digestive system via secret pathways. | https://medium.com/essensually-ena/worship-my-feet-a5223bb24c96 | ['Ena Dahl'] | 2020-09-29 10:57:22.945000+00:00 | ['Body Positive', 'Short Story', 'Psychology', 'Fetish', 'Sexuality'] |
How To Become a Machine Learning Engineer | A large number of my companions from the software engineering foundation ask me inquiries like, how to turn into a Machine learning engineer in India, what amount does a Machine learning Engineer acquire, or how might I become a Machine Learning engineer without an advanced education. Thus, I thought why not compose sites on these themes. Along these lines, how about we begin. I will be imparting some simple and demonstrated techniques to which somebody can begin with Machine learning. Along these lines, we will go there above all, we should examine a few fundamentals things.
To learn more, check out my detailed post on How To Become A Machine Learning Engineer
How To Become a Machine Learning Engineer
Path to Become a Machine learning engineer
To learn about machine learning, I recommend checking out my blog post where I have discussed what machine learning is, machine learning use cases, and the future of machine learning — Why we need to know about Machine Learning? (ML001)
So, now that you have finally decided to become a machine learning engineer, we will go through some important points which will help you decide how to approach it. We will cover the topics listed below and dive deep into each one of them.
Future of Machine learning
The main advantage of machine learning is its boundless applications. These days, every industry is affected by machine learning and artificial intelligence. It has helped industries grow and become more productive. Take, for instance, how machine learning has changed the healthcare sector. Nowadays, doctors can analyze much more data and arrive at better conclusions. Medical monitoring analysis has taken new turns, and some of the algorithms give better accuracy than human interpretation. So, being a machine learning engineer in the age of growing automation can be extremely rewarding.
With the world moving towards automation, there is a greater need for solving complex problems, and that is where machine learning engineers come into the picture, as they are the ones who can solve these complex problems using machine learning techniques.
What is a Machine learning engineer?
So, we have reached the point where we discuss the main subject of this blog, which is: what is a machine learning engineer? A machine learning engineer is someone who is skilled in probability and statistics, good with differential calculus, good with algorithms, and, last but not least, proficient in a programming language (preferably Python). Their job is to work alongside data scientists and make sure that whatever models are used for the given data work well, after which the data scientists can go ahead and discuss the insights from the data with the stakeholders of the company. So, a machine learning engineer's job is to understand the data first and find the hidden patterns in it, either through an array of existing models or by building a custom model that works best with the given data.
They use software frameworks and big data techniques to make sure that the data pipelines collect raw data and use it to make the machine learning models more efficient and reliable. They also work towards making sure that the machine learning applications they build work in real time and give the best results.
So, if you want to start with machine learning and are hoping to find good datasets to begin with, do read the article — Best Image Datasets for Machine Learning and Data Science (ML002)
Gain Access to Expert View — Subscribe to DDI Intel | https://medium.com/datadriveninvestor/how-to-become-a-machine-learning-engineer-4eade6d73b31 | ['Subham Tewari'] | 2020-12-06 15:23:19.966000+00:00 | ['Deep Learning', 'Artificial Intelligence', 'Computer Science', 'Data Science', 'Machine Learning'] |
Mario’s Gym Routine | Can you teach an old dog new tricks?
A bit over a year ago, I saw a video of a Python program which was able to train a machine to play and win a game of Atari’s Pong through a process known as Reinforcement Learning (RL). Since viewing that video and my mind being subsequently blown by the potential of this application of AI, I have gone on a journey to learn how to leverage RL to beat various retro video games, and even documented my early stages in a previous post.
Throughout the past year, I’ve written various projects from scratch which aim to beat levels in the original Super Mario Bros. on the NES, but wanted to find a better way to switch between models and parameters without writing several hundred additional lines in each scenario. After experimenting with many RL libraries available in Python, I found rllib from the Ray Project to be the most effective and flexible, while also being able to scale up and out on machines — a requirement for my desires to take on larger and more complex environments. This article is intended to provide an example of how to build your own application which can be used to train an agent to beat Super Mario Bros. levels.
Baby Steps: Setting up an environment
Before I jump into the code, a development environment needs to be created to install dependencies and get your machine ready to learn. For installing dependencies in an isolated location for my project, I use either a virtualenv or conda environment to ensure I won’t be installing anything that could effect other apps on my computer. While I won’t go over setup for either in this post, you can read the official docs for either virtualenv or conda.
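If you go the virtualenv route, the setup might look something like the sketch below. The environment name here is arbitrary, and the heavy dependency install is left commented out so you can run it once the environment is active:

```shell
# Create an isolated environment for this project (the name "mario-rl-env"
# is just an example -- pick anything you like).
python3 -m venv mario-rl-env

# Activate it (bash/zsh; on Windows use mario-rl-env\Scripts\activate).
. mario-rl-env/bin/activate

# With the environment active, packages installed via pip stay local to it:
# python -m pip install "ray[rllib]" torch gym-super-mario-bros
```

Everything installed while the environment is active stays inside that folder, so it can't interfere with other projects on your machine.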
Inside an active environment of your choice, several dependencies need to be installed for the program to function. These can be installed with Python’s PIP:
pip install ray[rllib] torch gym-super-mario-bros
A more concrete list of requirements can be found on the repository I created for this project, but note that while the repository and dependencies may change with time, the dependencies will always stay compatible with the latest upstream code.
While this code might work on other Python versions (such as Python 3.6), I have only tested it on 3.7 and newer versions, which is highly recommended.
View from above
Whenever I look at examples for a new application, framework, method, or library, I always prefer to look at the entire chunk of sample code before diving into specifics. This allows me to see where the application begins, what needs to be initialized, any arguments that are parsed, which libraries are imported, and the overall structure of the code. For me personally, it’s much more valuable to have a high-level understanding of the code before diving into specific lines without any context, and that’s exactly what I’m going to do here.
The following example is a completely self-contained Python program which can train Mario to beat a specified level of the game. For those that like to get their hands dirty and just run with this code and don’t need explanations, enjoy! I will dive into more detail on each block of code for those sticking around. But, without further ado, here’s a sample application to give Mario a new brain:
Focusing in
With the full application out of the way, let’s look at each section in greater detail to understand what it does.
Import modules
We first need to import several libraries to make our lives easier. This is one of the great aspects of Python — there’s a high chance that something you need has already been created and is available in a library. I won’t mention all of the imports, but some of the key ones are as follows:
gym_super_mario_bros : An OpenAI Gym-compatible environment which allows Python programs to interact with Super Mario Bros. seamlessly.
ray : A distributed computing framework which makes it easy to run processes across multiple workers and machines.
nes_py : A framework to interface between NES game environments, such as Super Mario Bros., and Python applications.
rllib : A reinforcement learning library built on top of Ray which includes several robust models and algorithms to easily build and scale RL applications.
Custom wrappers
One of the great features of OpenAI’s gym is how easy it is to create custom wrappers around an environment to alter various aspects of the game or virtual world, such as modifying image shapes and sizes, stacking multiple frames together, changing reset conditions, and many, many more.
In some cases, as is the case with playing games from the Atari environment with OpenAI's Gym, it is possible to get away with using the various atari_wrappers which are included with most RL libraries, without the need to add any additional wrappers. Most of these wrappers are imported above, but I had to update one of them, the EpisodicLifeEnv wrapper, as it expects the environment to have an Arcade Learning Environment (ALE) object, which our Super Mario Bros. environment does not contain. I simply copied the code for this class as-is from the ray.rllib.env.atari_wrappers module, but modified lines 18 and 37 above to point to the unwrapped property instead of the ale. By including this code locally in my application and changing those two lines, I still get the full functionality of the wrapper (which I will explain further below) but it won't complain that my environment doesn't have an ale component.
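To make the idea concrete, here is a self-contained sketch of what an EpisodicLifeEnv-style wrapper does, written against a stub environment so it runs without the game installed. The names here (StubMarioEnv, the _life attribute, the string actions) are stand-ins invented for illustration, not the real gym_super_mario_bros API; the point is only that the life count is read through the env's unwrapped property rather than an Atari-specific ale object.

```python
class StubMarioEnv:
    """Minimal stand-in for a game environment, exposing only what the
    wrapper sketch needs: reset/step and a life counter on `unwrapped`."""

    def __init__(self):
        self._life = 2
        self.unwrapped = self  # real gym envs expose .unwrapped too

    def reset(self):
        self._life = 2
        return "obs"

    def step(self, action):
        if action == "die":
            self._life -= 1
        # obs, reward, done, info -- done only when all lives are gone
        return "obs", 0.0, self._life == 0, {"life": self._life}


class EpisodicLifeSketch:
    """Treat every lost life as the end of a training episode, without
    resetting the underlying game until it is truly over."""

    def __init__(self, env):
        self.env = env
        self.lives = 0
        self.was_real_done = True

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.was_real_done = done
        # Read lives from env.unwrapped -- NOT env.unwrapped.ale
        lives = self.env.unwrapped._life
        if 0 < lives < self.lives:
            done = True  # lost a life: force an episode boundary
        self.lives = lives
        return obs, reward, done, info

    def reset(self):
        if self.was_real_done:
            obs = self.env.reset()
        else:
            # Only a life was lost; keep playing from the current state.
            obs, _, _, _ = self.env.step("noop")
        self.lives = self.env.unwrapped._life
        return obs


# Losing a life ends the training episode even though the game continues:
env = EpisodicLifeSketch(StubMarioEnv())
env.reset()
_, _, done, _ = env.step("die")
```

Signaling "done" on every lost life gives the learner tighter feedback, while was_real_done ensures the actual game only restarts when it is genuinely over.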
I also modified a snippet of code from Uvipen’s awesome Super-mario-bros-A3C-pytorch repository which updates the reward each learner receives after every step. By default, gym_super_mario_bros determines the reward at each step by calculating Mario’s velocity (positive points while moving right, negative points while moving left, zero while standing still), plus a penalty for every frame that passes to encourage movement, and a penalty if Mario dies for any reason. While this is a fairly robust reward system, the levels could be played out more “normally” (as in, the way most humans would play them) by rewarding Mario for increasing his in-game score by defeating enemies, grabbing coins, and collecting power-ups. This information is saved in the info dictionary, and the previous score can be compared with the current game score to find a difference in the latest step and add that to the reward.
In addition to improving the reward when Mario increases his in-game score, a sizable reward is added if he collects the flag (or defeats Bowser) at the end of the level to encourage him to successfully beat the stage. He also receives a relatively large penalty if he doesn’t make it to the end of the level before he dies.
This new reward is then scaled down into a smaller range, similar to the reward clipping technique applied in the original DQN paper by Mnih et al.
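A self-contained sketch of this reward shaping is below. The exact constants (the 1/40 score scaling, the ±50 end-of-level bonus and penalty, and the final division by 10) are illustrative assumptions; the values in uvipen’s repository and in the article’s own code may differ. The "score" and "flag_get" keys are the ones gym_super_mario_bros reports in its info dictionary.

```python
def shape_reward(reward, info, prev_score, done):
    """Add the in-game score delta, a flag bonus, and a death penalty,
    then scale the result into a smaller range (cf. reward clipping in
    Mnih et al.)."""
    score = info.get("score", 0)
    reward += (score - prev_score) / 40.0        # reward score increases
    if done:
        if info.get("flag_get", False):
            reward += 50.0                       # reached the flag / beat Bowser
        else:
            reward -= 50.0                       # died before the end of the level
    return reward / 10.0, score                  # scale down, carry score forward


# One mid-level step: Mario's score rose from 100 to 140 with a base reward of 1.0.
r, new_prev = shape_reward(1.0, {"score": 140, "flag_get": False}, 100, False)
```

Carrying `new_prev` forward between steps is what lets the wrapper compute the score difference for the latest step only.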
Parsing arguments
Up next we parse arguments passed by the CLI during runtime. The user can modify a few components of the application by specifying various flags. These flags have the following impacts:
--checkpoint: The program automatically creates a new checkpoint file after every 50 training iterations and saves it to a spot that is indicated during runtime, typically at ~/ray_results/<Training instance>/checkpoint_<n>/. By providing the full path to this file, a run can use that checkpoint’s weights to start a new training pass which ideally has a bit of progress already, dramatically reducing the amount of time required to train an agent, and also making it possible to use transfer learning to take Mario’s knowledge of one level and apply it to another.
--dimension: Each frame is cropped down to a square NxN image, which makes it faster and easier for a neural network to process details in the image. Mnih et al. used an image size of 84x84, which is commonly used for most retro video games, though sometimes other dimensions are used, such as 42x42.
--environment: This is the OpenAI Gym environment to train against. The gym_super_mario_bros environments have the format SuperMarioBros-<world>-<level>-<variant>, where <world> is the game’s world number (1–8), <level> is the level number within that world (1–4), and <variant> modifies how the game looks, from an untouched state (v0) to colored blocks representing all objects, enemies, and players in the game (v3). It is recommended to stick with v0 for the game variant, as most robust RL agents are able to handle the natural game state.
--framestack: Multiple frames can be stacked together to indicate the motion of various objects. This helps the agent understand how to react to a certain situation, as it is able to infer where objects will be in future states. This is similar to how humans interpret environments and states. If you were to take a random screenshot of a game and share it with a friend without any context, they might struggle to determine which direction each object on the screen was moving in. If, however, you were to take four screenshots from four consecutive frames and share them in order, your friend would likely have an easier time deciphering the direction of motion for each object, as they could compare the differences across the sequence. While the default value of 4 is recommended, different values can be experimented with to see the effect.
--gpus: If users have compatible NVIDIA GPUs installed on their system, they can use those resources to dramatically improve the overall throughput and decrease the time required to train an agent. At the moment, the implementation outlined in this article supports a maximum of 1 GPU, but this will hopefully change in future versions of RLLib.
--iterations: This flag tells RLLib how many training passes it should go through prior to terminating. Note that this is independent of the number of steps that have been taken during execution.
--workers: Workers are the system resources that actually deploy the agents, which independently learn and update global weights with any progress (depending on the model). This number can never be more than the available number of CPU cores on a system, and it is recommended to use one less than the total number of cores, with the remaining core acting as a controller which deploys Ray, listens for all updates, and broadcasts changes to the cluster.
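The flags above can be sketched with argparse roughly as follows. The default values shown here are assumptions for illustration; the application’s actual defaults may differ.

```python
import argparse

def parse_args(argv=None):
    """Parse the CLI flags described above (defaults are illustrative)."""
    parser = argparse.ArgumentParser(description="Train Mario with RLLib")
    parser.add_argument("--checkpoint", type=str, default=None,
                        help="path to a checkpoint file to resume training from")
    parser.add_argument("--dimension", type=int, default=84,
                        help="side length of the square observation, e.g. 84")
    parser.add_argument("--environment", type=str,
                        default="SuperMarioBros-1-1-v0",
                        help="gym_super_mario_bros environment id")
    parser.add_argument("--framestack", type=int, default=4,
                        help="number of frames to stack together")
    parser.add_argument("--gpus", type=int, default=0,
                        help="number of GPUs to use (at most 1 at the moment)")
    parser.add_argument("--iterations", type=int, default=1000,
                        help="training iterations before terminating")
    parser.add_argument("--workers", type=int, default=4,
                        help="rollout workers (at most CPU cores minus one)")
    return parser.parse_args(argv)

args = parse_args([])  # empty argv -> all defaults
```

Passing `argv` explicitly makes the function easy to test; in the real script, calling `parse_args()` with no argument reads `sys.argv` as usual.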
Creating an environment
Before observations can be fed into a neural network, the environment needs to be built and wrapped with our modifications to make it easier for the agent to learn. I will go through these wrappers line by line.
gym_super_mario_bros.make: This is a simple wrapper around the gym.make function which builds the specified SuperMarioBros environment and returns the respective env.
CustomReward: This modifies the reward returned after each step, following the changes listed earlier in this article.
JoypadSpace: When the network picks an action to take, it typically selects a zero-based number from a list which represents an action that the agent can take. For example, the first item in a list of actions might be for the agent to stay in place, the second item to go right, the third to go left, and so on. Adding the SIMPLE_MOVEMENT constant from the gym_super_mario_bros actions to the wrapper maps Mario’s actions to a list of pre-defined movements. There are three action lists available — RIGHT_ONLY, SIMPLE_MOVEMENT, and COMPLEX_MOVEMENT — which increase in the number of possible actions that Mario can make, ranging from only being able to run right and jump to unlocking his entire toolset of abilities and motion. SIMPLE_MOVEMENT is a nice balance between offering Mario a range of movement and not bloating the action space with a large number of possible actions, which would make it harder to learn.
MonitorEnv: The MonitorEnv keeps track of episode statistics after each run, which can be viewed with the overall results. While this wrapper doesn’t change the environment, it does provide many useful metrics to evaluate overall performance.
NoopResetEnv: The initial state of an environment is sampled by taking a random number of no-ops, or non-actions, in a row after reset. The no-op is generally assumed to be the first action in the action space, which in our case corresponds to Mario standing still.
EpisodicLifeEnv: This wrapper uses the modified version of the class that was shown above with the custom wrappers to add support for SuperMarioBros environments. The wrapper modifies the environment so that any time the player loses a life, it is considered the end of an episode, but a reset only happens when the game is over. This helps with estimation of future policies to aid in learning.
WarpFrame: This wrapper first converts all images to single-channel grayscale, then resizes them down to an NxN square image as specified by the --dimension flag. This makes it quicker and easier for the network to find key features in each observation.
FrameStack: This stacks the number of frames specified by the --framestack flag together in a sequence to show the motion of objects. The motivation for this wrapper was outlined in the argument parser section above; the default setting is to stack four frames together to illuminate subtle differences between frames and allow the network to decipher movement.
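The ordering of these wrappers matters, since each one wraps the one before it. The composition can be sketched as a small helper; trivial stand-in wrappers are used here so the snippet runs anywhere, but in the real code each name in `order` is the corresponding wrapper class above.

```python
from functools import reduce

def chain(env, wrappers):
    """Apply each wrapper to the environment in order, innermost first."""
    return reduce(lambda e, w: w(e), wrappers, env)

# Stand-in wrappers for the demo: each just records its name on the "env".
def make_wrapper(name):
    def wrapper(env):
        return env + [name]
    return wrapper

# The wrapper order described in this section (after gym_super_mario_bros.make).
order = ["CustomReward", "JoypadSpace", "MonitorEnv", "NoopResetEnv",
         "EpisodicLifeEnv", "WarpFrame", "FrameStack"]
env = chain([], [make_wrapper(n) for n in order])
```

Expressing the chain as a list makes it easy to experiment with adding, removing, or reordering wrappers without rewriting the creation function.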
Printing progress
This function is not strictly necessary, but I find the standard output printed by RLLib to be fairly verbose, and I prefer to run a training loop in the background and only periodically look at certain values, such as the max, min, and mean rewards. The tabulate library makes it very easy to print clean tables of information, which is what I use here, similar to what RLLib does with its own output. If desired, this function can be omitted, and the result object below can simply be passed to pretty_print instead.
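A stdlib-only sketch of the same idea is shown below. The real code uses the tabulate library; this mimics the compact per-iteration line without it. The metric names are standard keys in an RLLib result dictionary.

```python
def progress_row(iteration, result):
    """Format the reward metrics we care about from an RLLib result dict."""
    fields = ["episode_reward_min", "episode_reward_mean", "episode_reward_max"]
    values = [f"{result.get(f, float('nan')):>10.2f}" for f in fields]
    return f"iter {iteration:>5d} | " + " | ".join(values)

row = progress_row(3, {"episode_reward_min": -1.50,
                       "episode_reward_mean": 4.25,
                       "episode_reward_max": 12.00})
```

Printing one such row per iteration keeps the terminal readable while a long run churns away in the background.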
Main function
The main function in this case is where a lot of the magic happens and there’s a lot going on (which slightly pains me given how I normally write Python applications with a slim main function, but I wanted to be relatively concise in this case).
Breaking it down, the function starts with an env_creator_lambda definition which allows us to build a custom environment. This is necessary as RLLib takes a gym environment’s name by default and builds that environment at the beginning of a training run. Since we added several custom wrappers to our environment, they need to be included in a new function which is handled specially by RLLib, so that our Python application builds the environment instead of RLLib doing it automatically.
I borrowed the IMPALA config tuned specifically for Atari Pong from the official Ray repository as a baseline for the Super Mario Bros. config. I chose IMPALA for my agent as I have a fairly beefy workstation which allows me to leverage a few dozen workers and two GPUs to accelerate the training process. IMPALA tends to be on the faster side among the common RL models when given enough resources, hence my usage here. One of the reasons I moved to an RL library, however, was to be able to seamlessly transition between various models without writing a bunch of new code to support new algorithms. Moving forward, I plan on adding new models to this application.
Eagle-eyed readers will also notice I use PyTorch as my framework of choice. While I haven’t done any comparisons for this specific application between PyTorch and TensorFlow (both are supported by RLLib), I tend to pick PyTorch when given the option as I prefer the framework’s more Pythonic and intuitive style, generally better performance, and seamless support for GPUs. Assuming both packages are installed, switching between the two frameworks is as easy as changing the value of the framework setting.
After initializing the config settings, several Ray-specific functions are called to start up a cluster on the current node, register an environment (using our custom environment-creation function with the wrappers above) via register_env, and build an IMPALA trainer based on the config that we created.
If the user specified a checkpoint file during runtime, the trainer will read the requested checkpoint file, and update the trainer’s weights based on that checkpoint prior to starting the run.
Lastly, the training sequence is created by looping through the requested number of iterations and calling the train() method against the trainer to begin that phase of the pipeline. This will automatically update the model weights as progress is made and new policies are learned. The results from each iteration are then printed using the custom function created earlier to display the episode rewards over time. On every 50th iteration, a new checkpoint file is created and saved alongside model parameters, typically in the ~/ray_results/ directory.
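The shape of that loop can be sketched as follows, with a stub standing in for the IMPALA trainer so the control flow runs anywhere. In the real application, `trainer.train()` and `trainer.save()` are the RLLib calls, and the result dictionary is what gets printed each iteration.

```python
class StubTrainer:
    """Hypothetical stand-in for an RLLib trainer, for illustration only."""
    def __init__(self):
        self.saves = []

    def train(self):
        return {"episode_reward_mean": 0.0}  # one training iteration

    def save(self):
        self.saves.append("checkpoint")      # write a checkpoint to disk


def run(trainer, iterations, checkpoint_every=50):
    for i in range(1, iterations + 1):
        result = trainer.train()             # update weights / learn policies
        # (print progress for `result` here, e.g. with the custom function)
        if i % checkpoint_every == 0:
            trainer.save()                   # checkpoint on every 50th iteration
    return trainer


t = run(StubTrainer(), 120)  # saves checkpoints at iterations 50 and 100
```

The checkpoint cadence is what makes the `--checkpoint` flag useful later: any saved iteration can seed a fresh run.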
Running the application
At this point, you should now be able to replicate the training environment on your own machine by copying the first code snippet above, installing dependencies, and running the code with any necessary parameter changes.
If the code was saved as train.py , the training can be kicked off by simply running the following:
$ python3 train.py
Note that if you have a different number of available CPU cores than the default four, or have NVIDIA GPUs, you can specify the counts with the --workers and --gpus flags, respectively, similar to the following:
$ python3 train.py --workers 60 --gpus 1
Photo by Victor Freitas on Unsplash
Italian Plumbers Pumping Iron
Congratulations! If you followed this guide, you should now be able to train Mario to beat various levels of the original game that started it all. It may take several hours to complete a level depending on resources, and some are harder than others, but this offers a base foundation to achieve greater heights in the exciting realm of reinforcement learning.
Have any suggestions for improvement? Feel free to drop a new issue or pull request on my repository or comment below. And as always, enjoy the new adventures with everyone’s favorite Italian plumber. Wahoo! | https://towardsdatascience.com/marios-gym-routine-6f095889b207 | ['Robert Clark'] | 2020-12-24 14:15:52.292000+00:00 | ['Reinforcement Learning', 'Mario', 'Python', 'Nvidia', 'Editors Pick'] |
8 Overlooked Details by Beginner UI/UX Designers | Photo by Josh Calabrese on Unsplash
When designing for a big project, there are some pages and elements that designers often forget about during the design process. Many of the mistakes mentioned here are made by beginners, and/or in projects where there was no time to create wireframes to foresee all the flows and problems.
1. Forgot password flow
Image by https://www.howtogeek.com/357257/how-to-recover-your-forgotten-instagram-password/
It often happens that beginners, while designing the login and registration flows, lose track of the forgot-password flow. Like the login flow, it takes only a short amount of time to design; still, it’s an important flow one should not forget.
Users can recover the password by email and/or phone number. Recovery can be a link that leads the user to create a new password, or it can be a one-time password.
Which would work better — depends on the app/website.
If it’s an email and the user is on a mobile device, it would take more steps to go through to recover the password. And if it’s a mobile number, the problem is that if the user changes his/her number, the future owner of that number can access their account.
More about that you can read in the 4th point of my 8 UX Design Tips for handling controversial positions of UI elements article.
2. 404 error
One of the most challenging pages to design is the 404 page, as it’s often based on personal taste. Designers have relative creative freedom here, and there are lots of awesome 404 pages, one better than the other, so it’s hard to compete with them. There is always the simple solution of putting a big 404 in the middle of the screen and writing how sorry we are, but if we want a mind-blowing effect, it’s better at least to try to blow some minds. I know it’s not the best, but my personal favorite is Figma’s 404 page, where you can move the nodes of the vector-based 404 shape.
3. Skeletons and Spinners
A skeleton loader is a low-fi, wireframish-looking representation of the final design that appears when you open a page, before the content has downloaded. Open Facebook and scroll down, and you’ll see gray shapes standing in for the images and texts for a short time before the page appears. That’s the skeleton. The other popular type of loader is a spinner: a spinning circle that shows the user that some progress is going on.
When to use Spinners and Skeletons (and other loaders) is a subject for another day.
In short — we use skeletons when the page has many elements such as pictures, input boxes, and texts, basically whenever information is requested from the back-end. Spinners are better for the progress of some exact process, like purchasing a ticket, opening an application, uploading a photo, etc.
There is also a mixed variant: the page loads with a skeleton, but within that page there are micro-spinners for individual waiting processes.
So spinners and skeletons are elements (if you could say so) that many beginners often forget about, until the moment when the development process overtakes them.
4. Empty Search page
Image from Amazon.com
When designing e-commerce or similar website we design how the search works, and how the results show up. And the cool designs of filters of that search process, and how the products appear on our cool layout grid. But often, beginners forget to design the page of the case when the search brought no results. It’s another problem that appears during the development process.
The same goes for the order-history page when users don’t have any orders yet.
5. Payment Failed Pages
In the payment flow of an e-commerce or other app/website, there is a cool page at the end where a message tells users that their purchase was successful. Beginner designers often forget to design the failure page, for example, for when the user didn’t have enough money on the card, or some other problem appeared. There are even times when designers forget about the success page as well, and the flow takes users straight to the homepage.
So it’s another proof that it’s always better to have an IA, a site map, and user flows before starting the design.
6. Handoff some assets
It’s often the case that the designer forgets to hand off some images, icons, or other assets to developers. Again, the problem appears during the development process, and the designer has to go back, export the assets, and hand them off to the engineers.
7. Providing the Style Guide
Style Guide of one of my projects
Some beginners often forget (or don’t consider it necessary) to create a Style Guide for the website/app. Some designers create the Style Guide before starting the design, others after finishing, and some during the process. I would suggest making it whenever it suits you, but it’s an important document that developers must receive before their involvement. If they have a Style Guide, they may even correct your accidental errors sometimes.
8. Favicon
Image from https://fitsmallbusiness.com/favicon-website-icon/
Favicons are the small icons on the tabs of browsers. Usually, it’s a logo of a website/app, sometimes, another identifying symbol/image.
Often designers forget to design the favicon, and developers/project managers remind them to design and deliver it only when the project is at its launch. It would be better to include it with all the other assets and hand it over to developers before coding starts.
Moon and Sun | Fainting moon and morning sun,
My companions while all alone
Dazzled by their changing lights,
Mirrors of thriving spirits within
Fleeting moments of spiritual bliss.
Quieting my mind’s racing thoughts,
Awareness in these mindful moments
Letting peacefulness settle over me,
Cool winds blowing against my face
Morning freshness to be absorbed.
Bright moon’s face has disappeared
Sun’s crimson rays have taken hold
Sun and thoughts, companions now,
Contemplations on why we’re here
Purposeful lives we are meant to have.
Philosophical musings and meditations
Illusions, realities, mixed and churned
Discerning images that are revealed,
What is real and what is not?
Moon and sun, we know they’re real.
Mind and spirit, relaxing into solitude. | https://medium.com/publishous/moon-and-sun-cffa6f5de4b1 | ['Randy Shingler'] | 2020-04-14 19:19:00.717000+00:00 | ['Self-awareness', 'Spiritual', 'Moon', 'Poetry', 'Mindfulness'] |
We really need to talk about why the media likes to sanitize bigots and bigotry | America has a race problem. The media has an even deeper issue with bigotry and the bigots they cowardly protect. You would think that having a White president, who proudly identifies as a White nationalist, would intensify the need to avoid tendencies to pretty up the ugliness in our midst — but that hasn’t been the case.
The media can’t afford to irreversibly anger those that provide the clicks and traffic.
There is the traditionalized route of vilifying Trump, and there’s also the urgency to stealthily utilize his potency as the seamless engine for career-making moves and epic ratings. Major networks are reveling in this deadly climate that has ended many Black and Brown lives, while allowing for prominent anchors to channel their trajectories in alignment with the “breaking news” of unsightly actions by the White man who is paying the bills.
The truth is that we need Trump to be the constant and dependable villain.
That’s why Yamiche Alcindor endured having her dignity stripped away in full view of her colleagues, when the president accused her of being a racist, and not one single member of the press corps came to her defense — during that cringe-worthy moment when a Black woman was verbally assaulted by a racist asshole.
It explains why the media over-indulges in the hypocritical fodder of presenting the heroic tales of oppressed citizens, overcoming the very worst this country has to offer, while doctoring the words and phrases to prevent the deplorable themes from exacting justifiable damage to the perpetrators.
Exhibit A:
Social media and online journalism didn’t just destroy print — they also killed the art of reporting.
The golden age of journalism is a fascinating era for journalists like me, who are desperately trying to recall the aura of the illustrious past.
It’s a terrifically challenging feat, when you consider the threat of an irretrievably broken system, that no longer recognizes the inappropriateness of White notables, who deserve to be shamed for their shameful demonstrations of unfiltered hate, and outright rejection of the laws that were instituted to prevent the disease of lawlessness.
The societal disorder that currently overwhelms has been magnified by the GOP and how party members have accepted the responsibility of enabling the catastrophic results of a leaderless nation.
Fifty years ago, Richard M. Nixon took the oath of office and became the 37th president of the United States of America. He survived the first term, and managed to secure a second one, but the Watergate scandal, supported by damning materials proving his guilt beyond a doubt, forced Nixon to resign in disgrace.
The abrupt end to a presidency in turmoil is significant, especially when the Commander-in-Chief makes the difficult decision to walk away from his job duties, with the understanding that his legacy will be forever shattered by the historical relevance of his traitorous governance.
But the fundamental aspect of the scandal that made the early seventies the period when the nation was entrenched in nationalized espionage was the relentless pursuit of justice by two passionately driven reporters from The Washington Post — Bob Woodward and Carl Bernstein.
Back in the day (left, Bernstein; right, Woodward)
Both men could easily be described as the very definition of “American patriotism.”
They were seasoned journalists who were willing and able to risk it all for the assignment of tackling the stench of nefariousness, that couldn’t be doused with flowery scents of nonchalance, and the guiltless adherence to methodical withdrawals from the entire landscape of #facts.
When you have the serious fracture of governmental hierarchies, that are embroiled in a major coverup, that floods pollution into the avenues of our democracy under the tutelage of the “leader of the free world” — the value of reportage goes all the way up.
Then you add the tampering of documents and misallocation of funds based on the incentive to blot away the loot from the “break-in” heard around the world — there is nothing else to do but endeavor to unearth the gems from the wreckage.
It’s hard not to wonder how well Woodward and Bernstein would’ve fared in this era of social discontent and maddened chaos, that allows for a dubiously sinister figure with love affairs that center around his preferred palate for murderous dictators — to be righteously feted by the entertainment industry — through once-ailing institutions that have been brought back to life with hearty laughter at their expense.
Citizen Trump was a celebrity with a reality show about celebrities who want to rule kingdoms of gold, and it was this bloated resume that led him toward the path of greatness.
To be great in America is to be in a position of influence, whether you’ve earned the right or not. It’s the reason why “influencers” can be paid thousands of dollars to lounge in five-star resorts, and write illegible blurbs that explain how we can catch a direct flight to a paradise that we can only afford to hover and click on.
Trump didn’t run for office because of his lengthy record of community service, and his steadfast mission to fight for the liberty and security of all Americans. Like most White males with too much money and all the privilege to match that station, there was no concrete reason to reject the seduction of absolute power.
Republicans were hungry to bounce back from 8 years of brutal perfection that was steered by the Messiah-like Black man with an Ivy League degree, and the charming disposition that made world leaders melt on contact.
Donald Trump is a dangerous motherfucker, but Barack Obama was lethal.
How do you reconcile the Black man in The White House, with the swagger to boot, who knew what to say, how to say it and never stuttered in the face of supreme hate from White men in suits, who secured every blessed opportunity to remind the negro in the Oval Office that he would always be a NIGGER.
Those 8 years of celebration, amassed from the validating imagery of the first-ever Black family, were tortured by the weightiness of how White people in positions of authority can’t respectfully tolerate the evidence of how little they matter when Blackness takes center stage.
Notable Republicans loudly condemned what they couldn’t change, and erupted into fits of rage at the inability to remove the stain of Blackness from the House of Whiteness.
Republican Congressman Steve King was one of the foul-mouthed bigots, who didn’t hold back his contempt for the Black president and Black first lady. His numerous comments about migrants have been boldly consistent with the hateful rhetoric that Trump continues to trumpet without restraint.
But King literally built his career around his utter disdain for Black and Brown folks, and his practiced bigotry has flourished under the rule of thumb set forth by the media, that negates the treacherousness of hate speeches with the penmanship of giving White people the freedom to be abhorrently expressive.
This translates to refusing to assess situations and the people who create them in ways that appropriately capture what can’t be disputed by anyone with cells of common sense.
It means coming up with innovative descriptions that are manufactured to replace the enormity of hate-filled passages that should be classified in the category that best describes the delivered sentiment.
Exhibit B:
It means major news organizations like NBC, circulating a mandate that strongly warns staffers against referencing Steve King as the racist he is, because his public endorsement of White supremacy doesn’t qualify under the code of misconduct — according to the regulations of overpaid executives who can’t afford to alienate influential bigots — who supply rent and bonuses.
That’s why a conservative train wreck from Fox News was transferred to the mainstream territory of greed and cumbersome neutrality.
Megyn Kelly was the appetizing White woman, who became famous for sparring with candidate Trump, and admirably won the votes of White women, who voted for Trump, and fell in love with Kelly’s activeness of White feminism.
Kelly’s streak of cheering the brutalization of Black people, particularly Black women and their children, didn’t dissuade NBC from ignoring the dollar signs dangling with the promise of how a beloved bigot could sanitize her record of hate with the network cleanse of White women viewers.
The gamble didn’t pay off for the idiotic network, but it sure did work out beautifully for Megyn Kelly, who brilliantly staged her $69 million exit with a memorable segment on her already flailing morning show — that showcased her nostalgia for blackface for Halloween.
As we barely manage the present temperature of extremeness, that continues to permeate from the chants of an administration that has been engaged in acts of violence that should be categorized as crimes against humanity — the stakes are higher than they’ve ever been.
Yet, mainstream media chooses to play it safe and gentle, instead of rising to the occasion of energetic reporting that holds us all accountable — come what may.
We are dealt the blows of weirdly coined shit like “racially-tinged” and “racially-offensive,” as if we’ve all made the pact to give White people who hate Black and Brown people the benefit of the doubt, when they issue statements that contain proof of why they support the gassing of migrant women and children — and why they miss the days when festive lynchings of Black bodies was a regular sport.
Exhibit C:
Recently, we’ve had the displeasure of another viral video that depicts White teens donning #MAGA hats, gleefully mocking the poignant ceremony of an elderly Native American man, who was consumed by the elements of his primal call to arms.
There was the expected uproar on social media platforms, as the viewership increased beyond measure, and we were once again treated to another play by play that demonstrates how Making America Great will always result in the graphic display of Whiteness challenging the audacity of Blackness.
The video footage is not just offensive, it’s also heartbreakingly vile, as we watch the ceremonial depiction of beauty and the beast with the defeated realization that future generations will absolutely not fare any better.
Hours after major outlets collected the wealth of traffic from countless clicks and views, it was reported that there was in fact an extended video that contains additional content that could conflict with the well-packaged narrative of how White teenagers, parading about with hate-themed memorabilia could be absolved of their sins much to the chagrin of excitable naysayers.
The extended version of mayhem does reveal a lot more than what was initially provided, and the “group of Black men who identify as members of the Hebrew Israelites” didn’t help elevate the stoic spirit of the Indigenous Peoples Rally.
But how does that wipe away the indignities that the Native Americans suffered with the mob of White teens defiantly wearing the traditional weaponry of the White man, that represents the historical damnation of the Black and Brown population?
Nick Sandmann, who is the villainous character of this very “complex” saga has issued a statement, that adamantly disputes the earlier reports of how he rudely confronted Nathan Phillips, the Native tribal leader and war vet, who he claims “locked eyes” with him first.
Sandmann also alleges that he and his group of comrades were raising their voices to drown out the “inflammatory comments” from the rowdy Black men that had previously provoked and were continuing to taunt the White #MAGA-attired students.
But then we hear other witnesses provide alternative summations of the footage with confirmations that the White teens were chanting “Build the wall!” and that Nathan Phillips was incessantly mocked and ridiculed by the mob of White hecklers.
The tribal leader also explained how his path was blocked by “the most famous White boy in America,” which created an atmosphere of chilliness that he won’t ever forget.
“I was scared.” “I don’t like the word ‘hate.’ I don’t like even saying it, but it was hate unbridled. It was like a storm.”
The ongoing updates of this shit fest has been met with mixed reviews, as some viewers are annoyed by the rampant disloyalty of the #FakeNews media — and how it conveniently releases the version of the truth — that helps to validate the climate of hate with the promise of monetary rewards.
Others like CNN political analyst and White House correspondent April Ryan, prefer to play it safe by covering all the basis of wrongdoing with the delegation of blame to all who apply, and ending with the hopeful chime of desired unification through the channels of civility.
But we’re not dealing with the crux of the issue, and how this habitual snafu is giving the business of journalism a very bad name.
Why did CNN and other well-respected news organizations, hastily post videos that hadn’t been vetted?
What happened to the mandatory fact-checking process, that takes the time required to ensure that the maddening crowd has something to feel shitty about without the risk of conflicting material?
Why is the media obsessed with the “breaking news” of Black pain, and the illustration of bigotry from uploaded real time episodes, but when it comes to putting racists on blast with the ammunition of the R-word — all bets are off?
It seems that major outlets are completely sold on the idea of “posting first, and asking later.”
This is due to the army of attention that descends on footage that highlights the themes that get Black and Brown people killed.
But when it comes to erecting heds, that accompany the heinous hate crimes with the accurate descriptions, that explain the murderous rage of “Racist White Cops,” or the callousness of the “Racist White woman” who made a Black boy cry, or the “Racist Republican Congressman” who wants to save White supremacy from extinction — there’s a blatant resistance against making White people look bad — before the law improperly tries and acquits them.
America has a race problem. The media has a problem with reporting race and racists with unblemished thoroughness. And until we’re gifted with the headline that says:
Racist President Strikes Again, PBS White House Correspondent, Yamiche Alcindor Is Latest Victim
The media will remain in tragic allegiance to clicks, traffic numbers and the ratings that give White people the comfort to hate and harass Black and Brown people without the R-word bulldozing those agendas.
And that’s the shit we need to keep talking about. | https://nilegirl.medium.com/we-really-need-to-talk-about-why-the-media-likes-to-sanitize-bigots-and-bigotry-a548d44c8285 | ['Ezinne Ukoha'] | 2019-01-22 13:05:29.701000+00:00 | ['Media', 'Politics', 'Journalism', 'Racism', 'Social Media'] |
The Technologist’s Guide to The Circular Theory | Zero and one is X and X’.
Zero and one is X and X’. (Photo by Todd Diemer)
The technologist understands everything reduces, and, expands, to, zero, and-or, one. This makes his, and-or, her, life easier, when it comes to everything. Zero, and-or, one, is yin, and-or, yang, is circumference, and-or, diameter, literally (and figuratively).
Meaning, any combination of traits, X and Y, X and X, X and X’, duplication and negation, reduces and expands to, zero, and-or, one. From the technologist’s point of view, there is no difference between a zero, and-or, a one, because zero, and-or, one, articulate, and, must, conserve, a circle.
Therefore, there is a circular relationship between any pair of attributes, (zero and one is circumference and diameter). This means the circular relationship has control of all attributes. Any discipline (technology, especially).
Complementarity is the basis for identity because duplicity is the basis for a unit. This means zero and-or one is one and-or two. From the circle’s point of view, then, zero is, always, two (you have to have both circumference and diameter in order to have either).
This means the technologist has the flexibility to function, realistically, when it comes to understanding life. Expectations, and experience, share a circular relationship, meaning, the technologist understands we live a self-fulfilling prophesy. Half-the-time we are zero, the other half, we’re one.
Half-the-time we’re one. The other half we’re two. Where zero, one, and, two, are pi, diameter, and, circumference. In any order, combination. Meaning, no way around it, everything is 50–50.
Conservation of the circle is the core dynamic in nature. | https://medium.com/the-circular-theory/the-technologists-guide-to-the-circular-theory-682cabfeb9c9 | ['Ilexa Yardley'] | 2017-08-07 14:02:33.353000+00:00 | ['Circular Theory', 'Leadership', 'Life Lessons', 'Universal Relativity', 'Productivity'] |
Assessment Validation—Why We Miss Things Easily Found by Auditors | After an auditor has completed assessment validation and found a mountain of issues, you might wonder how you managed to miss them. You’re a competent trainer and assessor, so why does ASQA’s validation of your assessments differ so drastically from your own?
Let’s try a quick experiment to test your attention to detail. If you wear a watch, follow the instructions below.
Step 1
Place your hand over the face of your watch without looking at it. No peeking.
Step 2
Think about whether the watch has numbers all around the dial, or whether it has some numbers at 12, 3, 6, and 9, or no numbers at all. Think of every colour included on the watch, and any other notable details that it might have.
Step 3
Look at the watch to check what you remembered.
How did you go? Don’t be hard on yourself if you struggled. You look at your watch constantly throughout the day, but probably don’t pay much attention to the details, because your focus is narrowed to one thing: finding out the time.
The analogy is similar to what can happen in assessment—we focus on a desired outcome, develop an assessment tool to achieve the outcome, but don’t analyse the competency requirements for compliance. We’ve completed the task so many times, we’re blinded to the crucial details.
This automatic completion of tasks is what psychologists call unconscious competence—the final stage in the four stages of competence (below).
Unconscious competence usually saves us a ton of time. But when we develop an assessment tool while in unconscious competence mode, we slip into danger. As we mechanically map assessment items against units of competency, assuming that we’re working correctly because we’ve done it thousands of times, we blind ourselves to the intricacies of the unit. We fail to consider the details, and so we fail to write a compliant assessment. When it’s time for assessment validation to be completed by an auditor, they’ll find issues.
Creating an assessment that satisfies the required competencies is finicky. We can’t take anything for granted. When we’ve completed every question, and we’re ready to check that our assessments satisfy the competency requirements, we must adopt a manner of scrutiny, and follow this process for every question:
Identify the key steps you’d take when completing the question. Write them down. Identify the Performance Criteria (PC), Performance Evidence (PE), and Knowledge Evidence (KE) from the unit of competency. Map the PC, PE, and KE to your key steps.
If you followed this process for a question, did you manage to map all criteria and evidence to the key steps for the task? You’ve done well if so. If not, you’ve identified what’s missing, and can update the question to include them. You’ve temporarily shifted back to conscious competence, and as a result, the stringent compliance requirements that are tested in assessment validation have a much better chance of being ticked off.
Auditors can’t afford to work in unconscious competence mode. To be successful, they must be meticulous, or risk missing the nitty gritty that reveals non-compliance. The same stance must be taken by trainers when developing assessments. Automatic mode just won’t cut it. By shifting back to conscious competence, trainers will be able to create high quality, compliant assessment that satisfies auditors, and properly tests students.
Written by John Price from the Vet Gurus | https://medium.com/vetexpress/assessment-validation-why-we-miss-things-easily-found-by-auditors-2c4027cf9303 | [] | 2019-11-11 03:10:52.964000+00:00 | ['Vet Insights', 'Learning', 'Psychology', 'Education', 'Assessment'] |
When You Feel Like a Fraud — but You’re Really Not | Many years ago, in another professional life as a grain trader, I was at the top of my game. I wasn’t the envy of my peers, per se, but there were some who expressed admiration for my accomplishments, among them, my boss.
I liked that. It’s comforting and encouraging when you can be an inspiration of sorts to young leaders who aspire to go where you are. Yet, when I was alone with my thoughts, I felt like a fraud — someone playing the part of a leader. Secretly, I had a lot of doubts. Did I really know anything about my area of expertise? Maybe it was all a fluke. I worried: When will the jig be up? When will that infamous shoe drop, and I’m exposed for the fraud that I am?
If you’ve ever felt like this, you might be suffering from imposter syndrome, and you’re not alone. Some 1 in 3 U.S. employees, roughly 32 percent, or up to 70 percent of the population have battled imposter syndrome at some time or other.
But yes — you do deserve your success. For some, experience and career maturity help to cement the idea that. For others, it takes a bit of work. I had to do a bit of work, some self-reflection, but I eventually came to believe that I fit right in at the top.
What’s the problem?
Imposter syndrome, or imposter phenomenon, is a form of intellectual self-doubt. Suzanne Imes, Ph.D., and Pauline Rose Clance, Ph.D. first described the condition in 1978, and back then people figured it was a woman’s thing — typical, right? But that has since been disproved — it occurs in men and women.
Imposter syndrome occurs in high achievers — though not exclusively — who have a hard time believing they deserve their success. They feel like frauds and commonly contribute their success to good timing, coincidence, or even pure luck. Imposter syndrome often goes hand in hand with anxiety, and the person suffers in silence, waiting to be outed as a phony despite there being no real evidence to sustain this belief.
Experiencing imposter syndrome is quite common. Albert Einstein and Meryl Streep both had it. Einstein feared his work wasn’t worth all the fuss, and believe it or not, Meryl Streep didn’t think she could act. Ridiculous, right?
Luckily, there are things you can do to move past imposter syndrome and genuinely appreciate your success. More importantly, you can do things to ensure this common feeling doesn’t actively derail your career.
It’s no longer a man’s world.
I was once a young commodity trader, and there were few women in my field or company at my level. As you can imagine, my male peers had quite a bit of fun at my expense. It was also a highly stressful, complicated position.
As the Baltimore-based general manager, trader, and logistics guru for an ocean-going, vessel loading, major grain export facility, I managed a three-shift facility, traded grain, and transported it from the Midwest to the east coast. It was a logistical nightmare running eight, 100-car trains to our small facility, loading ships that would sail the world, all under time constraints.
I often had moments of sheer panic and I wondered, what am I doing? Do I even have a clue about this whole thing? I worked every day, around the clock, eventually “moving” into my office at the facility so as not to miss a thing. Then, even after the program proved highly successful, I still thought, “Well, anyone could have probably done it.”
Turns out, anyone could not have done it as well as I did it; I understood my job quite well. On the other hand, I didn’t have a clue that I was suffering from imposter syndrome.
You can move on from your angst.
All these years later, it’s still a relief to share that I moved past this, and so can you. However, first, you have to admit a few things. In her book, The Secret Thoughts of Successful Women: Why Capable People Suffer from Imposter Syndrome and How to Thrive in Spite of it, Dr. Valerie Young, who has spent decades studying imposter syndrome, identified five subgroups for it. They are:
1. The Perfectionist: Sets excessively high goals
2. The Superwoman/man: Works harder to measure up
3. The Natural Genius: Has high goals, works hard but wants success on the first try
4. The Soloist: Independent, can’t ask for help for fear of exposure
5. The Expert: Fears they will never know enough
Of course, these are extremely brief descriptors for Dr. Young’s subgroups, but do you see yourself in any of them? Identification is often the first step in determining if you are experiencing imposter syndrome.
I was the Superwoman, and my epiphany came when I was asked to be a mentor to a newly minted grain trader. As I taught him our trade, I realized just how much I actually knew. It was eye-opening for me — I really did know my stuff. I was pretty good at it! If you have the desire and the opportunity and think you might be suffering from imposter syndrome, become a mentor, it might help you more than them.
Face the truth. You’re great!
I left grain trading behind many years ago, and now I’m an international cultural consultant and etiquette expert. Yes, I’m an expert, and I coach people who, while not necessarily suffering from imposter syndrome, may not have adequate confidence. A good tip to either help you overcome imposter syndrome, to build your confidence, or both is to take a good look into your toolbox. Inside you will find all the tools that you need to wield confidently.
How are your communication skills? Do others understand what you say and what you mean? Are you present when you listen? Are you persuasive, creative, a good problem solver? Do you make things happen? If you’re reading this, I imagine you do. You are probably a high achiever, and you simply could not have gotten to where you are without good communication skills. Keep looking. I’m sure there are many tools of confidence in your toolbox. Be honest with yourself, and acknowledge your best features.
Another good tip is to talk to someone. Perhaps you have someone in your career whom you admire, who was helpful, perhaps a mentor. Reach out. Listen to what they have to say about you and about your career. It might be a pleasant conversation to hear how well you’ve done from a senior colleague you trust. However, if you can’t shake persistent feelings of self-doubt, it might be a good idea to contact a professional, a therapist, or psychologist. They can help you to work through them and process them appropriately.
As an etiquette expert, my final advice to you is my advice to everyone: Be respectful, be thoughtful, and always, be kind — especially to yourself. You work hard, and you deserve it. So, don’t hesitate to remind yourself that, “You’re great!”
Thank you for reading! | https://medium.com/middle-pause/when-you-feel-like-a-fraud-but-youre-really-not-8b15d1de65b2 | ['Heidi Dulebohn'] | 2020-07-14 16:07:19.510000+00:00 | ['Self-awareness', 'Management', 'Leadership', 'Gender', 'Self Improvement'] |
The Unreliable Narrator Perspective and Snopes | The Unreliable Narrator Perspective and Snopes
Fact-checkers are obviously right because the people saying that they aren’t are so obviously wrong about so many things
We’ve all experienced it. An odd claim will come over the transom at us, perhaps on Facebook, perhaps in the comment section of the article we really didn’t mean to spelunk into or perhaps at the coffee shop with that aged relative you are dutifully socializing with. We whip out our phones or tablets or laptops and check with Snopes, Politifact or Media Bias/Fact Check, one of the handful of sites we trust to be on top of the odd conspiracy theories or Presidential tweets (or both) that have sprung up since we last had time to trawl the newsfeeds.
Politifact Truth-o-Meter
And we’ll rapidly find out that our friend, acquaintance, relative or random stranger has fallen for another piece of nonsense drummed up by someone on 4chan or in Russia, and spread by the usual social and anti-social media suspects. So we say, “You know, Snopes checked this out, and found that it was about as credible as flat-earthers and anti-vaxxers.”
We’ll be happy that we’ve nipped a piece of disinformation in the bud. For a few seconds, or maybe even a minute or two. And then out it will come. “That site is just a biased liberal front. I read about a study one time that found they were all wrong.” Or maybe “I looked at one thing on them, found some inconsistencies, so obviously everything that they say is a lie.”
So you start to wonder. Maybe they’re right? Maybe all of these fact-checking sites we depend are all biased to reality being a liberal thing. But there’s a simple way of telling that these sites are gold standard, something I call the Unreliable Narrator Perspective (or UNP, since it looks more credible if it has an acronym).
In fiction, an unreliable narrator is a character who keeps being shown to be wrong in subtle or not so subtle ways yet has a narrative role in the work. The author has created someone who you are required to learn to disbelieve in order to understand the story. A character who lies early yet divulges information leading to someone else being under suspicion, yet is shown to be the actual perpetrator themself is a classic example.
Russell Crowe’s character in the 2001 film A Beautiful Mind, the biographical film about the mathematician John Nash was an unreliable narrator. Many of the film’s scenes are only revealed at the end to have happened entirely in his mind, not in reality. The mathematician suffered from paranoid schizophrenia, so this isn’t an unreasonable example actually.
How does this apply to fact checkers? Well, who is complaining about Politifact, Snopes and other major, trusted fact-checking organizations?
I think you’ll see the pattern here. The people claiming that Politifact and Snopes are not honest arbiters of truth have at best a tangential relationship to truth themselves. Perhaps it was their supply teacher in second grade. Maybe it ran a business in the same town as them. Maybe it’s a friend of a friend who they have ‘friended’ on Facebook and never deleted despite it’s annoying habit of contradicting them. Maybe it walked their dog that one time.
By the UNP measure, Politifact and Snopes are gold standard. They are only attacked by people who couldn’t pick truth out of a police lineup consisting of truth, Satan, the Easter Bunny and Don Corleone. They couldn’t find truth if it was the only thing in their pocket and they stuck their hand in to find it. They couldn’t understand truth if it were printed in crayons by their favorite grandchild who then read it carefully to them with lots of cute pauses. They couldn’t find truth with Google if Google first deleted everything false in its database.
The people claiming that Politifact and Snopes are unreliable wouldn’t know empirical reality if it hit them with a two-by-four engraved with “Trump is a lying liar who lies” repeatedly. If truth threw itself under the wheels of their moving car, they’d somehow manage to miss it. If truth were a shooting target two feet from the barrel of their oversized guns, they’d manage to shoot themselves in the foot before hitting it.
So they aren’t accurate judges of Snopes and Politifact. It’s surprising that they can tie their shoelaces, never mind operate a computer. | https://medium.com/the-future-is-electric/the-unreliable-narrator-perspective-and-snopes-619143fadfef | ['Michael Barnard'] | 2020-05-22 16:27:31.758000+00:00 | ['Fiction', 'Fact Checking', 'Politics', 'Journalism', 'Reality'] |
The Martyrdom of Aurore Gagnon | The Martyrdom of Aurore Gagnon
In 1920, ten year old Aurore Gagnon, died after suffering from years of torture in the hands of her parents
Restoration of this photo, circa 1919, created by the author. For the first time in a century, we have an idea of what Aurore may have looked like in life.
Fortierville was a sparsely populated village in 1920, located just south of the St. Lawrence River, 60 miles from Quebec City. It was the type of place where everyone knew everybody. Though neighbors minded their own business, there were no secrets. The citizens were French-speaking Roman Catholics, and the Gagnon family was no exception.
Marie-Aurore-Lucienne Gagnon was the second child of farmer Télesphore Gagnon and his first wife, Marie-Anne Evelyn Caron. Those who knew the girl called her Aurore. Aurore was born May 31, 1909, in Sainte-Philomène-de-Fortierville, Quebec, Canada. Her sister, Marie-Jeanne, was just a year older. Télesphore made his living as a logger, farmer, and blacksmith.
A new baby arrived every few years in the Gagnon home. After Aurore’s arrival, three children followed; little Marie-Lucina in 1910, Georges-Étienne in 1911, and Joseph-Télesphore in 1915.
For a moment, life was almost idyllic for the family. The cemeteries were full of babies who succumbed to various diseases, and people had little or no access to health care. But the Gagnon children were healthy and robust. Télesphore and Marie-Anne had every reason to place all of their hope and happiness on their lively children.
A family reunion at the Gagnon home in Fortierville, 1915, public domain image
Bereavement
Shortly after baby Joseph’s birth, Marie-Anne Caron developed a terrible cough. Eventually, a doctor told Télesphore that his young wife had tuberculosis and would not survive. She needed a level of care that only a hospital could provide, so Marie-Anne Caron reluctantly left her children for Beauport Asylum in Quebec City.
In addition to his wife’s illness, Télesphore lost a close family member. On January 20, 1915, Télesphore’s cousin, Napoleon Gagnon, died suddenly. Napoleon left behind a widow named Marie-Anne Houde and two children, Gerard, Aurore’s age, and Georges, born in 1912. Napoleon and his wife previously buried two infant daughters. His death left Marie-Anne Houde alone with two young sons.
Télesphore found himself unable to tend to his small children and ailing wife. He sent the girls to live in a convent, but they soon joined their brothers at their maternal grandparents’ home. Télesphore continued to work hard and send money to support his children, and Marie-Anne Houde moved in to nurse Mrs. Gagnon and keep the house. With the assurance of her help, Télesphore welcomed his children home.
Marie-Anne Caron with Télesphore Gagnon, public domain image
Tragedy
After Marie-Anne Houde moved in, a series of tragedies befell the family. First, two-year-old Joseph smothered in his sleep beneath a mattress. The coroner ruled his death accidental. Next, Five-year-old Lucina — who lived with her grandparents — also died. Sadly, Marie-Anne Caron lost her battle with tuberculosis on January 23, 1918. She was only 32. The death of her siblings and mother was only the start of Aurore’s suffering.
Stepmother
Télesphore and Marie-Anne Houde wasted no time. They married on February 1, just over one week after his first wife’s death. The people of Fortierville thought the hasty marriage was in poor taste. However, it was no crime, and Télesphore did need help with the children since he worked well into the evening. Still, the union caused neighbors to pay closer attention to the Gagnon household. The attention was by no measure enough to save Aurore’s life.
During 1918, the Gagnon and Houde families settled in and tried to create a new state of normal. The blended family attended mass at Sainte-Philomène church and fulfilled their obligations as parishioners.
In May of that year, the Gagnon children attended school for just ten days. Aurore’s teacher, Yvonne St. Onge, described Aurore as quiet, intelligent, and obedient — a sentiment not shared by Marie-Anne Houde.
Marie-Anne Houde, ca 1920, public domain image
Rumors
After the wedding, Aurore began acquiring injuries. These injuries increased in severity and frequency. Initially, no one knew that Aurore’s new stepmother beat her mercilessly, but that truth would soon reveal itself.
Ultimately, Marie-Anne decided to keep her stepdaughter inside. She did not fear criminal punishment, but gossip; she and Télesphore often spoke about beating the girl until she “gushed blood” or her legs collapsed beneath her. In such rants, they blamed Aurore.
Marie-Anne convinced her husband that Aurore deserved the beatings. Télesphore believed Marie-Anne acted out of love and in his daughter’s best interest. The children’s day-to-day discipline was the responsibility of his wife, and his sympathies lay with her.
“The child does her business in her clothing while I work my fingers to the bone!” Marie-Anne complained. She also accused her very young, innocent stepdaughter of seductive behavior toward her sons and convinced them that Aurore was dirty and sinful. To conceal her responsibility for Aurore’s many wounds, she sometimes asked the children to lie.
Such was the case in July of 1919, when Aurore’s caretakers brought her to the home of Oreus Mailhot, Fortierville’s grocer and Justice of the Peace. Marie-Anne and Télesphore were upfront about beating Aurore that day and blamed it on the 10-year-old’s “vicious character.” But they weren’t there to discuss their manner of discipline. Aurore had ulcerated, infected wounds on her feet. The couple insisted neighborhood boys caused the lacerations by smashing her feet with rocks.
“Come,” Oreus said as he gently ushered Aurore away from her parents, “tell me about it.” Aurore could hardly walk but followed the man to a little room just out of earshot. As Aurore limped away, Marie-Anne offered the girl a stern warning. “Be careful what you say!”
Suspicious and concerned, Oreus asked Aurore to be honest and tell him what happened. Before he finished the sentence, Aurore fervently insisted that a group of boys threw rocks at her feet and legs and stabbed her side with a stick. Oreus tried again to let her know she was safe to tell him the truth, but Aurore continued to repeat the above narrative. Not knowing what to do, Oreus brought the child back to her waiting parents.
Still, Oreus couldn’t shake the feeling that Marie-Anne and Télesphore caused the injuries. Like other witnesses, he didn’t feel it wasn’t his place to interfere. Oreus advised the couple to seek medical help for Aurore and consult the parish priest, Father Ferdinand Massé. He also took his concerns to police in Quebec City, who wouldn’t do a thing unless Oreus filed a personal complaint against the Gagnons, which he was unwilling to do. | https://medium.com/lessons-from-history/the-martyrdom-of-aurore-gagnon-e3129827845b | ['Heather Monroe'] | 2020-12-17 21:21:50.842000+00:00 | ['Nonfiction', 'Biography', 'History', 'Child Abuse', 'True Crime'] |
Architecture of Big Data Systems | In the previous post we understood the need for Big Data systems. Let's take it further into the architecture of Big Data systems and see which components are required for building an efficient Big Data system.
A crucial part of any data-intensive application is its data model. But prior to diving into the data model for Big Data, you need to be aware of several properties of data, which are:
1. Rawness
2. Immutability
3. Eternity
Rawness
Rawness can be defined as the granularity at which data is present.
When data is in its raw form, far more information and insights can be drawn from it than when it is pre-structured or summed up.
To understand rawness, let’s take an example where you have the sales transaction data of a store. Lots of data points are being gathered, like order_id, customer details, product details, discounts, etc. Now you summarize these records by location to create another dataset, so, for example, you get 5 records (one per location) by summing up the sales figures from the 100 original records. The information you can extract from the summarized dataset is far less than what you can extract from the original one. Thus, many more queries can be answered from the raw data.
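This loss of information is easy to demonstrate. Below is a minimal sketch (the records and field names are made up for illustration): once the sales are summed by location, product-level questions can no longer be answered, while the raw records still support them.

```python
# A hypothetical store's raw sales records: every data point is kept.
raw_sales = [
    {"order_id": 1, "location": "Pune",   "product": "soap", "amount": 20},
    {"order_id": 2, "location": "Pune",   "product": "rice", "amount": 150},
    {"order_id": 3, "location": "Mumbai", "product": "soap", "amount": 40},
]

# Summarizing by location collapses the records into one number per location.
summary = {}
for row in raw_sales:
    summary[row["location"]] = summary.get(row["location"], 0) + row["amount"]

# The raw data can still answer product-level questions...
soap_revenue = sum(r["amount"] for r in raw_sales if r["product"] == "soap")
# ...but the summarized dataset cannot: the product column no longer exists.
```

The summarized dataset here can only answer "how much did each location sell?", while the raw one can still answer any question about products, customers, or individual orders.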
Immutability
Immutability is the property of a data store where you never delete or update any piece of data; a new record is added for every new piece of information.
With an immutable data system, the original data remains untouched, which helps in recovering the system from failure.
Since RDBMSs are widely used in industry, where updating a record in a database is very common, immutability can be difficult to digest. Let’s take an example where you are storing a customer’s address, and as the person moves to a new location and the address changes, you update the record in the database. This leads to a permanent loss of the person’s previous address, which, as an analyst, you could have used to derive insights. In Big Data systems, immutability helps you retain the original data and add new records with timestamps.
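Here is a small append-only sketch of that address example (the names, timestamps, and helper functions are hypothetical): a move adds a new timestamped record instead of overwriting the old one, so the history stays queryable.

```python
# Append-only store: updates are new records, never overwrites.
address_log = []

def record_address(person, address, timestamp):
    address_log.append({"person": person, "address": address, "ts": timestamp})

def current_address(person):
    # The record with the latest timestamp reflects the current state.
    rows = [r for r in address_log if r["person"] == person]
    return max(rows, key=lambda r: r["ts"])["address"]

record_address("Ravi", "Pune", timestamp=1)
record_address("Ravi", "Bangalore", timestamp=2)  # a new fact, not an update
```

After the second call, `current_address("Ravi")` returns the new city, yet the old address is still in the log for any analyst who wants it.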
Eternity
Eternity follows from immutability: if the data is never tampered with and no updates or deletions are allowed, that data is said to be eternal.
Data is always pure and true.
A timestamp is attached to every event/fact when it is stored in the database, and the latest timestamp tells the current state of the data.
By enforcing these three properties in the Big Data world, we can achieve more robust systems and gain powerful capabilities. Keeping these properties in mind, let’s have a look at fact-based models in Big Data.
Fact Based Model in Big Data
A fact is the fundamental unit of data. Facts are atomic and timestamped: a fact is the smallest unit of data that cannot be broken down further, and to make facts unique, a timestamp is attached to each one.
Examples of facts:
I am a blogger
I live in Punjab
I like Big Data and Stream Processing
In the fact-based model, we store the data as atomic facts.
Facts are immutable and unique.
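A minimal sketch of the facts above as timestamped tuples (the entity IDs and timestamps are made up): storing them in a set keeps duplicates out by construction, and the tuples themselves are immutable.

```python
# Each fact is an atomic (entity, attribute, value) triple plus a timestamp.
facts = {
    ("user-1", "occupation", "blogger", 100),
    ("user-1", "lives_in", "Punjab", 100),
    ("user-1", "likes", "Big Data", 100),
    ("user-1", "likes", "Stream Processing", 120),
}

facts.add(("user-1", "lives_in", "Punjab", 100))  # no effect: already present

# Queries become simple filters over the facts.
interests = {value for (entity, attr, value, ts) in facts if attr == "likes"}
```

Note how a new piece of information ("likes Stream Processing") arrived as a new fact with a later timestamp rather than as an edit to an existing record.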
Why a fact-based model?

You can query the data at any point in time.
The data is tolerant of human error.
You can store data in both structured and unstructured formats.
Facts within a fact-based model capture single pieces of information, but on their own they do not convey the relationships between different types of entities. The solution to this is graph schemas.
Graph Schema:
Graphs that capture the structure of a dataset stored using the fact-based model are termed graph schemas. There are three core components of a graph schema: nodes, edges, and properties.
Graph Schema for Fact Based Model of Big Data
Nodes are the entities in the system.
Edges are the relationships between nodes.
Properties are information about the entities.
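As a rough sketch (the node names and edge types here are invented), these three components can be represented with plain data structures:

```python
# A toy graph schema: nodes are entities, edges are relationships,
# and properties hold information about the entities.
nodes = {"user-1": "person", "punjab": "place"}
edges = [("user-1", "lives_in", "punjab")]
properties = {"user-1": {"occupation": "blogger"}}

def neighbors(node):
    # Entities this node is related to, via any edge type.
    return [dst for (src, rel, dst) in edges if src == node]
```

With this shape, traversals like "where does user-1 live?" are just lookups over the edge list, and the schema itself documents which types of facts the dataset contains.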
Okay, so by now you should have the idea that information is stored as facts, and that a graph schema describes the types of facts contained in the dataset. But you are still not aware of the format you will use for storing facts. There are several options available, such as JSON, but there are problems in using it, and a serialized binary format like Avro or Parquet would be a good option. Check this article to learn more about data formats.
Now that we have a basic understanding of Big Data systems, let’s dive a bit deeper into Big Data system architecture.
Generalized Big Data Architecture
Big data applications generally require several different kinds of workloads, such as:
Batch Processing of data at rest
Real Time Processing of data in motion
Interactive exploration of Big Data
Predictive Analysis and Machine Learning
Big Data systems are designed in such a way that they can handle the ingestion, processing, and analysis of data that is too large or complex for traditional database systems.

Most big data architectures include some or all of the components shown in the figure:
Generalized Big Data Architecture
Data Sources: all big data solutions start with one or more data sources, such as databases or IoT sensors.
Data Storage: data for batch processing is generally stored in distributed systems that can store high volumes of large files.
Batch Processing: used to process huge amounts of data; these jobs usually involve reading data from files, processing it, and writing the output to files.
Real-time Message Ingestion: used to capture and store streams of data in real time.
Stream Processing: processing the data in real time and sending the output to a sink.
Analytical Data Store: the data storage required for and used by analytics and reporting tools.
Orchestration: used to facilitate repeated data processing operations, moving data between multiple sources and sinks, etc.
These are the various components of a Big Data system. Now the question is:
When to use this style of architecture?
You should consider using this style of architecture when you need to:

1. Store and process data in volumes too large for traditional databases.
2. Transform unstructured data for analysis and reporting.
3. Capture, process, and analyze unbounded streams of data in real time.
Advantages of Big Data Architecture
A bunch of open-source and mature technology options are available.
High performance and throughput through parallelism.
Scale-out options are supported by default, making these systems highly scalable.
Things to keep in mind
Although it all seems alluring to use Big Data systems, there are several things to take care of whenever you are deciding to use one, such as:
Complexity of these solutions may increase in some cases.
Skillset in big data is very important for the team implementing it.
Security is also a concern: since all data goes into the data lake, it’s important to grant the correct access rights.
That’s all about the different components of a Big Data system. The most important of them are batch processing and stream processing. Check this post to understand the differences between the two.
The article was originally published at https://msbawa.com/architecture-of-big-data-systems/
How do you get to Carnegie Hall?

There’s a saying that’s generally attributed to a golfer, though there seems to be a debate as to its origins, or even to which golfer said it. In one version of the anecdote, Gary Player was accused of being lucky after holing three bunker shots in a row, to which he replied:
“Well, the harder I practice the luckier I get”.
Wise words indeed! But I have never found it to be simply the case that a larger quantity of practice equates to better performance. For me, practice is a process filled with spikes and dips; barriers, plateaus and breakthroughs. And sometimes those breakthroughs can completely change the complexion of the thing you are practising. In that case, it helps to step back from time to time and re-evaluate to make sure you’re focusing on the right things. My own example of this pertains to the game of Sudoku, which I began playing pretty regularly about three months ago.
The Game
Sudoku is conceptually very simple. For the uninitiated, it consists of a 9x9 grid of cells and the object is to fill each row, column and 3x3 sub-grid with the digits from 1 to 9, each of which by necessity can only occur once in that row, column and sub-grid. The level of difficulty is determined by how many and which cells are already filled in. The idea, and one of its main appeals, is that you can find the answer by logic alone — in other words, you don’t try a number and continue until you get stuck, and then go back (though admittedly, I have in the past been forced to resort to that). Also, for any puzzle there is one, and only one, correct solution.
Up until recently I would only play Sudoku very occasionally, mostly in situations of alleviating boredom, like long-haul flights. What started the most recent encounter was when I was visiting a good friend of mine, and he had a particularly horrible specimen partially completed. He and I sat in his living room, with a movie playing in the background while both of us stared piercingly and unwaveringly at his laptop, willing it to present an epiphany that would lead to the solution, but it never came. Every so often, one of us would think we had cracked it, but as soon as we tried to articulate the logic and explain the case, it fell apart. The challenge of it was both infuriating and irresistible. I was hooked, and shortly afterwards I installed the Sudoku Free app, so I could engage the torture at my convenience.
The leaderboard
After being initially baited by the challenge, I played for the enjoyment, and to pass the time. The hook that kept me coming back over a longer period was incredibly simple: a leaderboard of my own top ten times, which would appear automatically as soon as the puzzle was completed. This naturally led to setting simple and achievable time-based goals like: get all the top 10 under 5:00; get the top 3 under 4:30; get the top time under 4:00. The constant progression made for quite an addictive experience, and I would say that this definitely caused me to play more than I would have otherwise.
The easy versions of the game can be solved using a couple of very simple techniques that all involve looking at the numbers and using a process of elimination to find the cell. The logic runs along the lines of: “In this sub-grid, 9 must be on the bottom row because the top and middle rows already have a 9 in the sub-grids on either side; and 9 must be in the right-hand column because the other two columns on the bottom row are already filled in; therefore 9 goes in the bottom right cell”.
Game Changers
For anything beyond the easy puzzles, the opportunity to use the logic described above is rare. To progress to the stage of being able to tackle them, we need to learn and develop some more advanced techniques.
Pencil marks are used to keep track of what the candidate values are for each square. You can reason about these pencil marks independently of the large numbers. Also note that when the number 9 is selected, all squares that have 9 as a pencil mark are highlighted as well. This allows reasoning based on the visual pattern independent even of the pencil marks themselves.
The basis for most of these is the use of “pencil marks”, an allusion to the pen-and-paper version of the game. How it works is that, for each cell, we mark all the possible values for that cell in a small size with a pencil, to keep track of them. The game then becomes about reasoning away the pencil marks in each cell until one remains, which must be the correct number. This causes a natural division of playing the game into two stages: the “Add” stage, where we fill in the pencil marks; and the “Remove” stage, where we use reason to whittle down the pencil marks enough that we can fill in the big numbers.
The next game changer came as a response to hitting a plateau in the existing technique. I had got to a point where my entire top ten were around the six or seven minute mark, but it was getting harder and harder to get on the board. So I decided to try a new technique. The Sudoku Free app gives a nice visual cue where, when a number is selected, it will highlight all cells that have a pencil mark with that number. Because of that it’s possible during the “Remove” stage to make some inferences on that basis without looking at the pencil marks at all. There was a period, while I was trying to develop the technique, when my times and consistency disimproved dramatically, leading to hours of frustration, and I questioned whether it was ever going to work. Eventually it did, and within about an hour I beat the entire top ten by over a minute.
At this point, it’s worth remembering the goal that is driving and motivating my continued playing of the game — and that is improving the condition of the leaderboard. With that in mind it’s obvious that, if at some point in the game I know that the time isn’t going to make the leaderboard, then there’s no sense in continuing. Even within the same level of difficulty, there are some games that will just “fall out” nicely based on the techniques I use, but others will take more work. Inevitably it’s the former ones that will end up on the leaderboard, and the latter won’t. I found that I could often make that determination pretty soon after the “Add” stage. So the new game is a very different type of numbers game, where improvement comes by quickly abandoning games that aren’t going to turn out in a good time, and instead exposing myself to a higher number of games so that I’ll get through more of the ones that will just “fall out”.
Now this really is a seismic shift in the very essence of the game. To me, this reality makes the game a bit less “pure”. In some ways, I regard it almost like cheating, even though no rules are being broken. I think what it comes down to is that the personal attributes that are tested at this level are totally different to those when I started playing. Back then, each game was a test of logic and reasoning; now it’s about persistence, concentration, and physical considerations (Yes, I’m serious! These days, I need at least a few games to warm up my fingers before I can get anywhere near the leaderboard, and there’s no point in playing at all in a cold room!). It’s effectively a performance, one that requires getting psyched up for in order to smoothly complete the puzzle without physical or mental stuttering. That is fine in and of itself, but I guess you could say it’s not what I signed up for.
Wait!
So the question arises: why I am still playing? Well, the truth is that I play a lot less often now than I used to. I think the reason is that the reward of getting on the leaderboard these days is inadequate compensation for the increasing effort. I believe it’s certainly true that the more you are made to work for something, the more you will value it, and there’s no doubt that when I do get a time on the leaderboard now, the satisfaction is large. But it takes such a long time of playing to get there now that it’s not worth it from an enjoyment perspective. The game was originally meant to be fun, but instead it is becoming a bit of a chore. In truth, the only thing keeping me at it now is a small amount of habit and a larger amount of stubbornness. So where do I go from here? The way I see it, I have a few options:
1. Go pro
Yes, there is such a thing! I could decide to take this seriously and dedicate more and more time to practice, learning and developing new techniques…somehow, I don’t think I have the stomach for that!
2. Stop playing
If the game isn’t providing enough enjoyment and instead is getting more and more frustrating, then deleting the app and not playing again is an easy way to solve that problem.
3. Revisit the goal of the game
I believe what led to this issue in the first place is that the goal of getting on the leaderboard is too simplistic. It is a cold metric that doesn’t take account of the fact that the game should be fun above all things, certainly when it’s being played as a pastime. Playing without regard to the time releases the burden of having to hurry, and particularly awkward puzzles can be attacked instead of being abandoned.
And this is the crucial point, in my opinion. By allowing myself to be hooked by the cheap thrill of seeing my time on the board, I lost the part of the game that gave me the most value…the satisfaction of getting to the bottom of a tricky puzzle.
The lesson I take from this is the importance of setting aside time every so often to reflect and to identify the motivations for persisting at a particular practice. Is it being done for the right reasons, or is judgement being impaired by compulsion, or some goal set in the past that may be misguided or no longer relevant? Are you afraid to turn away from something that’s not doing you any good because of the amount of time and effort you’ve already invested in it? It’s easy to be bound by past decisions and old habits.
The same principle pervades at work too. On an individual level, I occasionally get an urge from somewhere to revamp my development practices. On a higher level as a development team, we have on one or two occasions in the past got stuck in situations that caused us to be ineffective for a time, like a feature that didn’t quite work as we originally intended. It is tempting to blindly stick to the original plan in the hope that it will come out as intended. But the problem can manifest as a general malaise, where everyone in the team loses some of the lust for the task. It has taken some brave and dramatic decisions to identify what’s going on and shake things up, even abandoning a feature we might have been fond of at the outset. But every single time, acting has turned out to be the right thing to do, and it clears the way for the next stage of progression. I think we have been good at that in the past, and always try to do it better.
When advances come at a rapid rate, it’s easy to stay engaged and interested. But ultimately, continued improvement comes not only from throwing more time at a problem, but from practising right as well as practising hard. And in my experience, a waning enthusiasm is a good indicator that it’s time to step back and make some changes. Until then, keep practising!
Thanks for reading. If you’ve enjoyed this post, please recommend and share it! I would love to hear your thoughts so please leave me a comment in the box below.
Supercharge your website’s page load time with color

No, color alone can’t actually speed up your website. That’d be silly. But the right color, value and chroma can elicit the perception of a faster loading website.
We know the brain functions under a limited processing model. It’s a giant guessing machine tuned over the millennia to quickly process lots of stimuli and “guess” based on the person’s experience and situational expectations to form actionable perceptions.
Since the brain is for guessing, it’s fairly easy to trick it. As user experience designers, we have to ask ourselves: can we trick the brain to advance our UX agendas? A study, Waiting for the Web: How Screen Color Affects Time Perception, published by the Journal of Marketing Research, says you can.
These researchers found that designers can adjust hue, value and chroma to induce more or less relaxed feeling states, and more relaxed feeling states lead to greater perceived page load quickness. The researchers concluded that by using soothing color trickery, a website could be perceived to run up to 53% faster.
It turns out, we can trick users to perceive interfaces loading faster simply by using blues, higher-value, and low chroma. What’s more, users that perceive faster loading websites are more likely to recommend the website to their friends. Wow. Just, wow.
A word of caution: sometimes, users “need” the website to take a second or two to process information so they “believe” their input (actions) had the intended outcome (feedback). The use of blue, higher value, and low chroma “speeds up” your website, but be careful not to lose your users along the way… throttle with good feedback and perhaps a few microinteractions.
Reference:
Gerald J. Gorn, Amitava Chattopadhyay, Jaideep Sengupta, & Shashank Tripathi. (2004). Waiting for the Web: How Screen Color Affects Time Perception. Journal of Marketing Research, 41(2), 215.
Build Trust First, Take Action Later

I work in one of Madagascar’s largest remaining blocks of intact tropical forest. Located in the northeastern part of the island, Makira Natural Park boasts unique and beautiful wildlife like lemurs that are found only on the huge island. Other unusual species include the fossa, a short-legged puma-like animal, but smaller and in fact related to the mongoose; and spiny little tenrecs that look like a mixture of hedgehog, shrew, and opossum.
Makira is also home to thousands of families struggling to make a decent and dependable living in the face of a changing climate and increasing demographic pressure. The Betsimisaraka and Tsimihety people who live in and around Makira want to meet their basic needs, provide a nutritionally balanced diet for their families, send their kids to school, access health care when they need it, and protect the forest that defines their cultural identity and sense of self.
“Makira is home to thousands of families struggling to make a decent and dependable living in the face of a changing climate and increasing demographic pressure.”
Within and around this protected area, my organization, WCS (Wildlife Conservation Society), seeks to conserve the forest and its wildlife in ways that respect and protect the rights of people living with — and at times eating — the region’s amazing, endemic, and endangered species. What sets WCS apart from other conservation and development NGOs is its long-term commitment to the places where it works and the people who live there.
My team and I recently traveled for six days to Tsarabajina, a village on Makira’s western edge. We are both excited and nervous; building trusting relationships takes time — sometimes it is three steps forward and two back. Patience is everything.
Field visit during poultry training. Photo credit: ©Morgane Cournarie/WCS
Following local custom, we greeted the head of the community and brought news from the neighboring villages, before explaining the purpose of our visit. He listened with his eyes closed and did not speak for a long time. He then called everyone in the community to gather in the shadow of the village tree.
The Tangalamena — a wise and highly respected old person from the village — stood up and declared in a hoarse voice: “First of all, we would like to thank you for coming to visit Tsarabajina. Eloi, your park agent, has informed us of your visit and interest in our diet. All of us eat rice, and often little else. So for us, the occasional lemur hunted with a slingshot is a welcomed meat for its fat and taste.”
Rivo, our guide, emphasized our role, observing, “We care for your food security, your cultural identity, and the wildlife that lives in your forest and in Makira. For us, these three things are more than linked together. They are interdependent. We would like to work with you to secure all three now and for generations to come.”
Tsarabajina landscape, Madagascar. Photo credit: ©Morgane Cournarie/WCS
After long discussions that continued until after the sun set, the community asked that we help them to learn better ways of raising their chickens and train community para-veterinarians to vaccinate the village hens against Newcastle disease, which kills 90 percent of birds in some years. The community in turn agreed to stop their rare but unsustainable hunting of lemur and fossa if poultry production increased with our technical assistance.
“I saw more clearly than ever that effective conservation needs a multi-sectoral approach that respects people’s rights and livelihoods, and seeks common solutions to shared problems.”
After a quick “shower” in the river by the light of what seemed an impossibly bright moon, our team gathered in the single-roomed house of our park agent. Everyone was talking about whether we really could help the community increase the sustainable production of chicken. Some asked what would happen if we failed.
It had taken months of active engagement with the community to get this far. Would people stop trusting us and begin hunting lemurs again? Though the team was confident that the improved poultry production system would work, it was good to see them talking through what might happen and how they would respond.
Lemur in Nosy Mangabe, Madagascar. Photo credit: ©David Mansell-Moullin/FAO
Rivo, looking tired after a very long day, explained that increasing poultry production is just one step among others. It is unrealistic to expect that the production of chickens will halt unsustainable hunting. It is the local families who are shouldering most of the risk by investing their time and resources in poultry production. Rivo smiled, adding, “We cannot put all our eggs in one basket.”
We groaned at his joke but agreed that while we must raise awareness of the benefits of eating chicken, we must also find ways of helping families not interested in raising poultry. And we must support community rangers patrolling the forest to prevent outsiders from stealing their natural resources.
In my tent, I marveled that families might trust us to help them become more food secure and that they might protect lemurs in return. The team hoped for the best and planned for the worst. I saw more clearly than ever that effective conservation needs a multi-sectoral approach that respects people’s rights and livelihoods, and seeks common solutions to shared problems. None of that can happen without trust — the keystone to effective and durable conservation partnerships.
Morgane Cournarie is the Site Coordinator for the Sustainable Wildlife Management Programme in Madagascar at WCS (Wildlife Conservation Society).
Conducting Bayesian Inference in Python using PyMC3

Solving the Coin Problem using PyMC3
The moment you have waited for. First, install PyMC3 via a simple
pip install pymc3 .
Inside the programming environment of your choice, prepare the imports and the coin toss data.
import pymc3 as pm

tosses = [
1, 1, 0, 0, 0, 1, 1, 1, 1, 1,
0, 1, 0, 1, 0, 1, 0, 1, 1, 0,
1, 0, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 0, 0, 0, 1, 1,
0, 1, 0, 1, 1, 1, 0, 0, 1, 0,
0, 1, 1, 0, 1, 1, 1, 0, 0, 0,
1, 0, 1, 0, 0, 0, 0, 0, 0, 1,
1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
0, 1, 1, 1, 0, 0, 1, 0, 1, 1,
1, 1, 1, 0, 0, 0, 1, 0, 1, 0
]
So far, so good. The actual model is even shorter than this piece of code, so I will just present it to you and explain it afterward.
with pm.Model() as model:
# define the prior
theta = pm.Beta('theta', 2, 2)
# define the likelihood
data = pm.Bernoulli('data', theta, observed=tosses)
# get the samples
trace = pm.sample()
Basically, that’s it, the entire Bayesian inference. Probably you expected more, but this speaks even more for the awesomeness of PyMC3. Don’t be fooled, however: Under the hood, a lot of processes are running, more than I can explain.
Anyway, we end up with the variable trace, which contains 500 * (number of cores on your machine) samples, 2000 for me. The number of cores is involved because PyMC3 generates samples in parallel, without us having to specify anything. Reasonable defaults, I love it!
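Once you have the trace, a typical next step is to summarize the posterior samples. Here is a rough, self-contained sketch; the numbers below are made-up stand-ins for trace['theta'] so the snippet runs without PyMC3:

```python
import statistics

# Stand-in for trace["theta"]: in a real run these would be the ~2,000
# posterior samples PyMC3 returned (these values are invented).
theta_samples = [0.48, 0.55, 0.51, 0.60, 0.47, 0.53, 0.49, 0.57]

posterior_mean = statistics.mean(theta_samples)
posterior_sd = statistics.stdev(theta_samples)

# A crude interval from the empirical extremes of the samples:
lo, hi = min(theta_samples), max(theta_samples)
```

In practice you would not do this by hand: PyMC3’s built-in pm.summary(trace) reports the mean, standard deviation, and credible intervals for you.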
Understanding the Code
If you followed this and my older articles that I mentioned in the beginning closely, the code should make sense to you.
The only weird part should be the context manager in the first line, with pm.Model() as model: . It’s just a programmatic design choice that the PyMC3 people have made. You open up a model (like you open a file in plain Python) and do things inside this context. In our case, we define distributions and sample.
We then start defining our prior θ~Beta(2, 2), which in PyMC3 language is
theta = pm.Beta('theta', 2, 2)
PyMC3 distributions always want a name, that’s always the first parameter you have to specify. Usually, I just use the variable name again. Then the two parameters for the Beta distribution follow, in our case a=2, b=2. Done with the prior! | https://towardsdatascience.com/conducting-bayesian-inference-in-python-using-pymc3-d407f8d934a5 | ['Dr. Robert Kübler'] | 2020-12-23 15:59:09.120000+00:00 | ['Bayesian Statistics', 'Bayesian Machine Learning', 'Pymc3', 'Python', 'Editors Pick'] |
How to Keep Yourself Amused on Zoom
The lessons I’ve learned in 2020 as a university professor
Photo: Sam Wasson/Stringer/Getty Images
Zoom has taken over our lives and will continue to rule for the foreseeable future. But if those meetings drag on, what games can you play to keep yourself amused? After nine months of this, I have some suggestions.
Where will they end up?
You are in a meeting and suddenly someone gets up and starts walking. They are holding their laptop and so you can see right up their nose at the ceiling going by. A fun little game is to predict 1) where they are going and 2) how long they will take to get there. It really raises the whole energy level of the meeting.
Who’s paying attention?
The nature of Zoom meetings is that you can only see someone’s headshot. Unlike in-person meetings, it is therefore not immediately apparent whether attendees are paying attention or doing something else.
Can you tell who is really paying attention? The signs I use are as follows:
Do they have too much attention? Anyone staring intently at the screen is not actually paying attention. Your average Zoom speaker is not that interesting, so if a person is rapt, it is likely they are doing something else.
Do their emotions match the situation? Someone is saying, “and if we are lucky we may only lose $8 million this quarter” and there is someone there smiling. This happens. They aren’t paying attention.
If you notice any of these “tells,” the appropriate response, and I know this from being a university professor, is to then ask the most certainly distracted person what they think and/or if they are happy to take this unusually complicated and arduous project on. Hijinks ensue.
Who’s messaging who?
This is a good one. When we met in person, if someone said something stupid or typical, you could glance at a co-worker and roll your eyes — “Can you believe this guy?” For Zoom, that is not really possible. So people resort to the private Zoom chat. Now, this is already a dangerous activity because Zoom has ways of causing you to mistake who you are talking to. But what this also means is that whenever people are doing this, what they are discussing must be really good.
Trying to catch a private chat in action is one of the most fulfilling Zoom games there is. If you see one person looking like they are typing something, wait until they stop and then immediately search your screen to see if someone else’s eyes light up right before they start typing. Once you have your mark, it is easy to confirm whether a chat is going on.
In this situation, you can then use the Zoom chat function to out someone with a “did you mean to send that to everyone?” in the chat. Then you can watch their face. Priceless.
These games keep me amused but I am sure you have invented more. Let me know in the comments. I need to stay entertained until this is all over. | https://debugger.medium.com/zoom-games-efeda8dceff9 | ['Joshua Gans'] | 2020-12-15 14:18:56.633000+00:00 | ['Covid 19', 'Zoom', 'Coronavirus', 'Technology', 'Education'] |
Emulating a PID Controller with Long Short-term Memory: Part 4 | Here we are in the last section of this fun project! Here’s where we’ll see some practical applications of using the LSTM to emulate the PID controller, as well as some potential shortcomings. If you haven’t read the previous articles in the series, I highly recommend going back so you can have some context. And of course, the neat thing about this project is that you can run all the code on your own Temperature Control Lab device, simulating something that you might see in a real refinery or chemical plant. So here’s a quick recap of this series, then we’ll get started!
Changing PID tuning parameters
If you’re familiar with PID controllers, the first thing you may have been wondering during the problem setup is what happens if you change the tuning parameters? Recall the PID controller equation:
If we change K_c, τ_I, or τ_D, the heater output changes, and then the LSTM that we trained no longer matches the output. One could look at this as a huge downside — if we change the tuning parameters for the PID controller, we’d have to go to all the work to retrain the LSTM to emulate the new behavior (back to Part 2). On the other hand, we could use this to our advantage. Once a PID controller is tuned to a system, it’s rare that we’d need to change the tuning parameters, so changes to the PID behavior might indicate a malicious attack from an outsider (or potentially an insider such as a disgruntled employee).
Suddenly, we have an anomaly detection method out of the box. We can do what we did as a check in Part 3, where we run the controller off the PID, but also check the LSTM output to make sure they’re similar. If they’re not, then we know something has set the PID controller out of whack, and we can investigate to fix it.
Let’s take a look at our setup, which should be familiar by now, but with a few changes:
#### Set up run ####

# Import model and model parameters
model = load_model('pid_emulate.h5')
model_params = pickle.load(open('model_params.pkl', 'rb'))

s_x = model_params['Xscale']
s_y = model_params['yscale']
window = model_params['window']

# Run time in minutes
run_time = 45.0

# Number of cycles
loops = int(60.0*run_time)

# arrays for storing data
T1 = np.zeros(loops)     # measured T (degC)
Qpid = np.zeros(loops)   # Heater values for PID controller
Qlstm = np.zeros(loops)  # Heater values for LSTM controller
tm = np.zeros(loops)     # Time
t_pid = np.zeros(loops)  # Time to compute PID controller output
t_lstm = np.zeros(loops) # Time to compute LSTM controller output
Q2 = np.zeros(loops)

# Time range to introduce anomaly (turn on heater 2, change PID tuning constants)
start_anom = int(0.7*loops)

# Heater 2 turned on during anomaly window
Q2[start_anom:] = 80

# Temperature set point (degC)
with tclab.TCLab() as lab:
    Tsp = np.ones(loops) * lab.T1

# vary temperature setpoint
end = window + 15  # leave 1st window + 15 seconds of temp set point as room temp
while end <= start_anom:
    start = end
    # keep new temp set point value for anywhere from 4 to 10 min
    end += random.randint(240,600)
    Tsp[start:end] = random.randint(30,70)

while end <= loops:
    start = end
    # keep new temp set point value for anywhere from 4 to 10 min
    end += random.randint(240,600)
    Tsp[start:end] = random.randint(30,50)
As usual, we import our model and accompanying parameters, set a run time, initiate the arrays to store data, and create a setpoint profile. There are a few things to point out. We have t_pid and t_lstm , which we’ll use later on to time the controllers. We have the start_anom variable, which indicates when the anomalous section of data should begin. Notice that I also set up an array for heater 2 ( Q2 ), which we’ll use for another test later on. And finally, I kept the setpoint for the heater a bit lower during the anomalous section of data; it will work with higher temperatures, but it’s easier to see when the temperature stays lower.
I also made a change to the pid(sp,pv,pv_last,ierr,dt) function, so it now takes the tuning “constants” as additional inputs (Kc,tauI,tauD) . This will let us change the PID controller output mid-run.
And finally, here’s what the code will look like as we run the simulation:
# Run test
with tclab.TCLab() as lab:
    # Find current T1, T2
    print('Temperature 1: {0:0.2f} °C'.format(lab.T1))
    print('Temperature 2: {0:0.2f} °C'.format(lab.T2))

    # Integral error
    ierr = 0.0
    # Integral absolute error
    iae = 0.0

    start_time = time.time()
    prev_time = start_time

    for i in tqdm(range(loops)):
        # Delay 1 second
        if time.time() > prev_time + 1.0:
            print('Exceeded cycle time')
        else:
            while time.time() < prev_time + 1.0:
                pass

        # Record time and change in time
        t = time.time()
        dt = t - prev_time
        prev_time = t
        tm[i] = t - start_time

        # Read temperature (C)
        T1[i] = lab.T1

        # Integral absolute error
        iae += np.abs(Tsp[i]-T1[i])

        # Perturb PID tuning parameter
        if i > start_anom:
            Kc, tauI, tauD = 3.0*Kc0, 0.5*tauI0, tauD0 + 2.0
        else:
            Kc, tauI, tauD = Kc0, tauI0, tauD0

        # Calculate PID output (and time)
        t0_pid = time.time()
        [Qpid[i],P,ierr,D] = pid(Tsp[i],T1[i],T1[i-1],ierr,dt,
                                 Kc=Kc,tauI=tauI,tauD=tauD)
        tf_pid = time.time()

        # Write heater output (0-100)
        lab.Q1(Qpid[i])

        # Run LSTM model to get Q1 value for control
        if i >= window:
            # Load data for model
            T1_m = T1[i-window:i]
            Tsp_m = Tsp[i-window:i]
            # Predict and store LSTM value for comparison
            t0_lstm = time.time()
            Qlstm[i] = lstm(T1_m,Tsp_m)
            tf_lstm = time.time()

        # Save controller times
        t_pid[i] = tf_pid - t0_pid
        t_lstm[i] = tf_lstm - t0_lstm
There’s a lot in there, but it should look familiar from Part 3. The main difference is the line where we change the PID tuning parameters: Kc, tauI, tauD = 3.0*Kc0, 0.5*tauI0, tauD0 + 2.0 when our time count goes beyond our start_anom specification. You’ll also notice a few time recordings in there, which we’ll address in the next section.
After letting this run, we can plot the results and put them into a video to visualize. Let’s see what happened!
As we saw in Part 3, the LSTM output closely resembles that of the PID, with obvious exceptions during the transient timeframe. This abruptly changes, as expected, when the PID tuning parameters change. You’ll notice that since the PID controller is the one actually writing the output, that the heater output also starts to go erratic, and the temperature starts to fluctuate a lot more.
We could easily plot the error between Qpid and Qlstm and use that to detect anomalous behavior. It’s also convenient to have a backup in this case; if we notice that the PID controller starts behaving oddly, we can quickly switch control over to the LSTM controller, just by changing lab.Q1(Qpid[i]) to lab.Q1(Qlstm[i]) . Pretty neat that what initially might be thought of as a shortcoming actually turns out to be quite useful!
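As a rough sketch of that idea (the rolling window length and threshold below are assumptions for illustration, not values from the original run), flagging anomalies from the disagreement between the two controllers might look like:

```python
import numpy as np

def detect_anomaly(q_pid, q_lstm, window=30, threshold=15.0):
    """Flag time steps where the rolling mean absolute disagreement
    between the PID output and the LSTM's predicted output exceeds
    a threshold (in heater % units, 0-100)."""
    err = np.abs(np.asarray(q_pid, dtype=float) - np.asarray(q_lstm, dtype=float))
    kernel = np.ones(window) / window          # rolling-mean kernel
    rolling = np.convolve(err, kernel, mode='same')
    return rolling > threshold

# Toy usage: the controllers agree for 100 steps, then disagree by 30 units
q_pid = np.concatenate([np.full(100, 50.0), np.full(100, 80.0)])
q_lstm = np.full(200, 50.0)
flags = detect_anomaly(q_pid, q_lstm)
print(flags[:50].any(), flags[150:180].all())
```

Once a flag trips, switching the write from `lab.Q1(Qpid[i])` to `lab.Q1(Qlstm[i])` is the one-line failover described above.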
Computation Time
If you recall, the primary reason we started this project was from a paper talking about using a neural network to emulate a model predictive controller, with the idea that the neural network would be faster than the controller. From the last run where we were computing both the LSTM and PID controller, we conveniently saved the time each one took. Let’s see what the results are:
print('LSTM:',t_lstm.mean())
print('PID:',t_pid.mean())

>>> LSTM: 0.03118442685515792
>>> PID: 2.043927157366717e-05
Well, that’s honestly what I expected. The PID is actually significantly faster than the LSTM, although the LSTM controller still computes in well under 1 second. There’s a reason that PID controllers are used, and that’s because they’re fast and perform quite well for several control problems. If you have a more complex controller, though, we might expect the LSTM controller to still only take a fraction of a second for each computation, assuming the model inputs and parameters are similar. So while it actually has a time disadvantage over a PID controller, there are still some uses (check back for a bonus article coming November 2020 about a more complex controller and how the LSTM controller could be advantageous in that case).
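The timing pattern used in the run above can be factored into a small helper; time.perf_counter generally gives better resolution than time.time for short intervals. The two lambdas below are cheap stand-ins, not the real controllers:

```python
import time

def time_call(fn, n=1000):
    """Average wall-clock seconds per call of fn over n calls."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n

fast = lambda: sum(range(10))       # stand-in for a cheap PID update
slow = lambda: sum(range(10000))    # stand-in for an LSTM forward pass
print(time_call(slow) > time_call(fast))
```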
Using Heater 2 to simulate additional anomalies
Finally, let’s take another look at how the LSTM controller could be used to detect other types of anomalies. We’ll simulate a situation where the ambient conditions are no longer standard in the refinery. Maybe this is from something as benign as different weather, or something as malicious as a cyber attack that is targeting another part of the plant (or even this specific process). How would the controller behave differently?
Let’s simulate this anomalous event by turning on heater 2 for part of the time. While heater 2 is physically separated from the sensor on heater 1, it is still close enough to affect the conditions around heater 1 which we’re trying to control. Our LSTM controller is trained on a specific set of data that was operating under standard room temperature, so if we turn on heater 2, we have a situation that the LSTM controller isn’t trained to control. Similar to the case of changing the PID tuning parameters, this initially seems like a downfall of using the LSTM controller. However, if we have a carefully controlled ambient environment in our plant, then we’d expect the controller to behave consistently; any deviation from that would indicate something anomalous is happening.
Here’s a look at the code for the run (recall that we already specified the Q2 parameters in the initial setup).
# Run test
with tclab.TCLab() as lab:
    # Find current T1, T2
    print('Temperature 1: {0:0.2f} °C'.format(lab.T1))
    print('Temperature 2: {0:0.2f} °C'.format(lab.T2))

    start_time = time.time()
    t = start_time

    for i in tqdm(range(loops)):
        # Delay 1 second
        if time.time() > t + 1.0:
            print('Exceeded cycle time by ',time.time()-t-1.0)
        else:
            while time.time() < t + 1.0:
                pass

        # Record time and change in time
        t = time.time()
        tm[i] = t - start_time

        # Read temperature (C)
        T1[i] = lab.T1

        # Run LSTM model to get Q1 value for control
        if i >= window:
            # Load data for model
            T1_m = T1[i-window:i]
            Tsp_m = Tsp[i-window:i]
            # Timer for LSTM controller output
            t0_lstm = time.time()
            # Predict and store LSTM value for comparison
            Qlstm[i] = lstm(T1_m,Tsp_m)
            tf_lstm = time.time()

        # Write heater output (0-100)
        lab.Q1(Qlstm[i])
        lab.Q2(Q2[i])
This probably all looks familiar by now, with the notable exception of the last two lines: we’ve turned over control to the LSTM, and we also read the Q2 value in. After running the simulation, we again can plot the results and put them into an animation.
Very interesting. Notice that once we turn on Q2, the temperature tends to overshoot the setpoint, and the controller isn’t able to properly account for the overshoot. That’s because the controller isn’t trained to account for extra heat coming from the nearby heater.
That gives me an idea, but I think I’ll let you try it out, now that you know how to set up the TCLab, train an LSTM, and run a temperature control simulation with it. Could you use the skills you learned from this series to set up a control system that turns on both of the heaters and tries to control both temperatures to a certain setpoint? This would be a Multiple Input, Multiple Output (MIMO) controller. For a prod in the right direction, visit the MIMO lab at APMonitor.com.
Final Thoughts
That brings us to the conclusion of this project. We started with a basic PID controller, generated some data, used that data to train an LSTM to emulate the PID controller, turned over control to the LSTM, and then looked at some useful applications such as detecting various anomalies and potential computation time savings.
So how did you do? Did you get excited about this project like I did? What other applications can you think of for this? I’m happy to hear suggestions for areas that could use more clarification. Thanks for following along on this project, and I look forward to future interactions. If you enjoyed this, feel free to connect with me on LinkedIn. | https://nrlewis929.medium.com/emulating-a-pid-controller-with-long-short-term-memory-part-4-19ab327be61b | ['Nicholas Lewis'] | 2020-10-28 20:27:59.144000+00:00 | ['Lstm', 'Machine Learning', 'Pid Controller', 'Python', 'Lstm Control Emulation'] |
Gentle Introduction to AutoML from H2O.ai | In recent years, the trend for data science skills and its demand had outpaced the skill supply. As artificial intelligence penetrates every corner of the industry its hard to place data scientists in every possible use case.
To bridge this gap, companies have started building frameworks that automatically process the dataset and build a baseline model. We see many of these implementations going open-source. According to one of the industry leaders, H2O.ai,
AutoML interface is designed to have as few parameters as possible so that all the user needs to do is point to their dataset, identify the response column and optionally specify a time constraint or limit on the number of total models trained.
According to Google Trends, the rise of Auto ML began in Q2 2017:
AutoML Google Trends
AutoML is a function in H2O that automates the process of building a large number of models, with the goal of finding the “best” model without any prior knowledge. In this article, we will look into AutoML from H2O.ai.
The implementation is available in both the R and Python APIs, and the current version of AutoML (in H2O 3.20) performs the following:
Trains and cross-validates a default Random Forest (DRF), an Extremely Randomized Forest (XRT), a random grid of Gradient Boosting Machines (GBMs), a random grid of Deep Neural Nets, and a fixed grid of GLMs. AutoML then trains two Stacked Ensemble models: the first containing all the models, and the second containing just the best-performing model from each algorithm class.
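The core loop that AutoML automates (train several candidate models, score each on held-out data, keep the best) can be sketched in plain NumPy. The candidate "models" here are toys for illustration, not what H2O actually trains:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 200)
y = 3.0 * X + 5.0 + rng.normal(0, 1, 200)          # noisy linear data
X_tr, X_val, y_tr, y_val = X[:150], X[150:], y[:150], y[150:]

def fit_mean(X, y):            # baseline: always predict the training mean
    m = y.mean()
    return lambda x: np.full_like(x, m)

def fit_linear(X, y):          # least-squares line
    slope, intercept = np.polyfit(X, y, 1)
    return lambda x: slope * x + intercept

candidates = {'mean': fit_mean, 'linear': fit_linear}
scores = {}
for name, fit in candidates.items():
    model = fit(X_tr, y_tr)
    scores[name] = np.mean((model(X_val) - y_val) ** 2)   # validation MSE

best = min(scores, key=scores.get)   # leaderboard winner
print(best)
```

H2O does the same kind of search over far richer model families, plus cross-validation and stacking, behind a single `train()` call.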
Install H2O.ai
The installation procedure is quite simple. All you need to do is have the following dependencies installed and then run pip install:
If you already have Anaconda installed, you can proceed directly with the conda command:
conda install -c h2oai h2o=3.20.0.1
Note: When installing H2O from pip in OS X El Capitan, users must include the --user flag. For example:
pip install -f http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Py.html h2o --user
For R and Hadoop installation please refer to the official documentation here.
Getting Started
Start the H2O instance by importing h2o and H2OAutoML, then calling h2o.init():
import h2o
from h2o.automl import H2OAutoML
h2o.init()
If the setup was successful, you will see the following cluster information.
In this example, we are going to use a dataset from DataHack Practice problem Loan Prediction III
The goal here is to predict whether or not a loan will be paid by the customer, given details like Gender, Marital Status, Education, and others.
First, let’s import the training set and check out .head() and the datatypes of the data frame.
df = h2o.import_file('train_u6lujuX_CVtuZ9i.csv')
df.head()
.head() method frame | https://medium.com/analytics-vidhya/gentle-introduction-to-automl-from-h2o-ai-a42b393b4ba2 | ['Mohammad Shahebaz'] | 2018-09-15 14:56:22.404000+00:00 | ['Data Science', 'Automation', 'Python', 'Machine Learning'] |
Combatting ‘Fairness Gerrymandering’ with Socially Conscious Algorithms | Decision-making algorithms help determine who gets into college, is approved for a mortgage, and anticipate who is most likely to commit another crime after being released from jail. These algorithms are made by programs that ingest massive databases and are instructed to find the factors that best predict the desired outcome.
Both the people who write and who use these algorithms understand that the decisions they produce are not always fair. Bias against race, gender, religion, sexual orientation — almost any subgroup status — can be present in the data, the way the computer draws relationships between data points, or both. This leads to bad predictions, in the form of both false positives and false negatives, that inordinately cluster in some subsets of the population in question.
Penn’s Warren Center for Network and Data Sciences is working on this problem.
Michael Kearns and Aaron Roth
Michael Kearns, founding director of the Warren Center and National Center Professor of Management & Technology in Penn Engineering’s Department of Computer and Information Science (CIS), and fellow Warren Center member Aaron Roth, Class of 1940 Bicentennial Term Associate Professor in CIS, are interested in imbuing these decision-making algorithms with social norms, including fairness. They’re interested in one particularly vexing issue: Algorithms that take fairness into account can have the paradoxical effect of making their outcomes particularly unfair to one subgroup.
This is known as “fairness gerrymandering.”
Like a political party that draws districts such that a critical majority of their opponents’ voters are concentrated in one spot, an algorithm can meet fairness constraints by unintentionally “hiding” bias at the intersection of the multiple groups it’s asked to be fair to.
A malicious person could achieve similar results on purpose. A racist country club owner, for example, could comply with fairness-in-advertising regulations by only showing ads to minorities who live far away and who could not afford the dues. Taken at face value, the owner has complied with the letter of the law, but still produced results that are unfair.
Algorithms are susceptible to producing the same sort of biased outcome due to the intrinsic trade-off that predictive algorithms have to make between fairness and accuracy.
Imagine an algorithmic classifier that connected students’ SAT scores and high school GPAs to their college graduation rates. Colleges might think such a classifier would be the most fair and objective way to predict which new applicants will do best and admit them accordingly. However, a “race-blind” algorithm — one tasked to maximize accuracy across the entire population of students — might inadvertently return results that are unfair to minorities.
Black students, through the institutionalized racism of poorer schools and less access to private tutoring, might tend to have lower GPA and SAT scores than their white counterparts, though the relationship between those scores and their college performance is just as strong, if not stronger. In that scenario, the algorithm would incorrectly reject black students at a higher rate simply because they are a minority: they have fewer data points and thus a smaller effect on the classifier.
One approach existing algorithms use to avoid this type of unfairness is to stipulate that the false-negative rates for each subgroup be equal. But this problem gets trickier and trickier as the number of subgroups an algorithm is tasked with considering increases. This is where fairness gerrymandering comes into play.
“We might ask an algorithm to make this assurance of fairness to populations based on, say, race, gender and income, all at the same time,” Roth says. “And the algorithms will make false-negative rates equal on race, the false-negative rates equal on gender, and the false-negative rates equal on income. The problem is, when we look at the false-negative rate for poor, black women, it’s extremely unfair. The algorithm has essentially cheated.”
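A toy numerical illustration of that "cheat" (the cell sizes and error counts below are made up and chosen by hand, not taken from the researchers' work): the false-negative rate can be exactly 10% for each race group and each gender group, while being 50% at one intersection.

```python
import numpy as np

# Synthetic true positives only, labeled by two binary attributes.
# (race, gender): (number of positives, number of false negatives)
cells = {
    (0, 0): (50, 9),
    (0, 1): (50, 1),
    (1, 0): (50, 1),
    (1, 1): (10, 5),   # small intersection where errors are concentrated
}

race, gender, pred = [], [], []
for (r, g), (n, fn) in cells.items():
    race += [r] * n
    gender += [g] * n
    pred += [0] * fn + [1] * (n - fn)   # pred == 0 is a false negative
race, gender, pred = map(np.array, (race, gender, pred))

def fnr(mask):
    return 1 - pred[mask].mean()        # every example is a true positive

print(fnr(race == 0), fnr(race == 1))       # equal across race
print(fnr(gender == 0), fnr(gender == 1))   # equal across gender
print(fnr((race == 1) & (gender == 1)))     # hidden at the intersection
```

Each marginal constraint is satisfied exactly, yet half the positives in the smallest intersection are misclassified.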
Critically, algorithms that do this type of fairness gerrymandering aren’t designed to do this on purpose; they just haven’t been explicitly told not to. Since they can meet the harder constraints by spreading out unfairness over the three groups they have been explicitly asked to protect, they do not need to consider that they have concentrated all that unfairness in the place where those three groups intersect.
Kearns, Roth and other Warren Center researchers, including doctoral student Seth Neel, and former doctoral student Steven Wu, are currently writing algorithms that do explicitly counteract fairness gerrymandering. Preliminary work on the subject is already on arXiv.org for other machine learning researchers to begin validating.
“We demonstrate theorems that prove that this algorithm provides protections against fairness gerrymandering,” Kearns says. “We also demonstrate experiments that show our definition of fairness in action. With every constraint you add you lose some accuracy, but the results remain useful. There’s a cost and a tradeoff, but there are sweet spots on real data sets.”
The key to the Warren Center’s efforts in this field is to synthesize knowledge from the relevant domains in law, philosophy, economics, sociology, and more, and to match it with the way computers “think” and behave.
“Traditional approaches to fairness, like regulations and watchdog groups, are important, but we’re trying to prevent discrimination directly in the algorithms,” Kearns says. “We’re not saying this is going to solve everything, but it’s an important step to get human social norms embedded in the code and not just monitor behavior and object later. We want the algorithms to be better behaved.” | https://medium.com/penn-engineering/combatting-fairness-gerrymandering-with-socially-conscious-algorithms-17e3e63cdbd1 | ['Penn Engineering'] | 2018-01-31 19:00:39.831000+00:00 | ['Machine Learning', 'Engineering', 'Criminal Justice', 'Fairness', 'Algorithms'] |
Write the way everyone understands | Tips for better communicative writing in tech industry
Photo by Iga Palacz on Unsplash
We live in the age of remote communication, where most of our daily interactions happen over the wire. We text our friends, browse social media, comment on others’ posts and pictures. At work, we communicate over Messaging Applications like Slack most of the day.
We have requirement documents for projects, planning documents for initiatives, so many types of materials that I cannot cover here. Even we put together sheets for camping trips. When we meet, we put down the agenda and goal of the meeting in the calendar. But worst of all, we communicate with people in channels and try to convey our point to them. Sometimes it can become frustrating, sometimes people misunderstand or don’t understand our intentions.
How can we communicate clearly and concisely in all these channels?
How can we succeed in our communication with other people? How to keep and take other people’s attention? How to make our audience go through our written documents effortlessly and with understanding?
Several techniques can help you learn to write clearly and concisely to motivate your audience to read and respond favorably to your communication.
Here are some tips to make your writing even more clear:
👉 Avoid unnecessary “fancy” words
Using straightforward words can help your audience understand your intention better. Be simple in your word choice. Here are some word replacements you can consider:
from Stanford Technical Communication Program
👉 Eliminate redundant words
Unnecessary words come in many forms. Like vague words, they can conceal instead of reveal your meaning. They can weigh down your writing and make it hard to understand. You can often replace phrases like these with single words. Your writing will be more concise and energetic, and readers will find it more enjoyable.
Wordy: It’s essential that you have your ID with you.
Clear: You must have your ID with you.
Wordy: There is an occasion for us to inspect the machinery.
Clear: We need to inspect the machinery.
Wordy: Every student has the ability to excel in this math class.
Clear: Every student can excel in this math class.
👉 Remove unnecessary “and”s
Wordy: check-in with your colleagues and ask them how they are
Clear: check-in with your colleagues, ask them how they are
👉 Avoid starting sentences with “there”, “this”, or “it”
Though a sentence may be grammatically correct, writing more concisely may be a better choice. Sentences that start with there/this or it can usually be shortened. It may be unclear who or what there/it is referring to. Consider rewriting the sentence to remove the unclear reference.
Wordy: There are studies that prove that a greater percentage of the things we worry about never actually happen.
Clear: Studies prove that a greater percentage of the things we worry about never actually happen.
Wordy: There are some people who tend to compose very long sentences.
Clear: Some people tend to compose very long sentences.
👉 Eliminate extra nouns
Extra nouns that do not provide additional context or meaning can make your sentence unclear.
Wordy: Make sure you have strict boundaries on work time.
Clear: Ensure you have strict boundaries on work time.
Wordy: Luis was interested in the data processing field.
Clear: Luis was interested in data processing.
👉 Replace multiple negatives with affirmatives
Multiple negatives require your readers to interpret your meaning. Affirmatives, instead, convey concise meaning that needs no interpretation.
Wordy: Your audience will not appreciate the details that lack relevance.
Clear: Your audience will appreciate relevant details.
👉 Don’t use filler words
Words and phrases such as basically, actually, in fact, and for all intents and purposes are often considered to be filler phrases. They make sentences wordy without contributing any important information. Avoiding empty filler words and phrases will make your writing more precise.
Wordy: As a matter of fact I was talking to him this morning.
Clear: I was talking to him this morning.
Wordy: Basically, they lost because they didn’t bother to practice.
Clear: They lost because they didn’t bother to practice.
👉 Use Active voice
The passive voice is not a grammatical error. It’s a style choice. However, most readers prefer the active voice (I saw mom). In a clause written in the active voice, the subject of the clause performs the action. In a clause written in the passive voice, the action is performed upon the subject of the clause (Mom was seen).
The active voice can provide more clarity, brevity, accountability, or certainty than the passive voice. If the active voice makes sense, use it. However, the passive voice may be more appropriate when the actor is unimportant or unknown.
Wordy: The mayor was informed of the accounting errors.
Clear: Mr Lee informed the mayor of the accounting errors.
Wordy: Mistakes were made.
Clear: We made mistakes.
👉 Know when to use technical words and jargon, and when not to
When communicating with non-technical colleagues, don’t use technical words. You need to be able to explain technical details in a non-technical way. Use abstractions and analogies instead.
For example, instead of using the technical term RAM: | https://medium.com/swlh/write-the-way-everyone-understands-2084d1a3e26 | ['Naz Delam'] | 2020-05-18 21:40:06.286000+00:00 | ['Remote Working', 'Technical Writing', 'Software Engineering', 'Softskillsimprovement', 'Writing Tips'] |
Simple Linear Regression in a Comprehensive way | Simple Linear Regression in a Comprehensive way
Regression is a predictive modelling technique that investigates the relationship between independent and dependent variables. There are many types of regression, and Linear Regression is one of them. Linear Regression predicts the dependent variable by assuming the relationship between the independent and dependent variables is a straight line.
What is Simple Linear Regression?
It’s the simplest of the regression models. Simple Linear Regression applies only when the data has one independent variable; it predicts the dependent vector by modelling the relation between the dependent and independent vectors as a straight line, expressed in the form below.
y = mx + c where
y is the dependent vector
x is the independent vector
c is the constant and also called as bias, which is added to the line
m is the slope, which scales the independent vector x
Mathematically, c is the y-intercept that determines the value of y when x is 0, and m is the slope, which determines the angle of the line.
How it works
The Simple Linear Regression model fits a regression line so that it is as close as possible to all the data points in the dataset. To get a clearer idea of how it works, let’s go through an example. We have a salary dataset comprising Years of Experience and Salary. The dataset is as follows.
Overview of dataset
Here the dependent variable is Salary and the independent variable is YearsExperience. If the dependent variable increases as the independent variable increases, there is a positive correlation between them; if it decreases, there is a negative correlation.
Now let’s draw a scatter plot of YearsExperience versus Salary.
Scatter plot Experience VS Salary
The best-fit line for the data is the one that produces the least squared error among all the regression lines that could be drawn. This method of finding the best-fit line is called the Least Squares Method.
Now let’s get started by drawing a regression line through the data points in the scatter plot using the means of the independent and dependent vectors: draw a line that passes through the point whose coordinates are the means of the two vectors.
Xm ( Mean) = Sum of all experience values / Total number of experience values
Ym (Mean) = Sum of all salary values / Total number of salary values
Now plot a line, which will be our assumed regression line, over the data points in the scatter plot.
Regression line plotted using mean
From the above plot, we can observe that the regression line is somewhat far from some data points. The whole process is iterative and continues until the best-fit line, the one with the least squared approximation error, is obtained.
The values on the regression line corresponding to the original values are called predicted values. The least squares approximation error can be calculated as follows.
Distance Approximation among original value and predicted value
Next, we calculate a new regression line whose approximation error is smaller than that of the current line.
The slope or multiplier of new regression line can be calculated as follows:
m = Σ(Xi − Xm)(Yi − Ym) / Σ(Xi − Xm)², where Xi and Yi are the values of the independent and dependent vectors, and Xm, Ym are their means. Each value is subtracted from its corresponding mean, and the slope of the new regression line is calculated from the two sums.
Let’s calculate the slope for new regression line. You can view the whole calculation in the following table.
Summing d*e and d*d over the training data gives the numerator and denominator of the slope formula (sum1 and sum2 in the code below); dividing them yields the slope of the new regression line.
I have performed all these operations using Python code. Have a look at it.
Xm = np.mean(X_train)
Ym = np.mean(y_train)

sum1 = 0
sum2 = 0

print('Experience Salary d=Xi-Xm e=Yi-Ym d*e d*d')
print('------------------------------------------------------------------------------------------------')
for pos in range(0, len(X_train)):
    d = (X_train[pos] - Xm)
    e = (y_train[pos] - Ym)
    sum1 = sum1 + d*e
    sum2 = sum2 + d*d
    print(f'{str(X_train[pos]):{10}} {str(y_train[pos]):{10}} {str(X_train[pos]-Xm):{20}} {str(y_train[pos]-Ym):20} {str(d*e):{20}} {str(d*d):{20}}')
From the slope, we can calculate the y-intercept (bias) by substituting the mean point into the line equation y = mx + c, giving c = Ym − m·Xm.
The obtained line equation is the new regression line and this process is continued for all regression line that cab be possibly drawn in our scatter plot. The regression line with minimum least square approximaton error is called best fit line.
Don’t worry, Python’s scikit-learn library does all this hectic work for us.
R-square Regression Analysis
To check how well our model fits the data, we can use the R-square regression analysis method, also called the coefficient of determination. The higher the R-square value, the better the fit. However, a model with a lower R-square value is not necessarily bad; what counts as a good value depends on the problem statement. We can find the R-square value of the model in the following way.
where Yp is the predicted dependent variable, Y is the actual (original) dependent variable, and Ym is the mean of the dependent variable. As with the least square approximation method, we can calculate the R-square value.
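A small sketch of this formula, R-square = 1 - SS_res / SS_tot, on hypothetical actual and predicted salaries (illustrative values only):

```python
import numpy as np

# Hypothetical actual and predicted salaries (not the article's real data)
y_actual = np.array([40000.0, 50000.0, 60000.0, 70000.0])
y_pred = np.array([41000.0, 49000.0, 62000.0, 68000.0])

ss_res = np.sum((y_actual - y_pred) ** 2)           # sum of squared residuals
ss_tot = np.sum((y_actual - y_actual.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot
print(r2)  # 0.98 for these illustrative numbers
```

The closer the predictions are to the actual values, the smaller SS_res gets, and the closer R-square gets to 1.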
Let’s implement Simple Linear Regression on the Salary dataset. First, import all the necessary libraries and load the dataset.
# Importing the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Loading the dataset
df = pd.read_csv('Salary_Data.csv')
Now go through the data or perform some EDA (Exploratory Data Analysis) to understand and get familiar with the dataset.
# Viewing a few rows of data
print('----- Few rows of data -----')
print(df.sample(10))
print('\n')

print('----- Features in the dataset -----')
print(df.columns)
print('\n')

print('---- Shape of the dataset -----')
print(df.shape)
Some insights about the dataset
There are about 30 observations in our dataset, with two columns, namely YearsExperience and Salary. Our problem statement is to predict a person’s salary based upon the experience (in years) he/she has. So YearsExperience is the independent variable and Salary becomes the dependent variable as per our problem statement.
Let’s check the dataset for any missing values.
# Check for null values
df.isnull().sum()
Insights about Missing values in the dataset
Hurrah..!! No missing values are present in the dataset, so there is no need for any data preprocessing. Let’s jump directly into splitting our dataset into independent and dependent vectors.
# Converting the dataset into dependent and independent vectors
# YearsExperience
X = df.iloc[:, :-1].values

# Salary
y = df.iloc[:, 1].values
Now split the data into training and test sets using scikit-learn’s train_test_split method. I have split the data so that 80 percent is the training set and 20 percent is the test set.
# Splitting the dataset into training and test sets
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Some insights about the dimensions of the training and test data after splitting them.
# Dimensions of the dataset after splitting into training and test sets
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
Dimensions of the resultant datasets
Now the training and test datasets are ready. Let’s import scikit-learn’s LinearRegression model and instantiate it.
# Fitting the Simple Linear Regression model to the training data
from sklearn.linear_model import LinearRegression

# Instantiating the LinearRegression model
linear_regression = LinearRegression()

# Fitting to the training data
linear_regression.fit(X_train, y_train)
Output showing our Linear Regression model is trained
The LinearRegression model we instantiated has been fitted to our training data, meaning a regression line (the best fit line) with the minimum least square approximation distance has been identified. From that regression line, our model can start predicting the output.
Now pass the test data to the model to see the salary predictions it makes.
# Predicting the dependent variable using the independent variable
predictions = linear_regression.predict(X_test)
Our model has predicted the salaries of persons with respect to their experience. Let’s view the predicted and original values together.
# Let's view predicted and original salaries
print('Predicted - Original')
for pos in range(0, len(predictions)):
    print(f'{predictions[pos]:<{25}} {y_test[pos]:<{15}}')
Predicted and original values
In some cases the predicted values are very close to the original values, and in other cases they are somewhat (but not too) far from them. This is because our regression model learns the correlation among the variables by expressing it as a straight line, and not all data points pass through the regression line, due to outliers and other factors. These factors cause the differences between the predicted and original target values, which is why Linear Regression is not 100 percent accurate.
Let’s visualize the relationship between the independent variable and both the predicted and original values, along with the regression line, to get a clearer idea.
First, let’s plot a scatter plot of Experience versus Salary for the training dataset, along with the regression line.
# Training data vs. regression line
# The regression line is drawn using predicted values for the training set
plt.scatter(X_train, y_train, color='blue')
plt.plot(X_train, linear_regression.predict(X_train), color='red')
plt.title('Years VS Salary')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
Original and Predicted values of training set
Since it’s the best fit line, almost every data point is very close to the regression line. Now let’s plot a scatter plot of Experience versus Salary for the test dataset, along with the regression line.
Original vs predicted values of test data
The regression line is very close to almost every point in the test dataset as well.
Mean squared error and R-square regression analysis can be performed in the following way.
# Import libraries
from sklearn.metrics import mean_squared_error, r2_score

# Model evaluation for the training set
y_train_predict = linear_regression.predict(X_train)
rmse = np.sqrt(mean_squared_error(y_train, y_train_predict))
r2 = r2_score(y_train, y_train_predict)

print("The model performance for training set")
print("--------------------------------------")
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
print('\n')

# Model evaluation for the testing set
y_test_predict = linear_regression.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, y_test_predict))
r2 = r2_score(y_test, y_test_predict)

print("The model performance for testing set")
print("--------------------------------------")
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
Mean squared error and R2 score for train and test data
The complete Jupyter notebook can be found below.
You can find the GitHub repository here
Thanks for reading..!!
Hope you liked my article. Do share the article if you think it will be useful to your peers.
Let me know if you have anything to ask in the comments section :)
Reach out to me here | https://medium.com/bycodegarage/simple-linear-regression-in-a-comprehensive-way-e290a1358a6d | ['Mallidi Akhil Reddy'] | 2019-09-06 03:56:02.410000+00:00 | ['Programming', 'Linear Regression', 'Python', 'Data Science', 'Machine Learning']
[JAVA-2a] Building a Simple Calculator: Types and Variables

Last time we successfully said hello in Java, much like the original Macintosh in 1984, apart from almost every single way. We also came across a number of unfamiliar terms that we skimmed over. Let’s try to understand some of those terms by building a simple command line calculator app.
Overview
Think about how a calculator works. We type in some number, then the arithmetic operation we want to perform, after that another number, and finally the equal sign. Because we are building a simple calculator, we are only concerned with the four basic types of arithmetic operations: Addition, subtraction, multiplication, and division. We also don’t care about chaining operations such as 1+2+3+4=? . We only want to do 1+2=? .
We can try to generalize and streamline how a simple calculator works:
Type in first number -> Choose the arithmetic operation -> Type in second number -> Press equal -> Print results.
Let’s go ahead and make it happen in Java! How hard can it be?
Types and Variables
Have you ever gone to a public restroom? If you have, you may have noticed that there are, generally speaking, two types of restrooms: Male and Female. Let’s assume every person fits into one of those two categories, then we can say there are two types of people in the world: Male and female. Let’s hold this thought for a bit.
Do you have a name, hopefully? If so, chances are your name is not unique. For example, my legal name, Boyuan Xu, is also the name of some character in a novel called “The King’s Avatar”. We can say a name is only a designation — for the same name, we could be referring to different things. Combined with what we said in the last paragraph, we can then try to describe some person using the combination “(Type) — (Name)”. For example, you would describe me as “Male — Boyuan Xu”.
But since our names are not unique, “Male — Boyuan Xu” can easily refer to the character in the novel as well. The point is, even with the same type and name, the actual content can still differ. This is pretty much what a variable is in programming: something with a specific type and name that can have different contents.
Unsurprisingly, types in Java do not consist of Male and Female. Here is a list of primitive data types, aka the most basic types a language supports, in Java:
byte: The smallest unit, can represent values from -128 to 127. Binary.
boolean: Literally either true or false. That’s it.
char: A single character. ( A or , )
double: Use it for decimals. ( 0.123456789 )
float: We can also use it for decimals but it has less precision compared to double. Just use double.
int: An integer. ( 12345 )
short: A smaller integer used for when you need to save space.
long: A bigger integer for when the number gets too big.
Out of all those, we are only concerned with boolean , double , and int for now. Don’t worry about the others. But wait: where is String ? Is it not a primitive data type? How did we print out “Hi, I am ____” last time? In most languages, string is considered to be a primitive data type but not in Java. In Java, a String is considered to be a simple object instead which is why its type begins with an uppercase ‘S’ instead of a lowercase one. We can see it as a primitive type most of the time — String is a special case and it really doesn’t matter too much apart from one sneaky inconvenience which we will cover later.
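To make those types concrete, here is a small sketch declaring one variable of each type we will actually use (all the names and values are arbitrary):

```java
public class TypesDemo {
    public static void main(String[] args) {
        // The three primitive types we care about for the calculator
        boolean isReady = true;        // either true or false
        int wholeNumber = 12345;       // an integer
        double decimal = 0.123456789;  // use double for decimals

        // String is an object, not a primitive, hence the uppercase 'S'
        String greeting = "Hello";

        System.out.println(isReady);
        System.out.println(wholeNumber);
        System.out.println(decimal);
        System.out.println(greeting);
    }
}
```

Notice the pattern is always the same: type, then name, then an initial value.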
Now that we know about types in Java, let’s go ahead and declare a variable. We want to create an integer named favoriteNumber with an initial value of 7. Let’s do it:
int favoriteNumber = 7;
Not too difficult, right? We can then print out my favorite number by doing:
System.out.println(favoriteNumber);
When you are referring to a variable after its declaration, you only need to call it by its name, much like how others don’t call me “Male — Boyuan Xu” all the time. You can probably see why when we said hello in Java, we had to encapsulate our sentence with double quotes — anything without single or double quotes, Java sees it as a variable name.
If we want to increase the number, type the following after the previous print statement:
favoriteNumber++;
This means we want to increment the integer by 1. Similarly, we can decrement the integer by 1 by replacing the plus signs with minus signs. Print out the integer once more and you can see that it indeed increased by 1 and went from 7 to 8. Another way to increment the integer by any amount is by typing:
favoriteNumber = favoriteNumber + 9;
Or the shorthand:
favoriteNumber += 9;
We have just increased our favorite number by 9.
We can also completely replace our favorite number by doing:
favoriteNumber = 6;
Now our new favorite number is 6.
You cannot replace the variable’s contents with something of a different type, for example favoriteNumber = "Hello" . You also cannot declare a new variable with the same name, even if the types are different. Try it and see what happens!
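Putting all of the snippets above together, a complete class might look like this (the class name is arbitrary):

```java
public class FavoriteNumber {
    public static void main(String[] args) {
        int favoriteNumber = 7;          // declare an int with an initial value of 7
        System.out.println(favoriteNumber);

        favoriteNumber++;                // increment by 1, now 8
        System.out.println(favoriteNumber);

        favoriteNumber += 9;             // increment by 9, now 17
        favoriteNumber = 6;              // completely replace the contents
        System.out.println(favoriteNumber);
    }
}
```

Save it as FavoriteNumber.java, compile with javac, and run it to see 7, 8, and 6 printed in order.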
Here is a screenshot of how things should work after all of the above: | https://medium.com/swlh/java-2a-building-a-simple-calculator-types-and-variables-82f3787b67eb | ['Jack Boyuan Xu'] | 2020-01-03 20:52:22.622000+00:00 | ['USC', 'Java', 'Programming', 'Beginner', 'Viterbi']