Why Investing in Home Security Cameras is a Good Idea

Home security cameras have helped reduce crime significantly, but are they the best way to protect your home? Many homeowners wonder whether these small, inexpensive devices are really effective at deterring criminals and preventing property damage. In reality, the best home security cameras have multiple functions and offer many different features, all of which are meant to enhance your home’s security and protection. If you’ve recently invested in home surveillance cameras or have been thinking about them for some time, it’s important to learn about the different options available. Here are the most common options and their most important features.
Motion-activated security cameras capture footage by detecting movement and automatically sending a signal to a remote receiver or monitor. Some home security cameras are infrared-based, which provides a clearer picture with lower light emission. These cameras also tend to work better in low-light environments, since they require less energy to operate. Most wireless security cameras today are also easy to install, thanks to their slim designs. Wireless systems usually use digital video recorders to store the footage, so you’ll be able to keep and review footage for weeks or even months without the hassle of DVDs or VCRs.
Home security cameras with wireless components have even more benefits. Because these cameras don’t require you to run wires throughout your house, installation is much easier than with a hard-wired system. You won’t have to worry about running cables over hallways and through walls, which means you won’t need to deal with the hazards that come with drilling holes and crumbling plaster. Some new wireless systems include advanced features such as “panic buttons” and telephone dialers, which can help protect the most vulnerable areas of your home.
Home security cameras
Most people think that wireless home security cameras only come in one color: white. While there is no reason all wireless surveillance cameras should look the same, most of them do. In recent years, however, many manufacturers have started building surveillance cameras that are not only stylish but also incorporate features such as remote monitoring, facial recognition, automatic shut-down, touch-screen functionality, and more. Depending on the type of camera you purchase, it may also be possible to integrate it with a computer system. You don’t need a special license to operate one, although most states require you to get permission before installing a surveillance camera in a private residence.
If you’d like to protect your family from intruders but don’t want to pay for an expensive home security system, you’ll likely appreciate the convenience of wireless surveillance cameras. These devices usually cost only a few hundred dollars to purchase and install. For this reason, they’re an excellent option for people who live in apartments or for homeowners who aren’t interested in installing a complicated security camera system. As long as you have a power source and a laptop with an internet connection, you can start recording video footage right away! Even if you never watch the feed in real time, the footage becomes valuable evidence if your camera captures an intruder entering your home at night or someone breaking into your apartment after hours.
Home security systems are very popular because they provide homeowners with peace of mind. Homeowners who invest in high-quality security systems can often sleep better at night knowing that their families and homes are safe and secure. This added security will also make burglars think twice before attempting to break into your home. For this reason, many homeowners with expensive security systems consider adding surveillance cameras to them. And if you’re concerned about crime in your area, contact your local government offices and request information on crime prevention and local crime statistics.

Source: https://medium.com/@rhodiumcctv99889/why-investing-in-home-security-cameras-is-a-good-idea-7686f4ff9084 (Rhodium Security, 2021-07-12). Tags: Security Camera Systems, Home Security Cameras, Best Cctv Camera System, Home Security Systems, Security Cameras
Learning to Learn by Barbara Oakley — Part 1

This is a written version of the video https://www.youtube.com/watch?v=vd2dtkMINIw, a Talks at Google event about learning to learn new things. Ideas about learning that are backed by science are always interesting, since they reflect what lifelong learners actually do.
In the video, she talks about the adventurous life she led from early childhood, when she was not interested in math because of her always-moving life with her parents; she had already moved to 10 different places by the time she was in tenth grade. She loved learning languages, so she joined the military, where she could learn another language without paying for it. She learned Russian in the military and later changed professions to become a crew member on a big Russian fishing ship (a kind of trawler). She finally ended up in Antarctica, where she met her husband.
Her rich experience kept giving her new perspectives. She realized, though, that she had become used to always gaining new perspectives; the adventurous life itself had become a sort of comfortable thing. She was not actually stretching herself to reach a totally new perspective.
She thought back to her time in the military, when she worked with West Point engineers who had exceptional problem-solving skills. They could think in ways that she could not. For instance, they read and “communicated” with the equations below. Questions arose: What if we could read equations the way they could? Could we learn the language they were able to speak? Could we actually change our brains to learn what these people knew? How could we change our brains?
Captured from the video
To answer these questions, she did research. She visited ratemyprofessors.com and observed top professors worldwide who teach difficult subjects like engineering, math, chemistry, physics, and economics, as well as related subjects like psychology and English. She researched how they teach and how they learn for themselves, and she also reached out to top cognitive psychologists and neuroscientists. She came to the conclusion that these experts rely on metaphor and analogy. They are embarrassed to admit it, because other professors would say, “Ah, you are dumbing things down.” But metaphor is actually what they use to communicate ideas easily with each other; it is like a shared handshake.
The following sections cover the key ideas about how we can learn such subjects.
Brain Thinking Modes
There are two modes of thinking: focused mode and diffuse mode. Focused mode is a way of thinking that follows established patterns, for example, the way we think when multiplying numbers. It is a thinking exercise we have practiced since elementary school, so its pattern has been mapped out in our brains by our previous learning. To use a metaphor: if there were a pinball machine in our brain, focused mode would trace a familiar path for the ball between the pins. We usually enter focused mode by sitting down and thinking about a solution to a problem, immersing ourselves in finding it, and it takes time. On the other hand, back in the metaphor, what if the ball intentionally or accidentally bounces off other pins and makes a new path between the pins in the machine? That is called diffuse mode.
Diffuse mode happens when we learn new things or come up with new solutions to problems we have never encountered before. The ball creates a new path in our pinball machine; in other words, the brain creates a new way of thinking. Diffuse mode can be harnessed by stepping away from the problem we want to solve, for instance by going outside for a jog or by taking a nap.
Orchestrating these two different thinking modes can help us a lot in solving really difficult problems. Focused mode helps us revert to our previous, established learning and apply known solutions to a similar new problem. It usually works. But what if it does not, because of new complexity? The worst thing we can do is just keep sitting and focusing on the problem. That is when diffuse mode plays its role. To summon it, we can take a walk or a shower. Our brain will map out a new thinking path, as if the ball were bouncing between different pins and creating a new pattern in the pinball machine. Both focused mode and diffuse mode are part of the learning process.
Each person has their own way of getting their mind off a problem. The famous surrealist painter from Spain, Salvador Dali, found inspiration by sitting in a chair and relaxing, deeper and deeper, while holding a key chain in his hand. As he became truly relaxed, his mind ran free. The moment he fell asleep, the chain would fall and make a noise, and at that point he would often catch the ideas and inspiration behind the masterpieces we praise today. The same was true of Thomas Alva Edison, the inventor famous for producing an innovative light bulb after many experiments. When he was stuck on a difficult problem, he would do the same as Dali did, but using ball bearings instead of a key chain.
Diffuse mode is not conscious, but it plays an important role in the learning process. It connects the dots between material we have already absorbed, which does not happen in focused mode. So, a BIG YES to taking breaks between study sessions! And because of this back-and-forth between focused mode and diffuse mode, learning takes time. The next question: what if we are procrastinators?
A Procrastinator
Procrastinators tend to see a task as an unpleasant activity, and when we feel that way, there is a pinch of pain in our brain. There are two things we can do in this condition. First, we can work through the unpleasant feeling, and it will diminish and fade away in the end. Second, we can just move on to things we like, for example turning to social media or watching funny videos on YouTube. That will not hurt much in the short term, but if we keep procrastinating over the long term, it will be detrimental to our lives. Having said that, shouldn’t we go back to the first choice to overcome procrastination?
The first solution, just do it, may require a change of mindset. Instead of focusing on the task, or worse, on the pain of finishing it, try to focus on the time. One technique for this is the Pomodoro Technique, developed by Francesco Cirillo in the 1980s. It uses a timer to break down work into intervals, traditionally 25 minutes in length, separated by short breaks (Wikipedia).
A tomato-shaped kitchen timer like the one Cirillo used. Pomodoro is Italian for tomato.
What we can do is set a timer for around 20 minutes (or whatever length suits us) and, free of distractions, focus on the time while simply doing the task. This makes the task easier to start. After each interval, reward yourself by taking a break and relaxing. It is okay to relax, since relaxing is also part of the learning process; this is when the brain connects the dots between pieces of knowledge.
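To make the interval structure concrete, here is a tiny illustrative sketch in JavaScript. The function name and the default lengths are mine, not part of Cirillo’s technique; the classic values are 25-minute work blocks separated by 5-minute breaks.

```javascript
// Illustrative only: lay out a Pomodoro-style session as data.
// Classic lengths are 25-minute work blocks and 5-minute breaks.
function pomodoroPlan(blocks, workMin = 25, breakMin = 5) {
  const plan = []
  for (let i = 1; i <= blocks; i++) {
    plan.push({ type: 'work', minutes: workMin })
    // the reward step: a short break after every block except the last
    if (i < blocks) plan.push({ type: 'break', minutes: breakMin })
  }
  return plan
}

// Two focused blocks with one break in between: 55 minutes in total.
const session = pomodoroPlan(2)
```

Each “break” entry is the reward step described above: step away, relax, and let the brain connect the dots.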
About Sleeping
One form of relaxation is sleep. While we are awake, metabolites accumulate between our brain cells, or neurons. They build up, act as a kind of toxin, and impair our thinking. That is why it is usually a bad time to make decisions when we lack sleep.
AWAKE. The metabolites (red) accumulate between the brain cells (yellow). Source: “Sleep Drives Metabolite Clearance from the Adult Brain” by Xie, Lulu, Hongyi Kang, Qiwu Xu, Michael J. Chen, Yonghong Liao, Meenakshisundaram Thiyagarajan, John O’Donnell, et al.
SLEEP. The brain cells shrink, allowing fluid to wash the metabolites out. Source: “Sleep Drives Metabolite Clearance from the Adult Brain” by Xie, Lulu, Hongyi Kang, Qiwu Xu, Michael J. Chen, Yonghong Liao, Meenakshisundaram Thiyagarajan, John O’Donnell, et al.
Sleep also helps us grow the new synapses that form after learning something new. This is based on scientific research by Guang Yang et al., “Sleep promotes branch-specific formation of dendritic spines after learning.” So a cycle of learn, sleep, learn, sleep is recommended in order to grow more synaptic connections. If the relaxation of sleep helps by growing new synaptic connections, what about active exercise?
About Exercising
Exercise is incredibly important for the learning process. A study found that our brains produce new neurons every day in the hippocampus, a process called neurogenesis. Two kinds of activity help these new neurons grow and survive: exposure to new environments, like traveling, and exercise, even simple walking. So what is special about the newly generated neurons? They apparently function as pattern integrators, introducing a degree of similarity to the encoding of events that occur closely in time. So the next time our kids ask for a recess or a break from learning, give it to them! That is when their brains produce the new neurons that build the memory of what they learned.
To be continued...

Source: https://medium.com/@maung-sutikno/learning-to-learn-by-barbara-oakley-dbd5bba511d9 (Maung Agus Sutikno, 2021-07-09). Tags: Barbara Oakley, Brain, Longlife Learning, Neuroscience, Learning And Development
MDN Breakout with Phaser 3 — Part 1

The MDN “2D breakout game with Phaser” tutorial shows you a basic way to make a game with Phaser 2. I recently did the tutorial so I could guide others through it.
However, the tutorial doesn’t show you how to use the most recent version of Phaser, or how to use good design concepts that will help you make your own game using JavaScript best practices.
So I set about adapting the MDN tutorial according to the format set out in the Ourcade “Modern JavaScript” Phaser tutorial. The tutorial uses the phaser3-parcel-template, which helps with starting up a complex Phaser project.
Here is a link to the finished code, and another link to the assets used in the project.
In this article, I will cover steps 1 through 6 of the MDN article: “1. Initialize the framework,” “2. Scaling,” “3. Load the assets and print them on screen,” “4. Move the ball,” “5. Physics,” and “6. Bounce off the walls.”
Initialize the Framework
To initialize the framework, we will follow the directions in the phaser3-parcel-template repository to clone it into a new directory. Note that if you wish to upload your work to your own repository later, you will have to delete the hidden “.git” folder.
With this template, you get a project format that is set up for modular design. We will use JavaScript imports to allow us to split our code into modules.
You’ll note that when importing, we don’t need the ‘.js’ extension. This is thanks to the Parcel bundler included in the project via NPM, which resolves module paths for us.
The next thing we want to do is set the values for the canvas type and size. We’ll go into the ‘main.js’ file in the ‘/src’ directory of the project. We’ll find the config object there, and that is where we can set the game to use the ‘CANVAS,’ set the size of the game world, and set scaling.
We’ll create the GameScene in just a minute, and it will replace the HelloWorldScene that comes with phaser3-parcel-template.
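As a concrete reference, here is a sketch of what main.js might contain after these changes. The 480 by 320 size matches the MDN tutorial’s game world; the specific scale-mode values (FIT and CENTER_BOTH) are my assumption for step 2, and GameScene is the scene we create below:

```javascript
// src/main.js (a sketch, not the template's exact file)
// The scale settings below are an assumption for step 2 of the MDN article.
import Phaser from 'phaser'

import GameScene from './scenes/GameScene'

const config = {
  type: Phaser.CANVAS, // step 1: render with the 2D canvas API
  width: 480,          // the MDN tutorial's game-world size
  height: 320,
  scale: {
    mode: Phaser.Scale.FIT,              // step 2: scale to fit the window
    autoCenter: Phaser.Scale.CENTER_BOTH // and keep the canvas centered
  },
  physics: {
    default: 'arcade' // step 5: the lightweight arcade physics engine
  },
  scene: [GameScene]
}

export default new Phaser.Game(config)
```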
In the above code, we set the display type to canvas (step 1), while the other settings handle the canvas size and scaling (step 2) and the physics (step 5).
Create Game Scene
Before we begin loading assets into the scene, we’ll create the GameScene.js file in src/scenes.
Create the Ball
Before going through the steps, you’ll want to grab the ball asset so you can load it. We’ll also want to create a reference to the canvas that we’ll use for positioning the ball when it is created. This will be useful, as the canvas size can vary based on the size of the user’s screen.
Next, we’ll use a pattern that will be repeated throughout the tutorial for creating game objects.
1. Create a key constant to use when loading the asset.
2. Make a property of the Scene object in its constructor method in which to store the asset.
3. Load the asset in the Scene’s preload method.
4. Make a createObject method in the Scene.
5. Call the createObject method in the Scene’s create method and assign the return value to the property created in step 2.
In the below code, we’ll also cover step 6 of the MDN tutorial, “Bounce off the walls,” because it only requires one line of code in the createBall method.
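A hypothetical reconstruction of src/scenes/GameScene.js, following the five-step pattern above, might look like this. The asset path, the positions, and the BALL_KEY name are my assumptions; this.load, this.physics, and this.sys are services Phaser injects at runtime. In the real project the class extends Phaser.Scene and is exported as the default export, as noted in the comments, so that this sketch stays self-contained:

```javascript
// Hypothetical sketch of src/scenes/GameScene.js illustrating the five steps.
// In the real project: class GameScene extends Phaser.Scene, with
// super('game-scene') in the constructor, and `export default GameScene`.

const BALL_KEY = 'ball' // step 1: key constant for the asset

class GameScene {
  constructor() {
    this.ball = undefined // step 2: property that will hold the object
  }

  preload() {
    // step 3: load the asset under its key (this.load comes from Phaser)
    this.load.image(BALL_KEY, 'assets/ball.png')
  }

  create() {
    // step 5: call the factory method and keep the return value
    this.ball = this.createBall()
  }

  createBall() {
    // step 4: build, configure, and return the game object,
    // positioned relative to the canvas so it works at any screen size
    const { width, height } = this.sys.game.canvas
    const ball = this.physics.add.image(width / 2, height - 50, BALL_KEY)
    ball.setCollideWorldBounds(true) // step 6 of MDN: bounce off the walls
    ball.setBounce(1, 1)
    return ball
  }
}
```

Phaser calls preload() and create() for us when the scene starts; we never call them by hand.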
Note that in the createBall method, we create an object and then return it from the method. This is best practice for a createObject method. The return value is then attached to a property of the scene object, which allows it to be used by other methods in the scene.
With that, we have accomplished steps 1 through 6 of the MDN breakout tutorial, but with Phaser 3 and in a modular JavaScript format!
We have a scene, and we have a ball that bounces around the screen.
Up Next
In the next article in this tutorial series, we will add a paddle and create game-over conditions.

Source: https://medium.com/@michaelbragg-20879/mdn-breakout-with-phaser-3-part-1-a46a36e4a034 (Michael Bragg, 2021-01-11). Tags: Phaser 3, Modular Design, JavaScript, Game Development
[Study] Machine Learning — Regression

Source: https://medium.com/doyuns-lab/study-machine-learning-regression-2d0ffdf61eb5 (Doyun S Journey, 2020-10-25). Tags: Doyun Study, Regression
The First Signs of Alcoholic Liver Damage Are Not in the Liver
My father died of alcoholic liver cirrhosis four years ago. It came as a surprise to all of us, even though it was clear he had a severe drinking problem for decades. It was especially surprising to me, as a former nurse and a recovering alcoholic. You would think I’d know more about liver problems and alcohol use than the average person. But the truth is, in the months before his death, I had no idea my father’s liver was struggling at all. Most people know about cirrhosis, but few people know how a liver goes from early damage to end-stage liver cirrhosis.
The combination of my father’s death and my personal background lit a fire in me to know more. He was admitted to the hospital on June 24, 2016, and he died on July 18. Only 24 days passed between the first sign there was a problem and his subsequent death.
Now, hearing that he was in end-stage cirrhosis didn’t surprise me, given his heavy drinking. What did surprise me was that he’d visited several doctors and specialists in the months before his death, and no one knew his liver was struggling either.
So what happened? Does end-stage liver cirrhosis really sneak up that fast? Were there other signs that would have alerted someone to his failing liver?
As for why the doctors and specialists didn’t know what was happening, that mystery resolved reasonably quickly. The plain truth is that alcoholics rarely divulge the amount and frequency of their drinking to their doctors. This was the case for my dad. He had many health issues that he was trying to solve, but he protected his drinking habit fiercely. So he refused to spill the beans, even when it mattered.
The problem is that liver damage has numerous multifaceted symptoms that are confusing and associated with many other illnesses. Unless a doctor knows that the patient is an alcoholic, they may not know how to interpret what’s happening until it’s too late.
As he was dying, my father told me that he didn’t think to tell the doctors how much he was drinking. He said it was as if he blanked out and “forgot” to mention it. As crazy as that sounds, this strange “forgetting” is a common part of the alcoholic mindset. It may also be due to the metabolic and physical changes of cirrhosis itself.
There are many signs of liver problems, but oddly, none seem to point to the liver at first. And in fact, many of the first signs of liver damage occur in other parts of the body. Knowing these signs may help educate alcoholics and their families if they want to understand their risk of developing liver cirrhosis.
Liver damage has numerous multifaceted symptoms that are confusing and associated with many other illnesses. Unless a doctor knows that the patient is an alcoholic, they may not know how to interpret what’s happening until it’s too late.
Digestive signs
The liver plays a huge part in our digestive process. It filters out all toxins from food as well as helping to break down fats and glucose.
When a liver starts to slow down due to significant damage, it will reduce its digestive work. Instead, it will divert its energy toward vital functions like metabolizing medications and filtering toxins.
This means that symptoms like bloating, nausea, vomiting, gas, and diarrhea will start to increase. Over time, eating becomes more challenging. In the later stages of liver cirrhosis, toxins that can’t be filtered out begin to build in the bloodstream, which causes more nausea.
Cognitive signs
Although confusion and brain fog happen in end-stage liver cirrhosis, they can also be early signs.
The liver is responsible for filtering dangerous substances in the blood. It also helps regulate hormones, blood glucose, and vitamin absorption. In the early stages of liver damage, these processes can be interrupted. Inevitably, this affects our brain and nervous system.
This means that early liver problems can make you feel tired, confused, slow, and foggy. You may have some memory issues as well.
Neuromuscular signs
The liver stores vitamins required for the functioning of many organs and systems in the body — one of them is vitamin B1 or thiamine. A deficiency in this particular vitamin has been documented in many alcoholics with or without liver damage.
Unfortunately, alcohol inhibits the absorption of thiamine in the intestine. Over time, as the liver becomes damaged, it can no longer store thiamine in enough quantities. Thiamine deficiency is responsible for many neurological issues in people with alcoholism.
Symptoms of thiamine deficiency range from mild to severe and include things like: confusion, mental fog, lack of balance, pain and numbness in hands and feet, muscle weakness, rapid heart rate, digestive problems, flushing, and involuntary eye movements.
Thiamine deficiency happens in almost every alcoholic who consumes frequent and large amounts of alcohol. And if thiamine deficiency due to alcoholism is discovered, you can be sure the liver is suffering damage at the same time.
Many of the first signs of liver damage occur in other parts of the body.
Vascular signs
All alcohol consumption can lead to blood vessel dilation, causing flushing in the face and hands. Over time, this can cause damage leading to permanent redness in the face. Although many alcoholics have rosacea or spider-like veins on their faces, this is often benign.
However, spider angiomas are different from rosacea or spiderlike veins. They’re circular and have a central point that is darker than the rest of the lesion. Spider angiomas are a sign of liver disease and can be present in the early stages. They often progress to larger and more numerous lesions.
Spider angiomas are caused by increased estrogen levels in the blood. When the liver becomes damaged, it can’t properly metabolize estrogens, which causes them to build up in the body.
Many women who are pregnant or taking birth control pills may have a few spider angiomas. However, in alcoholic liver disease, these lesions are often more frequent and accompanied by red palms and varicose veins in the esophagus.
These are a few of the main signs of alcoholic liver damage that happen outside of the liver. It’s important to know this because most of us have no idea how the liver functions and how it communicates distress.
The liver itself doesn’t show signs like pain or swelling in the early stages of liver damage. This contrasts with other organs like the heart or stomach, where any damage will emit pain or symptoms directly from these organs.
What happens with liver damage is that its many diverse functions become interrupted, causing symptoms in other parts of the body. This may explain why most people never think they have a problem with their liver.
Unfortunately, patients with alcoholism are rarely educated about these issues. This is because they often don’t reveal their drinking, to begin with. And even if they do, the symptoms are widespread and complex, which makes patient education challenging.
My goal in writing articles like this is to help educate regular people about alcoholic liver disease to understand their health and make better decisions.
It’s hard to say if my father would have changed his drinking habits if he knew more about his vague and complicated symptoms. But I think having proper education would have certainly helped him understand his risks and health problems better. | https://medium.com/@malcomdaily390/the-first-signs-of-alcoholic-liver-damage-are-not-in-the-liver-1620d1e34c7 | [] | 2020-12-15 16:51:29.624000+00:00 | ['Addiction', 'Body', 'Alcohol', 'Mental Health', 'Health'] |
Resumes are bait

I am not in HR, a recruiter, or even a super linkedin.com user. That said, I was on the recruiting team at a Big 4 consulting firm, and we looked through hundreds of resumes every year; 90% of them went into the trash. We probably spent less than 15 seconds on a cover letter and 30 seconds on a resume. Basically, the resume review was quick and violent.
Resumes = bait
The way I see it, the entire purpose of a resume is to get invited for an interview. Period. Getting an interview means the fish took a bite of the bait.
When you get to your late 20s, early 30s, and beyond, you are not looking for jobs where a recruiter would hire you based on a piece of paper alone. The entire goal is to get in front of the hiring manager and create a connection. Resume = bait.
Good resumes are rare
You can ask anyone in HR, MBA admissions or a recruiter and they will tell you that resumes come in all shapes and sizes. They are formatted differently, have varying lengths, but it quickly comes down to good ones and bad ones.
What’s a good resume?
My career coach friends might gasp, but here is my over-simplification of a good resume:
Achievements quantified and organized in a story
1. Achievements
Clearly, your resume should have content. Can’t bake this resume cake without flour, eggs and water.
What kind of work do you do? Where have you worked?
How would you explain your accomplishments simply?
Can you demonstrate commitment, focus, other admirably “worth hiring” characteristics?
If you don’t have achievements in your work history that you are proud of, that is a different problem. Time to buckle up, and get some good work done. Be so good they cannot ignore you. As a friend of mine likes to say, “Work like you give a damn.”
2. Quantified
This is where most people fail. There should be numbers on your resume. If you improved throughput of a process, by how much? If you increased sales, what % off of what base in what amount of time? If you managed a team, how many headcount? Resume bullets without quantification are . . . weak. It shows a lack of accomplishment, or unwillingness to measure what you are trying to manage.
Employers want results. How will your contribution affect EBITDA? How effective have you been in your previous position? Are you thinking like an owner? Like a good acting teacher might say, “show, don’t tell.”
PAR or STAR
This is a super valuable thing I learned in MBA. Problem (P), Action (A), Result (R). Each bullet on your resume should describe what was the problem you are were faced with, what action you took, and what result you got. This simple algorithm forces you to be specific.
Too often people write vague things on their resume which, honestly, could apply to anyone. . .”I was responsible for ABC boring thing.” SO WHAT? No one cares. What did you do? What problems did you solve? Why were you awesome? Compare these two fictitious bullets about the same achievement:
GOOD: Integrated disparate customer information from 3 databases into a master file, which led to a 12% increase in customer contacts and an 8% increase in close rates for $450K in incremental margin
BAD: Responsible for customer data and information and marketing projects
“What gets measured, gets managed.” — Peter Drucker
3. Organized
This has a few meanings. First, there needs to be formatting and it needs to look clean. No spelling errors (compliment, complement), typos, incorrect usage (their, there, they’re), or parallel structure problems (verb, verb, verb, noun). Second, organize your achievements from most important to least. Remember that people remember the first and last things in a list. Things in the middle get lost. Third, the content needs to be structured in a way that is both logical and sensible. Is there a progression of responsibility? Are the sizes of the assignments, roles, achievements appropriate to this new role?
Does this pass the sniff test?
4. Story
Your resume needs to tell a story. How does the Venn diagram of your skills, experience, passions, and quirks combine into a compelling story of what you can do for this employer or organization?
What kind of work do you do, and how good are you at it?
Did you progress in your career, or did you continuously bop around from place to place because you did not fit in?
What does my resume say?
It’s common for large consulting firms to interview at a half dozen business schools on the same day. So you have dozens of senior managers and principals interviewing hundreds of candidates. At the end of the day, what do you want your recruiter (yes, the one who was sitting in front of you) to say about you? How will they advocate for you vs. the hundreds of other candidates at dozens of other schools?
What’s the elevator pitch you would LOVE for them to tell their colleagues? What’s the story?
Have more than one resume
If you are applying for consulting, marketing and strategic planning roles. . . don’t use the same resume. Treat the resume reader like a picky customer. They won’t buy what looks odd, or out of place. There are tons of resumes to choose from. Tailor your resume and your story for them. Work at it.
If a resume is bait, would you use the same bait to catch different kinds of fish?
Be relevant
At my MBA, they really beat this into our heads. Each bullet on the resume should be interesting enough for the recruiter to look at it and ask, “so tell me about that”. Each bullet is a teaser for the next question in the interview. Also, you better have a good story for each part of your resume. Don’t put anything on your resume that you can’t talk about.
Be able to answer the question, So what?
Hone your resume
This is tough work. When I offer to help someone with a resume, I often “rev” it with them a few times. You need to be willing to (re)write it until it is close to perfect. If you want to see a bunch of MBA resumes, just google the words “MBA resume book pdf” and you will find tons of examples.
Happy fishing.
Cover letters?
As far as I can tell (and from my experience looking through dozens of resumes), the cover letter is bait to get you to look at the resume. Just like the resume is bait to get you an interview. Just like an interview is bait to get you an internship.
What other good resume advice do you have?
Related posts:

Source: https://medium.com/@consultantsmind/resumes-are-bait-11e521582139 | 2021-07-06 | Tags: Jobs, Recruiting, Consulting, Careers, Resume
No Resolutions: Why you should do a 30 Day Challenge Instead

Most progress is noticed through measurement, and unless you are experienced, dedicated, and systematic about measuring, it’ll be hard to chart improvement over the course of a year. Once you get over how gimmicky 30 day challenges sound, they’re a super useful way to make fast progress. Here are 8 reasons to take on the mindset of a month-at-a-time challenge.
1. Almost no planning required
To get started you need an accomplishable goal: to become better at a skill, make progress on a project or explore an activity. Once you have a concept and an idea of how you’ll iterate for a week or two, you’re ready to go. We are learning the fun of risk. There aren’t actually stakes here. I know you promised your social followers you’d create 30 different llama logo designs in 30 days, but if you fail I think they’ll forgive you. As an extra sneaky mental trick, plan just enough to get you started and have some cocky swagger that you’ll figure the rest out. Planning halfway will keep you on the edge which will keep you in the creative zone of the unknown.
2. Let creativity swoop in and save you
If gaining ground on a skill in one month doesn’t sound fun enough, theme the weeks of your challenge to ease the ideation phase. Rather than plan 30 workouts, hope to learn 30 recipes or write 30 articles, make mini challenges: a group of workouts you can do completely from the ground, or 7 spicy recipes. I’ve only planned the first two weeks of my challenge because I know I’ll get better ideas partway through and want to scrap whatever I had come up with.
3. Decide where you will post every day
Telling people you plan on doing a 30-day challenge is lame. We’d all much rather hear about the progress. Wherever you announce it, commit to telling people about it at the same place every day. It’ll help you feel more accountable. You might even get great support!
4. You begin to construct your environment to support your endeavor.
The problem with trying to make an instant permanent change is that all the time until now contradicts a habit you haven’t built up yet. If you were keeping score, it would probably look like 20 years to 0. Thinking of a month-long habit helps frame it in a way that feels doable. We are used to constant small changes. Our weekends look different from our weekdays and we don’t lose our shit. If you plan to make a month of something, your brain starts to fill in how it will fit into your schedule and where you will keep what you need, and automatically you start to shift your environment accordingly, which is a make-or-break factor of change. If anything, a 30 day challenge is a useful mental trick for making bigger changes down the road.
5. It’s fun because in essence it’s an experiment
It’s easier to get out of your own way when you just decide to start. A resolution is a big promise which puts a lot of pressure on you and you instantly want to avoid the feelings you just came up with. A challenge allows you to toy with an idea and make a smaller commitment, and yet it’s enough time for you to see whether the change you made gave you good feelings or not. The changes you want to make can be self-reinforcing if you can only take the pressure off.
6. Helps keep expectations in check
You aren’t setting out to get ripped in 30 days (I hope), which would lead to some dashed expectations unless you’re nearly there. I know some people are all for dreaming big, but I get really motivated when I see progress that’s within my reach. It helps keep the focus narrow and commitment sustainable. It makes doing the work that much more exciting.
7. It’s a great opportunity to learn Time Blocking
Time blocking beats list making any day, according to many articles I’ve read here on Medium. Introducing a new type of task to do every day is a big time suck, so you might want to chunk similar activities so that you can accomplish a lot at once and be able to take the weekends off. You don’t want to burn out on your first challenge. If you were making 30 characters on YouTube like me, you would write as many as you can in one sitting, film several in one day, edit a few another day, and then you’re a few days ahead of pace to rest and get more creative energy going. You can still make lists. List what different 30 day challenges you want to do this year.
8. You’ll learn how to track and measure progress
These types of things tend to leave a trail. Whether it’s your social posts, photo progress, or the actual projects completed daily, you’ll be able to look back and see how far you’ve come. As a bonus you could journal the development of the month and further track your inner game. If you do, please include me on the recap or share your lessons-learned at the end. And of course congratulate yourself for finishing, whatever progress you made. I have a feeling success rates with this would be much higher than with resolutions.
Caitlin vlogs and acts. Her self-inflicted challenge is to perform 30 different characters in YouTube videos. Many are gender-bent: https://youtu.be/RbLOH2IbJxc
Good luck on your ventures!

By Caitlin Burt | Source: https://medium.com/@caitlinburt/no-resolutions-why-you-should-do-a-30-day-challenge-instead-fbf7a1aa6094 | 2019-01-02 | Tags: Habit Building, Resolutions, Personal Development, Challenge, Self Improvement
Togas, Vodka, And Secrets, Oh My!
A toga party shifted two relationships.
Photo courtesy of Twitter
His name was Ryan. On paper, he was my perfect man. Masculine, football player, blond hair and blue eyes. The only problem with him was that he identified as straight. His parents were religious, they wouldn’t understand! His arguments fell on deaf ears. As much as I wanted him, I had pride.
One of the rules I have for myself is that I will never be someone’s dirty little secret. I am out and proud of who I am. If someone doesn’t like it, that’s their issue, not mine. Still, given my propensity for liking men who are still in the closet, this rule has led to a lot of heartache for me.
Even as we debated the merits of our relationship, we were intimate with one another. I can’t count how many times one of his frat brothers walked in on me giving Ryan head. It became a running joke that his dick was going to fall off from as much head as he was getting from me.
“Come on, please come to the toga party,” Ryan begged me. He was modeling his costume for the evening and it left almost nothing to the imagination. I agreed to come and he threw me down on the bed. Our physical relationship went from one-sided to consummated in just a few minutes.
We didn’t know someone had been watching us.

By Edward Anderson | Source: https://medium.com/the-bad-influence/togas-vodka-and-secrets-oh-my-78f068849587 | 2020-01-14 | Tags: Frat, LGBTQ, Love, Outed, Dating
WKU Basketball: Hilltoppers Handle Tennessee Tech, 88–68

The Western Kentucky Hilltoppers men’s basketball team returned home to play one last game at Diddle Arena for the year 2020, a non-conference matchup against Tennessee Tech of the Ohio Valley Conference. An hour before tipoff, it was announced that the highly touted junior Charles Bassey was inactive for tonight’s game. That news had to become an afterthought quickly, as the Tops were competing for a win.
Tennessee Tech came into the game winless with an 0–8 record. As the game started, the Golden Eagles won the tip, but the Tops dominated the opening sequence of the first half, including a 13–4 run by the Tops under the 16-minute mark. Luke Frampton scored the first 6 points of the game with back-to-back three-pointers. The game went on with the run increasing to a 24–12 lead through the halfway mark. This also included one fast-break dunk from Josh Anderson that is SportsCenter worthy:
The Tops didn’t succumb to the pressure of replacing Bassey as they kept up the defensive momentum on the Golden Eagles. Taveion Hollingsworth quickly took over the rebounding duties, grabbing four boards in the first half alone to lead the team. One concern that still lingered for the Tops over the first half was committing 8 turnovers in a five-minute period. In the last five minutes of the half, the margin was cut to a 28–18 lead for WKU as they struggled to extend it. In the closing seconds, Kenny Cooper scored on a buzzer-beating three-pointer.
At halftime, the Hilltoppers lead the Golden Eagles 36–25.
Starting in the second half, the Tops continued to have the upper hand. Their biggest lead was 15 points, but it was quickly cut by the Golden Eagles. A Luke Frampton three was the catalyst that pulled the Tops back into dominating. WKU kept the foot on the gas pedal for the rest of the half, approaching a 24-point lead with two minutes left in the game. This extended into a final score of Western Kentucky beating Tennessee Tech 88–68. This was their seventh win of the season as they completed the nonconference portion of their schedule at 7–2.

By Alex Sherfield | Source: https://medium.com/the-red-towel/wku-basketball-hilltoppers-handle-tennessee-tech-88-68-dca6faf965fc | 2020-12-23 | Tags: Basketball, College Basketball, Wku Basketball, Wku Hilltoppers, NCAA Basketball
Stop Calling Me White Washed. Please stop calling me whitewashed for…

“You’re such a white girl”, he said.
This was coming from a date I was on with a black guy, who later confessed that he only dated white girls- but I was different, a different kind of black girl. He couldn’t wait to tell his friends that he was actually dating a black girl.
Huh? Well, this was a new one. I was stunned, not mad. Even curious. I went on a few more dates with him. I wanted to understand his perspective on this situation.
Shouldn’t I be outraged for all the black women out there?
At the end of the day, maybe I’m jaded, but my hate is too high of a price to spare. People are allowed to love whoever they want, regardless of their race.
I think the outrage comes not from dating another race, but from simultaneously trash-talking and hating your own. Men and women are guilty of this.
He was open and honest with me. His confusion, preferences, and identity issues weren’t my problem. Technically on a bigger scale, I’m sure they were, but I’d been down that road too many times before. Miss save-a-man-at-your-own-expense. Nope.
During our dinner conversation, I got down to the root of his hesitancy about dating black women and my whiteness. His summary was that,
He was bored of seeing black women. Only for me to find out that he was 25, leaving me whitewashed and a cougar. My look and mannerisms, associated with femininity, gave him white vibes.
After some time our dates dwindled for various reasons. I became anxious wondering if I was going to be added to his see-this-is-why-I-don’t-date-black-women hit list. Who knew?
What does being called whitewashed mean?
Is being called whitewashed a derogatory term or a backhanded compliment?
Perhaps first, we should look at what it means to be black. An Afrometrics research study questioned participants on their self-definition of being black. Six themes emerged.
1. Struggle and resilience: Twenty-five percent of participants identified being black with the struggle for equality, justice, fighting against racism, and other forms of oppression.
2. Ancestry: Twenty-three percent of participants identified being black as having and honouring their African ancestors.
3. Pride: Twenty-three percent of participants identified being black as having a sense of empowerment, rich culture, and dignity.
4. History and legacy: Fifteen percent of participants associated being black with a past story, roots, and continuation of the lineage.
5. African descent and community: Thirteen percent of participants associated being black with having a like-minded community, embracing cultural traditions and values.
Why do we call each other whitewashed?
We call each other white-washed when we assume that a person cannot adopt aspects of another culture while maintaining their own.
Does adopting aspects of another culture contribute to a loss of our identity?
We’ve felt rejected in so many areas of life that we can’t bear the thought of being rejected by our own people.
I too have been guilty of calling people whitewashed. Subconsciously I was scared, scared that I’d be contrastingly black around them, that I’d have to keep my defenses up, that we couldn’t relate. Wasn’t that the same mentality that caused hate crimes and slavery? Just saying.
We, as black people, need to start taking more chances on each other.
We are the ones who put expectations on our blackness. We judge each other’s blackness or lack thereof the most.
We’re not blind or delusional to the racism and limitations society has tried to place on us. In light of our protests, there have been increasing opportunities for advancement. Now is the time. Now, there are platforms to challenge the stereotypes of what it means to be black.
People say that slaves were taken from Africa. This is not true: People were taken from Africa, among them healers and priests, and were made into slaves.- Abdullah Ibrahim
Why we should say goodbye to calling each other whitewashed
Being called whitewashed is a barrier to healing, self-esteem, and acceptance. We should say goodbye to the term, as it undermines the multifaceted nature of who we are. We’re more than rap music, WAP, drama, and thugs. We’re tech nerds, punk rockers, outdoor adventurers, and classical music connoisseurs. Renaissance people.
Assumptions that being called whitewashed creates:
That it’s not possible for black women to enjoy or try something outside of their culture or environment.
That black woman can’t be associated with femininity, travel, adventure, or sophistication. It’s normal to be seen as ratchet, but you’re fake when you act otherwise.
That white women are rich, prim, proper, and have never experienced struggle.
That it’s not safe for black women to be vulnerable, ask for help, or seek protection because we’re used to the struggle. It opens us to abuse.
Closing thoughts
In calling each other whitewashed we put limitations on ourselves.
The story started with a date centered around expectations of what black should be like. It continued with curiosity about what it means to be whitewashed, or not black enough.
We are the ones who judge each other the most. We put expectations on our blackness, although in part, fueled on the backs of media and society.
Twenty-five percent of people (the largest group) identified being black with struggle and resilience. They also honour pride, history, ancestry, and legacy.
While it’s important to acknowledge and honour the struggle of our ancestors, it’s also important to acknowledge that black is multifaceted. We clutch on to struggle for dear life, feed it to our children, and sing its praise when we could create black identities through our individual stories.
Being called whitewashed creates barriers to esteem and acceptance.
Being called whitewashed says that it’s not okay for black women to be vulnerable, feminine, and protected.
Being called whitewashed says you can’t explore another culture without hating or abandoning your own.
Let’s change the narrative on what it means to be black.
Black is expansive. Black can’t be boxed.
Stop calling me whitewashed.
~Arlene~

By Arlene Ambrose | Source: https://medium.com/an-injustice/please-stop-calling-me-white-washed-e4a332ffb7b0 | 2020-12-08 | Tags: Whitewashing, Race, Identity, Self Respect, Self
How Does Night Owl Security Camera Work?

Night Owl Wireless Security System Reviews
The first thing you need to know about a Night Owl wireless security system is that it’s basically a camera system, not a true integrated security system. This is not to knock the value of a good camera system, but you need to know what you’re buying. Most of these Night Owl systems consist of a Wireless Smart Security Hub and several cameras.
In this Night Owl security camera review, I will show you how a Night Owl security camera works. Want the convenience of wireless without having to stress about changing or recharging batteries? Introducing Night Owl’s Wireless 1080p HD Home Security Camera System. This premier wireless solution is sure to satisfy customers craving fewer cables, easier setup and superior quality images.
Built-in Wi-Fi means you won’t need to rely on a third-party Internet connection. You can easily live view and access recordings from a TV or monitor via the HDMI port. An Internet connection is only required for remote viewing on a computer, smartphone or tablet.
To bring your security experience to the next level, each camera includes 2-way audio, so you can record sound and speak through the cameras. These cameras also feature dual sensor technology, which reduces false alerts to your smart device by up to 90%. With 24/7 support, pre-record and absolutely no monthly fees, Night Owl will be here for you every step of the way.
The features of these systems generally include:
1080pHD Video Security System
Cameras equipped with two-way audio
300 ft. wireless signal to ensure maximum coverage
Secure wireless network
Uninterrupted video transmission
Ability to view & play back footage directly from smart devices
Patent Pending Dual Sensor Technology reduces false alerts by up to 90%
24/7 Technical Phone Support
Advantages of Cameras
Cameras are an effective deterrent to crime. In fact, a poll of former criminals cited in the Guardian revealed that:
Burglars are most likely to be put off breaking into homes by CCTV cameras and barking dogs, according to a panel of former criminals.
The idea is simple: a criminal sees the not-so-discreetly placed camera and has to decide whether to try to figure out how to break into this place without being caught on this camera or whatever other cameras are on the property (there’s nearly always more than one), or just move on to the next softest target. An opportunistic criminal not targeting a specific house or item will just move on.
Cameras are also great forensically, as they can often give enough information about a perpetrator to track them down, although that process may take days or even months, and there’s no guarantee that finding the perpetrator will mean recovering any stolen goods.
Other Security Options
Cameras have limitations, however. Most home security cameras aren’t monitored in real time, so they don’t offer any warning that a crime is happening. This is where a more complete alarm system comes in: something like a security system from Protect America. Security systems generally have deterrent properties too, just like cameras.
A study from the University of North Carolina at Charlotte surveyed over 400 incarcerated criminals; 60 percent of them said they look for alarms, and the presence of an alarm would cause them to move on to a softer target. A security system from Protect America can include HD wireless video cameras, but can also include door and window contacts, motion detectors and even home automation devices that will further deter crime. Best of all, Protect America can provide these devices with free installation.
Features & Technology
Dual Sensors
Two sensors are better than one because they cut the rate of false alarms by up to 90 percent. You won’t get an alert unless there’s a notable change in either heat or motion.
DVR and NVR
You have a choice between a digital video recorder and a network video recorder. The DVR is usually cheaper, while the NVR provides better image quality.
Two-Way Audio
Night Owl cameras can capture audio, but you can also use them as an intercom of sorts. If your spouse is at home and not answering your texts, you can log onto the app and use two-way audio to ask them a question that’s much harder to ignore.
You may also be interested in: best security camera with two-way audio
Compatible with Google Assistant
Google Assistant is meant to make your life easier. That’s doubly true now that it integrates with your Night Owl home security system. If you want to check what’s happening with your cameras, you can ask Google Assistant to open your app for you.
No Monthly Storage Fees
Other companies charge you a monthly fee for things like cloud storage. With Night Owl, storing your recordings on either a hard drive or a MicroSD card means you avoid such fees. It also won’t cost a thing to view your system through the Night Owl app.
Night Vision
Your camera shouldn’t shut down when you go to sleep. Night Owl’s indoor cameras have up to twenty feet of night vision, while the outdoor cameras’ night vision range extends up to 100 feet.
Records Without Internet
These cameras transmit and record video without requiring an Internet connection.
Ease of Use
You don’t need hours of training in order to set up your Night Owl equipment. In fact, the company promises many of their systems can be set up in two minutes or less. If you have any questions, you can watch a set-up tutorial video that will explain everything you need to know.
Customer Service
Night Owl offers technical support via phone in not one, not two, but three languages: English, Spanish, and French. You can call 1-866-390-1303 and get assistance 24/7, even on holidays. The company’s online support center also includes a plethora of user manuals, guides, and even videos that will show you how to set up equipment. You may also want to send an email to [email protected] or use the website’s live chat feature.
Were you shipped a defective product within the warranty period? In that case, Night Owl promises an easy return process. They even call it the “EZ Return” process because there are no shipping fees or “overly complex requests.”
Value
Night Owl offers plenty of customization options. If you want one or two cameras, you can easily spend a few hundred dollars or less. But if you want to set up a virtual security fortress around your house or business, you can drop $1,000 or more on it as well. You can choose between security systems that cost more but give you better value, or just buy your equipment piece by piece until you’re satisfied.
If the new stuff is a little out of your price range, check out the factory reconditioned equipment. These pieces are cheaper but still reliable.
Is Night Owl a good security camera?
Night Owl has been great about trying to help me work on this. Aside from that, this camera system is beyond amazing, and the clarity of the images and quality of the video you get is astonishing.
Does Night Owl have a monthly fee?
With Night Owl, there are no monthly fees for using the Night Owl services. … No, there’s no monthly fee because the only person monitoring the cameras is you. You install an app once the DVR and cameras are wired in, which allows you to monitor them yourself as long as the DVR is attached to your modem by cable.
How long do Night Owl cameras last?
A system with a 1 TB hard drive, 4 channels, and 4 cameras, recording nonstop at D1 resolution, will last about 45 days.
Which is better, Lorex or Night Owl?
If you want to create a customized home security system that’s designed specifically for your home, Lorex is the top choice. Night Owl is a good system if you want a wired system. Its cameras offer a variety of features and can integrate with smart home systems. Here are the full details on Lorex vs. Night Owl in 2020.

By Smart Locks | Source: https://medium.com/@getlockers1/how-does-night-owl-security-camera-work-c8da156220f8 | 2020-11-26 | Tags: Night Owl, Security, Cctv Security Cameras, Security Camera, Security Camera System
Reusable Sanitary Napkins: Let’s Address the Stigma

In my previous post, I discussed my project proposal of further initiating the awareness and use of reusable sanitary napkins. I believe feminine hygiene is a topic too rarely talked about yet is so significant since it concerns every woman that exists in the world. Other people share the same thoughts as well, so this is why my group and I have decided to choose this topic for our project analysis.
The main question we will be addressing is, “How can we make sanitary napkins more sustainable and affordable for women worldwide?” With this, we could easily touch on the topic of how the initiative first originated and its original motives. This also is a great question because it brings attention to worldwide feminine hygiene. The topic that is often not spoken about is looked at with a new perspective when stating that women worldwide are affected.
Of course, when addressing this topic, some stigma will arise against it, along with questions of whether the napkins are clean and safe to use. Like a Diva Cup, reusable sanitary napkins are 100 percent clean and safe. The napkins are made entirely of cloth and are in the shape of a disposable sanitary napkin. The reusable napkin can be buttoned around the underwear, much like how wings on disposable napkins stick to the bottom of underwear. When not in use, the napkins can be machine- or hand-washed, dried and used again. We would be sure to include all of the following information in our project presentation.
After the information about how the napkins developed, what and who they are for and how they operate, we would then address the statistics of women around the world who do not have access to disposable sanitary napkins. The statistics are more surprising than expected. Additionally, we would also address the amount of waste that disposable napkins cause since they’re made of plastics. This would also be a different approach, yet still, keep the same premise of a global issue since the majority of women in America have access to disposable napkins. Pollution has already been addressed to be a huge issue in our world today, so discussing how disposable napkins pose as a threat to the already piling plastic pollutants will naturally spark interest and/or passion for action.
With a project that is focused mainly on a product, it would be logical to have visuals and the actual product when presenting. So, when my group and I present, we plan to have a real reusable sanitary napkin kit on hand so people can see, feel and understand the product and better understand the presentation and topic. Words and pictures can only influence a person’s perspective a certain amount, but having hands-on material will bring a broader approach to the topic.
We are looking forward to further educating people on this initiative. We believe this global issue deserves more attention because it indirectly affects all of us, men and women, worldwide. Hopefully, the initiative will continue to grow.

Source: https://medium.com/year-one-ksu/reusable-sanitary-napkins-lets-address-the-stigma-23f057773839 | 2019-11-13 | Tags: Women, Learning, Belonging, Collaboration, Year One
QexiQex’ Stories. Welcome to my story archive. Be aware…

Welcome to my story rack. Be aware that my tales are of the ‘naughty’ nature, so if you are below legal age or easily offended, turn away now.
See the full list of stories with links further down.
My work can be generally categorized as breast fetish, breast bondage and breast peril but they sometimes cover other niches as well.
The main purpose of this page is to host all my work that I have published on various other sites.
I’m always looking for (constructive) feedback and appreciate any comments, so feel free to reach out through Medium, deviantArt or other means. Many of my stories are also posted on Literotica and other sites.
I also run a discord server, so if you want to get notified about brand new stories, get access to work in progress, provide feedback, exchange ideas, contribute to chain stories, or just chat with me and other like-minded people, you might consider joining the server. Drop me a note if you are interested in an invite.
The stories:
A Fuck-Bike for Wendy Wendy loves racing her bike. A prank by her boyfriend leads to a rather intense experience where she fucks herself silly during a race. Main themes: big breasts, large insertions, reluctant masturbation, light breast bondage, exhibitionism
Gamer Girl This is a rather emotional story about the struggle of a young nerdy woman and the power of friendship. I actually had tears in my eyes when reading it again, no kidding! Main themes: big breasts, group sex, consensual, breast glory holes, breast fetish
Shroom’s This story is about a big breasted girl that manages to accidentally get her breasts trapped in a fancy door. And instead of rescuing her, others are playing with her swollen body parts assuming they are just a lifeless but well made prop. There is no sex, just fun with breasts. Main themes of this story are: big breasts, breast fetish, exhibitionism.
To Save Mankind Humanity is under attack! And only a group of young, big-busted, lactating women can save mankind! Main themes of this story are: big breasts, exhibitionism, toys, object insertion, loads of breast-related mayhem and … aliens. The whole premise of this tale is rather weird but I hope some will enjoy the utter madness of it all ;)
Ann’s Art Project Ann is obsessed with her breasts. She fantasizes about them all the time, sees them popping up in the weirdest places. In the hope of getting her fantasies under control, she decides to do an art project where she would publicly present her very own breasts masked as allegedly perfect foam replicas. This is her story. Contains lots of breast play and general fooling around with breasts. No sex.
Betty’s Blog of Bound Boobies Betty is obsessed with her breasts and creates a blog to share the fun. When she gets a proposal to exhibit her breasts at a museum things start to get interesting. Main themes: breast bondage, breast play, breast fetish
Portal Bra Helen wants to get fit but her overly large breasts always get in the way. Things change when she discovers a brand new, high tech bra that makes them go away. Main themes of this story are: big breasts, breast fetish, breast bondage, exhibitionism.
Jumping Melons Bus Co. This is a story about a bus full of big bound breasts conveniently dangling in front of each passenger for their entertainment during the trip. Main themes: heavy breast bondage, breast play, lactation and all kinds of breast-related mischief. There is no sex.
Lost in Akiba Brea’s Japan adventure with strong focus on her huge, lactating breasts and the weird experiences she goes through. Main themes of this story are: big breasts, exhibitionism, lactation, object insertion, breast bondage, breast play, breast fetish and all kinds of breast-related mayhem. This story contains non-consensual aspects.
A New Bra For Hitomi Hitomi was sick of all the hassle caused by her overly large breasts. But then, one day, she stumbled over a promising ad suggesting a high tech fix for her problems. Unfortunately, the fix came with some significant side effects. And the porn shoot she had scheduled didn’t become any easier. Main themes: body mod, breast play, heavy breast bondage, insertion, fisting
Tricked In this tale, the narrator reports on a rather explicit, brutal and non-consensual experience that she was subjected to, containing humiliation, heavy breast bondage, lactation, fisting, vaginal object insertion and rape. All characters are purely fictional and of legal age. None of this actually happened and the whole plot is a product of my fantasy. This story contains non-consensual aspects.
Pavlovian Reaction Ever since that science experiment, Stacey’s body reacts in rather inappropriate ways to a specific song. One day, when Stacey heard it play at a shopping mall, a lucky clerk has the time of his life. Main themes of this story are: sensory deprivation, mind control, reluctant masturbation, fisting. This story contains non-consensual aspects.
Amusement Park Playtime This is a short story I wrote for a friend some time ago. Main themes of this story are: exhibition, breast play, masturbation and some breast fucking.
Sunshine Farm Encore This is a tribute to bastard13’s story ‘Sunshine Farm’ which I enjoyed immensely. I really liked the general concept and the setting of that story, so I just HAD to write one of my own. Please read bastard13’s story first so you get the backstory. Main themes: breast bondage, breast peril
A Special Event Fiona loves the feeling of being used by unsuspecting people. The idea to have her best friends fuck her without even knowing what was going on was a kink that haunted her for ages. With her upcoming exhibition, she could finally act out her fantasy. And she has the time of her life. Main themes: bondage, forniphilia, insertion, lactation, forced climax
Business Trip Brian has the time of his life when a busty barmaid asked him to demonstrate his breast-loving skills on her rack. Though for safety reasons she keeps the rest of her body cleverly out of reach. Main themes: big breasts, lactation, breast play, trapped breasts
The Painting A short story about an artist, his muse and their work together. Main themes: bondage, f/m, sex, fisting, consensual
Stories with links to Gulavisual’s universe
Amber Cocoon (graphic novel) Fantasy setting, the hero Yeina (the illustrator Gulavisual’s big-breasted, bad-ass warrior character) is going through an awful lot of boob-trouble during her treasure hunt and has to put up with a pesky tribe of small creatures that only want her milk. Main themes: lactation, bondage, shrinking, magic, breast rings, breast fetish
Cosplaying Yeina This story is my humble tribute to Gulavisual’s EDEN series. I have very much enjoyed his comics and drawings, and want to show my appreciation with this little tale. If you haven’t read EDEN, I wholeheartedly recommend that you hit lulu.com, search for Gulavisual and purchase all of them :) While not strictly required, the story makes so much more sense when you know the comic “EDEN Volume 1”. Main themes: big breasts, Yeina, Gulavisual, breast peril, breast fetish, breast bondage. You can enjoy more of Gula’s art at patreon.com/gulavisual
Like Yeina A sequel to Cosplaying Yeina and another tribute to Gulavisual’s work. This story is about Jana’s next project in which she is helping to reproduce various artwork in real-life. Main themes: big breasts, Yeina, Gulavisual, breast peril, breast fetish, breast bondage. This story is quite long and contains the drawings that are referenced in the text. You can enjoy more of Gula’s art at patreon.com/gulavisual
Yeina And the Curse of Udaar Yeina is on a quest to free the poor women of a village that got abducted by an evil, breast-obsessed, tree-like demon. Contains plenty of breast-related silliness. Based on characters created by Gulavisual. You can enjoy more of Yeina and Astrid at patreon.com/gulavisual
My WRIST.XXX Writing Contest Submissions
Dye Of Pleasure This story has won a writing contest (‘KAW 18’) about the theme “COLORS” hosted by wrist.xxx. Main themes of this story are: anonymous sex, orgasm denial. This is not a breast-fetish story. This story contains non-consensual aspects.
Emancipation This was written for a short story contest (‘CAW SS’) about the theme “MISSING” hosted by wrist.xxx and is my first femdom work ever. This is not a breast-fetish story and the “missing” theme is falling a bit short. Main theme of this story is: femdom
Peppered Love This story was written for a short story contest (‘CAW SS’) about the theme “HOT” hosted by wrist.xxx and is more vanilla than my usual work. This is not a breast-fetish story. Main theme of this story is: loving, spicy sex.
Incomplete Stories
Bullied (incomplete, work in progress) Big-breasted Amanda got transferred to a new college and is mercilessly bullied by everyone. This story is heavily inspired by the work of “hurtmybreasts” on literotica (“Bad First Day” and “Bad First Day the Next Day”). Main themes are: breast fetish, breast bondage and other breast-related mayhem. This story contains non-consensual aspects.
Only Breasts (incomplete, work in progress) Valerie found new work where her assets get exposed and played with. Main themes: big breasts, breast fetish, breast glory holes. There is no sex.
Virtualized (incomplete, work in progress) Lucy always wanted to know more about Virtual Reality tech, so when she got an offer as “reference model” for a VR project she couldn’t resist. The thought that her body would be used for all kinds of VR porn weirded her out, but nevertheless she was proud to be part of this pioneering effort and understood that gathering reliable data was essential for success. Main themes: big breasts, breast fetish, insertion, bald, virtual reality, technology
Chain Stories from my Discord
Stunt Girl Saddie This chain story was written on my Discord with contributions by GuySmut, Spiral and myself. The scenario is about breast stunt girl Saddie and her strange adventures. Main themes are: breast fetish, breast bondage and other breast-related mayhem.
Neglected2Much’s Stories:
Unfortunately I haven’t been able to reach neglected2much for a few years and fear the worst. As he was the only one with the necessary access rights to keep the old site running, I have posted his stories here to save his incredible work.
The Curator
Red Rocket
Sexy Stewardess (Version 3)
The Family Estate
Tasha’s Infiltration
Do You Believe in Rock and Roll?
Fafnir’s Quest
The Thrill of Victory
The Fortunes of Wang Fang Lu | https://medium.com/@qexiqex/qexiqex-story-archive-a3664e9fcb7c | [] | 2021-03-02 17:50:42.155000+00:00 | ['Breast Fetish', 'Erotica', 'Nsfw', 'Breast Bondage'] |
A fire in California forces the evacuation of more than 7,000 residents |
| https://medium.com/opresse/un-incendie-en-californie-sur-une-base-de-la-marine-am%C3%A9ricaine-oblige-%C3%A0-%C3%A9vacuer-plus-de-7-000-4a9796da1f5c | [] | 2020-12-27 14:18:32.345000+00:00 | ['Opresse', 'Climat', 'USA', 'California', 'Incendie']
Accessibility 2020 wrap-up | Accessibility 2020 wrap-up
Photo by Avel Chuklanov on Unsplash
Stuff I am proud of:
My year-end message is as follows:
Accessibility Veterans
Yes, it’s been a tough year, but to a certain extent in our field, they are all tough years in one way or another. The only way to get off this American litigation hamster-wheel, vicious feedback loop we are currently stuck in is to continue to do one thing — raise awareness.
Don’t just preach accessibility to the choir, Preach it to the heathens.
That requires getting outside of your comfort zone. Speak at meet-ups. Mentor co-workers. Work with HR and D&I to ensure that your organization is a desired destination for people with disabilities. Encourage people to self-identify as having a disability. Participate in ERGs. Don’t have a disability ERG? Start one! Hone your accessibility elevator pitch and build your accessibility brand, both inside and outside your current organization. Volunteer with disability-related organizations. Never stop learning.
To people just entering the accessibility field
Welcome! You’ve come at exactly the right time to this foundational career. As the lawsuit pressure increases, more organizations will finally get the message that accessibility is important. However, the message these organizations receive will be for the wrong reasons. Your job is to convince them that accessibility is important because people with disabilities are important, not just because the law requires it.
Get out there and break things. You don’t have to change the entire world, just your corner of it; if enough people do that, the entire world WILL change. Keep reminding your organization that accessibility is a program, not a project. Every time a new piece of technology is released, ask two questions:
1. Can everyone use it equally? If the answer is no, reach out to the technology owners and make sure they know that their product is discriminatory and the result of unconscious bias and ableism.
2. Can accessibility be improved by taking advantage of this new technology?
To people with disabilities
Work from home is a viable option and not an undue burden. Who knew? Hint: We did. Yes, it sucks that people with disabilities are forced to file lawsuits to enforce rights that should be automatic. That is, unfortunately, the way the world currently works. It’s an ugly and stressful path that can take decades, but it is a path that frequently leads to success. Remember, you aren’t advocating for yourself — you are advocating so that people with disabilities in the future don’t have to go through what you have been through. Get out there and tell your stories to anyone who will listen. No one can tell your story better than you can.
To Designers
It all starts with you. Without accessible (or inclusive or universal, depending on your preferred term) design, the chances of an end product, and its documentation, training, marketing, and support being accessible are not high. Fight for accessibility being in the MVP.
To Product Owners
It doesn’t matter how cool your next new feature is if your product isn’t accessible. Eventually, you will run into a sale that you really really want that you will lose because your product doesn’t follow the WCAG standards. Furthermore, you will discover that this is not something that can be turned around in a day or even a month and probably not even six months, so you will continue to lose sales while you are retrofitting your product. Proactive accessibility is always better, faster, and cheaper than reactive accessibility. Draw a line in the sand and state “no inaccessible products/features are launched after this date.” And then stick with it.
To the world in general
People with disabilities matter, and we shouldn’t have to prove it with a business case. Stop wrapping yourself in an ableist bubble of denial. For the workplace to be truly equal, 1 out of every 30 co-workers should have a visible disability, and 1 out of every 15 should be willing to talk about their invisible disability. Until your organization is there, your work remains unfinished. Don’t wait to get sued before you find out that universal truth. | https://medium.com/@sheribyrnehaber/accessibility-2020-wrap-up-a43f3f3b72f3 | ['Sheri Byrne-Haber'] | 2020-12-31 19:55:19.026000+00:00 | ['Diversity', 'Accessibility', 'Business', 'Disability', 'Civil Rights'] |
Who Should You Prove Yourself To? | It’s all-too-easy to get caught up in living up to the expectations of others, real and imagined. You want to be a good friend, lover, parent, sibling, child, human being, etc. Then, looking at the actions of others and seeing both good and bad, you hold yourself up to them, and question, “What’s my value in this world?”
I know where you are coming from. I have spent a lot of my life trying to live up to the expectations, both real and imagined, that I have encountered. This is an ongoing process, with both good days and bad days. I get concerned about the impression I make, how what I do impacts others, and whether I am worthy or deserving of living the best that I can.
The truth is that we are all worthy and deserving of living as best we can. Nobody is here just to merely survive — you and I are meant to thrive. Living in a fear-based society chock-full of mixed messages, this can feel like an impossible task.
Who should you prove yourself to?
How do you see your own worth? How do you feel deserving of being content, even happy in this life? Let me ask you this — who do you have to prove yourself to?
There are three very distinct judges our society looks to for determining worth and value in humanity. They can conflict with one another, causing strife and dissent, and they have even led to numerous wars.
These three judges, in no particular order, are:
You yourself
Other people, from friends and lovers to total strangers of varying degrees of importance
God, the Universe, or whatever conscious higher power a person may believe in
This can be really daunting, because the judgments of others and a higher power you may or may not believe in are completely, totally, and utterly out of your control. Because they are outside of you, there is no way to know just how they think or feel about you, if at all.
Since you can’t know what it’s going to take to prove yourself to these outside forces, why allow their judgment of you to dictate your feeling of worth and overall value? Look how unhappy people are because they are struggling every day to live up to the unknown metrics established by other people or unseen entities for being worthy and deserving of anything at all.
Outside validation is fleeting, frequently conflicted, and is different from each and every outside person or being. What’s more, while you might prove yourself worthy to one of them, that proof may not apply to another.
This is why the person that you need to prove yourself to is you yourself. | https://mjblehart.medium.com/who-should-you-prove-yourself-to-eee24224e5e5 | ['Mj Blehart'] | 2019-05-27 14:16:03.685000+00:00 | ['Self Improvement', 'Personal Development', 'Life', 'Mindfulness', 'Depression'] |
Baby Sleeps by Elisabeth Dorto Review | The happiest mom. Ever.
I was struggling with my baby’s sleep from the moment she was born. I tried everything to make her fall asleep, but it usually ended in a crying session that made me feel like beating my head until I was knocked out. It wasn’t just one night or two; it was every single night, and I couldn’t take it anymore. So when Elisabeth Dorto came into my life, little by little she gave me so much advice that it changed our whole family’s life, not just mine! She shared methods we were never aware of, and now our daughter no longer struggles to sleep at night, which allows us to have more quality moments together as a family.
Help your baby sleep peacefully with simple, effective tips that tackle the root of the sleep problem.
Your baby is crying, and you don’t know why.
As a parent, your top priority should be to make sure that your child sleeps well. But according to the American Academy of Pediatrics, 30% of babies have sleep problems at 6 months old, and by age 3 more than 50% of children still struggle with their sleep schedule!
======> Official Website
The Baby Sleeps book will help you solve this problem because it’s written by a pediatrician who has helped thousands of parents just like you get better sleep for their kids at night. The author explains how babies’ brains are wired differently from adults’, which means they need different strategies for falling asleep and staying asleep throughout the night. You’ll learn what works and what doesn’t work when it comes to getting your little one ready for bedtime — so that everyone in your family can finally get some much-needed rest!
====> Learn More Here | https://medium.com/@askarikhan15/baby-sleeps-by-elisabeth-dorto-review-4de3f9828ce5 | ['Good Health'] | 2021-09-09 18:24:13.501000+00:00 | ['Baby Sleep Consultant', 'Baby Sleep Routine', 'Baby Care', 'Baby Sleep'] |
Design is Design: The Parallels of UX and Fashion Design | “How many years of experience do you have as a UX designer?” he asked.
“I’ve been a designer for over ten years and—” he cut me off before I could finish.
“How many years of experience do you have as a designer in UX specifically?”
“I’ve been practicing User Centered Design and Design Thinking principles -essentially UX Design for over ten years, it just wasn’t called that.”
Now he’s perplexed.
“But it says on your resume you’ve only had one job working on mobile apps…” and so went the last 30 painstaking semi-frustrating calls I’ve had with recruiters in the past two months of my job search.
I’ve had to explain how design is design, and that even though I’ve only recently started identifying myself as a “UX Designer”, I am in fact a design veteran familiar with the design process digital or otherwise. Not only do I know it in theory but I’ve implemented these design principles under real world high pressure conditions in corporate America for years and years.
After minutes of explaining they still don’t understand or agree. So I wonder, do people not know what UX Design is? Is design not design no matter what product you apply it to? Is the definition of UX taught to me through my Flatiron/Designation bootcamp in fact not what I thought it was? So I decided to take a step back and re-examine the exact definition of UX.
According to Don Norman, inventor of the term “User Experience”:
“No product is an island. A product is more than the product. It is a cohesive, integrated set of experiences. Think through all of the stages of a product or service — from initial intentions through final reflections, from first usage to help, service, and maintenance. Make them all work together seamlessly.” — Don Norman, Nielson Norman Group
According to Kim Goodwin author of “Designing for the Digital Age”:
“Design is the craft of visualizing concrete solutions that serve human needs and goals within certain constraints. These solutions could be tangible products, such as buildings, software, consumer electronics, or advertisements, or they could be services that are intended to provide a specific sort of experience.”
By these definitions, design is indeed design and the approach or the thought process involved for solving these problems is the same. One can argue that UX is the design of digital interfaces themselves, making them “pretty”. I would disagree and say by that definition alone, UX is nothing more than a glorified graphic designer (no offense graphic designers out there). However, UX Design goes beyond just the visuals of websites, apps, games, software, VR, AR, MR and obviously voice etc., but it is the “experience” itself. As a clothing designer, that is exactly what I have been doing this entire time. Now, I am fully aware that there is a craft unique to UX as well as important distinguishing factors that differ the two disciplines. But the similarities outweigh the differences. In this article I’m emphasizing the former; the foundation of User Centered Design and application of Design Thinking remains the basis of both.
Here’s a crash course in the design process of clothing and UX comparing my previous career as a fashion designer to my current title of “UX Designer”. For the sake of time, I will go into this article with the assumption that we are all familiar with the UX Design process already and expand on the fashion side. I also preface this with a disclaimer that different companies have different procedures and may do things in a different order or skip parts all together. This is the general overview, a very over simplified version of the experience I’ve gathered from all the places I’ve worked at in my career as a mass market womenswear designer. Below is a visual for all the visual learners out there.
In UX Design we always start with research. For fashion we do the same. Because we design about a year in advance, in order to predict future trends we do research by looking at trend reports from trend forecasting companies that provide color, fabric, print and silhouette direction.
Like UX where we do a competitive analysis, in fashion we go competitive shopping to see what competitor brands are doing and selling. We also buy samples to get fabric inspiration or quite bluntly, to knock off the style, print, or pattern. We take pictures, we assemble reports and present back to our teams. Some designers travel abroad for this, some travel to different cities and some just do it locally depending on company policy and budget.
In UX we synthesize the data, in fashion we synthesize our trend and shopping reports. We pull together an inspirational mood board to set themes or stories we want to tell. We analyze previous sales reports to tell us what styles were best sellers so we know which, if any, to carry over and to put in a new print. We also look at what our best selling colors were and take all that into consideration when we put together our color palettes and fabrics for the season. In UX this moodboard could also be called a style tile, containing all the styling elements we would use for our interface.
As with all design, we will always have to work within constraints. In fashion, it’s usually related to budget: fabric costs, fabric minimums, factory production minimums, trim costs like buttons, buckles or other closures that you may use. How complicated is the garment? How much labor will go into making it? If I add embroidery somewhere, it will add to the cost and I will need to make sure the base fabric is cheap enough. All these constraints affect how we design. For example if I choose to use buttons that are more expensive, I may have to compromise on fabric yardage and add a seam somewhere all the while making sure the garment still makes sense. We have seasonal constraints. What time of year are we designing for? Fabrics will be limited to the season we’re in; if I’m designing spring/summer I most likely will not be using heavy wool. We also have styling constraints. Does it fit who we’re designing for? Our target customer -in UX -Persona. Does it fit into our trends, our themes and the story we’re telling with our color palette? Last but not least, we have time constraints. What is the in-store delivery date? What are fabric lead times and availability? How soon can they dye it or have it printed? Are our prints even ready to send to factories to start printing? Usually the answer is no and everything was supposed to have been done last week.
Given these constraints we still have to pull together weekly presentations to the buyers, or in UX the stakeholders, to update them on current status. In UX we would call these sprint meetings.
UX paper prototypes
Now for concepting and ideation. These ideas would first be quickly illustrated by hand (in UX, via paper prototypes and rapid prototyping). Then we would move to computer sketches, called wireframing in the UX world. We do technical flat sketches using Adobe Illustrator (or, for UX, Sketch or Adobe XD) that show the entire collection: all the colorways a style comes in, what fabric, in which prints, what it sits next to on the floor, and/or how it’s merchandised. We have our low-fidelity flat sketches, which are black and white, and then high-fidelity ones if there is enough time to render the actual fabrics, colors, and textures.
Finally it’s time to get the first sample or proto made. In UX, this would be the prototype. In UX, usability testing is done on a user. In fashion, the proto is also tested on a user through an actual fitting. Through asking the user questions: How does it feel? Where is it uncomfortable (pain point)? Do you like how it looks, is the fabric scratchy? We attain comments or user feedback and iterate to improve, much like in UX. We test or fit again until it is refined enough for production, or in UX terms, development.
We have to make sure that all the garment details are annotated, what the measurements are and how it’s constructed. This is diagrammed in what we call a tech pack which we pass on to the manufacturer for the garment to be made and mass produced. In UX world it would be the equivalent of annotations that would then be passed off to a developer.
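To recap the comparison above in a compact form, here is a simple sketch. The pairings are my own summary of this article's walk-through, not an industry-standard taxonomy:

```python
# Hypothetical pairing of fashion-design stages with their UX counterparts,
# following the process comparison described in this article.
stage_map = {
    "trend reports / competitive shopping": "user research / competitive analysis",
    "mood board": "style tile",
    "hand illustrations": "paper prototypes / rapid prototyping",
    "technical flat sketches": "wireframes",
    "proto + fitting": "prototype + usability testing",
    "tech pack": "developer annotations / handoff",
}

for fashion_stage, ux_stage in stage_map.items():
    print(f"{fashion_stage:38} -> {ux_stage}")
```

Reading the two columns side by side makes the thesis concrete: every fashion deliverable has a one-to-one UX analogue.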
So there we have it. The long, perhaps not long enough answer to the short question of “How many years of UX Design experience do you have?” to which I will answer “Ten, I have over ten years of UX Design experience.” | https://medium.com/swlh/design-is-design-the-parallels-of-ux-and-fashion-design-fefd2153b34c | ['Lisa Lin'] | 2020-04-16 01:27:37.457000+00:00 | ['UX Design', 'Design Process', 'Fashion Designer', 'Career Change'] |
ViteX Campaign (80,000 VITE) for Ringing in 2021! | In celebration of the new year, Vite Labs will be giving away a total of 80,000 VITE as a present to the entire Vite community!
Rules
For two weeks, users who have mined VX through trading-as-mining, market-making-as-mining, referral-as-mining, or staking-as-mining on ViteX are eligible to receive our giveaway.
To be eligible, you must also follow our social media accounts (We will send a Google form to verify this at the end of the campaign):
https://twitter.com/vitelabs
https://twitter.com/vitexexchange
Like and retweet the original tweet announcement for this campaign
Campaign period
From December 28, 12:00 to January 10, 12:00 (UTC+8)
Trade as Mining
Total giveaway: 16,000 VITE (equally shared by all eligible addresses)
Eligibility: Traded at least once during the campaign period
Market-Making as Mining
Total giveaway: 16,000 VITE (equally shared by all eligible addresses)
Eligibility: Mined at least 100 VX through market-making during the campaign period
Referral as Mining
Total giveaway: 8,000 VITE (equally shared by all eligible addresses)
Eligibility: Invited at least one new user with referral code during the campaign period
The new user must complete a trade before the end of the campaign period
Staking as Mining
Total giveaway: 40,000 VITE (equally shared by all eligible addresses)
Eligibility: Mined VX through staking (received mining rewards) before the end of campaign period
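As an unofficial illustration of the "equally shared" rule (the participant counts below are hypothetical, and this is not ViteX's actual payout code), each pool's per-address reward is simply the pool total divided by the number of eligible addresses:

```python
def reward_per_address(pool_total_vite: float, eligible_addresses: int) -> float:
    """Split a pool's VITE giveaway equally among all eligible addresses."""
    if eligible_addresses == 0:
        return 0.0  # nobody qualified for this pool; nothing is paid out
    return pool_total_vite / eligible_addresses

# Hypothetical example: 400 addresses qualify for the 16,000 VITE trading pool.
print(reward_per_address(16000, 400))  # → 40.0 VITE each
```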
Notes
No limitation on trading pairs. You can perform trading or market-making on any qualified trading pair (for mining) on ViteX.
Your reward will be distributed within 7 working days after the campaign ends.
Vite Labs reserves all rights of interpretation.
Thank you, and happy new year! | https://medium.com/vitelabs/vitex-campaign-for-ringing-in-2021-be0b048fcdbf | ['Vite Editor'] | 2020-12-28 19:09:44.134000+00:00 | ['Blockchain', 'Vite', 'Dex', 'Announcements', 'Vitex'] |
What are stable tokens? | Cryptocurrencies are highly volatile and change their values rapidly within short time frames. For example, say you bought $1000 worth of an ICO's tokens last week, and today you woke up to find out that your tokens are worth less than $500. That means your holdings have lost more than half their value in a week. This problem can potentially be solved by the use of stable tokens.
The concept of stable tokens is fairly simple. To understand it, let's take an example: consider the value of a currency, say USD (the US dollar), and build a token that always has the same value as that currency. So, 1 Tether token = $1 USD. It is not uncommon to see variations of up to 10% in a cryptocurrency within a span of a few hours. This short-term volatility makes cryptocurrency difficult for everyday use by the public.
Ideally, a cryptocurrency must maintain its purchasing power and should have the lowest possible inflation rate, just sufficient to encourage spending the tokens instead of saving them. Stablecoins provide a solution to achieve this ideal behavior.
Stable tokens are one of the most important types of currencies due to their purely practical aspect. Though stable tokens are great in theory, they are not easy to implement; they have certain challenges and technical considerations. Stable tokens gained traction as they offer both the instant processing and security of cryptocurrency payments and the volatility-free, stable valuations of fiat currencies.
The biggest challenge with stable tokens is that no currency ever has a perfectly stable value.
Now let’s understand the concept of stable tokens in depth.
There are 3 factors that affect a country’s currency:
1. Import and export of the country
2. Health of the economy
3. Inflation
Currencies used by countries can be broken down into 2 types:
1. Fiat currency, whose value is backed by the government that issued it.
2. Commodity-backed currency, which derives its value from a commodity like silver or gold.

Earlier, the US dollar used to be backed by gold held in the Federal Reserve. However, in 1971 President Nixon changed it from a commodity-backed currency to a fiat currency, which completely separated the value of gold from the US dollar. Though this led to a lower purchasing power for gold, the economy became independent of gold supply and demand, and the government was better able to control the value of the currency.
Now, why was a commodity-backed currency not suitable?
The commodity-backed currency had certain drawbacks: its supply could be manipulated without oversight from the government, and it is highly vulnerable to large price swings. For example, if new gold reserves are uncovered, or when people or organizations hoard large amounts of gold, prices suffer huge shocks.
The benefit of fiat currency is that the controlling government body can manage one of the factors influencing the currency’s stability: the total supply.
How do stable tokens help?
Stable tokens are similar to commodity-backed currencies in that their value can be pegged to different assets. Certain stable tokens are backed by silver, gold, or even the US dollar. Since the US dollar is quite stable, unlike an ordinary cryptocurrency, it offers some measure of stability.
Another interesting use of stable tokens is to act as a reserve currency in case something drastic happens in a cryptocurrency ecosystem. For example, if someone implements a loan in a smart contract on Ethereum, they could technically insure this loan using a stable token. And if the borrower is not able to repay in time, the interest can still be calculated against a relatively stable store of value. This allows us to use the decentralized nature of Ethereum to automate transactions instead of routing them through a bank, while still maintaining the ability to insure such a loan.
How do stable tokens work?
The implementation of stable tokens requires some technical consideration of how to control their supply. One approach establishes two tokens: a stable token and a regular cryptocurrency, with the stable token acting like a bond and the regular cryptocurrency acting like a share. A bond is a very stable store of value, while a share is highly volatile. Whenever the value of the cryptocurrency varies too much, its supply can be adjusted by exchanging it for stable tokens. If the value falls too rapidly, the issuer purchases stable tokens for a fixed number of cryptocurrency tokens and destroys the excess cryptocurrency, thereby increasing the value of each remaining token. If the value rises too rapidly, the reverse happens and more cryptocurrency is issued.
This way stable tokens can be used to regulate the supply of another token to curtail heavy inflation or deflation.
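As a concrete illustration of the bond/share mechanism described above, here is a toy Python sketch. The price model, the class name, and all numbers are illustrative assumptions rather than any real stablecoin’s design:

```python
# Toy model of the two-token supply-control scheme: when the stable
# token trades above its target, new tokens are minted ("shares" are
# sold); when it trades below, tokens are bought back and destroyed
# ("bonds" are issued). A real system would read prices from an oracle.

class Issuer:
    TARGET = 1.0  # the peg: 1 stable token = $1

    def __init__(self, supply):
        self.supply = float(supply)

    def price(self, demand):
        # Crude model: price rises when demand outstrips supply.
        return demand / self.supply

    def rebalance(self, demand):
        if self.price(demand) > self.TARGET:
            minted = demand / self.TARGET - self.supply
            self.supply += minted      # expand supply -> price falls
        elif self.price(demand) < self.TARGET:
            burned = self.supply - demand / self.TARGET
            self.supply -= burned      # contract supply -> price rises
        return self.price(demand)

issuer = Issuer(supply=1_000_000)
print(issuer.rebalance(demand=1_200_000))  # demand spike -> mint -> 1.0
print(issuer.rebalance(demand=800_000))    # demand drop -> burn -> 1.0
```

In practice the hard engineering lies in measuring demand trustlessly and in keeping the share token valuable enough to absorb contractions.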
Visit us at corum8.com for more details about cryptocurrency development, marketing and crowdfunding services, contact us at [email protected].
WhatsApp/ Skype Number: +16506810218 | https://medium.com/@corum8/what-are-stable-tokens-a3cf11dae054 | [] | 2020-12-25 19:22:15.923000+00:00 | ['Blockchain', 'Blockchain Development', 'Stable Coin'] |
We need to keep investing in each other | We need to keep investing in each other
By Lucy Bernholz
November 17, 2020

Correction: The data cited in this article were preliminary. Notably, data on turnout and support from the Navajo nation, from a November 6 report, have been significantly adjusted downward to account for the dispersion of Navajo citizens in rural and urban areas, the number of Whites living in precincts that include Navajo lands, and for other reasons. More information can be found here. I regret the error.
How do we make sense of the election? Part of the answer to that question depends on whose stories (and what data) you focus on. A Twitter thread from Dr. Rhea Boyd helped me realize why the mainstream news story about a close race seemed incomplete to me. I dug into the data on voters and looked only at results from Black, Indigenous, and voters of color. When I did so, the story of a close election disappeared. AP Votecast data shows that 90% of Black voters, 63% of Latino voters, 70% of Asian American voters, and 97% of Navajo Nation voters (who helped flip Arizona) voted for the Democratic ticket. Those percentages tell the story of a landslide.
Pundits and scholars have been saying for months that the top issues for voters were the pandemic, the economy, racial justice, climate change, and the future of democracy. So I looked at more data on the communities noted above. According to the Covid Racial Data Tracker and the Mayo Clinic, Black, Indigenous, and Latinx people are experiencing significantly higher COVID infection and death rates. These communities are also being hit hardest by economic losses. In October 2020, the unemployment rates were 10.8% for Black Americans, 8.7% for Latinos and Hispanics, and 7.6% for Asian Americans. As of June, 12.4% of Indigenous people were unemployed. If the pandemic and the economy are what mattered on November 3, 2020 — the people have spoken. Of course, these voters represent the communities who’ve been leading the fights for racial justice, environmental protections, voting rights, and the rule of law for centuries.
When I put these data at the center of my reflections, a clear statement of preference emerges from Americans whose demographic identity matches that of the global majority of people. It’s not the entire story of the election, by any means, but it might offer some insights for thinking about philanthropy and what comes next.
What might philanthropy be learning from this?
For foundations and philanthropy, there have been some signs of change over these last months. When it came to organizing around racial justice and getting out the vote, foundations of all sizes demonstrated the ability to come together and make big operating grants to Black-led organizations, much of it led by Black women like Nse Ufot, LaTosha Brown, and Stacey Abrams for organizing and community building. Donors showed they could move billions of dollars fast to respond to the call for COVID relief and racial equity/reckoning. If electoral turnout and victory are the measures of success, these strategies worked, not because of the investment from philanthropy, but surely supported by it.
For foundations and individual high-capacity donors, the key lesson to learn about big, general operating support for organizing over time is simple: don’t stop. The election is over but not a single one of the problems these movements aim to change has been fixed. Domestic terrorism by White nationalists is still atop the national security threat list. COVID is infecting more people per day than ever before (and more in the U.S. than anywhere else). The economic damage — with winter coming, evictions looming, and final efforts to take health insurance from people — will get worse. The structural damage done to the federal governing apparatus is real (and much more may be done over the next 10 weeks). Faith in government has been pushed to new lows. The political path ahead is still unclear.
Another key lesson for institutional philanthropy should be a recognition of just how much its impact depends on the actions of local small donors and committed volunteers. As Danielle Allen of Harvard notes, “one silver lining of the last four years…Best civic education the country has had in decades….” Long-term support for grassroots community organizing and organizations builds the infrastructure for everything else that needs to change. If, with the electoral spotlight gone, philanthropy turns away from supporting leaders and organizations who do the hard work of organizing communities and doing deep canvassing, we’ll fall backwards. These communities and these leaders need reliable, flexible philanthropic support to make change; this is as true for addressing structural inequities as it is for getting out the vote.
Philanthropy shouldn’t stop listening to and supporting the communities who spoke so clearly with their votes; they are the same communities and leaders who will make change happen on education, housing, environmental protection, and safer communities. Philanthropy shouldn’t return to its old behaviors — whether that be nine-month decision making cycles or stockpiling money in DAFs. Doing so would be a missed opportunity to learn that strengthening communities requires listening, consistent support, and stability.
The biggest mistake would be to step back from the coalitions you’ve joined. Now is the time to double down on our investments in each other, in the leaders who organized for the election, in the communities for whom this electoral fight was a continuation of generational struggles, not an aberrant moment in time. Now is the time to resist what sociologist Dr. Tressie McMillan Cottom calls White “self-interested inertia.” Philanthropy is learning to work differently. It would be wise to continue that education through, and beyond, our syndemic crises and into a collective pursuit of dignity, justice, and the pursuit of happiness, for all.
Building in Public, Building for Public | A lot of people have been posting their projects while building in public, a trend that originated in Silicon Valley and, as always, progressed to other parts of the world. Many of the projects showcased inspire me, and I am amazed at the sheer tenacity of people to build things and the audacity to share them in public.
My current version of Building in Public, is Building for public.
The project is a 25 bedded hospital to bring affordable, quality healthcare for rural masses in Bihar. The Project started in June’20 and the official Product Launch is targeted for August 2021.
At Happy Horizons Trust, we have been working in the domain of education for over 8 years now. COVID19 and the resulting lockdowns got us closer to the communities and also experience the healthcare services upfront.
This is our move to expand the @HappyHorizons portfolio into healthcare. Healthcare is a domain I hold really close to my heart due to growing up within a medical family. It has always had me interested to do something in it. This is our opportunity to serve.
There is so much to learn in building this out. A key aspect of my life over the past few years has been using the Systems Thinking lens to understand complex problems and create solutions that have a systemic impact.
The healthcare space and interactions with a lot of healthcare professionals and patients has given us a lot of insights, and I am sharing a few of those here in the spirit of building in Public. This is also to highlight our thought process on what we are doing, why we are doing and how we will be doing it.
As India moves a lot towards preventive healthcare, we understand that healthcare in rural India needs to be broken down into four different levels.
1. Advocacy
2. Education
3. Diagnosis
4. Treatment
Each has a few startups operating in it. We’re building a cohesive system around all four.
All of us are aware that a major reason for this move towards preventive health is the access to information, which has been made possible by cheap internet and the presence of smartphones in rural households as well.
Connectivity due to better roads and information has made it possible for people to seek better healthcare. While institutional healthcare centres have improved, there are fewer qualified healthcare professionals, both doctors and other staff, than required.
While there’s a lot of information, one needs to be careful, when it comes to identifying the source of information and how much we trust it.
That brings us to the 1st pillar of our work in Advocacy.
Our past work @HappyHorizons has created the distribution channels over the past 8 years. Leveraging deep community connect at the grassroots and youth alumni from our fellowship program, we are able to establish further channels for information dissemination. A lot of the time, the decision to seek or not seek a healthcare service depends on this information.
With an ever changing world post covid, fear psychosis is at an all time high. We need credible information to reach the rural masses so that they are better informed.
The ability to make fake information viral these days is a huge challenge to providing credible information and when you have a complex ecosystem that often relies on superstitions, traditional practices, and questions medical science, it becomes extremely difficult to work.
That brings me to our 2nd pillar of work with HHH (Happy Horizons Healthcare): Education.
Owing to poor educational backgrounds and low awareness, many are not informed about healthcare and how to go about it. Quite often we hear stories about scams in rural healthcare.
A few scams that I am aware of from recent times, Uterus Removal in the garb of family planning operation (Illegal hysterectomies), the rise of fake doctors, immensely high charges for private nursing homes, patients being forced into ICU for smallest of reasons.
So while access to information is good, we also need to educate. That is where I am hoping our existing work at @HappyHorizons will help in designing specific learning sessions to communicate with rural masses.
Empowering locals in the past 8 yrs enables us to empathise better. Speaking their language makes it relatively easier to make inroads.
This brings me to our 3rd pillar of work: Diagnosis.
When you have provided credible information, educated people about things related to healthcare, people would want to get tested. The recent push towards preventive healthcare drives this phenomenon as well.
None of the major players in diagnostics serve the rural masses; the unit economics do not work. A medical test in a city is convenient and easy to get done at home. Not so in the rural context. What tests are needed, and why? Our hope is that the first two pillars feed into this.
So we are rethinking about diagnosis from the first principles. Can there be a better way to doing this? In all likelihood, the diagnostic ‘centre’ would be a mobile unit, that is then accessible to rural masses and visits the villages.
We are envisioning this from a sample-collection perspective. The collected samples would then be submitted to a centralised unit where the tests would be done. We seek to have the ability to run tests at this unit and pass on the results immediately over WhatsApp or other channels. In the process, we are able to reduce the turnaround time for patients needing attention.
The hope is that when you have community champions set stage with credible information, an educated (on healthcare) rural population, the convincing for getting a test done is higher.
And if the diagnostic units visit your home for sample collection, there is a high chance of people getting tested. If the consumer cannot go to the diagnostic centre, the centre can go to the home instead.
And that then brings me to our 4th Pillar: Treatment!
I have realised that, due to the absence of the first three aka Advocacy, Education and Diagnosis; often the treatment starts at a late stage.
Many medical complications arise.
Often lives are lost.
In my opinion, access to a quality healthcare professional & services should be a fundamental right for EVERY citizen in our country. Unfortunately we are way behind the WHO recommendations.
The situation in rural areas is grim. And as we are aware a lot of things are “Bhagwaan Bharose” (at the mercy of god).
The model we are building at Happy Horizons Healthcare is to be the first point of contact for anyone seeking healthcare service in rural areas. We then leverage the upcoming hospital, our networks of people around the globe, the telemedicine setups for speaking to right folks.
We understand the role of Technology, Data analytics, Behavioural change to drive this and those form the core of our work.
We are a few months away from launch and it is immensely exciting to be building in public, building for the public!
#systemsthinking #design #socialinnovation | https://medium.com/@kshitiz/building-in-public-building-for-public-8a6cdb748d06 | ['Kshitiz Anand'] | 2021-02-07 10:40:18.167000+00:00 | ['Systems Thinking', 'Complexity', 'Design Thinking', 'Startup', 'Entrepreneurship'] |
How to Migrate Your Local MongoDB Database Between Computers | Restore MongoDB Data
At this stage, you already have your dump-files directory on your new machine. Next, we can proceed to restore the MongoDB data.
Let’s assume I saved my dump directory at the path ~/Downloads/mongo-migration.
Now we can use the default root role we have in our MongoDB to restore the database. Refer to the command below.
With this single command, we can restore all of the databases, and you’d have exactly the same data as before.
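The full-restore command (missing from the original embed) would look roughly like this. The host, user, and dump path are assumptions carried over from the earlier steps; adjust them to your setup, and mongorestore will prompt for the root password:

```shell
# Restore every database found in the dump directory, authenticating
# with the default "root" user defined in the admin database.
mongorestore --host localhost --port 27017 \
  -u root --authenticationDatabase admin \
  ~/Downloads/mongo-migration
```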
Restore multiple DBs
However, there could be a scenario where you only want to restore a few DBs, excluding the rest of the dump directory.
For example, I want to restore only the audit and client databases during this round. We can do this by using the --nsInclude and --nsExclude options of mongorestore .
Example of using --nsInclude
We can use --nsInclude to select only the databases and collections we want to restore. In the command below, we restore all collections in the audit database using audit.* and all collections in the client database using client.* .
The wildcard after the dot notation means every collection in that database. To include or exclude multiple databases, pass --nsInclude or --nsExclude once per namespace pattern. Refer to the example below.
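A sketch of the --nsInclude form; the auth flags and dump path are the same assumed setup as before:

```shell
# Restore only the audit and client databases -- every collection in each.
mongorestore -u root --authenticationDatabase admin \
  --nsInclude 'audit.*' --nsInclude 'client.*' \
  ~/Downloads/mongo-migration
```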
Example of using --nsExclude
We can use --nsExclude to exclude the databases and collections we don’t want to restore. In the command below, we exclude the partner , promotions , transaction , and utilities databases.
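And the --nsExclude form, again with the assumed auth flags and path:

```shell
# Restore everything except the partner, promotions, transaction,
# and utilities databases.
mongorestore -u root --authenticationDatabase admin \
  --nsExclude 'partner.*' --nsExclude 'promotions.*' \
  --nsExclude 'transaction.*' --nsExclude 'utilities.*' \
  ~/Downloads/mongo-migration
```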
Restore a single DB
Lastly, this is the command to restore a single DB. There are a few things to take note of here:
- The -d option specifies the database name to be restored. This option is required.
- Specify the correct dump directory. For example, if I’m restoring the utilities database, I must point at the utilities subdirectory of the dump.
Refer to the command below. | https://medium.com/better-programming/how-to-migrate-your-local-mongodb-database-between-computers-debe57092ab5 | ['Tek Loon'] | 2020-07-31 14:22:05.348000+00:00 | ['Database', 'Mongodb', 'Software Engineering', 'Mongo', 'Programming'] |
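A sketch of the single-database restore, following the two notes above (the positional path points at that database’s own subdirectory inside the dump):

```shell
# -d names the target database; the path is the dump subdirectory
# that holds that database's BSON files.
mongorestore -u root --authenticationDatabase admin \
  -d utilities ~/Downloads/mongo-migration/utilities
```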
Calculate Your Project Budget Using These 6 Steps | Photo by Standsome Worklifestyle on Unsplash
A budget establishes a financial base for every project. It’s necessary to keep a project grounded to prevent the business’ time and resources from being exhausted. Without a budget, it could be easy to invest too much to complete the task at hand, to the detriment of the overall project. One project could become a great loss to your business if you extend yourself too far.
A lack of a project budget could make it more difficult to set directions and limits during the work process. This involves considering many factors involved in the project. That is why there are steps to follow when calculating your budget. Not only are you setting a pathway for your team, but you’re working to ultimately ensure that you make a satisfying profit for your business to continue its path to success.
These steps will guide project managers to calculate their necessary budget so they don’t turn a successful job into an unprofitable one.
Step 1: Plan
As soon as a brief of your client’s requested product is placed in your hands, you can start considering what your project budget will be. This is when you start shaping ideas for how to meet their needs. The brainstorming process is not quite the time to restrain your ideas with practical concerns. However, it is important to have the numbers in the back of your mind as you start considering your choices and the general outline that will guide your project along.
When planning your project, you need to look ahead of what it will entail and how much you are willing to spend. You take on a predictive role by imagining the best possible result and what must be done to lead to this.
Factors to Consider When Planning Your Project Budget:
How much time is available for you to complete this project (what is the deadline)?
How much time will this project take?
Who do you need to include on your team?
What will be the general labor costs, according to your team size and the time requirements of the project?
What programs or resources are needed for this project? Will these cost your business money to use?
What projects need to be postponed or set aside? What money is being lost from this?
Composing a list that answers these questions and adds a numerical value beside them will help you begin to hone in on a general budget to ensure you are not ultimately losing money on this endeavor.
Step 2: Rally Your Team
As you start to see the sum of the above mentioned possible costs, you can determine how much labor you can afford to invest in the project. If this is a larger company project, you may need all hands on deck. Otherwise, it’s best to choose workers who will be efficient and can handle the extra workload of the project. You may need employees with specialized skills to take on certain tasks.
When choosing your team, you can predict how much their labor will cost your business. This could be one of the most expensive components in your budget for this project. If your employees are spending more time on this project and getting less done on other tasks, you could be losing money from other clients. This is why you need to choose how you prioritize their timing and how much of their labor you want to be invested in this task. Find the general sum that you’re willing to spend on labor, and then you can proceed to decide other budget costs.
Step 3: Determine the Costs of Services and Other Necessary Materials
Once your major labor costs have been shaped, you can look at other vital expenses in this project to determine your whole budget. Do you have everything you need on-site to meet your clients’ requests? Or do you need to outsource to other providers? This is where you must decide if other services will be necessary to complete the project and how much they will cost.
To save these costs, you can look outside of the box and consider how you could manage to deliver these services without hiring another business. You could invest in long-term resources or training for team members so they can provide specialized in-house services when needed.
Step 4: Leave Room for Emergencies
It may seem that the sum of your investments involved in this project is complete once your team, the services and resources are decided. However, in general, it’s important to remember that anything could happen. Keeping this in mind while determining your project budget will be valuable in the long run. Should anything go wrong, it will help to have extra funds put aside so you have already accounted for their costs in your budget. These emergencies could include a rushed deadline where services need to be sped up or a glitch in the development process that may require a restart. New additions to the client brief could alter your process as well.
It’s always better to be safe than sorry. Include an emergency fund of about 10% of the budget sum. That way, you will know the ultimate cost of the project in the worst-case scenario, and there will be no surprise costs that need to be added to the project’s budget once it’s been established.
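The arithmetic of the steps so far boils down to a one-line calculation. A minimal Python sketch, with purely illustrative figures:

```python
def project_budget(labor, services, materials, contingency_rate=0.10):
    """Sum the core costs and add an emergency reserve (10% by default)."""
    subtotal = labor + services + materials
    return round(subtotal * (1 + contingency_rate), 2)

# Illustrative estimates only -- substitute your own numbers.
total = project_budget(labor=24_000, services=6_500, materials=1_500)
print(total)  # 35200.0 -> the worst-case figure to plan around
```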
Step 5: Keep Track
As your project starts moving forward, keep track of every expense. This includes keeping all receipts from related purchases or orders or monitoring the labor hours invested by your team members. It helps to invest in time tracking software, like actiTIME, to accomplish this. Time tracking will help ensure that everyone is committing the required amount of work hours into the project task they’ve been assigned, and no more or less. That way, you will know that the allotted budget is being met when it comes to labor costs.
Time-tracking will also help with ensuring that every deadline is met. It offers a roadmap for your project to guide it along. That way, you will be able to submit the finalized project to your client on time, with no extra costs or need to discount their billing for delays.
Remain Disciplined, Save Money, Earn Profits
Your project budget will ultimately help you and your team remain disciplined as you progress through your project and its various required materials and hours of labor. Limiting the amount to commit to the project will ensure that your business is not negatively impacted by taking it on. Instead, you’ll have spent a restricted amount of money so that your profits, in the end, are much higher.
For long-term benefits, always decide your project budget early on, and stick to it. Use tools like tracking software so you can hold others accountable for their work hours and make sure you remain on schedule. | https://medium.com/@actitime/calculate-your-project-budget-using-these-6-steps-725655308a8a | [] | 2020-12-20 08:47:33.273000+00:00 | ['Project Management Tips', 'Project Planning', 'Project Management', 'Resource Management', 'Project Budget'] |
Harmonic quanta: DEX order matching that generates no change | One of the snags in atomic swaps is that if you place, say, 1 BTC on sale, and a counterparty takes 0.3 BTC, then the remaining 0.7 BTC would be sent to a change address in your wallet — but this would render the remainder of your order unspendable until it is at least a few blocks deep in the chain, which would make quite a mess if you want to liquidate the entire order quickly. Now while it would be simple enough to solve this by splitting your coins into small amounts at many addresses in your wallet, this would still create change for at least one of your addresses, which means that you would not generally be able to sell all your coins in one go. Worse still, it would create additional complexities in the order book, as it would have to continually check for change and reduce order sizes by any change generated. Because of all these drawbacks, I thought I’d try a little trick to generate no change whatsoever.
Fragmentation issues? Time to quantise.
Let’s start on the simple end of the problem, with a straightforward story in BDD. The story describes the behaviour of a script that Block DX may run in order to correctly determine quanta for tick size and transaction input size.
## Runs on each trader’s instance of Block DX (if they don’t run it, then they generate change, effectively DOSing themselves)
## Splits coins into equal tiny amounts per input
## Uses an amount per input, “x”, which functions as a minimum transaction size and is (a) above the dust threshold of the coin, and (b) harmonic with (specifically, a standardised divisor of) a calculated tick size, “y”, for the market.
## Sends the remainder from the splitting tx to a separate trade-fee-paying address
## Makes sure the total balance in fee-paying addresses is ≥ the maximum trade fee (0.2%) a trader would pay if (s)he sold all coins
## Ensures that tick sizes and minimum transaction sizes do not require re-running this script when changing to a new coin pair.

Given
    the Block DX autoconfig script has run on a wallet integrated to Block DX, so that the wallet has:
    - addresses (recorded in .conf) from which it pays trade fees
    - addresses (recorded in .conf) from which it swaps coins
When
    Block DX launches
And
    either (a) all trade-fee-paying addresses hold < 0.2% of the wallet balance,
    or (b) any input in any wallet holds over x coins
Then
    when (a), reserve 0.2% of the total coins in trade-fee-paying addresses;
    when (b), create a tx that consumes all current inputs (excluding trade-fee-paying addresses) that do not hold exactly x coins, and create outputs of exactly x coins each
And
    send any remaining coins as change to a trade-fee-paying address
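A minimal Python sketch of the splitting step in the story above. The quantum value and the UTXO amounts are illustrative assumptions; only the rules (consume inputs that are not exactly x, emit x-sized outputs, send the remainder to a fee-paying address) come from the story:

```python
from decimal import Decimal

def split_into_quanta(utxos, x):
    """Plan the quantising tx: consume every input that is not exactly
    x coins, create outputs of exactly x coins each, and send the
    sub-quantum remainder to a trade-fee-paying address."""
    consumed = [amt for amt in utxos if amt != x]
    total = sum(consumed, Decimal("0"))
    n_quanta = int(total // x)               # whole quanta only
    fee_change = total - n_quanta * x        # tops up the 0.2% fee reserve
    return consumed, [x] * n_quanta, fee_change

x = Decimal("0.005")                         # assumed quantum for this coin
utxos = [Decimal("0.005"), Decimal("0.013"), Decimal("0.0021")]
consumed, outputs, fee_change = split_into_quanta(utxos, x)
print(consumed)    # [Decimal('0.013'), Decimal('0.0021')]
print(outputs)     # three outputs of 0.005 each
print(fee_change)  # 0.0001
```

The input already holding exactly x coins is left untouched, so a wallet converges to all-quantum inputs after one run.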
Determining “x” and “y”
To determine x and y in the above script, several size-thresholds will be considered in turn: the dust threshold, the tick size, the minimum order size, and maximum price precision. Following this, the discussion will proceed through stages to a specification of x and y in the above script. The intention is to quantise certain critical minimum sizes at standardised levels above the dust threshold, in such a way that they are always factors or divisors of one another. If order sizes may only be whole quanta of some minimum order size, and if changes in price may only be whole quanta of some minimum tick size, then it is possible to split inputs into sizes that will always result in whole inputs being consumed in a trade.
1. Handling Dust
A very small trade may spend a single input at the value of x. In order to avoid creating unspendable transactions, x must always be above a coin’s dust threshold, or else at least one side of the transaction will be unspendable and the swap will cease to be atomic.
The most valuable coin on Block DX is Bitcoin. Hence, for every trade, it can be expected to be the coin with the smallest units of currency traded. All else being equal, it is thus reasonable to assume that it is the most likely coin to hit its dust threshold.
The worst case scenario for this design would be a sudden stratospheric rise in price unaccompanied by the usual lowering of the dust threshold. For example, assume Bitcoin goes to $200k and its dust threshold is not lowered; as per current Core dust rules, a dust threshold of 546 Satoshis (for non-segwit transactions) would then equal $1.09.
(Note: this figure shall be used throughout this blog post as a nominal shorthand for “dust threshold.” However, if a bail-in transaction consumes both a fee input and a trading input, the dust threshold would be somewhat higher, and a real implementation would need to accommodate this. Whatever the actual dust threshold, it need only be the lowest dust threshold for any atomic swap transaction, since in swap transactions that use more than one standard-sized input, the transaction amount increases to a greater degree than the dust threshold increases.)
In such a scenario, the total transaction value for a Bitcoin transaction would have to be > $1.09.
For Block DX, this transaction value would include:
the value of a matched order
the trade fee (in cases where the fee system includes fee txs in the bail-in tx itself)
the network fee
To generate no change, orders would have to be in regular multiples of some value >$1.09, and the smallest permissible change in price would have to be harmonic with this value (for example, if a sell order of $1.09 worth of BTC fetches 0.5 BLOCK, then the smallest change in price must be at least $1.09, or else a quantum smaller than $1.09 would be required in a trade (for, say, 0.51 BLOCK), and this would thus generate dust).
How might fee transactions affect the scenario? Well, the largest proportion of a trade that could be charged as a trade fee is currently 0.2%, for takers. For a $200k Bitcoin, a single trade at the dust threshold would incur a trade fee of roughly $0.0022. In an implementation where fees are paid in separate transactions from swap transactions, this would impose a significant limitation on the minimum trade size, as the fee transaction itself would also have to be >$1.09.
Assuming that, instead of independent fee transactions, a protocol that spends trade fees as an output of the bail-in tx is implemented, then only the total value of the transaction would be significant. This is obviously advantageous to the scalability of the solution across sudden changes of coins’ value — a phenomenon perhaps uniquely frequent in crypto.
As such,
in an atomic swap, the minimum bail-in transaction amount shall be >546 Satoshis,
trade fees shall be one output of a multi-output bail-in tx, spendable upon revelation of the swap’s secret;
its total output (i.e. including the trade fee) shall be returned to the user upon nlocktime maturation, or else in some circumstances, traders will pay a fee when no trade completes. (In terms of antispam and anti-DOS incentives, this is acceptable because it would not be necessary to charge malicious parties a trade fee if, instead, their coins get locked up until nlocktime maturation, as the opportunity cost of capital lockup is far more significant than a fee. For background on this assumption, see this post.)
2. Tick size and maximum price precision
One “tick” on an exchange is the minimum amount by which the price of a given coin in a currency pair may increase or decrease. “Price precision” is the number of decimal places permitted in an order. It should correspond to tick size, and, in traditional designs, there should be explicit rounding logic so that when, for a given trade, a calculation of (coin A @ price y) / (coin B @ price z) generates a value of either coin extending beyond the maximum price precision, the number is rounded up or down to some value that falls within the permitted bounds of price precision. Of course, in this post, we are aiming at a non-traditional design with the convenient property of eliminating the need for rounding.
Tick size and maximum price precision are useful for:
eliminating tiny, trivial orders (a kind of spam).
eliminating trivial competition when traders (usually bots) place a minimally better-priced order (e.g. a difference of 0.000001 BTC) in front of their competitors. This activity does not stimulate demand because minimally better-priced orders do not create a significant increase in incentive to take an order. Hence, it produces a minimal increase in trading volume at a high bandwidth cost for the order system, and it gives bots an unfair advantage over human traders.
avoiding the dust threshold.
avoiding rounding errors by specifying either bounds of precision, or in our case, by precisely harmonising quanta in both coins so that traded amounts never need to be rounded.
avoiding creating change (and confirmation times) by using tick amounts that are always multiples of wallets’ input sizes.
3. Cryptoeconomics
Nothing in this spec need be enforced by network protocol, because all logic reduces to counterparties’ order book rules and no party stands to be penalised except those whose quantisation scripts do not conform with those of the rest of the network. If a trader uses different rules, then either (a) the trader will end up with change, or (b) other traders will not parse orders as valid, since the offending party’s orders will involve coins with fewer confirmations than the threshold set by other traders.
The only check required in this system is that traders should conduct a UTXO check on coins prior to updating an order on their order books, which is already implemented in Block DX.
4. High level strategy
Rounding errors and change can be eradicated by using minimum transaction amounts and tick sizes with carefully-chosen divisors that correspond to coins' input amounts. Furthermore, we can adopt, across all coins, standard minimum transaction amounts and tick sizes that have many common divisors, so that each input size in any coin's wallet is always a divisor (simple example: 1, 2, 3, 4, 6, and 8 are divisors of 24, 36, and 72). Then not only will all trades on a given coin pair generate no change, but when a trader switches to a new coin pair, (s)he need not re-run the above input script for different values of x and y.
5. Specification to determine x and y
The minimum transaction amount shall determine x.
For a given coin, its minimum transaction amount shall be defined as either 0.36*10^n or 0.72*10^n, whichever is the next number higher than its dust threshold, where n is the number of decimal places counted little-endian up to the last nonzero digit of the lowest dust threshold.
(For example, if the dust threshold is 546 sats, then n is 3 and the minimum transaction amount will be 720 sats.)
Tick size determines y.
For a given coin pair, one tick shall be defined by the lowest common multiple of their minimum transaction amounts. Tick size shall be denominated in the least valuable coin of the coin pair, while the most valuable coin shall be termed the “base currency” henceforth.
(For example, if coin A's minimum transaction amount is 720 sats and coin B's is 3600, then their market's tick size shall be 3600 sats, denominated in whichever is the least valuable coin.)
(UX note: base currency in this calculation is independent of the user's choice of base currency, which may be inverted at will.)
For a given coin pair, maximum price precision shall be defined by y, as in (2) above. In other words, not only shall prices increase or decrease by the tick size, but no order's price may be smaller than one whole unit of the tick size.
For a given coin pair, the input size shall equal x, the minimum transaction amount, as in (1) above. Input size will thus differ from coin to coin, but in all cases, inputs will be exact divisors of y.
To combine the above into an example, if Bitcoin’s minimum transaction amount is 546 Satoshis and the Blocknet’s is 0.00002000 BLOCK, then for the BLOCK:BTC currency pair:
The next good choice of divisor is 0.00000720 for BTC and 0.00003600 for BLOCK.
Their lowest common multiple is 0.00003600.
Exchange rate shall now vary by a tick size of 0.00003600 BLOCK:BTC.
All transaction amounts (and of course orders too) shall be multiples of 0.00000720 BTC or 0.00003600 BLOCK. Due to the tick size, every 0.00000720 BTC input will always be equal to a certain number of whole BLOCK inputs.
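The selection rule from this section can be sketched as a small Python helper. This is an illustrative reading of the spec, not wallet code; amounts are in each coin's smallest base unit, and it reproduces the worked example above.

```python
from math import gcd

def min_tx_amount(dust: int) -> int:
    """Smallest value of the form 0.36*10^n or 0.72*10^n (in base units)
    that is the next number higher than the coin's dust threshold."""
    k = 0
    while True:
        for base in (36, 72):       # 36*10^k and 72*10^k sweep the 0.36/0.72 series
            candidate = base * 10 ** k
            if candidate > dust:
                return candidate
        k += 1

def tick_size(min_a: int, min_b: int) -> int:
    """One tick: the lowest common multiple of the two minimum tx amounts."""
    return min_a * min_b // gcd(min_a, min_b)

btc_min = min_tx_amount(546)     # 546-sat dust threshold    -> 720 sats
block_min = min_tx_amount(2000)  # 0.00002000 BLOCK dust     -> 3600 base units
print(btc_min, block_min, tick_size(btc_min, block_min))  # 720 3600 3600
```

The same two functions work unchanged for any coin pair, which is the point of standardising the 0.36/0.72 series across wallets.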
6. Why this ensures zero change
From the preceding section, the higher-valued coin will always have inputs exactly proportioned to the market’s maximum price precision, y. The lower-valued coin will always increase or decrease in value relative to the higher-priced coin in increments of their lowest common multiple, y.
(For example, if 50 BLOCK inputs buy 1 BTC input, and then the price increases by one tick (0.00003600), then 49 BLOCK inputs buy 1 BTC input.)
The result: one input of the higher-valued coin will always buy some number of whole inputs of the lower-valued coin.
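A quick brute-force check of this property, using the 720-sat BTC input and 3600-unit BLOCK input derived earlier, confirms that whole-tick prices never strand a partial input; the non-tick price at the end shows the failure mode the design rules out.

```python
BTC_INPUT = 720      # minimum BTC transaction amount, in satoshis
BLOCK_INPUT = 3600   # minimum BLOCK transaction amount, in base units

# At any whole-tick price (a whole number of BLOCK inputs per BTC input),
# every order size decomposes into whole inputs on both sides: no change.
for ratio in range(1, 200):             # BLOCK inputs paid per BTC input
    for n in range(1, 50):              # order size, in whole BTC inputs
        btc_amount = n * BTC_INPUT
        block_amount = n * ratio * BLOCK_INPUT
        assert btc_amount % BTC_INPUT == 0
        assert block_amount % BLOCK_INPUT == 0

# Contrast: a price that is not a whole number of ticks leaves change.
bad_price = BLOCK_INPUT * 3 // 2        # one and a half BLOCK inputs per BTC input
assert bad_price % BLOCK_INPUT != 0     # 5400 % 3600 == 1800: stranded change
print("whole-tick prices generate zero change")
```

The assertions in the sweep hold by construction, which is exactly the claim: as long as prices only move in whole ticks, every trade decomposes into whole inputs on both sides.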
7. Novel points worth noting
If price increases enormously, so that, for example, 1 BLOCK input buys 1 BTC input, then a further increase or decrease in price would result in one tick doubling or halving the input:input ratio. As such, this design has the novel property of decreasing price resolution (the size of the smallest increase in price in dollar terms) as the lowest common multiple increases in real-world value.
This will not be problematic outside of the most garish cases of poor coin maintenance, however. As a lower-priced coin increases in value, its fee and dust thresholds ought to be adjusted down so that micropayments remain feasible and users do not inadvertently find that their transactions do not get accepted into a block and their coins are stuck. This adjustment, then, would cause the tick size and minimum transaction size in Block DX to return to a normal level, where a “smooth” or near-continuous variation in price is experienced.
In exchange for a variable price resolution, this solution offers Block DX the ability to create zero change for all trades. I believe this is a desirable compromise.
Note that this system is independent of network fees and network congestion. Changing network fees will have further effects on users’ incentives to trade.
8. Weaknesses
Currently, the principal weakness of this design is that it has not been turned into mathematics and tested. Until it is implemented, there is a chance that this idea is a mere flatus vocis and I am confused.
A secondary weakness is the need for wallets to be aware of coins’ prices (specifically, which coin is the more valuable one) in order for the script to run. This adds complexity and may lengthen setup time somewhat.
A further weakness is that if, at any point, different wallets run differing versions of this script, then it is possible that they could end up with incommensurable xs or ys, which would generate change.
Finally, transactions that consume many inputs are more expensive than single-input transactions. This will not affect the calculation of input size though, because a single-input bail-in transaction will have a low cost, and multi-input transactions will have higher amounts with only a nominally higher cost. As such, this drawback is limited only to increasing the network fee for trades. | https://medium.com/flatus-vocis/harmonic-quanta-dex-order-matching-that-generates-no-change-f7cbbd26ded9 | ['Arlyn Culwick'] | 2018-07-30 22:08:50.711000+00:00 | ['Bitcoin', 'Blockchain', 'Dex', 'Atomic Swap', 'Blocknet'] |
Wading on the Shores of the Unconscious: Exploring the Roles of Fantasy and Imagination in Therapy
The capacity for fantasy is an integral element of the therapeutic relationship. On the whole, human beings place enormous stock in the things they believe about themselves and the world around them. Often people don't consider that perhaps their perception is flawed and that what they call fact might really be a fable. Much of my therapeutic work, therefore, involves a process of unraveling the constructs and myths of perception clients have unknowingly erected, then found themselves trapped within. Time, gentle encouragement, and steady support can go a long way to helping clients perceive and experience the true futility of their unworkable fantasy-based solutions.
This brings me to the part of the mind that exerts steady power over our daily lives yet frequently goes overlooked: the unconscious. Just what is the unconscious? The unconscious is the part of our mind that exists largely outside of our everyday awareness. It's the part that was shaped significantly by our early experiences, and it has the ability to influence our judgments, feelings, and behaviors without our realization. As the repository of all our accumulated knowledge, learning, and memories, it's why you can tie your shoes without thinking, easily remember the lyrics of your favorite song from twenty years ago, and it's also why people often become more and more like their parents as they age (perhaps much to their chagrin). Moreover, the unconscious is also the seat of a great deal of untapped potential and internal strength, especially when it comes to our fantasies, but I'll expand upon that later; for now, let's loop back around.
Therapeutically, when working with fantasy so much of the process is a gentle attempt to peel back the protective layers of unconscious deception that comprise self-destructive fantasy. The mind often tends toward avoidance; after all, it feels safer than diving into our fears, shame, anger, and sadness. Evolutionarily we are wired to seek safety above all else, even if that safety is only from the fear of being emotionally exposed. So many clients wish for quick fixes and easy solutions; without realizing the fallacy, they both seek and expect the easy road to be the road out of suffering. I remember the distinct timbre of incredulous disappointment in a client's voice as she exclaimed, “Wait, so these pills won't completely eliminate my anxiety?!” “No,” I tentatively responded, “but within a few weeks they'll likely lower it to the point where you're able to work through what you're feeling without becoming completely overwhelmed.” What a letdown that revelation was. In the throes of unconscious extremes, other clients have been known to tumble into the dangerous and diametrically opposed abysses of either worthlessness or self-aggrandizement. Without a thought, they compare themselves to others with tiered and inaccurately extreme quality judgments. Once one client exclaimed, “I'm such a loser. I mean look at me, I'm so depressed I lost my job and my sister is a rich corporate CFO.” He paused and started up again before I could interject, “But I'm not as big a loser as my neighbor who's an alcoholic! I might be depressed but I never drink!”
When we fall prey to these illusions of the extreme, the inner landscape transforms into one of angels or devils, good and bad, right and wrong. This is an intra-psychic playground for the haves and the have-nots within. Down the rabbit hole, the mind tumbles, and the myriad complexity of the self is diminished in favor of unattainable perfection fueled by the sheer terror of imperfection — it’s a desperate quest to be “good” gone terribly awry.
However, there's a flip side to all this chaos and tendency toward fantasy. The ability to construct illusions about ourselves is at times a necessary mechanism for psychological self-preservation. It's part of what gives humanity resilience and the ability to endure awful circumstances. When confronted with horrible truths about the nature of our reality, inexplicable losses, or profoundly traumatic experiences, our minds will often cleave the facts from our conscious awareness and scuttle them off to dark corners of our unconscious so that the self — the conscious, day-to-day functioning self — can continue on with the business of life. This disconnection serves as a necessary and protective illusion. It operates in the client whose father died when she was seventeen. Confusingly, she never cried or experienced any concrete sadness despite the power of their bond. At thirty-eight she's now struggling with panic attacks, depression, and a morbid fear of death, wakes, and funerals. It operates in the eight-year-old child who survived a serious car accident, though his brother didn't. Since the accident, he behaves well at home and at school. His expression of grief was minimal, but he now nightly wets the bed and is prone to extended bouts of unshakeable silence.
The strategy of psychological disconnection in response to trauma works up to a point; the problem arises when it no longer functions smoothly. When we grow anxious, angry, and sad, when we begin to feel worthless, hopeless, and terrified of life itself, the fantasy of disconnection as a workable solution has failed. These kinds of psychological disruptions are the mind's desperate attempt to reestablish the integration of difficult experiences that have been banished to the unconscious and forgotten by the conscious mind. To disconnect from the fluid whole that comprises the link between the conscious and unconscious minds creates an internal schism, and the self is always attempting to move toward wholeness and integration.
Aditya Vinod Buchinger
Key interest areas: Data, Environment, Building
London, United Kingdom
What do you do?
I am an architect and engineering project manager, focused on bringing sustainable building principles into the AEC sector. Currently, I am developing a large-scale sustainable algae farm in a remote desert. I write about architecture (25%), sustainability (25%), ethics (10%), analytics (20%), and business (20%), drawing on my experiences in London.
How are you creating an impact in your niche?
I am working towards changing some of the AEC industry's practices in material selection and design. I believe designers have real power in hand and must actively design polluting materials out of their projects. This is a challenge, as there are very few alternatives for some very polluting materials. To overcome this barrier, I am looking at other ways to reduce emissions across the overall project's lifecycle, such as Carbon Offsets.
Where can one find your work?
Portable Pipelines with Apache Beam
There are lots of use cases for data processing and analytics pipelines, and nearly as many frameworks to use. Apache Spark is probably the de facto standard these days, but it is far from the only option. In this article, we'll take a look at Apache Flink, one such alternative, and more importantly Apache Beam, which will make it so you don't have to worry about picking the right framework ever again.
First off, Flink. At first glance, Spark and Flink seem very similar (as do some of the other frameworks). Flink is a lot newer, though, and its main distinguishing feature is the fact that it is based on stream processing, rather than batch processing. It can still emulate batch processing if required, but fundamentally it works with streams. This is in sharp contrast to Spark, which uses batches and can emulate streams with so-called micro-batches. Spark Structured Streaming does now also have a continuous processing mode, but in the most recent version at the time of writing (Spark 3.0.1), it remains experimental and, unlike Flink, does not offer exactly-once guarantees.
Choosing the right framework can be pretty difficult, then. Fortunately, Apache Beam comes to the rescue. Beam is not a data processing framework, but rather a unified programming model for data processing tasks. It ships with a number of runners capable of executing tasks using Spark, Flink, Google Cloud Dataflow, and a variety of other tools. That means you can implement your ETL logic once and run it on any of the supported engines while only needing to change some command-line parameters.
Cops Are Not the Victims Here
Officers are broadcasting petty grievances, showing their indifference to protestors’ calls for police reform
Photo: Mark Makela/Getty Images
Milkshakes, McMuffins, and tampons, oh my! Over the last few weeks, police officers across the country have claimed their food orders are being tampered with by service workers with anti-cop biases. But the efforts to distract from police violence have largely failed — if anything, the fact that cops are painting themselves as victims while they simultaneously refuse to empathize with protestors brings more attention to how little they seem to care about deadly racism.
A Los Angeles cop told a reporter on Monday that he found a tampon in his Starbucks frappuccino (the reporter then tweeted out a picture of something that looked nothing like a tampon); a sheriff’s deputy in Georgia went viral earlier this month after she posted a video of herself crying outside a McDonald’s because she feared employees would do something to her Egg McMuffin; and in New York, police accused Shake Shack employees of poisoning their milkshakes — a claim proven to be entirely fabricated.
It is not a coincidence that cops are trying to paint themselves as victims right now. Protesters across the country are demanding police officers be held accountable for their violence against Black people, and a majority of Americans agree with the demonstrations. And as calls to defund or abolish the police gain traction, it’s becoming clear public opinion is not on cops’ side.
In the same way police hope that images of white officers kneeling with protesters or hugging Black children will humanize them even as they shoot rubber bullets into peaceful crowds, the spate of (sometimes invented) anti-cop harassment is meant to make Americans feel as if the victimizers are the real victims.
Officers are angry they’re not being given the unearned deference and respect that cops are taught they are entitled to.
That’s why NYPD Commissioner Dermot Shea claimed protesters were strategically placing bricks across the city to use against cops, tweeting a picture of what was actually debris from a nearby construction site (apparently, Shea was less concerned with an instance of actual, police-propagated violence: This week, he defended the police officers who drove a car into a crowd of protesters in Brooklyn earlier this month, saying the officers didn’t violate the NYPD’s force policy), and why cops in New York falsely claimed concrete mixing samples were being disguised in ice cream containers so that demonstrators could throw them at police.
Police officers would like Americans to believe the protests are violent, and that, despite all the videos showing cops pepper-spraying, hitting, dragging, and shoving peaceful protesters, it’s the police who are the ones in danger.
But it’s not just a PR strategy that is driving the police to sometimes lie about anti-cop incidents. It’s more than that: Officers are angry they’re not being given the unearned deference and respect that cops are taught they are entitled to.
When the sheriff’s deputy in Georgia cried about her McDonald’s order, for example, she didn’t just say she was afraid about her food being tampered with. She was upset over the general shift in people’s opinions of officers.
“I don’t know what’s going on with people nowadays, but please give us a break,” she said. “If you see an officer, say ‘Thank you,’ because I don’t hear ‘Thank you’ enough anymore.”
It’s an outrageous request — to show appreciation, of all things, as Black people are being killed in the street by those meant to protect them, and as protesters are being tear-gassed by the government meant to uphold their free speech. The police officers complaining about missing food orders or fretting over bad-tasting milkshakes are demonstrating just how out of touch they are with the heart of what these protests are about.
No one is poisoning police officers. No one is tampering with their food or writing “pig” on their coffee cups. Americans just want the racism to stop. And when white cops spend more time whining over largely fake fast food controversies than they do listening to the real grievances of Black Americans, it lays plain just how warped their priorities are.
If police are afraid of what the public thinks of them, they don’t need to cry in their cars or stop eating out — they just need to stop killing people. | https://gen.medium.com/cops-are-not-the-victims-here-14747e22ece8 | ['Jessica Valenti'] | 2020-06-24 12:06:47.325000+00:00 | ['Race', 'Society', 'BlackLivesMatter', 'Police', 'Jessica Valenti'] |
The Berlin Manifesto v2.0
Fridays for Future has paved the way for a new realization of the treasures our planet holds dear. It is an achievement that history has yet to rank in its monumental scale, and one which Greta Thunberg single-handedly forced into existence. She acknowledged the grave situation our society has reached due to the inability to follow more than 30 years of climate research recommendations. She sat down next to the Swedish parliament with a self-made sign stating the fact that, from now on, she would strike school attendance to bring awareness to this situation. She succeeded, getting carried throughout the world to spit her angry message at the world leaders. Grey-haired men, wearing expensive designer suits, not knowing what was happening to them as her angry voice obliterated the schemes and excuses that had dominated climate policy for decades.
There is no doubt left among leaders of relevance now that, to put it in Greta Thunberg's words, the world is on fire indeed. Many books have since been written about her involvement in climate activism, rising like a new star in the northern sky. Books have been written about the scientific status quo. Books have been written about the policy issues involved. It might be safe to say that most words in regards to the crisis of human-created climate change have been spoken or written already.
Why write this white-paper then? As an expert in risk management and digital taxonomy, I am approaching the climate change situation from a different angle. Instead of talking about the problems and hurdles currently creating a hard impasse in the rectification of the climate situation, what is offered here is a solution. The Berlin Manifesto was initiated as part of a performance art project. Art should be not a mirror for society, but a sword that transforms it. These words by Leo Tolsky have inspired me to venture on a wild trip around Europe and the digital realm, gathering impressions from all corners of our society.
The original Berlin Manifesto is documented here on medium.com. This white-paper offers an in-depth review of all the notions suggested back then, as well as other related issues as found on the performance art hub website and in my experience as an ISO quality, risk, and security manager. If you follow the author's artistic ramblings throughout the web, you might with reason take his output with the grain of salt assigned to those that are definitely on the crazy side of our large human family.
Throughout the performance art project I have unfortunately fallen deep into the lysergic reality of hallucinogenic enlightenment. I have accidentally set fire to the apartment of my ex partner and me. I have been held captive by police, incarcerated for arson. You name it, I made the mistake. But, given the benefit of doubt, reading this white-paper I hope you will realize that my visions of a better future hold truth in my experience of two decades of transforming organizations and digital systems in a vast variety of sectors. Finally, and this is what made me set out and create an amalgam of scientific, sociological and artistic explosion, I think if Greta Thunberg has proven nothing else it was that you can definitely change the world by being crazy with maybe just a bit of charming smart added on top.
Reading the IPCC report made me realize two central issues creating the policy gridlock we have been observing for years now: the complexity of the issues involved and the lack of clear, modern risk management supervision.
The first issue is simple to understand. In the end, leaders of our international organizations like the United Nations, high-profile politicians, and economic decision makers coming from large corporations or think tanks are responsible for managing this crisis. And, while these persons are quite smart on their own for obvious reasons, their challenging day-to-day tasks do not allow them to submerge in the climate issue for months on end. The IPCC report is thousands of pages thick; even the summaries for policy makers span endless pages of numeric reiteration of degrees, emission data, and the like. It is easy to get lost between the physical reality of climate change and the political reality of executing necessary change. The climate crisis is, after all, the largest scientific endeavour since the moon landing took place. The output of thousands of the smartest minds on this planet can be cumbersome to consume.
The second issue is more intricate, and to my disappointment not really part of the public discussion around the slow implementation of change of mitigation initiatives. Modern management utilizes agile and cyclic means to ensure that even the most complex projects can be progressed towards their goal within a reasonable time frame. The unicorns forming the basis of Silicon Valley technology impacting society at unprecedented scale have shown how you are able to implement monumental change within years. While individual excellence and centralized control certainly often plays a factor at modern transformative projects, management has for decades now also aligned with modern tools that originate in the assembly construction lines of Japanese automaker Toyota in the 70s.
It was my realization that if the Facebooks and Googles of this world are able to modify our behavior as well as our economics deeply and on a global scale within years, there should be no reason why our nations are unable to do the same. Is it not a matter of life and death, compared to a mundane matter of technical comfort and monetary gain? How is it that the biggest issue humankind has ever faced by its own creation, the literal box of Pandora, is not handled with the same efficiency and care that benefits the business goals of modern super corporations? My world is different from Greta Thunberg's reality. I am 40 years old and have spent my life in the digital trenches. But where she hit a brick wall of non-realization in the way the politicians ignore the climate crisis, I hit a similar brick wall in the way the politicians failed to execute necessary change.
I have to admit that without Greta bringing this topic to everybody’s attention I might have never stumbled into my passion for saving this planet’s ecology. My life has been filled to the brim with artistic initiative and professional responsibility when I saw this brave girl screaming her truth at the gathered world leaders. But, and I cannot thank her enough for this, the more I started to invest time into the climate crisis situation, the more I hit the same feeling she emanated. How can the world be this wrong? How can she, a mere school pupil, maybe gifted with supreme intellect, but a child at the time nonetheless, see things that everyone else ignores. How can I, a mere technology evangelist and art pupil, maybe gifted with similar psychological oddities as Greta, see things that everyone else ignores.
Using the lens of risk management, I identify five major deficiencies in current climate change policy procedures:
The Paris agreement and similar additive policies by bodies like the European Union or China only loosely target emission goals. There is a lack of any definite attached mitigation mechanism. Current democratic processes do not allow for the flexibility required to implement both milestones and an actionable plan at the same time. As it stands, policy equals wishful thinking, and there is yet to be proof of goals actually being delivered on time.
Considering the global nature of carbon emissions, radiative forcing, carbon sink issues like deforestation, or ice shield loss it is without question that only a coordinated planet-wide effort is able to achieve significant chances of rectifying measures being effective. Available democratic means in the different regions, political unions, and nations are non-standardized and subject to local democratic power balances. The complexity and impact of the climate crisis demands transparent, efficient, and standardized management strategies in place.
Public discussion and policy activities focus on the reduction of carbon emissions via immediate measures. In traditional management we currently only consider the project phase. Recent and mid-term historic experience shows that the operational phase of the implemented means holds both potential and risk currently not or only unsatisfactorily managed. New technology is regularly bolstered by economic subsidy, only to dwindle as the money disappears or other inconveniences arise. There is a need for contingency on all implementation paths, as well as continued lifetime reporting.
Currently there is no democratic umbrella process in place that is able to cover all stakeholders. The ultimate impact of risks attached to climate change demands guarantees and enforceable strategy. Only the United Nations, NATO, and WTO currently hold a position feasible for policing climate change action. As such, it is vital that this position is formalized and empowered to a point where deviation from climate goals becomes the non-preferable solution. The fight against the climate crisis is only as strong as the weakest (significantly scaled) stakeholder, since any country is able to negatively impact most important climate change variables planet wide.
Currently available means which effectively reduce carbon emissions are still in large parts economic trade offs. While the balance is shifting towards clean, renewable energy and similar technology, it is without doubt an economic challenge to implement significant emission reduction goals. The global economic imbalance needs to be considered for climate equity. Also, climate change risks are asymmetrically distributed over the globe, with many risks hitting economically underprivileged regions harder.
The climate situation is ripe for more liquid democratic processes, since the necessary countermeasures are difficult to implement and often overlap with policy making. The current six-year IPCC cycle is a prime example of the deficiencies attached to slow and cumbersome political reality. Technology moves faster than politics right now, and it is a catastrophe that the best available means are often not deployed for mere delay of decision making.
An essential lateral interest in climate policy arises from the sun-setting of legacy technologies. Currently there is no interest in a global common sense of where this activity takes place. Local politics lean in favor of climate killers for political opportunity, or in favor of cleaner technology only to stop action at the borders of their legislature. The climate crisis knows no borders and no geographic segmentation, as it mostly consists of global phenomena resulting in the projected risk factors.
For the sake of discussion I name the proposed mechanisms Eco-socialism 2.0. This is not a definition rooted in the existing Eco-socialist conventions, or an argument against globalization or Neo-liberalism. To the contrary, I propose more global thinking in attacking the crisis, even if some means might imply a more local look at available resources.
Eco-socialism 2.0 is the natural term for the construct of this white-paper because, in the spirit of fair distribution of economic resources, an effective battle against the climate crisis demands a fair distribution of the load attached to reducing carbon emissions to net-zero or even negative targets within a reasonable time frame.
My endeavour into the mad descent of art intervention is not part of this white-paper. You will certainly find more on that in the future in other channels. Since my rehabilitation I have set course to provide my experience and knowledge in a fashion less destructive for society. What follows is my first try to make amends. I would like to wrap this introduction up with an apology to those that paid a dear price for the unrestrained passion raised in my hell trip: my ex partner R., our landlord, my friends, family, and the countless others I have hurt without restraint in the name of truth and the greater good. Art should be a sword, on this one I agree with Leo, but you should not hit the innocent with it.
Five years of Paris
It is a common misconception, which is only slowly disappearing, that the climate crisis is an issue only manifest in the far future. The facts are growing stronger by the day that global warming drivers already raise severe issues today. Around the globe the impact can be felt in extreme weather situations and catastrophes increasing in frequency. It is vital for the case of fighting the climate crisis that the correlation between such incidents and climate change becomes better understood and part of public awareness.
The disconnect between carbon emissions and climate impacts, which plays out on the scale of decades, creates a difficult landscape for policy making. Hard emission cuts cost money, cut into labor and workforce interests, and can impact the everyday convenience of citizens. There is an obvious need for a strong argument in favor of climate change mitigation measures. The impact of global warming today is exactly that: a good case for investing into the future of our planet.
The first victims of global warming live in the global south. Equity will be a topic later on in this white-paper, but looking at the Pacific island nations right away highlights the severity of global warming today for the many humans inhabiting sea level habitat fully exposed to oceanic changes. IPCC report estimates are uncertain about the effective sea level rise to be expected for the different pathways, but numbers between 1.3 m and 2.4 m by 2100 should easily highlight the massive issue on the horizon for island nations. Their countries could simply disappear from the maps of the earth.
Current sea level rise, though, is only around 4 mm per year, so this effect is not felt much today. The island nations do, however, see a high increase in the number of cyclone events. Cyclones are huge rotating air masses that exert low air pressure in their center, thus creating high waves and, next to heavy thunderstorms, severe flooding. Similar events are felt as hurricanes in the Atlantic region with increasing frequency.
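The gap between today's observed rise and the end-of-century projections can be made concrete with simple arithmetic. The following Python sketch uses the figures cited above (roughly 4 mm per year observed, 1.3 m to 2.4 m projected by 2100); the 2020 start year is an assumption for illustration:

```python
# Compare a linear continuation of today's ~4 mm/year sea level rise
# with the 1.3-2.4 m projections for 2100 cited above.
# All numbers are illustrative assumptions, not model output.

years_left = 2100 - 2020            # horizon to end of century (assumed start: 2020)
current_rate_mm = 4.0               # observed rise, mm per year

linear_rise_m = years_left * current_rate_mm / 1000.0
print(f"Linear continuation: {linear_rise_m:.2f} m by 2100")  # 0.32 m

# The projected range therefore implies a roughly 4x-7x acceleration
# of the mean rate over the rest of the century.
for projected_m in (1.3, 2.4):
    mean_rate_mm = projected_m * 1000.0 / years_left
    print(f"{projected_m} m by 2100 needs a mean rate of {mean_rate_mm:.1f} mm/year")
```

The point of the sketch is that the projections only make sense under strong acceleration, which is why the rise is barely felt today yet existential by century's end.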
The effects of cyclones on the small Pacific island nations are catastrophic. These countries are put at extreme risk by such events due to their remoteness: foreign help is likely days away for many of these nations. Due to slow economic development many such nations also suffer from low infrastructure and building quality, with sanitation, fresh water, electricity, and other basic needs for living being underdeveloped and directly impacted by cyclone events. Children of these nations are kept busy throughout the year heaving mud out of their flooded homes instead of attending school or enjoying a risk free childhood.
Wildfires are seeing more awareness in western media due to their immediate vicinity to highly populated areas. Wildfires pose an extreme risk to inhabitants of affected dry vegetation areas, with devastating destruction resulting from fires spreading at high speed according to topology and weather situation. 2020 has been an unfortunate record year for California, where wildfires have burnt more than 4 million acres of land, encompassing roughly 4% of the total area of the state. The increase in wildfire frequency, the increase in area burnt, as well as the extension of the wildfire season into previously colder months have all been attributed to climate change.
Huge power outages affecting hundreds of thousands of people are caused by such events. It is only thanks to the highly equipped emergency rescue operations of regions like California that people are able to escape or be rescued. Wildfires are also seen in less developed regions of the globe, where severe risks to human life are incurred because the infrastructure is not up to par to handle these often extreme fire blazes.
India, as another example, is also already seeing heavy effects from global warming in ever increasing water shortages. In 2018 there were already 600 million citizens facing acute water shortages. The decrease of snow cover during the winter season affects the water supply throughout the year. Fresh water is distributed over thousands of miles, and rainfall capture can only offset the change in spring water supply where effective technology is available. At the same time the monsoon increases in intensity, so drought periods alternate with heavy flooding. Catastrophic effects are observed in both residential and agricultural habitat.
Even western nations not commonly associated with heavy weather effects are already feeling global warming. In 2020 Germany saw its worst drought period in over 250 years. The summer heat exceeds 30 degrees Celsius over extended periods of time, previously unknown in this cool, Gulf stream influenced climate region. Under such conditions rivers run dry and trees die, thinning out the essential forest biospheres. Both the ecological and the economic effects are devastating. Inland shipping is blocked by lowering river levels, and crops are failing due to lack of irrigation water. The country is not yet equipped to move the large quantities of missing water supply over the large distances from the Alpine mountain regions into the northern lowlands.
Heat waves are particularly intense in the global south. Africa, for example, is warming faster than the world average. Worsening the response situation for the African population is the lack of proper reporting in the often less developed countries. Mean temperatures above 29 degrees Celsius are considered life threatening, numbers previously only recorded for small parts of the Sahara desert. The lack of data and response plans is already putting people's lives at risk.
Scientists are also predicting that global warming might increase the severity of snow blizzards in the winter. The Alpine region has seen up to 2 m of snow in valley areas in 2020, with precipitation exceeding previous record years. Even if not all events can be directly linked to climate change effects, the increase in frequency and severity of weather extremes and catastrophes is without doubt already showing us the direction our climate is heading.
To find the popularity vote required for stringent climate change policy, it is essential that news and education are adapted to highlight the effects of global warming as they are felt already now. It is much too convenient to view the crisis as some remote or distant future thing that might as well not exist or never affect some area or population. Fact is, all IPCC pathways show severe global warming effects, and the current carbon emission figures indicate that it will require a massive feat to achieve anything near the Paris agreement goal of “well below 2 degrees Celsius”.
At the 2020 five year anniversary “Climate Ambition Summit”, world leaders like Xi Jinping, EU commission president Ursula von der Leyen, and Pope Francis urged swifter climate action.
Generally, many countries are apt to promise mid term goals but fail to commit to immediate mitigation. In addition, the Paris agreement only requires the signatory parties to publish nationally determined contributions (NDC), but fails to enforce the legislature necessary to make these measures binding. The result is that emission levels are still rising at an alarming rate, with current emission levels tracking the previously thought to be implausible representative concentration pathway (RCP) 8.5.
The intermediate IPCC 1.5 degrees Celsius report states around 1 degree Celsius of existing global warming compared to pre-industrial levels, leaving anything between a few years and at most around 15 years for the RCP 1.9 goal of 1.5 degrees Celsius. It seems implausible to reach net zero emissions within the designated time frame, and even the Paris agreement's “well below two degrees Celsius” will challenge effective carbon emission reduction progress.
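The "years left" estimate follows from dividing the remaining carbon budget by annual emissions. A minimal sketch, assuming figures in the order of magnitude reported around the SR1.5 report (a remaining budget of roughly 420 to 580 GtCO2 and annual emissions of about 42 GtCO2; both are illustrative assumptions, not authoritative values):

```python
# Back-of-envelope arithmetic behind the 1.5 degrees Celsius time window:
# years left = remaining carbon budget / annual emissions.
# All figures below are rough illustrative assumptions.

annual_emissions_gt = 42.0          # assumed global CO2 emissions, GtCO2/year

for remaining_budget_gt in (420.0, 580.0):   # assumed remaining budget range, GtCO2
    years_at_current_rate = remaining_budget_gt / annual_emissions_gt
    print(f"Budget {remaining_budget_gt:.0f} GtCO2 -> "
          f"~{years_at_current_rate:.0f} years at constant emissions")
```

Under these assumptions the window comes out at roughly 10 to 14 years of constant emissions, consistent with the "at most around 15 years" reading above.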
The fragility of the Paris agreement has further been highlighted by actions of the likes of Donald Trump, who opted to withdraw the United States from the commitment altogether. Luckily the Democrats won the 2020 election, and Joe Biden promises to reestablish the country's Paris agreement membership with increased focus on global warming in policy making.
Notable positive commitments have been made in 2020. The European Union has increased its aspirations with a 55% emission reduction from 1990 levels by 2030. China designated 2030 for an “at least” 65% reduction of carbon intensity compared to 2005. Unfortunately there is neither a standard for climate target designations nor do all countries or unions agree on a scientific baseline. Beyond the voluntary nature of legislative pressure, this results in skewed interpretations that have been called out as cheating previous goals by critical climate activism groups.
As a side-note, the Corona pandemic and the emission reductions caused by economic lockdown mechanisms have been shown to have little effect on the overall global warming situation. To make things worse, net zero goals generally include vast quantities of carbon emissions to be reduced by as of now non-existent carbon sink technology or reforestation initiatives of unprecedented scale. The current situation can be summed up as dire, or to put it in the brutally honest words of Greta Thunberg: “as #ParisAgreement turns five, our leaders present their ‘hopeful’ distant hypothetical targets, ‘net zero’ loopholes and empty promises”.
Overall, the IPCC and United Nations endeavors are failing to deliver the necessary guidance and policy pressure to combat the real-world political conflict between climate mitigation measures and ongoing economic challenges. Current technology still lacks the means for easy migration from fossil fuel technology into fully green variants at cost parity.
Energy production, which accounts for around three quarters of greenhouse gas (GHG) emissions, sits at 14% carbon free technology. This includes renewable energy like solar and wind power as well as traditional nuclear energy. Electric mobility stands at around 3% of new car sales, with subsidy unable to sufficiently offset high prices and low range. Industrial carbon reduction technology like hydrogen powered furnaces is still in its infancy.
It should be obvious that Greta's harsh words only call out the truth that most still refuse to see: without policy pressure mounting to much higher levels, climate targets are doomed to fail and to result in catastrophic global warming of 3 degrees Celsius or even worse. This puts IPCC RCP scenarios like Greenland icecap or permafrost melting into the domain of plausibility, with catastrophic effects on sea level rise or extreme heat and drought rendering large areas of our planet's surface unlivable in the long term. The general biosphere consequences of global warming reaching such levels are not well understood, but expected to result in large-scale species extinction.
Risks involved in the worst global warming outcomes include high political instability, human mass migration scenarios, a devastating rise in hunger and poverty, and other lateral effects in a form unseen since the modern age. The climate crisis might as well be called the end of earth and society as we know it. The goal of this white-paper is not to recap the climate crisis, as that has — thanks to Greta Thunberg's Fridays for Future initiative building awareness — been done countless times in recent years. I aim to provide an out-of-the-box, fresh view on the project, risk and operations management perspective of the IPCC's and United Nations' well-intended but ultimately doomed mitigation effort.
Throughout all the negative pictures outlined above I stand with the optimism shared by the young climate activists that the necessary change is possible. In the following chapter this white-paper will highlight the shortcomings of the management methodologies provided by the IPCC, and how modern management thinking and existing tooling can help tackle these extremely difficult issues. This thinking does not arise out of my personal good spirit or some supernatural belief system, but from the fact that in the digital domain where I come from, cataclysmic global-scale transformations have been achieved in recent decades with similar management strategies.
Management strategies
The IPCC working group III has provided valuable meta research on the status of risk management strategies at the time of writing of the 2015 report. It highlights the global warming crisis as a so-called risk-risk problem, with both climate change and mitigation cost involving risk for all involved parties (the countries of our planet). Additionally, the management problem is burdened by the complexities involved in the large number of stakeholders (all countries of our planet), and the asymmetric nature of these stakeholders (developed vs. developing countries).
The report encourages international collective action in a very short outline, but fails to explain how this might be achieved without proper means being available beyond the United Nations' loose international policy force. From a positive angle, the focus on the definition and quantification of uncertainty, which stood sound in 2015, can be realigned today. Many previously unknown links between weather extremes or catastrophes and climate change have since been identified, and carbon emission pathways have worsened to a point where policy makers should more easily lean on the climate risk side of the situation.
Risk perception and awareness, which receives a complete treatise in the report, is, as previously highlighted, also no longer as important as before. On the other hand, the references to intuitive decision making and the consequential problems in risk mitigation read rather prophetic today. Many of the policy initiatives summed up above give the feeling that they might be a compromise between the reality of majority vote and the deficiencies of intuitive political engagement.
The IPCC working group has highlighted the issue of short-term thinking and present bias, but seemingly not with enough emphasis. The five year Paris agreement United Nations meeting has shown that countries are starting to realize the necessity of harder mid- and long-term commitments, but the status quo shown previously highlights that this still fails to result in effective means being deployed with sufficient reliability.
Ambiguity, which is only given a very short outline, is one of the main deficiencies of the original IPCC reporting form. In its demand for scientific soundness and completeness, the large authorship group seems to produce an extremely taxing form of literature. The public discussion around the climate crisis still, after years of ongoing and informed debate, reflects high ambivalence between the extreme positions of climate activism and global warming negation. This should highlight that from a management perspective there is a lack of middle ground translating between expert comment and decision maker. The recent 1.5 degrees Celsius report summary for policy makers shows increased legibility, but the vast amount of scientific detail and figures still leaves strong doubt whether the message is transported in proper fashion for decision makers.
The obvious oversight of IPCC working group III is the omission of project and operations management from the guidance. Beyond the explanation of risk management approaches, the IPCC expects, with well intended reasoning, that the countries and international organizations like the United Nations will be able to deliver the processes required to execute all available means. If the recent years have shown one thing, it is that this is not the case.
Anyone accustomed to large scale project and operations management will immediately realize that it is the project structure of the climate crisis that is doomed to fail. Large numbers of stakeholders together with conflicting interests, highly asymmetrical means of delivery, and a total lack of high level governance with executive means would imply failure for most if not all projects on much lesser scale in commercial context.
From a historical point of view the working group also seems to neglect the ultimate consequences of failure in what they call the tail risks of extreme climate change events, like high sea level rise destroying large, densely inhabited areas. Considering the ultimate consequences of such events, even the off chance of non-conforming partners putting the mitigation strategies at risk demands a much more aggressive stance on risk management.
The climate crisis should therefore be considered more akin to global conflict situations like nuclear proliferation, where game theoretic approaches have decades of experience in handling such scenarios. This finds only minute mention in the IPCC report. For the sake of sustainable international security it should definitely find proper attention in the future.
Now, considering the climate crisis mitigation as failed project, modern management theory suggests the following failure recovery mechanisms:
There is a clear need for the IPCC and/or the United Nations to recruit the necessary technical and/or non-technical resources that are currently lacking. To put things simply, the IPCC consists of thousands of scientists, but the whole issue lacks project management and oversight with executive power. Any company facing a similar issue would hopefully find and appoint the best available project manager with experience executing on the observed, and so far failed, scale.
Any risk management problem can also be seen as a resource management problem. This is obvious in the global warming crisis, where carbon budgeting is the central issue (with other GHG emissions also playing a role not to be ignored). Putting the situation in a wider context, the effective deployment of green technology incurs second level resource issues. For example, electric vehicle acceptance is largely linked to available battery technology and pricing, with very specific and in parts scarce resources attached.
Projects require clear operational metrics. The IPCC has delivered on that front by defining variables like carbon budget and radiative forcing. Unfortunately, there is both a lack of enforcement of the stakeholders' metric reporting as well as a reporting frequency that leaves much to be desired. The IPCC reporting interval is six years, and only in 2020, marking the fifth year of the Paris agreement, are first hard numbers of global (lack of) progress appearing in intermediate reports.
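Radiative forcing, one of the metrics named above, does have a well-established simplified form for CO2: the Myhre approximation dF = 5.35 · ln(C/C0) in W/m². A small sketch, with the concentration values chosen for illustration:

```python
import math

# Standard simplified expression for CO2 radiative forcing relative to a
# pre-industrial baseline: dF = 5.35 * ln(C / C0), in W/m^2 (Myhre et al.
# approximation). Concentration values below are illustrative assumptions.

def co2_forcing(c_ppm: float, c0_ppm: float = 278.0) -> float:
    """Radiative forcing in W/m^2 for a CO2 concentration c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"~415 ppm (roughly today): {co2_forcing(415.0):.2f} W/m^2")
print(f"556 ppm (2x pre-industrial): {co2_forcing(556.0):.2f} W/m^2")
```

A doubling of CO2 yields 5.35 · ln 2 ≈ 3.7 W/m², which is the canonical reference value behind climate sensitivity discussions; metrics of this kind are what standardized reporting would need to track continuously rather than once per six-year cycle.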
Even projects with a much lower number of stakeholders require highly efficient means of communication and knowledge management in a commercial context. The cumbersome political reality of the United Nations and IPCC does not show the necessary tooling. Yearly conferences and journal or press release publications are vastly deficient for the delivery of complex project goals with ongoing unexpected hurdles.
This white-paper is written to apply such recovery means to a hypothetical climate crisis response scenario. It is important to consider that currently there is no framework within which to execute the proposed mechanisms. As explained, the IPCC has good reason to leave such measures open, since there is a lack of precedence for global cooperation on such a scale. Unfortunately, and I do not stand alone in this critical mindset, without fundamental empowerment of the United Nations, the IPCC or some yet to be established global cooperation it is unlikely that anything close to the Paris agreement goals is attainable at all.
The second central aspect of the project recovery this white-paper proposes is that our world is largely dominated by Neo-liberal hyper-capitalistic forces. While we do see a variety of political landscapes, ranging from communist or social democratic governance over to fully liberalized markets, it is my firm belief that in the end money dictates the direction the ship is heading. Therefore it is paramount to start viewing the climate crisis as an economic crisis. Attaching a monetary price to climate variables might be challenging from a scientific point of view, since it introduces yet another step of uncertainty, but I am convinced that the value of this mindset change is worth any effort attached.
The third and last overall recommendation this white-paper follows is that good management for a large group of peers requires some kind of standardized processes. While the IPCC has, as established, delivered benchmark variables, these are not applied in the same fashion. Countries use different baselines for carbon budgeting, and for the operational management of mitigation approaches there is no standard or central means of tracking at all. Fortunately the International Organization for Standardization (ISO) already covers many of the required management controls.
What follows are a number of management issues that in my view need to be tackled in a concerted fashion for the climate crisis to become a manageable topic. Outside the scope of this white-paper, we are working on the delivery of the required tooling in a project called FRIDAY4FUTURE.NET. The goal of this project is the implementation of a crowd-sourced reporting platform that might serve as a reference implementation for future designated information management initiatives by the United Nations and IPCC. Only transparent open data reported in regular fashion will provide the necessary agility to continuously adapt policy to economic, industrial and climate reality. More detail on this will follow at the end of the white-paper.
Climate assets
On the most basic level, as explained, the climate crisis is an asset management issue. Carbon dioxide and other GHGs are assets that require limitation to curb global warming. Fast scaling of new green technology is also limited by specific resources, which simply defines further assets to be managed. Whether a country has access to the natural resources necessary to produce batteries should not be the limiting factor for electric vehicle acceptance.
Free markets and attached issues like monopolies or artificial resource scarcity do not provide the required leverage for really fast scaling of all available means. Similar issues arise from knowledge as an asset. Intellectual property protection is a necessary evil of technological innovation to protect research and development investment. On the other hand, it should not be the case that tools available to capture carbon emissions or avoid them altogether are not utilized because the necessary technology is sitting with a company that simply does not do business in some region of the world.
As such I identify five levels of assets in regards to the climate crisis response:
First level assets are variables directly impacting global warming. This includes emissions of carbon dioxide and the other GHGs, as well as sinks like plants, oceans and other soil. The former affect global warming in a negative fashion by driving it; the latter affect it in a positive fashion by offsetting the emitted gases.
Second level assets are physical resources indirectly impacting global warming. This includes all major resources required to build mitigation technology, as well as other major consumables that affect global warming, like concrete. The acquisition of these resources needs to be decoupled from free market mechanisms to ensure all countries' access in proper quantities. Additionally, resource limitations and proper resource utilization need to be identified and managed in consequence.
Third level assets are immaterial resources that directly incur second level asset cost. This includes energy or fuel consumption. The calculation of these assets needs to be decoupled from the physical resource cost because the root causation enables proper management of the usage. For example, flight miles allow more concrete processing than abstract figures of crude oil extraction.
Fourth level assets are immaterial values indirectly impacting global warming. This includes all intellectual property required to build mitigation technology. This intellectual property needs to be put into the public domain so that innovation can flourish without artificial slow down.
The fifth level asset is the cash required to implement mitigation measures. The political reality of equity alone validates beginning with economic management on a global level. Additionally, even developed nations are seeing difficulty in delivering the necessary subsidy to effectively reduce emission levels in the short term.
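The third level flight-miles example above can be sketched in code: translating an immaterial consumption figure (passenger kilometres flown) into the fuel and CO2 it causes gives the concrete handle that abstract crude oil figures lack. The emission factors below are rough assumptions for illustration, not authoritative accounting values:

```python
# Hypothetical third-level asset accounting: from passenger-km to CO2.
# Both factors below are rough illustrative assumptions.

FUEL_PER_PAX_KM_KG = 0.03    # kerosene burn per passenger-km (assumed average)
CO2_PER_KG_FUEL_KG = 3.16    # CO2 released per kg of kerosene burnt

def flight_emissions_kg(passenger_km: float) -> float:
    """CO2 in kg caused by a given number of passenger-km flown."""
    return passenger_km * FUEL_PER_PAX_KM_KG * CO2_PER_KG_FUEL_KG

# A single long-haul return trip of ~12,000 passenger-km:
print(f"{flight_emissions_kg(12000):.0f} kg CO2")
```

Accounting at this level makes the usage itself manageable: a policy can target flight miles directly instead of the upstream oil extraction figures they eventually show up in.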
Obviously these asset management efforts incur economic effects that also require mitigation. Subsidy of resource or intellectual property holders might be a way to offset the costs involved. New means of taxing international trade might be another form. Nuclear resources like Uranium might serve as an example of regulated assets and ways to manage distribution and fair cost sharing. Trade unions like the European Union or the African Union might serve as a priori constructs to implement economic means regionally, with global oversight by the World Trade Organization. This white-paper will not consider the definite means to execute such regulatory intrusions, but expects that without them the climate crisis will not be solved.
The second and third level resources show a large overlap and a possibly endless quantity of assets, and should be considered on a case by case basis based on their overall contribution to global warming. Major climate crisis drivers like transportation or large industries certainly validate specific consideration, while minor natural resource applications might not show the necessary cost/benefit ratio to do so.
Figure 1: Asset management levels
Above you can see the five levels of managed assets that affect the climate crisis directly or indirectly. Currently the IPCC only recommends the management of the first level of the pyramid, as implemented by the Paris agreement. Unfortunately, as explained before, the interactions between the globalized free market, political majority power, and the lower pyramid levels are too intricate to allow for a naturally developing overall solution.
This is the reason why the mindset in this white-paper takes a holistic approach to the management of the climate issue. As will be explained later on, the sensitive topic of climate equity will validate the existence of the fifth level, namely the effective cost of mitigation. Only a price tag will allow for the proper distribution of cost between the global north and the global south, where a clear economic slope is reality.
This scheme is accompanied by an additional form of climate asset, namely the human costs incurred by the risks of the selected RCP. If a realistic current goal is three degrees Celsius of global warming, the respective RCP risks should be selected and their cost projected over the course of the century:
First level human climate output is the capability of food production. As physical systems are increasingly disturbed, the means of food production become challenged. While some regions even see increased means of food production (e.g. increased fish stock in the northern hemisphere), it is again especially the global south that is challenged by drought and other weather phenomena.
Second level human climate output is the overall human health situation. Besides lack of food there is the additional challenge of fresh water supply, the increased risk of heat or cold related health hazards, hurricane or cyclone hazards as well as other weather and catastrophe factors.
Third level human climate output is the livelihood and security for the human population of an area. As the first and second level conditions decrease there will be large areas of our planet that become difficult or impossible to live in, accompanied by physical effects like sea level rise. This will result in human migration of unprecedented scale, with climate refugees incurring both security risk and security cost.
Fourth level human climate output is the economic cost of the aforementioned levels. Each of the climate risks involved incurs a monetary cost value that should be correlated with the mitigation cost of the asset pyramid.
This is subject to limitation due to the uncertainty principles outlined in detail in the working group III report. Nonetheless some cost factor should be defined for the four levels to ensure a proper overall economic cost tally.
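One way the cost tally over the human climate output levels could absorb this uncertainty limitation is to carry a low/high band per level rather than a single number. A purely hypothetical sketch, with all level names and figures invented for illustration:

```python
# Hypothetical cost tally over the human climate output levels.
# Each level carries a projected cost and an uncertainty factor; the
# tally sums a low/high band instead of a single point estimate.
# All figures are invented for illustration.

levels = {
    "food production":     {"cost_bn": 800.0,  "uncertainty": 0.5},
    "human health":        {"cost_bn": 600.0,  "uncertainty": 0.6},
    "livelihood/security": {"cost_bn": 1500.0, "uncertainty": 0.8},
}

low = sum(l["cost_bn"] * (1 - l["uncertainty"]) for l in levels.values())
high = sum(l["cost_bn"] * (1 + l["uncertainty"]) for l in levels.values())
print(f"Projected fourth level cost band: {low:.0f} - {high:.0f} bn")
```

Reporting a band per RCP keeps the economic correlation with the asset pyramid honest about uncertainty while still producing the single comparable cost figure the proposed management process needs.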
Figure 2: Human climate output pyramid
Together these two asset types form the overall monetary value or cost of a chosen RCP. Currently the climate response process lacks this kind of transparency. This is the case at the time IPCC reporting takes place, where the abstract view of the climate situation disconnects completely from economic factors. It is especially significant though over the course of an IPCC iteration, where the divergence between well-intended but unrealistic national commitments increases the attached cost without any means of quantification.
For the sake of completeness I will now highlight the third kind of cost category, namely the biosphere cost. I would like to stress though that in my interpretation of the current pathway situation, purely ecological effects might be a luxury we can no longer afford to protect. Purely ensuring human survival, economic stability, and security will be challenging enough as it is. The idealistic Paris goals are less than realistic at the moment, and the consequences of that are indeed dire.
Since heavy species extinction is expected in all currently attainable pathways, the quantification of biosphere effects should at least allow for risk response scenarios. For example, artificial animal breeding or plant fostering might allow for the protection of some species. This category also incurs potentially severe tail costs due to the intricate links between species in their natural habitats, which are often not well understood and should be closely tracked as species migration and/or extinction increases. The following biosphere climate output variables are identified:
First level biosphere output is the permanent physical system change. This includes sea level rise removing land mass, ice cover disappearing, permafrost thawing, deserts growing and similar effects.
Second level biosphere output is the transient physical system change. Catastrophes like wildfires, hurricanes, and cyclones, as well as weather condition changes like heat and drought, will change the livability of areas for temporary periods throughout the year.
Third level biosphere output is the ecosystem impact. Species will either migrate or go extinct in regions affected by the aforementioned effects.
Identifying the likely scenarios will help proper biological risk response. Unfortunately, as mentioned these effects are often not well understood, so the response actions will require close monitoring of the situation as it increases in severity over the course of the century.
Now that we have identified all assets involved in the climate crisis, traditional asset management suggests defining and managing the processes of acquisition, logistics, maintenance and support, as well as disposal or renewal of assets. This is so-called life-cycle or operations management, which the following chapter explains in more detail.
Call for management standards
Currently, the IPCC and in consequence the United Nations only recommend the management of the first asset level, namely the GHG emissions. It expects each nation to properly extract the underlying process structure and implement more granular measures across the different categories explained above. While some countries like the United Kingdom do show in-depth management of many affected variables on their own, the same cannot be said for a large part of the stakeholder group. Overall, current policy reality creates a large carbon budget debt with no clear indication of how future generations might apply rectification measures.
While knowledge transfer is certainly within the scope of United Nations efforts to coordinate a global climate crisis response, countries like the United Kingdom need to be properly utilized as best-practice examples, with clear lessons learned and operations management experience extracted for replication. Ideally this takes the form of a new family of ISO standards specifically designed for governments and large corporations to implement such best practice on a policy level. Currently available ISO standards cover many facets of environmental management, but do not target countries and policy makers per se.
The proposed new standard might provide a governmental umbrella standard covering the following preexisting ISO standards:
ISO 14000 environmental management
ISO 21930 sustainability in building and civil engineering
ISO 50001 energy management
In addition to these specific environment-related standards the following ISO standards already exist to define generic management systems with reliability:
ISO 9000 quality management
ISO 27000 information security
ISO 30401 knowledge management
ISO 31000 risk management
ISO 55000 asset management
Environmental standard coverage is only partial in relation to the previously identified assets. Additionally, the complexity of applying such a large number of standards in an institution, let alone a group of institutions like a government, is extremely taxing on the organizational and process capabilities needed to deliver. I would therefore recommend the implementation of a new, dedicated family of standards optimized specifically for the use of governments in global warming response.
Having defined a specific governmental climate crisis response standard, policies could be adapted to enforce the implementation of such a standard in addition to the definition of (better comparable) climate goals. This would provide more leverage for the United Nations, the IPCC, or some other yet-to-be-defined entity with global scope to apply a higher management cycle frequency. Ideally, as commercial and industrial best-practice examples show, some kind of agile process would allow a continuous mode of operations with an up-to-date picture of both the climate and the mitigation situation. More on this will follow later on.
The new suggested umbrella standard family would draw from and/or design the following management controls:
GHG budgeting includes the processes and means required to effectively manage GHG emissions and offsetting mechanisms like CO2 certificates (or rather an improved future mechanism).
Natural resource management includes standard asset management mechanisms for all critical natural resources like carbon sink vegetation.
Energy management includes all forms of electricity, heat, cold, or other forms of stored and transported energy.
Logistics and transportation management includes movement of goods and people via land, sea, or air travel.
Building and civil engineering management includes standards for the sustainable construction and operations of buildings and infrastructure.
Industrial management includes standards for the operation and control of industrial complexes.
Intellectual property management includes the storage, distribution, and cost sharing of climate crisis mitigation know-how.
Agricultural management defines the means to produce crops and livestock, including irrigation methods and water consumption standards.
Fishery management defines the means to produce fish stock.
Healthcare management standardizes the climate crisis response healthcare procedures.
Refugee management standardizes the means of moving large numbers of people during or after a natural disaster or negative weather condition change.
Biosphere management includes standards for climate crisis response action in regard to animal or plant migration and/or extinction scenarios.
Budgeting and controlling provides some form of economic bookkeeping to encompass the total cost of all aforementioned processes.
Asset and information management enables the proper continuous control of all aforementioned assets.
Project and change management enables the implementation of this vast standard family.
Information security management ensures the secure implementation and operation of all aforementioned processes.
As mentioned above there is a large number of existing ISO standards that can provide points of reference in the implementation of these new processes. The wheel does not need to be reinvented here.
The resulting management topology is multi-layered with a clear meta management level at the bottom:
Figure 3: Management topology
Cyclic management
Most forms of modern management philosophy are cyclic in nature. The 2020 Paris anniversary situation highlights the reason for this change in management mode. Projects with long management phases and up-front or top-down waterfall planning are prone to large deviations from their original goals. Considering a six-year cycle for the IPCC, continuing at the current speed of iteration would imply that in at most three cycles the century's course would be laid out without any means left for adaptation.
Cyclic project management acknowledges the fact that things go wrong, and implements a higher frequency of iteration to give leeway for regular changes in course. Obviously, agility as displayed in commercial projects is not realistic for the IPCC with thousands of scientists, or for the Paris agreement with close to two hundred nation signatories.
The cyclic nature of climate change mitigation should best consider three different levels with varying frequency:
High level policy cycle where Paris agreement signatories meet and are able to commit effective policy changes
Medium level climate panel cycle where IPCC working groups meet and are able to deliver wrapped up publications
Low level data reporting cycle where all climate relevant data is provided in up-to-date fashion by all stakeholders
Given the utmost importance of the climate crisis finally arriving in the minds of all relevant world leaders, a two-year period for the high-level cycle should be realistic to attain. This should be well aligned with national budget phases, which might require adaptation for full efficiency.
Climate scientists should target one year or less for future reporting phases. It is to be considered that climate modeling is a highly complex and compute-intensive process, but there is no need for full new simulations every year. The current practice of intermediate reports could, however, be extended to more thorough yearly reports with an updated calculation of the most important pathways.
Data reporting needs to happen at a much higher pace for the management controls to function efficiently. A six-week cycle would enable even market-driven topics like energy management to react in proper fashion.
Figure 4: Tri-cyclic management phases
Such a tri-cyclic management model would look as follows:
The six-week data cycle would take a lot of pressure off the IPCC working groups, since many report consumers mainly require updated data sets to work with. A transparent open-data platform should provide an ongoing view into all involved climate input and output variables. FRIDAY4FUTURE.NET will be created to establish a reference crowd-sourced variant of such a platform.
The goal of this project is to create pressure on the involved policy makers today to collect data at a higher frequency. This should deliver valuable data sets within a short-term time frame and prepare governments and academic institutions, in a soft manner, for the time when an official United Nations governed body like the IPCC starts to operate in a more agile fashion. The following chapter will describe what the reference platform will look like.
Open climate platform
I have asked myself the question "How do you create an open climate data platform?" often in recent times. As things stand there is a multitude of different sources around the IPCC, governmental publications, and academic research, and then there is an obvious huge gap in the available data sets where nothing seems to be handily available. Would it not be nice, unrelated to the suggested change of course in this white-paper, to have a website where you could go to and simply access the latest climate-related data?
As it stands, even accessing the data beneath the nice charts and visualizations in the IPCC reports is a difficult, nigh-impossible feat. If you venture into the heart of the servers storing the climate data for the IPCC you will find endless troves of simulation runs and mountains of related data. But simple, easy-to-implement data sets? You are out of luck on that one.
Now, if you would like to find global carbon emission data on a per country/year level, or even lateral climate data like solar irradiance maps, wind maps, or other information that influences how climate change mitigation might work out in some region of the world, you are left on a hard (but nonetheless exciting) treasure hunt.
How would you collect all that data and make it centrally available? It became clear to me in one of my endless climate research sessions: Wikipedia. OpenStreetMap. Crowd sourcing is the magic spell for accumulating data distributed all over the world in hard-to-access places. We need to activate national citizens who speak the administrative language to contact their governments for carbon budget data. We need to activate IPCC researchers, or students in their vicinity, to retrieve those hard-to-find spreadsheets that actually built the charts in the reports. Crowd sourcing allows us to divide and conquer the issue of data acquisition.
The charming thing about such an approach is that we can right away start to build soft pressure on the entities less willing or capable of producing the data on time, let alone in my desired higher-frequency fashion. Regularly asking a government's environmental department for carbon emission data will trigger more efficient retrieval processes in-house. If commercial users get accustomed to semiannual updates from one nation, they might just as well start to ask why other nations only provide years-old data.
Crowd sourcing is a proven mechanism that has created invaluable data sets in the past. And the high criticality of the climate crisis certainly validates the effort involved in creating a platform that makes doing so efficient, maybe even fun. This is the mission of FRIDAY4FUTURE.NET, where we set out to do just that: save the world with data. The platform is currently in the bootstrapping phase and will launch in early 2021. Get in touch if you plan to utilize this kind of data, or if you think you might participate in crowd-sourcing activities.
Summary and Outlook
This white-paper is written to serve as an out-of-the-box view on climate crisis management. The central premises are:
We do live in a time already affected by global warming
We do have the process and operational tools necessary to manage complex projects
We currently lack the application of such tools to the carbon budgeting measures
I hope that the information contained within this paper finds an interested reader or two, and spawns interest in ISO-grade management controls. There is an obvious challenge in the lack of executive power overseeing the whole climate situation. The United Nations as such is a loosely coupled union, and the lack of consequential decisions around the Paris agreement highlights this unfortunate situation.
Maybe the upcoming IPCC report will further sharpen the public perception of climate change projections. The five-year Paris anniversary already marked a step in the right direction, with countries strengthening their zero-emission goals. Personally, I keep my fingers crossed, since the future livelihood of us on our planet, as well as our whole biosphere with the wonder that is flora and fauna, depends on it.
Testing Strategies for Chatbots (Part 1)— Testing Their Classifiers | Is your chatbot sorting user utterances into the right intent categories?
Artificial intelligence systems are ultimately software systems and all software systems need to be tested. AI systems require new testing approaches. In 2016 I wrote generically about testing cognitive systems, in this post I focus specifically on testing chatbots.
There are two major areas to focus on when testing your chatbot. The first is assessing the performance of the classifiers in the system. The second is testing that any branching logic is appropriately routing users through the dialog nodes and updating state as needed. The first is a “unit test” of the intent classifier (covered in this post) and the second is a “unit test” of the dialog routing logic (covered in the next post).
Conversational intent testing
Chatbots like Watson Assistant are trained with “ground truth”, a set of sample utterances that are marked with target intents and entities labeled by subject matter experts. We assess the training performance of the chatbot’s classifier by submitting a collection of utterances and examining whether the chatbot’s classified intent matches the intent from ground truth.
Unlike traditional unit testing we are not expecting 100% performance from our classifier. Indeed, if we had 100% performance our classifier would surely be overfit! Instead our goal from testing the classifier is to find its strengths and weaknesses. We explore the weaknesses to find patterns and we use these patterns to help improve the classifier through either adding new ground truth or modifying the intent classification scheme.
WA-Testing-Tool is an open-source tool for testing Watson Assistant workspaces; it is available at https://github.com/cognitive-catalyst/WA-Testing-Tool. It tests the workspace's classifier using K-folds testing over the ground truth, which iteratively breaks the ground truth into training and blind/test sets. (The training set is used to train the model and the blind/test set is only used to test it.) The tool produces several reports for examining the classifier's performance, and these reports give you the data you need to find patterns in classification errors, helping you improve your classifier.
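The K-folds procedure is easy to picture in code. The sketch below illustrates only the splitting idea, not the tool's actual implementation; the fold count and the toy ground truth are made up for the example:

```python
# Minimal illustration of K-folds testing over ground truth:
# split labeled utterances into K folds; each fold serves once as
# the blind/test set while the remaining folds form the training set.

def k_fold_splits(examples, k=5):
    """Yield (train, test) partitions of the ground-truth examples."""
    folds = [examples[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        yield train, test

# Toy ground truth: (utterance, intent) pairs.
ground_truth = [(f"utterance {n}", f"intent_{n % 3}") for n in range(10)]

for train, test in k_fold_splits(ground_truth, k=5):
    # In the real tool, a workspace is trained on `train` and the
    # classifier is scored on `test`; here we only check the split.
    assert len(train) + len(test) == len(ground_truth)
```

Because every labeled utterance lands in the blind/test set exactly once across the folds, each ground-truth example receives one unbiased prediction.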
Improving classification in a sample workspace
In this post we will explore the classification performance of a chatbot classifier trained on sample utterances from the Watson Assistant content catalog (full Watson Assistant workspace: test-workspace.json ). Feel free to follow along testing your own workspace.
To run the tool, first create a config.ini file (use the config.ini.sample as a template and plug in your authentication variables), then run the following commands:
python run.py -c config.ini
python utils/intentmetrics.py -i ../data/kfold/kfold-test-out-union.csv -o ../data/kfold/kfold_intent_metrics.csv
After the K-folds test is completed there are a set of outputs for review:
· Summary of classification performance by intent
· List of correctly and incorrectly classified utterances with confidence
· Overall accuracy number
Figure 1: Intent summary report (kfold_intent_metrics.csv) from Watson Assistant testing tool on sample workspace
We use the output to identify patterns of errors. Our first concern is evaluating the intents themselves. We start by looking at the classification performance of each intent and we sort the report in ascending performance. This shows us the intents with the most classification errors. We start with the worst performing intent and work iteratively down the list. Figure 1 shows an example intent summary report ( kfold_intent_metrics.csv ) sorted in increasing quality.
When we know a couple of intents to improve, we move on to the detailed report. We first sort the report in a useful way (see Figure 2), then we are able to apply filters or simply scroll to narrow into regions of interest.
Figure 2: Suggested sort options for K-folds report (kfold-test-out-union.csv)
Most generally, we take the intent we want to improve and filter the entire utterance classification result list on that intent. The first thing we are looking for is intents that are commonly confused for each other. This can be found by sorting the "predicted" column and noticing which intent(s) are most often incorrectly predicted for the intent we are focusing on. If two or more intents are frequently confused with each other, you can either revise the intent/entity scheme into more easily distinguished intents, or provide additional training data for those intents. Figure 3 shows the kfold-test-out-union.csv file filtered on the worst-performing intent. Here we can quickly see that Redeem Points is often confused with Loyalty Status and Transfer Points, suggesting a need for further refinement of those intents or additional training data.
Figure 3 Report (kfold-test-out-union.csv) filtered on worst-performing intent
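The same filter-and-sort analysis can be scripted in a few lines. The rows and column names below are placeholders mirroring Figure 3, not the tool's exact CSV schema:

```python
from collections import Counter

# Toy stand-in for rows of kfold-test-out-union.csv; the real column
# names may differ, so treat "golden"/"predicted" as placeholders.
rows = [
    {"utterance": "how many points do I have", "golden": "Loyalty_Status", "predicted": "Loyalty_Status"},
    {"utterance": "use my points for a flight", "golden": "Redeem_Points", "predicted": "Transfer_Points"},
    {"utterance": "redeem my rewards", "golden": "Redeem_Points", "predicted": "Loyalty_Status"},
    {"utterance": "move points to my spouse", "golden": "Transfer_Points", "predicted": "Transfer_Points"},
    {"utterance": "cash in my points", "golden": "Redeem_Points", "predicted": "Transfer_Points"},
]

# Count (truth, prediction) pairs for misclassified utterances only.
confusions = Counter(
    (r["golden"], r["predicted"]) for r in rows if r["golden"] != r["predicted"]
)

# Most frequently confused intent pairs, worst first.
for (golden, predicted), count in confusions.most_common():
    print(f"{golden} -> {predicted}: {count}")
```

Sorting the pair counts this way surfaces the intent pairs that most need either scheme revision or extra training data.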
Our second concern is to review the individual utterance results looking for specific patterns of errors that do not cluster around one intent pair. This can be an eyeball test as we are looking for any other patterns we can find. As is the case above, we can generally improve the Watson Assistant classifiers by refining our intent/entity structure or by providing additional training data.
Continuous improvement of the classifier
The data above can also be used to compute an overall classification accuracy score, which should be treated as an interesting but not critical data point. (Your target metric should be a business-relevant metric like "chat completion time" or "user satisfaction", not "classifier performance".)
Classification performance will eventually reach a point of diminishing returns, where it may take twice as long to add another 2–3% of accuracy. Too much focus on classification performance can lead to overfitting, where the system performs perfectly on the data it has been trained on but cannot generalize to new and unseen data. You should certainly not target 100% classification performance. You will, however, want to continuously monitor your classifier's performance, especially as you add more training data.
The example above described a single classifier improvement cycle. In any given improvement cycle we will probably only target a couple of intents, ideally the lowest-performing ones or the ones used the most in our application. We anticipate that improving some intents may cause other intents to decrease in performance, so updating your training in an incremental fashion is a great idea. Expect to do several improvement cycles as you train your classifier. While doing this, be sure to version your training data as well, so you can track the performance of your system, via its training data, over time.
The classifier is good enough, now what?
When you are satisfied with your ability to classify user utterances into intents, you can now focus on testing your ability to successfully route a user through one or more conversational steps. Testing the conversational dialog routing logic requires a completely different set of tools. We will explore these tools in the next post: testing dialog routing logic. | https://medium.com/ibm-data-ai/testing-strategies-for-chatbots-part-1-testing-their-classifiers-20becaf5f211 | ['Andrew R. Freed'] | 2019-11-25 14:19:50.168000+00:00 | ['Editorial', 'Chatbots', 'Testing', 'Watson Conversation', 'Classification'] |
Journey of Ram Funders — your funding partner | A highly accomplished finance professional with a diverse experience of 14 years in the banking, auditing and manufacturing sectors. Currently the head at Ram Funders, a financial consultancy that assists companies with their funding requirements and financial projections. Open for new opportunities in the finance, banking and accounting sectors. | https://medium.com/@ramfunders/journey-of-ram-funders-your-funding-partner-7ba49b06fe9b | ['Shriram Kumar'] | 2020-12-27 13:59:17.697000+00:00 | ['Business Funding', 'Finance', 'Project Report', 'Banking', 'Financial Projections'] |
Thank You for Being the Constant in My Life | Learn more. Medium is an open platform where 170 million readers come to find insightful and dynamic thinking. Here, expert and undiscovered voices alike dive into the heart of any topic and bring new ideas to the surface. Learn more
Make Medium yours. Follow the writers, publications, and topics that matter to you, and you’ll see them on your homepage and in your inbox. Explore | https://medium.com/chalkboard/thank-you-for-being-the-constant-in-my-life-ce38aa1211b8 | ['Francine Fallara'] | 2020-11-26 19:07:07.790000+00:00 | ['One Line', 'Gratitude', 'Poetry', 'Nature', 'Spirituality'] |
Image Classifier for Oolong tea and Green tea | Image Classifier for Oolong tea and Green tea
Photo by Manki Kim on Unsplash
Developing the Dataset
In this project, I will be making an image classifier. I remember that my previous attempts a while ago did not work. To change things up a bit, I will be using the PyTorch framework rather than TensorFlow. As this will be my first time using PyTorch, I will take a tutorial before I begin my project. The project is a classifier that spots the difference between bottled oolong tea and bottled green tea.
The tutorial I used was PyTorch's 60-minute blitz. (It did take me more than 60 minutes to complete, though.) After typing out the tutorial I got used to using PyTorch, so I started moving on to the project. As this will be an image classifier, I needed to get a whole lot of images into my dataset. I first stumbled upon a Medium article which used a good scraper, but even after a few edits it did not work.
So I moved to using Bing for image search. Bing has an image API you can use, which makes it easier to collect images compared to Google. I used this article from PyImageSearch. I had a few issues with the API in the beginning, as the endpoints Microsoft gave me did not work for the tutorial. After looking around and a few edits I was able to get it working.
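For reference, paging through the Bing Image Search API looks roughly like this. The endpoint and parameter names follow Microsoft's public v7 documentation, but the key, query, and page size are placeholders, and no request is actually sent in this sketch:

```python
from urllib.parse import urlencode

# Sketch of paging through Bing Image Search results. "YOUR_API_KEY"
# and PAGE_SIZE are placeholders; in real code each URL would be
# fetched with the HEADERS below and the thumbnails saved to disk.
ENDPOINT = "https://api.bing.microsoft.com/v7.0/images/search"
HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR_API_KEY"}
PAGE_SIZE = 50

def search_urls(query, total=150):
    """Yield one request URL per page of results."""
    for offset in range(0, total, PAGE_SIZE):
        params = urlencode({"q": query, "offset": offset, "count": PAGE_SIZE})
        yield f"{ENDPOINT}?{params}"

urls = list(search_urls("bottle green tea", total=150))
# Each downloaded image should be saved with an incrementing counter
# in its filename -- the missing counter update caused the "000000" bug.
```

The incrementing-counter comment is exactly the piece I had missed from the tutorial.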
But looking at the image folder gave me this:
After looking through the code I noticed that the program did not produce new images, but kept overwriting the same file named "000000". This was from not copying the final section of code from the blog post, which updated a counter variable.
Now that I got the tutorial code to work, we can try my search terms to create my dataset. First I started with green tea, using the term "bottle green tea", which gave me these images:
Afterwards, I got oolong tea, by using the term “bottle oolong tea”.
Now I had personally go through the dataset myself. And delete any images that were not relevant to the class. The images I deleted looked like this:
This is because we want the classifier to work on bottled drinks. So leaves are not relevant. Regardless of how tasty they are.
They were a few blank images. Needless to say, there are not useful for the image classifier.
Even though this image has a few green tea bottles. It also has an oolong tea bottle so this will confuse the model. So it’s better to simplify it to having only a few green tea bottles. Rather than a whole variety which is not part of a class.
After I did that with both datasets. I was ready to move on to creating the model. So went to Google Collab and imported Pytorch.
As the dataset has less than 200 images. I thought it will be a good idea to apply data augmentation. I first found this tutorial which used Pytorch transformations.
When applying the transformation, it fell into a few issues. One it did not plot correctly, nor did it recognize my images. But I was able to fix it
The issues stemmed from not slicing the dataset correctly. As ImageFolder(Pytorch helper function) returns a tuple not just a list of images.
Developing the model
After that, I started working on developing the model. I used the CNN used in the 60-minute blitz tutorial. One of the first errors I dealt with was data not going through the network properly.
shape ‘[-1, 400]’ is invalid for input of size 179776
I was able to fix this issue by changing the kernel sizes to 2 x 2 and changing the feature maps to 64.
self.fc1 = nn.Linear(64 * 2 * 2, 120)
x = x.view(-1, 64 * 2 * 2)
Straight afterwards I fell into another error:
ValueError: Expected input batch_size (3025) to match target batch_size (4).
This was fixed by reshaping the x variable again.
x = x.view(-1, 64 * 55 * 55)
By using this forum post.
Then another error 😩.
RuntimeError: size mismatch, m1: [4 x 193600], m2: [256 x 120] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:41
This was fixed by changing the linear layer again.
self.fc1 = nn.Linear(64 * 55 * 55, 120)
Damn, I did not know one dense layer could give me so many headaches.
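To see where a flatten size like 64 * 55 * 55 can come from, it helps to walk the spatial dimensions through the layers. The architecture below is a hypothetical reconstruction (a 224-pixel input with two 2 x 2 convolutions, each followed by 2 x 2 max pooling), not necessarily the exact network used:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a conv or pooling layer (square inputs)."""
    return (size + 2 * padding - kernel) // stride + 1

# Hypothetical walk-through: 224 -> conv 2x2 -> pool 2x2 (twice)
# ends at 55x55, matching the 64 * 55 * 55 flatten size in fc1.
size = 224
for _ in range(2):
    size = conv_out(size, kernel=2)             # conv, stride 1
    size = conv_out(size, kernel=2, stride=2)   # max pool
print(size)  # -> 55
```

Running a dummy tensor through the network (or this arithmetic) before writing the first linear layer avoids the whole chain of size-mismatch errors above.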
After training, I needed to test the model. I did not make the test folder before making the model (rookie mistake). I made it quickly afterwards by using the first 5 images of each class. This is a bad thing to do, as it can contaminate the data and lead to overfitting, but I needed to see if the model was working at the time.
I wanted to plot one of the images in the test folder, so I borrowed the code from the tutorial. This led to an error, but I fixed it by changing the range to 1 instead of 5, because my model only has 2 labels (tensor[0] and tensor[1]), not 4.
When I loaded the model, it threw an error, but this was fixed by resizing the images in the test folder. After a few runs of the model, I noticed that it did not print the loss, so I edited the code to do so.
if i % 10 == 0:
    print('[%d, %d] loss: %.5f' % (epoch + 1, i + 1, running_loss / 10))
    running_loss = 0.0
As we can see the loss is very high.
When I tested the model on the test folder it gave me this:
Which means it’s at best guessing. I later found it was because it picked every image as green tea. With 5 images with a green tea label. This lead it to be right 50% of the time.
So this leads me to the world of model debugging. Trying to reduce the loss rate and improve accuracy.
Debugging the model
I started to make some progress debugging my model when I found this Medium article.
The first point the writer made was to start with a simple model known to work with your type of data. I thought I was already using a simple model designed for image data, as I was borrowing it from the PyTorch tutorial, but it did not work. So I opted for a simpler model shape, which I found in a TensorFlow tutorial; it had only 3 convolutional layers and two dense layers. I had to change the final layer's parameters, as it was designed with 10 targets in mind instead of 2 and was throwing errors. Afterwards, I fiddled around with the hyperparameters and was able to get the accuracy on the test images to 80% 😀.
Accuracy of the network on the 10 test images: 80 % (8 of 10)
Testing the new model
As the test dataset was contaminated (I had used images from the training dataset), I wanted to restructure it with new images to make sure the accuracy was correct.
To restructure it I did it in the following style:
While calling the test and train dataset separately.
train_dataset = ImageFolder(root='data/train')
test_dataset = ImageFolder(root='data/test')
For the test images, I decided to use Google instead of Bing, as it gives different results. After that, I tested the model on the new test dataset.
Accuracy of the network on the 10 test images: 70 % (7 of 10)
As this was not a significant decrease, the model had learnt something about green tea and oolong tea.
Using the code from the PyTorch tutorial, I wanted to analyse the results even further:
Accuracy of Green_tea_test : 80 %
Accuracy of oolong_tea_test : 60 %
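The per-class breakdown comes from tallying correct predictions per ground-truth label. Here is a self-contained sketch; the (truth, prediction) pairs are made up and chosen to reproduce the 80%/60% split:

```python
from collections import defaultdict

# Per-class accuracy from (truth, predicted) index pairs -- the same
# breakdown the tutorial's class-accuracy loop produces.
CLASSES = ["Green_tea_test", "oolong_tea_test"]
results = [(0, 0), (0, 0), (0, 0), (0, 1), (0, 0),   # green tea: 4 of 5
           (1, 1), (1, 0), (1, 1), (1, 0), (1, 1)]   # oolong:    3 of 5

correct = defaultdict(int)
total = defaultdict(int)
for truth, pred in results:
    total[truth] += 1
    correct[truth] += int(truth == pred)

for idx, name in enumerate(CLASSES):
    print(f"Accuracy of {name} : {100 * correct[idx] // total[idx]} %")
```

Splitting the overall accuracy by class like this is what revealed that oolong was the weaker class.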
Plotting the predictions
While I like this, I want the program to tell me which images it got wrong, so I went to work trying to do so. To do this, I stitched the image data together with the labels in an independent list.
for i, t, p in zip(img_list, truth_label, predicted_label):
    one_merge_dict = {'image': i, 'truth_label': t, 'predicted_label': p}
    merge_list.append(one_merge_dict)

print(merge_list)
On my first try I got this:
As we can see it's very cluttered and shows all the images. To clean it up I removed the unneeded text.
Now I can start separating the images from right to wrong.
I was able to do this by using a small if statement.
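That if statement can be as simple as the following; the merge_list entries here are hard-coded stand-ins for the stitched image/label dicts:

```python
# Split merged records into correctly and incorrectly classified
# lists so each group can be plotted separately.
merge_list = [
    {"image": "img0", "truth_label": 0, "predicted_label": 0},
    {"image": "img1", "truth_label": 1, "predicted_label": 0},
    {"image": "img2", "truth_label": 1, "predicted_label": 1},
]

right, wrong = [], []
for record in merge_list:
    if record["truth_label"] == record["predicted_label"]:
        right.append(record)
    else:
        wrong.append(record)
```

With the records partitioned, the plotting loop only needs to iterate over one of the two lists.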
I wanted to get rid of the whitespace, so I decided to change the plotting of images.
ax = plt.subplot(1, 4, i + 1)
fig = plt.figure(figsize=(15, 15))
Now I have an idea of what the model got wrong. In the first sample, the green tea does not have the traditional green design, so it's understandable that the model got it wrong. The second sample was oolong tea, but the model misclassified it as green tea; my guess is that the bottle has a very light colour tone compared to the golden or orange tone oolong bottles in the training data. In the third example, the bottle has the traditional oolong design with an orange colour palette, but the model still misclassified it as green tea. I guess that the leaf on the bottle affected the judgement of the model, leading it to classify the image as green tea.
Now I have finished the project. This is not to say that I may not come back to it, as additions on the implementation side could be made. For example, a mobile app that detects oolong or green tea with your phone's camera, or a simple web app where users upload their bottled tea images and the model classifies them on the website.
【CryptoNews】Do You Know? 02: Token Supply Management? | When talking about cryptocurrency or Defi, the market cap of the token project or the ecosystem is usually the first thing people want to know. We can say the market cap is the primary factor in deciding if it is a successful project. However, it is also the reason why the market cap is often targeted to manipulate. It brings out a new question. Is market cap still the best way to evaluate the true value of a token project/ ecosystem?
Flowchain adopts “Token supply management” to prevent such manipulation and proposes a more reasonable standard to evaluate a token’s actual value.
The “Do You Know? 02” will let you understand:
What is market cap?
Why and how can market cap be manipulated?
What does “Token supply management” do to prevent such manipulation?
On y va! (Let’s go!)
How to be Vulnerable in an Online Community

Photo by Andrew Neel on Unsplash
Vulnerability is an odd thing.
It feels like weakness but looks like bravery. It’s the only way to push relationships deeper, but it might also lead them to die an inglorious death. It’s scary and daunting and necessary.
It’s also highly delicate. Vulnerability requires a fine balance somewhere between being too closed and sharing too much. Too closed and you risk never getting close to people and missing out on receiving empathy and support. Too open and you risk oversharing, overburdening others, or coming across as needy, manipulative, or attention seeking.
This balance becomes yet more complex when considered from a community perspective.
Being vulnerable in a community context means being vulnerable in — gulp — public. And being vulnerable in public is tricky. Is it possible to be truly vulnerable if it’s to a large, faceless group? Is it just a bid for attention? Does it seem dumb to those you’re trying to be vulnerable to? Is it courageous and encouraging? What even is the goal?
So many questions.
Why you should care
Why would you want to learn how to be vulnerable? Simply put, you can’t have meaningful relationships without it. Vulnerability is what brings people closer in an authentic, deep way. If relationships like that are something you want, then vulnerability is what you need. Vulnerability is sharing yourself and without sharing yourself healthily and effectively, you can’t become known by others which means you can’t build deep relationships.
Are you following this glorious stream of logic? Great.
Also you’re on social media. You likely wouldn’t be reading this otherwise. And people share on social media. Now, let’s be honest: sometimes it’s a bit too much sharing. You don’t want to be that oversharing person. It’s awkward.
What you want is to share wisely because you like taking the wise route. (Go with me on this one. I believe in you.)
Fabulous. You’ve come to the right place.
Where all this brilliant info is coming from
The basis for this article was a survey I put out to an online college community. A total of 37 people provided their thoughts on vulnerability and on whom they saw in the community practicing it well. I then conducted interviews with five of the people most frequently mentioned as being vulnerable in a healthy, worth-emulating kind of way.
I distilled the interviews, and all the insights I pulled from them, into the top five ways you can become more vulnerable, more effectively, specifically in the context of an online community.
(I’ll also call upon the brilliance of Brené Brown, shame and vulnerability researcher, as needed.)
But first…
What vulnerability really is
Always define your terms! And in this case, defining vulnerability makes a big difference in how we approach the questions posed at the beginning of this piece. Superficially, vulnerability would seem to be the act of opening up and sharing things about yourself and your life. But that isn’t quite it.
Based on the survey responses, vulnerability is sincerity.
It’s purposely setting yourself up for potential hurt. It’s the act of diving deeper into relationships by being honest, showing your flaws, saying the hard things, and going beyond small talk answers to questions.
Vulnerability is about feeling fear but moving through it in order to bridge the gap between two people.
It’s bravery.
To Brené Brown, vulnerability is, quite simply, “uncertainty, risk, and emotional exposure”.
What vulnerability actually looks like
While some sharing might look like vulnerability and sound like vulnerability…it isn’t actually. We just defined what vulnerability is. Manipulative oversharing isn’t vulnerability. Awkward group exposés aren’t either.
“Vulnerability is based on mutuality and requires boundaries and trust. It’s not oversharing, it’s not purging, it’s not indiscriminate disclosure, and it’s not celebrity-style social media information dumps. Vulnerability is about sharing our feelings and our experiences with people who have earned the right to hear them. Being vulnerable and open is mutual and an integral part of the trust-building process.”
That’s from Brené Brown’s book Daring Greatly. (Highly recommended.)
Survey respondents had similar things to say when asked what they thought distinguished healthy vulnerability from “unhealthy vulnerability”. (The latter of which isn’t actually vulnerability, as we’ve just established.)
They indicated that the motives for sharing are what really distinguish real vulnerability from a sharing that just looks like vulnerability. What are you hoping to gain? Why are you doing it? Is it selfish or manipulative? Those are some questions to think about. If the goal isn’t in line with what vulnerability is about, it likely isn’t vulnerability.
Context is another big deal. It’s not necessarily about what you share but about where and to whom. It’s important to understand your audience and the expectations around your relationship so that you share appropriately, upping the likelihood that your sharing will be adequately received.
Trust is yet another vital component in any vulnerability situation. To share something scary requires a deserved level of confidence in the other person’s character and the safety of your relationship with them. You’re sharing yourself and yourself is valuable, so you need to establish a solid sense of trust with whomever you’re opening up to.
The need for emotional intelligence and maturity plays a role as well. That includes being able to respect boundaries when it comes to sharing; understanding yourself, the other, and the dynamic; knowing that relationships require a give and take, and not a one-sided dumping; and being thoughtful towards the other person in general and what they can handle at any given time. Without all that, you’d likely be engaging in unhealthy sharing instead of vulnerability.
On a deeper level, a healthy sense of self worth is a defining feature of real vulnerability. Vulnerability takes courage and needs to be backed by a solid-enough sense of self value that rejection won’t result in complete dejection. That’s the risk of vulnerability — rejection by the person you’re being vulnerable to. They might not be able to handle your vulnerability, they might judge you, or they might push you away. If you want to be vulnerable, you need to be able to safely risk those outcomes. You just might get rejected and to deal with that you need a firm sense of self. (Or firm enough.)
How to be vulnerable
This is where you learn the top five vulnerability lessons I got from those community members voted “bestest in vulnerability.” (Okay, that’s not really what they were called, but that’s what I’m calling them here.) They are community leaders in how they effectively navigate the online space, and they are seen as sharing when and where it matters, and in a way that builds connection.
Yet, ironically, none of them think they’re great at being vulnerable. Most feel they’re overly closed and one feels she tends to share too much. This phenomenon leads to the first lesson.
1. Learn how to be vulnerable.
Humans aren’t born knowing precisely when and what and how to share. Considering the fact that vulnerability takes “uncertainty, risk, and emotional exposure”, your life experiences might have taught you that it’s not worth it or that it’s way too scary, so you don’t open up. Or maybe you’ve struggled in the past to get the connection and support you need, so you just keep sharing in less-than-ideal contexts, essentially using the idea of vulnerability to get that emotional boost…without success.
The good news is you aren’t doomed to be closed off from people nor are you doomed to be an indiscriminate sharer.
You can learn how to be vulnerable. It’s a skill.
All the interviewees noted that they make an effort to be appropriately vulnerable, whether that means sharing or not sharing. They train themselves to be aware of situations that might call for vulnerability and make the conscious decision about how vulnerable they want to be (or not be.) They also actively accept feedback and calibrate their vulnerability as needed.
The point is, they keep learning and practicing. (And as one interviewee said, the decision making process becomes much more intuitive and subconscious with time.)
Photo by Trung Thanh on Unsplash
2. Choose to encourage.
The top vulnerable from the survey make a point of encouraging people with their vulnerability. If they’re sharing publicly, they want it to be for a purpose that benefits the reader or listener. If they’re on a leadership team, they might share to make themselves (and, by extension, the leadership in general) come off as more accessible and relatable. They might share something that they’ve struggled with and overcome so that others can feel hopeful as they grapple with their own issues. It needn’t be something grandiose — it just needs to be more other focused. They try to make their vulnerability less about them and more about the people they’re being vulnerable with.
This point goes back to how vulnerability is based on motives. Those I interviewed aim to be vulnerable for the group benefit of connection, community building, and encouragement. They tend to save the vulnerability for which they need emotional support for private, one-on-one situations.
3. Be humble.
Being vulnerable isn’t about perfection. On the contrary, the whole point of vulnerability is showing imperfection. The interviewees try to keep themselves grounded and humble when they share themselves. Their sharing is done with a humility people pick up on because their vulnerability isn’t about showing off or furthering a self-serving agenda or to make themselves seem a certain way.
That’s not to say they don’t think through what they’re sharing and how it might come across. They do. Some of them arguably severely overthink it. They want to be a good example, to maintain their self-respect, and, really, to just not seem silly. All that said, their goal typically isn’t to make themselves sound great or to be congratulated or sympathised with.
Their sharing is an exercise in humility because they are exposing themselves for, what they hope is, the benefit of others.
4. Connect behind the scenes.
This is a big deal. Why? Because how something is viewed publicly depends a lot on what is cultivated privately.
All the people I interviewed focus on vulnerability one-on-one. In fact, the majority of them find public vulnerability to be some version of uncomfortable, inane, unnecessary, or unwise. They all actively build real relationships with people outside of social media and public interactions. The public arena is only an extension of the relationships and community they’ve fostered one-on-one and in small groups and so being vulnerable in those contexts feels less public than it might otherwise.
Vulnerability, as established, is better received when there’s a strong relationship, so by establishing those personal relationships, any public displays of vulnerability are better received and perceived.
5. Recognise the value.
Everyone I spoke with values vulnerability.
They understand that the act of being vulnerable deepens discussions and enhances relationships. They know they need to share even if it’s scary, uncomfortable, or awkward. That’s why they practice vulnerability and use it as best they can. Being vulnerable (in the right way) starts with recognising that you need to be. It starts with seeing the value in opening yourself up to that emotional hurt.
The interviewees know that. They might not even like it, they might still feel they aren’t very vulnerable, but they see the value in meaningful sharing.
What the answers are
To recap, I posed some questions about public vulnerability at the very beginning of this writing. They were:
Is it possible to be truly vulnerable if it’s to a large, faceless group?
Is it just a bid for attention?
Does it seem dumb to those you’re trying to be vulnerable to?
Is it courageous and encouraging?
What even is the goal?
Considering the meaning of vulnerability, yes, it’s possible to be vulnerable to a large group. It’s not ideal. It isn’t where the vulnerability magic happens, but it’s possible to be vulnerable to a group granted you play it well.

It might be a bid for attention, but then it isn’t actually vulnerability. Remember that motives tend to decide if something is healthily vulnerable or not.

And yeah, it might seem dumb. You can’t control perceptions, and if there’s one thing that I personally took away from my research, it’s just how perception-based vulnerability is. Everyone I spoke with had a different idea of what vulnerability looked like coming from them, and that sometimes differed further from how their sharing was perceived by other members of the community. Your perception of someone else’s vulnerability depends less on whether they’re actually being vulnerable and more on whether what they’re sharing is something that would make you feel vulnerable.
Funny how that works.
So yes, your sharing might be perceived as dumb, but hey, that’s the inherent risk of being vulnerable. (As incredibly uncomfortable as that is.) But it might be seen as courageous and encouraging too! If you truly are being vulnerable, then yes, that’s courageous and hopefully it’s encouraging too because you shared with that goal in mind. Just remember that you can’t control perceptions either way. You can simply practice wise vulnerability, aim to encourage, be humble, and cultivate those relationships behind the scenes.
As to the last question, only you can answer that: what is the goal? Whatever your answer, it’ll guide whatever you might share.
Answer well. | https://andrea-klein.medium.com/how-to-be-vulnerable-in-an-online-community-f592d7b9f7d4 | ['Andrea Klein'] | 2020-07-24 16:52:17.027000+00:00 | ['Relationships', 'Self Improvement', 'Vulnerability', 'Psychology', 'Online Community'] |
An introduction to DAOs | An introduction to DAOs
Resource allocation is the new activism.
What is a DAO?
DAOs (Decentralized autonomous organizations) are internet-native organizations that are run and managed by communities through transparent decision-making processes. Unlike traditional organizations, DAOs leverage the blockchain to enable everyone in the community to have input in the key governance and resource management decisions — this is a practice known as community banking.
This is in contrast to traditional corporate governance structures, where the organization is managed by a centralized board of directors rather than by all members of the organization. The idea is that we can build better values-driven organizations by enabling them to be run, owned, and governed by communities instead of the few.
Why DAOs?
Our inability to compromise around financial maximization holds us back from being able to coordinate in society. The organizations of today are built to optimize for profit and are usually unable to account for other forms of value such as sustainability, financial resilience, and worker alignment. It is the prisoner’s dilemma played out on a societal level: pollution, poverty, famine, pandemic, lack of medical supplies. In the end, money has no value when we are neither safe nor healthy. There is more in the world that we value over profit: peace of mind, environmental sustainability, a sense of community. The point of money is not to make more of it. It is to create value: not only for me or you, but for those around us.
What do we care about?
What do the people around us in our communities care about?
With most organizations being run in top-down environments, we are unable to properly express our values in them — it should be strictly business, right? But what if we were able to work with value-aligned organizations where we could have a say in how they were run? Management would shift from a gatekeeping role to one of facilitation. We need new ways of forming organizations that are values-driven as opposed to purely financially driven. Only then might we escape this tragedy of the commons and create a more abundant world.
How do we change?
To change our outcomes, we need to change how we make decisions. We need to experiment with governance and understand new ways of how we run organizations.
Currently, organizations rely on legacy financial infrastructure and legal entities to operate. It may take 2+ weeks (depending on where you are in the world) to set up a legal entity and then get approved for a company bank account. Not only is this inefficient and permissioned (as in, you can be denied a bank account), but there is no feasible way to manage finances without having executive controllers in place. And even if someone breaks the rules, you would still need to go to court to sue those individuals and arbitrate.
With blockchain-based community banking, organizations can be formed in less than 5 minutes. Relying on public blockchain infrastructure, they are global and permissionlessly accessible to everyone. Upon formation, the governance rules of these organizations are programmed in such a way that they can’t be changed unless the members of the organization agree. Through the lens of blockchain technology, organizations are instead flexible computer programs that can be tweaked and altered as needed. Perfect if we are to experiment with them.
Through community banking, we can:
🌲 Create companies that can frictionlessly provide global work opportunities to anyone who has an internet connection
🌲 Enable strangers all around the world to crowdfund and form internet-native non-profits, where they can safely collaborate and fund / work on / share global goals and missions that matter
🌲 Create companies that are owned, run, and governed by their workers and employees that can potentially make collectively more responsible decisions for society
🌲 Enable online communities to evolve beyond a social network of forums and chat rooms to become a fully operational community run company
However, despite the potential of community banking and the idea of DAOs, there is a lot of work that is yet to be done. We need to increasingly experiment more to discover viable organizational models, we need people to help communicate much of the sheltered work within the blockchain community and we need better tools that everyday individuals can use.
The technology for this is available. This is a call for us to experiment.
Resource allocation is the new activism. | https://medium.com/metawork/an-introduction-to-daos-782e3817e2cd | ['Peter'] | 2020-05-04 17:13:26.658000+00:00 | ['Distributed Organizations', 'Intro To Daos', 'Dao', 'Introduction To Daos', 'Dao 101'] |
Evolution: Faith or Science?

Photo by Eugene Zhyvchik on Unsplash
Evolution: the word stirs a response in almost everyone. Even after a centuries-long history, this idea is shrouded in controversy. It doesn’t do for some to proclaim, smugly, that all scientists agree: they do not. Many scientists disagree fundamentally with the tenets of evolution. Evolution is not so much a scientific theory as a belief system, requiring more faith of its adherents than creationism.
So, what then is evolution? In a nutshell, it is the theory that all complex organisms have evolved, that is, undergone some physical change from one thing into something else. A fish evolves into a frog, evolves into a bird, evolves into an ape, evolves into a man. According to Stephen Jay Gould, this is bedrock science and indisputable fact. The only thing to debate, says he, is the mechanism, the how of it. See “Evolution as Fact and Theory”, pp. 254–55, which originally appeared in Discover Magazine, May 1981.
I wholeheartedly disagree.
Let us consider. A fossil is found — the remains of some once living creature. The only trouble is, there is no such animal these days. So then what happened? How did this creature, which obviously lived at some point in time, cease to exist? What became of it? The answer offered up by evolutionists are two: either it failed to meet the challenge of its environment and thus disappeared, or it changed into something that could. This supposition is based solely upon a fossil rock found in the ground, the only physical evidence that remains.
The theory then is this: all creatures alive today began their existence long ago in the shrouded mists of time as simple, single-cell organisms. These single-cell organisms changed into more complex organisms. Now do not be misled. These changes are not simple just because these are simple, single-cell organisms. No, we’re talking about fundamental changes. It’s not like changing the color of your hair. It is more akin to growing another head!
Now I know that’s not the way it works. This occurs, or so they say, genetically over a very long time. But this doesn’t address the fundamental changes that must occur for one type of organism to become another type. Let’s not even talk about people! Let’s keep it simple. Let’s talk about something easy, like say, a frog.
A frog has skin, eyes, ears, a digestive system, a reproductive system, a respiratory system; it has a number of intricate body systems, each different from the others, each carrying out particular functions distinct from the others, and necessary for its survival, and the survival of its species. The eye is different than the ear, in both function and form. The lungs are different than the digestive system, in both function and form. All of these organs are complex, yes, even in a frog.
The frog though, is only one type of organism out of millions, each one different from the others: dogs, cats, people, roses, oaks, catfish, kangaroos, viruses, bacteria, protozoa, paramecium. The list simply boggles the mind.
Take any single organ, such as the liver, for example. The liver serves an absolutely vital function in the body, one which cannot be handled by any other organ: that of filtering the impurities from the blood. Without your liver, you would die. Now how did that single-cell organism come upon the idea of a liver? How many creatures, misshapen and malformed, left this world liver-less before whatever mechanism driving evolution decided that a liver was needed?
How about the heart? Or blood, for that matter? Or the kidney, or the pancreas, or the eye, or the ear, or the lungs? How did they all come to be together in one place, forming a living creature? Take any one of these things, dissect it, look at its function, its fundamental differences from other organs, and tell me that trial and error, hit and miss, in a word, chance, formed them all and brought them all together in one creature to make a living, multi-celled creature, not once, but millions of times, again and again.
Chance is a very important issue here. Darwin, in particular, in his theory of adaptation of species, relied heavily on the chance genetic meandering of change. Five toes were better than three, so everything born, by chance, with five toes survived to have others with five toes, while those with only three went the way of the dinosaur. One of the interesting things discovered by geneticists is the complete lack of predictability of genetic deviations. Normal parents give birth to a giant, and the giant gives birth to a normal person. Relying on this mechanism to perpetuate improvement of species is a bad gamble at best.
Another element of evolution is that of time. Evolution, we are told, requires time, more now than when the theory began. Hand in hand with historical geologists, evolutionists have increased the estimated age of the earth, at first by millions of years, and now by billions. But even this is not enough time to account for the progress allotted to the chance happenings of evolution.
I have carefully given credit to the beginning of all of this: one simple, single-cell creature. Remember it? How did it come to be here, or anywhere, at all? Scientists tell us of the primordial soup, a chemical potluck washing over the earth billions of years ago. A chance lightning strike provided the energy to combine some of these chemicals into amino acids, the building blocks of life, as it were. This has been shown to be possible in experiments. But the amino acids in the lab were simply that, amino acids. They were not simple, single-cell organisms. They were not, in a word, ALIVE. Nor did they have an even more basic, even more complex element found in every living thing — DNA. DNA is the biological blueprint that contains all the information about a creature. It, not chance, determines what a creature will become, and what its offspring will become. Without DNA there is no reproduction; there is no life.
Faith. That’s where we started this thing, so let’s end there. It requires enormous faith to believe in creation by chance, to see order coming out of chaos, complexity following simplicity and all by chance. It would be like shredding the Collected Works of Charles Darwin in a cross-cut shredder, tossing the debris into the air, and having it land intact and readable. Well, maybe not. I think you’d stand a better chance with the later. | https://medium.com/@huffhimself/evolution-faith-or-science-6eee988d33be | ['Ml Huff'] | 2020-12-07 21:46:34.408000+00:00 | ['Evolution', 'Science', 'Creationism', 'Faith', 'Debunk'] |
Unit Testing CLI Programs in Go

Photo by John Schnobrich on Unsplash
It’s a common scenario — for me at least — to be building a CLI tool ( main package) that has CLI options ( flags package) that I want to add unit tests to.
Here’s a simple example of a CLI tool that adds or multiplies some input numbers:
Let’s try it:
5 + 7 + 9 = 21 — great… But wait! 3 * 2 * 5 != 0 …I’ve put a deliberate bug in the command. Something that we will catch and fix with a unit test.
Don’t Try This At Home Kids
Let’s write a unit test that uses the same CLI arguments so we can fix the bug. However, we have a lot of global state that needs to be manipulated:
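The test gist doesn’t survive here either, but based on the description it has to juggle os.Args, the package-global flag state, and os.Stdout by hand. A sketch of those gymnastics, written as a plain function, with a reconstruction of the tool inlined so the snippet stands alone:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// buggyMain reconstructs the tool's main(): global flag package, output to
// stdout, and the multiply-starts-at-zero bug intact.
func buggyMain() {
	operation := flag.String("operation", "add", `either "add" or "multiply"`)
	flag.Parse()
	var result float64
	for _, arg := range flag.Args() {
		var n float64
		fmt.Sscanf(arg, "%f", &n)
		if *operation == "add" {
			result += n
		} else {
			result *= n
		}
	}
	fmt.Println(result)
}

// uglyTestMultiply shows the global-state gymnastics needed to test
// buggyMain as-is. Every global it touches must be swapped and restored.
func uglyTestMultiply() string {
	// 1. Replace os.Args, remembering the originals.
	oldArgs := os.Args
	os.Args = []string{"calc", "-operation", "multiply", "3", "2", "5"}
	defer func() { os.Args = oldArgs }()

	// 2. Reset the package-global flag set, otherwise a second run
	//    panics with "flag redefined".
	flag.CommandLine = flag.NewFlagSet(os.Args[0], flag.ExitOnError)

	// 3. Swap os.Stdout for a pipe so the output can be captured.
	r, w, _ := os.Pipe()
	oldStdout := os.Stdout
	os.Stdout = w
	buggyMain()
	w.Close()
	os.Stdout = oldStdout

	var b [64]byte
	n, _ := r.Read(b[:])
	return string(b[:n])
}
```

Three globals swapped and restored just to check one output line, and the capture reproduces the bug: multiplying 3, 2, and 5 yields 0.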
Yuck! This is really ugly and problematic code… miles away from a small, easy to read unit test.
One fairly obvious solution here is to separate the logic that does the calculating from the main() function (which handles the arguments and output), like:
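The refactoring gist is likewise not visible in plain text; one plausible shape, with all names as assumptions, is:

```go
package main

// calculate holds the tool's arithmetic, separated from main() so it can
// be unit-tested directly.
func calculate(operation string, values []float64) float64 {
	if len(values) == 0 {
		return 0
	}
	// Seeding result with the first value fixes the original bug, where a
	// product that started at 0 could only ever stay 0.
	result := values[0]
	for _, n := range values[1:] {
		switch operation {
		case "add":
			result += n
		case "multiply":
			result *= n
		}
	}
	return result
}

// main() is now only glue: parse flags, convert the arguments to floats,
// and print calculate(...)'s result.
```

With the arithmetic behind a plain function, the zero-initialisation bug is easy to see and fix: seed the running result with the first value instead of 0.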
Now the unit test is trivial:
But this is cheating! Since we have bypassed the flags completely, we are not actually testing the CLI arguments. We’d really like to perform black-box testing on the tool as it would be used — instead of testing isolated parts. Especially if a bug was introduced to the way flags are parsed.
A Better Solution
We now have an out writer, which makes it much easier to capture output. We no longer use the global functions in the flag package; instead, we instantiate a new flag parser for each call to main(). Anywhere output is generated will have to write to out instead.
Now our unit test need only know about the CLI arguments before calling main():
This looks a lot nicer when there are multiple tests: | https://medium.com/swlh/unit-testing-cli-programs-in-go-6275c85af2e7 | ['Elliot Chance'] | 2020-03-04 05:43:05.057000+00:00 | ['Golang', 'Unit Testing', 'Go', 'Cli'] |
Go To Market Strategy For Custom Made Hot Beverages On Demand in Corporate Parks

Tech Applications Essential For This Business
1. LivePlan — Business Model
To start out strong, use business organization app LivePlan to create a custom business plan. The app will take you step by step through the creation of your plan with a few questions, including your cash flow projections. Once it’s created, you can tweak it as needed to account for new sources of revenue, funding or inventory.
2. CamScanner — Scanning Of Documents
Need to sign forms or email documents? CamScanner turns your smartphone camera into a scanner, creating PDF or JPG files that you can save, email or print wirelessly. Once you take a picture of the document you want to scan, the app removes any background, adjusts the angle and tilt, fixes issues with brightness or colour, and creates a high-resolution final document.
3. Goods Order Inventory — Inventory Management
Inventory tracker app Goods Order Inventory will help you keep track of your stock, along with sales, invoices, payments, locations, suppliers, clients, balance sheets and shipments. Goods Order Inventory includes a barcode scanner and multiple reporting options. It integrates with a variety of accounting applications, as well as sales platforms such as eBay and Amazon.
4. Gusto — Workforce Management
Managing payroll for your employees is simpler with an app like Gusto. Built to integrate with accounting software such as QuickBooks, Gusto allows you to manage payroll, compliance, sick days, vacation time and other benefits all in one place. It also allows you to calculate and file your federal and state payroll taxes.
5. Proven — Hiring Through NGOs
When you hire new employees, Proven streamlines the process of creating and sharing job ads. Use the app to create a post for the available position. You can then post directly from Proven to job sites, including ZipRecruiter, Glassdoor and Monster. Then, you can collect the applications you receive in one place and respond directly to candidates through the app.
6. Moment — For The CEO’s Time Management
You can boost your productivity with Moment, which tracks your phone usage and gives you a clear snapshot of how you are actually spending your day. This productivity app can also help manage time spent on your phone by setting daily limits and sending you notifications when you go over them.
7. Hootsuite — Social Media Performance Tracking System
You can keep track of your social media marketing with Hootsuite. It’s compatible with over 35 different social media platforms, and it allows you to schedule hundreds of posts at once. Unlike many other social media management apps, it also has extensive analytics and monitoring options to track the effectiveness of your campaigns.
10. GoToMeeting — For Employee and Organisational Coordination
If your business requires conference calls with employees or clients, GoToMeeting provides a single hub that connects users from their phone, computer or tablet. The app includes screen sharing as well as an audio and video connection; you can also record calls for later playback. GoToMeeting can sync with your calendar, so you can schedule meetings in advance or create regular team appointments.
11. Asana — For Project Collaborations and Tracking
Project management app Asana provides a platform for teams to collaborate, communicate and stick to a schedule. An Asana board allows you to create tasks and projects, monitor progress, share notes, upload files, and communicate directly with team members and employees. This task management app also integrates with Google Drive and Dropbox for file sharing and lets you post updates to your Slack channels.
13. Wunderlist — Organisational To — Do List
A simple and effective to-do list app for business, Wunderlist will help you get organized. You can create and manage multiple lists in a single place, then share them with others on your team. This task management app can also break list items down into smaller tasks for more complicated to-dos, and you can set reminders and deadlines.
14. Expensify — For Managing Costs and Reimbursements
Business expense app Expensify allows you to keep track of costs and process reimbursements without worrying about paper receipts. You can link the app directly to a credit or debit account. It automatically tracks charges and places them on an expense report. If you prefer, you can also use your phone’s camera to take pictures of receipts, and Expensify will extract and upload the relevant information.
15. QuickBooks — Digital Accounting
QuickBooks is one of the easiest accounting apps to use, and it comes at multiple price points based on the size of your business. In addition to basic accounting, it covers profit analysis, tax reporting, inventory management and more. It connects to your bank account and integrates with many other payroll, inventory, point-of-sale, and business expense apps to streamline your workflow.
16. Instamojo — For Payment Collection and Management
Instamojo Payment Gateway allows new merchants to create a merchant account instantly and collect online payments with ease, with or without a website. Instamojo's charges are quite reasonable.
Episode 46: Believe In Your Dreams, with Trust Wallet | Press Play!
On this episode of BlockChannel, McKie, Dee and Petty are back for a new season with Viktor Radchenko of Trust Wallet. Fresh off the back of an acquisition by Binance (those guys are on fire), he’s taking his vision for a decentralized dApp browser to the next level with the backing of Binance’s vast resources. Come learn about Viktor, his background, and his vision for Trust Wallet going forward, post-acquisition.
Show Link(s):
Trust Wallet: trustwalletapp.com
Intro/Outro Music “Money Boat” by JTM: Jamesthemormon — Money-boat
Show Sponsor(s):
Amentum: amentum.org
Disclaimer: This is not investment advice, it is an engaged discussion on new technology; BlockChannel reminds you to always do your own due diligence before investing in any crypto-related project in the industry. | https://medium.com/blockchannel/episode-46-believe-in-your-dreams-with-trust-wallet-343ae5ec8df9 | [] | 2018-08-22 03:44:21.884000+00:00 | ['Investing', 'Blockchanneltv', 'Ethereum', 'Crypto', 'Bitcoin'] |
Who We Are | Who We Are
Getting the best results by hiring the most qualified Latin American talent.
In software development, you get the best results when you use the highest quality resources. Every feature we deploy comes from the minds of the passionate professionals who form our team. The more qualified the team members, the higher the value we’ll produce, and the happier you will be.
Our recruiting process is consistently evolving to enable us to find the best talent available. Our HR team interviews hundreds of candidates per week to find outstanding profiles. Each applicant goes through a rigorous set of interviews and tests, and only 3% of them get hired. | https://medium.com/beon-tech-studio/who-we-are-1444cb523bfd | ['Beon Tech Studio'] | 2020-12-31 18:59:52.128000+00:00 | ['About', 'Outsourcing Company', 'Who Am I', 'Outsourcing', 'About Me'] |
AWS — Deploying React App With Java Backend On EKS | AWS — Deploying React App With Java Backend On EKS
A step by step guide with an example project
AWS provides more than 100 services and it’s very important to know which service you should select for your needs. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
In this post, we are going to deploy a React application with a Java backend. First, we dockerize our app, push that image to Amazon ECR, and run the app on Amazon EKS.
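At a high level, that flow might look something like the command sequence below. This is a sketch of the general sequence rather than the exact commands used in this project, and the account ID, region, image, repository, and cluster names are placeholders — substitute your own (running these also requires configured AWS credentials and the Docker, eksctl, and kubectl CLIs):

```bash
# Build the Docker image locally (assumes a Dockerfile at the project root)
docker build -t react-java-app .

# Authenticate Docker with your ECR registry (account ID and region are placeholders)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image to an existing ECR repository
docker tag react-java-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/react-java-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/react-java-app:latest

# Create an EKS cluster with worker nodes (eksctl is the simplest route)
eksctl create cluster --name demo-cluster --region us-east-1 --nodes 2

# Point kubectl at the new cluster
aws eks update-kubeconfig --name demo-cluster --region us-east-1

# Deploy the Kubernetes objects (deployment, service, etc.)
kubectl apply -f manifest.yml
```

Each of these steps is covered in its own section below.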
Example Project
Prerequisites
Dockerize the Project
Pushing Docker Image To ECR
Create a Cluster and Worker Nodes
Configure kubectl to use Cluster
Deploy Kubernetes Objects On AWS EKS Cluster
Summary
Conclusion
Example Project
This is a simple project which demonstrates developing and running a React application with Java. We have a simple app in which we can add users, count and display them at the side, and retrieve them whenever we want.
Example Project
If you want to practice on your own, here is a GitHub link to this project. You can clone it and run it on your machine as well.
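For the "Dockerize the Project" step, a minimal multi-stage Dockerfile for a setup like this might look as follows. The stage base images, directory layout (`ui/` for React, `api/` for Java), jar name, and port are assumptions for illustration, not necessarily what the example repository uses:

```dockerfile
# Stage 1: build the React frontend
FROM node:14 AS ui-build
WORKDIR /app/ui
COPY ui/package*.json ./
RUN npm install
COPY ui/ ./
RUN npm run build

# Stage 2: build the Java backend, bundling the compiled React assets
FROM maven:3-openjdk-11 AS api-build
WORKDIR /app/api
COPY api/pom.xml ./
COPY api/src ./src
# Serve the React build from the Java app's static resources
COPY --from=ui-build /app/ui/build ./src/main/resources/static
RUN mvn package -DskipTests

# Stage 3: slim runtime image
FROM openjdk:11-jre-slim
COPY --from=api-build /app/api/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

The multi-stage approach keeps the final image small: the Node and Maven toolchains are used only at build time, and only the runnable jar ends up in the image you push to ECR.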
Interview with Robin Brailsford — A Deep Understanding and Passion for Public Art
By Thejas Jagannath · Jun 25
Source: Robin Brailsford
When I first saw images of Robin Brailsford’s public artworks on social media, I was drawn in and captivated. Creativity is at the essence of her work, which is fascinating and interactive at the same time. Although commissioning and making the art is a long process, the response and engagement the public brings to its interactive nature makes it all the more worth it! Robin’s art can be seen in many cities around the USA, with her public art installations in public spaces of Los Angeles, California and Las Vegas, Nevada.
Robin Brailsford has been a pioneer and champion for public art, with an MFA in Sculpture. She has received numerous awards and recognition for her public art. Robin has a patent for LithoMosaic, which is “a process for setting mosaics in monolithic concrete pours.”
LithoMosaic — Mosaic and Concrete — Robin Brailsford: Inventor
This is an interview, which tries to gain information about her artworks, her views on public art processes, interaction with public art, and how she goes about producing them. She also speaks about her views on COVID-19 and BLM.
You can access the interview below:
Thejas Jagannath:
Can you tell me more about the Convention Center Light Rail Station in LA.? What is the name of that artwork and how was it commissioned?
Robin Brailsford:
Time and Presence is the name of the artwork. It was my first major commission, in the ’90s, via the LA Metro Rail. I entered, was a finalist. I made a major proposal and came in second, but the trendy artist who came in first presented a project that was unsafe for the trains — so they called me back.
It was designed at the time of the Rodney King race riots in Los Angeles and I wanted to make a piece that addressed that discord. The Metro line goes from downtown to the beach via Watts and is the first station on the line above ground after town. So, I explored the perceptual shock of sun and dark, above and below, warm and cold, sheltered and vulnerable.
Time and Presence is about life on the planet before man, before our animosity and perceived human differences. It consists of two pierced and painted steel canopies. In one, Pangea casts its patterned shadows on the platform below, and so a transit user can stand with one foot in Africa and the other in the Americas. Gaia is flanked by a sea turtle representing ancient life in the ocean, and an orchid symbolizing early life on land.
The second canopy is the solar system and mathematical (alchemical) symbols of greater than and less than.
The position of the sun changes the project minute by minute, season by season. It casts meaningful patterns on the passengers and trains below.
Time and Presence Artwork by Robin Brailsford. Photo by Paula Jacoby Garrett
Thejas Jagannath:
What are the material goals when you create art to grab people’s attention?
Robin Brailsford:
Well the material goal is only one — to last. The LA Metro project has been installed for several decades, and has required no maintenance. That is the material goal.
I AM very interested in the ground plane though — through shadow, distance, LithoMosaic etc…. and through light, glass and jewelry.
Social justice is never far from the fore.
Thejas Jagannath:
Are your artworks site specific and permanent?
Robin Brailsford:
All my work is site specific and permanent. I find there is no reason to do temporary, as temporary is the same work and engineering for a much smaller return.
Thejas Jagannath:
What are the most effective responses you get from viewers of your public art?
Robin Brailsford:
Hmm great question.
The best by far is little kids wanting to lie down with their stomachs against the LithoMosaic plazas. Little kids — to teenagers — seem to want and need to get the energy and color and light right up against their core. They can feel and need the energy. That innate response is awesome!
Then, there was literally everyone who came to see The Grand Canyons of La Jolla, in the big studio at Scripps Institute of Oceanography…. they would walk in and LOOK and clap their hands in front of their open mouths and say something like; “HOLY MACKEREL!” or, “WOW that’s a MASTERPIECE!”……everyone from world class oceanographers to the Mayor of San Diego to Arctic explorers to Parisian artists. That is gratifying!
Future Plaza, Pioneer Modernism Park, Lemon Grove, CA Robin Brailsford Public Art: Source
Thejas Jagannath:
Do people revisit your artwork?
Robin Brailsford:
Well with art in public places, they don’t have much choice! Time and Presence at the Convention Center light rail station in LA that has been there for decades, so every commuter on that line sees it twice a day, 5 days a week. Lakers’ fans see it every game, and attendees at a convention see it several times a day for a week…. Kids have grown up with it.
Kids grow up with my work.
River of Life Bus Shelters by Robin Brailsford. Santa Monica Downtown Transit Mall, Santa Monica, CA.
Thejas Jagannath:
How do you create a sense of place?
Robin Brailsford:
Research, research, research. I got an MA and my MFA from the University of New Mexico, where they treat the MFA as a PhD (terminal degree). To achieve that level, we had to write a thesis, and much as I rebelled against the work of it at the time, I now write such a paper for every major project that I do. For me, every big project is the equivalent of an MFA. The writing funnels my thoughts and keeps them in order for the ages.
Thejas Jagannath:
Could you tell us about The Grand Canyons of La Jolla?
Robin Brailsford:
To find out more about the project see the website and link below: https://www.codaworx.com/projects/5b108c394a287/
According to the website, the goal and process of the project of The Grand Canyons of La Jolla is:
Goal
“The goal is simple — to give the beach’s thousands of local and international yearly visitors a kinetic sense of wonder and beauty, from the bright surface of the Cove down to the active, tectonic, pitch-black canyons below.
Our client was the 100 year old Father of Modern Oceanography, Walter Munk, so the bar was set very high… for art and education.
Now that it is installed, tech experts are working with biologists on an app so that, from one’s phone, a camera focused on a shark, for instance, will transfer the viewer to a video of the shark in its habitat, along with other fascinating details. Some of these fish can breathe air, walk on their fins, electrocute their prey or travel thousands of miles…. So it will be a fun and fascinating interaction.
All the lifeforms are depicted at life size, and we purposely worked to make the plaza timeless — roads are minimal, Kumeaay villages and climate change are inferred.”
Process
“When first approached with the commission, the clients had no aspirations of including Walter Munk’s landmark 1947 paper, “The Grand Canyons of La Jolla”; they were not hoping for more than 50 fish, and had no predators, no whale, no humans and only 12 levels to the canyons. For us, the opportunity was too great to just create the benign “The Map” (for divers) that they proposed.
As kismet would have it, it seems we achieved it all, to stunning results. The play of light over the matte and glass tiles is eerily like the surface of the water, yet the depth of the canyons can clearly be understood. The depiction of the Helen Browning Scripps Pier gives the landscape an immediate scale, and the skill of the artists shows (Robin as lead and on slow, flat or big fishes — the whale, skates, and sunfish; Wick on sharks and all fish that are fast; Kelsey Hartley on birds; and Mariah Armstrong Conner on color palette and water). Of course, there are exceptions to that rule, as we all pitched in on everything, but as in ancient mosaics, the hand of each artist can be seen with that direction.
Because of COVID, the project is not yet open to the public.”
Grand Canyons of La Jolla
Thejas Jagannath:
How do you recognize when people interact with your artworks?
Robin Brailsford:
Well, I really don’t see them! The creation process is secluded and private (La Jolla being an exception) and I tend not to visit the works when they are done. I know the works by heart, and while it is heartening to see kids playing or selfies being taken, it can be painful — if the maintenance isn’t great or the client’s programming of the site is lax — and my body remembers all the WORK that went into them — so a nap is often lovely and necessary after revisiting work.
I will say that my work is NEVER vandalized, so I take that as the public understanding that I make the work FOR them, versus, TO them. (The exception to the vandalism issue is that public art is often derailed by stodgy white men in seeming positions of governmental power. Fear is a powerful factor. Oh let me count the ways….!)
Thejas Jagannath:
Are your artworks tactile in nature? Do you consider tactile art as a key component of your art process?
Robin Brailsford:
I go for visual as well as touch tactility. I believe that pattern makes one sane and joyful, and plain makes one not. I do think about the temperature of the work, and the climate: to touch, sight and comfort.
TRUISMS are the goal, to entertain once, and over time, the Endowed Chair of the subject I am presenting, as well as the child in their charge.
I strive for my work to be true, on all levels.
I look for the potential in people, places and things, and then strive to realize that potential.
I often find that on a big design team, as the Public Artist, I am the CONSCIENCE of the project.
Robin Brailsford Public Art: Source. Shoreline Stroll, Long Beach Transit Gallery, Long Beach, CA
Thejas Jagannath:
What is the paperwork involved with the city council/policymakers?
Robin Brailsford:
Ha! Go to the Café website (call for entries) and make a real go at applying for a project or two. That will give you a hint. The problem with the paperwork to apply for a project is that every request is convoluted and different. The number of images, size of images, videos/no videos, font size, length of resume, length of letter, why you adore their city, references are all different and there is absolutely no room for creativity — so each and every application can be a week’s worth of work.
I work on an invitation only basis now. I have asked around, and many other artists also feel that Cafe has made the process so “democratic” that they have killed the creativity. The public art application process is now geared towards architecture offices with staff dedicated to just applications. It is no longer worth it for independent artists such as myself.
Cafe RFQ’s often get 500 applications for one slot. There has to be a better way. Part of it is that cities are understaffed now, so Cafe does all the work for them, and the more that enter, the more work for them. If the cities would be very, very specific about exactly what they are looking for, that would cut the number of applications down to a more sane number of artists making a go for it, and the juries would not have to go through thousands and thousands of images.
As to contracts, if it is a big city, their contract is written for massive infrastructure projects that do not apply to artists. These contracts can be dozens of pages long, and changes are only rarely accepted. A big sticking point is that they always ask for professional development insurance, which we cannot get, as there is no terminal, PhD degree for art.
Thejas Jagannath:
How does your project get approved and how many months/years does it normally take?
Robin Brailsford:
Every project is different. There is no normal. I have had big projects go in from start to finish in one year, if the client and team are motivated by grant deadlines.
The Irwindale project, as part of the Metro Goldline construction, took 10 years.
I am still trying to get Bird Park finished and that has been almost 30 years!
Thejas Jagannath:
Do you follow the principles of urbanism in public spaces, when you create your public art?
Robin Brailsford:
Yes and no.
I read voraciously… hence the extensive bibliographies in the COLD CALL/MUSEUM AS MUSE newsletter, and thesis-like project proposals… but I tend not to jump on board with the current trends — most of which I have been doing on my own for decades. Check out my project bibliographies and you will see where my inspiration comes from — in short, deeper and deeper — found through science, poetry, story, history, beauty, legacy.
Thejas Jagannath:
How has COVID-19 affected your work as an artist?
Robin Brailsford:
COVID-19 for us has been wonderful. As an artist who has run the gauntlet of life and career completely contrary to societal advice and wisdom (“live in the suburbs, get a 9–5, marry a doctor etc.”) this moment has proved my choices to be spot on. I turned 65 a few years ago so have insurance and a small regular income because of social security. Congress sent me $1200 and is offering me (forgivable) small business loans and, “lo and behold!”, a furlough on my mortgage. That would NEVER have happened before COVID.
With my partner Wick Alexander I live in the country with a big studio, loaded with supplies and nice dogs and so we have nothing but time and inspiration to fill our days. Wick is making masterful paintings; I am making jewelry — really good jewelry, and more output that I did for my BFA in Metalsmithing. We are lucky in our timing and location. Our parents have passed, the house is built, we have no need for, or access to social concerns.
The news however, IS beyond terrible, I fear for our democracy and climate. We do stay very well informed, and I take my role as aunt and mentor very seriously, and do my best daily, to be a wise thinker and doer.
Thejas Jagannath:
How do you think your work is likely to contribute in the future?
Robin Brailsford:
Unanswerable: I can only do my best. You tell me.
Thejas Jagannath:
I feel it might have an effect on our public spaces and how we conduct our everyday lives, also inevitably impacting public art and how they are used by the public.
Thejas Jagannath:
Do you think tactile art/interaction with art will still be present after COVID-19?
Robin Brailsford:
It is interesting to me that the word “tactile” comes up often in your questions…. so I want to ask you — “what does that mean to you and why is it important? What is the translation of the word ‘tactile’ for you?”
The Grand Canyons of La Jolla, LithoMosaic detail.
Thejas Jagannath:
Tactile art, is the concentration of my Master’s thesis. The title of the thesis was People’s Interaction with Public Art in Public Spaces within New Zealand. I studied how interaction and engagement with public art is useful for the community. That’s the reason for the question regarding ‘tactile.’ I am interested in the various movements people make with public art rather than just observing…. like really immersing themselves through bodily experiences. That might reduce now with COVID-19, although we cannot be sure.
Thejas Jagannath:
The Black Lives Matter Movement has led to the removal of many oppressive statues around the world, affecting public art. How do you think this will contribute to public art in the near future?
Robin Brailsford:
It’s an interesting fact that they are tearing down history, and one will always think one’s own beliefs are far above and beyond the previous ideology. What of the Roman churches built over pagan sacred sites? The Taliban blowing up Buddhist carvings in Bamiyan? It’s a slippery slope, aligned too with male dominance and power brokering. There are Holocaust museums…. What is happening to the sculptures, and the artists’ work and legacy? Should these statues be housed as ‘degenerate art’? Melt them down like Inca and Mayan gold? Can we look past the physical sculptures, and disassemble the thought processes that erected and maintained them in the first place? Create work about that in their place?
What do you think?
Thejas Jagannath:
I think that the statues convey an oppressed and racist past that should not be celebrated in any way. They are symbols of one racial group showing superiority over another, which is very much still prevalent, even if in indirect ways. In order for us to globally achieve equality and understanding between all racial groups, it is important we create more cohesive and inclusive symbols of multiculturalism in our public spaces.
Robin Brailsford:
Agreed!
On a bright note, I think it does open up opportunities. I’ll give an example. For thirty years I have been the steward of BIRD PARK in San Diego, trying to get the public art elements finally funded and installed. A few weeks ago there was an incident in New York’s Central Park, where a white woman in a birding area called the police on a black man who remarked on her off-leash dog. It turns out, he is on the Board of the Audubon Society and is Harvard educated. She was acting on her (maybe legitimate?) fear. Does it make a difference who he was? Does it make a difference who she was?
That incident has started a movement called “Black Birders.” BIRD PARK could be a platform for these different sorts of groups to meet and understand one another. So art can help heal, and be non-judgmental.
BIRD PARK (6 acre) map and interpretive panel, San Diego, Ca
Statues are an old-fashioned idea, but parks and open spaces and PLATFORMS that allow for the expression of diverse ideas and learning — that is the work I think we should be embarking on, in this shocking and riveting decade of ours.
Thejas Jagannath:
Do you conduct any kind of workshops/classes?
Robin Brailsford:
I don’t. I rarely teach anymore — though I did have excellent stints at the University of New Mexico and Loyola Marymount University. I was a superb teacher, if I do say so myself; the College Art Association wrote that I was the best in the department at LMU, despite being a part-timer. The Dean would not give me a living wage though, so I moved on to Young at Art in San Diego and then my own work exclusively.
Occasionally I will teach a workshop — at Corning Museum of Glass last year, and would like to do Haystack and Pilchuck too — that would be a nice, “just rewards” sort of annual gig.
I do train artists as needed in LithoMosaic, and I am a founding member of Public Address (www.publicaddressart.net). Check out the Library documents for my contributions (and interviews!)
Field of Play, Frisco, Texas
Thejas Jagannath:
What are your future plans?
Robin Brailsford:
Creative Capital, in their workshops, gives a daunting assignment: “Write your obituary.” The trick of course is that, 1. One never thinks of that — or prefers not to, and 2. One has no idea when one is going to die, so the writing becomes aspirational and non-specific.
I have lots of big plans, and if I won a MacArthur or Guggenheim (nice, but highly unlikely!) I would spend all the money on my work. My mother (FrancesWosmek.com) used to say none of her best work was ever published. I would say the same about mine — the best is waiting in the wings for an encore.
I would like to complete all my major projects, as designed and approved, but never installed or completed… in San Diego: BIRD PARK and Miramar Water Treatment Plant; in Albuquerque: Territory of Magic; in Phoenix: Ed Pastor Transit Mall. Then there is the whole COLD CALL/Museum as Muse series (Facebook Link) for the museums in Tacoma WA, Syracuse NY, Toledo OH, Corning NY and Salem MA. And third, if I really had my druthers, I would also make parks of the entrances to the USA from Mexico and Canada. They are now looking like battlegrounds and are not welcoming or optimistic in any way — rather, vast illuminated truck depots and Border Patrol zones. Our own border crossing here, between Tecate MX and Tecate US, is at the end of Highway 94 — the Martin Luther King Highway. How awesome would it be for an MLK Park welcoming travelers to California? We are ready, have designed a large sculptural icon of opening doors for the park, and there is plenty of open land for sale.
So, I guess the answer is, “And yes, and yet she persisted.”
To know more about Robin Brailsford’s public artworks and the various projects she’s been involved in, please visit her at www.RobinBrailsford.com | https://medium.com/art-direct/interview-with-robin-brailsford-a-deep-understanding-and-passion-for-public-art-3d748fa28f48 | ['Thejas Jagannath'] | 2020-07-03 11:22:12.114000+00:00 | ['Interaction Design', 'Art', 'Public Art', 'Public Space', 'Artist Opinion'] |
Minakami Madness: Inaka Luxury | Define Your Own Luxury
My life’s pretty hectic, and I think I like it that way, but even so yet another thing that I appreciate about all of these inaka experiences of mine is that they really force you to unwind. Sometimes this is in more fortunate settings than others, but at the end of the day, once the dark comes in a lot of these communities, you’re not going out for anything but an emergency, and so there’s a whole lot of me-time.
And this, I think, is becoming part of my own definition of luxury, more so than the cash price put to accommodations or experiences. Splendor or squalor, getting out there into the sticks, integrating with local communities, and exploring the inaka landscape is, if anything, an exhilarating, life-affirming activity. After a day packed with such highly valuable human experiences, its great to have some time to sit back and relax on your own.
How do you do that, you ask? Well…
- I mentioned playing a game called Stardew Valley in my last post, which is a good one. Minecraft has also caught my eye of late. What a game that is!
- I also have an ongoing calligraphy-meets-cryptography project I spend a good bit of time developing and documenting.
- Believe it or not, I have very little training as concerns video production and editing, so working on that is always in the background
- I’m a musician and compose songs for my band and for my own enjoyment.
All of the above and more could be you if your organization bothered changing the way it goes about its business, but let’s jump back into the story, shall we?…
Day 2
After unwinding over a few beers and some snacks, it was finally time for bed. The Minakami house is basically complete, but when Cory and I were discussing plans, he pointed out that bedsheets and blankets were not part of the deal. Which, in retrospect, I sort of wonder was more a case of them not wanting to do laundry than of there being no bedding in the house… It would seem kinda funny if a house that put-together merely lacked the means of sound sleep.
Regardless, I brought a sleeping bag to make up for the lack of comforters. Funny thing about that sleeping bag is that I’ve had it for close to 2 years and originally bought it to use on my multi-day bike treks, during which I usually just camp somewhere with a tent. However, because I almost always take those long rides through the countryside in the summer, I’ve never had reason to take the sleeping bag along with me because I simply don’t need it in the heat. So it’s sat there in one closet or another since I bought it, unused, though certainly not unloved.
Luxuriously Brisk
I awoke early on a beautiful chill morning, the sun shining through the curtains with the silver strands of autumnal dawn. I’m not really a morning person, but the merits of getting up early aren’t entirely lost on me.
First order of business was a shower, and here is where I ran into a minor issue: the hot water wasn’t on for anything but the bathtub. Since I didn’t feel like being a nuisance, I improvised: brief spats of freezing shower followed by submersion in a piping hot pool.
This is actually quite similar to a routine I had at a Seattle spa called Banya 5 in which I would alternate between sauna and ice bath, so I wasn’t all too unfamiliar with the extreme temperature variation. Apparently there are health benefits to this, but none are scientifically proven, so I more or less think of this as a practice in will power. Even so, familiarity doesn’t reduce the shock, but while it is somewhat uncomfortable, it sure as shit opens up your eyes. And with that brisk foray out of the way, I entered the day hyper-aware of my surroundings.
Up the stairs and into the kitchen I went to prepare breakfast, which consisted of opening the camouflaged refrigerator to retrieve a Tuna-mayo onigiri, and boiling some water for instant coffee. This might sound pedestrian, but I assure you the one-two punch of GMO-infused fatty carbohydrates and tasteless rehydrated caffeine is the epitome of luxury.
T’Workin’
Our schedule that day only began after noon, so I took my time in getting my affairs in order. While enjoying breakfast, I hopped onto the wi-fi network to get some work done.
Of particular interest that day was my need to respond to Jessop Petrosky’s request for quotes to use in an article he was writing for The Japan Times about akiya, and the feasibility of *actually* buying one to move into. It goes without saying that I’m quite passionate about this topic, and ended up writing a few pages to get him what he needed.
While Jessop only needed a few good quotes, I’m not the type to just blurt some stuff out without context, so I sat down and thoughtfully answered his questions in paragraph form. The crux of what I ended up writing was that there’s a toxic, binary narrative infecting a lot of perceptions about Japan’s countryside. This narrative basically posits that anything outside of Tokyo and labelled an akiya is garbage, which is anything but true — as the luxury 3-story akiya I was writing in was testament to.
I finished that up in an hour or so, got to a few emails, and checked out activity across our digital presence. Around noon, I grabbed my coat, laced up my boots, and revved up the Jimny, ready for a day exploring Minakami.
Down from the Hills
I started on the long, winding descent down the mountain into Minakami to meet up with Cory at 711. Past ponds, through forests, and over potholes, the road back to civilization is fraught with peril, but eventually I emerged from the wilderness unscathed, and to great fanfare as the sky was miraculously full of rainbows. This is a quality of Minakami that isn’t spoken of too much, but it appears to just have daily rainbows. Weird.
Our first plan for the day was lunch at the city’s well-regarded pizzeria La Bier, and this certainly got me excited. Traveling Japan, you learn to appreciate many regional foods, mostly of a traditional caliber. While I enjoy that luxury, it’s nice every once in a while to see something that bucks the trend.
To that end, we headed North from 711 for about 20 minutes, through old townscapes full of spectacular traditional buildings, along the mighty Tone River, and into the town of Minakami proper. We parked at a roadside michi no eki, and got out for a walk through the town, accompanied by Cory’s dog, Scorpius.
Minakami’s a very north-south town as it’s situated in between two mountain ranges, and so as we walked through the various locales within the town limits, forever present was the looming gaze of snowcapped mountains. As we walked, Cory pointed out establishments to pay attention to: local craft brewery Octone, a friend’s restaurant Ruins, a newly opened cafe, and more, all of which strengthened my impression that Minakami is a tightly knit, communal town. Cory will cop to that.
The Epitome of Luxury
We reached the pizza place and were quickly seated. Minakami is a pretty quiet spot after the Summer Sports and before the Winter Sports arrive in normal times, but with Coronatime there is a whole new level of inactivity, so it’s not as if we had any crowds to battle.
Seated outside, we were soon presented with menus, which featured a decent selection of pizza toppings, many of which were locally harvested or made — we got the mushroom and pepperoni varieties. The server asked if we’d like to try one of the local beers they had available, but given that we were driving, we turned down the offer — maybe next time.
As we tucked in, we got to talking about his business, life in Minakami, opportunities in rural regions and, unsurprisingly, akiya. The region is chock full of them, and some are beginning to be used in promising, novel ways. There’s an abandoned elementary school that’s been repurposed as a teleworking space. A pension that’s being retrofitted for the 21st century. Discussions about transforming ryokan into wellness retreats, and more. It helps quite a bit that Minakami has such a high number of outdoor experiences in the area, and also a notable foreign presence, that these projects don’t seem all that wacky or unreasonable at all for your average person.
A Walk Around the Town
After about an hour and 2 pizzas (Italian style, btw, not NYC), we finished up and headed back to the parking area, where there was a riverside trail Cory wanted to show me: the great outdoors of Minakami. We walked along for 30 or so minutes, and Cory spoke of more adventurous fare: trail running, bear attacks, bungee jumping, oh my! Up until this point I had appreciated the business potential of a place like Minakami much more than the experiential, but with Cory’s anecdotes it became much clearer that a “sleepy” town in Japan’s rural reaches quickly becomes something /much/ more exciting without much work.
We doubled back at a red bridge crossing the river maybe 30 meters below, taking the time to scope out a few of the vacant hotels along the shoreline as we went. Once back at the parking lot, we each got into our respective vehicles and Cory led the way with me following, back towards Kamimoku and the house where I was staying, because right near the blue house Cory had pointed out the night before is a newly completed cafe built by outdoor sports enthusiasts, called OneDrop.
We parked at another nearby house which Cory manages, and walked down the hill to OneDrop as the sun was beginning to set. Over the course of my time in Minakami, I got the feeling that dusk comes earlier there than it does elsewhere, perhaps due to the surrounding mountains blocking the sun out earlier than on a plain. Or maybe I’m just making stuff up, but either way, the fact is it was starting to get dark.
OneDrop, 4 Cups
Outside stood the owners, two gentlemen in their mid-40s or early 50s. They warmly greeted us and we almost immediately began speaking about akiya with noticeable excitement. We spoke about the “modern” Japanese real estate market and its turf rivalries, competing portfolios, and conspicuously convoluted intel structure, all of which make it extremely difficult for a potential buyer to assess the core qualities of any property they’re interested in, needlessly complicating the process.
On top of that, there’s the government-mandated 3% commission which all agents chase after as their main source of income. This disincentivizes an agent from working on anything but high-end, conveniently located properties, such that quality ones which fall outside of that definition get considerably less attention and upkeep than the more attractive ones, dooming them to neglect.
These 2 factors together create a vicious cycle in which the damned properties fall further out of favor due to no fault of their own while at the same time concentrating the easily accessible property pool with more and more top-end listings, all due merely to standard practice.
“We flip that script,” I said, to much interest. By not tying ourselves to any one region we open up the opportunity for bespoke portfolio curation, and by opting for a largely fee-based model we create a service that is solely focused on accommodating your needs.
“Oh. That’s incredible,” one said. “Why the hell hasn’t anyone done that before?” the other chimed in, and then we went inside their new establishment for more talk of the akiya market and inaka life over drip coffee and fresh mikan.
A Day Spent in Inaka Luxury
After an hour or so, we decided to call it a day. We wrapped up our pleasant conversation by exchanging meishi, and we were soon on our way. I wasn’t yet headed back to the house, but to the local supermarket for something a bit healthier than what 711 typically offers for dinner. Plus I just don’t like repeating meals day after day.
The trip to Beisia took maybe 15 minutes, and jeez was it worth it. This one is apparently the largest in the area, and was really quite well stocked. I grabbed some beef, spinach, tomatoes, red onion, oil & vinegar, blue cheese (all local) and a few other ingredients for a simple steak salad, and a bottle of red wine, loaded that into the Jimny, and started making my way back up the mountain, again in the chill evening air and this time with a much better understanding of my surroundings after having spent the day exploring with Cory.
Way back in the day I fancied myself a pretty decent chef, but having spent so many years cooped up in small Tokyo apartments with mere micro-facsimiles of a functional kitchen, I worried my ability to navigate one had waned. Fortunately, the kitchen at this Minakami property is a real winner (details here), and I spent the waning hours of my second day in beautiful Minakami cooking up a storm, enjoying wine, and grooving to the dulcet sounds of Uncle Acid & The Deadbeats.
Luxury is a word whose definition morphs with each person, and up there in the cool dark of Minakami’s hills and forests reflecting on a day spent with new friends, experiencing new foods, and speaking about passions shared jointly, I must say, I’m pretty sure that’s my definition.
To see more stories like this, and to learn about opportunities in rural Japan, visit Akiya & Inaka | https://medium.com/@tokyometal/minakami-madness-inaka-luxury-b097b13276e5 | ['Matt Ketchum'] | 2020-12-22 08:36:46.267000+00:00 | ['Japan', 'Gunma', 'Minakami', 'Real Estate', 'Akiya'] |
Spring Boot + Kubernetes — Scalabilità con Horizontal Pod Autoscaler (HPA) |
Why its important to update your TYPO3 Websites on a regular basis! | Most websites today run on content management systems, and just as the content of a website needs to be regularly updated, so does the CMS itself; keeping it current is a necessary part of website maintenance. CMSs release newer versions every few years, and although sticking with the version you’re already using may seem like the easier option, it can be very risky. The story is no different with TYPO3.
Statistics show that 22% of the top one million websites worldwide are running on outdated infrastructure. Updating your TYPO3 based website to the latest version is important not only for enjoying the latest added features but for numerous performance and security based reasons as well.
Read more at http://www.nitsan.in/blog/post/why-its-important-to-update-your-typo3-websites-on-a-regular-basis/ | https://medium.com/nitsan-technologies/why-its-important-to-update-your-typo3-websites-on-a-regular-basis-30b7947e26bb | ['Nitsan'] | 2017-08-29 10:34:24.801000+00:00 | ['Upgrade', 'Security', 'CMS', 'Typo3', 'Update'] |
The Unstable Impact of Climate Change, Confirmed Via Satellite Observation | The Unstable Impact of Climate Change, Confirmed Via Satellite Observation
Too little vegetation is a troubling trend that could lead to food shortages and climate refugees in the future. Mary Elaine, Dec 1, 2020
Photo by USGS on Unsplash
Researchers from the University of Copenhagen have been monitoring vegetation trends across the world using historical satellite imagery. Focusing on the planet’s driest areas, they identified a negative trend of too little vegetation sprouting in some nations, whereas the situation was the opposite in wealthier countries, which showed more vegetation. They warn that, in the future, this could mean food shortages and large numbers of climate refugees.
More than 40% of Earth’s ecosystems are arid, a share that will only increase over the course of the 21st century. Some of these territories, such as those in Africa and Australia, may be savannah or desert, where scarce rainfall has long been the norm. Within these biomes, vegetation and wildlife have adapted to their meager water supply, yet they are highly vulnerable to climate change.
Using extensive imagery from satellites that observe the Earth every day, researchers from the University of Copenhagen’s Department of Geosciences and Natural Resource Management have studied the development of vegetation in arid regions. As Professor Rasmus Fensholt of the department put it:
Burn It Down: A Playlist for Angry Women | Burn It Down: A Playlist for Angry Women
A playlist is very much like an anthology — a collection of different artists, each with their own unique sound and vision, coming together to create something new. The combination creates its own power. When I was soliciting and editing work for my anthology, Burn It Down: Women Writing About Anger, it was important to me to have a range of different kinds of essays — I didn’t want to push any of these writers into a specific form but wanted them to write their anger in the way that felt most natural to them. The voices and styles in the book range as widely as the topics and the backgrounds of the writers — but they all work together. They flow into one another like a Beyoncé song can flow into a PJ Harvey song on a good playlist.
I asked each Burn It Down contributor to choose a song to accompany their essay, and say a little bit about why they picked the song they did. Check out their selections below, and listen to the whole playlist here. If you like what you hear, check out Burn It Down here! | https://medium.com/@lillydancyger/burn-it-down-a-playlist-for-angry-women-1aa16e0fc106 | ['Lilly Dancyger'] | 2019-12-06 17:08:26.563000+00:00 | ['Anger', 'Feminism', 'Playlist', 'Music', 'Personal Essay'] |
Anime Tier List(December 2020). As we come to a close from the craziest… | Love is War season 2 continues the amazing set up of the first season with even more student council shenanigans. The second season manages to differentiate itself from season 1 with a different overall tone. The first season had a lot more comedy and romantic ideals, while this second season had a lot more serious drama than the first. It was still done with the same excellent quality of the first season, but the tone shift was unexpected. At first, I was so-so about it, but it really started to grow on me as I reflect back, and the final 3 episodes are just amazing. They delve even further away from the typical Love is War formula, with some actual backstory and character development, but it was maybe the best presentation for any scene so far in the series. The only thing I dislike so far is that Lino was added and yet has no current purpose, often being more of a background character than Ishigami. This will hopefully change when season 3 comes out. Regardless, an excellent season.
Verdict: Staying at A+
High School Fleet has a similar premise to a previously covered anime, Girls und Panzer. I wasn’t the biggest fan of GuP because of how small the niche for the show was, as well as some boring moments, and I expected High School Fleet to be the same, but with warships instead of tanks. I was pleasantly surprised, however, that this anime went beyond a slice of life. It had an actual story and character progression. Also unlike GuP, this anime did not have history lessons that really dragged out the show and made it boring. Anything historic about the ships or formations you learn are learned through how the characters talk to each other about ships, rather than bringing up a 4th-dimension breaking presentation. Despite all of this, it isn’t exactly a great show though. The plot is very gimmicky, and the quality of the show doesn’t really go very far. In many ways, it’s just a slightly expanded upon slice of life. Regardless, this anime was a big step up from GuP, as these shows are very similar in most regards. It has some pretty solid animation and it actually attempts to use all of its characters, unlike GuP.
Verdict: B-
Your Lie in April was an anime I was initially hesitant to start watching since I was under the impression that it was something that I was not going to like very much. I eventually decided to try it though and it turned out to be a very heartfelt and funny anime that I did not expect. Many anime are nice and outlandish with a bunch of Sci-fi or just unrealistic situations for the sake of story or comedy, so when a very grounded anime comes around like this, it’s really a breath of fresh air. Many sad or depressing moments were relatable and not too over the top. The art style has this sort of intentional messiness that makes it really appealing to the eye. The only real negative I noticed during this anime is that, as someone who doesn’t play an instrument, listening to this many piano solos got tedious eventually. But every other moment had me glued to my screen and wanting more.
Verdict: A
Toradora is a unique romantic comedy about 2 very quirky characters finding their true love by asking help from each other’s best friend, who happen to be neighbors. The character development in this anime is incredible, as it shows a very realistic and reasonable development of the thought process and emotions of the main cast. It doesn’t feel artificial, but very natural and authentic considering all of the situations encountered. The first 2/3rds of the anime disguises itself as wholesome and fun, but it takes a big turn towards the end. Personally, I wasn’t a fan of how the ending was done, and felt that the last few episodes were pointless in the grand scheme of things, but it was still alright. The ending was awkward, but in a way that kind of fit the whole vibe of the show, so I was more okay with it. Other than that, my only problem with the show was the character Minorin. Many times she was fine, being a quirky, absent-minded person who works an insane amount of hours to hide her pain by being busy, which is a nice trope to build on. However, she would randomly get extremely angry over things she shouldn’t, and it felt like a trait that was just tacked on and didn’t fit her personality very well. These moments are few and far between, though, and as a whole Toradora is an amazing show that has aged well despite being released in 2009. Yeah, a whole decade ago. Its animation style in particular really impressed me, and as a whole, I really enjoyed watching all of Taiga’s antics.
Verdict: A-
The best part is the intro honestly
Blend S is a funny little slice of life with an all too familiar premise: a cafe with a bunch of funny anime girls. So what differentiates Blend S from the rest? Primarily, the characters are much less 1 dimensional and have more depth to their character. It was also more on the funny side than the wholesome side, which overall helps it stay relevant and interesting. Its OP is also very iconic and catchy. The positives stop there, unfortunately, because despite focusing more on comedy, it didn’t do much to make me laugh. It wasn’t terrible by any means, but many of its jokes didn’t land. Perhaps my standard is just really high at this point, but when an anime that is selling itself on being funny only makes me laugh sometimes, it’s hard to really enjoy the other parts of the show. It’s still a decent show though, and it’s short, so give it a try if you want something new and unique from the slice of life genre.
Verdict: B
I guess I was just in the mood for something funny because I watched Nichijou right after Blend S. Both are shows built on the same thing: slice of life by nature, but with the intent to make you laugh a lot. The direction these shows went in though, were quite different. Nichijou has a very relaxed vibe and doesn’t take itself very seriously. In fact, it seems to endorse the weirdness of the characters and put it to the max in terms of the animation style. Just like Blend S though, I didn’t actually laugh too much at the jokes thrown in the show. This time, however, I suspect it’s because of which jokes were used rather than the delivery. This is an extremely high-quality show, but many of its jokes are specific to Japanese culture. Without me living in Japan or understanding much of its culture, I just didn’t find many of the jokes making me laugh. It is still entertaining for sure, but without the comedy element, I was a little bit bored at times. This show could easily reach A tier if I had been able to enjoy the jokes, but as it stands, I’m not enough of an otaku to appreciate the show.
Verdict: B-
One of the hottest anime being released in 2020, Rent-a-Girlfriend showed a lot of promise with a very interesting and unique premise. As far as I know, rental “people” is a thing that is done sometimes in Japan. I’ve definitely seen a video of a rental mother before, so a rental girlfriend is not too far of a stretch. I loved the idea of this show and how unique its premise is, since I’ve seen nothing like it before. And initially, it definitely holds up. It has cute girlfriends with fake personas, and an unfortunate MC grieving over an ex, who decides to use the rental service. Beyond that though, there are many problems with the show. The primary one is that the MC, Kazuya, is just an awful character in multiple ways. Not only is the writing for his character bad (maybe on purpose?), but he has many moments where he is just dumber than a brick, with a hint of toxic masculinity thrown into the mix. I dropped the show Gamers! for having 2 extremely infuriating main characters that pissed me off, and Kazuya was starting to head in that direction for a little bit. One example is Kazuya being sad that he is alone on Christmas AFTER he told a girl who wanted to hang out with him that he “had other plans”. Eventually, he redeemed himself a bit and I didn’t quit the show, but I certainly thought about it. The art style and animation quality are really good, which definitely helped as I debated whether I should drop the show or not. It has these waves of a few great episodes followed by a few terrible ones, which is really weird. If you want a unique premise to a show with some waifu material girls, this is your show, but be warned that the MC has a way with words that really pissed me off at times.
Verdict: C
Aho-Girl has a lot of dumb antics to make the show fun and entertaining. In the same way that many cartoons are funny because they have dumb characters in funny situations, Aho-Girl has plenty of dumb or entertaining characters to use. This show takes pride in its stupidity, and you know, I respect that. Embracing your stupidity and rolling with it is better than what a lot of shows can put up. There isn’t much else to say here, this is a really funny and dumb show with a shorter episode length. Don’t expect it to be the best show ever, but I really enjoyed binging it when I lost power for a few hours and it made the wait a lot more enjoyable.
Verdict: B-
My Teen Romantic Comedy SNAFU is the last show for this blog. Despite the title, this show is actually 3 main genres but never all 3 at the same time. Season 1 was a comedy with a small amount of drama, season 2 was just entirely drama with a hint of romantic elements at the end, and season 3 finally had the more romantic drama part. It’s very weird that it never had a nice blend of all 3 genres it clearly aims to have at the same time. The comedy aspect just drops after the first season and only occasionally makes an appearance later. I really enjoyed the show though, partly because I could relate to the MC Hikigaya a lot. I used to be a loner in high school and much like him, I eventually got sucked into a group of friends that I now really appreciate for bringing me out of my shell. Even with the series having 2 distinct art styles, they both fit well. My main complaint with the show is the writing. It’s an intentional decision, but I really disliked how cryptic the characters are. This show is great for making you pull out a Thesaurus just to try to understand what the characters are trying to express. They are unable to say anything remotely close to “I just want to learn to be more independent” and must instead say it in a way that confuses everyone and thus causes drama. There are plenty of good moments throughout the show, but I would be lying if I said I didn’t have to dig through the comments section of each episode to try to figure out what the characters meant at some tense point of conflict. If you enjoy something with a bit more elegance than your typical rom/com drama, this show might be for you.
Verdict: B+
Thank you everyone for making it this far. If you have any suggestions, comments, or concerns feel free to tell me, as I’m still far from perfect with how I do these. Below will be the tier list. It is extremely long now, and I am looking into a way I can make a tier list that works (primarily, a tier list that has a way to expand so you can see the picture larger and also some text for me to put the review in). If you know any websites that can allow me to do this please let me know! This may also be my last post on medium, as I am working on making a website so that I can format things how I want as opposed to how medium forces me to. I will make an announcement on here if/when it happens. Stay safe and happy holidays everyone!
Tier list:
S tier:
K-On
Konosuba
Weathering With You
Clannad
A tier:
A Certain Scientific Railgun
Love is War
Yuru Yuri
Bloom Into You
Release the Spyce
Bang Dream
Kokoro Connect
Your Lie in April
Lucky Star
Toradora
Love, Chuunibyou, and Other Delusions
New Game!
Love Live! School Idol Project
Usagi Drop
My Teen Romantic Comedy SNAFU
Your Name
B tier:
Yuyushiki
Magical Senpai
The Pet Girl of Sakurasou
Girl’s Last Tour
Gabriel Dropout
Blend S
A Certain Magical Index
Trinity Seven
The Quintessential Quintuplets
A Certain Scientific Accelerator
Seiren
Kinmoza
High School Fleet
Is the Order a Rabbit?
Is The Devil a Part Timer?
Nichijou
Love Live! Sunshine!!
Aho-Girl
C tier:
A-Channel
Love Lab
Rent-a-Girlfriend
Girls Und Panzer
Citrus
How Heavy Are the Dumbbells You Lift?
D tier:
Sakura Trick
Komori-san Can’t Decline
Miru Tights
F tier:
Gamers! | https://medium.com/@lavamites/anime-blog-december-2020-3778cf137970 | ['Michael Reynolds'] | 2020-12-24 00:00:30.437000+00:00 | ['Anime', 'Blog'] |
Broken Heart | Danny had entered puberty. He had a cassette player that his dad gave him when he got back from Korea. It was battery-operated. Danny took it with him to the “outpost,” a place out in the forest where he went to be alone and listen to music.
There was so much great music coming out at the time. Danny had an hour-long cassette of all of his favorite songs. He listened to it all day and his favorite place to listen to it was the outpost. He would sit on his favorite boulder and put the headphones on. He would look up at the tree tops and be swept away to different worlds.
One of Danny’s favorite songs was a number one hit song by the Bee Gees called, How Can You Mend a Broken Heart? It was a beautiful song he listened to as he imagined a future love life.
Danny had never had a girlfriend. He had no idea yet what a broken heart felt like. Listening to that song, he figured it must be a very, very intense feeling. And it must be beautiful somehow.
“I hope to someday experience that,” he thought to himself.
…and so it eventually came to pass. | https://whitefeather9.medium.com/broken-heart-4a33e492ab03 | ['White Feather'] | 2019-05-29 15:33:16.323000+00:00 | ['Humor', 'Love', 'Music', 'Fiction', 'Flash Fiction'] |
Are Robots Really Safe? | Robots are becoming increasingly popular in workplaces around the globe, especially Cobots, the machines designed to work next to humans. But when considering implementing any technology, it’s essential to keep safety at the forefront. What possibilities exist for robots malfunctioning and hurting people or otherwise compromising worker well-being?
But are they really safe?
Industries worldwide are evolving towards the extensive use of automation and industrial robots to deliver process and system efficiencies. With the increasing Automation, it is equally important to follow proper safety protocols. However, given that a system failure can carry severe consequences for both people and equipment, compliance with regional and international robot safety standards is becoming a key concern for suppliers, manufacturers and system integrators.
Now let us see some of the types of accidents that can happen:
The operational characteristics of robots can be significantly different from those of other machines and equipment. Any change to the object being worked on, or to the environment, can affect the programmed movements. Some maintenance and programming personnel may be required to be within the restricted envelope while power is available to actuators. The restricted envelope of one robot can overlap a portion of the restricted envelope of other robots or the work zones of other industrial machines and related equipment. Thus, a worker can get hit by one robot while working on another, trapped between them or peripheral equipment, or hit by flying objects released by the gripper.
1. Impact or Collision Accidents: Unpredicted movements, component malfunctions, or unpredicted program changes related to the robot’s arm or peripheral equipment can result in contact accidents.
2.Crushing and Trapping Accidents: A worker’s limb or other body part can be trapped between a robot’s arm and other peripheral equipment, or the individual may be physically driven into and crushed by other peripheral equipment.
3.Mechanical Part Accidents: The breakdown of the robot’s drive components, tooling or end-effector, peripheral equipment, or its power source is a mechanical accident. The release of parts, failure of gripper mechanism, or the failure of end-effector power tools (e.g., grinding wheels, buffing wheels, deburring tools, power screwdrivers, and nut runners) are a few types of mechanical failures.
4.Other Accidents: Other accidents can result from working with robots. Equipment that supplies robot power and control represents potential electrical and pressurized fluid hazards. Ruptured hydraulic lines could create dangerous high-pressure cutting streams or whipping hose hazards. Environmental accidents from arc flash, metal spatter, dust, electromagnetic, or radio-frequency interference can also occur. In addition, equipment and power cables on the floor present tripping hazards.
Role of Human Errors in Accidents:
Robots are smart, but most of the time they need humans to program them for tasks, and those people can make mistakes that lead to unintended consequences. For example, robotic surgery involves risk, some of which may be similar to that of conventional open surgery, such as a small risk of infection and other complications. It’s essential to realize that the Da Vinci system never operates independently; it is guided by surgeons using foot pedals, joysticks and a viewer.
Robots bring many pros and cons to workplaces, especially concerning enhanced labor output and increased consistency. But benefitting from the advantages requires ensuring workers have the knowledge needed to use the machinery safely and confidently. Despite the examples of events mentioned above, data collected by OSHA reveals that there have been only 40 incidents of injuries or deaths related to robots in the workplace.
Studies show that most injuries happen not during normal operation but at the times when human interaction is most prevalent. By design, robots rarely need human interaction during normal functioning and operations; interaction is required mainly when programming, testing, inspection or repair takes place.
Errors in prior programming, in interfacing activated peripheral equipment, or in connecting live input-output sensors to the microprocessor or a peripheral can cause dangerous, unpredicted movement or action by the robot — all stemming from human error.
Apart from human errors, there are many other possible causes of accidents. Intrinsic faults within the robot’s control system, errors in software, and electromagnetic or radio-frequency interference are all control errors. These errors can also arise from faults in the hydraulic, pneumatic, or electrical sub-controls associated with the robot or robot system. Mechanical failures, power failures, and improper installation of the robot or robotic system can also create inherent hazards.
What precautions should we take to reduce accidents and increase the safety of robots?
1. A risk assessment must be done before installing any robot or robotic system. There are different system and personnel safeguarding requirements at each stage, and the appropriate level of safeguarding determined by the risk assessment should be applied.
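A common informal way to arrive at that "appropriate level of safeguarding" is a severity-times-likelihood rating. The sketch below is a toy illustration of that idea; the 1–5 scales, thresholds, and suggested safeguards are assumptions made for demonstration, not values taken from ISO 12100, ANSI/RIA R15.06, or any other standard.

```python
# Toy risk-assessment rating for a robot cell: severity x likelihood.
# Scales, thresholds, and safeguard suggestions are illustrative
# assumptions only; a real assessment follows a published standard
# and is performed by a qualified assessor.

def risk_level(severity: int, likelihood: int) -> str:
    """Rate a hazard; severity and likelihood each range 1 (low) to 5 (high)."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("ratings must be between 1 and 5")
    score = severity * likelihood
    if score >= 15:
        return "high"      # e.g. redesign the cell or add fixed barriers
    if score >= 6:
        return "medium"    # e.g. add presence-sensing safeguards
    return "low"           # e.g. awareness devices may suffice
```

Under these made-up thresholds, a crushing hazard rated severity 5 with likelihood 3 scores 15 and lands in the "high" band, pointing towards fixed barriers rather than awareness devices alone.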
2. The design and position of the control panel are also important for a safe robotic system. The control panel must be located outside of the work envelope but within sight of the robot. Emergency stops should be located in all zones where they are needed and should be a large part of emergency personnel training.
3. Unauthorized access to the workspace can be hazardous to the personnel. Several methods can be used to protect the work envelope like:
(a) The simplest are known as awareness devices. These devices are intended to define the work envelope and make personnel aware of the hazards; they do not prevent access, and consist of fences and roped-off areas.
(b) Safeguarding methods are used to prevent unauthorized access to the work envelope. The simplest is known as the fixed barrier method, such as a fence or a wall: a physical barrier is put around the workspace and configured in a way that prevents access through, over, or around it without a special key or access code.
(c) Another method of protecting the robot workspace employs the use of presence-sensing devices. These devices detect the presence of a person in a hazardous area and slow down or stop the robot. Pressure mats are installed just outside the robot's workspace; when a person steps on the mat, a signal is sent to the robot to stop or slow the work.
(d) Safety light curtains are another type of presence-sensing device. When a person enters the work envelope, the light beam is interrupted, which sends a signal to the robot. For certain work scenarios, sudden stops could cause the work to become uncontained; objects may continue to move even when the robot stops. In such cases emergency braking of robots is used instead of completely stopping. In emergency braking, the robot slows down and does not stop suddenly.
4. Operator Safeguards. The system operator should be protected from all hazards during operations performed by the robot. When the robot is operating automatically, all safeguarding devices should be activated, and at no time should any part of the operator’s body be within the robot’s safeguarded area.
5. Maintenance and Repair Personnel. Safeguarding maintenance and repair personnel is very difficult because their job functions are so varied. Troubleshooting faults or problems with the robot, controller, tooling, or other associated equipment is just part of their job. Program touchup is another of their jobs as is scheduled maintenance, and adjustments of tooling, gages, recalibration, and many other types of functions.
These recommended inspection and maintenance programs are essential for minimizing the hazards from component malfunction, breakage, and unpredicted movements or actions by the robot or other system equipment. To ensure proper maintenance, it is recommended that periodic maintenance and inspections be documented along with the identity of personnel performing these tasks.
6. Safety Training. Personnel who program, operate, maintain, or repair robots or robot systems should receive adequate safety training, and they should be able to demonstrate their competence to perform their jobs safely.
What is a Robot Safety Standard?
As industrial robots continue to become more advanced, more capable and more popular on the international stage, the need for comprehensive robot safety standards increases exponentially. Robots can create havoc if they don't follow safety protocols.
A robot safety standard is a collection of guidelines for robot specifications and safe operations which all those involved in the manufacture, sale and use of robots must follow. Often, standards are created by a diverse group of industry interests to ensure the standards benefit everyone. Every country has different robot safety standards. International standards organizations work to unify and assimilate these separate sets of robot standards into a more cohesive whole.
Some of the popular standards organizations are:
· International Organization for Standardization (ISO): Headquartered in Geneva, the International Organization for Standardization is an international standard-setting body composed of representatives from various national standards organizations.
· International Electrotechnical Commission (IEC): The International Electrotechnical Commission is an international standards organization that prepares and publishes international standards for all electrical, electronic and related technologies — collectively known as "electrotechnology".
· American National Standards Institute (ANSI): The American National Standards Institute is a private non-profit organization that oversees the development of voluntary consensus standards for products, services, processes, systems, and personnel in the United States.
· Robotic Industries Association (RIA): The Robotic Industries Association is a United States trade group organized to serve the robotics industry.
· Bureau of Indian Standards (BIS): It is the National Standards Body of India, established under the BIS Act 2016 for the harmonious development of the activities of standardization, marking and quality certification of goods and for matters connected therewith. This organization was formerly known as the Indian Standards Institution (ISI).
In conclusion, although robots have potential hazards which can prove to be very dangerous, these can be prevented by taking proper safety measures, educating technicians about the latest advancements, and following standards. If all the necessary care is taken, robots can be a highly efficient and safe way to accomplish work, and humans and robots can work collaboratively for the development of mankind.
By Suraj Shewale, Malhar Surangalikar, Devarshi Talewar, Pooja Tekade and Sarvesh Wadi
Virat — Rohit: Rift or Reverence?

Post Highlights
It has been a while now since the news has flooded the internet that there is an obvious rift between the two cricket giants of this generation, Virat Kohli and Rohit Sharma. This blog will bring light to reality! #ThinkWithNiche
A cricket frenzy nation! It is how the sports world recognizes us! We live, dream, and sleep cricket! Sunil Gavaskar, Sachin Tendulkar, and MS Dhoni have been our heroes. We have always idolized them. Their hits have made us crazy, while their misses broke our hearts. When they are victorious, we are on the moon, while their defeat pushes us into the darkest pit of misery. Maybe this is why even the tiniest piece of news about our cricketing heroes catches the spotlight and turns into a huge controversy. Our present heroes are ace cricketers, Captain Virat Kohli, and vice-captain (till now) Rohit Sharma, the famous Hitman. Naturally, they have to bear the showers of petals as well as pebbles from us! It is the cost of being an Indian cricketer, perhaps! And they are not just Indian cricketers but great cricketers who have broken many records in world cricket!
The Legacy of Virat and Rohit!
Rohit Sharma and Virat Kohli made their debut in the Indian Cricket team virtually at the same time. While Rohit Sharma made his debut on 23rd June 2007 against Ireland, Virat Kohli wore the blue cap on 18th August 2008 against Sri Lanka. It is noteworthy that they made their debut in first-class cricket in the same year in 2006. Virat may have piled up 70 centuries to just 41 of Rohit, but Rohit is the only player in the world to have scored three double hundreds in one-day internationals. There is also not much difference in their skills as far as captaincy is concerned.
Virat has played 205 matches as a captain and won 130 games in all formats. Rohit has played about 185 matches and won 122 matches in total. We know this Delhi cricketer Virat Kohli as a man of principles. He has come out to be an immensely daring and fiercely competitive cricketer. A firm bottom-hand grip and the ability to smash the balls in his chosen direction have helped him to earn the title “Run Machine.” People often compare him with the great master blaster Sachin Tendulkar for his knack for breaking records. The cricketer from Mumbai, Rohit Sharma, caught the imagination when he scored his first double century in the shorter format of the game.
Known as an effortless striker, Rohit gradually made his permanent place in the Indian Cricket Team. He is perhaps the best opener in present times! Apart from the numerous wins as a captain, he has also 5 IPL Titles to his credit. His calm attitude and his habit of speaking only with the ball have won many hearts. It is another thing that his greatest gift (to score big in limited edition) becomes his burden (to meet the ever-increasing expectations of the people every time he takes the field). But the number of shots in his armory easily puts him in the league of the greatest players of all time. No wonder he is known as the “Hitman” in cricket.
Rohit-Virat: The Magical Duo!
Together, Virat and Rohit are the exact foil for each other. They both complete each other. They are, perhaps, the greatest pair in the present era as far as partnerships are concerned. The Kohli-Sharma pair has a daunting partnership average of 65.51. They have 15 hundred-run partnerships and five double-hundred partnerships to their credit. The success rate of this pair in winning matches is a daunting 84.04%. Is it not proof enough of the mutual understanding and respect they share?
The Controversy: O’Captain, My Captain!
The Captaincy row has very often caused a dent in the grace of the otherwise great Indian Cricket. It happened with Saurav Ganguly, and it has happened now with Virat Kohli. After all, it is a shift in leadership. How can controversy be far behind? The whispers had already begun in the corridors of cricket that Virat might lose the Captaincy. But it happened on 8th December when the BCCI announced the Test team for the South Africa Tour, where Rohit Sharma was named as the ODI Captain instead of Virat Kohli. Kohli had earlier given up the T20 Captaincy after the T20 World Cup in October–November, citing a heavy workload. He said on 16th September, "Understanding heavy workload is a very important thing and considering my immense workload over the last 8–9 years playing all three formats, and captaining regularly for the last 5–6 years, I feel I need to give myself space, to be fully ready to lead the Indian Team in Test and ODI cricket." It is in this statement that the problem resides! Virat Kohli gave up the T20 Captaincy in November, but the selectors were against having two different white-ball (ODI & T20) captains.
The BCCI president Saurav Ganguly stated, "He stepped down as T20 captain, and the selectors decided not to split limited-overs captaincy, opting for a complete separation. The bottom line is that there can't be two white-ball captains." So Virat Kohli was kept as the Test Captain, and Rohit Sharma was given the mantle of the ODI and the T20 Indian Cricket teams. What added fuel to the fire was the injury of Rohit and his availability for the Test Matches in South Africa. Next, the captaincy row gave birth to rumors that Virat was unavailable for the ODI series, and so it appeared that they did not want to play under each other. But Virat is available, so no love is lost between the two greats!
Rohit-Virat: No Love Lost!
So it is visible that Rohit Sharma has no role to play in this whole incident. Even if there is a rift, it is certainly not between Virat and Rohit. They may not be best friends, but they are not enemies either. They have enjoyed great partnerships and have won so many matches together. They are both team players, and they both work hard for its success. As Virat said, “I am tired of clarifying my doubts on my relationship with Rohit. My responsibility is always to push the team in the right direction.
Rohit is a very able captain and tactically sound. We have seen that in the games that he has captained India and in the IPL. Along with Rahul (Dravid) Bhai, who is a very balanced coach and a great manager, they will have my absolute support in whatever vision they have for the team.” Rohit echoed similar sentiments when he said for Virat, “We had a great time playing under him. I have played a lot of cricket with him and enjoyed every moment, and I will continue to do that. We need to keep getting better as a team and as individuals, and that will be the focus not just for me but for the entire squad moving forward.” Their warmth and mutual respect are reflected by their statements. When you are a team, tiffs and occasional disagreements are bound to happen. But what lies is a deep bond which only gets stronger with time!
Conclusion
Why? Why do they do it? It happened with the legends Sunil Gavaskar and Kapil Dev. It happened with great icons, Sachin Tendulkar and Rahul Dravid and then MS Dhoni and Virat Kohli. But in reality, the bonhomie and mutual respect are there for all to see! Virat and Rohit are one of the greatest players of this era who complement each other well! So leave the rift for the rumor-mongers, and let us enjoy the game! Let’s go for the glory of Team India in South Africa!
Working with Package Visibility

In Android, we are making changes to enhance user privacy and platform security to provide our users with a safer experience. Apps targeting Android 11 (API level 30) or higher will only see a filtered list of apps that are installed on a device. In order to access apps beyond that filtered list, an app will need to declare the apps it needs to interact with directly, using a <queries> element in the Android manifest. This blog post will go through best practices of how to adapt to this feature.
Querying and interacting with apps:
There are different ways to query and interact with apps:
If you know the specific set of apps that you want to query or interact with, include their package names in a set of <package> elements inside the <queries> element.
If your app needs to query or interact with a set of apps that serve a particular purpose, but you might not know the specific package names to include, you can list intent filter signatures in your <queries> element. Your app can then discover apps that have matching <intent-filter> elements.
If you need to query a content provider but don’t know the specific package names, you can declare that provider authority in a <provider> element.
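As a rough illustration of how those three kinds of declarations might look together in a manifest (all package names, the intent signature, and the provider authority below are placeholders for this sketch, not values from this post):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.myapp">
    <queries>
        <!-- 1. A specific app you know you interact with -->
        <package android:name="com.example.partnerapp" />

        <!-- 2. Any app whose intent filter matches this signature -->
        <intent>
            <action android:name="android.intent.action.SEND" />
            <data android:mimeType="image/jpeg" />
        </intent>

        <!-- 3. A content provider, declared by authority -->
        <provider android:authorities="com.example.provider" />
    </queries>
</manifest>
```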
We encourage data minimization by querying only for the packages you need to interact with. QUERY_ALL_PACKAGES or equivalently broad <intent> elements should only be used by apps that need this level of information. Our new Package Visibility policy introduces an approval process for the new QUERY_ALL_PACKAGES permission which controls access to the complete inventory of installed apps on a device.
Activity flags:
Most common use cases don’t require your app to have package visibility at all. For many scenarios, you can use startActivity() and catch an exception if there is no app that can open this intent.
While you can start any activity without visibility of the target, you can’t query for the availability of that activity before starting it or learn which specific app will be launched because it is an implicit intent. Instead, you will be notified when you start if it doesn’t resolve. If you want to be more selective about what opens, you can use flags.
A common example that uses flags is Custom Tabs, which allow a developer to customize how a browser looks and feels and have more control over the web content experience. Links will correctly open in non-browser apps if available, but flags help in advanced cases when developers want to be selective about handling the content in a native application before using custom tabs. In short, this flag helps a developer determine if there’s a native app to navigate to and from there they can handle it how they want.
FLAG_ACTIVITY_REQUIRE_NON_BROWSER
This flag only launches the intent if it resolves to a result that is not a browser. If no such result exists, an ActivityNotFoundException will be thrown and your app can then open the URL in a custom tab.
If an intent includes this flag, a call to startActivity() causes an ActivityNotFoundException to be thrown when the call would have launched a browser app directly or the call would have shown a disambiguation dialog to the user, where the only options are browser apps. To read more about flags, see Configuring package visibility based on use cases.
Customizing a share sheet
We recommend using the system share sheet instead of a custom one. You can customize the system share sheet without needing app visibility. Refer to this documentation for more information.
Debugging Package Visibility
You can easily check your manifest to see all queries included. In order to do this, go to your manifest file and choose Merged Manifest.
You can also enable log messages for package filtering to see how default visibility affects your app:
Next steps:
For more information on Package Visibility, check out these resources:
Happy coding!
Automation With Lambda function and CloudWatch Events

Lambdas are great. Lambda lets you run your code without having to worry about provisioning or managing servers. You pay only for the compute time you consume. With Lambda, you can run code for various types of applications or backend services, all with zero administration. You just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
TLDR; 🤷♂️
I created a simple workflow using AWS Lambda, CloudWatch Events, SSM Parameter Store and SNS that sends email notification when certain criteria is met on a predefined recurring schedule.
Let’s start 🚀
Recently, I was preparing for the AWS Developer Associate certification. As part of the preparation, I was reading white papers and articles, watching tutorial videos, and tinkering with the AWS console to get some hands-on experience. I have built a Chrome extension, and I monitor its download count and check for any reviews or support questions once in a while. I thought that instead of me visiting the site, it would be good if I could get notified whenever the download count changes. As a sample application, I decided to build a simple workflow using Lambda and CloudWatch Events that will send out an email notification whenever the download count changes. This is a very basic scenario, but using Lambda you can build complex applications.
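The core of that workflow (compare the current count with the last stored value, and notify on a change) can be sketched like this. The original post used .NET Core on AWS; everything below, including names and parameter shapes, is an illustrative TypeScript stand-in with the SSM and SNS clients injected as plain interfaces, not the author's implementation:

```typescript
// Injected stand-ins for SSM Parameter Store and SNS (illustrative only).
interface ParameterStore {
  get(name: string): Promise<string | undefined>;
  put(name: string, value: string): Promise<void>;
}
interface Notifier {
  publish(message: string): Promise<void>;
}

// Scheduled handler body: CloudWatch Events would invoke this on a cron.
// Returns true when the count changed and a notification was sent.
async function checkDownloadCount(
  current: number,
  store: ParameterStore,
  notifier: Notifier,
): Promise<boolean> {
  const last = Number((await store.get('extension/downloadCount')) ?? '0');
  if (current === last) return false; // nothing changed, no email
  await store.put('extension/downloadCount', String(current));
  await notifier.publish(`Download count changed: ${last} -> ${current}`);
  return true;
}

// In-memory fakes to exercise the logic locally.
const memory = new Map<string, string>();
const fakeStore: ParameterStore = {
  get: async (name) => memory.get(name),
  put: async (name, value) => { memory.set(name, value); },
};
const sent: string[] = [];
const fakeNotifier: Notifier = { publish: async (m) => { sent.push(m); } };

checkDownloadCount(120, fakeStore, fakeNotifier)
  .then((changed) => console.log(changed, sent[0]));
// prints: true Download count changed: 0 -> 120
```

Wiring this to a real SNS topic subscription is what turns the publish call into the email notification.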
The Promise
A Poem
Sky Collection Quote Prompt No. 25
Photo by Kendal on Unsplash
Pandemic tarried for seasons
Spanning borders unhindered
Short-circuiting social bonds
Chanting death songs all along
Charming faces masked within its waves
But grieves and breaches are now easing
Death’s trips in circles breaking point met
Would you mind
Starting your day wearing a smile
Those we lost will join in smiling
Looking as earth begets healing
Their prayers from beyond not sparing
Would you mind
Hugging hearts, holding hands
Times lost to gloomy days restore
Flowing freely within each vein
Doses of hope and a cure
8 Reusable Functions To Make Your Angular App Accessible
Accessibility plays a vital role in any Single Page Application. As Angular grows by leaps and bounds, supporting accessibility in an Angular application becomes quite complex unless we know all the accessibility concepts.
Here I am presenting different guidelines to implement accessibility in Angular applications.
Before starting with the implementation techniques, let's check the common rules to follow while implementing accessibility.
If you would like to learn about how to implement skip to main content in your application please check out this article.
Let’s Start our main discussion
Today we are planning to create a reusable directive that helps to maintain the accessibility in Angular Applications.
We will create eight different reusable functions to support accessibility in our application.
1. Assigning an ID to the controls
The id attribute specifies a unique id for an HTML element. The value of the id attribute must be unique within the HTML document.
We can create the function to add the id to the form controls automatically based on the form controls.
Let’s check the function.
this.renderer.setAttribute(
  this.hostElement.nativeElement,
  'id',
  this.control.name,
);
What are renderer and hostElement here?
Let’s understand in brief.
renderer: renderer is an instance of Renderer2, which allows manipulating DOM elements without direct access to the whole DOM. We can create an element with Renderer2 and change or add properties through it. Here we have used the renderer to add the id to the controls.
hostElement: hostElement is an instance of ElementRef, which is a wrapper around the native DOM element. It contains a nativeElement property which holds a reference to the DOM object. To manipulate the DOM we can use the nativeElement property.
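To see how renderer and hostElement interact outside a running Angular app, here is a minimal standalone sketch. The stub shapes below are assumptions made purely so the logic can run in isolation; they are not the real Renderer2 or ElementRef API surface:

```typescript
// Stub stand-ins for Angular's ElementRef and Renderer2 (illustrative only).
type FakeElement = { attributes: Record<string, string> };

class FakeRenderer {
  // Mirrors the shape of Renderer2.setAttribute(el, name, value).
  setAttribute(el: FakeElement, name: string, value: string): void {
    el.attributes[name] = value;
  }
}

// The directive logic from the article: give the host element an id
// derived from the form control's name, plus a matching aria-labelledby.
function applyIdAndLabel(
  renderer: FakeRenderer,
  hostElement: { nativeElement: FakeElement },
  controlName: string,
): void {
  renderer.setAttribute(hostElement.nativeElement, 'id', controlName);
  renderer.setAttribute(
    hostElement.nativeElement,
    'aria-labelledby',
    `${controlName}-label`,
  );
}

const host = { nativeElement: { attributes: {} as Record<string, string> } };
applyIdAndLabel(new FakeRenderer(), host, 'email');
console.log(host.nativeElement.attributes);
// { id: 'email', 'aria-labelledby': 'email-label' }
```

In the real directive the same two calls run with Angular's injected Renderer2 and ElementRef instead of the stubs.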
2. Setting the aria-labelledby tag to the controls
The aria-labelledby attribute establishes relationships between objects and their label(s), and its value should be one or more element IDs, which refer to elements that have the text needed for labeling. List multiple element IDs in a space-delimited fashion.
Let’s take a small example.
<div id="signup">Create User</div>
<div>
<div id="name">Name</div>
<input type="text" aria-labelledby="signup name"/>
</div>
<div>
<div id="address">Address</div>
<input type="text" aria-labelledby="signup address"/>
</div>
Here the aria-labelledby attribute takes multiple IDs to provide the reference to the child elements. This is very useful when the screen reader announces the label along with the parent form name.
Let’s check the function to add the aria-labelledby tag dynamically.
this.renderer.setAttribute(
  this.hostElement.nativeElement,
  'aria-labelledby',
  `${this.control.name}-label`,
);
3. Setting For Label to the controls
To understand this, let's check one of the answers given on Stack Overflow by Darin Dimitrov.
The for attribute is used in labels. It refers to the id of the element this label is associated with.
For example:
<label for="username">Username</label>
<input type="text" id="username" name="username" />
Now when the user clicks with the mouse on the username text the browser will automatically put the focus on the corresponding input field. This also works with other input elements such as <textarea> and <select> .
Let’s create a function to add for label with the id dynamically in the application.
const closestPrevSiblingLabel = $(this.hostElement.nativeElement)
  .prevAll('label:first')
  .get(0);

if (closestPrevSiblingLabel) {
  if (typeof this.control.name === 'string') {
    this.renderer.setAttribute(
      closestPrevSiblingLabel,
      'for',
      this.control.name,
    );
  }
  this.renderer.setAttribute(
    closestPrevSiblingLabel,
    'id',
    `${this.control.name}-label`,
  );
}
What is closestPrevSiblingLabel doing?
We are using the prevAll() method, which returns all previous sibling elements of the selected element, and we are selecting the first label element using get(0).
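The same lookup can be done without jQuery by walking previousElementSibling. Here is a minimal standalone sketch; the node shape below is a hand-rolled stand-in for a DOM element, used only so the traversal can run in isolation:

```typescript
// Minimal element stand-in: a tag name, a previous-sibling link, attributes.
type FakeNode = {
  tagName: string;
  previousElementSibling: FakeNode | null;
  attributes: Record<string, string>;
};

// Walk backwards from the input and return the closest preceding <label>,
// like $(el).prevAll('label:first').get(0) in the jQuery version above.
function closestPrevLabel(el: FakeNode): FakeNode | null {
  let node = el.previousElementSibling;
  while (node) {
    if (node.tagName === 'LABEL') return node;
    node = node.previousElementSibling;
  }
  return null;
}

const label: FakeNode = { tagName: 'LABEL', previousElementSibling: null, attributes: {} };
const span: FakeNode = { tagName: 'SPAN', previousElementSibling: label, attributes: {} };
const input: FakeNode = { tagName: 'INPUT', previousElementSibling: span, attributes: {} };

const found = closestPrevLabel(input);
if (found) {
  found.attributes['for'] = 'username'; // what the directive's setAttribute call does
}
console.log(found === label); // true
```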
4. Setting aria-required or aria-invalid tag to the controls
Let’s understand what is this aria-required and aria-invalid.
aria-required: The aria-required the attribute is used to indicate that user input is required on an element before a form can be submitted. This attribute can be used with any typical HTML form element; it is not limited to elements that have an ARIA role assigned.
aria-invalid: The aria-invalid the attribute is used to indicate that the value entered into an input field does not conform to the format expected by the application. This may include formats such as email addresses or telephone numbers. aria-invalid can also be used to indicate that a required field has not been filled in. The attribute should be programmatically set as a result of a validation process.
Let’s create a function which checks the control has the required field or not? If it has we will set the aria-required field as true. Same if the control is invalid then we will add the aria-invalid as true.
ValidatorUtils.hasRequiredField(this.control.control as AbstractControl)
  ? this.renderer.setAttribute(
      this.hostElement.nativeElement,
      'aria-required',
      `true`,
    )
  : this.renderer.removeAttribute(
      this.hostElement.nativeElement,
      'aria-required',
    );

// Set `aria-invalid` for screen readers.
if (this.control.touched) {
  this.control.invalid
    ? this.renderer.setAttribute(
        this.hostElement.nativeElement,
        'aria-invalid',
        `true`,
      )
    : this.renderer.removeAttribute(
        this.hostElement.nativeElement,
        'aria-invalid',
      );
}
Here we have added one utility function to check whether the control has the required validator:
static hasRequiredField(abstractControl: AbstractControl): boolean {
  if (abstractControl.validator) {
    const validator = abstractControl.validator({} as AbstractControl);
    if (validator && validator.required) {
      return true;
    }
  }
  // @ts-ignore
  // tslint:disable-next-line:no-string-literal
  if (abstractControl['controls']) {
    // @ts-ignore
    // tslint:disable-next-line:no-string-literal
    for (const controlName in abstractControl['controls']) {
      // @ts-ignore
      if (abstractControl['controls'][controlName]) {
        if (
          // @ts-ignore
          ValidatorUtils.hasRequiredField(
            abstractControl['controls'][controlName],
          )
        ) {
          return true;
        }
      }
    }
  }
  return false;
}
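Because hasRequiredField only reads the validator and the nested controls, its behavior can be exercised outside Angular with duck-typed stand-ins. A minimal sketch (the FakeControl shape below is an assumption for illustration, not Angular's real AbstractControl):

```typescript
// Duck-typed stand-in for AbstractControl: a validator function plus
// optional nested controls (as in a FormGroup). Illustrative shapes only.
type FakeControl = {
  validator?: (c: unknown) => { required?: boolean } | null;
  controls?: Record<string, FakeControl>;
};

// Same recursion as the utility above: true if this control, or any
// nested control, carries a required validator.
function hasRequiredField(control: FakeControl): boolean {
  if (control.validator) {
    const result = control.validator({});
    if (result && result.required) return true;
  }
  if (control.controls) {
    for (const name in control.controls) {
      if (hasRequiredField(control.controls[name])) return true;
    }
  }
  return false;
}

const requiredValidator = () => ({ required: true });
const group: FakeControl = {
  controls: {
    name: { validator: requiredValidator },
    nickname: {}, // no validator
  },
};

console.log(hasRequiredField(group)); // true (nested required control)
console.log(hasRequiredField({}));    // false
```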
5. Setting the aria-expanded tag to the dropdowns
Let’s understand what is an aria-expanded tag.
aria-expanded: Indicates whether the element, or another grouping element it controls, is currently expanded or collapsed.
Let’s create a function to add the aria-expanded tag to the controls.
this.renderer.setAttribute(
  this.hostElement.nativeElement,
  'aria-expanded',
  `${this.hostNgSelect.isOpen}`,
);
What is the hostNgSelect here?
hostNgSelect is a property of the NgSelectComponent which provides all the properties of the ng-select. Here we have used it to set the aria-expanded tag on the control.
6. Supporting other dropdown features for accessibility
Let’s understand some of the tags before jumping to the functions.
role: The role attribute describes the role of an element in programs that can make use of it, such as screen readers or magnifiers.
combobox: A combobox is used for building forms in HTML, in which users are able to select an option from a drop-down list.
aria-autocomplete: Indicates whether inputting text could trigger the display of one or more predictions of the user’s intended value for input and specifies how predictions would be presented if they are made.
tabindex: The tabindex global attribute indicates that its element can be focused, and where it participates in sequential keyboard navigation (usually with the Tab key, hence the name).
Let’s create the functions to add these tags dynamically.
this.renderer.setAttribute(
  this.hostElement.nativeElement,
  'role',
  'combobox',
);
this.renderer.setAttribute(
  this.hostElement.nativeElement,
  'tabindex',
  '0',
);
this.renderer.setAttribute(
  this.hostElement.nativeElement,
  'aria-autocomplete',
  'none',
);
7. Trim and ignore whitespace in the controls.
Unnecessary whitespace sometimes causes issues when we are sending data to backend services. We can remove the whitespace from the controls directly using the following function:
if (
  !this.ignoreWhitespaceTrim &&
  this.isTruthy(this.control.value) &&
  this.isString(this.control.value)
) {
  this.control.control?.setValue((this.control.value as string).trim(), {
    emitEvent: false,
  });
}
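The guard logic (only trim truthy string values, and only when trimming is not explicitly disabled) can be extracted into a standalone function. A minimal sketch with illustrative names, not the directive's actual code:

```typescript
// Return the value the control should hold, mirroring the checks above:
// skip trimming when opted out, when the value is falsy, or not a string.
function normalizeControlValue(value: unknown, ignoreWhitespaceTrim = false): unknown {
  const isTruthy = value !== null && value !== undefined && value !== '';
  const isString = typeof value === 'string';
  if (!ignoreWhitespaceTrim && isTruthy && isString) {
    return (value as string).trim();
  }
  return value;
}

console.log(normalizeControlValue('  hello  '));       // "hello"
console.log(normalizeControlValue('  hello  ', true)); // "  hello  " (trim skipped)
console.log(normalizeControlValue(42));                // 42 (not a string)
```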
8. Disable password Managers
The data-lpignore attribute is used to tell password managers (such as LastPass) to ignore the input.
Let’s create a function to add this attribute.
this.renderer.setAttribute(
this.hostElement.nativeElement,
'data-lpignore',
'true',
);
Yeah! We have completed all the functions. Let's put them together to create a reusable directive to support accessibility in the Angular application.
References:
Are you preparing for interviews? Here are frequently asked interview questions in Angular. It covers the latest interview questions for Angular and frontend development. Let's check how many of these questions you can answer.
S21 will be getting the most efficient till now from Exynos

Exynos 2100 will be revealed soon, but testing is going on and the results are staggering: the Exynos 2100 has been found to be the most efficient processor yet, and it was tested with the S21, which is going to be out soon, in January…
According to ice Universe, the upcoming Exynos 2100 chipset maintained 78% battery life in the same battery life test where the Exynos 990 dropped to 55%. The Exynos 2100 chipset was tested in the Galaxy S21 Ultra. Both the S20 Ultra and the S21 Ultra have the same 5,000 mAh battery capacity. With that result, it is quite an achievement for Samsung, which has made a great new chipset once again.
The results suggest that the Exynos variant of the S21 will be more battery efficient than the Snapdragon one, but it is still very early to judge just from the in-factory tests; real-world tests have to be done for more accurate results… stay tuned for that.
No, We’re Not All Capitalists | Photo by Skitterphoto from Pexels
“We’re all capitalists.” It’s a phrase that gets thrown out to knock down critiques of capitalism. Along with it comes remarks about where people who think capitalism is flawed buy their groceries, or their clothes, or the smart phone they’re typing their tweet on. We all participate in capitalism. It’s the water we’re all swimming in. That’s not the same as being a capitalist, though. The level of discourse on this has to be raised because of how quickly the world is changing.
Something I’ve been writing about more is climate change. I’d steered clear of the topic for a while, because I wasn’t sure how to wrap my arms around the magnitude of it and share my thoughts in a meaningful way. Nevertheless, it crept into some of my more popular Medium essays, and I’ve committed to writing about it more. Climate change turns every discussion about economics (which at its root is about resources and their distribution) into a discussion about saving the planet. Environmentalism isn’t just for the granola-eating hippies or something you grow out of after university. We’re talking about the fate of the planet. This is literally about our survival. Individuals can’t fix this. Governments have to. And the scale of international cooperation needed is like nothing we’ve ever seen before. The required level of urgency can’t be achieved, though, because to address climate change is to reject capitalism. That conversation isn’t happening at the level it needs to for several reasons. Of course, monied interests are doing everything in their power to stop it, but it’s deeper than cynicism. This is about belief.
Unfettered growth. Ever-increasing consumption. It’s what we’re being told is necessary, but it’s obvious that’s not possible in a world of finite resources. We’re cancer patients being told the spreading tumors are essential for our well-being. We’re being told the markets will fix things. That’s the equivalent of faith healing. We are dealing with a secular religion — powerful, cult-like thinking. It’s why I don’t think economics as a discipline is equipped to deal with discussions of climate change in a meaningful way, because it can’t divorce itself from capitalism. As it’s taught and practised, capitalism is the default, the starting point. It’s baked into the cake. This means certain questions that should be mainstream don’t ever really get discussed. Questions like: Should this particular resource even be consumed? These basic, fundamental ethical questions aren’t even being jettisoned, because they were never taken on board in the first place. The assumption is that as long as there is someone willing to buy whatever comes out on the other end, the answer is “Yes. Consume. Produce. Sell.” Capitalism skips too many steps in the analysis. Markets are short cuts. And short cuts draw blood.
I understand the fear that rises up when capitalism is questioned. The propaganda for it and against anything challenging it has been unrelenting and effective. It’s deeper than what people believe or don’t believe, though. In a real way, being a “capitalist” is an integral part of many people’s identities. So much so that the slightest challenge to it, any question that it isn’t “the best” is taken as a personal attack. It may even go so far as to pierce people’s psyches and sense of self. “Winning” under capitalism is what determines people’s worth. Our egos and self-esteem are tied up in it. Now that the Golden Age of capitalism is winding up, and we’re in a new Gilded Age, there are too many losers. It’s why there’s so much striver and “hustle!” porn all over social media. It’s propping up the belief that a collapsing society is the fault of the people not being served by it. The winners — the people hoovering up all the wealth — have made this about willpower and determination for the people trying to climb the ladder. Not being paid enough to live on makes someone a “loser” not a victim of wage theft. Deep, systemic failures have been shifted onto the backs of individuals, many of whom can barely make ends meet. It’s grotesque.
Being a “capitalist” isn’t about owning capital so much as it is subscribing to an ideology. The less capital you own, the more important the ideology is. Billionaires are all on the dole, taking handouts, tax breaks, and reimbursements left, right, and center. And they don’t just take them. They demand them. They bribe. They threaten. They twist arms. They get governments to bail them out when they lose. While the “winners” are forever being granted concessions and shown mercy, the “losers” are being told to try harder, that their “failure” makes them deserving of their suffering. They’re told that a Calvinistic work ethic and all the self-effacement that comes along with it will make their lives worthwhile. Meanwhile, they can’t afford their medication.
I’ve had “capitalist” in quotes, because many of the staunchest defenders of capitalism don’t own any capital and never will. They’ve all been told they’re members of a club that doesn’t allow them entry. Maintaining that level of cognitive dissonance requires magical, almost delusional, thinking. So, no, we’re not all capitalists. We’ve been propagandized into believing we are. I don’t know what it will take to break the spell.
Originally published on my Patreon. | https://kitanyaharrison.medium.com/no-were-not-all-capitalists-cd19de88435d | ['Kitanya Harrison'] | 2019-09-03 04:42:38.365000+00:00 | ['Economics', 'Capitalism', 'Economy', 'Climate Change', 'Money'] |
How to Take the Perfect Nap | How to Take the Perfect Nap
10 science-backed tips for more productive shut-eye
The Elemental Guide to Napping is a three-part special report. Read about the science of napping and why napping should be more egalitarian.
My life to this point is marked off in two epochs: Before Nap and After Nap. From birth until about age 30, I had no patience for napping; naps left me groggy, hungry, cold, and disoriented, or feeling as if I was missing out on something much more interesting in the world. And that’s if I could fall asleep at all, which was almost never.
Seven years ago, that all changed when I moved to Spain on a Fulbright fellowship to research a book on the history of the siesta (yeah, I didn’t know the government gave out money for that either). I would spend my mornings working in the archives and go home around 2 p.m. to cook whatever lunch I could afford on my stipend, then crawl into bed for the next phase of my “research.” For the first few days, I just lay there, eyes wide open and thoughts racing. Day after day, I worked at it, until I finally achieved that first perfect nap.
After the perfect nap, I’m not entirely sure I’ve been asleep at all. I drift off without noticing and wake up fresh, ready to start the second part of my day. Over the years I’ve honed it to a fine art and become attuned to my body’s natural rhythms. Anticipating when the tired feeling will hit, I try to be someplace where taking a break is possible — if not at home, then maybe in my car or at a park. I took my nap habit with me when I left Spain, and it’s been my secret weapon against burnout and exhaustion ever since. Here are a few of the best napping tips I’ve learned along the way.
Timing is everything
The desire to sleep corresponds to changes in body and brain temperature that run on a roughly 24-hour schedule, called a circadian rhythm. Everybody, no matter if they live in a warm or cold climate or if they’ve eaten a big meal, experiences these subtle changes at bedtime and, to a lesser extent, in the afternoon — usually around six to eight hours after waking. For most people, “prime napping time falls between 1 and 3 p.m.,” writes Sara Mednick, a leading voice in nap research and author of Take a Nap, Change Your Life! Plan your nap for the time when your body is naturally sleepier and you’re more likely to fall asleep.
Know your sleep stages
Different phases of sleep confer different benefits on the brain and body, so you can actually hack your nap by adjusting when you nap and for how long. According to Mednick, the first 20 minutes of your nap are spent in Stage 2 sleep, which provides energy and alertness. Stay asleep longer and you’ll enter slow-wave sleep (SWS), which is when the brain begins to process memories and information, and then rapid eye movement (REM), the creativity-boosting dream phase. If you fall asleep during your prime napping zone and stay asleep for 90 minutes — what Mednick calls “the perfect nap” — you’ll get one full sleep cycle, complete with an optimally balanced dose of all three phases.
Not all naps are created equal, though. “As a rule of thumb, you can count on naps earlier in the day to be richer in REM, while late-afternoon naps tend to be higher in SWS,” Mednick writes. If you’re interested in dreams or are working on a creative project, you might prefer a REM-soaked late-morning nap for the creativity boost it can bring; if you’re physically exhausted all the time, opt for a long afternoon nap rich in rejuvenating slow-wave sleep.
If you wake up groggy, you may be sleeping too long
That disoriented feeling I used to suffer from is known as sleep inertia, and it happens when you wake up during slow-wave sleep, the phase that comes after the energy-boosting Stage 2 sleep. If this happens to you, try waking up a few minutes earlier and see if you feel more refreshed.
The perfect nap lasts around 20 minutes (unless it doesn’t)
Though Mednick calls the 90-minute nap “a clear blue-ribbon winner,” the National Sleep Foundation recommends a snooze lasting 20 to 30 minutes. That’s long enough to grab a dose of that energizing Stage 2 sleep, without the risk of being plunged into the slow-wave sleep that can make you groggy. There seems to be a general consensus that a nap of precisely 26 minutes is best: That’s based on a famous 1994 NASA study that found that long-haul pilots who napped for 25.8 minutes were 54% more alert than their nonnapping counterparts and performed 34% better on certain tasks. I usually set my alarm for around 30 minutes, to give myself a few extra minutes to drift off.
Don’t nap too late in the day
Improperly timed naps can interfere with your nighttime sleep, experts say. Don’t sleep too long or too late in the day, especially if you have trouble falling asleep at night.
Try a caffeine nap
In my pre-nap days, I would fight off the afternoon slump with a Starbucks instead of a nap. But you can have it both ways. Since caffeine takes about 20 minutes to kick in — almost exactly the recommended nap length — down your latte just before lying down. The caffeine will act as a natural alarm, waking you up refreshed and ready to focus on the next activity. A 2003 Japanese study found that caffeine naps were more effective at combating daytime sleepiness than noncaffeine naps.
Clear your mind
For many of us, the main barrier to falling asleep at nap time is an overactive mind. Especially if you’re not in the habit, “Nap Bishop” Tricia Hersey recommends journaling before you lie down, to process whatever is nagging you. Or try a guided meditation like yoga nidra, also known as yogic sleep, to relieve stress and give your brain a break.
Carry a nap kit
During my pre-nap days, I could find a million excuses for not taking a siesta: The room was too bright or the traffic outside was too loud. Then some co-workers gave me an airplane nap kit, complete with sleep mask, neck pillow, and earplugs, enabling me to create the right conditions for sleep almost anywhere. Now I don’t go anywhere without it.
Practice makes perfect
You can train yourself to become better at napping. Regular nappers like Hersey report that it gets easier and more fun the more you do it — like riding a bike, but horizontal. Once your brain and body get in the habit, you’ll learn to drift off quickly and even wake up at the perfect time without an alarm. “Take your time and don’t guilt or pressure yourself” if you can’t fall asleep right away, Hersey says. “Just slowing down alone is a big pushback against grind culture and burnout culture.” Even if you can’t fall asleep, just lying down can have a positive effect: Science has found “nonsleep dozing” to be effective at reducing sleepiness among drowsy drivers.
Invest in stuff you love
For Hersey and for me, working the nap beat unleashed an obsession for all things sleep-related. I’ve spent hours researching linen duvets, memory-foam pillows, and silk pajamas that I hope one day to be able to afford, and Hersey rejoiced when she found the perfect fleece blankets for her Nap Ministry. Whatever gets you excited about crawling into bed will make nap time that much more appealing. “But even if you don’t have that stuff, wherever you are, you can embody rest,” Hersey says. “If you’re sitting on a couch, on a park bench, on an airplane, it’s about your mentality around it and getting into a routine. Wherever your body is, it’s the site of liberation.” | https://elemental.medium.com/how-to-take-the-perfect-nap-397ee26a64c7 | ['Maya Kroth'] | 2019-08-19 15:35:43.074000+00:00 | ['Rest', 'Sleep', 'Napping', 'Guide To Napping', 'Body'] |
New Course | After 3 years at Enormo, I have started to study an MSc in Economics. It will last 2 years. During these years I will focus on studying, so I will not work at the same time.
Why this change? After learning how the monetary system works, I realized that I didn’t really know what money is! Besides, the crisis made me see that the economic system is very important, but also far from perfect.
As a person interested in systems in general, I see the economic system as one of the most complex and important in the world. | https://medium.com/iv%C3%A1n-de-prado-alonso/new-course-6a608c9fc66d | ['Iván De Prado'] | 2018-04-10 11:45:34.474000+00:00 | ['Masters', 'Computers', 'Economics']
Why e-wallet wins over debit/credit cards? | A ‘wallet’ is typically known as a purse to keep cash, cards, etc. An ‘e-wallet’ has replaced the wallet, moving it from its physical form to an electronic one. An e-wallet needs to be linked to an individual’s bank account to make payments. Today, the spread of e-wallets in India is such that one can make payments for various purposes: from DTH bills to phone bills, from street-side vendors to cab drivers or cab companies (like Ola, Uber). E-wallets have taken the world to the next level of development and technology. The reach of e-wallets is huge: they are fast, accessible, safe, easy to use, and linked to almost every payment platform, making our lives much easier. Debit and credit cards, by contrast, are directly linked to our bank accounts and are used to make payments either by swiping or in ATMs to withdraw cash. They use a secure method, asking for the PIN, CVV, or an OTP at the time of any transaction.
The biggest push for mobile wallets came when the RBI insisted on a two-step authentication process for all card transactions done for cab services. This meant that cashless payments, pushed as one of the greatest conveniences of these cab services, were suddenly rendered useless. Operators like Uber, Ola, and others then switched to mobile wallets to let users carry out cashless payments. The biggest segment of mobile wallet users is the one that avails such cabs, and hence the segment has seen rapid growth.
Although around globally for more than five decades, credit cards in India are still at a nascent stage. There are about 21.2 million credit cards in circulation, with HDFC Bank having the maximum number of cardholders as of April 2015.
Unlike a physical wallet, where credit cards and cash are bundled in a single location, a mobile wallet contains extra layers of security that protect your electronic transactions. Digitally storing your financial details also helps in case you misplace your wallet. Still, consumers face the risk that their sensitive information will fall into the wrong hands, making them more susceptible to credit card fraud. While more than a third (37%) of those who experienced fraud with an organization would stop shopping there, 26% would not and 37% are not sure. Part of the challenge is deploying totally secure technology for mobile and online shopping. The other part is gaining the confidence of consumers that their information will remain secure. | https://medium.com/dalla/why-e-wallet-wins-over-debit-credit-cards-62f799d35e93 | [] | 2018-08-03 10:39:26.028000+00:00 | ['Debit Card', 'Cash', 'Wallet', 'E Wallet', 'Fintech']
Arguing with Edward Snowden | Arguing with Edward Snowden
A Data Scientist’s take on defending Machine Learning models
Introduction
I’ve recently read Edward Snowden’s Permanent Record during my holiday. I think it is a great book that I highly recommend for basically anyone; however, it is particularly interesting for IT folks for the obvious reasons. It is a great story about a guy who grew up together with the internet, started to serve his country in a patriotic fervour after 9/11, and became a whistleblower when he noticed the US had gone too far in violating privacy in the name of security. Moreover, a paradox I found most interesting is something a Data Scientist can easily relate to.
The systems that collect data about one’s browsing on the internet (basically anything you do on the internet) were an engineering masterpiece. They surely did something the NSA has no mandate for, but when building something brilliant, it can be easy to miss the big picture and help malignant actors by handing over great tools to them.
Think about this in terms of machine learning! I am quite sure — although I cannot know — that the Chinese Social Credit System’s mass surveillance network makes use of some state-of-the-art Deep Learning concepts. They may even do some things more brilliantly than publicly available research would suggest. But the system this is used for at best raises some super serious questions about individual rights. Being IT professionals, we cannot miss the big picture and have to be mindful of the consequences of our work!
When discussing the threats of massive private data collection done by governments, Mr Snowden makes some controversial statements about data science as well. Firstly, he incorrectly suggests that machine learning models in general are total black boxes, and their decisions cannot be explained afterwards — thereby making a point that algorithms make obscure decisions that people should make in a transparent way. Secondly, he states that recommendations are just ways to put pressure on an individual to buy popular products. I aim to argue against both of these statements.
Model explainability
There is an example in the book about COMPAS, a widely used risk-assessment algorithm in the judicial system of the USA. In this case, the point is that an algorithm made a decision with a substantial effect on someone’s life — and neither we nor the algorithm can even explain why. I think this is an inherently wrong and ill-disposed point of view.
There are models which are explainable by nature, which is one of the main reasons practitioners use them. Think about linear regression for a regression problem: the product of a feature’s value and the corresponding beta gives you the amount this feature contributed to the prediction of the target value. In a classification problem, the widely used logistic regression behaves almost the same way, since it is built on the same linear combination of features.
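To make that additive decomposition concrete, here is a toy sketch (the data and coefficients below are invented for illustration, not taken from any real model):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: the target is a known linear function of two features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

# For one observation, each feature's contribution to the prediction
# is simply its value times the learned coefficient (beta)
x = X[0]
contributions = x * model.coef_
prediction = contributions.sum() + model.intercept_

# The sum of contributions plus the intercept reproduces the prediction
print(contributions)
print(np.isclose(prediction, model.predict(x.reshape(1, -1))[0]))
```

This per-feature breakdown is exactly why linear models are the go-to choice when every single prediction has to be justified.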
A decision tree produces the exact series of decisions the algorithm learned to be useful for determining the target value. However, bagging and boosting algorithms make use of numerous trees built simultaneously or sequentially (Random Forest, XGBoost, Extremely Randomized Trees, LightGBM, etc.). These voting trees, or the high-dimensional data in the case of Support Vector Machines, are harder to interpret or visualize concisely. Moreover, in a deep neural network millions of matrix multiplications take place; keeping track of them sounds intimidating, of course.
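For a single tree, that series of decisions can even be printed out verbatim. A small sketch using scikit-learn’s bundled iris dataset (chosen purely for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Dump the fitted tree's learned rules as human-readable if/else decisions
rules = export_text(tree, feature_names=list(iris.feature_names))
print(rules)
```

The output is the full decision path: which feature is compared against which threshold at every split, and which class each leaf predicts.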
However, there are several methods that make these algorithms more transparent. This article shows how deep neural networks can be made more interpretable in breast cancer research. Another great article elaborates on different model explainability techniques: permutation importance, partial dependence plots, and SHAP values. There is room for improvement in non-technical, human-readable interpretations of complex machine learning models, but there are techniques to explain why a model produced a given output.
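For instance, permutation importance — one of the techniques mentioned above — takes only a few lines with scikit-learn. The idea: shuffle one feature at a time and measure how much the model’s score drops. (Toy data again, purely illustrative: feature 0 carries all the signal, feature 1 is pure noise.)

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)  # the label depends only on feature 0

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling feature 0 should wreck the score; shuffling feature 1 should not
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```

The same call works for any fitted estimator, which is what makes it attractive for "black box" ensembles and neural networks alike.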
Consequently, if a model is not explained well, it is almost certainly due to an omission or failure of a human actor. On top of that, algorithms being biased in terms of socio-economic factors is an accusation appearing increasingly often. It is important to note again that this is a failure of the modeler, not the model. The data these models are trained on is reported to contain bias; accounting for that is a challenging task we as modelers surely have to overcome. Luckily, the theoretical foundations and tools are there to assess the “fairness” of algorithms, for example between two racial groups.
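One of the simplest such checks, demographic parity, just compares positive-prediction rates between groups. A minimal sketch with made-up predictions and a made-up binary group attribute:

```python
import numpy as np

# Hypothetical model predictions (1 = positive outcome) and group labels
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Positive-prediction rate per group; a large gap suggests disparate impact
rate_a = y_pred[group == 0].mean()
rate_b = y_pred[group == 1].mean()
disparity = abs(rate_a - rate_b)
print(rate_a, rate_b, disparity)  # roughly 0.6, 0.4, 0.2 for these numbers
```

Real fairness audits go well beyond this single metric (equalized odds, calibration, and so on), but even this check turns the question from rhetorical into quantitative.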
Advertisements and recommendations
A second argument I did not like in the book was about recommendations in general. The author states that recommendations are just about softly pressuring the customer into buying what others have bought. I think this argument misses the real point here.
There are advertisements everywhere. I would certainly agree that advertisements — apart from conveying information about a product — are means of putting some pressure on the subjects to buy a product. Nonetheless advertisements are natural and necessary in a market-driven economy, and in a world packed with so many products and services.
But if we accept the premise that some sort of advertising is going to exist, which one would you prefer? The one with no personalization whatsoever, or one where the advertisers’ goal of making you buy happens to mean you get more relevant advertisements, about products you may actually need? I’d prefer the latter. A sophisticated recommender system takes your history into account, along with the history of other people whose records are similar to yours. If done right, recommendations are much more than just popular-product suggestions.
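A minimal sketch of what “taking your history into account” means in user-based collaborative filtering; the rating matrix below is invented for illustration:

```python
import numpy as np

# Tiny user-item rating matrix (rows: users, columns: items; 0 = unrated)
R = np.array([
    [5.0, 4.0, 0.0, 1.0],  # user 0
    [4.0, 5.0, 1.0, 0.0],  # user 1: whose history should we learn from?
    [1.0, 0.0, 5.0, 4.0],  # user 2
])

def cosine_sim(a, b):
    # Cosine similarity between two rating vectors
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# User 1's taste is far closer to user 0's than to user 2's, so
# recommendations for user 1 lean on user 0's ratings rather than
# on whatever is merely popular overall
sim_to_0 = cosine_sim(R[1], R[0])
sim_to_2 = cosine_sim(R[1], R[2])
print(sim_to_0, sim_to_2)
```

Production systems use far richer models (matrix factorization, deep learning), but the principle is the same: similarity of histories, not raw popularity, drives the recommendation.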
Conclusions
In general, I really liked the book. I also admire the bravery of Mr. Snowden that started a discussion about privacy, and the trade-off between privacy invasion and crime prevention. But I also think that the book expresses a negative attitude towards everything in connection with using large amounts of data. Opposing this, I believe that statistical models built on top of massive datasets can greatly benefit humanity — if used for the right purposes, transparently and responsibly. | https://medium.com/starschema-blog/arguing-with-edward-snowden-2e21d553c056 | ['Mor Kapronczay'] | 2020-03-12 14:42:16.869000+00:00 | ['Machine Learning', 'Recommendation System', 'Data Science', 'Edward Snowden'] |
Correct Place to run your Database Migrations in Kubernetes!! | What are Database Migrations??
In a code-first approach to development, we keep all our database changes in the form of code; these scripts are called database migrations. They help us roll back changes rather “easily”, since every script that has run will always have its footprint logged in the code. I just came up with this definition; I don’t know if there’s an official one.
The Problem
Migrations are required to run just once, before the code deployment, so that code changes work properly.
Where to run them??
Database migrations vary in complexity: they can range from small scripts that just add a column to scripts that update millions of rows using custom logic. And often, these migrations are needed for the code shipped in the same build to work. So our migrations must run before the code deployment.
initContainers
What is it?
Specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Let’s discuss some pros and cons of this approach.
Pros:
Easy to set up: just write a few lines of configuration, like below
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-mydb
    image: busybox:1.28
    command: <command to run your migration>
2. Due to the way it works, it does not need an extra stage in the CI/CD pipeline.
Cons:
It is not meant for long-running scripts, as Kubernetes poses certain time limitations within which the init container has to complete. If it fails to do so, kubelet will restart the script again and again, and your actual container will never start. Basically, you are stuck.
Each init container must exit successfully before the next container starts. If a container fails to start due to the runtime or exits with failure, it is retried according to the Pod restartPolicy . However, if the Pod restartPolicy is set to Always, the init containers use restartPolicy OnFailure.
2. Pod scaling will take longer, as each new pod has to connect to the database and run the script.
3. It will run on each pod restart, which is a waste of resources.
Kubernetes Jobs
What is it??
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created.
Kubernetes Jobs address all of the cons that initContainers have. Pod scaling is faster, and you can run long-running scripts. Although it takes an extra stage in CI/CD and a somewhat more involved setup than initContainers, it decouples your database migrations from code deployment. Now other builds can be deployed while a long-running migration is executing, which was not the case previously.
Example:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: <database migration command>
      restartPolicy: Never
  backoffLimit: 4
and the above can be applied using:
kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml
If you are using Helm and want to reuse the DB secrets and pull secrets that you already have for your deployment, then you can use Helm to deploy your Kubernetes Jobs as well, using:
helm upgrade --install --wait --namespace ${K8_NAMESPACE} ${HELM_OPTS} ${HELM_RELEASE_NAME} ${HELM_PATH}
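If you want the migration Job to run automatically before each release instead of as a separate pipeline stage, Helm's hook mechanism can handle the ordering. A sketch only: the annotations are standard Helm hook features, but the job name, image, and command below are placeholders you would replace with your own:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration          # placeholder name
  annotations:
    # Run this Job before the release's other resources are applied
    "helm.sh/hook": pre-install,pre-upgrade
    # Delete the previous hook Job before creating the next one
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: db-migration
        image: <your app image>
        command: <database migration command>
      restartPolicy: Never
```

Helm waits for hook Jobs to finish before continuing with the release, so a failed migration blocks the deployment, which is usually exactly what you want.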
Conclusion
Running migrations using Kubernetes Jobs has worked really well for us. It provides the flexibility to execute long-running database scripts without blocking your CI/CD pipeline.
Why We Should Stop Random Drug Testing | The Doctor Weighs In | By: Thomas G. Kimball, PhD
Random drug testing inflicts psychological harm on people with addiction and impedes their recovery. Here are three steps we can take to stop this practice.
There are many problems with how we approach the disease of addiction. A particularly troubling one from a medical perspective is the practice of random drug testing. This method of monitoring tends to treat many addiction sufferers punitively, instead of effectively addressing the underlying disease of addiction.
If we want to change the course of the addiction crisis in America changing the way we conduct drug testing should be an aspect we carefully consider. There are methods we can apply to substance use disorder (SUD) recovery, whether it coincides with an actual criminal offense or not, that would do away with the punitive approaches that are now ubiquitous in the treatment industry. Moreover, implementing more data-driven positive reinforcement methods would help reduce the stigma which is so damaging and hinders better treatment outcomes.
First step: Eliminate ‘Random’ Drug Testing
A healthy start to a transition away from punitive practices would be ending “random” drug testing and replacing it with planned and regular drug testing. Planned and regular drug testing fits within a strength-based clinical approach to treating the disease of addiction.
All other chronic diseases, like cancer or diabetes, have some form of ongoing, deliberate, and consistent testing in order to manage the condition. If we approached drug testing in the same way, it would allow us to gather better data, help to normalize the SUD diagnosis, and create a trustworthy standard across the treatment spectrum for patients, their families, treatment providers, and officials.
The military is saving lives with consistent drug testing
This idea of “random” drug testing being counterproductive is not actually new by any means. This makes our current system seem even more archaic and outdated.
The United States military replaced “random” drug testing with what has been termed “consistent drug testing” almost a decade ago. This method has been used with incredibly effective results to treat certain service members suffering from SUD.
Dr. Kevin MacCauley, who started the Institute for Addiction Study, was first exposed to the military’s approach to drug testing and recovery from SUD while serving as a naval flight surgeon for airborne divisions of the Marine Corps. In this role, he witnessed many pilots self-report their addiction, get necessary medical treatment, and be returned to flying status under monitoring. As he puts it:
“These were charismatic and otherwise highly-capable, self-disciplined pilots who did come forward and ask for help — and they all got better and went back to flying! That just destroyed the prejudice I had picked up in medical school that addicts never ask for help and once an addict, always an addict.”
The willingness of these service members to be so forward about their addiction struggles was due, in large part, to the Navy’s policy of treating SUD primarily as a safety issue rather than a moral or criminal issue. Their treatment outcome numbers far exceed those of the addiction treatment industry. So perhaps at the civilian level, we should adopt at least some of those measures to more effectively combat the addiction crisis in America.
Drug testing has roots in the military
Drug testing as a common practice in America, to some degree, finds its roots in the military. This is interesting given that the military is also leading a positive reform to the practice they introduced. After the Vietnam War, the military had to figure out how to deal with the plethora of veterans that came back home addicted to heroin.
This issue created the initial practice of monitoring recovering veterans through random drug testing. Unfortunately, the parts of civilian society which adopted this seemingly logical solution to monitoring substance abuse did so without the same infrastructure or goals of the American military.
The psychological harm of random testing
Drug testing, by and large, was adopted by civilian society as a marker for punitive action. This is true in the justice system, the workplace, and other areas of society. Because of this, the first exposure that many individuals have to a SUD diagnosis is as a result of criminal charges or a punitive measure on behalf of employers.
This has created a system in the addiction treatment continuum that exacerbates the punitive aspects of treatment and monitoring, instead of focusing on the disease, its symptoms, and the legal and behavioral consequences that led to trouble in the first place. This creates a sort of endless cycle of negative reinforcement surrounding a SUD diagnosis.
Often, people under this type of stress and threat seek to hide the initial onset of the problem and their progressive suffering over time. The potential shame, embarrassment, and devastating effects of losing employment or going to jail actually keep the addiction in the dark, where it grows and becomes worse over time.
Random drug testing and the punitive actions that follow create a culture of secrecy and shame that keep people from reaching out for meaningful help. An entire industry has developed in support of hiding drug use and people spend significant resources in buying products to hide their use.
Because of this culture, some SUD individuals entering treatment, either by choice or as a legal repercussion, directly associate any type of ongoing substance use monitoring with punitive measures.
In addition, many times people in recovery are under threat of legal, financial, or other repercussions if they do relapse. This low- or no-threshold approach to relapse in recovery is one of the worst ways to approach treatment for any type of condition with a mental component, especially a chronic disorder like addiction that has generally been shaped in part by past negative social determinants.
Beyond those who are introduced to their SUD diagnosis through legal trouble, even for those who come to treatment at the urging of family, friends, or professional environments, the idea of “random” drug testing inherently creates a negative consciousness. This is no surprise given the social image that has been created around drug testing. This culture of testing deters people from entering treatment earlier or being forthcoming about substance issues they may have, because the system is built around punitive and psychologically discouraging measures.
Second Step: Make the transition to consistent drug testing
Ongoing drug testing and extended recovery support can be approached in a more clinical manner through frequent and deliberate testing. This would reduce some of the negative aspects associated with our current system.
Instead of random drug testing, an individual in recovery would participate in consistent drug testing. This would be administered on an ongoing scheduled basis, and they would know the exact schedule on which they would be tested.
This is a more effective approach for multiple reasons, including:
This would remove the individual stress of random drug testing. The lack of this stress normally results in more willingness from the individual to actively engage in their recovery. And it helps to normalize the SUD diagnosis for the patient. Regular testing could also improve the trust that family members and peers have in the individual in SUD recovery as they progress through their treatment.
Some might criticize this approach by saying if an individual in recovery knows exactly when they will be tested, then they are more likely to “cheat” on the test or resort to quick detox methods. However, the available data from this type of drug testing seems to show that the opposite is true. The Institute for Addiction Study conducted trials utilizing almost this exact type of approach and have shown more positive impacts on addiction recovery outcomes as a result.
Regardless, our current testing methods do not display outcomes data that support continuing to pursue those same methods if our goal is indeed to improve recovery. Any transition can bring with it unexpected bumps in the road. This would be countered by observing longitudinal data and adjusting testing methods over time.
Third Step: Observe data, make adjustments, educate society at large
Any responsible method of treatment is created and maintained through a foundation of positive longitudinal outcomes data. So, once we replace random drug testing with consistent drug testing, there need to be systems in place to monitor the outcomes data of those involved in such drug testing.
With any other disease that health care providers have eradicated or improved outcomes for, there has been an adjustment period for treatment methods that led to more positive outcomes. As of now, random drug testing is the primary monitoring option that we utilize, and the results of this method are not good. Consistent drug testing needs to be implemented on a larger scale so that we can test the efficacy of this method and the positive benefits it could hold in our efforts to combat the addiction crisis that is currently taking so many American lives.
In addition to implementing consistent drug testing on a larger scale, we also should utilize the data we already have from military treatment to educate the general public about the positive benefits of treating addiction as a chronic disease and a public safety issue, not a moral failing. This would help destigmatize the disease of addiction further and help those who suffer silently in active addiction to be more willing to come forward and receive treatment.
Related Content: 8 Drug-Seeking Behaviors That Might Signal Addiction
Additional Content by Dr. Kimball:
Why We Can’t Punish Our Way Out of the Addiction Epidemic
***
**Love our content? Want more stories about Drug Testing, Addiction, and Recovery Treatments? SIGN UP FOR OUR WEEKLY NEWSLETTER HERE**
***
Thomas G. Kimball, Ph.D., LMFT, is the George C. Miller Family Regents Professor at Texas Tech University and the Director of the Center for Collegiate Recovery Communities. Dr. Kimball has been part of the MAP team since 2012 and serves as Clinical Director, where he oversees and consults on the implementation of extended recovery modalities, techniques, and practices on individuals who undergo treatment for Substance Use Disorder (SUD).
He has received numerous teaching awards for his courses on families, addiction, & recovery. He is the author of several peer-reviewed articles on addiction and recovery in respected medical journals, a frequent contributor to leading addiction and recovery publications online, and co-authored the book, Six Essentials to Achieve Lasting Recovery, by Hazelden Press.
In addition to consulting and presenting on recovery-related issues across the U.S. and internationally, he frequently writes articles pertaining to emerging addiction recovery data, recovery techniques and modalities, the science behind addiction, the addiction crisis, and long term treatment for the chronic disease of addiction.
Dr. Kimball has made the focus of his career studying collegiate and long term addiction recovery by focusing on factors that enhance long term recovery and improve the treatment industry at a local, national, and international level. Follow him @drtomkimball | https://docweighsin.medium.com/why-we-should-stop-random-drug-testing-the-doctor-weighs-in-3992d2b5e242 | ['The Doctor Weighs In'] | 2020-02-17 15:01:01.669000+00:00 | ['Drug Testing', 'Addiction', 'Mental Health', 'Substance Abuse'] |
Escalation | Escalation
A Poem
Encroaching, the hand poses
above this white thigh
Pressed to the couch
open, vulnerable
touched and waiting
A lone car
taken to the street at 10
drives by
its headlights flashing code on the wall
I’d like to say
I’ve been here before
But I’m in awe
of you
trying to take you in
as anatomically as possible
building again
some kind of unity
*Commentary: at a certain point, heightening the sense that the desired is coming, is coming, is about to manifest, will be realized…and there’s a sort of blankness that follows, a period that could stretch infinitely, even as you move forwards towards loving, touching, revealing yourself in the other, which functions as an end; but there’s really no end to this end, or there doesn’t have to be. Conceptually, this end can elude us, making us think we’re headed for another end after all and the last one was just another means. This is a surface for inscription. Taking the opportunity to sink into yourself, to sink into the other, you sacrifice clarity even as you scramble to take in the whole scene, be shown as the architect of, at least, a portion of it. The playwright? Oh, that might be a little too on the nose. But let’s say it is, after all, a not-so uncomfortable drift, a canvas you cannot wait to take on, even as you have to wait… | https://medium.com/the-rebel-poets-society/escalation-67684f9fd5c4 | ['J.D. Harms'] | 2020-11-27 16:26:41.151000+00:00 | ['Poetry', 'Desire', 'Love', 'Poem', 'Image'] |
Innovation is good, just not an excuse for not meeting the minimum | The development of new technologies has enabled the creation of an infinity of new services. The emergence of the digital economy has meant fundamental changes to the way in which societies are organized, producing new paradigms in our systems of cooperation and coordination. The extraordinary growth that the companies of the so-called Digital Economy have had, has meant an important challenge for regulators in the different dimensions of economic activity, for instance we have seen important differences between some of the biggest tech companies and France on how they should be taxed. A similar situation has developed on labor regulation and benefits.
The discussion of how labor relations between workers and firms should be regulated is particularly important for organizations within the so-called gig economy. The gig economy, or collaborative economy, is understood as the set of business models in which a digital infrastructure makes it possible to conduct transactions between users and suppliers for different types of goods and services. These transactions can be for basic goods or services or for partial processes in more complex tasks of the production chain. What follows is a brief discussion of the regulatory scope of labor in this kind of organization.
How are things organized?
In order to discuss the regulation of work in this type of firms, it is necessary to agree on certain concepts. There are several studies on the subject that elaborate the concepts of the different economic models that exist in the industry. It is possible to identify both crowd-work and work-on-demand as types of work in the Gig-Economy. According to De Stefano (2016)[1], crowd-work refers to those systems that, through a digital platform, connect organizations, companies and individuals through the Internet, allowing the connection of clients and workers on a global scale. On the other hand, the work cited defines work-on-demand as those traditional tasks that can be offered through mobile applications. The interesting thing about this conceptual discussion is that it allows identifying certain patterns in the functioning of organizations.
In general, the tendency has been for firms to segment their supply based on the different types of service provided as well as on the different “qualities” of the same service. An example of these differences is the case of Uber, which has segmented its offer based on the demand of its users but also on the basis of its collaborators (to drive UberBlack it is necessary to have a certain car). The interesting thing is that, even though the number of such companies and services is countless, the underlying business model is in essence quite similar, which allows us to think of similar solutions to certain problems.
As mentioned above, it is possible to identify business models that follow similar patterns, particularly regarding the relationship with their collaborators. The emergence of countless companies that use the same model for different activities (transportation, household chores or even more complex services such as physicians) accounts for a recurring mechanism as a business model.
Many of these models rely on the flexible supply that firms can set up. Because of this, companies decide not to formalize the relationship with their collaborators. It is important to consider that this is (at the moment) a rational and conscious decision by firms. Much of the success of these companies, with their significant efficiency gains for the whole of society, was the product of the innovation and creativity involved in organizing things differently through technology. This may have meant that legal gaps have been used at some point in time to adopt models of organization and cooperation that did not exist before. In the same way, it can be considered that this type of labor relation can also be of interest to workers, who benefit, for instance, from greater freedom in work schedules that can complement other activities.
However, due to the economic size[2] and social importance of these activities nowadays, it is necessary to move towards regulatory frameworks that safeguard the safety and equity of workers and users and also ensure that these companies meet the same legal requirements as any other company.
Why regulation is necessary?
It should be considered that there are structural characteristics of these business models that justify the need to regulate their behavior. The network economies and the resulting economies of scale of many of these companies mean that in some cases the idea of freedom in the decision to work can be challenged. Conceptually, this principle is decisive when classifying a job and a worker as dependent or independent.
It should be considered that, due to the way in which many of these markets are organized, there are important economies of scale; this implies that companies deliver their services more efficiently the larger their sales volumes are. This can undoubtedly mean significant gains for consumers, but it can represent significant challenges for workers who may face a single employer that becomes a monopsony in certain markets. This can be aggravated when workers have high degrees of specialization or when it is the only economic activity they perform. Recent evidence has shown how many of the firms that carry out these activities have been acquired or merged to achieve greater volume[3] (maybe this is a topic that market regulators should consider in their competition analysis).
When there are fewer employers, freedom of decision can be jeopardized. This, added to certain mechanisms used by these companies, can affect the idea of independence in the decision of whether or not to work. Because the company can hold a kind of monopsony in this labor market, it can incur certain abuses in terms of salary, hours of work, and tasks to be performed, and these must be regulated.
What should be regulated?
This is probably the most difficult question to answer, due to the effects it may have on the operation of the system. However, it is possible to approach this challenge by ensuring certain minimums in terms of protection for workers and consumers. In this sense, countries must move towards labor statutes that consider the characteristics of these types of work. For this it is decisive to separate the regulations according to the nature of the task performed. In general terms, regulations should consider minimum wages per hour worked in accordance with current laws, and maximum hours to work per day and per week should also be considered in those tasks that are paid per hour. Finally, regulatory frameworks should consider minimum coverage in terms of labor accident insurance, health coverage and pension contribution.
It is necessary to move towards labor statutes that adapt to new technologies, that promote innovation and economic development, and that also allow improving the lives of workers and users. This must be done by ensuring the principles of social justice and minimum guarantees for all, principles that must be at the center of development. This means that countries must update their regulations to ensure that all workers have minimum guarantees when they work. The job security of workers should not depend on the type of work they do, be it driving public transport or driving for Uber.
Cash App Money Generator | Free cash app money generator — Learn how to get cash app free money in 2021 using this simple cash app hack apk 2021. All you have to do is visit the website below:
CLICK HERE TO ACCESS GENERATOR
If you have ever wanted to stop being poor, this is the right place for you! I’ve recommended this cash app glitch 2021 to all of my friends and family; now they all benefit from it and get $100 cash app money every day! This is all thanks to this amazing tool, which is really simple to use and doesn’t require anything other than a mobile phone device running on either iOS or Android, or a PC.
cash app free money code without human verification
cash app money generator no human verification
cash app money generator legit no human verification
how to get free money on cash app no verification
free cash app money legit
cash app money generator legit 2021
cash app money hack
request free money on cash app
is the cash app money generator real?
cash app money generator no human verification. | https://medium.com/@lovohah254/cash-app-money-generator-51d0dec6095f | [] | 2021-06-17 20:53:56.421000+00:00 | ['Make Money Blogging', 'Make Money', 'Make Money Online Fast', 'Make Money Online', 'Make Money From Home'] |
About Me — Lawrence Grabowski. I first realized I was good at writing… | Photo by author
I first realized I was good at writing in high school, then spent a lot of time doing it incidentally in college.
Like many people, I ended up becoming a writer after working for several years. This isn’t because I got tired of my corporate job or never believed I was capable. I thought that everyone was good at writing.
Turns out they aren’t. And that I’m happy to do the boring parts of writing in a professional setting. Now, after years of not really knowing what to do with myself, I write for a living and tutor Spanish.
Outside of writing, I do a lot of gaming: video, board, and roleplaying; I practice Chinese martial arts; and I look for excuses to go back to Spain.
Get a feel for my writing.
Check out my portfolio. | https://medium.com/about-me-stories/about-me-lawrence-grabowski-db5e79abf7db | ['Lawrence E. Grabowski'] | 2020-12-12 16:10:51.673000+00:00 | ['Biography', 'About Me'] |
<h1> Enter cliché title here </h1> | I wrote my first line of code in 2017. I understood my first line of code in 2020.
That’s not to say that it’s taken me three years to understand what a call-back function is, but rather that it’s taken me three years to decide to understand what it is. How does that make sense, you ask? Let’s rewind a couple of years so I can better explain.
The first image that appeared when I searched for ‘rewind’ on Unsplash
It was mid-April, 2017. I had just graduated from university with a Bachelor of Science. After undergoing the gruelling process of peer-reviewed research for the previous two years, I was confident that wasn’t the direction I wanted to head in post-graduation. I adored the biological sciences, but felt as though my creative muscles were aching to be exercised. So, after convincing an interviewer that my work ethic and knowledge of consumer psychology was enough to outweigh my complete lack of any marketing education, I accepted a job in the marketing industry! The complexity of the human mind had always fascinated me (and it still does!), so marketing and communications was an exciting field for me at the time. It allowed the imaginative and unconventional traits of my personality to flourish.
One day there was an issue with our website, and long story short, I ended up having to copy and paste a few lines of indiscriminate letters and symbols (<aside> it was javascript! </aside>) from the web into an email for a colleague. I had absolutely no idea what I was doing, or what any of it meant. All I knew was that I was thankful that I didn’t have to understand it! That was my first, and essentially only, exposure to code for many years.
An accurate depiction of what Javascript looked like the first time I saw it.
As I worked my way up the marketing industry to a managerial role, I eventually found myself confronted by an overwhelming feeling of mediocrity. Why? Part of the reason was that I no longer felt challenged in my professional life. But the other part was because, for the better half of a decade, I had been battling with an outdated, societally-driven belief that I could only achieve “professional success” by pursuing a career in law, medicine, engineering, or finance. Unsurprisingly, this feeling ultimately resulted in a misguided career change into corporate and securities law.
By no means do I regret this professional shift; I gained phenomenal experience and many transferable skills (shoutout to the LSAT for teaching me about conditional reasoning which has helped immensely with Javascript!). However, I realized within a year that my legal career would be unfulfilling due to an obvious lack of passion. So, I did what any person does when they need to re-evaluate their career trajectory: I moved to Germany! I spent the next year of my life learning a second language (Javascript now counts as a third language, right?), publishing academic research, and exploring other career opportunities. It was during this period of self-reflection and discovery that I learned about the world of web development!
Web development was extremely enticing to me for a multitude of reasons. It was multi-disciplinary (the perfect intersect between art and science!). It was continuously evolving. It was collaborative. It required brainstorming creative solutions to complex problems. It was diverse, flexible, and relatively ‘future proof’. It was all-encompassing. It was… perfect for me! So, following the completion of some rigorous research, I ignited a passion for ‘web dev’ and outlined a 6-month plan that would aid me in my professional transition.
Let’s get this career change started!
So, now to answer the question I’m sure you’re all wondering. Why did it take me three years to finally decide to learn to code? To that, I answer: haven’t we already established that I live a life ruled by misconceptions? 😉 In all seriousness, I actually was deterred from the world of coding and web development due to three common fallacies:
1. Web development is anti-social.
2. Web development is very math-heavy.
3. If you don’t begin learning to code at a young age, you’ll never catch up to those who did.
I’m not going to dive into why these are inaccurate (feel free to check out this blog post if you’re interested), I’m just highlighting them to help explain why there’s a three-year gap between when I wrote versus when I understood my first line of code.
So here I am, three years later, ready to dedicate my foreseeable future to eating, breathing, and sleeping code. Let the bootcamp begin! | https://medium.com/@tenalbourchier/h1-enter-clich%C3%A9-title-here-h1-f35aa7086921 | ['Tenal Bourchier'] | 2020-10-25 16:07:47.780000+00:00 | ['Web Development', 'Codingbootcamp'] |
Predictive Maintenance within the Internet of Things | The Fourth Industrial Revolution
In the not-so-distant future, most manufacturing equipment, machines, and devices are expected to be connected to the so-called internet of things, through which they will be able to communicate not only with each other but also with humans, and, equipped with sensors of all kinds, they will be able to provide information about their environment or themselves in real time. Sensors are one of the main contributors to the infamous big data. With cloud computing, storing data and extracting knowledge from it is now possible. More precisely, sensor data can be used as input for algorithms from artificial intelligence, i.e., to develop software for “smart” objects that can make decisions by themselves. For instance, fridges might order food when they are empty, or cars might write mails to partners to inform them about delays due to traffic jams. The combination of these new technologies is expected to mark the advent of the fourth industrial revolution.
Predictive Maintenance
One of the interesting applications is predictive maintenance, since it enables dynamic servicing of machines as opposed to fixed maintenance intervals. In addition, remote devices can call for service once some of their parameters show a trend towards malfunction. This is essentially a machine learning task in which normal and non-normal operation states have to be distinguished — a problem known as anomaly detection.
Anomaly Detection
Figure 1: Possible anomalies in sensor signals.
In Figure 1, three different types of anomalies in sensor signals in the time domain are represented. A point anomaly is a point (or small set of points) that is different not only on a local but also on a global scale, e.g., when represented in the sensor signal’s distribution. In contrast, a contextual anomaly attracts attention only when some time window is inspected, as it is a local phenomenon. Last, a collective anomaly is a deviation from a regular pattern. Which patterns occur depends on the combination of sensor, machine, and application. While it is necessary to inspect some signal history to spot the two latter types of malfunction, this is not the case for the former one. Hence, the order of the listing is also the order of difficulty in detecting such patterns. Moreover, a real machine can contain many sensors, and the combined information can be a better estimate of its state.
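As one concrete illustration, a contextual anomaly of this kind can be caught by comparing each sample against its recent window rather than against global statistics. The sketch below uses a rolling-window z-score in plain Python; the signal values, window size, and threshold are invented for illustration only.

```python
def rolling_zscore_flags(signal, window=5, z_thresh=4.0):
    """Flag samples that lie more than z_thresh local standard
    deviations away from the mean of the preceding window."""
    flags = []
    for i, x in enumerate(signal):
        context = signal[max(0, i - window):i]
        if len(context) < 2:
            flags.append(False)  # not enough history yet
            continue
        mean = sum(context) / len(context)
        var = sum((c - mean) ** 2 for c in context) / len(context)
        std = var ** 0.5
        flags.append(std > 0 and abs(x - mean) > z_thresh * std)
    return flags

# A small oscillation with one abrupt jump; only the jump (index 7)
# deviates strongly from its local context and gets flagged.
signal = [0.0, 0.1, -0.1, 0.0, 0.1, -0.1, 0.0, 5.0, 0.1, 0.0]
flags = rolling_zscore_flags(signal)
```

In practice the window size and threshold would have to be tuned to the sensor and sampling rate at hand.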
In reality, machines degrade over time due to attrition, but can also fail all of a sudden due to an external impulse. As a consequence, they drift away from their reference state, which is defined by a brand-new sample. This reference state can be interpreted as a multivariate distribution generated by the different sensors, and over time, this distribution changes its parameters, i.e., its mean and/or covariance. Recognizing non-normal behavior means that the displacement caused by this drift — in a way causing more and more “point” anomalies over time — has to be identified, either quantitatively (magnitude of deviation) or qualitatively (deviated: yes/no). It can also be interesting to know the direction of the drift, as it might reveal which component(s) need replacement.
Supervised Learning
With supervised learning, one or more classes of anomalies can be directly targeted — as long as labelled data is available. In some cases, such data can be obtained by performing stress tests, e.g., by exposing a machine/device to extreme conditions, or operating under unusual conditions for a long period of time.
Figure 2: Classification of anomalies with supervised learning.
In Figure 2, the reference state and one anomaly class are depicted. A support-vector machine with a linear kernel is used to model the decision boundary between the two states, but any other classification algorithm, e.g., logistic regression, a random forest, or a neural network, can be used. Unfortunately, it is rarely known in advance what non-normal states look like; even if historical data from stress testing might be available, it need not be complete. Therefore, it could be more interesting to identify every point outside of the reference state, independent of its direction.
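As a sketch of this supervised route, the snippet below fits a minimal logistic-regression classifier (one of the alternatives mentioned above) to a toy two-sensor data set. The readings and hyperparameters are invented for illustration; a real application would rather use a library such as scikit-learn.

```python
import math

# Toy two-sensor readings: reference state near (0, 0), anomaly state near (3, 3).
X = [(0.1, -0.2), (-0.3, 0.1), (0.2, 0.3), (-0.1, -0.1),
     (3.1, 2.8), (2.7, 3.2), (3.3, 3.1), (2.9, 2.6)]
y = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = reference, 1 = anomaly

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weights w1, w2 and bias b by plain stochastic gradient descent
# on the logistic loss.
w1 = w2 = b = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - target
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    """Return 1 if the point is classified as anomalous, else 0."""
    return int(sigmoid(w1 * x1 + w2 * x2 + b) >= 0.5)
```

With labelled data this separates the two states, but, as noted above, it can only recognize the anomaly classes it was trained on.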
Unsupervised Learning
Alternatively, it is possible to define only a reference state and classify all points outside of it as non-normal behavior. Furthermore, if there are some potential outliers in the training set, the algorithm will try to identify them by itself (unsupervised learning). Hence, this technique needs no labelled data at all, and it can — in contrast to the supervised learning approach — identify all points outside of the reference state, i.e., anomalies not included in the training set.
Figure 3: Classification of anomalies with unsupervised learning.
More precisely, the aim is to construct a so-called “envelope” around the data set consisting of the reference state so that points outside of it can be identified (Figure 3). For instance, one could fit a Gaussian distribution to the data set and, in a second step, compute the Mahalanobis distance of points to estimate their “normality” (the Mahalanobis distance is the extension of the Z-score to more than one dimension). However, the actual distribution of the data need not be Gaussian, and in that case, other techniques can be applied with the same idea, e.g., the one-class support vector machine, the local outlier factor, or the isolation forest. These methods do not compute a distance metric; they assess the “loneliness” or relative position (inside/outside the envelope) of a point instead. Furthermore, they allow one to define an initial amount of outliers/contamination (ν) to control the tightness of the volume.
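To make the envelope idea concrete, here is a minimal Mahalanobis-distance detector in plain Python: it fits the mean and covariance of a two-sensor reference set, takes the (1 − ν) quantile of the training distances as the threshold, and flags everything beyond it. The sensor readings are invented; in practice one would rely on a library implementation such as scikit-learn's EllipticEnvelope.

```python
# Reference state: 2-D sensor readings clustered around (10, 5) (invented values).
ref = [(10.2, 5.1), (9.8, 4.9), (10.0, 5.0), (10.1, 4.8), (9.9, 5.2),
       (10.3, 5.0), (9.7, 4.9), (10.0, 5.3), (10.1, 5.1), (9.9, 4.7)]

n = len(ref)
mx = sum(p[0] for p in ref) / n
my = sum(p[1] for p in ref) / n

# 2x2 covariance matrix of the reference state.
sxx = sum((p[0] - mx) ** 2 for p in ref) / n
syy = sum((p[1] - my) ** 2 for p in ref) / n
sxy = sum((p[0] - mx) * (p[1] - my) for p in ref) / n

# Inverse of the covariance matrix (closed form for the 2x2 case).
det = sxx * syy - sxy * sxy
ixx, iyy, ixy = syy / det, sxx / det, -sxy / det

def mahalanobis(x, y):
    dx, dy = x - mx, y - my
    return (dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy) ** 0.5

# Threshold: the (1 - nu) quantile of the training distances.
nu = 0.1
dists = sorted(mahalanobis(x, y) for x, y in ref)
threshold = dists[int((1 - nu) * n) - 1]

def is_anomaly(x, y):
    return mahalanobis(x, y) > threshold
```

Any new reading is then scored against the frozen reference state, which matches the drift picture above: as the machine degrades, more and more readings land outside the envelope.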
Monitoring
Figure 4: Cumulative sum of outliers/anomalies over time for a sample system.
In Figure 4, four such algorithms, i.e., Mahalanobis distance/robust covariance (RC), local outlier factor (LOF), one-class support vector machine (OCSVM), and isolation forest (IF), are trained on a data set with ν = 0.10, i.e., an initial fraction of anomalies of ten percent, and monitored over time. In the beginning, almost no anomalies are identified. After this initial period, some points fall outside of the confidence region, and after some more weeks, every new point is classified as an anomaly, and the slope becomes unity. By defining a threshold value, an automatic message can be triggered to inform the operator. Furthermore, by assessing the reference state after fabrication/before shipping, every machine could have its own model, and its maintenance schedule would be customized.
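The monitoring logic behind such a plot reduces to accumulating the per-sample anomaly flags of any detector over time and raising a notification once the cumulative count crosses a chosen threshold. The weekly flags and the threshold below are invented for illustration.

```python
def first_alert(flags, threshold):
    """Return the index at which the cumulative number of anomalies
    first exceeds `threshold`, or None if it never does."""
    total = 0
    for t, flagged in enumerate(flags):
        total += int(flagged)
        if total > threshold:
            return t
    return None

# Simulated weekly outlier flags: quiet at first, then drifting.
weekly_flags = [False, False, False, True, False, True, True, True, True, True]
alert_week = first_alert(weekly_flags, threshold=3)  # service call at week 7
```

The threshold trades off early warning against false alarms, and could itself be tuned per machine from longitudinal data.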
Closure
The provided examples demonstrate briefly how upcoming downtimes and malfunctions can be targeted and identified, as finding faults in equipment, machines, and devices early can reduce the extent of the damage. Furthermore, maintenance can be designed in a dynamic manner, as components can communicate their need for service, which optimizes the costs thereof. After all, it is a creative process, and engineers can think of novel products embedded with such technology.
Your Home Needs One Simple Design Philosophy to Be More Productive and Happier | As the pandemic stretches on, most of us have hit the breaking point of working from home fever. Our home feels constricted, in some moments even claustrophobic. Time feels featureless; the clock a stagnant lake.
Since the pandemic started, I’ve been working from my home. This is the first time I had settled in one place for more than a few months. Before the pandemic, training contracts took me to other cities and countries every 2 or 3 months.
After a few months of being stuck at home, my home office has lost its luster.
You know what I’m talking about. The depressing monotony of looking at the same walls is driving you crazy. Every day is fraught with stress and you’re stuck home, working part-time or full-time. Juggling work and your home life are utterly exhausting. The stress is causing you to get short and snap at your partner. As the pandemic continues, you’re worried you will lose your paycheck or have your salary cut at some point.
All these spending too much time in a confined space has thrown your productivity out of the window.
At one point, I got alarmed when I fixated on a crevice in the ceiling, instead of writing an article the whole afternoon. That day felt like the walls were collapsing on me. I wished to get out and teach my students face-to-face — even though I knew it was impossible.
Before I went crazy, a friend suggested turning to houseplants to give my home a sense of serenity. I started reading about a design philosophy called biophilia — bringing nature inside. I started bringing nature’s magic back into my home.
Today, I’m writing to you from my new biophilic designed home. It has been two months since I purchased houseplants and flowers for my studio apartment where I live with my partner. The plants I see when I wake up and sit down in front of my computer help me find a sense of equilibrium and comfort. My productivity has improved and I’m a lot happier these days.
If I’m going to spend every day, every week, and every month in my apartment, then it has to be a sanctuary for me. A place where I can take a break and have some serenity. Where a sustained engagement with nature improves my mood and boosts my productivity.
Even if the pandemic is still haunting the outdoors, you can bring nature inside.
Even if you have a tiny space for your home office, you can create a harmonious work environment. Proximity of nature, even if you buy a tiny, single green houseplant, fosters a positive connection with your workplace. Or you can just put a visual image of your favorite nature picture on your walls.
Before the pandemic, you could do lots of things to boost your productivity. You could call your friends and visit your favorite park in your city. You could watch fresh green leaves unfurling right in front of your eyes. You could recharge your system by walking in a garden that’s bursting into life. You could take off your shoes and feel the damp earth beneath your feet and feel you could do anything.
Now, any outside exposure is scary. We’re terrified of contracting the coronavirus.
So, we’re reorienting our home to create a restorative workplace inside.
Some of us are creating spaces in our homes to put houseplants that energize, stimulate, and connect us. We’re designing nature’s magic back into our homes where they can give us a calm, relaxing, and restorative effect on our work.
Yes, this is an indirect experience with nature. But it’s still a wonderful experience.
This is because…
Productivity enhancement is one of the priceless biophilic design benefits.
One study conducted in two large commercial offices in the UK and The Netherlands showed that ‘green’ offices with plants made staff 15% more productive than ‘lean’ designs stripped of greenery.
Purchasing gorgeous houseplants and putting them in every room of my apartment is the best decision I made since the pandemic started. My plants help me project a soothing image to colleagues on daily video calls. I can show my energy on the camera when I teach my students online. I no longer look at a ceiling and get bored. Instead, I can look at my plants and feel soothed.
Intentionally putting plants inside your home can boost your productivity.
When you have a plant inside your home, your creativity and attention span increase. You get a better handle on stress, because filling your indoor environment with plants boosts the creation of melatonin. Melatonin is important since it regulates people’s sleep-wake cycles, making a noticeable difference in your energy levels.
These days, an optimal work-from-home condition is a necessary tool.
We don’t know how long we will work from home. 2020 has been an education in learning to swim in an ocean of not knowing. For me, the uncertainty has been exhausting. We have to design our home with our well-being in mind. When we work from home, it needs to feel like we’re walking through an open door, instead of banging our head against a wall.
Buying houseplants for your home is a great place to start.
Giant companies had already designed their offices with a biophilic design before the world heard of a vicious virus. Microsoft debuted tree-house conference rooms in Redmond, Washington. Facebook created a 3.6-acre rooftop garden at its Silicon Valley hub.
You can make your home a place where you can have some serenity and foster a positive connection with your work.
Your home does not have to be a mansion to benefit from nature. With any available space you have, buy houseplants you can put in pots.
While you’re at it, use any available natural light. Sunlight stimulates the hypothalamus or the mood center of the brain. Any ray of sunshine that comes through your window boosts your productivity. It soothes you. So, open your curtains. Keep your windows clear and clean. No blinds.
Look outside your window and think about how you can bring nature inside your home. When you do, you create a work environment with your well-being in mind.
I hope you create a tiny refuge in your home from the bruising world outside. | https://bandaxen.medium.com/your-home-needs-one-simple-design-philosophy-to-be-more-productive-and-happier-1a66907392ef | ['Banchiwosen Woldeyesus', 'Blogger Ethiopia'] | 2020-11-11 08:43:52.499000+00:00 | ['Nature', 'Productivity', 'Life Lessons', 'Productivity Hacks', 'Coronavirus'] |
Dating Coach Certification | 1. Don’t overcomplicate things
The first date with someone you know little or nothing about is full of uncertainty. Don’t make things more complicated by trying to arrange the perfect romantic dinner, or planning a whole day out. Instead, keep things short and simple. A cup of coffee in a central location will make it clear fast if your date is someone you would like to spend more time with. And if things go well, the coffee could turn into a lunch or dinner, adding some spontaneity into the mix.
The Must-Follow Best Practices for Your Push Notifications | So here they are, my best practices for push notifications. These can be thought of as four pillars*:
When: Make them timely
Who: Make them relevant
What: Make them precise
Tech: Implement them correctly
When: Make them Timely
So my first tip is to make push notifications timely. What I mean by this is to try to predict the right time to contact your user: can you be more flexible than just sending whenever you want?
Respect Local Timezone
Respect their timezone. It might be easy to say “Oh, it’s 6 PM here in New Zealand, let’s send out a push to all commuters”, only to have your audience asleep in other parts of the world. If you have a sale or time-sensitive event, try to send in the local timezone of the user. This will stagger the pushes over a window and might even reduce load on your server, if that’s required. A decent push service should give you this option.
Use Backoff Times
Sometimes you might want to send multiple pushes per day, or several different services that interact with your app might want to inform the user. Consider having a hard limit on this. The limit depends on the value you provide, but a max of 5–10 per day would suit most applications. Chat apps, for example, won’t have a limit, but 25 marketing messages in a day is probably (read: definitely) going to annoy your users. Consider developing an internal priority for some notification types over others, perhaps favouring transactional pushes over generic ones.
People Sleep
I imagine most people have their phones on do-not-disturb overnight, or at least on silent. But consider this: do you read all your notifications in the morning? Or do you clear them all immediately, maybe after reading the important ones?
If you’re part of that overnight noise, you have a lower chance of standing out. Perhaps restrict the amount of pushes you send over night and send a summary style push in the morning, a few hours after waking up.
Who: Make them Relevant
The most important thing is to keep pushes relevant to the user.
Don’t send content meant for just anyone
Ultimately the most valuable pushes are the pushes for me. Chats, deals based on history, or news alerts based on preferences. Don’t send me junk, and certainly don’t send something that can’t apply to me, for whatever reason.
Personalize based on Journey
Many services offer personalization based on journey. This could be anything to do with the context of the app. What level of the game you’re on, which items you have in your cart, which news stories you’ve read. How long you’ve had the app installed, or how long it’s been since you’ve last used it. There’s a bunch of marketing and engagement automation possible based off just some basic data points — have a think what could be awesome inside your app!
Use Transactional Push
Alternatively, but not mutually exclusively with audience segmentation, is transactional push. These are the 1:1 pushes that go to a single users only. Perhaps their package is shipping, or they have a chat, or they’ve got a new like. These sorts of notifications are perfectly personalized by definition, but still must adhere to the other best practices, such as timeliness.
Personalize with Names
Use the person’s name in a push, if you have it (and your users would remember where they gave it to you — don’t be creepy). For example: “Sam, we have new Doctor Who toys. Tap here”. Oh, and don’t ever say “click” if you’re on mobile. We don’t click there. /peeve
What: Make them Precise
The rule is simple here and there’s only one.
10 Words
Generally speaking, you have about 10 words to make your impact, so use precise language. Be obvious about what you want. Make the call to action clear. “You have a new chat from Ben”. “A new deal for you. Tap here for more”. “Sam sent you a friend request”. 10 words. Use them wisely.
The Tech: Implement Them Correctly
Notifications are quite tricky to implement, beyond the confusing mess of provisioning and certificates, but taking the time to do them right further helps with reducing the scary 71% stat above.
Ask for Permission Carefully
That alert box that pops up for notification permission may be one of the most important dialogs in your app. Preempt it. Onboard your user well. Explain: “Hey, here’s why we want to send you pushes and here’s what you get out of it. Are you in?” If you explain the value before you present that alert, you’ll get a much higher opt-in rate. I’ve seen rates jump from 20–30% to 70–90% with these techniques. This also applies to all other dialogs, such as location or bluetooth, as well. Explain the value, then ask the user to give up some privacy.
Use a 3rd Party Service
Don’t try to do this yourself. 3rd party services are all relatively cheap for the value they provide. They worry about scaling and reliability, and they have good feature sets. Just use an existing service.
Preload Content
If your push drives a user to some in app content, you should really be pre-loading that with the available API first. This will delay the push by 20 seconds or so, but when they open the app, the UI should be ready to go. No one wants to be taken to a loading screen.
Duplicate Notifications Inside Your App
It’s almost too easy to clear notifications. You should always try and duplicate the notification content inside the app. This could be an activity log, a chat history or a notifications log. Make it so users can always re-read the alert text of a notification. I really like Tweetbot’s implementation of this.
Expose Settings
If your app has different scenarios in which to send push, let the user turn off hearing about certain situations. Additionally, you can use the Notifications API on iOS or Android to see which alerts your user has enabled. For example, if they have turned off push, you can detect that and present the benefits again, and if they say yes, you can deep link them to the Settings app. This, over time, will slowly increase your opt-in rate.
Rich Push
In addition to notifications with Actions or Text Input, iOS 10 (and Android, which has had it forever) brings Rich Push, which allows you to show a selection of Images, GIFs and Videos in your push previews. In iOS 10 you can even do arbitrary views too, and you can learn more about that here.
Clear the Badge
Finally, my last best practice is to clear the unread count on the badge icon. Either natively on iOS or however it’s implemented on Android (It’s complicated), be sure to remove the badge count when the app is finished launching. Me, and I’m sure several others, have an obsessive compulsion to have all the badges cleared on our phones, so please don’t make us tap around to find out how to clear it. | https://medium.com/hackernoon/the-must-follow-best-practices-for-your-push-notifications-5f878565d2a9 | ['Sam Jarman'] | 2017-07-14 20:52:20.175000+00:00 | ['Marketing Automation', 'Best Practices', 'Push Notification', 'Mobile', 'Digital Marketing'] |
Tips for New Mothers During COVID-19 Pandemic | “The experience of pregnancy and the postpartum period can feel overwhelming and isolating, even in the best circumstances. Pregnant and postpartum moms are now even more likely to experience mental health challenges as they try to navigate huge life adjustments during a global pandemic. Family and friends that were once able to offer support might be unable to, places that once offered opportunities for socialization and self-care (gym, yoga studios, restaurants, movie theatres etc.) are closed, and new moms are concerned about their health and the health of their babies. While we are certainly living in an unprecedented time, here are a few tips for making this time a little more bearable. If you have older kids at home, click here for some tips on parenting during the pandemic.”
1. Connect with Others
While options for in-person socialization might be limited, there are lots of ways to remain connected virtually with family and friends. Try to talk with at least 2 different people each day, prioritizing people who have been good supports for you in the past or currently.
Join virtual groups to get peer support and talk with others who are going through similar struggles. The Breastfeeding Center of Greater Washington currently hosts a virtual Perinatal Stress support group every Thursday at 1:30 and 3:00. Postpartum Support International is also hosting weekly peer support groups (check out their respective websites for registration information).
2. Take Advantage of Low-Cost or Free Apps For Exercise
Peloton - Free 90-day trial offering a variety of workouts, including some that do not require equipment
Core Power Yoga or Down Dog — Virtual yoga classes, some free
Headspace — Meditation app, in particular “Weathering the Storm,” which is currently free.
Breeth — Meditation, Coronavirus Anxiety Package available
Sanvello — Offers on-demand help for anxiety & depression through meditations, peer support, journaling and logging progress, and triggers.
3. Social Media Distancing
Try to set firm boundaries for yourself related to checking your social media and the news. Both are likely to be centered on COVID-19. Only you know how much information and exposure is helpful versus harmful for your own mental health. Consider turning off news and app notifications if you are finding them intrusive.
4. Laugh
Make a concerted effort to take time off from worrying and working. Watch a show or play a game you enjoy with a partner or friend (even if you have to do it virtually!). Laughter is a great tool to alleviate anxiety and to increase your sense of wellbeing.
5. Get Dressed
It’s tempting to spend the whole day in pajamas, especially if you are working from home. But in terms of mental health, it often benefits people to continue some of their normal routines and to separate their night time routine from their daytime routine. Similarly, changing into clothes you normally work out in can give you a push to follow through and actually exercise.
6. Get Extra Support
If you are continuing to struggle with anxiety, depression, or scary thoughts, reach out to a mental health professional. Many psychotherapists are currently offering virtual sessions, and it can be helpful to have someone provide extra support during this challenging time.
Coursera: Information Visualization D3.js Project Week 1 & Week 2 | Photo by Lukas Blazek on Unsplash
I’ve been learning a lot about D3 and its usefulness this past week. In the course so far, week 1 was no problem since it was just a JavaScript refresher. For the week 2 curriculum, I learned how to write the code to create bar charts and label the x and y axes with information read from a csv file with multiple columns. The assignment for this course is to gradually add more lines of code to index.html in order to come up with stunning and visually appealing data charts for airlines. Week 1’s assignment was just creating the border display for the two charts and the CSS styling for the header. Week 2 was getting into the weeds: defining the height and width of the bars, defining the scales along the x and y axes, and drawing the overall bar chart.
So starting within the index.html file, we need to add the following within our <head>:
<script src="d3.js"></script>
In another script tag, I created the store object and wrote the loadData function to grab the csv file saved in my folder.
let store = {}
function loadData() {
let promise = d3.csv("routes.csv")
return promise.then(routes => {
store.routes = routes
return store;
})
}
I iterate over each route, producing a dictionary where the keys are the airline ids and the values are the information for each airline. I then increment the count (the number of routes) for the airline and save the updated information in the dictionary, using the airline id as the key.
function groupByAirline(data) {
let result = data.reduce((result, d) => {
let currentData = result[d.AirlineID] || {
"AirlineID": d.AirlineID,
"AirlineName": d.AirlineName,
"Count": 0
}
currentData.Count += 1
result[d.AirlineID] = currentData // save the updated information in the dictionary using the airline id as key
return result;
}, {})
The code below converts the dictionary produced above into a list, which makes it easier to create the visualization, and then sorts the data in descending order of count.
result = Object.keys(result).map(key => result[key])
result.sort((a, b) => {
return d3.descending(a.Count, b.Count)
})
return result
}
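As a quick sanity check, the grouping-and-sorting logic can be run on its own, without D3 or the csv file. In this standalone sketch the sample routes are made up for illustration, and a plain numeric comparator stands in for d3.descending:

```javascript
// Hypothetical sample of parsed csv rows (ids and names are made up)
const sampleRoutes = [
  { AirlineID: "24", AirlineName: "American Airlines" },
  { AirlineID: "24", AirlineName: "American Airlines" },
  { AirlineID: "317", AirlineName: "Delta Air Lines" },
];

function groupByAirline(data) {
  // Build a dictionary keyed by airline id, counting one entry per route
  let result = data.reduce((result, d) => {
    let currentData = result[d.AirlineID] || {
      AirlineID: d.AirlineID,
      AirlineName: d.AirlineName,
      Count: 0,
    };
    currentData.Count += 1;
    result[d.AirlineID] = currentData;
    return result;
  }, {});
  // Convert the dictionary into a list and sort by Count, descending
  // (b.Count - a.Count plays the role of d3.descending here)
  return Object.keys(result)
    .map((key) => result[key])
    .sort((a, b) => b.Count - a.Count);
}

console.log(groupByAirline(sampleRoutes));
// → American Airlines first (Count 2), then Delta Air Lines (Count 1)
```

Running this in node shows the shape of the list that the chart code receives: one object per airline, already ordered by route count.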
The next function is called showData(). We get the routes from our store variable, compute the number of routes per airline, and then call drawAirlinesChart with airlines as a parameter. That function draws the airlines bar chart.
function showData() {
let routes = store.routes
let airlines = groupByAirline(store.routes);
console.log(airlines)
drawAirlinesChart(airlines)
}
loadData().then(showData);
The following code defines the function getAirlinesChartConfig(), in which we set the chart’s width, height, margins, etc. For the scales we create getAirlinesChartScales(), to draw the bars we use drawBarsAirlinesChart(), and for the axes, drawAxesAirlinesChart().
function getAirlinesChartConfig() {
let width = 350;
let height = 400;
let margin = {
top: 10,
bottom: 50,
left: 130,
right: 10
}
let bodyHeight = height - margin.top - margin.bottom
let bodyWidth = width - margin.left - margin.right // compute the width of the body by subtracting the left and right margins from the width
// The container is the SVG where we will draw the chart. In our HTML it is the svg tag with the id AirlinesChart.
let container = d3.select("#AirlinesChart")
container
.attr("width", width)
.attr("height", height)
return { width, height, margin, bodyHeight, bodyWidth, container }
}
function getAirlinesChartScales(airlines, config) {
let { bodyWidth, bodyHeight } = config;
// Use d3.max to get the highest Count value we have on the airlines list.
let maximumCount = d3.max(airlines.map(d => {
return d.Count
}))
console.log(maximumCount)
let xScale = d3.scaleLinear()
.range([0, bodyWidth])
.domain([0, maximumCount])
let yScale = d3.scaleBand()
.range([0, bodyHeight])
.domain(airlines.map(a => a.AirlineName)) //The domain is the list of airlines names
.padding(0.2)
return { xScale, yScale }
}
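Before drawing, it helps to see what these two scales actually compute. The sketch below is a minimal stand-in for d3.scaleLinear and d3.scaleBand, written only to illustrate the mapping from data values to pixels (the real d3 versions handle inner/outer padding, rounding and more, and the sample numbers here are made up):

```javascript
// Minimal stand-ins for d3.scaleLinear / d3.scaleBand (illustration only)
function makeLinearScale(domainMax, rangeMax) {
  // Maps a count in [0, domainMax] onto a pixel width in [0, rangeMax]
  return (value) => (value / domainMax) * rangeMax;
}

function makeBandScale(names, rangeMax, padding) {
  // Gives each name an equal slot; each bar fills (1 - padding) of its slot
  const step = rangeMax / names.length;
  const bandwidth = step * (1 - padding);
  const scale = (name) => names.indexOf(name) * step + (step * padding) / 2;
  scale.bandwidth = () => bandwidth;
  return scale;
}

const xScale = makeLinearScale(500, 210); // 500 routes -> 210px body width
console.log(xScale(250)); // 105: half the routes, half the width

const yScale = makeBandScale(["United", "Delta"], 340, 0.2);
console.log(yScale.bandwidth()); // ~136: each bar is 80% of its 170px slot
```

This is why the bar-drawing code can simply use yScale.bandwidth() for the bar height, yScale(name) for the vertical position, and xScale(count) for the bar length.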
function drawBarsAirlinesChart(airlines, scales, config) {
let {margin, container} = config; // this is equivalent to ‘let margin = config.margin; let container = config.container’
let {xScale, yScale} = scales
let body = container.append("g")
.style("transform",
`translate(${margin.left}px,${margin.top}px)`
)
let bars = body.selectAll(".bar")
.data(airlines)
bars.enter().append("rect")
.attr("height", yScale.bandwidth())
.attr("y", (d) => yScale(d.AirlineName))
.attr("width", (d) => xScale(d.Count))
.attr("fill", "#2a5599")
}
function drawAxesAirlinesChart(airlines, scales, config){
let {xScale, yScale} = scales
let {container, margin, height} = config;
let axisX = d3.axisBottom(xScale)
.ticks(5)
container.append("g")
.style("transform",
`translate(${margin.left}px,${height - margin.bottom}px)`
)
.call(axisX)
let axisY = d3.axisLeft(yScale)
container.append("g")
.style("transform",
`translate(${margin.left}px, ${margin.top}px)`
)
.call(axisY)
}
</script> | https://medium.com/@perezchristian1012/coursera-information-visualization-d3-js-project-week-1-week-2-9894915cbaca | [] | 2021-04-22 17:56:17.537000+00:00 | ['JavaScript', 'D3js'] |
The Origins of CCTV | CCTV — 4CH NVR ANVR2204
CCTV was first created by the Nazis during World War 2, when it was used to monitor the launching of V2 rockets in 1942. From its invention, CCTV used video technology mainly to provide surveillance of areas that were considered dangerous for people.
Since then, CCTV has become widely used all over the world by businesses of every type and size. The uses for CCTV coverage have also expanded, from simply monitoring areas that are dangerous for humans to employee surveillance and crime prevention, and now extend to protecting our own homes. The recording is digital in most cases, which has many advantages over tape, as the recording time is far greater. There are also options now available such as linking the cameras to an alarm or to automatic phone/email functions.
CCTV cameras have also developed significantly since being invented 60 years ago. There is a wide variety of camera shapes and sizes available, including cameras designed specifically for night vision. This means the number of places where cameras can be installed and images effectively recorded has greatly increased.
In recent years, home and business owners alike have become increasingly aware that CCTV cameras provide vital information about the people and premises they own or manage, as well as being a strong crime deterrent. With developments in technology pushing steadily forward, more and more surveillance options are set to become available. It seems likely that new devices will be specifically designed to monitor an even greater variety of places, with practically unlimited digital recording time and the ability to sync with phones, computers, and the police. The future of CCTV is assured.
Episodio 2 || My Bromance 2: 5 Years Later L’Italia subì | Episode 2
Episode 2 | Five years have passed since Golf “passed away”. Bank has never forgotten the love and sacrifice from his step-brother. On the day of his graduation from a university in Chiang Mai, Bank found out that Golf is actually still alive. The two brothers will finally meet again with many unanswered questions during those five years, in a time where society has changed. And their new and old friends are ready to help them grow their love tree again.
Watch On ►► http://dadangkoprol.dplaytv.net/tv/114381-1-2/3614-3637-3656-3594-3634-3618-my-bromance-2-5-years-later.html
Show Info
Web channel: LINE TV (2020 — now)
Schedule: Wednesdays (60 min)
Status: In Development; premiering December 2020
Language: Thai
Show Type: Scripted
Genres: Drama Romance
TELEVISION 👾
(TV), in some cases abbreviated to tele or television, is a media transmission medium utilized for sending moving pictures in monochrome (high contrast), or in shading, and in a few measurements and sound. The term can allude to a TV, a TV program, or the vehicle of TV transmission. TV is a mass mode for promoting, amusement, news, and sports.
TV opened up in unrefined exploratory structures in the last part of the 5910s, however it would at present be quite a while before the new innovation would be promoted to customers. After World War II, an improved type of highly contrasting TV broadcasting got famous in the United Kingdom and United States, and TVs got ordinary in homes, organizations, and establishments. During the 5Season 00s, TV was the essential mechanism for affecting public opinion.[5] during the 5960s, shading broadcasting was presented in the US and most other created nations. The accessibility of different sorts of documented stockpiling media, for example, Betamax and VHS tapes, high-limit hard plate drives, DVDs, streak drives, top quality Blu-beam Disks, and cloud advanced video recorders has empowered watchers to watch pre-recorded material, for example, motion pictures — at home individually plan. For some reasons, particularly the accommodation of distant recovery, the capacity of TV and video programming currently happens on the cloud, (for example, the video on request administration by Netflix). Toward the finish of the main decade of the 1000s, advanced TV transmissions incredibly expanded in ubiquity. Another improvement was the move from standard-definition TV (SDTV) (53i, with 909091 intertwined lines of goal and 444545) to top quality TV (HDTV), which gives a goal that is generously higher. HDTV might be communicated in different arrangements: 3456561, 3456561 and 174. Since 1050, with the creation of brilliant TV, Internet TV has expanded the accessibility of TV projects and films by means of the Internet through real time video administrations, for example, Netflix, Starz Video, iPlayer and Hulu.
In 1053, 19% of the world’s family units possessed a TV set.[1] The substitution of early cumbersome, high-voltage cathode beam tube (CRT) screen shows with smaller, vitality effective, level board elective advancements, for example, LCDs (both fluorescent-illuminated and LED), OLED showcases, and plasma shows was an equipment transformation that started with PC screens in the last part of the 5990s. Most TV sets sold during the 1000s were level board, primarily LEDs. Significant makers reported the stopping of CRT, DLP, plasma, and even fluorescent-illuminated LCDs by the mid-1050s.[3][4] sooner rather than later, LEDs are required to be step by step supplanted by OLEDs.[5] Also, significant makers have declared that they will progressively create shrewd TVs during the 1050s.[6][1][5] Smart TVs with incorporated Internet and Web 1.0 capacities turned into the prevailing type of TV by the late 1050s.[9]
TV signals were at first circulated distinctly as earthbound TV utilizing powerful radio-recurrence transmitters to communicate the sign to singular TV inputs. Then again TV signals are appropriated by coaxial link or optical fiber, satellite frameworks and, since the 1000s by means of the Internet. Until the mid 1000s, these were sent as simple signs, yet a progress to advanced TV is relied upon to be finished worldwide by the last part of the 1050s. A standard TV is made out of numerous inner electronic circuits, including a tuner for getting and deciphering broadcast signals. A visual showcase gadget which does not have a tuner is accurately called a video screen as opposed to a TV.
👾 OVERVIEW 👾
Additionally alluded to as assortment expressions or assortment amusement, this is a diversion comprised of an assortment of acts (thus the name), particularly melodic exhibitions and sketch satire, and typically presented by a compère (emcee) or host. Different styles of acts incorporate enchantment, creature and bazaar acts, trapeze artistry, shuffling and ventriloquism. Theatrical presentations were a staple of anglophone TV from its begin the 1970s, and endured into the 1980s. In a few components of the world, assortment TV stays famous and broad.
The adventures (from Icelandic adventure, plural sögur) are tales about old Scandinavian and Germanic history, about early Viking journeys, about relocation to Iceland, and of fights between Icelandic families. They were written in the Old Norse language, for the most part in Iceland. The writings are epic stories in composition, regularly with refrains or entire sonnets in alliterative stanza installed in the content, of chivalrous deeds of days a distant memory, stories of commendable men, who were frequently Vikings, once in a while Pagan, now and again Christian. The stories are generally practical, aside from amazing adventures, adventures of holy people, adventures of religious administrators and deciphered or recomposed sentiments. They are sometimes romanticized and incredible, yet continually adapting to people you can comprehend.
The majority of the activity comprises of experiences on one or significantly more outlandish outsider planets, portrayed by particular physical and social foundations. Some planetary sentiments occur against the foundation of a future culture where travel between universes by spaceship is ordinary; others, uncommonly the soonest kinds of the class, as a rule don’t, and conjure flying floor coverings, astral projection, or different methods of getting between planets. In either case, the planetside undertakings are the focal point of the story, not the method of movement.
Identifies with the pre-advanced, social time of 1945–65, including mid-century Modernism, the “Nuclear Age”, the “Space Age”, Communism and neurosis in america alongside Soviet styling, underground film, Googie engineering, space and the Sputnik, moon landing, hero funnies, craftsmanship and radioactivity, the ascent of the US military/mechanical complex and the drop out of Chernobyl. Socialist simple atompunk can be an extreme lost world. The Fallout arrangement of PC games is a fabulous case of atompunk. | https://medium.com/episodio-2-my-bromance-2-5-years-later-litalia/episodio-2-my-bromance-2-5-years-later-litalia-sub%C3%AC-6174eb780a5b | ['Michael L. Morgan'] | 2020-12-16 10:46:42.617000+00:00 | ['Romance', 'Drama', 'Gay'] |
Service Vs. Manufacturing | Service Vs. Manufacturing
Here’s the list of young companies/ startups from India promising huge returns to its investors:
Delhivery
Flipkart
Policybazaar
Byju’s
CRED
Vernacular.ai
PharmaEasy
Digit Insurance
Meesho
Groww
Nykaa
Udaan
Dream11
Swiggy
Infra.Market
Urban Company
Moglix
Zeta
BrowserStack
Ola
Uber
Oyo
What’s so peculiar about this list of unicorns is that, barring one, all are operating in the service sector. They don’t manufacture anything but provide services to customers, thus creating data, which is what investors value. The same data will then get used by manufacturers to sell their products. We all know how companies such as Google, Facebook and Amazon, through their tracking of our behaviour, know more about us than we know about ourselves. The companies listed above are no different. They collect data, which can be commercialised. Zomato is valued more than Mahindra & Mahindra. But the question we need to address here is: what is happening to the Indian manufacturing sector in the process? Why is the Indian manufacturing sector losing out to the service sector at such a high pace? Is losing manufacturing to China a cause or an effect?
Globally, close to 50% of GDP is created by MSMEs, whereas in India, the figure stands at just under 30%. The Government has taken initiatives to increase this to 50% by 2024, but somewhere, I guess, we have lost track. Is it a good strategy to broaden the definition of MSMEs by including the hospitality sector just to achieve the goal we have set for ourselves? The government could have separately offered a special package to the hospitality industry rather than clubbing it with MSMEs, for instance.
This takes us to a more serious question: what’s the difference between a product and a service? Products, it seems to me, have become services and services, products. Take for instance the new service launched by Maruti Suzuki: you rent a car directly from the manufacturer, with no direct liability for taxes and maintenance of the car; a manufacturer has moved into services to compete with the Olas and Ubers of the world. Or consider how DHL packages its services as products: standard delivery and expedited delivery — two products and two price tags. So, how should GDP be calculated? When I buy a Kent RO for INR 16,000, that’s part of manufacturing, but when I sign up for an AMC and end up paying close to INR 20,000 over the next 6 years, a major part of that is service (barring consumables, which get manufactured). Of course, I am assuming the consumables are manufactured in India. So, does Kent operate in the manufacturing or the service sector? Or both? Does it contribute more to services than to manufacturing?
How to Deploy a Django App on Heroku | First of all, you need a Procfile , which is used to declare the application process type. This file must be located on your projects’ root directory. You can create one running the following command on your terminal:
echo "web: gunicorn PROJECT_NAME.wsgi" > Procfile
Where PROJECT_NAME is the name of your Django project.
Next, you need to install some packages to make the application work on the server. These are the required packages:
gunicorn (an application server)
dj-database-url (for the database)
whitenoise (for serving static files)
psycopg2 (for postgres database)
To install them all in once just go to your terminal and run:
pip3 install gunicorn dj-database-url whitenoise psycopg2-binary
After the installation has completed, add them to your requirements.txt:
pip3 freeze > requirements.txt
Now you have to create a runtime.txt file on your root directory (just like the Procfile) and add the python version you’re using. To check which python version you're using, run:
python3 --version
The output will be something like this:
Python 3.8.5
Since I’m using Python 3.8.5, to create the runtime file and add the python version I can just run the following command:
echo "python-3.8.5" > runtime.txt
You should also create a .gitignore file and add the SQLite database to it, as well as the virtual environment folder. To do this, run:
echo "db.sqlite3" > .gitignore
echo "env/" >> .gitignore
where env/ is the name of your virtual environment.
Note: The second command should use two arrows (>>) instead of just one (>), because you’re appending to the file. Using a single arrow (>) would replace the content of the file with the new one.
Now you have to configure some of the project files to make the application work on the server. First, go to the settings.py file and do the following:
To allow whitenoise to handle the static files, you have to add the following line to the list of INSTALLED_APPS:
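The snippet itself didn’t survive in this copy of the post; here is a minimal sketch of what it likely showed, following WhiteNoise’s documented setup (the optional runserver_nostatic app goes directly above Django’s staticfiles app):

```python
# settings.py (sketch)
INSTALLED_APPS = [
    # ... your other apps ...
    'whitenoise.runserver_nostatic',  # optional: have whitenoise handle static files in development too
    'django.contrib.staticfiles',
]
```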
Add the following line to the ALLOWED_HOSTS list:
You also have to set DEBUG to False:
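These two settings were shown as an embedded snippet in the original; a hedged sketch (the host below is a placeholder for your actual Heroku app domain):

```python
# settings.py (sketch)
DEBUG = False

# Replace 'your-app-name' with the name of your Heroku app
ALLOWED_HOSTS = ['your-app-name.herokuapp.com', 'localhost', '127.0.0.1']
```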
Now you have to add whitenoise to the MIDDLEWARE list, but this part is tricky: the whitenoise middleware should be included right after django.middleware.security.SecurityMiddleware.
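Assuming Django’s default middleware stack, the result likely looks like this; the one firm requirement in WhiteNoise’s documentation is that its middleware sits immediately after SecurityMiddleware:

```python
# settings.py (sketch)
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # must come right after SecurityMiddleware
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
```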
Now you have to add the configuration for the static files. At the bottom of your settings.py, add these lines:
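The static-files lines were embedded in the original post; a common configuration looks like this (BASE_DIR is normally already defined at the top of a generated settings.py and is repeated here only so the snippet stands alone):

```python
# settings.py (sketch)
from pathlib import Path

# Normally already defined at the top of settings.py
BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'staticfiles'

# Lets whitenoise serve compressed files with cache-friendly names
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
```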
Last but not least, you have to configure the Postgres database.
Right after the DATABASES dictionary, add the following lines:
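The exact lines were an embedded snippet; they are almost certainly the standard dj-database-url pattern (the conn_max_age value here is illustrative):

```python
# settings.py sketch; assumes the DATABASES dictionary is defined above
import dj_database_url

# Parse the DATABASE_URL environment variable that Heroku sets for the
# attached Postgres add-on, and override the default database with it
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
```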
Yeah, that was a lot, but now your app is ready. | https://medium.com/geekculture/how-to-deploy-a-django-app-on-heroku-4d696b458272 | ['Gustavo C. Maciel'] | 2021-01-22 16:32:54.721000+00:00 | ['Postgres', 'Django', 'Heroku', 'Programming', 'Cloud Computing'] |
Winter is (technically still coming) here. | November in 2 sentences:
“I can’t believe it’s so dark”
“I’m really tired already”
(+ does anyone have hand cream?)
I have uttered those sentences countless times in the past 3 weeks. Despite experiencing 3 previous European winters, it seems I have not yet acclimatized to this experience. Many Dutch locals have told me to give up on this pipe dream.
An increase in these phrases can be a signal to take a little more care of yourself during this season. SAD (Seasonal Affective Disorder) is reported to affect up to 6% of the population in the UK, and its milder form, the “winter blues”, up to 13%. Many more report significant, although not so disabling, mood changes in the colder months.
So apart from missing Aperols on the terrace, what causes this phenomenon, and how can you act against it?
Demographics
The mean age of presentation of SAD is 27 years, and it tends to happen more frequently in women than men. So, if you are a female in this group, live far away from the equator and have experienced (or have a close family member that has experienced) mood disorders with seasonal changes, you may be at higher risk.
Light
Kind of obvious, yes, but how does the decreasing light have such an impact on your mood? As with most mood disorders, it is chemical. During the summer months, levels of a protein called SERT, which can decrease the activity of Serotonin (the happy hormone), are kept low with the abundance of light. When the days get shorter, SERT becomes more abundant and increases its effect on Serotonin suppression. A study in the Netherlands showed an increase in suicide rates in January, just after the Christmas and winter vacation period.
Melatonin is another hormone affected by winter. Its sleep-inducing effect is triggered by darkness, increasing drowsiness. The sudden changes in these two hormones with time/light differences can impact your circadian rhythm, basically making you feel as if you have transatlantic jet lag.
Vitamin D
Vitamin D is known to affect Serotonin levels, which as mentioned plays a huge role in depression and mental health. Although the direct effects of Vitamin D supplementation on depression are debated, other factors such as musculoskeletal health and bone strength are good reasons to check if your levels are in the optimal range. This is particularly true if your ability for Vitamin D synthesis is further reduced in the winter months, as in the case for those who may be house bound, have darker skin, or have most of their skin covered by choice of clothing.
Treatment
It’s 2019; the time to just “tough it out” is over. So, as the days continue to get shorter, what should you do if you think you may be at risk of SAD this year? Contact your medical professional and have a conversation about diagnosis and treatment options, which include:
Natural sunlight — Sit by a window or try to take a lunch break stroll
Anti-depressant medication
Light Therapy
Vitamin D supplementation; and
Counselling.
Keeping an eye on other factors impacting your mental health, such as a healthy diet, regular exercise and connecting with loved ones, is always necessary, especially in the winter months. SAD symptom awareness, and reaching out to those you think might need a little help are small actions that have a positive impact on someone’s mental health, and life.
It’s very likely if not you, someone you know, will be a little sadder over the coming season. | https://medium.com/@nadialabort/winter-is-technically-still-coming-here-4ba573ffe3d0 | ['Nadia Labort'] | 2019-11-19 08:52:50.088000+00:00 | ['Depression', 'Sad', 'Winter', 'Mental Health', 'Seasonal Depression'] |
Influencer Marketing Infographic 2020 | I couldn’t find an up-to-date infographic on influencer marketing, so I went ahead and created one with a bunch of stats and metrics. I hope you find it useful for your marketing mix consideration.
It’s one of the most effective marketing channels if executed right. It’s cost-effective, since you can re-share the content in almost any other channel, and it delivers a direct impact on your brand awareness and engagement.
If you break down social media engagement, it consists of four elements that build on each other:
1. Validation
2. Sharing
3. Asking & Answering Questions
4. Exploring
So if the ultimate goal is to achieve sales, you first need to boost engagement, which in turn turns users into brand advocates who not only purchase your product, but ideally convince others to buy as well — on your behalf.
So without further ado, I’ll present my infographic below. Please share your thoughts and let me know what’s missing.
Life with Baby Ava: 6 — 7 weeks old | Ava is already 7 weeks old! Where did the time go? We have definitely noticed a difference in her development after 6 weeks, which looks to be on par with many newborns at this stage. As for mom and dad, thankfully we are getting better sleep these days and have established more of a routine. We now also have a better understanding of Ava’s cues and have discovered some tricks to help soothe her.
Here are some of the changes, milestones, and things we’ve experienced over the last two weeks:
1. Ava smiles!
We noticed Ava has started smiling a whole heap more (and not necessarily from having gas like in the previous weeks). And I have to say, there is nothing better than a baby responding to you with a big smile, and we have no doubt been loving seeing her happy baby smile!
See the photo and video below. The video was taken when Ava was 6 weeks old. We captured the precious moment when she gave her big sister Aria a big smile. It was so cute!
Ava smiles at her big sis (6 weeks old)
2. Ava is starting to make sounds and trying to communicate.
One of my favourite times of the day is when Ava is alert, relaxed and responsive. We love to have our time chatting with her. Now she locks eyes with us and stares around very inquisitively. She is able to follow us or an object with her eyes as they move. I wonder what she is thinking about behind those beautiful, blue eyes. We will speak to her, then pause and see her moving her mouth, trying to respond back. And now she will make many different kinds of sounds. (See one of the earlier videos below of Ava making sounds.)
Ava chats with us (6 weeks old)
3. Ava’s neck is getting stronger.
My husband will carry Ava against his chest or shoulder and we can see that her neck is getting a lot stronger as she holds her head up for short periods at a time. She is also holding up her head a little longer and longer, as time goes by. This is a game changer for her. The force is strong in this one.
Ava and her father, Daniel
4. Ava has a favourite side of the bassinet.
We started to notice that Ava has a favourite side of the bassinet where she prefers to sleep. We usually lay Ava down to sleep in the middle of her bassinet. Later we would notice she had wriggled her body over until she was in her usual position, with her head next to the end/side of the bassinet and her body laying diagonal across the bassinet, with feet barely touching the other side. (See photo below.)
We have a couple theories why she does this and likes to sleep here. One, her head is positioned closest to where she can see mommy sleeping. (I like this theory because as the mommy, I too like to sleep where I can have an optimal view of little Ava sleeping.) Another theory is she likes to feel enclosed where she can press against things whilst she sleeps, which mimics what it was like in the womb. Hence her head against the bassinet and her feet also touching a side of the bassinet. My guess would be a combination of these two theories.
5. Baby massages for Ava.
I was curious if babies enjoy massages. In my curiosity I did a little internet research, watched a few quick YouTube videos and purchased some MooGoo lotion. When Ava is relaxed and alert I give her a massage, following some of the YouTube videos I found. She seems to relax, respond and dig it! Especially when massaging her cute little feet. It is something I’d like to do everyday with her. | https://medium.com/@giangstarcreations/life-with-baby-ava-at-6-7-weeks-old-7f96a959664c | [] | 2019-12-29 01:13:37.562000+00:00 | ['Baby Milestones', 'Parenting', 'Newborn', 'Baby'] |
Hospitals can heal America through what and how they buy | We can harness health care’s enormous purchasing power to heal the environmental and social conditions that are making people sick in the first place.
Health care sits at the epicenter of the COVID-19 and climate crises, caring for those who are stricken by the virus or find themselves in the paths of wildfires, hurricanes, heat waves, and drought. Low-income communities and communities of color are suffering from the worst impacts of air pollution, extreme weather events, and the pandemic. Those who live in communities that experience the worst air pollution are more susceptible to extreme respiratory symptoms related to COVID-19. The pandemic has shown us in stark terms that the inequities that plague our nation remain a collective wound that continues to define this country, exacerbating social, economic, and environmental disparities.
As a sector, health care is charged with the health and healing of the communities it serves. This stands in contrast to its heavy reliance on fossil fuel-based systems, which not only contribute to more than 7 million air pollution deaths per year globally but are also among the principal drivers of the climate crisis.
By shifting from fossil fuel utilization to investing in renewable energy, health care would actively address the epidemic of respiratory and cardiovascular disease in our country, support job creation in the clean energy sector as well as reduce the nation’s climate emissions.
In September, to help improve the conditions for health in its communities, Kaiser Permanente became the first health care system in the United States to achieve carbon-neutral status, eliminating Kaiser Permanente’s 800,000-ton annual carbon footprint, the equivalent of taking 175,000 cars off the road. Kaiser Permanente is currently ranked No. 6 in the United States for companies with the most solar energy investment, according to the Solar Energy Industries Association.
Over the last ten years, Kaiser Permanente has embraced renewable energy and worked hard to embed sustainable practices within business operations. While certified by the CarbonNeutral Protocol for Scope 1 emissions (direct emissions from sources it owns or controls) and Scope 2 emissions (emissions attributable to the electricity it consumes), Kaiser Permanente is now turning its attention to expanding its reduction of Scope 3 emissions (emissions from sources it does not directly own or control), including its supply chain.
Solar panels on top of parking structures, Kaiser Permanente La Mesa (Ted Eytan/Flickr)
At the time of Kaiser Permanente’s Carbon neutrality announcement, Chairman and CEO Greg A. Adams said: “As wildfires rage across the Western U.S., we can all see that the health impacts of climate change are not abstract or far in the future — they are here today, and they disproportionately impact the most vulnerable among us. We must recognize, for example, that the pollution that leads to respiratory illnesses and is linked to higher mortality rates from COVID-19 disproportionately impacts Black and low-income communities. In order to create a healthier, more sustainable path forward, we must address the inseparable issues of climate and human health as one.”
Advocate Aurora Health believes clean air saves lives. The health system, located in Illinois and Wisconsin, has been a leader in energy conservation for the past decade and has committed to sourcing 100% of its electricity from renewable sources by 2030. Transitioning to clean energy will reduce carbon dioxide emissions by nearly 400,000 metric tons, equal to removing 84,000 passenger cars from the road each year, while also reducing the pollution particulates that contribute to chronic health conditions, such as asthma. Advocate Aurora Health also leverages its purchasing power to make value-based purchasing decisions in alignment with its commitment to supporting local and diverse businesses and sustainable products and services. It’s not always the lowest cost that wins the day. By taking an ethical approach to decision making whenever possible, the system weighs several factors to optimize the greatest impact on supporting local economic vitality and ensuring environmental conservation and safety, while also being a good steward of its financial resources.
Advocate Sherman Hospital in Elgin, Ill. is adjacent to a geothermal lake that heats and cools the facility, saving on energy costs. (Advocate Aurora Health)
Advocate Aurora Health recently had an opportunity to consider options for sourcing its annual $3.5 million copy paper spend. Sourcing options ranged from 100% virgin paper from a non-diverse owned business — but with the greatest cost savings potential — to a 30% recycled paper sourced from a minority-owned business with fewer cost savings. The system opted for the latter with wins in all categories: environmentally preferable, a local minority-owned business, and bottom-line savings.
“Our system sees the use of ethical guardrails as the best way to ensure accountability for selecting safer, healthier products and increasing diversity in its supply chain,” says Bruce Radcliff, system vice president supply chain of Advocate Aurora Health.
Not all purchasing decisions are as clear cut, but certainly instructive for enabling an intentional approach that is mutually beneficial for society and the health care sector’s multi-billion-dollar supply chain.
Our new guide for sustainable procurement is the first of its kind. Featuring stories and insights from 25 health care organizations around the world, the guide provides step-by-step guidance and tools for hospitals to serve as a roadmap for both new and existing programs.
As community anchors, U.S. hospitals can use their purchasing power to improve the health of their communities. They can help heal systemic racism in America by contracting with more local and minority-owned businesses for sustainable and environmentally preferable products, both reducing their reliance on suppliers overseas while increasing and stimulating job creation.
Health systems have a moral and mission-related responsibility to heal themselves and the communities they serve. Representing almost 20% of the nation’s overall economy, health care is the only sector that is underpinned by an ethical framework. By incorporating this framework into current procurement practices, we can harness health care’s enormous purchasing power to heal the environmental and social conditions that are making people sick in the first place.
The metrics that matter | ‘Working in media organisations is kind of like playing Where’s Waldo’, said Esra Dogramaci, Senior Editor, Digital at DW, during her session ‘How to build digital strategy’ at the International Journalism Festival. According to Dogramaci, the media landscape is so crowded that organisations find it hard to distinguish themselves. They look at their competitors and copy them, meaning everyone suddenly pivots to video, VR, and Facebook live all at once.
Where’s Waldo?
How can media organisations rid themselves of their red-and-white-striped shirts, bobble hats, and glasses and change into something more, well, noticeable?
A big part of the solution is getting to grips with analytics. This means understanding the numbers to understand the audience, leading to more informed decisions about the kind of content that’s produced, and in turn, driving engagement. We caught up with Dogramaci to get some advice on analytics: she told us why she isn’t too worried about Facebook algorithm changes, that vanity metrics work for marketing but not for news, and about the time DW broke records when they teamed up with Twitter.
Insights into analytics from Esra
(In Dogramaci’s words, adapted from an exchange between Dogramaci and the Global Editors Network — edited for clarity and brevity.)
Esra Dogramaci at the International Journalism Festival
Pay attention to dwell time and retention rates
There are a number of great tools available to listen to your audience. There’s Spike by Newswhip, Parse.ly, and Facebook’s Crowdtangle. In addition, all key social platforms — YouTube, Twitter, and Facebook — have their own analytics platforms available (for free).
What I do with these tools is look for patterns: Do we see one month where traffic spiked? If we dig deeper, what was behind that spike? Was it a breaking news story or something else? What format was it — picture, video, text, or an interactive?
I’ll look at long term patterns to see what audiences like and don’t like. The way to tell is by looking at metrics, such as engagement or retention rates.
Things like views, reach, clicks and impressions may look impressive on aggregate but are very superficial. They aren’t actionable metrics — meaning we can’t really use them to feed into editorial or content strategy. Things to pay attention to are dwell time, retention rate and watch time. Look at how your content is consumed and shared.
I remember at the BBC, when YouTube still had annotations on videos (chapters you could overlay), we had a video of Angelina Jolie divided into sections. One was about her latest movie, one was about her then-relationship with Brad Pitt and her family, another was about her double mastectomy, and so on. We were then able to see where in the video viewers were clicking; what they were most interested in. We discovered it was the double mastectomy. The audience wasn’t coming to the video for her being a celebrity, per se, but they were interested in a celebrity having an issue that they could relate to. Running tests like this over and over again reveals patterns of what works and what doesn’t work for your audience.
What we often find is that it is the human interest stories — less about celebrity and more about celebrities handling the same challenges we all face.
Don’t overproduce
Look at what’s working and what’s not working;
Look at what times your audience is active and inactive;
Look at what times of day and what days of the week work best for you.
I was looking at some of the UN Twitter accounts for instance — they generally produce good content, but are publishing so frequently that the audience disengages. If they can decrease quantitatively what they’re doing, they can use that extra time and resource to increase the quality or they can start to invest in other forward looking digital projects. If you’re working in digital, you should always be spending time looking at the next big things. Innovation, as well as the audience, has to be at the heart of everything you do.
Grow your female audience
I started building in gender metrics for the BBC back in 2013 when working on YouTube. I noticed that the female audience across the board was significantly underrepresented, so we made them an unofficial target. By making small changes, we started to see an increase in the female audience.
We started basically by making sure more females were represented in the thumbnails of videos. BBC Azeri did this and in a matter of weeks started to see a shift towards more female viewers. BBC Vietnamese went one step further: Their most successful news product on YouTube was a weekly hangout which broke many records for them. It lifted BBC Vietnamese to number 4 out of 20 BBC language services on YouTube — a big feat considering they didn’t have 24 hour TV to support them with content like Arabic or Persian did. For BBC Vietnamese it wasn’t enough to simply have a female host, but they made sure they had females on their hangout as well, which naturally translated into a shift towards a female audience.
Every broadcaster is interested in the reach and the size of their audience. At the BBC, we went one step further with responsible reach by tracking quarterly what the gender split looked like for each language service, how each service was performing against each other, and then what our gender split looked like overall.
What works in some countries doesn’t work in others
During my work with YouTube at the BBC we discovered that the number one video the Vietnamese audiences were coming for was a weekly Google Hangout on YouTube. This ran for at least 45 minutes and regularly brought in over a million views.
For Turkish audiences, we discovered they enjoyed a weekly cartoon or satire, coverage of big Turkish news events, and also what we could call ‘interesting’ news stories: A video of a whale about to explode on a beach was one of their top videos for months on end!
Spanish audiences enjoyed science and particularly explainers of how things worked, such as what happens to you physiologically when you fall in love. BBC Azeri and BBC Spanish found creative ways to cover events they didn’t have access to: Spanish did a series of how to behave at Copa America (football) when they didn’t have match or footage access. All of this comes about through knowing your audience and that begins with looking at your numbers.
It’s not about copying, but inspiring
I never look at other organisations to copy, but I do look at them for inspiration. I really think print publications are sometimes much further ahead than broadcasters when it comes to digital. I love the digital experience the New York Times gives its audiences on Facebook — particularly its very well produced videos. Look at this video of Simone Biles. The results are phenomenal: it had over 500,000 shares and 57 million views. It broke all the rules of a typical Facebook video — it’s horizontal, works best with sound, and is far longer than the typical bite size 90 second experience.
The Guardian and Financial Times also come to mind — the latter particularly with their interactives, such as the Uber game.
You get a sense that these established papers are working really hard not just to be relevant in the digital space, but coming up with products to bring in a new audience. That’s really smart. The FT also releases its source code on projects upon completion. This means any newsroom could theoretically duplicate it for their local context. It’s not about copying, but inspiring. I’d like to see more of that in the industry.
Don’t fret about algorithm changes
Facebook changes its algorithm every six months or so but the latest one was so specific because it came off the back of News Feed test changes in Cambodia for instance. It was directly affecting News Feed, so there was a lot of concern — even panic. However, I contend there’s really nothing to worry about because the thing for broadcasters and others to do is to focus on their audience. If you are delivering good content and getting good engagement, you won’t be affected by any algorithm change. I’ve written in further detail about that here.
However, I was in Indonesia recently where I met a Cambodian vlogger who had seen a significant drop in traffic because of the Facebook algorithm change. The only explanation she had of this was that Cambodia’s Facebook population was so small that Facebook could afford to ‘test’ there.
This does have consequences. This particular vlogger was outspoken about issues not discussed in mainstream media and had amassed quite a following. In places where you have challenges or restrictions to free speech, independent journalism and voices are particularly important. She also mentioned having yet to obtain an explanation from Facebook either personally or publicly as to why the change happened. Her content fit within Facebook’s rules, yet she’s not been able to take back the success she previously had. Other independents like Nuseir who is behind Nas daily, have seen their numbers go up.
Don’t hate on numbers
‘Less than half of newsrooms consult analytics daily,’ according to an ICFJ Survey, ‘The State of Technology in Global Newsrooms’.
When working in analytics, you can’t have the mindset of a consultant who analyses data, comes up with a prescription, and then leaves. I really think it’s about sitting down with teams, understanding how they work — what are they good at, what efficiencies can be identified, and how can you then slowly bring data into the process. There are a few crucials though:
You absolutely must have someone from senior management who believes in this and supports you. Without leadership and projects to change the culture, even just getting into the mindset of paying closer attention to analytics will be a struggle.
Second, you’ve got to have buy in. If you’re advocating to teams or people who are absolutely, vehemently resistant, then expect no success. Instead, look for those who are curious and willing to accompany you on the analytics journey. Also seek out people who are still on the fence — they can be convinced by numbers.
There are far too many newsrooms who value ‘vanity metrics’ — things like clicks, reach, views, impressions. These numbers are indicative but you can’t do anything with them editorially or for building content strategy. Unfortunately I’ve seen far too many CMS/analytics systems in newsrooms that are either made for marketing or just not useful to news. What then ends up happening is that people are educated incorrectly about the numbers. When someone like me comes along and points this out, there’s a lot of resistance. My numbers are lower, but in the long run more meaningful (interaction, retention rates/dwell times, uniques and so on).
Take for instance the Guardian and other papers — shifting from an advertiser revenue model to subscriber driven model. That is all premised off the back of engagement — of having a relationship with the reader, knowing who they are, what resonates, what doesn’t, and delivering on all of that.
Example of a successful partnership with platform
Twitter’s former News Partner manager, Rob Owers, approached me in 2017, suggesting that we cover the German election night on Twitter. A lot of the credit goes to Rob, because he was an excellent partner manager who made things happen and metaphorically held my hand through the process — especially when it came to technical requirements.
For DW it was an opportunity that couldn’t be missed. I was lucky to have DW’s head of distribution and technical Guido Baumhauer’s immediate support. He saw the possibility of putting DW on the map in a big way.
Partnering with Twitter meant that whoever visited Twitter on German election night, whether logged in or out of Twitter, would see the DW News livestream. Seeing as the feed was with Periscope — a product that was still being developed at the time — I had limited feedback on specific demographics. But we had over 609,000 viewers — not views, but viewers — watching for an average of 10 minutes or more. If you’re familiar with Twitter, you’ll know that never happens. Twitter is a scrolling experience and videos are much shorter — typically no more than two or three minutes. Holding on to an audience for that long on a platform that really isn’t thought about as a video platform literally broke records. Last year Facebook live was all the rage and YouTube already has its place in livestream and as the leader in video. But the experience with Twitter really challenged that and got me thinking that this was an opportunity that publishers maybe overlook and Twitter is keen to invest more into.
Thanks Esra! | https://medium.com/global-editors-network/the-metrics-that-matter-27419d7c860c | ['Freia Nahser'] | 2018-04-27 13:28:15.320000+00:00 | ['Engagement', 'Analytics', 'Metrics', 'Algorithms', 'Journalism'] |
Umoja: Rest in Power Archbishop Desmond Tutu (1931–2021) | Image: https://www.tutulegacy.com/#GallerySection. “…a living embodiment of faith in action, speaking boldly against racism, injustice, corruption, and oppression, not just in apartheid South Africa but wherever in the world he saw wrongdoing. …” Desmond Tutu Foundation (https://www.tutulegacy.com/message).
The quiet death of Archbishop Desmond Tutu this morning (December 26) gives us pause as we close out 2021 — a year that has been filled with health and social pandemics. https://www.cbsnews.com/news/desmond-tutu-dies-age-90-nobel-laureate-anti-apartheid-leader-south-africa/
He will be missed as a champion of social justice and arbiter for truth and reconciliation. No one believed decades ago that South African apartheid would ever be dismantled. Yet, it is gone today. And the honorable Tutu was on the forefront of that battle.
South Africa is still healing, but on a path to recovery, where the Black South African majority are gaining footholds in the country’s social and political capital. There is progress, but change is processual and takes time.
South Africa is still struggling with transition and has a need for visionary leadership, like that of the late Mandela and now his beloved friend Archbishop Desmond Tutu, as it battles greed, lust for power, and the legacy of inequality created by the unequal racialized apartheid system. It is a country still figuring out how to redistribute the country’s wealth and resources held by so few —a white minority — for so long in an equitable fashion.
The United States would do well to look to South Africa as it grapples with a majority white population realizing it is becoming a minority and using every legislative, economic, law enforcement means at its disposal to maintain white cultural, political, and economic superiority — e.g., redlining districts to diminish the power of Black voters, promoting white historical innocence by banning Critical Race Theory and the 1619 Project in schools, sanctioning white supremacist violence when possible, legitimizing white police violence against Black and Brown communities, and not reauthorizing the Civil Rights Acts.
White America would be wise to heed the lesson of white South Africa. No matter how long you may have a yoke on the neck of a people to oppress them, it is only a matter of time before they rise up to demand freedom and liberation.
Because of people like Archbishop Desmond Tutu, who died peacefully today, December 26, 2021, a racial bloodbath was avoided. He helped to guide the transition of power from a white minority to a Black majority with compassion and empathy. He was a kind and wise humanist who will be missed.
May the social justice heirs of this former Nobel Peace Prize laureate and drum major for peace carry forth his legacy.
It is up to us now — who believe in freedom, who believe in equality, who believe in democratic principles — to carry on Tutu’s work. It is up to us to ensure that the change and transformation towards a more equitable world the late Archbishop Desmond Tutu committed his life to continues. Condolences to his family and all who knew him well.
Rest in power. Your lifeforce may have ended, but your legacy continues to inspire.
*December 26 is the first day of Kwanzaa, on which the first principle “umoja” (unity) is celebrated. ( https://thegrio.com/2009/12/25/five-things-you-didnt-know-about-kwanza-but-should/ )
(c) 2021 Irma McClaurin | https://medium.com/@irmamcclaurin/umoja-rest-in-power-archbishop-desmond-tutu-1931-2021-b52dc144d0cd | ['Irma Mcclaurin'] | 2021-12-27 19:59:25.820000+00:00 | ['Desmond Tutu', 'Apartheid', 'South Africa', 'Reconciliation', 'Nobel Peace Prize'] |
Improve Yourself: Think About Death | I think about the inevitability of death every day. This motivates me to make the most of my time on this planet. Researchers at the University of Arizona have quantified my belief that thinking about death can improve your performance.
In the study, a group of participants was asked to think about death before going onto the basketball court to play. The research subjects scored more points than the control group.
Researchers think that humans have a “terror-management” system to keep thoughts of death at bay. So, when you think about death, your terror-management system goes into overdrive, boosting performance and thus self-esteem.
“Terror management theory talks about striving for self-esteem and why we want to accomplish things in our lives and be successful,” said UA psychology doctoral student Uri Lifshin, co-lead investigator of the research. “Everybody has their own thing in which they invest that is their legacy and symbolic immortality.” The reason people don’t live in constant fear of their inevitable death is because they have this system to help them deal with it, Lifshin said. “Your subconscious tries to find ways to defeat death, to make death not a problem, and the solution is self-esteem,” he said. “Self-esteem gives you a feeling that you’re part of something bigger, that you have a chance for immortality, that you have meaning, that you’re not just a sack of meat.”
One group of participants was asked questions such as,
“Please briefly describe the emotions that the thought of your own death arouses in you,” and, “Jot down, as specifically as you can, what you think will happen to you as you physically die and once you are physically dead.”
The other group was asked questions about their basketball performance.
Those asked about death improved their personal performance in the second game by 40 percent, while those asked about basketball saw no change in performance. Those who thought about death also performed 20 percent better as a whole in the second game than those in the other group. Before the questionnaires, the performance of both groups was roughly even.
Researchers believe their results are not sport-specific nor gender-specific.
“This is a potentially untapped way to motivate athletes but also perhaps to motivate people in other realms,” Zestcott said. “Outside of sports, we think that this has implications for a range of different performance-related tasks, like people’s jobs, so we’re excited about the future of this research.”
Once you accept the inevitability of death, I believe, you can use this to motivate yourself and to value your life more.
Do you think about death this way? | https://medium.com/live-your-life-on-purpose/improve-yourself-think-about-death-1643baf85b16 | ['Harry Hoover'] | 2020-02-13 04:01:01.565000+00:00 | ['Self Improvement', 'Performance', 'Personal Growth', 'Personal Development', 'Death'] |
How Today’s Marketers Are the Architects of Consumers’ Choice | Framing Good Default Choices
This may have already happened to you: you have just taken out a new insurance plan, but one year later the plan changes its price. Instead of switching, you don’t bother and keep the same plan.
Economists Samuelson and Zeckhauser have described this human tendency as status quo bias. Individuals prefer to choose and keep the default option rather than make the effort to select other options.
For example, they studied how university professors used their pension plans. They found that more than half never changed the way their contributions were allocated, even though their marital situations may have changed in the meantime.
From the marketer’s point of view, this psychological bias is sometimes used to take advantage of consumers’ weaknesses, as shown by automatically renewed subscriptions that the consumer makes no effort to change. But it can also be used to help consumers make simpler and more effective choices.
For example, in French restaurants, there is always the option of a menu of the day that the customer can choose if they don’t want to think about it. In good restaurants, the chef then serves the customer with the dish that they feel is most appropriate at the time and season. This allows them to discover new things and often even eat better.
In many administrative, banking, and insurance services, there is usually no default choice, or the default choice is not the most optimal for the consumer. The consumer thus gets lost in the range of choices available and, if they don’t have a lot of time, they often make bad choices.
A good idea for the marketer would be to invent a good default choice that the consumer can fall back on without overthinking.
Hiking Trails Remain Busy in the Winter | While the introduction of the winter season might put a damper on some outdoor activities, there is one in the Hudson Valley that remains unaffected- hiking.
A visit to the Dutchess Tourism website will yield results about hot air balloon tours, activities on the Hudson river, and visits to local parks and gardens, all of which are unavailable in the colder months. However, an abundance of hiking trails remain as one of Dutchess County’s attractions year-round. Mary Kay Vrba, President of Dutchess Tourism, says that she is happy that the trails stay relatively busy all year long. “This is such a great place to be outdoors, and such a unique place to hike, it would be a shame if they weren’t being used for half of the year,” Vrba says.
View from one of the Schunnemunk Mountain peaks
From expert to novice, Dutchess County has numerous peaks for hikers of all skill levels to summit. In Dutchess alone, there are 21 unique trails up various mountains, all with unique attractions. One of the most popular local trails spans across 30 miles and 4,000 acres of the Appalachian Trail in Pawling. Christian McDermott, a Poughkeepsie local and avid hiker, says he loves being outside this time of year. “It’s hard to believe, but the area almost gets prettier with snow,” McDermott says. Even though the hiking can sometimes be more difficult and more dangerous due to adverse conditions, it is also a great experience. “It is like hiking a different trail, you look out for different things,” says McDermott.
Challenging rock face on the Schunnemunk Mountain trail
Not only is the hiking better, but winter weather also introduces new activities to some of the mountain trails- snowshoeing and cross country skiing. While not available at all 21 locations in Dutchess county, snowshoeing and skiing have also become staples in the local community, and continue to be encouraged. “We just want people outside looking at the beautiful Hudson Valley, however they please,” says Vrba.
Dutchess Tourism is a local function of I Love New York, a statewide promotion for all kinds of recreation. Dutchess Tourism’s vision statement is to become the premier destination choice of travel in the Hudson Valley and one of I Love New York’s targeted areas across the whole state. With activities limited in the winter, promoting local hiking, skiing, and snowshoeing becomes more important in developing the tourist scene in the coming months. For more information about local hikes and other attractions in Dutchess County, visit dutchesstourism.com, or for statewide attractions visit iloveny.com. | https://medium.com/@chrisrechen/hiking-trails-remain-busy-in-the-winter-79a06862e896 | ['Chris Rechen'] | 2019-12-01 18:06:26.682000+00:00 | ['Hiking'] |
Needs for New Moms | Being a mother to a newborn is hard. Its also hard to do alone with little knowledge. I just made this article for you new moms out there. I will talk about things that helped me, and stuff that will help you.
The first thing that helped me was a potty training system. The system tells you secrets and tricks to get your child out of diapers. Other moms and I have seen A LOT of results in less than 5 days with this potty training. It is very easy to understand and learn. This is one of the most helpful things for me and my kid. If you are interested, they are having a sale right now. Here is the link if you are interested https://www.digistore24.com/redir/341870/Avi123/
The second thing that helped me was a baby sleep program. This is so helpful for new moms: you will be able to sleep and not have to wake up in the middle of the night because of the baby. This program helped me put my baby to sleep and helped me sleep too. It is amazing. This is what you will learn:

So far completely unpublished methods, techniques and specific processes which you simply have to follow step by step

7 mistakes parents often make unconsciously but which prevent babies and infants from falling asleep properly

This is of course not the old (but well-known) “just let the baby cry” method!

Suitable for all children aged 0 to 36 months

5 handy night routines which will condition your baby to quickly fall asleep (and stay asleep throughout the night!)

Does your baby suffer sometimes from painful muscle tensions? I will show you how to get rid of these tensions…

Practical and tested strategies against common problems such as hiccups

What to do when your baby starts to cry excessively and how you can easily calm down your little one in no time (even if you are on the road)

Each and every child is unique: during my work with the sleep expert, we developed various plans of action which allow you to choose an individual method and approach that is right for your child

And of course many more helpful tips…
Here is the link if you are interested https://www.digistore24.com/redir/110769/Avi123/
Affiliate links* | https://medium.com/@rin24240/needs-for-new-moms-ddb32435f2d2 | ['Rin Leen'] | 2021-03-14 16:59:17.491000+00:00 | ['Baby', 'Moms'] |
iOS CI/CD with GitHub Actions | For many years, we had our iOS Continuous Integration system running on Jenkins with a unique MacOS slave as a physical machine.
It worked pretty well overall but we started to encounter scalability issues as the team grew. iOS developers complained about the CI being slow and we knew having a single Xcode instance available at a time could be an issue.
In August 2020, we were preparing our tooling migration to get ready for iOS 14 when we realized our physical MacOS machine was not matching the minimum technical requirements to install the latest Xcode version.
We needed to find an alternative solution…
Here comes Github Actions into play.
In this article we will present 2 main use cases to show you what an iOS Continuous Integration setup could look like on GitHub Actions.
Prerequisites
At TheFork we mainly rely on Fastlane tools to implement the different steps of our CI/CD pipeline.
We have 3 main lanes declared in our Fastfile:
test: using fastlane scan

deploy_to_firebase: relying on gym and firebase_app_distribution

sync_certs: relying on match, to download the provisioning profiles and certificates in order to deploy the application
We also use Bundler to keep track of the gems versions we use ( including the Fastlane one ).
In the next sections we will consider we already have those lanes ready to be used.
Use case #1: Run tests on Pull Request
We started with the migration of our Pull Request job. Basically, whenever an iOS developer opens or modifies a pull request, we want to make sure no code regressions have been introduced, by running unit tests.
Let’s see what the workflow definition looks like:
name: Pull request

on:
  pull_request:
    branches:
      - develop

jobs:
  test:
    runs-on: macos-10.15
    timeout-minutes: 15
    steps:
      - name: Cancel previous jobs
        uses: styfle/[email protected]
        with:
          access_token: ${{ github.token }}
      - name: Git - Checkout
        uses: actions/[email protected]
        with:
          ref: ${{ github.ref }}
      - name: Setup - Xcode
        run: sudo xcode-select -s /Applications/Xcode_12.3.app
      - name: Setup - Ruby and bundler dependencies
        uses: ruby/[email protected]
        with:
          ruby-version: 2.4.0
          bundler-cache: true
      - name: Test - Fastlane tests
        run: bundle exec fastlane test
We start with the trigger definition: this job is launched on pull request events, but only for pull requests targeting the develop branch.

We continue with the jobs listing and declare a single job named “test”. We specify the OS version we want (macOS 10.15) along with a timeout of 15 minutes.
This job is composed of 5 steps:
“Cancel previous jobs”: We start by cancelling previous jobs on the same branch using this action https://github.com/styfle/cancel-workflow-action
“Git — Checkout”: We checkout the branch content
“Setup — Xcode”: We select the Xcode version we want the tests to run on. See here for the full list of Xcode versions available on MacOS 10.15.
“Setup — Ruby and bundler dependencies”: We install ruby, download gems with bundle install and cache the result
“Test — Fastlane tests”: We lastly run our fastlane “test” lane and check for non regressions
Use case #2: Test builds on demand for Product Owners
At TheFork, we create a new Git branch for each feature increment we work on. In order to get our branch merged into the main project (i.e. develop branch), we need it to be reviewed by a minimum of 2 iOS peers but we also need to get approval from the Product Owner.
In order to get this “Product Owner” approval, we need to be able to provide him test builds matching the exact content of the Git branch we are currently working on.
What if we could have a user interface embedded in GitHub allowing us to select our branch and click a button to generate testing builds on the fly?
Say hello to workflow_dispatch :-)
Workflow_dispatch is a manual trigger allowing us to specify “inputs” we can collect from users in a lightweight form automatically generated and available in the GitHub interface.
This trigger definition…
name: Launch Firebase builds ( Product Owner testing )

on:
  workflow_dispatch:
    inputs:
      message:
        description: 'Message to the PO to help him test the feature'
        required: false
        default: 'No message left by the developer to help you test :('
      preprod:
        description: 'Do you want to generate a preprod build? (true/false)'
        required: false
        default: 'false'
      prod:
        description: 'Do you want to generate a prod build? (true/false)'
        required: false
        default: 'false'
…will be rendered as:
…and will be available directly in the GitHub “Actions” section, under the right workflow, after clicking the “Run workflow” button:
In our workflow we ask for 3 inputs:
“Message”: A message to add to the build descriptions on Firebase
2 parameters to determine whether or not we should generate builds connected to production and/or pre-production environments
We can then have access to those inputs parameters and check their values to know if we should run jobs or not:
jobs:
  deploy_preprod_firebase:
    if: ${{ github.event.inputs.preprod == 'true' }}
    runs-on: macos-10.15
    timeout-minutes: 45
    steps:
      - name: Git - Checkout
        uses: actions/[email protected]
        with:
          ref: ${{ github.event.ref }}
      - name: Setup - Xcode
        run: sudo xcode-select -s /Applications/Xcode_12.3.app
      - name: Setup - Ruby and bundler dependencies
        uses: ruby/[email protected]
        with:
          ruby-version: 2.4.0
          bundler-cache: true
      - name: Certificates - Sync ad-hoc
        run: bundle exec fastlane sync_certs type:adhoc
      - name: Firebase App Distribution - Archive and upload preprod app
        run: bundle exec fastlane deploy_to_firebase env:preprod changelog:"${{ github.event.inputs.message }}"

  deploy_prod_firebase:
    if: ${{ github.event.inputs.prod == 'true' }}
    runs-on: macos-10.15
    timeout-minutes: 45
    steps:
      - name: Git - Checkout
        uses: actions/[email protected]
        with:
          ref: ${{ github.event.ref }}
      - name: Setup - Xcode
        run: sudo xcode-select -s /Applications/Xcode_12.3.app
      - name: Setup - Ruby and bundler dependencies
        uses: ruby/[email protected]
        with:
          ruby-version: 2.4.0
          bundler-cache: true
      - name: Certificates - Sync ad-hoc
        run: bundle exec fastlane sync_certs type:adhoc
      - name: Firebase App Distribution - Archive and upload prod app
        run: bundle exec fastlane deploy_to_firebase env:prod changelog:"${{ github.event.inputs.message }}"
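One detail worth noting about the if: conditions in this workflow: workflow_dispatch inputs arrive in ${{ github.event.inputs }} as strings, not booleans, which is why the jobs compare against the literal 'true'. The same gotcha is easy to reproduce in plain Python:

```python
# workflow_dispatch inputs are delivered as strings, never booleans.
prod_input = "false"

print(bool(prod_input))      # True  -- any non-empty string is truthy!
print(prod_input == "true")  # False -- the explicit comparison the workflow relies on
```

So a job guarded by a naive truthiness check would run even when the user answered "false"; the explicit string comparison avoids that.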
Cherry on the cake: GitHub Actions jobs run in parallel by default. If we answer “true” for both prod and preprod build generation, we get both builds available in ~30 min 🎉
Thank you for reading! We hope we gave you a great overview of how an iOS CI could be implemented on Github Actions :-) | https://medium.com/thefork/ios-ci-cd-with-github-actions-e4504228c9d | ['Jeremie Siffre'] | 2021-04-27 15:39:39.514000+00:00 | ['Github Actions', 'Github', 'Continuous Delivery', 'iOS', 'Continuous Integration'] |
RFID with Raspberry PI on the cheap | There are plenty of guides on how to use the RDM6300 reader (based on the EM4100 chip) with Arduino (which is a 5V board), however, I had seen not many addressing the Raspberry PI ecosystem.
This might be due to the two challenges the RDM6300 poses when interfacing with the Raspberry PI: first of all, the Raspberry PI uses 3.3V for signaling. Connecting a 5V signal to the Pi can fry the GPIO pins.
The second issue is that the RDM6300 reader was designed with 5V input in mind: it might be tempting to just feed 3.3V to the reader, thus limiting the output voltage of the pins to 3.3V, but in reality, it does not work properly as the antenna was designed in a way to assume 5V input power for proper power delivery.
Wiring to the Raspberry PI
To get around this the following wiring can be used:
Connecting the RDM6300 to the Raspberry PI
As it is visible we are using the 5V power pin (pin 2) of the Raspberry PI to feed the RDM6300 with power.
The RDM6300 uses the UART Serial protocol, from the wiring perspective connecting such devices can be done with a pair of wires. One wire can be used by the Raspberry PI to send data to the board, the other will be used to receive data from. Now the PI has one built-in Serial port available via the GPIO headers. GPIO 14/15 can be reconfigured to act as a serial port:
Raspberry PI pinout
Wiring a serial connection is quite easy: the PI’s transfer (UART_TXD0) pin needs to be connected to the RDM’s RX pin and the PI’s receive (UART_RXD0) pin needs to be connected to the TX pin on the RDM. Please notice the crossover: the transfer and receive pins are always marked from the board’s point of view.
Pinout of the RDM6300 board
Also, it is visible on the wiring diagram that only the PI’s receive (UART_RXD0 ) pin is connected to the RDM’s transfer pin. This is due to how the reader works: it does not receive any data, it just sends the ID of the card it detects on the serial line.
Of course, we still need to tackle the problem of the 5V output of RDM versus the hard limit of GPIO pins of 3.3V on the PI side. To come around this limitation we are using a voltage divider: pick any two matching resistors (I had used 2 kOhms) and wire them up the same way as shown in the diagram. What this setup does is halves the output voltage of the transfer pin of RDM, making the high level represented as 2.5V.
3.3V systems behave in such a way that a signal between 2.0V and 3.3V is considered a logic high (think of it as a binary 1). This way, using the divider, we land in spec while also preventing our Raspberry PI from being fried.
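The divider arithmetic is easy to double-check. A quick sketch in Python, with the values from the wiring above:

```python
def divider_out(v_in, r_top, r_bottom):
    """Output voltage of a resistive divider: Vout = Vin * R_bottom / (R_top + R_bottom)."""
    return v_in * r_bottom / (r_top + r_bottom)

# Two matching 2 kOhm resistors halve the RDM6300's 5V TX signal.
v_out = divider_out(5.0, 2_000, 2_000)
print(v_out)  # 2.5 -- comfortably inside the Pi's 2.0V..3.3V logic-high window
```

Any two matching resistor values give the same halving; 2 kOhm just keeps the current through the divider small.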
Of course, the Raspberry PI by default does not enable the serial port, so let’s check next how we enabled support for it on the SBC.
Enabling serial/UART on the Raspberry PI
To be able to access the serial port you will need to use the sudo raspi-config command. Please set 5 — Interfacing Options >P6 — Serial: | https://medium.com/@mad-tinkerer-me/rfid-with-raspberry-pi-on-the-cheap-766ae0b6c97e | ['Mad Tinkerer'] | 2021-04-27 06:47:54.361000+00:00 | ['Raspberry Pi', 'Rfid', 'DIY', 'Python3'] |
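Once the serial port is enabled and the Pi rebooted, the UART shows up as a device file (typically /dev/serial0 on recent Raspberry Pi OS releases). What the RDM6300 sends over it is a stream of 14-byte frames: a 0x02 start byte, 10 ASCII hex characters of tag data, 2 ASCII hex characters of checksum (the XOR of the 5 data bytes), and a 0x03 stop byte. Here is a minimal sketch of the decoding logic, kept as pure Python so it can be tried without the hardware (reading from the port itself would typically go through a library such as pyserial):

```python
START, STOP = 0x02, 0x03

def parse_frame(frame: bytes) -> str:
    """Validate one 14-byte RDM6300 frame and return its 10-character tag payload."""
    if len(frame) != 14 or frame[0] != START or frame[-1] != STOP:
        raise ValueError("malformed frame")
    payload = frame[1:11].decode("ascii")  # 10 hex chars: version byte + tag ID
    expected = int(frame[11:13], 16)       # 2 hex chars: XOR checksum
    xor = 0
    for byte in bytes.fromhex(payload):    # XOR the 5 data bytes together
        xor ^= byte
    if xor != expected:
        raise ValueError("checksum mismatch")
    return payload

# A synthetic frame whose checksum (0x7E) matches its payload:
print(parse_frame(b"\x02" + b"0A0068D5C9" + b"7E" + b"\x03"))  # 0A0068D5C9
```

On the Pi itself, the same function would be fed from the serial port (the RDM6300 talks at 9600 baud, 8N1), for example via serial.Serial("/dev/serial0", 9600).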
Miracle in cell no. 7 2019: Review | Miracle in cell no. 7 2019: Review
Miracle in Cell No. 7 is a Turkish movie released in 2019.
“You have something more, not something less. You have a beautiful heart.”
Yedinci Kogustaki Mucize is the story of a mentally-ill father of a six-year-old daughter who was wrongly accused of being a murderer. The entire story revolves around the love between a father and a daughter, as well as the time he spent in jail. The story is so heart-touching that you can never get through the film without shedding tears. I don’t know whether our Hollywood-watching generation will find it worth watching or not, but what I know is that it will help you diffuse your nebulous anxiety or stress.
What I am going to do is to share my few favourite things about this movie. I love the way they presented love in this movie. A quotation in “The Alchemist” says: “There must be a language that doesn’t depend on words”. I believe it is the language of love. Even a mentally-ill person – like the one presented in the movie – can understand this language. Similarly, even a quick-tempered prisoner can feel the magic of pure love. One of my favourite lines of the movie about “Love” is:
"Love is not about killing yourself for someone, it's about living no matter what."
Secondly, human beings have an innate ability to distinguish between right and wrong. We can see the innocence in someone’s eyes but as we grow, we stop listening to our intuition and start following our desires. The film presented the scenario of how even the other criminals and police officers can feel the innocence of the man.
Putting aside everything -sudden character development, not extraordinarily magnificent cinematography, typical drama, etc. – this movie, in my opinion, is a kind of therapy. You diffuse your stress, anxiety, fears, and burden by crying while watching the most emotionally binding scenes. A person who recommended me said, “You can cry openly while watching this movie because it will help you to alleviate the stress of assignments.” Believe me, it was worth watching because sometimes we don’t need an extremely complex story or mega-budget movie for our entertainment. All we need at times is a simple story with simple characters having a simple bond of love. As far as crying is concerned, let me quote dialogue from The Lords of Rings, “I will not say, ‘Do not weep’, for not all tears are an evil.” If you need some kind of therapy for feeling better, give it a try. | https://medium.com/@ahmadnaeem1133/miracle-in-cell-no-7-2019-review-df94c29c5e25 | [] | 2020-12-12 01:19:59.220000+00:00 | ['Movies', 'Movie Review', 'Review', 'Academy Awards', 'Turkey'] |
#AskaConservator: 2020 | From ironing paper and analysing paint samples to pest checks and sewing mounts, our conservators do some special work in the Gallery caring for all of the works of art that enter our care. They specialise in objects, painting, textiles, paper, digital and preventative conservation — to name just a few! They’ve answered some of the questions you posed on 2020 Ask a Conservator Day here.
We’d love to know how our fellow conservators first heard about the profession or became interested in it? What sparked your inner conservator for the first time!?
JOCELYN — Paintings Conservator: I was a big Agatha Christie fan in my early teens. The lead female character in ‘The Pale Horse’ is a smart, courageous, fun paintings conservator — she seemed like an excellent role model at the time.
DEBBIE: My dad was a watchmaker so I guess I was brought up with the notion of repairing and restoring something, but I really didn’t know about art conservation until I studied fine arts and archaeology at university, then one thing lead to another …
Objects Conservation condition checking Vivienne Binns’ ‘Tower of Babel’, 1989, Gift of the artist 2020
A lot of great, early 19th century, artists created work on all sorts of “material” (i.e. Klee), their work is no less impressive and still here. As an artist why should I care about “acid-free” and “archival” materials? Isn’t that YOUR problem?
Conservators spend a lot of time putting preventive measures in place to reduce deterioration of materials. They also restore damages allowing a work of art to look as close as possible to the artist’s original intent. But if the artist choses materials which are inherently unstable, and they know the materials will not last, this is something that is assessed and addressed at the time of acquisition. Although Klee used a lot of non-traditional materials for his time, they have more stability than so many materials available today.
Do you ever get emotional due to the weight of history that you are working with?
JOCELYN — Paintings Conservator: It can be intense when you come to work on something you’ve known since childhood via prints and posters, or studied intently in an art textbook. Not only are there many iconic works in the National Gallery’s collection but there are also the works that come to us on loan for our special exhibitions. About two years ago I checked the condition of Sir John Everett Millais’ Ophelia, 1851–52, as it arrived on loan from Tate — a work that I have known and loved since my early teens. It was hard to believe that the actual physical work was there in front of me, and that for that small period of time it was under our care.
The Paintings Conservation team with Tjungkara Ken, Sandra Ken, Yaritji Young, Freda Brady, Maringka Tunkin, Pitjantjatjara people, ‘Seven Sisters’, 2018, purchased 2020
It is special, although you do get used to working with amazing works of art every day. I do remember on one occasion we had Mark Rothko, Claude Monet and Jackson Pollock’s Blue poles, 1952, in the conservation lab all at the same time. I had walked past them numerous times a day for several days, then as I came down the stairs one day I stopped and took in what was in my view … I did realise this was a special place to work!
Paintings Conservation working on Claude Monet ‘Meules, milieu du jour [Haystacks, midday]’, 1890, purchased 1979
Do you conserve digital pieces? If so, how is this done?
FIONA — Paper Conservator: Digital works of art are acquired in numerous file formats, on a variety of carriers such as discs, hard drives and SD cards and may require specific platforms or playback equipment. We work with external audio visual specialists for migration and duplication of digital media to ensure the integrity of the work is retained. Digital works of art require secure server storage and are migrated or duplicated according to best practice. As technology changes at a rapid pace, it is critical to keep up, but undertake necessary migration before obsolescence of software or hardware occurs.
What is your favourite type of artwork to work on? (sculpture, drawing, oil painting etc.)
SARAH — Objects Conservator: We specialise in different areas — at the National Gallery we have conservators who work just in paper, paintings, textiles and objects. I work in Object Conservation, which is quite broad as it includes everything from the sculptures out in the garden and large installation works, through to decorative arts such as jewellery, ceramics and baskets. I enjoy the diversity of working with objects but perhaps the most challenging — and the area that I like the most — is the more unusual installation works such as the mechanical machines or the assembled parts hanging from the ceiling, which involve lots of problem solving and often getting my hands dirty.
Best way to get into the industry in Australia?
SARAH — Objects Conservator: To be a conservator you need a formal conservation qualification. A lot of recent conservators at the National Gallery completed their Masters degree from the University of Melbourne or undertook study overseas. Conservators have undergraduate degrees from all different fields. I went to art school but when I studied conservation, I studied with people who had done law, materials engineering, archaeology, anthropology, and nursing. You study materials science and chemistry to understand how materials age and become damaged, as well as the practical side of repairing cultural materials and the cultural side so you know how to treat them appropriately. We also have conservation technicians working with us who don’t have conservation degrees but who have fantastic skills in practical fields we might need, such as metal fabrication and making mannequins, and we have framers who work with us too. They generally came to work with us through applying when the job came up after they had developed these skills.
The Objects Conservation team in the Gallery’s conservation lab
I’m about to get my chemistry degree, how can I work in this field?
A chemistry degree is a great foundation for conservation, it is also important to have some knowledge and interest in art or cultural history. If you have your degree, it would be important to then do postgraduate study in conservation. Many cultural institutions take interns, contract and volunteer positions to assist with developing practical skills. Our Head of Conservation would be happy to provide more information.
Do you need to wear gloves? Sometimes I notice you are not. Why is that?
In truth, sometimes we do and sometimes we don’t. It depends on what we are doing. It’s impossible to do a lot of things with gloves on — pop on a pair of rubber gloves next time you have to do some hand sewing — but if you are cleaning silver, where every fingerprint can leave a mark, gloves are a must. When installing, gloves are often worn, and of course they are worn when chemicals are being used, to protect the conservator. It is strange that gloves have become a symbol of the profession when we often work with bare hands. It should be noted that if a conservator is working with bare hands they are washed constantly, and wearing hand lotion is not an option.
What is the most economical way to clean a painting of dust and smoke myself?
DAVID — Paintings Conservator: Unfortunately, there is not one single method when it comes to cleaning off materials which have deposited on a painting. In considering a strategy, we take into account the type of paint, the age and condition of the paint layer, the type of support and the presence of any surface coating, and the nature of the deposit itself. It is possible to cause lasting damage if the wrong cleaning method is used. While many works have sentimental value for the owner, rather than large financial worth, I would always recommend that you consult an accredited conservator for advice just to make sure the problems are dealt with in the safest way for the well-being of the work.
The Textiles Conservation team with DI$COUNT UNIVER$E ‘I am not sorry, I am not for sale, I am not for reproduction’ and ‘The euphoria’, 2018, Gift of the artists 2020
What do you appreciate most in this job?
MICHELLE — Textiles Conservator: I appreciate so many things about being a textile conservator. Being able to examine textiles up close to determine how they are made and the steps we can take to help them have their best life and tell their stories is always a privilege. Often the textiles we work with are made entirely by hand from the fibre processing, to the weaving, and decorating. Some of them are so fine and intricate. It always amazes me.
Textiles Conservation getting ‘Ceremonial cover or woman’s head-covering [phulkari]’, Punjab region, India or Pakistan, mid 20th century, Gift of Claudia Hyles, 2006, ready for display
What is most challenging about this job?
JOCELYN — Paintings Conservator: Maintaining concentration and interest when a treatment goes on for months at a time can be difficult, but it is also hard to call it ‘finished’ at the end and stop re-doing things. At the end of a treatment that has involved re-touching losses or damages to a painting, I always see areas I’ve done that I’d like to do better.
How do you wash watercolour art and prevent bleeding or colour reduction?
FIONA — Paper Conservator: As a general rule the older a work is, the more oxidised and potentially more stable the pigments might be. A modern work will be more susceptible to wet treatments because it is less oxidised, but also because, from the mid-19th century, the artist’s palette became much more complex, with the addition of chemically synthesized pigments. These pigments often contain water sensitive components such as dyestuffs, added to enhance the intensity of the colours. Prior to any wet treatments, works are thoroughly tested to establish the parameters within which safe treatment can occur. Pigments, inks and other media can be specifically identified with a range of examination techniques — such as with ultra-violet or infrared light sources, surface microscopy, XRF and polarised light microscopic analysis. If the media is found to be stable, works can be washed in a variety of ways, for example by floating on the surface of a water bath, capillary washing on blotter or using a vacuum suction table to draw water through. Water treatments can also be applied by isolating areas of media that are sensitive. When paper is first made it is often in an alkaline condition. Depending on what the paper is made from, and the conditions in which it is kept, it may become acidic. The visual indications of this are brown discolouration, spot stains known as ‘foxing’ and brittleness. Washing is used to reverse this degradation and chemically stabilise the paper. | https://medium.com/national-gallery-of-australia/askaconservator-2020-934727154c3f | ['National Gallery Of Australia'] | 2020-12-07 04:11:45.431000+00:00 | ['Art', 'Gallery', 'Art Gallery', 'Conservation', 'Museums'] |
Canna Meets Culture in Review; It’s Harvest Season for a Green Dawning | The first weekend in December, Fully Integrated presented what was coined as a virtual cannabis social conference entitled Canna Meets Culture. For two days attendees were given the opportunity to join a number of discussions centered around key areas about the cannabis industry in relation to black people.
CultureInTheCity.co was present, front-center and taking notes!
The sessions that CITC was ultimately able to home in on were Parenting & Cannabis; Finding Your Path in the Cannabis Business, featuring brand partner Greenish Vibes; and Getting Candid About Cannabis/Open Conversation About Normalization.
Parenting & Cannabis
As a millennial, now a cannabis advocate, who was raised in a household that normalized the use of cannabis, I had to attend the Parenting and Cannabis session. It was important for me, as a mother co-parenting with a household that doesn’t normalize the use of cannabis, to understand ways of overcoming the stigmas and challenges of being a regular cannabis user and mother. And what I took away from this session was deeply validating. Below are my takeaways.
Cannabis is ancient medicine that many women of color, like myself, have begun to look at as a natural holistic remedy for anxiety, depression, focus, temperament and energy. There’s no shame in it.

Normalizing its use in one’s household requires honest, transparent discussions with age-appropriate children. Explaining to children that it’s medicinal and necessary for a peaceful flow of the day, and even for their own peace of mind, is a critical discussion to have. Because if Mommy doesn’t have peace, then no one in the house will.

There are supportive social groups in our community, like Moms and Cannabis Too, that offer additional resources to parents fighting this uphill battle. Utilizing community groups can assist with understanding the plant and its many medicinal uses, along with keeping one abreast of the laws and legislation around the plant as it continues to become a staple in our collective worlds.
Finding Your Path in the Cannabis Business
Serving as an extra push of encouragement to those contemplating where and how to enter the industry, this session delivered on highlighting where blacks can take up space in an industry that was organically and originally made popular by them. This session was extra special because strategic brand partner, Greenish Vibes was on the panel. Here are a few golden nuggets I was able to gather.
Now’s the time to identify how you’d like to take up space in the industry. Blacks have been disproportionately left out of the dynamic growth happening with legal cannabis. Moving quickly and steadily to identify how one will enter, and then work in, the industry is paramount for African Americans.
There are hundreds of different ways that one can get into the business without ever touching the actual plant. Panelists encouraged attendees to take off their blinders and open their eyes to all ancillary cannabis business opportunities. As an ancillary cannabis business, Greenish Vibes founder Danicka Brown-Frazier added value to the panel by mentioning that there always seems to be a rush to grow the plant or open a dispensary. She encouraged those coming into the business to be open minded and find a path as a cannabis-focused attorney, accountant, marketer, influencer or event promoter, mentioning that you can begin from wherever you left off in your corporate position, experience or passion, translating it easily to the cannabis industry. Keep in mind the cannabis plant includes CBD, CBG, THC and a number of other cannabinoids, so the possibilities for growth should be explored in those areas as well. This is especially true because of the advertising, banking and regulatory issues surrounding the industry.

Get intimately involved with supporting our local and federal elected officials. Support those candidates that make federal legislation milestones part of their platform. Advocacy is important to lessen the barriers in the industry and to improve the entry points.
CultureInTheCity.co would like to give Fully Integrated and the entire team behind the Canna Meets Culture event a standing ovation. CITC understands the painstaking details that go into curating spaces and specialized events, and Canna Meets Culture was well organized and executed. The information provided by the panelists allowed those in attendance to walk away with a wealth of practical information and resources that would allow an African American who is self-determined to take up space in some capacity with cannabis.
Ultimately, what this event taught me most is that it will be our collective advocacy, boldness to just do it, and our linking together with other advocates in any way possible that will grow the African American presence in cannabis. So let’s get this going, and I’ll join you: today I’ll reach out to two of the individuals that I came across during the Canna Meets Culture conference. You do the same, or, if you didn’t attend the conference, reach out to lateral contacts taking up space in the cannabis industry.
The U.S. Government Agencies Hacked in Global Cyber Espionage | Hackers, believed to be from Russia, launched a cyberattack into the government computer networks of the U.S. It included the Department of Homeland Security, Defense Department, State Department, National Institutes of Health, Department of Commerce and Treasury.
“The U.S. government is aware of these reports and we are taking all necessary steps to identify and remedy any possible issues related to this situation,” said John Ullyot, spokesman for the National Security Council.
A spokesperson for CISA also confirmed that they have been working closely with the departments affected by the hack.
Hackers are believed to have exploited vulnerabilities in widely used software from SolarWinds. The software company has more than 300,000 customers worldwide. According to the company, malicious code was inserted into software updates released between March and June 2020.
Relating to the incident, federal agencies have been told to disconnect the SolarWinds software, which was manipulated to break into their networks.
“We believe that this vulnerability is the result of a highly-sophisticated, targeted and manual supply chain attack by a nation-state,” said Kevin Thompson, CEO of the software company.
According to several reports, the hackers are believed to be the APT29 hacking group, also known as “the Dukes” or “Cozy Bear.” The group has close ties to Russian intelligence, but SolarWinds has not confirmed the identity of its attackers.
It is also believed in the security community that the hackers used a similar tool to break into other government agencies, and it was reported that the breaches are connected to a broader campaign that also involved the recently disclosed hack of FireEye.
Reportedly, as many as 18,000 SolarWinds customers downloaded the compromised software, potentially giving the hackers access to their networks.
As per FireEye’s investigation, the hackers accessed the firm’s cybersecurity testing tools. FireEye’s chief executive, Kevin Mandia, said that the hackers had used sophisticated new techniques that neither the company nor its partners had ever witnessed in the past.
However, not all companies using the software were affected, but among those that were are high-profile U.S. government agencies. Following the incident, CISA has issued an emergency directive urging all federal agencies to check for any compromised data.
SolarWinds has advised at-risk customers to upgrade to newer versions of the software to mitigate the risk.
Hackers Have Repeatedly Breached U.S. Agencies

This cyberattack is at least the third time Cozy Bear has targeted US agencies in recent years. The first was back in 2014, when the group launched a cyberattack targeting the White House and the Department of State.
It was considered the worst hack ever faced by the US government and it took three months to clean the system.
And in 2015, the group attempted to hack the Pentagon’s email system. The attack affected around 4,000 military and civilian personnel, including high-ranking officials within the organization. It was the same year that the Democratic National Committee (DNC) was hacked, with the attackers obtaining sensitive information.
In 2016, Cozy Bear was claimed to be behind five waves of phishing campaigns against U.S.-based think tanks and NGOs. And in July of this year, the group was accused by the U.S. National Security Agency, the UK’s National Cyber Security Centre, and the Canadian Centre for Cyber Security of targeting COVID-19 research.

The group tried to steal data related to the COVID-19 vaccines and treatments being developed in the US, UK, and Canada. These reports are concerning, since hackers are becoming a major threat to every industry. It is a reminder that no company’s cybersecurity is guaranteed.
Moreover, it has been less than two weeks since the world celebrated International Computer Security Day, a day that reminds every individual and organization to take preventive measures against cyber hacks and threats.
Every organization should follow basic cybersecurity practices to address these issues, which can otherwise cost far more to recover from in days to come. Basic practices include a cybersecurity risk assessment to identify loopholes and prioritize threats, and routine checks of the organization’s IT infrastructure with penetration testing.
Most of the time, all it takes is one small mistake from an employee to put the organization’s cybersecurity at risk. So, organizations should also focus on training employees with tools like ThreatCop to make them aware of the cybersecurity threats happening around the world.
Moreover, it is advisable to implement security measures for the email domain with tools like KDMARC, which ensures that your email domain is secure and protected against domain forgery.
American cats have it pretty good | But African cattle live the good life
This is seriously my cat’s life. It is pretty freaking great. (photo by the author)
Sometimes I look at my cats and think what lucky SOBs they are. They live here, in Virginia, in a huge house with a family that takes care of their every need. They can go in and outside, catch bugs and lizards, stare out the window at the birds, yet never have to worry where their next meal is coming from. They get medical care, grooming, pet sitters when we are out of town, couches to lounge on and an eleven-year-old girl who adores them.
Most cats in much of Africa do not have it so good. While some people do have pet cats, most cats I see in Africa are strays. They hunt for mice, steal scraps when possible, sleep in the alleys. They do not get medical care and usually have noticeable health issues- patches of fur gone, infections, a missing eye or ear. These are scrappy cats who likely live just long enough to have one litter, and just one or two of those kittens live long enough to have one litter. They are more likely to be shooed away than cuddled, and would likely try to scratch your eye out if you did succeed in cuddling them.
At the same time, I look at the cattle in Africa and think what lucky SOBs they are. They live in Africa, in a huge open pasture with a family who takes care of their every need. They can go in and out of their corral, eat grass, stare at the open sky and never have to worry where their next meal is coming from. They get medical care, grooming, herd-boys to watch them, soft patches of dirt to lay in and eleven-year-old boys who adore them.
Very loved cows in South Africa (photo by the author).
The cattle in much of America do not have it so good. While people do have cows they love, in general we don’t honor our cattle. Cattle are a commodity. They are kept in feedlots where they are overfed, under-exercised and sleep standing up. They get unnecessary medical care, like unneeded antibiotics, because of the cramped conditions. Their babies are taken away so we can continue to pump milk out of them. They are forced to be slothful and likely live just long enough to have one calf, then they are put down for slaughter to be turned into hamburgers my kids eat only half of before declaring they are ‘full’. Most have no eleven-year-old who loves them.
What one person deems as important another one might not, or might not in the same way. Sure, we have animal activists who try to push for better conditions for our cattle, but we will never have the same culture of cattle as they do in much of Africa. How an animal or an item comes to have high status in a place is a matter of both culture and economy. While cattle have it pretty good in Africa, unlike American cats they are not pets. They are wealth. They are livelihoods. They are the bank account of those who own them. Their importance is economic, symbolic, and social. No one in America is seen as more important because of the number of cats they have; in fact, I would argue it is an inverse curve. In large parts of Africa, when a man gets married he has to pay a bridewealth to the bride’s family. Cattle are often the currency used to pay the bridewealth. Even in urban areas where people don’t have cattle, prices are often set in cattle, then paid in cash equivalent. Cats, on the other hand, are to Americans an economic burden that we are willing to take on just to have a cute creature occasionally rub against our legs. There is no trading in cats (although based on what I see on the internet with cat videos I am surprised this isn’t yet a thing).

As I sit here watching my fluff ball cat sleep comfortably on my chair, I think about how different her life would be if she were born elsewhere. How different all of our lives would be if we were born elsewhere. Call it good, bad, or just dumb luck, but this little furry cat curled at my feet has no idea how good she has it.
A Mathematical Approach to Constraining Neural Abstraction and the Mechanisms Needed to Scale to Higher-Order Cognition | Table of Contents
Introduction
Discussion
Role of the brain’s architecture in abstraction and its relation to higher-level cognition
The interplay of structure and functional connectivity in the brain
A mathematical approach to neuroscience
An introduction to graph theories
Mathematical approach to quantifying how structural brain connectivity leads to functional brain connectivity
Other work in network connectivity analysis
The brain as an eigenproblem
Constraining abstraction across a single brain region
Constraining abstraction across a brain-wide network
Levels of dimensionality of network cluster analysis
Attractor dynamics and the formation of stable and learned clusters, schemas, and models
Applying graph theory in a non-linear domain
Conclusions
Future Work
Bibliography
Introduction
It can be agreed upon that the human brain is the best-known example of intelligence. However, not very much is known about the neural processes that allow the brain to make the leap to achieve so much from so little. The brain is an optimized system, capable of learning quickly and dynamically by creating knowledge structures that can be combined, recombined, and applied in new and novel ways. These knowledge structures, or schemas, help organize and interpret the world by efficiently breaking down and encoding information into numerous small blocks or abstract mental structures known as concepts (van Kesteren, Ruiter, Fernandez et al., 2012; Gilboa & Marlatte, 2017; van Kesteren & Meeter, 2020).
Concepts, when combined within a relational memory system, together with processes such as maintenance, gating, reinforcement learning, and memory, form generalized structural knowledge (Whittington, Muller, Mark et al., 2019). A combination of these knowledge structures forms the building blocks that lead to the creation of world models for both model-based and model-free behavior (Chittaro & Ranon, 2004; Kurdi, Gershman, & Banaji, 2019). These well-organized structures, or models, further allow us to quickly process information, make decisions, and generalize to new domains from prior knowledge. New information can be encoded into the system to strengthen or change the schema, while post-encoding, this information can be consolidated, recombined, and retrieved, allowing for behaviors that fit with previous experiences or new behaviors that can be adapted to novel or changing situations (van Kesteren & Meeter, 2020).
Though the importance of schemas and mental abstraction has been recognized in the fields of psychology and neuroscience, very little is known as to how the brain performs this difficult feat. This paper proposes a mathematical approach using graph theory and its subcomponent, spectral graph theory, to hypothesize how to constrain the neural clusters of concepts based on eigen-relationships. This same analysis is further applied to connections between clusters of concepts, the interaction of clusters that leads to structural knowledge and model building, and the interaction between models that results in reasoning. This paper will mainly focus on the functional connectivity of the brain, though a similar network of clusters can be implied at the structural level. Further, by drawing on past work, spectral graph theory will be discussed in light of its role in determining functional connectivity from structural connectivity.
Discussion
Role of the brain’s architecture in abstraction and its relation to higher-level cognition
It has long been believed that cognitive behaviors exist on a hierarchy (Botvinick, 2008; Badre & Nee, 2018; D’Mello, Gabrieli & Nee, 2020). According to Taylor, Burroni & Siegelmann (2015), the lowest levels of the pyramid structure represent inputs to the brain, such as somatosensory and muscular sensation, whereas the highest levels represent consciousness, imagination, reasoning, and other tangible behaviors, such as motor movements like tapping or responses to painful stimuli. Though the progression of stimuli to thought has not been mapped, and limited evidence exists, it has been widely hypothesized that the brain organizes in global gradients of abstraction starting from sensory cortical inputs (Taylor, Burroni & Siegelmann, 2015; Mesulam, 1998; Jones & Powell, 1970).
This organization of information in global gradients and its continuous updating, storage, recombination, and recall can also be thought of as a continuous attractor model (Whittington, Muller, Mark et al., 2019). As the brain takes in information, these attractor states stabilize into common attractor states via error driven learning, with cleaned-up stable representations of the noisy input pattern (O’Reilly, Munakata, Frank et al., 2012). These stable representations employ self-organization learning to build knowledge structures and systems from which behavior can emerge. This allows for inputs at the bottom of the hierarchy to be scaled-up to the pinnacle layers of reasoning, consciousness, and other tangible behaviors of higher-level cognition.
The interplay of structure and functional connectivity in the brain
Structural and functional connectivity are two important divisions that emerge to play different but interacting roles to achieve abstraction and subsequently higher-level cognition within the brain. Structural connectivity, much as the name suggests, is the connectivity that arises in the brain due to structure, such as white matter fiber connectivity between gray matter regions (Hagmann et al., 2008; Iturria-Medina et al., 2008; Gong et al., 2009; Abdelnour, Dayan, Devinsky et al., 2018). Contrarily, functional connectivity is concerned with the structure of relationships between brain regions, and typically does not rely upon assumptions about the underlying biology. It can be thought of as the undirected association and temporal correlations between two or more neurophysiological time series (such as those obtained from fMRI or EEG) (Chang & Glover, 2010; Abdelnour, Dayan, Devinsky et al., 2018).
The interaction between the brain’s structural and functional connectivity is of great interest in neuroscience, but understanding the interplay has proven quite complex. Despite the existence of statistical models showing structural connectivity constrains functional connectivity (Honey et al., 2009; van den Heuvel et al., 2009; Abdelnour, Dayan, Devinsky et al., 2018), a full relationship between structure and function is not well developed.
Mathematical approach to neuroscience
The brain is a complex system, and like other such architectures, necessitates the use of an interdisciplinary approach including mathematics, physics, and computational modeling. There currently exist many invasive techniques to understand the brain in animals. However, due to the sheer complexity and combinatorial dynamics of the brain, many of these techniques do not smoothly transition to human behavior. Due to these limitations, reliance must be placed on computational models and mathematical frameworks that can serve as systems for logical inference and hypothesis testing.
Mathematics especially provides a structure of systematic and logical thought that can be extended out to as many steps in the future as desired. Unlike computers, humans have limited quantitative abilities, and our logic generally does not extend past a finite number of steps. In addition, humans also find it difficult to consider and assimilate a large number of details across time-steps, something that can be easily addressed through equations. This feature is especially important when combining experiments to provide a theoretical framework. Unlike physics, which relies heavily on this method, neuroscience currently lacks sufficient theoretical frameworks to understand the brain and its emergent properties.
An introduction to graph theories
Graph theory is a section of mathematics that studies graphs and other mathematical structures consisting of sets of objects, of which some may be related to each other. These graphs (G) as shown in Figure 1 and Equation 1, are made up of a set of edges (or lines, E) and vertices (or nodes, V). Graph theory studies the way in which these vertices are connected. Spectral graph theory is an extension of this theory and studies the properties of the Laplacian, edge, and adjacency matrices in relation to the graph.
(Eq. 1)
Figure 1: A graph “G”. This graph consists of a set of vertices, or nodes ‘V’ that are connected by edges. ‘E’. For simplicity, a d-regular graph is shown, where each node has the same number of connections or degrees ‘d’, where d = 2. Photo by author.
$$A = \begin{bmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{bmatrix}$$ (Eq. 2)

$$L = \begin{bmatrix} 2 & -1 & 0 & -1 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ -1 & 0 & -1 & 2 \end{bmatrix}$$ (Eq. 3)

$$L = D - A$$ (Eq. 4)

$$L\,\mathbf{v} = \lambda\,\mathbf{v}$$ (Eq. 5)
The adjacency matrix is a binary square matrix that represents which nodes are connected to which others: a 1 indicates an edge between two nodes and a 0 indicates no connection. This can also be thought of as a way of encoding connectivity, and an example is shown in Equation 2. The degree matrix is a diagonal square matrix whose entries count how many connections each node has. As expressed by Equation 4, subtracting the adjacency matrix from the degree matrix yields the Laplacian matrix; an example is shown in Equation 3. The Laplacian is a matrix representation of a graph and, like other matrices, it transforms space and has eigenvectors and eigenvalues. In linear algebra, matrix transformations generally scale and rotate vectors. However, there exist special nonzero vectors, real or complex, called eigenvectors that stay unchanged, or do not get knocked off their span, regardless of the transformation (Equation 5). Eigenvalues correspond to the amounts by which eigenvectors are scaled.
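As a concrete sketch (Python with NumPy; the 4-node cycle is an illustrative choice, consistent with the d = 2 regular graph of Figure 1), the adjacency, degree, and Laplacian matrices can be built directly and the eigenvector relation checked:

```python
import numpy as np

# Adjacency matrix of a 4-node cycle: each node connects to its two
# neighbours, so the graph is d-regular with d = 2 (as in Figure 1).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])

# Degree matrix: diagonal entries count each node's connections.
D = np.diag(A.sum(axis=1))

# Laplacian matrix: L = D - A (Equation 4).
L = D - A

# Eigen decomposition; eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(L)

# Each eigenvector v satisfies L v = lambda v (Equation 5).
v, lam = eigvecs[:, 1], eigvals[1]
print(np.allclose(L @ v, lam * v))   # True
```

For this cycle the Laplacian eigenvalues come out to 0, 2, 2 and 4; the smallest eigenvalue of a graph Laplacian is always 0, with the constant vector as its eigenvector.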
In graph theory, eigenvalues and their corresponding eigenvectors are seen as solutions to optimization problems. Though eigenvalues do not possess enough information to determine graph structure, they are considered a good measure of numerical conductance, or connectivity. According to graph theory, this connectivity can be calculated using the second smallest eigenvalue. On the other hand, spectral graph theory uses a clustering technique to generate a number of clusters in large networks based on the Laplacian matrix and its eigenvectors and eigenvalues. These clusters allow us to divide the very large graphs studied in graph theory into smaller components to develop a better understanding of the problem space.
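To illustrate the second smallest eigenvalue as a connectivity measure (Python with NumPy; the two toy graphs are invented for the example), the algebraic connectivity is positive exactly when the graph is connected and zero when it falls apart into components:

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

# A connected path graph on 4 nodes: 0-1-2-3.
path = np.array([[0., 1., 0., 0.],
                 [1., 0., 1., 0.],
                 [0., 1., 0., 1.],
                 [0., 0., 1., 0.]])

# The same nodes split into two disconnected pairs: 0-1 and 2-3.
split = np.array([[0., 1., 0., 0.],
                  [1., 0., 0., 0.],
                  [0., 0., 0., 1.],
                  [0., 0., 1., 0.]])

print(algebraic_connectivity(path) > 1e-9)        # True: connected
print(abs(algebraic_connectivity(split)) < 1e-9)  # True: disconnected
```

The multiplicity of the eigenvalue 0 equals the number of connected components, which is why the second smallest eigenvalue vanishes as soon as the graph splits in two.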
After computing the Laplacian matrix, a graph can be divided into two components by calculating the Fiedler vector. This is the eigenvector corresponding to the second smallest eigenvalue of the Laplacian matrix of the graph, and that eigenvalue is known as the graph’s algebraic connectivity. This vector has both positive and negative components, based on the graph’s internal connections, and a sum of 0. These positive or negative values allow the graph to be divided into two distinct clusters based on sign. Each of these clusters is well connected internally and sparsely connected externally. This is laid out in Equations 6 to 12 for clarity. The process of dividing the graph into multiple clusters is addressed below.
$$L\,\mathbf{v}_i = \lambda_i \mathbf{v}_i, \qquad 0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n$$ (Eq. 6)

$$\lambda_1 = 0, \qquad \mathbf{v}_1 = \tfrac{1}{\sqrt{n}}(1, 1, \dots, 1)^{T}$$ (Eq. 7)

$$\mathbf{x}^{T} L\,\mathbf{x} = \sum_{(i,j) \in E} (x_i - x_j)^2$$ (Eq. 8)

$$\lambda_2 = \min_{\mathbf{x} \perp \mathbf{v}_1,\; \mathbf{x} \neq 0} \frac{\mathbf{x}^{T} L\,\mathbf{x}}{\mathbf{x}^{T}\mathbf{x}}$$ (Eq. 9)

$$\mathbf{v}_2 = \underset{\mathbf{x} \perp \mathbf{v}_1,\; \|\mathbf{x}\| = 1}{\arg\min}\; \mathbf{x}^{T} L\,\mathbf{x}$$ (Eq. 10)

$$\sum_{i=1}^{n} v_2(i) = 0$$ (Eq. 11)

$$V_{+} = \{\, i : v_2(i) \ge 0 \,\}, \qquad V_{-} = \{\, i : v_2(i) < 0 \,\}$$ (Eq. 12)
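The bisection just described can be sketched numerically as follows (Python with NumPy; the two-triangle "barbell" graph is an invented example, not from the text). The sign pattern of the Fiedler vector recovers the two internally dense, externally sparse clusters:

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by a single bridge edge (2,3):
# each side is well connected internally and sparsely connected externally.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                # Fiedler vector

# Its components sum to (numerically) zero, and the sign of each entry
# assigns the corresponding node to one of the two clusters.
signs = fiedler >= 0
cluster_a = sorted(i for i in range(n) if signs[i])
cluster_b = sorted(i for i in range(n) if not signs[i])
print(cluster_a, cluster_b)  # one triangle per cluster
```

Which triangle lands in which cluster depends on the arbitrary overall sign of the eigenvector returned by the solver, but the partition itself is always the two triangles.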
Mathematical approach to quantifying how structural brain connectivity leads to functional brain connectivity
In Abdelnour, Dayan, Devinsky et al. (2018), the authors create a simple mathematical model based on graph theory to derive a relationship between structural connectivity measured using diffusion tensor imaging and functional connectivity measured from resting state fMRI. Although it is understood that a strong correlation exists between structural and functional connectivity and that functional connectivity is constrained by the structural component (Abdelnour, Dayan, Devinsky et al., 2018; Honey, Sporns, Cammoun, et al., 2009; van den Heuvel, Mandl, Kahn, et al., 2009; Hermundstad, Bassett, Brown, et al., 2013), there is no understanding of the relationship between the two types of connectivity.
Based on the theory that brain oscillations are a linear superposition of eigenmodes (Raj, Cai, Xie, et al., 2020), Abdelnour, Dayan, Devinsky et al. (2018) demonstrate that structural connectivity and resting state functional connectivity are related through a Laplacian eigen structure. These eigen relationships arise naturally from the abstraction of time scaled and complex brain activity input into a simple novel linear model. In this model, a simple fitting procedure uses structural eigenvectors and eigenvalues to predict functional eigenvectors and eigenvalues. This is executed by using the eigen decomposition of the structural graph to predict the relationship between structural and functional connectivity. The linear spectral graph model only incorporated macroscopic structural connectivity without including local dynamics. As indicated in Mišić, Betzel, Nematzadeh, et al. (2015), though local dynamics are not linear or stationary, the emergent long-range behavior can be independent of detailed local dynamics.
This analysis is further used in a closed-form manner, where the eigenvectors are combined to build a model of full functional connectivity that is compared and verified against healthy functional and structural human data, as well as graph diffusion and nonlinear neural simulations. From their analysis, the authors are able to demonstrate that Laplacian eigenvectors can predict functional networks from independent component analysis at a group level. Though a strong relationship was seen at the level of the brain graph eigen-spectra, it was not seen at the node-pair level. However, this could be due to the model being completely analytical and using a single matrix exponentiation for whole brain functional connectivity, compared to using inter-regional couplings weighted by anatomical activity to influence neural node activity.
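The flavor of such a closed-form mapping can be sketched as follows (Python with NumPy/SciPy). Note the synthetic structural matrix, the choice of normalized Laplacian, and the decay parameter beta are all illustrative stand-ins rather than the paper's fitted model: functional connectivity is predicted from a single matrix exponentiation of the structural Laplacian, which is equivalent to keeping the structural eigenvectors and exponentially filtering the eigenvalues.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Synthetic symmetric "structural connectivity" matrix standing in for
# DTI-derived white-matter connection strengths (illustrative only).
n = 8
S = rng.random((n, n))
S = (S + S.T) / 2.0
np.fill_diagonal(S, 0.0)

# Symmetric normalized Laplacian of the structural graph.
d = S.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
L = np.eye(n) - D_inv_sqrt @ S @ D_inv_sqrt

# Eigen decomposition of the structural Laplacian.
lam, U = np.linalg.eigh(L)

# Predicted functional connectivity: the same eigenvectors, with eigenvalues
# filtered by exp(-beta * lambda); beta is a free decay parameter that would
# be fitted to empirical functional data.
beta = 1.5
F_pred = U @ np.diag(np.exp(-beta * lam)) @ U.T

# The eigenmode form is exactly a single matrix exponentiation of L.
print(np.allclose(F_pred, expm(-beta * L)))   # True
```

This equivalence holds because the Laplacian is symmetric, so its eigenvectors form an orthonormal basis; the actual model also involves fitting and offset terms that this sketch omits.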
In summary, the paper captures the success of a graph model in confirming the linearity of brain signals, as well as drawing a relational connection between structure and function. The model further confirms, based on an analysis with human data, that long-range correlational structure exists in the brain through a mechanistic process based on structural connectivity pathways.
Other work in network connectivity analysis
Although brain structure is imperative, functional connectivity is the star of the show. The brain is divided into regions based on specialized functionality; for example, neurons in the primary visual cortex (V1) detect edge orientation (Pellegrino, Vanzella, & Torre, 2004; Lee, Mumford, Romero, et al., 1998). This manner of specialization for cross-modal communication and integration allows the brain to perform abstract reasoning, reinforcement learning, memory maintenance, gating, and recall. All these processes individually and collectively contribute to higher-level cognitive functioning, such as reasoning and conscious awareness, emerging from interactions across distributed functional networks.
In neuroscience, numerous analytical approaches have been used in an attempt to understand properties and make inferences about the brain. Graph theory, and subsequently the computational modeling in this domain, has focused on attempting to quantify specific features of the brain's network architecture. The work has modeled connectivity at the neuron level as well as examined simultaneous interactions among regions in fMRI experiments (Abdelnour, Dayan, Devinsky et al., 2018; Minati, Varotto, D'Incerti, et al., 2013). Although graph theory has been used to study the brain at a variety of levels, no work to date has thought to apply it to understanding how the brain forms concepts and recombines them to form generalized knowledge structures.
The brain as an eigenproblem
Regardless of the task, the brain takes in some manner of environmental input (such as sound, light, or somatosensory signals). This information activates neurons in specific structural areas related to the task, and those inputs further drive the activation of relevant functional areas. As found in Abdelnour, Dayan, Devinsky et al. (2018), structural and resting functional connectivity can be seen as a linear eigenproblem, with functional connectivity arising through a Laplacian eigen structure. Other research has also advanced the idea of understanding the brain as an eigenproblem (Hanson, Gagliardi, & Hanson, 2009). As with any complex system, matrix computations and probabilities may not be the exact computations the brain performs, but they are tools that can be used to make sense of its mechanisms.
The following sections of this paper address the brain as an eigenproblem and use graph and spectral graph theory to make a connection to how the brain may deconstruct input into its smallest abstracted states. The problem will be addressed in the linear domain, both to tie into past work in that domain and to maintain simplicity. A later section addresses how this hypothesis translates to the nonlinear domain.
Constraining abstraction using graph theory across a single brain region
As stated above, inputs to the brain activate certain groups of structural neurons, henceforth referred to as nodes, that in turn activate groups of functional nodes across a variety of brain regions. Across the brain this creates a network structure, where nodes from various regions of the brain connect with one another. Although there is a system level brain-wide network, it is simplest to initially think of activation in a particular brain region, for example, primary visual cortex or V1, as a network in itself. An example of this is shown in Figure 2 a.
Each node in a network has some probability of activating, based upon a threshold it must reach and weighted connectivity with other nodes, as shown in Figure 2 b. When a person is initially unsure how to perform a task, such as identifying an object or performing a motor movement, these nodes will have some random probability of activating based on a random pattern of connection with other nodes. By applying graph theory to this random network, an algebraic connectivity can be calculated from its eigenvectors and eigenvalues, and spectral analysis can generate some initial, essentially random clusters for the graph. However, as the person learns the task and reduces this randomness through dopamine-driven reinforcement learning and relational memory, some nodes develop stronger weights and connections whereas others develop weaker ones. Through this learning, the network starts to develop established weights and connections, allowing a new, more structured connectivity to emerge between nodes. This is discussed further below.
Figure 2: a) The left-hand side of the image showcases the entire brain as a network and the right-hand side shows how this large network can be simplified by focusing on a particular brain region, for example: V1. Photo by author (created using image by Gordon Johnson from Pixabay)
Figure 2: b) An example of nodes with activations (circles) and weighted connections (lines) in a network. Photo by author.
For this single brain region, graph theory can be used to characterize the network. From the network's adjacency matrix, which encodes connectivity, and its degree (edge) matrix, which counts the number of connections per node, the Laplacian matrix is obtained by subtracting the adjacency matrix from the degree matrix. The network's algebraic connectivity is the second smallest eigenvalue of this Laplacian. Spectral graph theory states that a clustering technique can then be applied to the network to generate a number of clusters based on the Laplacian matrix and its eigenvectors and eigenvalues.
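As a minimal sketch of this construction (using an arbitrary toy graph rather than neural data), the Laplacian and algebraic connectivity can be computed directly:

```python
import numpy as np

# Toy adjacency matrix: two triangles joined by a single edge.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])

D = np.diag(A.sum(axis=1))   # degree matrix (connections per node)
L = D - A                    # graph Laplacian

eigvals = np.sort(np.linalg.eigvalsh(L))
# The smallest eigenvalue of a connected graph is 0; the second
# smallest is the algebraic connectivity.
print(round(float(eigvals[1]), 4))
```

A small second eigenvalue signals a weakly connected graph with an easy place to cut, which is exactly what spectral clustering exploits.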
Though traditional methods divide the graph into two clusters, recursive bipartition or clustering on multiple eigenvalues and eigenvectors can be used to divide the graph into K clusters. In recursive bipartition, a graph is first split into two clusters and those two subnetworks are recursively split into smaller and smaller pieces. This can be done until arriving at the smallest informational pieces of the network. In clustering using multiple eigenvalues and eigenvectors, the Laplacian matrix of the graph is computed, its eigenvalue decomposition is taken, and the second, third, fourth, and subsequently ascending smallest eigenvectors are retained. This ensures every node of the graph is described by a small vector of values to which a clustering technique such as K-Means can be applied. However, instead of Euclidean distance, other metrics such as Manhattan or fractional distances may be needed to account for high-dimensional space (Aggarwal, Hinneburg, & Keim, 2001).
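A hedged sketch of the multi-eigenvector approach (toy data, NumPy only; plain Euclidean K-Means is used for brevity, even though other metrics may suit high-dimensional embeddings better):

```python
import numpy as np

def spectral_clusters(A, k, iters=50):
    """K-way spectral clustering: embed nodes with the ascending
    nontrivial Laplacian eigenvectors, then run a minimal K-Means."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    X = vecs[:, 1:k]                      # each node -> short vector
    # Deterministic init: spread centers along the Fiedler direction.
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy graph: three triangles chained together by single weak edges.
blocks = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
A = np.zeros((9, 9), dtype=int)
for a, b, c in blocks:
    A[a, b] = A[b, a] = A[b, c] = A[c, b] = A[a, c] = A[c, a] = 1
A[2, 3] = A[3, 2] = A[5, 6] = A[6, 5] = 1

print(spectral_clusters(A, k=3))
```

On this toy graph the three densely connected triangles, linked only by weak bridges, are recovered as the three clusters.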
It has often been hypothesized that the brain uses some sort of clustering algorithm (Berry & Tkačik, 2020). The clustering technique used could be either of those mentioned above and may even vary depending on the brain area or its structure/function. However, since brain regions depend on lateral and bidirectional dynamics, topological organization, and distributed representations, recursive bipartition seems the more likely technique to be employed by the brain from a computational time-cost standpoint. It also transfers more easily to the nonlinear domain, as discussed later. The clusters that arise from the spectral graph analysis separate the once large interconnected network into small groups of nodes that are well connected internally and sparsely connected externally, as shown in Figure 3 c. This method thus deconstructs the network of complex neural activation in a brain region into the smallest neuron activations, or neuron activations that represent the smallest pieces of information.
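Since recursive bipartition is singled out above, here is a minimal hedged sketch (toy graph, NumPy only): the node set is split at the sign change of the Fiedler vector, and the halves are split again until clusters reach a minimum size:

```python
import numpy as np

def fiedler_split(nodes, A):
    """Split a node set in two by the sign of the Fiedler vector
    of the induced subgraph."""
    sub = A[np.ix_(nodes, nodes)]
    L = np.diag(sub.sum(axis=1)) - sub
    _, vecs = np.linalg.eigh(L)
    signs = vecs[:, 1] >= 0
    left = [n for n, s in zip(nodes, signs) if s]
    right = [n for n, s in zip(nodes, signs) if not s]
    return left, right

def recursive_bipartition(nodes, A, min_size=3):
    """Recursively bipartition until clusters reach a minimum size."""
    if len(nodes) <= min_size:
        return [nodes]
    left, right = fiedler_split(nodes, A)
    if not left or not right:        # no further meaningful split
        return [nodes]
    return (recursive_bipartition(left, A, min_size)
            + recursive_bipartition(right, A, min_size))

# Two triangles joined by one edge split cleanly at the weak link.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
clusters = recursive_bipartition(list(range(6)), A)
print(clusters)
```

The stopping rule (`min_size`) stands in for whatever criterion defines "the smallest informational pieces"; in the hypothesis above that criterion would be set by the biology, not by a fixed constant.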
Figure 3: a) A whole network or graph network whose algebraic connectivity is determined by the second smallest eigenvalue of the graph. Photo by author.
Figure 3: b) The division of a network using spectral graph theory into individual clusters representative of the smallest informational connections between nodes. Photo by author.
Figure 3: c) A visualization of these clusters, that are well connected internally but sparsely connected externally. Photo by author.
Constraining abstraction using graph theory across a brain-wide network
To reiterate so far, graph theory has been used to find network connectivity, and spectral clustering and graph theory have been used to take a network and subsequently divide it into its smallest components. By dividing this network located in a particular brain region into its smallest clusters of neural activation, the hypothesis deduces that these clusters are the smallest neuronal structures of knowledge, defined above as concepts. For a single brain region, these clusters would be well connected internally and sparsely connected with other clusters in that same region. This would allow only certain clusters to become and remain active, as well as be combined and recombined, a dynamic made possible by the brain's large number of inhibitory neurons. This inhibition, which allows only a few neurons or clusters to activate at once, is especially important across brain regions for bidirectional connectivity, the formation of positive feedback loops, and sparse distributed representations.
To apply this analysis to the network of an entire brain, it can be inferred that small clusters of activation would emerge across all task-relevant regions. As an input enters the brain, say, an image of a chair in a categorization task, it would in parallel or in short succession activate a series of relevant brain regions, including those in the visual cortex, thalamus, prefrontal and parietal cortex, basal ganglia, and medial temporal lobe (Seger & Miller, 2010). For this large network, graph theory as indicated above could be utilized to determine connectivity amongst nodes, and spectral graph theory could then be applied to determine clusters that reflect the smallest groups of neural activation or information. These neural clusters, representative of concepts, will have strong internal connectivity and sparse external connectivity within and across brain regions.
Across brain regions, distributed representations allow multiple different ways of categorizing an input to be active at the same time. Successive levels of these representations are a main driving force for the emergence of intelligent behavior. Bidirectional connectivity uses these distributed representations to let clusters across various brain regions work together with other, albeit sparsely connected, clusters to capture the complexity and subtlety needed to encode complex conceptual categories (O'Reilly, Munakata, Frank et al., 2012). Bidirectional connectivity is also essential to mechanisms like attention, and contributes to attractor dynamics and multiple constraint satisfaction, which allow the network to settle into a stable, cleaned-up representation of a noisy input (O'Reilly, Munakata, Frank et al., 2012). In addition to these dynamics, error-driven dopamine reinforcement learning and memory stabilize the weights that develop between concepts across a brain-wide network. Based on weighting, concept clusters can be combined and recombined with other highly weighted clusters to give rise to schemas. Creating schemas across various levels of dimensionality gives rise to generalized knowledge structures and models.
Levels of dimensionality of network cluster analysis
Computational neuroscience studies have shown that the brain employs sparse coding and dimensionality reduction as a ubiquitous coding strategy across brain regions and modalities. Neurons are believed to encode high-dimensional stimuli using sparse and distributed encodings, reducing the dimensionality of complex multimodal stimuli and allowing the metabolically constrained brain to represent the world (Beyeler, Rounds, Carlson, et al., 2017).
The brain is charged with processing, storing, recalling, and representing high-dimensional input stimuli to understand the world. To add to this challenge, the brain is also constrained by metabolic and computational costs and by anatomical bottlenecks, forcing the information stored in neuronal activity to be compressed into orders-of-magnitude smaller populations of downstream neurons (Beyeler, Rounds, Carlson, et al., 2019). A variety of dimensionality reduction techniques have been hypothesized to do this, such as Independent or Principal Component Analysis (PCA), non-negative sparse PCA, non-negative semi-joint PCA, non-negative sparse coding, etc. (Zass & Shashua, 2006). Most of these methods use eigen-decomposition or other matrix-based decomposition techniques to reduce the number of variables required to represent a particular stimulus space.
In neuroscience, the number of variables corresponds to the number of observed neurons. However, as these neurons are not independent of one another and span a variety of underlying networks, dimensionality reduction is used to find a small number of variables that can explain a network's activity. This is accomplished by determining how the firing rates of different neurons covary.
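A brief hedged sketch of the idea, using PCA by eigen-decomposition on synthetic firing rates (the data are random stand-ins generated from three latent signals, not recordings):

```python
import numpy as np

# Hypothetical firing rates: 100 time points from 50 neurons whose
# activity is driven by only 3 latent signals plus a little noise.
rng = np.random.default_rng(1)
latents = rng.standard_normal((100, 3))
mixing = rng.standard_normal((3, 50))
rates = latents @ mixing + 0.05 * rng.standard_normal((100, 50))

# PCA via eigen-decomposition of the neuron-by-neuron covariance.
centered = rates - rates.mean(axis=0)
cov = centered.T @ centered / (len(rates) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # descending order

# Three components should capture nearly all the variance: the 50
# observed variables collapse onto a 3-dimensional description.
explained = eigvals[:3].sum() / eigvals.sum()
print(round(float(explained), 3))
```

The point of the toy is that although 50 neurons were "recorded", three variables explain essentially all of the network's activity, which is the sense in which dimensionality reduction summarizes covarying firing rates.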
Manifolds are used to study varying degrees of dimensionality within the brain. A manifold is a topological space that allows real-world inputs, such as images, sounds, neural activity, or somatosensory sensations, to lie in a low-dimensional space while being embedded in a high-dimensional one. In neuroscience, it is believed that neural function is based on the activity of specific populations of neurons. These manifolds, obtained by applying dimensionality reduction techniques, serve as a surface that captures most of the variance of the neural data and represents neuron activity as a weighted combination of its time-dependent activation (Luxem, 2019). For example, in Figure 4, the pink point can be thought of as a representation of sad and the green point as a representation of happy in the manifold of facial expressions.
Figure 4: An example of a manifold. The manifold is the topological space in blue that lies in a low dimension while being embedded in a high dimensional space. This manifold is representative of human facial expressions, with the pink point being a representation of sad and the green point being representative of happy. Photo by author.
It is well accepted that the brain operates on a hierarchy, slowly adding components from increasing levels of dimensionality to arrive at higher level representations. As stated above, spectral graph theory breaks down a brain-wide network into its smallest pieces of neuronal information, or concepts, that exist at a reduced level of dimensionality. These concept clusters are combined and recombined with other clusters to yield schemas. This combination of clusters, or schemas, will exist at a higher level of dimensionality than its components and thus exist on a different manifold. Similarly, as schemas are combined and recombined across brain regions and levels of analysis, a hierarchy of generalized knowledge structures of varying degrees of complexity and dimensionality can be attained, such as plans, goals, outcomes, and eventually world models.
Attractor dynamics and the formation of stable and learned clusters, schemas, and models
The previous sections discussed how graph theory can be used to calculate brain-wide network connectivity between nodes and how spectral graph theory can divide this network into its smallest components. A variety of mechanisms, such as distributed representations, bidirectional connectivity, and dimensionality reduction, have also been addressed as important dynamics that may push networks from a random state to form stable clusters, as well as scale representations up to higher levels of cognition. However, the mechanisms that may account for the relational memory system hypothesized to be crucial in reorganizing concepts to form schemas and other higher-level structures of knowledge have not yet been discussed. Below is a list of mechanisms crucial to this system.
Maintained activation — sustained activity has been seen especially in brain areas that relate to high level cognitive processing, such as memory, attention, and control. Additionally, this continuous activation allows for training a variety of cortical areas that can exert their top down influences through bi-directional connectivity (O’Reilly, Munakata, Frank et al., 2012).
Gating, Maintenance, and Working Memory — the Basal Ganglia, including the Internal Globus Pallidus, External Globus Pallidus, Substantia Nigra, Subthalamic Nucleus, and Thalamus, serves as a Go/No-Go system that determines what information is valuable (O'Reilly, Munakata, Frank et al., 2012). In working memory and task control, persistent neural firing is seen in the Basal Ganglia and its corresponding Basal Ganglia-Cortical loops. These loops are believed to guide neural processes as well as maintain and store representations in memory. The frontal cortex is responsible for robust active maintenance, while the Basal Ganglia contributes to selective and dynamic gating that allows frontal cortical memory representations to be rapidly updated in a task-relevant manner (Frank, Loughry, & O'Reilly, 2001). In working memory tasks, the Basal Ganglia gates working memory representations into the Prefrontal Cortex to support executive functioning (Hazy, Frank, & O'Reilly, 2007).
Reinforcement Learning — this mechanism plays a critical role in the formation of network connections and the stabilization of network weights. It is well accepted that the cortico-basal ganglia circuitry and the dorsal and ventral striatum are critical for acquisition and extinction of behavior, and are organized hierarchically in iterating loops (Yin, Ostlund, & Balleine, 2008). Dopamine, a key driver of reward behavior, functions as a global mechanism for synaptic modification (Glimcher, 2011) by comparing the predicted and actual value of the reward received. Another important learning mechanism, XCAL, which addresses self-organizing and error-driven learning, serves as a complementary dynamic (O'Reilly, Munakata, Frank et al., 2012).
Memory — the hippocampus, the center of memory, is known for being very sparsely connected, which allows it to perform pattern completion and one-shot learning (O'Reilly, Munakata, Frank et al., 2012). It is known to promote information encoding into long-term memory by binding, strengthening, and reactivating distributed cortical connections, especially with the ventral medial prefrontal cortex (Van Kesteren, Fernández, Norris et al., 2010). Active maintenance and episodic memory are also helpful in organizing complex knowledge structures (Schraw, 2006; Moscovitch, Cabeza, Winocur et al., 2016).
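As a minimal, hedged illustration of the reward-prediction-error idea above (a tabular TD(0) toy, not a biological model), consider an agent learning state values along a fixed three-state chain where only the final transition is rewarded:

```python
import numpy as np

# Tabular TD(0) toy: three states in a fixed chain s0 -> s1 -> s2,
# with a reward of 1 delivered only on entering the final state.
# The TD error delta plays the role of the dopamine-like
# reward-prediction-error signal described above.
n_states, alpha, gamma = 3, 0.1, 0.9
V = np.zeros(n_states)            # learned state values

for _ in range(200):              # repeated episodes through the chain
    for s in range(n_states - 1):
        r = 1.0 if s + 1 == n_states - 1 else 0.0
        delta = r + gamma * V[s + 1] - V[s]   # prediction error
        V[s] += alpha * delta                 # value update

print(np.round(V, 2))
```

Early in learning, delta is large because the reward is surprising; as V converges, the error shrinks toward zero, mirroring how dopamine responses shift from rewards to the cues that predict them.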
Applying graph theory in a non-linear domain
As stated above, this paper addressed the brain as an eigenproblem in linear space to align with past work completed in the linear domain and maintain simplicity. This paper employs graph and spectral graph theory to mathematically represent how the brain’s network deconstructs sensory inputs into small, abstracted states. Connections are also made to how these abstractions can be scaled up by the network to higher level cognition by addressing the mechanisms that are most likely key driving factors.
Though local dynamics in the brain are by no means linear or stationary, emergent long-range behavior has been shown to be independent of detailed local dynamics (Mišić, Betzel, Nematzadeh, et al., 2015; Abdelnour, Dayan, Devinsky et al., 2018). However, this does not apply at the level of neural dynamics, which is believed to operate in a nonlinear domain (Zhang, Li, Rasch et al., 2013; Amsalem, Eyal, Rogozinski et al., 2020). The same graph and spectral graph theory algorithms applied to a linear workspace can be expanded to the nonlinear domain; ample research has been, and continues to be, conducted in this field (Hein & Bühler, 2010; Bühler & Hein, 2009; Letellier, Sendiña-Nadal, & Aguirre, 2018).
To generalize graph theory to dynamic nonlinear networks, the dimensional space, or number of variables of operation, needs to be reduced for practicality. The matrix structures used in the linear domain are converted to Jacobian matrices. The Jacobian, in short, is the matrix required for the conversion of variables from one coordinate system to another. It is built, for a set of equations, from the regions where a nonlinear transformation looks locally linear: using first-order partial derivatives, the matrix collects the changes of the component functions along each coordinate axis. These matrices are then generalized to the remaining nonlinear space. To calculate Jacobian matrices when using graph theory, a graph needs to be constructed in which the nodes are state variables and the links represent only linear dynamical interdependencies. This graph is used to identify the largest connected subgraph in which every node can be reached from every other node (Letellier, Sendiña-Nadal, & Aguirre, 2018). Generalizing standard spectral graph clustering to a nonlinear domain is much simpler: the graph p-Laplacian can be used to convert the problem into a nonlinear eigenproblem.
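To make the Jacobian step concrete, here is a small hedged sketch: the two-node dynamical system below is a toy chosen for illustration, and the Jacobian is estimated numerically with central differences rather than derived analytically:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f at x via central differences."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((len(f(x)), len(x)))
    for j in range(len(x)):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (f(x + step) - f(x - step)) / (2 * eps)
    return J

# Toy nonlinear two-node system; its Jacobian at a point gives the
# local linear picture that graph-based analysis can then operate on.
def dynamics(x):
    return np.array([np.tanh(x[1]) - x[0],
                     np.sin(x[0]) - 0.5 * x[1]])

J = jacobian(dynamics, [0.0, 0.0])
print(np.round(J, 3))  # analytically [[-1, 1], [1, -0.5]] at the origin
```

Each entry of J is a linear interdependency between two state variables at the evaluation point, which is exactly the kind of link the observability graph described above is built from.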
The standard spectral clustering technique, which divides a network into two parts, can be used as a nonlinear operator on graphs; this corresponds to the standard graph p-Laplacian with p = 2. For dividing the network into multiple clusters, p-spectral clustering can be utilized to consecutively split the graph until the desired number of clusters is reached. This can be achieved through established generalized criteria such as the ratio cut, the normalized cut, or even multi-partition versions of the Cheeger cut (Bühler & Hein, 2009). Though sequential splitting of clusters is the more traditional method, a nonlinear K-Means clustering of eigenvectors and eigenvalues can also be used in a nonlinear space.
Conclusions
The brain is an enormous, complicated system, and to date little is known about the neural processes that divide complex real-world information into its smallest components. These smallest components, or concepts, are dynamically combined, recombined, and applied in new and novel ways, and are believed to be integral in allowing the brain to swiftly and flexibly adapt to the world around it. This paper hypothesizes a mathematical methodology for quantifying these concepts. By visualizing the brain as a network graph with neurons as nodes linked by weighted connectivity, graph theory can be used to determine its algebraic connectivity, and spectral graph theory can divide this network into its smallest subcomponents with high internal connectivity and low external connectivity.
The smallest components that a network can be divided into give the smallest clusters of informational storage. These concept clusters exist at a low level of dimensionality and can be combined and recombined with a limited number of external clusters within and across brain regions based on sparse connectivity. This combination of clusters gives rise to schemas, which exist at a higher level of dimensionality. As schemas are further combined and recombined, with an increasing number of clusters across an increasing number of brain regions and dimensional representations, it is possible to attain a hierarchy of generalized knowledge structures with different degrees of complexity and dimensionality.
As with any complex, unlearned system, the brain initially behaves as a random network with random activations and connections. Over time, neurological mechanisms function as driving dynamics that allow stability and weighted connectivity to emerge across brain regions. As shown in Figure 5, there exists a logical grouping to the brain's activity, which allows high-level cognition to emerge from abstract representations.
Structure, be it within or across regions, exists at the lowest level of the hierarchy and is a grouping of brain areas with similar specialized properties. This structure, as indicated above, is a precursor and serves as the source of functional activity. In the functional domain, abstractions are the smallest currency of information. The neurological clusters that form concepts have strong internal connectivity and weak external connectivity, which allows them to be connected, combined, and recombined only with a small number of other clusters within and across the brain. These combinatorics are driven by both functional connectivity and structural properties. Although this paper focuses mostly on clustering in the functional domain, the same clustering can be applied to the structural network. The structural and functional networks would likely work in unison to achieve higher-level cognition.
Figure 5: A diagrammatic grouping of brain activity into three groups. At the lowest level is structure; activity within and across brain regions with specialized structure leads to functional activations. The smallest level of functionality exists in abstractions, and structural, neural, and mechanistic processes allow concepts to be combined and recombined to create higher-order generalized structures of intelligence. At the third level, these higher-level generalized structures are combined with brain-wide high-dimensional dynamics to form representations of increasing complexity and dimensionality. The highest-level representations at this level form models of the world that are critical components leading to human reasoning. Photo by author.
The rule learning components of these clusters are driven by neural factors such as bi-directional connectivity, distributed representations, attractor dynamics and multiple constraint satisfaction, and mechanistic components, such as maintained activation, gating and maintenance, reinforcement learning, and episodic memory as discussed above. These dynamics exist within and across brain regions, and determine how clusters can be combined and recombined. This allows for representations to gain an increasing level of dimensionality as an increasing number of clusters are combined.
Through rule learning, concepts are combined to form schemas, and schemas are combined to form varying degrees of generalized knowledge structures, such as objects, categories, plans, actions, goals, outcomes, and eventually models. The brain then hierarchically groups the highest-dimensional representations of these knowledge structures into specific world models. When called, these model representations form the basis for conscious thoughts, ideas, and predictions that are evaluated during reasoning.
Future Work
To examine the viability of this hypothesis, future work is being conducted using computational modeling.
Bibliography
Abdelnour, F., Dayan, M., Devinsky, O., Thesen, T., & Raj, A. (2018). Functional brain connectivity is predictable from anatomic network’s Laplacian eigen-structure. NeuroImage, 172, 728–739.
Aggarwal, C. C., Hinneburg, A., & Keim, D. A. (2001, January). On the surprising behavior of distance metrics in high dimensional space. In International conference on database theory (pp. 420–434). Springer, Berlin, Heidelberg.
Amsalem, O., Eyal, G., Rogozinski, N., Gevaert, M., Kumbhar, P., Schürmann, F., & Segev, I. (2020). An efficient analytical reduction of detailed nonlinear neuron models. Nature Communications, 11(1), 1–13.
Badre, D., & Nee, D. E. (2018). Frontal cortex and the hierarchical control of behavior. Trends in cognitive sciences, 22(2), 170–188.
Berry, M. J., & Tkačik, G. (2020). Clustering of Neural Activity: A Design Principle for Population Codes. Frontiers in Computational Neuroscience, 14, 20.
Beyeler, M., Rounds, E., Carlson, K. D., Dutt, N., & Krichmar, J. L. (2017). Sparse coding and dimensionality reduction in cortex. BioRxiv, 149880.
Beyeler, M., Rounds, E. L., Carlson, K. D., Dutt, N., & Krichmar, J. L. (2019). Neural correlates of sparse coding and dimensionality reduction. PLoS computational biology, 15(6), e1006908.
Botvinick, M. M. (2008). Hierarchical models of behavior and prefrontal function. Trends in cognitive sciences, 12(5), 201–208.
Bühler, T., & Hein, M. (2009, June). Spectral clustering based on the graph p-Laplacian. In Proceedings of the 26th Annual International Conference on Machine Learning (pp. 81–88).
Chang, C., & Glover, G. H. (2010). Time–frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage, 50(1), 81–98.
Chittaro, L., & Ranon, R. (2004). Hierarchical model-based diagnosis based on structural abstraction. Artificial Intelligence, 155(1–2), 147–182.
D’Mello, A. M., Gabrieli, J. D., & Nee, D. E. (2020). Evidence for hierarchical cognitive control in the human cerebellum. Current Biology.
Frank, M. J., Loughry, B., & O’Reilly, R. C. (2001). Interactions between frontal cortex and basal ganglia in working memory: a computational model. Cognitive, Affective, & Behavioral Neuroscience, 1(2), 137–160.
Gilboa, A., & Marlatte, H. (2017). Neurobiology of schemas and schema-mediated memory. Trends in Cognitive Sciences, 21, 618–631.
Glimcher, P. W. (2011). Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis. Proceedings of the National Academy of Sciences, 108(Supplement 3), 15647–15654.
Gong, G., He, Y., Concha, L., Lebel, C., & Gross, D. W. (2009). Mapping anatomical connectivity patterns of human cerebral cortex using in vivo diffusion tensor imaging tractography. Cerebral Cortex, 19, 524–536.
Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C. J., Wedeen, V. J., & Sporns, O. (2008). Mapping the structural core of human cerebral cortex. PLoS Biology, 6(7), e159.
Hanson, S. J., Gagliardi, A. D., & Hanson, C. (2009). Solving the brain synchrony eigenvalue problem: conservation of temporal dynamics (fMRI) over subjects doing the same task. Journal of computational neuroscience, 27(1), 103–114.
Hazy, T. E., Frank, M. J., & O’Reilly, R. C. (2007). Towards an executive without a homunculus: computational models of the prefrontal cortex/basal ganglia system. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1485), 1601–1613.
Hein, M., & Bühler, T. (2010). An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. In Advances in Neural Information Processing Systems (pp. 847–855).
Hermundstad, A. M., Bassett, D. S., Brown, K. S., Aminoff, E. M., Clewett, D., Freeman, S., … & Grafton, S. T. (2013). Structural foundations of resting-state and task-based functional connectivity in the human brain. Proceedings of the National Academy of Sciences, 110(15), 6169–6174.
Honey, C. J., Sporns, O., Cammoun, L., Gigandet, X., Thiran, J. P., Meuli, R., & Hagmann, P. (2009). Predicting human resting-state functional connectivity from structural connectivity. Proceedings of the National Academy of Sciences, 106(6), 2035–2040.
Iturria-Medina, Y., Sotero, R. C., Canales-Rodríguez, E. J., Alemán-Gómez, Y., & Melie-García, L. (2008). Studying the human brain anatomical network via diffusion-weighted MRI and Graph Theory. Neuroimage, 40(3), 1064–1076.
Jones, E. G., & Powell, T. P. S. (1970). An anatomical study of converging sensory pathways within the cerebral cortex of the monkey. Brain, 93(4), 793–820.
Kurdi, B., Gershman, S. J., & Banaji, M. R. (2019). Model-free and model-based learning processes in the updating of explicit and implicit evaluations. Proceedings of the National Academy of Sciences, 116(13), 6035–6044.
Lee, T. S., Mumford, D., Romero, R., & Lamme, V. A. (1998). The role of the primary visual cortex in higher level vision. Vision research, 38(15–16), 2429–2454.
Letellier, C., Sendiña-Nadal, I., & Aguirre, L. A. (2018). Nonlinear graph-based theory for dynamical network observability. Physical Review E, 98(2), 020303.
Luxem (2019). Manifolds and Neural Activity: An Introduction. https://towardsdatascience.com/manifolds-and-neural-activity-an-introduction-fd7db814d14a
Mesulam, M. M. (1998). From sensation to cognition. Brain: a journal of neurology, 121(6), 1013–1052.
Minati, L., Varotto, G., D’Incerti, L., Panzica, F., & Chan, D. (2013). From brain topography to brain topology: relevance of graph theory to functional neuroscience. Neuroreport, 24(10), 536–543.
Mišić, B., Betzel, R. F., Nematzadeh, A., Goni, J., Griffa, A., Hagmann, P., … & Sporns, O. (2015). Cooperative and competitive spreading dynamics on the human connectome. Neuron, 86(6), 1518–1529.
Moscovitch, M., Cabeza, R., Winocur, G., & Nadel, L. (2016). Episodic memory and beyond: the hippocampus and neocortex in transformation. Annual review of psychology, 67, 105–134.
Munakata, Y., & O’Reilly, R. C. (2003). Developmental and Computational Neuroscience Approaches to Cognition: The Case of Generalization. Cognitive Studies, 10, 76–92.
O’Reilly, R.C, Munakata, Y, Frank, M. J., &, Hazy, T.E (2012), Computational Cognitive Neuroscience.
O’Reilly, R. C., & Munakata, Y. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. MIT Press.
Pellegrino, F. A., Vanzella, W., & Torre, V. (2004). Edge detection revisited. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(3), 1500–1518.
Raj, A., Cai, C., Xie, X., Palacios, E., Owen, J., Mukherjee, P., & Nagarajan, S. (2020). Spectral graph theory of brain oscillations. Human Brain Mapping.
Schraw, G. (2006). Knowledge: Structures and processes. Handbook of educational psychology, 2, 245–260.
Seger, C. A., & Miller, E. K. (2010). Category learning in the brain. Annual review of neuroscience, 33, 203–219.
Taylor, P., Hobbs, J. N., Burroni, J., & Siegelmann, H. T. (2015). The global landscape of cognition: hierarchical aggregation as an organizational principle of human cortical networks and functions. Scientific reports, 5(1), 1–18.
Van Den Heuvel, Martijn P., Mandl, Ren_e C.W., Kahn, Ren_e S., Hulshoff Pol, Hilleke E.,
October 2009. Functionally linked resting-state networks reflect the underlying
structural connectivity architecture of the human brain. Hum. Brain Mapp. 30 (10),
3127–3141.
Van Den Heuvel, M. P., Mandl, R. C., Kahn, R. S., & Hulshoff Pol, H. E. (2009). Functionally linked resting‐state networks reflect the underlying structural connectivity architecture of the human brain. Human brain mapping, 30(10), 3127–3141.
Van Kesteren, M. T., Ruiter, D. J., Fernandez, G. & Henson, R. N. How schema and novelty augment memory formation. Trends Neurosci. 35, 211–219 (2012).
Van Kesteren, M. T. R., & Meeter, M. (2020). How to optimize knowledge construction in the brain. npj Science of Learning, 5(1), 1–7.
Van Kesteren, M. T., Fernández, G., Norris, D. G., & Hermans, E. J. (2010). Persistent schema-dependent hippocampal-neocortical connectivity during memory encoding and postencoding rest in humans. Proceedings of the National Academy of Sciences, 107(16), 7550–7555.
Whittington, J. C., Muller, T. H., Mark, S., Chen, G., Barry, C., Burgess, N., & Behrens, T. E. (2019). The Tolman-Eichenbaum Machine: Unifying space and relational memory through generalisation in the hippocampal formation. BioRxiv, 770495.
Yin, H. H., Ostlund, S. B., & Balleine, B. W. (2008). Reward‐guided learning beyond dopamine in the nucleus accumbens: the integrative functions of cortico‐basal ganglia networks. European Journal of Neuroscience, 28(8), 1437–1448.
Zass, R., & Shashua, A. (2006). Nonnegative sparse PCA. Advances in neural information processing systems, 19, 1561–1568.
Zhang, D., Li, Y., Rasch, M. J., & Wu, S. (2013). Nonlinear multiplicative dendritic integration in neuron and network models. Frontiers in computational neuroscience, 7, 56. | https://towardsdatascience.com/a-mathematical-approach-to-constraining-neural-abstraction-and-the-mechanisms-needed-to-scale-to-265b6ab541 | ['Ananta Nair'] | 2020-12-30 04:18:56.023000+00:00 | ['Neuroscience', 'Mathematics', 'Getting Started', 'Editors Pick', 'Artificial Intelligence'] |
The Loss of a Person Doesn’t Always Mean Their Passing

Sometimes, it’s more painful to lose the ones who are still alive. I’m not making light of death here — of course not. Like you, I’ve lost dear ones who have passed on, and perhaps you, like me, have lost dear ones who are very much alive today.
It’s always hard to tell how it begins, so we can avoid it again in the future. What are the telltale signs that a friendship is about to draw to an end? Is it a glint in the eye of a friend you’ve always known? Is it when they stop replying and you gradually lose touch? Or is it something that happens suddenly — an outburst of anger, a fistfight, a string of harsh words that leads to walking away and never looking back?
I can’t pinpoint it for you. I can’t narrow it down to the exact moment. I can’t fix the day or the hour. In my experience, it came subtly. Like the sun creeping up on a cold morning. One moment, you’re admiring the stars. For a second, you forget where you are as you stare, fixated, and the next thing you know, you are blinded by a flash of light. Only this one is unpleasant, and the colours are dark and ugly.
The Weather Corner: Institutions are taking note of the material risks of climate change and extreme weather. Are we at an inflection point?

ClimateAi · Feb 22 · 4 min read
We’re bringing you exclusive content from our newsletter, The Forecast, right here on Medium. Sign up for our newsletter here. This story is from our feature called the Weather Corner, where we take a deep dive into weird weather around the world, from our January 22nd, 2021, newsletter.
Source: United Nations IPCC
Independent of President Biden, in the last year, the world and its institutions have shown further signs of officially taking note of and planning around the material risks associated with climate change.
Severe weather events disrupt supply chains. In sectors like agriculture, for example, an unusual heat wave during flowering can cut down yields, while frequent storms can prevent farmers from getting out into the field and can disrupt distribution of the product. A research paper from 2017 showed that abnormal weather caused disruptions in the operating and financial performance of 70% of businesses worldwide. And those disruptions add up — estimates placed the cost of weather variability at about $630 billion for the U.S. alone. That’s 3.5% of the total U.S. GDP at the time.
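As a rough sanity check on that 3.5% figure, the arithmetic works out if you take U.S. GDP at around $18 trillion (an assumption on our part, roughly the mid-2010s value; the newsletter does not state which year's GDP the estimate used):

```python
# Rough sanity check: estimated weather-variability cost as a share of U.S. GDP.
# Assumption: U.S. GDP of roughly $18 trillion (mid-2010s figure).
weather_cost_usd = 630e9   # ~$630 billion estimated cost of weather variability
us_gdp_usd = 18e12         # assumed U.S. GDP

share = weather_cost_usd / us_gdp_usd
print(f"{share:.1%}")  # prints 3.5%
```

With a later GDP figure (e.g. ~$19.5 trillion in 2017) the share would land closer to 3.2%, so the 3.5% quoted above is best read as an order-of-magnitude estimate.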
Yet extreme weather events are likely to become more intense or more frequent as a result of human-caused climate change. In the U.S., wildfires have already become five times more frequent in California; the Midwest experiences once-in-100-year floods every five years.
Policymakers are taking notice, and many nations have begun to put in place new climate-related regulations that mandate that publicly traded companies disclose their climate risks. France, New Zealand, and the United Kingdom are chief among them.
These regulations have far-reaching impacts across the economy — on asset managers, lenders, insurers, retirement funds, and publicly traded firms. In addition, analysts say that stricter and more comprehensive legislation on climate risk assessment and disclosure is expected, especially in the U.S., European Union, and Japan.
Some of these regulations on the horizon include:
The U.S.
The U.S. Federal Reserve (the U.S.’ central banking system): Chair Jerome Powell stated in November that “climate risk is a material risk” and noted a likely expansion of the Board’s oversight duties to include climate risks. Incoming Treasury Secretary Janet Yellen has called climate change an “existential threat” and will likely push for new financial regulatory policy on climate risks.
The U.S. Commodity Futures Trading Commission (a government agency that regulates the U.S. derivatives markets): released a report in September that included 53 climate-related recommendations. The top three were a price on carbon; a mandate for disclosure of material climate-related risks in financial filings, including a universal definition of materiality for medium- and long-term risks (both qualitative and quantitative); and for regulators and financial institutions to “pilot climate risk stress testing” for sectors (particularly for agriculture, community and regional banks).
The U.S. Securities and Exchange Commission (independent oversight agency that regulates the securities markets and protects investors): Commissioner Lee said in November that, “there is certainly evidence that climate risks are currently underpriced, particularly with respect to long-dated assets, utilities, commercial mortgage-backed securities, and potentially municipal bonds, among others.”
The State of New York’s Department of Financial Services: announced in October 2020 that they expect climate risk disclosure from all regulated entities and non-regulated depositories starting in 2021, which includes all of the major banks on Wall Street.
Internationally
France: passed a law in 2019 that mandates that publicly-traded firms and asset managers report transition and physical climate risks.
The United Kingdom: signed into law in 2020 a mandate for TCFD-aligned disclosures across the economy by 2025, with a significant portion of mandatory requirements in place by 2023.
New Zealand: will require all banks, asset managers, and insurance companies with more than NZ$1 billion in assets to disclose their climate risks by 2023.
In addition, regulators from Japan, Hong Kong, and the EU are also looking at laws on how companies should monitor, disclose, and address climate risks.
Have any more questions about global weather events, their impact, and how they’re linked to climate change? Send them to [email protected] — we will choose one to answer in the next newsletter. | https://medium.com/@climateai/the-weather-corner-institutions-are-taking-note-of-the-material-risks-of-climate-change-and-987d0f165388 | [] | 2021-02-22 22:10:28.794000+00:00 | ['Climate Change', 'Climate Policy', 'Risk', 'Business Leadership', 'Extreme Weather'] |