The screen time problem

None of us are switching off properly. We have a screen time addiction.
It used to be that people would settle for bingeing on boxsets or dozing off to a film at night, but now many of us struggle to do that without a phone in our hand. The interactivity and instant reward of social media [or even just checking your emails for the 100th time] has given it the edge over most other leisure activities. And it’s all too easy to be doing it on the side. Often, we don’t even notice how much screen time we’re exposing ourselves to.
But it isn't just the quantity of time being spent on phones and tablets. It's the quality of what we're seeing that's having an impact too. Most of us are guilty of comparing our lives to what we see on Instagram once or twice, if not far more often. In fact it's almost harder not to. Who can see yacht-selfies, new houses and expensive outfits without feeling a bit left out? But there comes a point where it's no longer healthy. It's easy to underestimate how readily the internet grants us access to things we don't have. In no other century was it possible to scroll through an endless feed of beautifully curated photos of other people's lives at the touch of a button, in the palm of your hand.
Our obsession with our phones affects each of us differently, from mental health to physical, and often in ways we’re unaware of until there’s a problem.
The algorithm trap
But it's not always as simple as just turning your phone off [does anyone actually do that?]. Social media has been designed to keep you onsite for as long as possible. There's always one more page to get through or another video to watch. With our newsfeeds tailored carefully to suit our interests, a quick 5-minute check-up on Facebook can easily turn into 45 minutes of aimless screen time for no real reason. Making the most of your free time is important, and losing so much of it to your phone just doesn't seem right.
Much of what we see, we’re not even aware of. Scrolling straight past adverts and headlines doesn’t mean that our brains aren’t registering them all on some level. It might seem like we’re immune to what comes up, but it’s all being filed away somewhere; oversaturation is easy to reach. Gloomy news stories and a constant stream of other people appearing to be doing better than you all adds up, whether you were paying much attention or not.
The time we rack up on these platforms takes a toll on our bodies too. A human head weighs around 10–12lbs, and that's roughly the load it places on your spine when you stand up straight. But start hunching over a laptop or staring down at your phone and the effective pressure on your spine can reach up to 60lbs as you droop forward to look at your screen. For people who work at computers daily — already a well-known source of back and neck troubles — coming home and spending a few more hours leaning over devices while relaxing doesn't help much. Eye strain and dry eyes can also be linked to excessive screen time, as we blink up to two-thirds less often than normal when looking at phones or laptops.
Doomscrolling
The news isn't particularly cheerful at the best of times, but it's been like something out of a horror movie this year. And that's just the stuff that's true. It's far too easy to find yourself spiralling into the void of what's come to be known as doomscrolling. Ever found yourself going from one depressing post to another… and then on to a few more? Realised you've just lost a chunk of your time without noticing? Got a sense of impending dread lurking in the background? That's doomscrolling.
As important as it is to keep yourself informed about the world, there's only so much anyone can healthily process. Our brains weren't designed to keep up with the 24-hour world news cycle. We see more in a day than a person living a couple of hundred years ago might have seen in their entire lifetime; that's a LOT to cope with. Staying up to date shouldn't mean completely eroding your mental health.
Pressure to perform
Whether it’s models, actors, influencers or just your own social circle — there’s always someone to compare yourself to. On image-based apps like Instagram, looking good is top priority. But it isn’t always healthy, sustainable or even realistic.
It’s one thing to be sharing that photo of your night out [remember those?] because you had a great time. It’s another thing to be tirelessly striving to make it look like it was the best evening of your life so that other people perceive you a certain way. The pressure to perform, to always be doing the most and the best, is impossible to keep up with. Instant gratification from Likes and Follows can create a reward cycle that isn’t compatible with normal, everyday life. Nobody’s having a great time 24/7, no matter how they present themselves online. There’s no reason to feel like you should be either, especially after such a bizarre year of being indoors.
And between professional Photoshop jobs and readily available editing apps, very little of what we see can be taken at face value. A 2019 study in Florida found that 87% of women and 65% of men compared their appearance to people they saw online. The vast majority reported they were unhappy with themselves in comparison. Trends in cosmetic surgery have even started to come and go, based on celebrity fads. People have always followed current fashion, but there's never been round-the-clock access to it like there is in the age of social media.
Pandemic screen time
With little to do and oh so much time for the socials this year, the balance has been thrown off. Guilt about not keeping busy has only increased as the year's gone on, despite huge restrictions taking many of our choices away from us in one way or another. Influencers still appear to be jetting off to exotic locations, and even just seeing friends in different tiers going out for a drink can be a bit of a downer. The idea that digital and social life should be going on as normal just doesn't work; but the pressure to keep up appearances online is still strong for some. Younger demographics — particularly those falling in Gen Z and millennial brackets — make up the majority of social media users. The urge to show off a bit is natural, but the flip side is that it can leave people feeling inadequate. While it's nothing out of the ordinary for 'Got nothing new to post' and '#throwback' to fill up newsfeeds at this point, there are a lot of people feeling like their self-expression has taken a hit.
Ignoring the irony that you're probably reading this on your phone, here are a few ways to cut down on [or just improve] all that screen time…
Have a clear out
UNFOLLOW: Remember that Marie Kondo quote: ‘Imagine yourself living in a space that contains only things that spark joy’? Treat your socials the same way. If you find yourself compulsively checking up on accounts that bum you out — just unfollow them. It really is that easy. That influencer with 215k followers really isn’t going to take it personally if you stop keeping up with them. Clear out the content that leaves you feeling a bit Meh about yourself.
Pick your vibe: The beauty of your news feed is that you can curate it. When you’ve banished the bad, seek out the good. Think about what makes you happy and make sure you’re following it; whether it’s memes, baby animals or beautifully laid out food posts, stock up on accounts that will make you smile when you’re scrolling.
No more clutter: Clear out all those old apps. Back up your photos. Sort out contacts. Tidy up your emails. Maybe even change your background? Freshening up your phone and making it as quick and easy to get around as possible will make your life that tiny bit easier and improve the quality of your screen time.
Screen time rules
Time it + Time out: Find out how long you spend on your app of choice. There’s a handy feature on Instagram that lets you see how long you spend on the app a day [Head to your profile, tap the three lines on the upper right, hit ‘Your activity -> Time’ and then brace yourself]. Often, we open our phones and browse for a few minutes for no reason — but it all adds up. Set yourself a few windows throughout the day for checking in purposefully and avoid mindless scrolling in-between.
Clock out: For many, your phone is integral to your work. This can mean a very blurred line between the day job and leisure time. Set yourself a deadline for responding to work emails etc and stick to it! Strong boundaries aren’t just good for you, they’re helpful for anyone getting in touch with you. Give yourself back your evening and clock in again when the time is right — no more peeking at your inbox.
Nights off: Turn. It. Off. Or at the very least, set it to Do Not Disturb when you're asleep. Often we'll make excuses about 'what if there's an emergency', but do we really need to hear notifications coming through all night? If you can turn it off or put it outside your room for 8 hours, do! If you're reluctant to be out of contact completely, drop off the wifi and you'll still be ready for any urgent calls without all the other distractions.
Do it for your health
Warm it up: Avoiding the blue glow from devices for a few hours before bed will give you a better night's sleep. But if you're a fan of some reading before bed, or just like a chat in the evenings, make sure you've got a warm filter on your screen to cut down on glaring electronic light. Orange tones are easier on the eyes and allow your brain to get sleepy.
Stand up straight: If we're honest with ourselves, who doesn't have pretty rubbish posture? Get yourself in the habit of straightening up as often as possible and raise your phone up a bit higher. There's no need to be breaking your neck staring towards the floor to check Twitter. Bringing your devices closer to your eyeline can ease out neck pains and help you stop slouching. It's especially worthwhile getting yourself in a comfy, back-friendly position for long spells in front of a computer; you'll thank yourself later.

Source: https://medium.com/@braderievintage/the-screen-time-problem-1dbda457aa49 (published 2020-12-21; tags: Digital, Mental Health, Social Media, Phone Addiction, Wellbeing)
Understanding the Adjunct Crisis

What can governments and higher education institutions do to better support adjunct faculty?
The fundamental shift in the locus of higher education funding and in the structure of higher education bureaucracy has shaped employment patterns in higher education, and lawmakers and campus administrators alike are responsible for driving the neoliberal policies that have elevated "market thinking" over the qualitative value of a higher education. The growing issue of "adjunctification" must be approached as a policy problem and addressed as such.
Actions that state and federal governments can take to promote positive working conditions for adjunct faculty at public colleges and universities include:
· Reinstating the National Study of Postsecondary Faculty, which was decommissioned and defunded in 2004.
· Requiring the U.S. Department of Education to investigate private and public institutions that engage in unfair hiring practices that limit adjunct faculty pay and benefit parity, such as requiring faculty to work a certain number of hours a week to qualify for institutionally-provided health insurance plans and benefits.
· Increasing the level of state funding that public college and universities receive so institutions can afford to improve adjunct faculty working conditions on campus (examples listed below), rather than feel forced to shift expenses from other sectors of the institution if existing budgetary funding is limited.
Furthermore, a report published by the Delphi Project on the Changing Faculty and Student Success, titled Dispelling the Myths: Locating the Resources Needed to Support Non-Tenure-Track Faculty, outlines potential campus policy changes that institutions can undertake to improve the working conditions of adjunct faculty, help them become more effective instructors and academics, and make them feel like valued members of the academic community and their respective campus communities. The outlined campus policy changes are categorized by four cost categories: 1) marginal or no cost ($), 2) some additional expense ($$), 3) moderate increases or reallocation of funding ($$$), and 4) more substantial expense ($$$$). Such policy changes include:
Enhancing existing data collection efforts ($): Direct institutional research offices to reach out to non-tenure track faculty and collect and report data pertaining to their work experiences in order to help institutional leaders identify and make better informed decisions to improve campus policies and practices relating to adjunct faculty working conditions and institutional support.
Ensuring or clarifying protections for academic freedom ($): Institutions may clarify in their faculty handbooks what academic freedom protections adjunct faculty have, as well as determine appropriate procedures for adjunct faculty to file grievances or appeals.
Access to instructional materials, resources, and support services ($): Ensure that adjunct faculty are provided access to basic materials and resources (e.g. textbooks, institutional email addresses, campus ID cards, library privileges, parking, office supplies, and computers and telephones) and are informed what campus support services are available to them.
Access to existing on-campus and off-campus professional development opportunities ($-$$$): Ensure that adjunct faculty are informed of on-campus professional development opportunities and are encouraged to attend by improving outreach efforts through email list distributions and departmental encouragement.
Participation in departmental meetings, curriculum design, and campus life ($-$$): Invite and encourage adjunct faculty to participate in curriculum planning and routine department meetings to ensure adjunct faculty can provide input in curriculum development and are kept aware of important departmental and course-related developments. It is crucial to do so because adjunct faculty make up the majority of faculty teaching introductory and developmental courses, in which students tend to be at higher risk of attrition. Considering that adjunct faculty are often only paid for instruction and are not compensated for institutional service — unlike their tenured counterparts — compensating adjunct faculty for their participation in departmental meetings and curriculum design may help encourage organizational commitment and make adjunct faculty feel like valued members of the academic community.
Participation in college governance ($-$$): Grant adjunct faculty representation on faculty senates and various campus governance bodies such as ad hoc groups, joint faculty-administrative groups, administrative task forces, and campus committees, as well as remove barriers to participation in governance for adjunct faculty. Given the time and commitment often required for college governance, adjunct faculty must also be compensated for their participation in college governance.
Opportunities for faculty mentoring ($-$$): As adjunct faculty may be less familiar with pedagogies and certain teaching strategies, being paired with an experienced faculty member for a certain duration may help them become more effective instructors.
Access to orientation for new hires ($-$$): New faculty orientations often lack specific guidelines or tips to accommodate new adjunct faculty, which may be harmful to effective classroom instruction given the different duties and hiring practices surround adjunct faculty labor. Integrating information specific to adjunct faculty in existing faculty orientations, as well as facilitating separate orientations for adjunct faculty, can help adjunct faculty become better accommodated with the culture of their respective institutions, be better informed of hiring practices, academic freedom norms, and services available to such faculty, as well as make adjunct faculty feel like valued members of the academic and campus community.
Changing hiring practices ($-$$$): Systematically restructuring or formalizing hiring processes for adjunct faculty can help ensure that adjunct faculty are better prepared to teach in the classroom and receive the support and resources they need in a timely fashion, as adjunct faculty are commonly hired within days or weeks of the beginning of the academic term. Such formalization of hiring practices can include letting adjunct faculty know of hiring decisions early. Rehiring practices can also be improved by creating promotion processes for adjunct faculty who have been teaching at their respective institution for a prolonged period of time, as well as giving them priority over external applicants for full-time vacancies.
Extending employment contracts to multi-year terms ($-$$$$): As adjunct faculty are often rehired repeatedly, often over multiple years, it may be more feasible for institutions to move to multi-year contracts rather than term-by-term or annual contracts. This may require long-term institutional planning such as gauging class and program enrollment, as well as the hiring of additional staff and creation of new offices.
Compensation for office hours ($$-$$$$): More often than not, adjunct faculty are required to hold office hours to meet with students outside of classroom instruction hours without compensation, despite the work that adjunct faculty engage in outside of the classroom, such as grading papers and assignments, advising and mentoring students, and preparing instructional materials. Tenured and tenure-track faculty typically receive compensation for performing such duties during their office hours, so adjunct faculty must also be compensated for such labor in order to make them feel valued as instructors as well as boost organizational commitment.
Increasing compensation ($$$-$$$$): The average pay per three-credit-course for adjunct faculty is $2,700, which amounts to just barely $21,000 if one accounts for the typical full instructional workload of eight courses per year. Adjunct faculty often put in just as much work into instruction as their full-time and tenured peers, and often bring valuable experiences to the classroom such as employment in a specialized profession. One of the most commonly cited reasons for job dissatisfaction among adjunct faculty is low pay. To ensure the stability of the academic profession, considering that adjunct faculty now make up a majority of the professoriate, adjunct faculty must be paid at a relatively higher rate that allows them to live comfortably.
Providing benefits ($$$-$$$$): More than half of adjunct faculty report receiving no health, retirement, or paid leave benefits from their institution of employment. This often means that adjunct faculty have to save up and pay out of pocket for such expenses, which can be difficult given the stark pay disparity between adjunct faculty and tenured faculty, the latter of whom are often guaranteed benefits by their institution of employment. As personal well-being affects organizational commitment and job satisfaction, health, retirement, and paid leave benefits must be provided to adjunct faculty to ensure their personal well-being.

Source: https://medium.com/@walifromthebx/understanding-the-adjunct-crisis-aaff0839f307 (by Wali Ullah, published 2020-12-21; tags: Higher Education, University, Adjuncts, Academia)
Dental Implants Are a Long-Lasting Replacement for Missing Teeth

If you have lost teeth to dental disease or trauma, living with gaps or dealing with dentures need not be your only options. Conventional bridges and dentures may help fill the gap, but they're not right for everyone and can lead to discomfort, pain, and poor mouth structure.
Dental implants offer a strong, practical & attractive alternative for missing teeth restoring the look, feel, and health of your mouth.
One may wonder how long these implants will last. The good news is that proper flossing, brushing, and regular dental visits can help your implants last a lifetime and keep your mouth healthy.
Dental implants are being chosen by more and more people to replace missing teeth and are considered the best choice.
Replacing the function, look, and structure of missing teeth, a dental implant consists of a titanium "root" implanted into the jawbone.
A metal abutment and porcelain crown are attached to the implant. The replaced tooth looks, feels, and functions just like your natural tooth.
Dental implants are even better than natural teeth since they can’t develop cavities. Moreover, Dental implants do not affect neighboring healthy teeth like fixed bridges or removable dentures do, or lead to bone loss in the jaw.
Dental implants last a lifetime if proper care is taken.
Parts of a Dental Implant
A better understanding of dental implants can help one to understand why the implant can last for so long. The implant is made up of three main parts:
Titanium implant or Base: The part that the dentist attaches to the jawbone is the base made of titanium, a metal whose impressive biocompatibility makes it non-toxic and harmless to living tissues.
A hole is drilled in the patient’s jaw, followed by placing the base and waiting for the bone to fuse around the base. This makes the implant strong enough to withstand strong force.
Abutment: The abutment is the part that connects the base and the implant.
After the patient's gum has recovered and healed from getting the base, the dentist attaches the abutment to the base.
Crown: The crown is the part of the dental implant that is visible. The crown is attached to the abutment making the dental implant complete.
Made out of a ceramic material, the crown is custom-designed to look like the rest of the teeth in the patient's mouth.
Being the part that is most exposed, it gets the most wear and tear but can still last at least 25 years with the right care and maintenance and is generally less painful to replace than the base.
Nonetheless, the base lasts for even longer than the crown.
Benefits of Dental Implants:
Maintain the Health of Surrounding Teeth
When one receives a bridge, the surrounding healthy teeth have to be filed down to attach it. An implant, by contrast, doesn't threaten the health of the other teeth.
Look Like Natural Teeth
Implants look and feel more natural; often one cannot tell the difference between a natural tooth and an implanted one.
The chewing power is restored by the Dental Implant, and one can brush and floss normally without any additional care.
The confident smile is restored with Implants and you can eat just about anything.
Bone Preservation
Implants prevent the bone resorption that can occur when a natural tooth is lost.
Because dental implants replace the root as well as the tooth, they promote preservation of the bone: the remaining teeth don't slide into the space left by a lost tooth, and the shape of the jaw doesn't change due to bone loss.
Implants Are Long-Lasting
Unlike dentures, bridgework, or other similar restorations, dental implants are a permanent solution to missing teeth.
As long as good care is taken of the teeth and the implant, they can last a lifetime. However, bridges and dentures need to be replaced every 7 to 15 years.
Caring for Dental Implants
The right oral care practices can go a long way to extend the life of the dental implant.
Brushing the implant at least twice a day along with the rest of the teeth using a soft toothbrush is important. Brushing the crown all around and underneath is very vital too.
Flossing at least once a day with an interdental floss that is coated in nylon can help patients remove bits of food from underneath the implant.
A regular visit to the dentist can help keep the implants and mouth healthy.
Role of The Oral Surgeon
Dental implant placement is a surgery best performed by a trained oral surgeon.
The oral and maxillofacial surgeons have specialized education and training in the complexities of the bone, skin, muscles and nerves and can ensure the best possible results.
According to studies, the implant success rates are higher when the procedure is performed by a dental specialist.
At Kirkland Premier Dentistry Dr. Gaurav Sharma offers dental implants to patients who need a successful tooth replacement with lasting function and looks and can help restore smiles.
Dr. Gaurav Sharma has more than 400 hours of continuing education credits for dental implant training, so you can be sure of being in safe hands.
Call for an appointment to learn if you’re a suitable candidate for a dental implant.
If you are suffering from severe tooth pain then you can also contact us for emergency dentistry services.
Content originally published at Kirkland Premier Dentistry | https://medium.com/@kirkland-premier-dentistry/dental-implants-are-a-long-lasting-replacement-for-missing-teeth-fcafbcaf89d7 | ['Kirkland Premier Dentistry'] | 2021-11-22 06:39:20.822000+00:00 | ['Dentistry', 'Dental Care', 'Dentist', 'Dental Implants', 'Dental'] |
On Automation and Parsimony | Across history, and pre-history, homo sapiens were weaker than their environment, and only by using superior cognitive abilities, and not relying just on the physical, they managed to overcome their enemies and adjust the environment to their needs, and them to the environment.
Actually, the move from being hunter-gatherers to agriculture made homo sapiens mainly use, and rely on, physical strength rather than their brain. But today, when technology has almost altogether relieved the need to use physical strength to complete a task, we are required to use our cognitive abilities.
In all cultures, and throughout history, since the agricultural revolution, humans have been divided into classes and professions. The move between classes and professions was hard and even impossible. If one had a father that was a carpenter, then he, most probably, would also become a carpenter, and so on. When a father was a warrior, then his son would be a warrior also. Breaking the chain and escaping the circle of affiliation and profession was very difficult. The difficulty was double-edged, as it was not only about changing a profession but often changing one’s identity. There is a sort of unity between the identity of the self and the profession held.
We have done our best throughout the ages to find ways and technologies that will make physical labor easier and even redundant — whether by the development and utilization of tools and technologies, or by domesticating animals, and even by using and exploiting other human beings as slaves. We are now experiencing the fourth industrial revolution, leveraging AI and robotics to automate tasks and jobs even further.
It is quite apparent that what we really aim for, as humans, is to cancel our need to invest any type of effort to accomplish the task at hand. Energy conservation, even to the point of parsimony, seems to be the leading trait that forwards and advances humanity.
Examples of this trend can be seen in two different domains- agriculture and the military.
With the development of agriculture, humans moved from being hunter-gatherers living hand to mouth, to communities with a more sustainable method of providing food. Early farmers had domesticated wheat, so it wouldn't disperse in the wind, domesticated animals to pull the plough instead of them, and used donkeys and horses to travel instead of walking themselves. Today, with autonomous tractors and sensors spread in the fields, farmers are able to operate, run and manage farms essentially by themselves.
The second example is military and wars. Humans had to fight face-to-face combat, either with other humans or with predators, and over time developed technologies, like the bow and arrow, enabling them to kill from a distance, with minimal effort and risk. Later on, armies started using slaves as fighters, and in the age of imperialism and colonialism even other nations (peoples) fighting for them. In recent years, countries can even hire private armies, like Blackwater, to fight wars and even carry out special operations. Today, armies around the world are employing and deploying more and more autonomous technologies to perform their security operations. The US Chief of Staff stated in June 2017 that by 2025, 50% of the fighting force will be robots.
We can see how technology has been gradually making our lives easier, but what will happen when the majority of physical labor and tasks end?!
Is parsimonious energy conservation really the driving force of the human race throughout history?!
If so, where will it lead us next? Through our superior cognitive abilities, we developed the technologies and culture that made us rely on physical force. And then we again used our cognitive abilities to release ourselves from the need to exert any physical force.
Physical force is less and less required, and today via technology, we are making it completely redundant.
Let's examine the simple example of writing, which started as chiseling letters on clay tablets, moved to writing on papyrus, then fountain pen on paper, printing press, typing machine, computer or mobile phone with a keyboard, then we moved to a touchscreen on our smartphones, from there to speech-to-text, or even just voice messages. And through recent developments in BCI (brain-computer interface) it has even become possible just to think of the words and have them appear on one's screen.
Very soon, when we will really have no use for our physical ability, what abilities will we require? Emotional abilities?
I think that our emotional abilities are probably, and unfortunately, not the answer, because if we examine our history and lives we can see that because of these emotions we made mistakes, went to war, and committed other horrific acts. Even though we know from modern neurobiology that there is a link between good decision making and emotions, with computers and algorithms we are able to reach objective, data-driven decisions that minimize emotions, and in some organizations have made them redundant and irrelevant.
Contained and suppressed emotions are not limited to the workplace, as we can see other social trends that support it. Suicide rates are climbing around the world, depression is on the rise, people marry much later and even stay single for longer periods of time.
Most research and analyst firms claim that human skills will be the most valued in coming years. But which ones? And how does this trend, or forecast, conform with the current digital transformation trend that is enveloping the business world? The entire aim of this transformation is to convert human-to-human interactions into human-to-machine interactions: removing the human interaction from the equation.
We also know, and predict, that the world is going towards technological unemployment, where due to massive automation there will be simply no work for everyone. Research also shows us that changing careers in the current exponential economy is becoming more and more difficult. And if one will do a career move, it will probably be downward.
Maybe this endgame will resemble the reality envisioned in the superb book "Ready Player One" by Ernest Cline, where he foresees a future in which people sit in hi-tech chairs, wearing special suits to handle all their biological needs, while they are strapped in and immersed completely in a VR world.
The post fourth industrial revolution world may not need us to operate and run the economy and may even not require our physical presence. If technology will provide our biological needs, act as our emotional mediator, and augment our abilities while we are strapped to a chair, and managed and monitored through various platforms, then maybe our future is more like the “Borg”?!
Is it true that “resistance is futile”? | https://medium.com/future-of-work/on-automation-and-parsimony-6b20d8298f56 | ['Tomer Simon'] | 2018-03-22 14:30:42.486000+00:00 | ['Automation', 'Future Of Work', 'Unemployment', 'Skills', 'Technology'] |
“Go” And Do Security Access Control Properly: Attribute-Based Encryption (ABE) | Photo by Markus Spiske on Unsplash
One of our major problems in security is that we have built systems which use role-based security, and it is flawed. Increasingly we need attributes, such as location and time, to properly authenticate a user.
Introduction
We are generally poor at properly integrating security, and often use overlay models to overcome our lack of embedded security. Our models of security, too, often come from our legacy operating systems, which fail to protect data (as they were designed to protect files and directories rather than data). We thus often fail to encrypt data properly, and we fall back on the operating system to provide rights to files. Our overall policies thus focus on documents and not on data.
We have thus created a data world which is open, and then to protect it we put up perimeters. But we find out that there are insiders who sit behind the firewall and can access our data. So we then encrypt with an encryption key, but this is often applied on a fairly large-scale basis. So how do we control access to sensitive data when we use cloud-based storage? Well, we need to look at better ways of protecting our data, while still being able to process it.
The systems we have created have grown up through operating system security, and apply role based security. In a Linux system we can have:
User: bob
Group: gp
and we have access rights as:
User=rwx Group=rwx Everyone=rwx
In this case Bob will have access rights based on his ownership of a file, or on the group he is in — this is role-based security. In an Active Directory infrastructure, Bob can also be part of multiple groups, each of which will gain him rights. But being part of a group is not properly applying security, and we thus normally have to overlay a security model to check Bob’s rights to access a given file. What we really want is to be able to define that the access is based on other things, such as his location, or whether he is the clinician associated with a patient. These are defined as attributes for his access rights, and define attribute-based security.
One of the best methods of embedding security into data is ABE (Attribute-based Encryption), where we can define fine-grained control over the decryption process. For example, we might define that some sensitive health information is only accessible when the patient and the clinician have both authenticated themselves, and are in a provable location. Thus, during the encryption process, we apply a policy:
Policy = ((user=GP and location=Edinburgh) or (user=Patient and location=Scotland))
In this case we would allow access to a file based on a user who is a GP in Edinburgh, or a Scottish patient. In this way we can base our accesses on real attributes, rather than operating system rights.
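To make the contrast concrete, the following Go sketch compares a role-based check with an attribute-based predicate mirroring the policy above. The names here (`roleCheck`, `policySatisfied`, the `clinicians` group) are illustrative assumptions, not part of any real system, and in genuine ABE the predicate is enforced by the cryptography during decryption rather than by application code:

```go
package main

import "fmt"

// Role-based: access follows from group membership alone.
func roleCheck(groups []string) bool {
	for _, g := range groups {
		if g == "clinicians" { // hypothetical group name
			return true
		}
	}
	return false
}

// Attribute-based: access follows from a predicate over contextual
// attributes, mirroring the policy
// ((user=GP and location=Edinburgh) or (user=Patient and location=Scotland)).
type Attributes struct {
	User     string
	Location string
}

func policySatisfied(a Attributes) bool {
	return (a.User == "GP" && a.Location == "Edinburgh") ||
		(a.User == "Patient" && a.Location == "Scotland")
}

func main() {
	fmt.Println(roleCheck([]string{"clinicians"}))              // true
	fmt.Println(policySatisfied(Attributes{"GP", "Edinburgh"})) // true
	fmt.Println(policySatisfied(Attributes{"GP", "Glasgow"}))   // false
}
```

Note how the attribute-based version can deny access to the very same user once the context (here, location) changes, which a pure group check cannot express.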
There are two main types of ABE. The first is key-policy attribute-based encryption (KP-ABE) and the other is ciphertext-policy attribute-based encryption (CP-ABE). In KP-ABE we generate the key based on a policy that contains attributes. In CP-ABE the policy is embedded in the ciphertext as an access tree, and keys are generated for given sets of attributes.
In this case we have several stages for the encryption process:
Setup. This stage generates the public parameters (PK) and a master key (MK).
Encrypt(PK,M, A). In this stage we take PK, and a message (M), along with an access structure for all the attributes (A). The output will be some ciphertext (CT) and which embeds A, so that when a user satisfies the required attributes, they will be able to decrypt the ciphertext.
Key Generation(MK,S). In this stage we take the master key (MK) and a number of attributes that define the key (S), and output a private key (SK).
Decrypt(PK, CT, SK). In this stage we take the public parameters (PK), the cipher text (CT — and which contains the access policy), and the secret key (for a given set of attributes S), and try to decrypt the ciphertext. If successful we will get our message (M) back again.
Delegate(SK, S˜). If required, a delegate operation takes the secret key (SK) and returns a new secret key for a given subset of attributes (S˜).
Coding
So let’s keep it simple. Let’s say we have six attributes (0 1 2 3 4 5), and then define a policy based on these. The following is the Golang code to implement a basic demo [here]:
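The linked demo code is not reproduced here, but its heart, deciding whether a set of attributes satisfies the policy tree, can be sketched in Go roughly as follows. This is only a sketch of the access-structure check, with illustrative type and function names rather than the original demo's, and in real CP-ABE this check is enforced cryptographically during decryption, not by a boolean test:

```go
package main

import "fmt"

// Policy is a node in the access tree: either a leaf attribute,
// or an AND/OR gate over child nodes.
type Policy struct {
	Attr     int       // leaf attribute index (used when Children is nil)
	Op       string    // "AND" or "OR" for gates
	Children []*Policy // nil for leaves
}

func leaf(a int) *Policy { return &Policy{Attr: a} }

func gate(op string, c ...*Policy) *Policy {
	return &Policy{Op: op, Children: c}
}

// Satisfied reports whether the given attribute set meets the policy tree.
func (p *Policy) Satisfied(attrs map[int]bool) bool {
	if p.Children == nil {
		return attrs[p.Attr]
	}
	if p.Op == "AND" {
		for _, c := range p.Children {
			if !c.Satisfied(attrs) {
				return false
			}
		}
		return true
	}
	// OR gate: any satisfied child is enough.
	for _, c := range p.Children {
		if c.Satisfied(attrs) {
			return true
		}
	}
	return false
}

func main() {
	// Policy: ((0 AND 1) OR (2 AND 3)) AND 5
	policy := gate("AND",
		gate("OR",
			gate("AND", leaf(0), leaf(1)),
			gate("AND", leaf(2), leaf(3))),
		leaf(5))

	fmt.Println(policy.Satisfied(map[int]bool{0: true, 1: true, 3: true, 5: true})) // true
	fmt.Println(policy.Satisfied(map[int]bool{1: true, 3: true, 5: true}))          // false
}
```

With attributes [0 1 3 5] the (0 AND 1) branch holds and attribute 5 is present, so the policy is satisfied; with [1 3 5] neither AND branch of the OR holds, so access is denied, matching the sample runs below.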
A sample run is [here]:
Message: Danger, danger!!
Policy: ((0 AND 1) OR (2 AND 3)) AND 5
Attributes: [0 1 3 5]
Decrypted Message: Danger, danger!!
and for a failure [here]:
Message: Danger, danger!!
Policy: ((0 AND 1) OR (2 AND 3)) AND 5
Attributes: [1 3 5]
You do not have rights!!
Conclusions
Our security models are old: we have had to use overlay methods, and then span these across hybrid systems. This has created complex security policies, which often rely on operating systems and domain controllers to make judgments on access rights to files. In a world of cloud computing we must assume that our data can be accessed by anyone, so we increasingly need to embed security into our data.
Our future must be built by embedding policies into our data, and by supporting users in providing the attributes that define their claims to access the data.
References
[1] Bethencourt, J., Sahai, A., & Waters, B. (2007, May). Ciphertext-policy attribute-based encryption. In Security and Privacy, 2007. SP’07. IEEE Symposium on (pp. 321–334). IEEE. | https://medium.com/asecuritysite-when-bob-met-alice/go-and-do-security-access-control-properly-749f97e052ae | ['Prof Bill Buchanan Obe'] | 2020-05-23 13:15:04.140000+00:00 | ['Golang', 'Cryptography', 'Cybersecurity'] |
6 Marketing ‘Post-COVID’ trends in 2021. | Transparency and Reliability.
If the corona crisis has taught us one thing, it is the importance of transparency and data reliability, and both will play an increasingly important role.
You can see that Google is already working with E-A-T on verifying the expertise, authority, and trustworthiness of authors. This will weigh even more heavily. Referring to reliable sources is becoming increasingly important post-corona.
Are you referring to a reliable source with an external link? Then that is a positive signal. On the other hand, links to dubious or even unreliable sources have a negative impact. That is the development we need.
Post-corona, trust is the new credo. The virtual world has increasingly become a battlefield of fake news and misinformation. Clear, reliable, and consistent communication is a relief for the consumer. Therefore, ensure a clear brand image and a consistent pattern of brand values, and work these into all your communication. In short: practice what you preach, and you will win and strengthen the relationship with customers and prospects.
Why I No Longer Watch The News | Photo by Markus Spiske on Unsplash
“Until we have begun to go without them, we fail to realize how unnecessary many things are. We’ve been using them not because we needed them but because we had them.”
— Lucius Annaeus Seneca, “Letters From a Stoic”
“Things have no hold on the soul. They stand there unmoving, outside it. Disturbance comes only from within — from our own perceptions.”
— Marcus Aurelius, “Meditations”
On October 12th, 2020, I spent my Tuesday afternoon receiving my second injection of one of the Phase 3 trial vaccines for COVID-19. My family and my closest friends could not understand why I had willingly decided to volunteer as a guinea-pig for a seemingly rushed vaccine. On Twitter, whenever news networks would tweet articles about the race for a COVID vaccine, I would see replies full of skepticism: people insisted that they would not trust a vaccine released during the Trump administration, that these vaccines were overly rushed compared to vaccines of the past, that the FDA and CDC were untrustworthy, that there was no way this vaccine could possibly be safe. The fact that several trials, such as AstraZeneca’s and Johnson & Johnson’s, were halted in September due to participants mysteriously falling ill had heightened Twitter users’ suspicions.
Yet never would I see Tweets circulating that Phase 1 and Phase 2 of these vaccines’ trials had been successful without any serious problems. I would rarely see any tweets pointing out that these vaccines were able to be developed so quickly because they used a new mRNA technology, which is able to be manufactured much more quickly than DNA vaccines from the past. Personally, after receiving the first dose of the vaccine a month prior in September, I had nothing but minor side effects. I felt totally fine. Even the process getting the second dosage of the vaccine was a breeze.
After receiving the second injection, on the bus home from the hospital I began to reflect on what had motivated me to join this study in the first place. I admitted to myself that, ironically, fear was behind my decision to take this risk. I was hoping to be one of the first people to receive the vaccine as I was terrified of catching the virus from my job and sickening my older family members. I also thought it was odd that people seemed to be more scared of receiving this vaccine than getting the actual virus, and was beginning to lament how the responses to this virus have become so politicized. Not only were people skeptical of the vaccine, but many others were even against social distancing and mask wearing — even President Trump and the people in his administration would refuse to take these basic precautions. And Trump’s favorite platform to vent his beliefs happened to also be Twitter.
As I arrived home and went straight to my bed to relax after having 10 tubes of blood drawn from my arm and receiving an injection, the first thing I did was open up my phone and check Twitter. I scrolled down my timeline: Amy Coney Barrett is about to take away our reproductive rights after being confirmed to the Supreme Court. Hurricane Delta had battered Louisiana, causing widespread floods. The USPS is still being sabotaged by Trump to prevent people from voting by mail. COVID-19 cases in New Jersey are back up to the levels they have been in early June. The FBI had foiled a plot by far-right activists to kidnap Governor Whitmer of Michigan. Amidst protests in Denver, a security guard had shot a self-described “Patriot.”
I see some of President Trump’s first tweets after he has recovered from COVID-19: tweeting about how New York and California have gone to hell, about how Illinois has nowhere to go — suggesting that people should vote for Trump. Tweeting at least 20 times about “Sleepy Joe” and “Democrat-run cities.” Hinting that there would be some form of voter fraud in the upcoming election.
“Today, technology has lowered the barrier for others to share their opinion about what we should be focusing on. It is not just information overload; it is opinion overload.”
― Greg McKeown, “Essentialism: The Disciplined Pursuit of Less”
I put my phone down and wonder what my sister, whom I am recently estranged from for unrelated reasons, would think of me volunteering to receive a vaccine that is not yet approved. Last time I had visited her down in Kentucky, she admitted to me that she thought the COVID-19 pandemic was some government plot to force the entire population to get vaccinations. She confided in me that she had not yet vaccinated her children out of fear that it would cause illness. Citing some posts she had read on one of her mommy groups on Facebook, she believed that all of the new 5G cell-phone towers we would drive past daily were somehow responsible for the spread of this virus. Of course this would come from Facebook, I had thought to myself, not wanting to voice aloud any disagreements as our relationship was already fraught.
Thinking about conspiracy theories and estranged family members, I am reminded of my father. In the mid 2000s, around the time before Barack Obama’s presidency, he would spend hours listening to either Rush Limbaugh’s or Glenn Beck’s radio shows while he worked at home. Limbaugh was especially horrific as he had spread around conspiracy theories questioning the legitimacy of Barack Obama’s birth certificate, suggesting he was secretly a muslim and born in Kenya. These far-right commentators would also spew racist and homophobic vitriol, insisting that slavery was never really that bad, that affirmative action policies were racist against white people, and suggesting that “homosexuality” is linked to pedophilia. My father would become consumed by political discourse, constantly parroting whatever conspiracy he had heard on the radio at the dinner table. It seemed like nearly every time he wanted to talk with me, he wanted to have some kind of debate: whether it was about how he felt muslims were planning to take control of this country and institute “Sharia law”, or about how the “gay agenda” was also taking over the country, seeking to destroy the traditional family.
Even though I have not been on Facebook in years, and though I couldn’t care less about conservative talk radio, I began to question whether the forms of media I would consume also had some kind of negative effect on not only my own thoughts but on the thoughts of the people currently in my life. I definitely have witnessed my mother and step-father begin to express an unusual level of fear they had never shown before as a result of the news they have consumed this year. I think back to the August shooting in Kenosha, Wisconsin by 17-year-old Kyle Rittenhouse — how I came down to the kitchen to cook dinner, and my parents happened to have Tucker Carlson’s show playing on Fox News, with Carlson excusing the vigilante murder of two protestors, claiming that all the teenager was doing was upholding “law and order”. My mother, in response to the footage of these riots on television, began saying she was afraid of “looters” and “rioters” coming to our home. Never mind the fact we live in a quiet, suburban area in northeastern New Jersey full of families, mostly middle-aged and older residents; in an extremely safe town which had not had a single murder case in over 5 years.
I unlock my phone again, impulsively deciding to re-open Twitter to mindlessly scroll my timeline. As someone who had formerly been very active politically, a lot of my acquaintances in my personal life and the strangers I followed on Twitter were left-wing activists. My close friends are either left-leaning politically, sharing my support for Bernie Sanders, or simply apolitical, having no interest in keeping up with any political discourse. I realized as I scrolled — as my family had expressed fear about “the Left” becoming increasingly violent, the people I followed on Twitter expressed concerns about right-wing militia groups stirring up chaos. Just like how my family was concerned that the “Democrats” would riot in the streets after Trump had won the upcoming election, the people on my Twitter timeline were just as convinced that Trump would stir chaos in his supporters and call on the Proud Boys if Biden had become the President-elect.
“Remember that if you don’t prioritize your life, someone else will.”
― Greg McKeown, “Essentialism: The Disciplined Pursuit of Less”
To make things clear: I am not attempting to make a statement about whether “the Left’s” ideas are correct, or whether far-right conspiratorial views deserve any merit. Rather, I am concerned about how the type and the quantity of the media we consume affects our minds and our well-being. Being somebody who was frequently exposed to political views on both sides of the spectrum, I noticed one common denominator: the usage of fear as a tactic to reinforce one’s own beliefs, cultivating an “us vs. them” mentality. As someone who had once very strongly believed that our country was on the verge of descending into fascism and that our planet only had about 20 years left before climate change would become irreversible — I realized I was overwhelmed with anxiety. I noticed that every time I had watched the news on any station, every time I had opened the Twitter app — I felt nothing but a sense of dread that would end up taking over my mood for the rest of the day, affecting my ability to let loose and have fun, and even lowering my productivity levels. As somebody who is currently taking 5 classes while working at my part-time job Monday through Friday, who is currently in the process of converting to a different religion, and also trying to prioritize self-care and physical fitness in order to mitigate the health effects the lockdown has had on me — I simply have no room for any additional stressors in my life.
So on that Tuesday afternoon in which I was off from work, I had decided to go cold-turkey and quit the news. I unsubscribed to the daily emails I would receive to The Atlantic, even emailing their customer service to cancel my digital magazine subscription. I unfollowed quite a few Instagram profiles — those belonging to Bernie Sanders, Andrew Yang, Joe Biden, Kamala Harris, Alexandria Ocasio-Cortez, even “150 Reasons Trump Must Go” and any page relating to feminism. I muted Donald Trump’s Twitter, as well as unfollowing other politicians and political activists, and muting any popular pundits who were bound to be retweeted onto my timeline. I have over 70 words muted from my Twitter account — including “republican,” “democrat,” “COVID-19,” “election,” “Trump,” “SCOTUS,” “45,” “white supremacists,” “fascism,” “police,” “protests,” “voting,” and more. I deleted the Apple News app, unfollowed any political podcasts on Spotify. I told my friends to stop iMessaging and DM-ing me Donald Trump’s tweets or any news articles mentioning the COVID pandemic. Finally, I had decided that each time I would go in the kitchen, I would play calming instrumental music on the radio while facing away from the television in the living room.
Of course, attempting to stay uninformed in our ultra-connected 21st century world is just as hard as anyone would assume. Besides nearly always experiencing a level of FOMO, my friends and family continue to remind me that it is my duty as a citizen to remain informed and that it is not realistic to stick my head in the sand. Even though I know my loved ones are well-intentioned, I constantly have my boundaries pushed by people attempting to talk to me about what is going on. Finally, by attempting to be conscious of my own biases and striving to remain neutral about issues I have not researched well, my liberal friends and acquaintances would accuse me of being too sympathetic to Republicans. My family assumes I voted for Biden despite me repeatedly stating that I do not want to discuss my political preferences with them. It’s not like I’m some fence-sitter who refuses to pick a side — I just do not feel that it is a productive use of my time to complain excessively about things I cannot control to my friends, nor do I want to be put in a position where I have to defend my beliefs from my family. After all the arguments I have had with my father and my sister when I was younger, those such as whether “transsexuals” have some kind of hidden agenda to destroy American society or whether vaccines really cause autism, neither of our minds have been changed. If anything, it just made us angrier with each other.
The “good news” is that by being more intentional about the content I consume, I have had so much more free time to devote to my hobbies and passions that have nothing to do with politics. I have been running each day, while choosing to listen to audiobooks or podcasts on psychology, spirituality, and self-help. By reading books about world history, evolutionary biology, and ancient philosophies, I have actually become more informed about issues that the news seldom reports on. I have caught up on shows, preferring to watch series related to cooking, interior design, travel, and nature. I watch YouTube videos about minimalism, decluttering, and “slow living.” I have been drinking more water, finding time to meditate, even taking up journaling and using coloring books rather than tweeting and retweeting. Even my Instagram feed is now full of “aesthetic”, as I follow pages relating to world travel, restaurants, cooking, home decor, architecture, DIY crafts, tiny homes and van living, nature photography, and more.
My therapist had told me about three weeks ago that the only things I should be focusing on in my life are finishing my degree, thinking about my future career, and envisioning how I want my life to be and who I want to become. I could also continue my commitment to social justice by listening to and supporting those who are part of marginalized groups, and by treating everyone I meet with dignity and respect, both of which I can do without spending hours in front of the television or scrolling down my timeline. When I think about what is truly essential to my life, keeping up with the news and politics are nowhere in the picture.
I fully admit that with this newfound sense of bliss comes a level of ignorance. I still have no idea what the exact number of current daily cases of COVID-19 are in New Jersey, although a friend had informed me that the current number of daily cases are higher now than they have ever been. I also admit that I “cheated” and decided to follow the news during election week very closely, refreshing the New York Times election page every few hours and listening to CNN while working out. And yes — all throughout the week, until Biden was declared the victor on Saturday, I was so anxious that I could barely focus on anything else.
By the way — nobody had actually ended up rioting over the election results, despite the fear mongering on the news and social media, despite businesses in cities across the country boarding up their windows in anticipation of unrest. It is totally natural to feel some level of fear in these uncertain times we are in; I still worry about the ongoing COVID-19 pandemic, and I am nervous that Trump would continue to refuse to accept the election results. In addition, I still cannot avoid glancing at the TV whenever I pass through the living room in my home, which is always left tuned in to a news station. Finally, my boundaries are still frequently pushed, despite having to constantly remind my family that I do not want to engage in any political discussion; my parents suspect that anyone who doesn’t love President Trump must be a so-called “liberal.” My parents especially make it a point to bring him up even during unrelated conversations, attempting to convince me that he is not as bad as the “mainstream media” makes him seem, while still not fully understanding that I do not really watch news on television anymore. I also do not read any full-length news articles online, but I still get a glimpse of the daily headlines and read snippets of current events whenever I open Twitter — my friends still insist on sharing links to news articles with me via Twitter DM or iMessage. Despite politics being nearly impossible for me to fully avoid, spending a significant amount of my leisure time reading about these issues on the news certainly wouldn’t help me feel any more at ease, nor would it change a thing. In fact, I feel better not dwelling on the specifics of what is going on.
“We never see a journalist saying to the camera, “I’m reporting live from a country where a war has not broken out” — or a city that has not been bombed, or a school that has not been shot up. As long as bad things have not vanished from the face of the earth, there will always be enough incidents to fill the news, especially when billions of smartphones turn most of the world’s population into crime reporters and war correspondents.”
― Steven Pinker, “Enlightenment Now: The Case for Reason, Science, Humanism, and Progress”
“Over the last several decades, extreme poverty, victims of war, child mortality, crime, famine, child labour, deaths in natural disasters and the number of plane crashes have all plummeted. We’re living in the richest, safest, healthiest era ever. So why don’t we realise this? It’s simple. Because the news is about the exceptional, and the more exceptional an event is — be it a terrorist attack, violent uprising, or natural disaster — the bigger its newsworthiness.”
― Rutger Bregman, “Humankind: A Hopeful History” | https://medium.com/@hazelkarvelis/why-i-no-longer-watch-the-news-d15a976a53cf | ['H. K.'] | 2020-12-17 02:09:43.158000+00:00 | ['Minimalism', 'Media Criticism', 'Politics', 'Pandemic Diaries', 'Election 2020'] |
NFT can actually be played like this? Hi5BOX invites you to unpack the blind box and light up the exclusive astrolabe!
Too Z, Dec 31, 2021
Beeple’s NFT work fetched a sky-high price of over $69 million at Christie’s; NBA Top Shot’s star card sold for $420 million; Booba founder Sun Yuchen spent $10.5 million on an avatar … In the past year, NFT has been promoted to a wealth-creating label. The wealth-creation effect is rapidly radiating outward from niche crypto circles to every corner of the world.
Old wine in a new bottle: almost all of the NFT works that exist so far are used for hype and auction, which runs against the open-ended freedom of NFTs. NFTs should have infinite possibilities and many ways to play. The mysterious American FOX brought geek culture to NFTs, cut into the [Opening Boxes to Light up Stars] economic ecology, and created the world’s first NFT creation exchange platform, Hi5BOX.
The new users of Hi5BOX are collectively called Angels. With the Angel status, they will receive an exclusive astrolabe of twelve signs. The astrolabe is dim at the moment, but users can gradually light up the astrolabe by purchasing a blind box at the market or mall.
For each lit constellation, the Angel user’s identity gains a star bonus. When the Angel user has twelve star bonuses, that is, when the whole astrolabe is lit, he can upgrade to the Cupid identity, which symbolizes love, and receives a Cupid God card.
After synthesizing the Cupid identity, users often have multiple astrological cards left in their warehouse, which can be put up for sale on the market or used for a new round of synthesis. When Cupid users light up the whole astrolabe, they can upgrade to the Apollo identity, which symbolizes light, and get an Apollo God card.
Apollo users who light up the whole astrolabe again will be upgraded to the honorable Poseidon status and get a Poseidon God card. The above is the regular upgrade mode. When users are fortunate enough to be favored by God, they can open a god-card blind box and light up the god card directly to obtain the honorable status.
The difference is not only in the economic ecology, but also in the operational ecology. FOX believes: “Geek culture also needs associations and socialization.” In the Hi5BOX platform, associations are collectively called slot communities. Players can create slot communities and guide other players into their own communities to manage and build them together.
For NFT enthusiasts on a primary or even middle income, acquiring a sky-high NFT collectible is clearly beyond reach. Becoming a Hi5BOX user to open blind boxes, light up the astrolabe, and make friends is a better-quality choice.
Find Your Rhythm and Run with It | Have you ever heard the saying, “Dip your toes in the water?” It means to start something slow and carefully, as you’re unsure of whether your endeavor will succeed.
Don’t do that.
Instead, jump in the water. It’s the only way you can fully immerse yourself, overcome obstacles, and feel at home with what you’re doing.
If you ask Loukmane about why he joined Djezzy, he’ll echo this idea. You must face challenges in order to learn and become a natural — someone who’s completely in rhythm while performing the task at hand. He states:
“Before coming here, I had always wanted to work in telecom. Djezzy has a big infrastructure. Working here forces you to learn a lot. There’s lots of room for growth and the scale is incredible. You must not only master your specialty, you must be knowledgeable of how the whole operation works, including the framework of the international telecom infrastructure.”
What makes Loukmane optimistic about the future of Algeria is how new graduates are increasingly taking courageous leaps. They’re pursuing entrepreneurship in growing numbers, as they know they can find their true calling when they’re testing and implementing bold ideas.
“It’s really exciting what the new crop of graduates are doing. You accomplish and experience a lot being an entrepreneur. You learn how to work in a much more efficient, rhythmic manner.” | https://medium.com/djezzy-careers/find-your-rhythm-and-run-with-it-c0a8cf391db6 | ['Veon Careers'] | 2018-05-07 14:31:32.346000+00:00 | ['Music', 'Women In Tech', 'Startup', 'Coding'] |
The Relief of Ownerlessness | Photo by Martin Damboldt from Pexels
All things flourish without interruption. They grow by themselves, and no one possesses them.
Laozi, Daodejing (tr. Chuang-yuan Chang)
Wood. Is there anything more alienated than a wooden table? Carpentry is the ultimate act of kidnapping, unlike metallurgy: when we build of wood we literally build with dead bodies, thieves of bone and blood. Can you imagine being invited over to someone’s house only to find that the walls were built of bone? Yet our ancestors fed and clothed themselves with the bodies of animals. Animals were life inside them, and they lived their lives inside animals.
Heraclitus was seemingly right when he saw strife as fundamental to being, writing “all things happen according to strife and necessity.” Even if what Heidegger called the es gibt of being, its given-ness or its generosity, could be called a kind of inhuman love, there are endless skirmishes in the clearing of reality lit up by consciousness.
In the world of pure surfaces, things have only a human meaning. Tables are just “tables”, and more than that, “our tables”, not the stolen flesh of trees with history buried now in the grave of their human appropriation. Recalling their histories removes them from our ownership. When we no longer own them, they in turn no longer own us.
This dynamic was described by Max Stirner (1806–1856), the influential radical anarchist and iconoclast, in his book The Unique and Its Property (Der Einzige und sein Eigentum). Stirner pointed out how the I, the individual, uses something external in a way which makes it “property.” In doing so, the individual defines that thing and makes it into a specific object- a table, a hammer, a glass. The thing, once given an identity and use, now has power over the owner, the very one who assigned it an identity in the first place. In the end the owner, now believing in the reality of the object it created, becomes the property of the object.
This is similar to the process of fetishization in Marx’s thought, though Stirner makes his analysis more fundamental and existential. It is the process of reification where a nameless manifestation of cosmic forces becomes a thing with a character and identity we wrongly believe inheres in the thing itself. To put it in Buddhist terms, we believe the thing has “svabhava” — inherent, or self-generated identity- when in fact it is “empty” of such a nature.
As a result of this we wrongly believe we can depend on that thing to be what we say it is. We also feel that we owe it a fidelity to its identity which we treat as though it came before us instead of being something we have created. Yet life will teach us that nothing is as we conjure or conjecture it to be. Nothing belongs to us, and our stories about what things are are not the final word.
When we learn about all the ways that things do not belong to us, we suffer grief and also liberation.
This is just as true in our relationship with people.
In early Buddhism there was a word for the realization of such ownerlessness as well, the subjective counterpart to the phenomenal reality of “emptiness.” It was called anatta. This is often incorrectly translated “no self”, influenced by later Buddhist thought, when in fact it means “not-self.” In the context of how the Buddha defines and uses the word, it is clear the primary meaning is “not mine.” The Buddhist doctrine of no-self, which denies the existence of an abiding identity in the human being, grew out of this doctrine, but was still centuries away when the Buddha sat under the tree of India and spoke about the simple impossibility of owning anything.
Although grief may follow this realization, in its train it bears relief. Many of us have felt, at least fleetingly, this relief of ownerlessness. Not being an owner is one of the great pleasures of traveling. I might take care of an object, like I would a hotel room, but what a relief for things not to be “mine”!
When something is mine I feel I need to control it, to take responsibility for it, in a way that’s really impossible to live up to. As Ajaan Chah (1918–1992), a teacher in the Thai Buddhist Forest Tradition, said, a glass I pick up to drink from is “already broken.”
Contrary to this, however, our delusion is compounded. We believe the glass belongs to us, we believe it will last, and we believe, in the first place, that that particular collection of un-nameable energies is a glass.
Stirner called the identities we give things “spooks”, ghosts which haunt the world and rule over human beings. The Buddha too talked about releasing awareness from these spooks so we could rest in the bliss of letting go of them. So that we could live in a world suddenly become weightless. Maybe the “unbearable lightness of being” is only unbearable to those trying to hold on to something.
To understand our lack of ownership over anything is to walk through the door of grief and come out the other side.
This is hard, though. How hard is it to understand my lack of ownership over my child? Over my partner? Over my art, or my house, or my reputation? What about over the teeming earth itself, over all the human culture with which we have filled the world? We don’t own any of that either. We never did.
As Canadian poets Robert Bringhurst and Jan Zwicky write in Learning How To Die: Wisdom In The Age of Climate Crisis, the human project on this earth is ultimately doomed. Even were it not for our anthropogenic climate emergency, everything we do here will one day be consumed in the heat of an expanding sun en route to becoming a red giant.
There is tragedy in our gambling so recklessly with our very limited inheritance, of course, and we are almost certainly in the process of bringing a lot more death and suffering to our human sojourn here on the back of Gaia than need be.
Nevertheless, perhaps we need to understand the ownerlessness of earth and of human culture because we need to walk through the door of grief and come out the other side with hands ready to preserve and protect, hands ready to reach out “like someone adjusting their pillow in the middle of the night.”
That line comes from an old Chinese koan (k’ung an), a “public record” of a dialogue between a master and student preserved for contemplation. The koan asks about Guanyin, the awakening being (bodhisattva) pictured in China as a woman of power who responds to the cries of suffering in the world.
Q: How is it with the thousand arms of Guanyin?
A: Like someone adjusting their pillow in the middle of the night.
Such hands cannot be trying to carry the unbearable lightness of being, they must have let it go.
As Jan Zwicky writes, “What use is it, to anyone, to lie down, immobilized by pain? Pain must be used to turn the soul toward the real, to reform both action and attention: to love what, in this case, remains.”
This is a revised version of an essay I published on Medium in February of 2019.

Source: https://medium.com/strange-wonder/the-relief-of-ownerlessness-add2b83c6e79 (Matthew Gindin, 2021-01-18)
Sexual Objectification: Should I Lighten Up?

Photo by Eddie Kopp on Unsplash
My actions, I believe, result from the sum total of my past experiences and my current understanding.
Because of this, I know exactly why I frowned when the keynote speaker told a story about a naked female butt on stage at a conference. And a month later, when a different man told a dirty joke while on a panel discussing legal issues, I frowned again. Neither speaker’s topic had anything remotely to do with sex, but they still shared anecdotes laden with innuendo.
Most people in the audience laughed. Obviously, my frown offered a minority opinion. Should I, I wondered, lighten up?
My Story
Once, in the seventh grade, I borrowed a friend’s too-short skirt and paired it with a tight crop top. I snuck the outfit out of my house and changed at school. I knew what I was doing.
All my life, ads and TV shows depicted women in these outfits. They got lots of attention. I wanted to try the idea of being “sexy” on.
But the attention didn’t make me feel good, quite the opposite. The boys didn’t want to get to know me, they wanted. . . something else. I didn’t feel like a person, I felt like a thing.
Which is not to say I wasn’t a normal teenager. I wanted to flirt and kiss boys. I wanted to be pretty.
But, more than anything else, I wanted to be liked as a person. I wanted to have actual conversations with interesting people. This wasn’t likely in high school, though I kept trying.
Later, when I was in college, my roommates and I hung out with a few frat boys who labeled me a “feminazi.” To this day, I find this strange. I didn’t do anything particularly radical, at least not that I remember.
Maybe it wasn’t what I said or did exactly. Maybe I just didn’t laugh at the sexist jokes they often made. Or I left the room when it all seemed too ridiculous. Perhaps that was enough to earn me the name.
Now, twenty years later, my dislike of the casual sexual references doesn’t surprise me. But maybe I’ve got it all wrong. Maybe I am taking this whole thing too seriously.
To answer this question I dove into the Internet. Then, eyes bleary from hours and hours of research, I emerged with an answer. But first, let’s define what we’re talking about here; it’s called “objectification.”
Objectification
Objectification, in this context, means to remove agency. An object does not think or act for itself. Others prescribe an object’s role and purpose.
Objects do not have feelings worthy of consideration. They are ACTED UPON. Conveniently, English grammar illustrates this point.
According to Grammar Girl, “The subject is the person or thing doing something, and the object is having something done to it.” (emphasis mine)
For example, when I go for a haircut, my stylist sees my hair as an object. She acts upon my hair by cutting it. In a sense, my stylist objectifies my hair as a means to an end: her getting paid and me being happy with my haircut.
But objectification goes wrong when the whole is reduced to a part. To clarify, if my hair stylist thought of me only as a head of hair, not a person with hair, that’s closer to what academics and researchers mean when talking about the effects of objectification.
Obviously, parts of our bodies are “acted upon” during sex. But we sexually objectify a person when we equate the value of that person to their appeal or function as a sexual object. Full stop.
To begin my analysis of whether or not I should turn my frown upside down with respect to these casual, public innuendos, I wanted to know two things. First, is sexual objectification normal? And second, is it beneficial?
Is Sexual Objectification Normal?
The argument for “normal” places the focus on simple biology. Men are more visual than women. The continuance of the human species requires sexual congress.
Physical attraction for all genders is normal. Wanting to have sex with someone one finds attractive is normal. Finding a particular body part attractive is normal.
And, the argument goes, women benefit from a male’s attraction in the form of increased tips and gifts. It is, says some, a source of feminine power.
Is Sexual Objectification Beneficial?
Next, I tackled the second, harder question. Significantly, I found only one study showing empirical evidence of sexual objectification as a potential benefit to women (aside from tips and gifts). This study looked at 113 newlywed couples. The researchers published their findings in a paper called Women Like Being Valued for Sex, as Long as it is by a Committed Partner.
Specifically, lead author Andrea L. Meltzer notes, “Women can benefit from sexual valuation in the context of a relationship — as long as their partners are committed to the long-term.”
Normal? Maybe. Beneficial? No.
So, attraction to a part of the whole can be normal. Certainly, sexual attraction overall is normal. But only or primarily seeing and valuing a part of the whole seems problematic.
The evidence for this is the lack of data supporting any benefit from the general and obviously pervasive sexual objectification of women. Meltzer and her colleagues from Northwestern University do point out that committed relationships (implying trust and equality) result in benefits for women. This, in turn, points to the fact that the committed partners in the study value all of their wives’ parts, including, but not limited to, the sex ones.
Interestingly, as I searched for “benefits of sexual objectification” most of the results the engine spit back to me led in the exact opposite direction.
Women as Lesser Beings
Clearly, images and references of women as sexual objects wallpaper our world. They’re everywhere. Because they’re everywhere, we see them but we don’t always take notice of them.
However, numerous studies have shown that sexual objectification of women contributes to the perception that women are lesser beings. Nathan A. Heflick, Ph.D., offers a summary of some of these findings in his 2011 article in Psychology Today. He says, “research shows that men and women rate these women as less intelligent, and even have less concern for their physical well-being.”
Further, Emma Rooney wrote Effects of Sexual Objectification on Women’s Mental Health for NYU’s Department of Applied Psychology. She says:
A sexist joke and an act of sexual violence might be dismissed as two very different and unrelated events, but they are in fact related. These two behaviors are connected by the presence of sexual objectification. Culturally common and often condoned in the U.S., the sexual objectification of women is a driving and perpetuating component of gender oppression, systemic sexism, sexual harassment, and violence against women.
To demonstrate the wealth of data on the negative effects of sexual objectification, I included several studies as references below. These papers explore the impacts of removing the agency of women by calculating their worth and potential through a sexualized lens. Constant reinforcement of women-as-sex-objects seems to normalize treating women disrespectfully–whether it’s as simple as interrupting her or as criminal as sexual assault.
Change the Habit
When someone tells a joke or story that underscores the women-as-sex-object narrative, should I go along? No, I don’t think so. I think I was right to frown.
Consider this: in April 2017, the CDC found that 1 in 3 women and 1 in 6 men experienced sexual violence in their lifetimes. If NOT objectifying people has even the smallest possibility of impacting these numbers, shouldn’t we try? After all, no one ever said, “Yes, please take away my agency without my permission. It’s freeing and it helps me live up to my potential.”
On the positive side, in my searches I found a few essays from men reflecting on objectification behavior. Jason Gaddis at The Good Men Project penned a thoughtful one entitled, Why Men Objectify Women. He wrote of his own experiences and those of many men and women he’s talked to.
“The next thing to note is,” he says, “that men are conditioned to objectify women. It ain’t just nature working here. In men’s culture, it’s acceptable to objectify women. Men bond around it.” Notably, for Gaddis, objectifying women was pain medication when he felt disconnected from himself.
By and large, I believe all genders can stop feeding this particular beast. We can change the casual habit of sexual objectification by pausing to consider our words. In particular, we can stop thoughtlessly approving those who say these things in inappropriate contexts with our laughter.
The research clearly shows the damaging effects of the women-as-sex-object narrative. In my own recent experiences, I see ample evidence that we have work to do. In the end, I think we all know, deep down, this behavior is something we can and should grow out of.
To put it another way: Change the habit. Change the world.
Additional References:
Szymanski, Dawn M., Moffit, Lauren B., & Carr, Erika R. (2011). Sexual Objectification of Women: Advances to Theory and Research. The Counseling Psychologist, 39(1), 6–38.
Bernard, Phillipe, Gervais, Sarah J., Allen, Jill, Campomizzi, Sophie, & Klein, Olivier. (2012). Integrating Sexual Objectification with Object Versus Person Recognition: The Sexualized-Body-Inversion Hypothesis. Psychological Science, 23(5), 469–471.
Awasthi, Bhuvanesh. (2017). From Attire to Assault: Clothing, Objectification, and De-Humanization–A Possible Prelude to Sexual Violence. Frontiers in Psychology.
Gervais, Sarah J., Vescio, Theresa K., Forster, Jens, Maass, Anne, Suitor, Caterina. (2012) Seeing Women as Objects: The Body Part Recognition Bias. European Journal of Social Psychology, 42 (6), 743–753.
Berdahl, J. L. (2007). Harassment based on sex: Protecting social status in the context of gender hierarchy. The Academy of Management Review, 32(2), 641–658.
Fairchild, K., & Rudman, L. A. (2008). Everyday stranger harassment and women’s objectification. Social Justice Research, 21(3), 338–357.
Gardner, C. B. (1995). Passing by: Gender and public harassment. Berkeley, CA: University of California Press.
Harned, M. S. (2000). Harassed bodies: An examination of the relationships among women’s experiences of sexual harassment, body image and eating disturbances. Psychology of Women Quarterly, 24(4), 336–348.
Swim, J. K., Hyers, L. L., Cohen, L. L., & Ferguson, M. J. (2001). Everyday sexism: Evidence for its incidence, nature, and psychological impact from three daily diary studies. Journal of Social Issues, 57(1), 31–53.
Photo by Gabriel Benois on Unsplash

Source: https://angelanoelauthor.medium.com/sexual-objectification-should-i-lighten-up-ad5b5f5af8a6 (Angela Noel Lawson, 2019-03-06)
How Do We Solve a Problem Like Election Prediction?

On November 3, two oppositional forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction?
At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think.
Here’s how it went wrong according to Venturebeat:
Firms like KCore Analytics, Expert.AI, and Advanced Symbolics claim algorithms can capture a more expansive picture of election dynamics because they draw on signals like tweets and Facebook messages…KCore Analytics predicted from social media posts that Biden would have a strong advantage — about 8 or 9 points — in terms of the popular vote but a small lead when it came to the electoral college. Italy-based Expert.AI, which found that Biden ranked higher on social media in terms of sentiment, put the Democratic candidate slightly ahead of Trump (50.2% to 47.3%). On the other hand, Advanced Symbolics’ Polly system, which was developed by scientists at the University of Ottawa, was wildly off with projections that showed Biden nabbing 372 electoral college votes compared with Trump’s 166, thanks to anticipated wins in Florida, Texas, and Ohio — all states that went to Trump.
For many — like Johnny Okleksinski back in 2016 — the instinctive reaction is to claim these misfires are down to flawed social media data which is simply not reflective of real world populations. In 2018, 74% of respondents agreed and told Pew Research that: “content on social media does not provide an accurate picture of how society feels about important issues.”
But while it’s certainly true that some of these inaccurate AI forecasts were down to the under-representation of certain groups (e.g. rural communities), an interesting paper published earlier this year by the open-access publisher MDPI suggests that social media analysis can actually be more reflective of real-life views than these results might indicate.
The authors of Electoral and Public Opinion Forecasts with Social Media Data: A Meta-Analysis acknowledge the debate around the usefulness of social media in understanding public opinion, but at the same time they caution that dismissing social media’s predictive capacity based on its inability to represent some populations actually misses an important dynamic — namely, that politically active users are opinion-formers and influence the preferences of a much wider audience, with social media acting as an “organ of public opinion”:
…the formation of public opinion does not occur through an interaction of disparate individuals who share equally in the process; instead, through discussions and debates in which citizens usually participate unequally, public opinion is formed.
In other words, although political discussions on social media tend to be dominated by a small number of loud-mouthed users (typically early adopters, teens, and “better-educated” citizens), their opinions do tend to pre-empt those that develop in broader society.
Further, in capturing political opinions “out in the wild,” social media analysis is also able to understand the sentiments of silent “lurkers” by examining the relational connections and network attributes of their accounts. Report authors state that, “by looking at social media posts over time, we can examine opinion dynamics, public sentiment and information diffusion within a population.”
In brief: the problem with social media-fueled AI prediction does not appear to lie within the substance of what is available via online platforms. It seems to be in the methodology and/or tools. So, where do predictive AI tools go wrong? And where can researchers mine for the most useful indicators of political intention?
One of the major areas where social media analysis seems to break down is with language. This intuitively makes sense when we think about how people express themselves online. Problems with poor grammar or sarcasm are doubtless compounded by the difficulties of trying to understand context. Similarly, counting likes, shares and comments on posts and tweets is viewed as a fairly thin and simplistic approach (to use Twitter parlance “retweet ≠ endorsement”).
More robust, according to report authors, is an analysis that considers “structural features”, e.g. the “likes” recorded to candidate fan pages. Previous research found that the number of friends a candidate has on Facebook and the number of followers they have on Twitter could be used to predict a candidate’s share of the vote during the 2011 New Zealand election. But there is still the problem of which platform to focus on for the closest accuracy.
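As a rough illustration of this kind of structural-feature model, here is a one-variable least-squares fit of vote share against follower count. All numbers below are invented for the sketch; a real study would use observed candidate followings and election results, and would control for many confounders.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented candidate followings and vote percentages
followers = [10_000, 40_000, 90_000, 160_000]
vote_share = [5.0, 12.0, 24.0, 41.0]

slope, intercept = fit_line(followers, vote_share)

def predict(follower_count):
    return slope * follower_count + intercept

print(round(predict(120_000), 1))  # → 31.3
```

Even on its own terms, a fit like this only captures correlation; follower counts can be bought, botted, or platform-specific, which is part of why single-platform signals can mislead.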
Most AI systems use Twitter to predict public opinion, with some also using Facebook, forums, blogs, YouTube, etc. Yet each of these suffer from “their own set of algorithmic confounds, privacy constraints, and post restrictions.” We don’t currently know whether using multiple sources (vs. one platform) has any advantage, but with newly popular players like Parler on the scene, there’s reason to believe that covering several platforms would yield an accuracy advantage (though few currently use a broad range).
Finally, the actual political context within which the social platforms operate likely plays into their predictive accuracy. The report in question recalls that the predictive power in a study conducted in semi-authoritarian Singapore was significantly lower than in studies done in established democracies. From this, the authors infer that issues like media freedom, competitiveness of the election, and idiosyncrasies of electoral systems may lead to over- and under-estimations of voters’ preferences.

Source: https://medium.com/swlh/how-do-we-solve-a-problem-like-election-prediction-5ae0809d5e7e (Fiona J Mcevoy, 2020-11-20)
“The Trip” by Las Dalias, NFT drop collection available on nft.ibizatoken.com

Art, music, colors, good vibes, smile, love, authenticity, life, boho, hippy market, crafts, food truck, shop, family, fashion, magazine, … We could describe Las Dalias in a myriad of terms, as many as the sensations aroused in its visitors.
Introducing Las Dalias
Las Dalias is an emblematic oasis of colors and peace, highly popular among Ibizans but probably less familiar to a global audience, so we wouldn’t miss the chance to introduce you to this long-standing island spot, which still preserves its original atmosphere and magnetism.
Do you want to know more about this wonderful place? Below you can see a short video about daily life inside Las Dalias.
“The Trip” Collection 🛵
Now, after bringing art and culture to our beautiful island for more than 65 years, Las Dalias is launching its first NFT collection, a unique opportunity for its most loyal supporters.
The Trip NFT drop collection includes 20 crypto-collectibles created by the artist Marcos Torres, based on the latest cover of Las Dalias’ own magazine; each also works as a membership card, offering a €50 discount at the restaurants and events of Las Dalias Ibiza.
How to get “The Trip” Collection by Las Dalias
You just need to connect to the Ibiza Token NFT Marketplace at https://nft.ibizatoken.com/collection/42, select your favorite collectible, and then buy it. Any doubts about how to buy your NFTs on our marketplace? Take a look at our guide: https://ibizatoken.medium.com/ibiza-token-nft-marketplace-d2d45ad8f26
If you need further information, please feel free to contact the team on our Discord channel, our English Telegram channel, or our Spanish Telegram channel.
Ibiza Token channels:

Source: https://medium.com/@ibizatoken/the-trip-by-las-dalias-nft-drop-collection-available-on-nft-ibizatoken-com-94d20c474e78 (Ibiza Token, 2021-12-31)
Dear web designer, let's stop breaking the affordance of scrolling

We can do better than a "Scroll arrow"
Huge's research can tell us a thing or two about how some users will skip your content once you break the affordance of scrolling, and about the solutions to that problem. Even though the scroll arrow had a very successful result, is it really the solution we should settle for? Compare the results between "Scroll arrow" and "Short image". They're literally the same. Now compare the "Scroll arrow" with "Control image". It's obvious to me that in the case of the arrow, users scrolled because the page was yelling at them. In other words, it works, but it doesn't provide a good experience. If people perceive content below the image, they'll naturally scroll.
Using subtle animation to communicate (not an animated arrow though)
Animating the elements of the page can give great clues about the content below that huge picture. I'm not saying I have the perfect solution for every case, but I'll use animation to brainstorm other ways to handle this.
In the first example, our content pops from the bottom and disappears right after. It's like saying "Hello, I'm here. If you need me, just do your thing:"
If you're using a parallax effect in the main picture, take advantage of it to help give that sneak peek a less subtle effect — also to be consistent with the page's behavior. After all if the picture zooms out when the user scrolls, it should do the same on that page load hint:
In case of multiple blocks, the content can be nicely choreographed:
Don't hide the content, take control of it
The Google Fit Android app uses just part of the first card from below the big circular chart to indicate that there's more content to see. This approach is intuitive and elegant because it uses no additional elements to talk to the user. It's just them hanging out in the land of good perception, while leaving a lot of room for that main circle to shine.
This isn't new. In 2006, Jared Spool was already discussing the use of the cut-off look to improve the affordance of scrolling.
On the web you can achieve something like this by getting the picture section to fit around 90% of the viewport height, with just one line of CSS or some quick JavaScript (if you need to support old browsers).
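For instance, assuming the hero picture section is marked with a class like .hero (the exact selector depends on your markup), the modern-CSS version is essentially:

```css
/* Cap the hero at ~90% of the viewport height so the top of the
   next section stays visible as a natural scroll cue. */
.hero {
  height: 90vh;      /* the "one line": viewport-relative units */
  overflow: hidden;  /* crop the picture instead of squashing it */
}
```

For browsers without vh support, the fallback is a small script that sets the section's height to window.innerHeight * 0.9 on load and on resize.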
What about combining it with an animation and setting a lower opacity for the content? That way it can't take much of the user's attention from your beloved main picture:
Let's just be careful about the level of opacity. If it's too low, we're doing no good. Oh, and let's not forget to set the opacity back to 100% when the user scrolls the page or interacts with those elements as well :-)

Source: https://uxdesign.cc/dear-web-designer-let-s-stop-breaking-the-affordance-of-scrolling-fe8bf258df7b (Rodrigo Muniz, 2016-03-31)
A Brief Study of Cryptonetwork Forks
Analyzing Variances Between Forked Assets
Cross-Post: This post was written by Alex Evans of Placeholder VC and originally published on their blog on September 17, 2018.
Quick Summary
The vast majority of child networks resulting from chain forks are in disuse and have lost significant value relative to their parent networks.
Despite lower use metrics, child networks trade at higher user and transaction value multiples (e.g., NVT ratio) than their parent networks.
Users and developers tend to remain loyal to the original network, while most miners are loyal to economics only, directing hashpower to the most profitable network of the moment.
Intro
Forks are as ubiquitous as they are misunderstood. To frame the debate on the role and value of forking, this article aims to establish a factual baseline for the study of forks.
To date, there have been hundreds of forks, but not all are equal. Broadly speaking, we can think of two types. There are chain forks, where the code, ledger, or both code and ledger of an operating blockchain are altered, creating a live cryptonetwork separate from the parent chain (which we sometimes refer to as a “child” chain in this piece). We can think of this as a partition of a live network. Then there are pure codebase forks, where the original cryptonetwork’s code repository is tweaked offline, before a release of a separate network with a new genesis block.
Digging into chain forks, those that start solely with ledger alterations often end up diverging from the parent codebase, as in the case of Ethereum Classic (and, of course, vice versa). In chain forks where only the source code is altered, the state of the original cryptonetwork is replicated, entitling users with balances at the time of the fork to an equal amount (but not value) of assets in both networks.
To date, Bitcoin is by far the most forked network, with at least 40 chain forks at various block heights (and likely more we’re not aware of). By our rough tabulations, Ethereum has also had at least 15 chain forks, and Monero at least 4. The vast majority of the resulting child networks appear to have no usage or value.
Codebase forks are even more common than chain forks, with prominent protocols such as ZCash and Litecoin resulting from codebase forks of Bitcoin Core. These types of forks are much harder to define precisely as the codebases of most cryptocurrencies have some degree of shared DNA, but also evolve to look vastly different.
The discussion below mostly focuses on chain forks of major networks, as opposed to pure codebase forks. We examine the behaviors of key network stakeholders during forking events — including users, developers, and miners — and attempt to study how their decisions drive value in the underlying networks. The focus is on the network and assets of Bitcoin (BTC), Ethereum (ETH), and Monero (XMR), each of which has at least one prominent child asset resulting from a chain fork. Zcash (ZEC) will also be considered in the analysis, shedding light on the proceedings of a codebase fork (Zclassic).
How do users behave after a fork?
It has been argued that forks are dilutive to the value of a cryptoasset as they split the user base (or demand side) between the parent and the child, resulting in a lower combined network value. Looking at Bitcoin, Ethereum, ZCash, and Monero, the idea that forks divert the demand side of the network from the main chain does not appear to hold true.
Figure 1 illustrates this using daily active addresses and transaction volume as proxies for demand-side activity. Red lines indicate the approximate dates of a network fork (respectively, Bitcoin Cash, Ethereum Classic, ZClassic, and Monero Original and Monero Classic). The periods approximately cover the sixty days before and after each fork (with the exception of ZCash, as ZClassic was created soon after network launch). While we cannot observe the counterfactual, across the four networks the creation of a child chain does not appear to correlate with a decline in daily active users or transaction volume on the parent chain (note that data on Monero transaction volume was unavailable for this analysis). This holds even as the forks took place in different market environments and varied stages of network maturity.
Figure 1: Daily Active Addresses: Ethereum, ZCash, Bitcoin, Monero
Source: Coinmetrics
Rather than true community splits, forks of these networks appear to have created essentially new communities, something also apparent from the behavior of developers.
How do protocol developers behave after a fork?
The loyalty of users to the parent network is matched by that of core protocol developers. Comparing the top contributors of Bitcoin to Bitcoin Cash, Ethereum to Ethereum Classic, and ZCash to ZClassic, their corresponding developer communities diverge over time (note that the recency of the Monero forks precludes serious comparative analysis of the codebase). Following Azouvi et al’s work, I compute the Sorensen Dice coefficient for each network and its child. [1] Plotting the coefficient over time (Figure 2), we see a clear divergence in the contributors beginning at the time of the fork (lower numbers imply less overlap in core developers).
With the communities of top developers becoming increasingly heterogeneous, so too do the two codebases, independently evolving akin to allopatric speciation. It is important to note that this coefficient is a trailing indicator, as it takes time for new developers on the child chain to displace contributors in the top-30 in the original repository, as the two codebases are one prior to the fork.
Figure 2: Sorensen Dice Coefficient
Source: Github
The loyalty of core developers to the original chain is striking, as the number of top 30 contributors from the original codebase that become exclusive contributors to the child is almost zero (excluding the possibility of developers using new pseudonymous identities to contribute to the new codebase). [2]
How do miners behave after a fork?
Unlike users and developers, miners exhibit no discernable loyalty to an underlying chain; they behave like economic mercenaries, loyal only to profit. This is consistent with the hypothesis of an efficient mining market, which we explore by looking at a proxy for relative mining profitability. Following Kiffer et al’s work on the Ethereum/Ethereum Classic fork, I divide average daily difficulty by total revenue per block (block reward times average USD exchange rate per coin). This can (very roughly) be thought of as the number of hashes required per dollar of revenue.
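This proxy is simple to compute. A rough sketch, using made-up difficulty, reward, and price figures purely for illustration:

```python
def hashes_per_dollar(avg_difficulty, block_reward, usd_price):
    # Average daily difficulty divided by total USD revenue per
    # block -- (very roughly) hashes required per dollar earned.
    return avg_difficulty / (block_reward * usd_price)

# Made-up difficulty and price figures, purely for illustration.
parent = hashes_per_dollar(avg_difficulty=3.0e12, block_reward=12.5, usd_price=6500.0)
child = hashes_per_dollar(avg_difficulty=3.7e11, block_reward=12.5, usd_price=800.0)

# A ratio near 1 is what an efficient mining market produces.
print(round(parent / child, 3))  # 0.998
```

With real Coinmetrics data, one would compute this daily for each chain and track the ratio over time.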
As seen in Figure 3, the charts for the main networks and their forks are almost identical. In other words, miners make the decision on which chain to mine based on daily exchange rates and difficulties of both chains, exhibiting highly efficient economic behavior in exploiting opportunities across networks.
Figure 3a: Hashes/Revenue Ratio
Source: Coinmetrics
A few finer observations: While the numbers for Ethereum and Ethereum Classic are almost perfectly correlated, the ETH/ETC ratio has often diverged from 1 for extended periods in recent months, as shown in Figure 3b.
Figure 3b: Relative Profitability
Source: Coinmetrics
We should expect differences between Ethereum and Ethereum Classic mining dynamics to become more pronounced as the roadmaps of the two projects continue to diverge. On October 16th 2017, Ethereum’s difficulty and issuance changed with the Byzantium hard fork (which we account for by adjusting the block reward to 3 from 5). Similarly, Ethereum Classic has recently implemented a hard fork to defuse the difficulty time bomb and the Ethereum community is debating a further reduction in block reward. Over time, different approaches to consensus and miner incentivization between the two projects are expected to cause the ratios to diverge even further.
On the other hand, starting in Q1 2018, Bitcoin and Bitcoin Cash have converged on a highly stable equilibrium, with the ratio settling into an efficient range with few sustained deviations from 1. The lower variance for Bitcoin and Bitcoin Cash may also be an indicator of greater professionalization of Bitcoin mining relative to that of Ethereum. Similarly, it may be influenced by the presence of a small number of entities with the majority of hashpower on both networks.
What does all this mean for cryptonetwork value capture?
It is often casually argued that network rents can be “forked away,” with value flowing to a less rent-seeking fork. While we lack sufficient evidence to confidently reject this hypothesis, early evidence points against it.
Empirically, the vast majority of child chains have fallen into disuse and have depreciated substantially relative to their parent chains. Even among the better-known forks, such as Ethereum Classic and Bitcoin Cash, essentially none has appreciated relative to the original network.
Of the more prominent chain forks listed on CoinMarketCap for Bitcoin, Ethereum, and Monero, all but one have lost value relative to their parents since the fork: BCH is trading slightly below its initial price following the August 2017 hard fork, while BTC has appreciated 133%. Similarly, Bitcoin Gold has lost nearly 95% of its value since it forked from Bitcoin, while BTC has lost only a little over 30% in the same period. While ETC’s market cap was 7.3% that of ETH at launch and peaked near 19% in early 2017, it is now around 5%. Monero Classic and Monero Original have also underperformed XMR.
Of the networks examined in detail in this post, the only asset that has appreciated relative to the parent chain is ZClassic, a codebase fork of ZCash with lower inflation due to the removal of the founder reward. Nonetheless, there doesn’t appear to be any real usage of ZClassic or any ongoing development of the project since February of 2018. ZClassic experienced a speculative run-up in price at the start of this year, due to the Bitcoin Private “dividend” (Bitcoin Private is a chain fork of ZClassic, which is itself a code fork of ZCash, which started as a code fork of Bitcoin Core). After the hard fork, the price of ZCL collapsed and development on ZClassic appears to have paused. Outside of major chains, PIVX has performed better than its parent, DASH.
Overall, the observation that forked assets consistently underperform their parents runs contrary to the commonly articulated belief that value can easily be forked as users readily switch between networks. A good amount of this phenomenon could be attributed to the fact that forks have failed to attract users and core developers from the original chain and as such have to bootstrap a new community to compete, while the original chain retains its network effects. Unfortunately, it appears that we are too early in the history of these networks to rigorously evaluate this hypothesis.
How do divergences in network values correlate to cryptonetwork ‘fundamentals?’
Divergences in the network value of parent and child chains present an interesting petri dish to further investigate relative valuation metrics, and how the market rationally prices cryptoassets based on their fundamentals (or not).
Comparing network value to transaction value ratios (NVT Ratio) for BTC/BCH and ETH/ETC, using 90-day moving averages for transactions (Figure 4), reveals that the child chains often trade at an NVT premium to the original network assets. Trading at an NVT premium implies the market is valuing the child chains at a higher multiple of transaction value, a pattern which ETC and BCH have displayed relative to ETH and BTC, respectively. The premium cannot be explained by stronger growth, as both child chains have experienced anemic transaction growth, no higher than that of their parent chains. In the case of BCH, the premium is significant, with an NVT ratio 65% greater than BTC’s at the time of writing.
Figure 4: NVT Ratios BTC & ETH
Source: Coinmetrics
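The NVT computation used here can be sketched in a few lines of plain Python (the flat synthetic series below is illustrative only; real analysis would use daily Coinmetrics data):

```python
def moving_average(values, window):
    # Trailing simple moving average; None until enough history.
    return [
        sum(values[i + 1 - window:i + 1]) / window if i + 1 >= window else None
        for i in range(len(values))
    ]

def nvt_ratio(network_values, txn_values, window=90):
    # Network value over a 90-day moving average of daily
    # on-chain transaction value, as in Figure 4.
    smoothed = moving_average(txn_values, window)
    return [nv / s if s else None for nv, s in zip(network_values, smoothed)]

# Flat synthetic series -- illustrative only.
cap = [1.0e11] * 120   # $100B network value each day
txn = [2.0e9] * 120    # $2B transacted on-chain each day
print(nvt_ratio(cap, txn)[-1])  # 50.0
```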
To explore these divergences further, we turn to addresses. Ratios focusing on daily active addresses primarily involve Metcalfe’s law and its variants. As we are simply comparing between assets, the particular formulation is less important and, as such, we use the traditional n-squared formulation. Figure 5 summarizes the natural logarithm of the ratio of market cap and squared active address count for Bitcoin vs Bitcoin Cash as well as Ethereum vs Ethereum Classic.
The results further accentuate the divergence from fundamentals, as both BCH and ETC trade at premiums to BTC and ETH based on this Network Value to Metcalfe’s law Ratio (NVM Ratio). The discrepancy for BCH is particularly striking.
Figure 5: NVM Ratio for BTC & ETH
Source: Coinmetrics
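The NVM calculation is similarly compact. A sketch with made-up market caps and address counts:

```python
import math

def nvm_ratio(market_cap, daily_active_addresses):
    # ln(network value / active_addresses^2), using the
    # traditional n-squared Metcalfe formulation from Figure 5.
    return math.log(market_cap / daily_active_addresses ** 2)

# Made-up figures for a hypothetical parent and child chain.
parent = nvm_ratio(1.1e11, 500_000)
child = nvm_ratio(8.0e9, 60_000)
print(child > parent)  # True -> the child trades at a premium
```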
Arguably, both NVT and NVM are demand-side ratios that capture only one side of the network. Unfortunately, there has been little work on holistic models for cryptonetworks that would allow us to take a closer look at what is driving these divergences from implied fundamentals.
The only two-sided framework I am aware of that attempts to build a composite measure is that of Buraschi & Pagnotta, who develop a model where price and hashrate are jointly determined in equilibrium. [3] Figure 6 fits hashrate and active addresses to this model using the parameters specified by the authors (see pp 19–20) and 30-day moving averages for asset supply and hashrate. Leaving aside a statistical exploration of the fit, we can informally observe that the model does a much better job explaining BTC than BCH price. Using the same parameters for the Bitcoin Cash network, however, the BCH price far exceeds that predicted by the model.
Figure 6: “Satoshi Asset Pricing Model” for BTC & BCH
Source: Coinmetrics, Bitinfocharts
In theory, the similarity between child and parent networks would lead one to expect them to trade in close relative valuation ranges. While the numbers are still rough, and more data is needed, early indications point to child chains consistently trading at relative valuation premiums when compared to the parent.
A few possible explanations for this phenomenon are as follows. First, many holders don’t move the child asset, instead leaving it untouched until they choose to claim it at a later date, which means there is less selling pressure than would otherwise be expected and hence the price stays unnaturally elevated. Second, investors often perceive the child chain as insurance against the parent unraveling, and thus hold more of that asset for investment purposes than would be otherwise expected from the fundamentals. Third, the parent chain “anchors” the value of the child at a higher level than that implicitly justified by the fundamentals. At present, we lack sufficient data to further evaluate the merit of these hypotheses.
Conclusion
While we are still in the early days of understanding the consequences of cryptonetwork forks, a few tentative patterns have begun to emerge. Contrary to the narrative of frictionless forking sucking value away from large networks, child chains to date have struggled to attract demand and developer talent from their parent communities. We can speculate that the inability of almost any child chain to meaningfully appreciate relative to its parent may result from this difficulty of diverting users and developers from parent networks. That said, child chains often trade at relative valuation premiums when compared with parents. While the short history of major network forks precludes more confident assertions, we intend to use this study as a baseline to revisit the economic dynamics of forking as more data becomes available.
Footnotes:
[1] Sorensen Dice is a measure of population heterogeneity commonly used in ecology. Here, it is computed by taking two times the number of common contributors in the set of top 30 contributors for each codebase (ranked by total number of commits) and dividing by the total number of contributors in both sets (i.e. 60).
[2] Three finer points are worth noting in the results. First, ZCash itself has a non-zero Sorensen-Dice coefficient with Bitcoin and Bitcoin Cash, as the project also started as a fork of Bitcoin Core. Second, ZClassic is still technically a branch of the main ZCash codebase on Github and independent development appears to have all but halted as of March 2018, with some development migrating to a second hard fork, Bitcoin Private. Third, while ETC’s and ZCL’s top contributors diverge from the original developer community after their respective hard forks, the contributors to Bitcoin Cash and Bitcoin-ABC’s codebases diverge in advance of the hard fork on August 1st 2017, as work on the client had started earlier in the year (evidenced by the dates of this thread and this github post).
[3] In this model, price is determined by investors choosing to purchase a decentralized network asset that can be exchanged for future network services. The future value of those services depends on the product of expected future users (with value given by Metcalfe’s law) and the probability of network failure (which decreases as a function of current network hashrate). While there are several limitations to the model, it is, to my knowledge, the only one that attempts to bring together both supply and demand factors and explicitly models a two-way relationship between price and hashrate.
From Self-Doubt and Burned Out to Relief and Freedom | Do you try to put your health and wellbeing first, but still find yourself feeling tired, resentful, under-appreciated or overwhelmed at the end of the day? If so, you could be leaving out some key components of wellness that often get overlooked. In this podcast, I share some powerful steps to integrate so you can reclaim your energy and re-calibrate your life.
Are you kind to yourself?
Do you tell yourself the truth about time?
Do you respect your own boundaries and limits?
Do you say no often enough?
These are just a few aspects of self-care that often do not get enough of our attention.
Read that again. These are all aspects of wellness. And I’ve found that more people have blind spots around these topics than around how critical sleep, movement, hydration and nourishment are to good health. Let’s look at some of the hidden ways we neglect our own wellness…
Being kind to yourself.
You may eat healthy and exercise as a means of achieving health and well-being. But what I would like to know is:
How do you speak to yourself? Do you call yourself unkind names? Do you admonish yourself for not being or doing enough? Do you compare yourself to others?
If you answered yes to any of the above questions, you are neglecting a key aspect of your own wellness. As a coach, I really notice how disrespectful and rude many of my clients can be towards themselves.
Having constant negative thoughts and speech about yourself is a form of self-abuse! This is the opposite of self-care.
Be aware of your internal self-talk. I invite you to make a clear decision to speak to and about yourself with kindness and respect, even if you’re the only one listening inside your head. Please speak to yourself the way you would address someone you love and appreciate!
One way to practice positive self-talk is to attach it to an existing habit. After all, our internal self-talk is simply a habit, so it can be changed. I recommend coming up with five ways you are proud of yourself while you are brushing your teeth every night. Not just one or two, but stretch beyond what’s comfortable. I know you can find five things each day and it’s ok if they are small. This is a great way to begin developing a consistent practice of positive self-talk.
Being honest about time.
This is a big one! Most of us are terribly negligent in the amount of time we give ourselves to do things. We block out a 1-hour slot on our calendar for a 1-hour meeting. What we don’t consider is:
The drive time (to and from the appointment)
Finding a parking space
Pre-meeting prep time, material gathering and organization
Post-meeting clean-up, follow-up and organization
One of the biggest problems with overlooking the items above is that we then beat ourselves up for being late, unprepared, or disorganized. It is not time that needs to be managed. It is you. Self-management involves acknowledging the truth about time… how much there is, and how much is truly required to accomplish the things on your schedule.
Commit to putting the real time involved in your daily activities on your schedule so you can feel good about getting it all done without feeling frazzled or inadequate. Otherwise, you’ll need to refer back to the first item in this list!
Respecting your own boundaries and limits.
You can’t value your gifts, talents and accomplishments when you aren’t honest about your limits.
YOU CANNOT DO EVERYTHING RIGHT NOW!
Leave yourself some room to feel good by being honest about the time and resources you have. Having unrealistic expectations always leads to disappointment.
Recognizing the finite nature of things, i.e., time, resources, energy, etc., allows you to appreciate the things you do get done rather than thinking it’s never enough.
One way to accomplish this is to evaluate your week honestly. Assess what is and isn’t working. Set some standards and adjust when necessary.
And don’t be afraid to make a request when you need something. Asking for help is not a sign of weakness. It simply means you are being self-aware and honest.
Women, in particular, can grow their wellness and satisfaction by learning how to acknowledge and articulate their needs. I say women, because we aren’t generally taught this and, depending on your age, it hasn’t been modeled for us, despite how many roles we take on. Being able to make a request from a co-worker, friend, partner or family member, without overly explaining or justifying, is a valuable skill to have!
Saying no
80% of yeses are incomplete, meaning there is some hesitation.
If you are hesitant about saying yes to something for any reason, you actually mean no. Saying yes to things, when you want to say no, leads to burnout and resentment.
Only say yes when you are 100% sure. I call this the complete yes.
All of the above practices require self-awareness, and a true commitment to your mental and emotional well-being. Committing to respectful self-talk, honesty about your time and limitations, and saying no in an effort to honor your needs will make space for wellness, and a new kind of productivity that comes from living a more balanced life. When you aren’t giving from the fumes of an empty well, you have more to offer.
…………………………………………………………………………
Is your work situation uncertain or frustrating you? Are you without a job or wisely thinking a current furlough may be just the hidden gift to start exploring work you’re truly meant to do? Do you hate your job, but have no idea what to do instead? Attempting to navigate those waters without support is not fun (yes, I do know, but that’s another story). I’m excited to announce that I’ve created The Job I Love Toolkit, with all the resources you’ll need to finally clarify how to get paid to do you™. To be the first to hear more details, join the VIP Wait List. And if you know a friend or neighbor who could use hearing the advice in this article or needs The Job I Love Toolkit, please forward this to them.
…………………………………………………………………………..
Feeling burnt-out in your career? Take the Career Burnout Quiz http://CareerBurnoutQuiz.com to uncover what’s working and what’s definitely not. Receive customized feedback and tips tailored for your situation to start on your path to an improved work life and career you love. | https://medium.com/@internalgroove/from-self-doubt-and-burned-out-to-relief-and-freedom-2861ed7b134a | ['Barb Garrison'] | 2020-11-24 19:30:58.412000+00:00 | ['Freedom', 'Career Wisdom', 'Burnout', 'Self Care', 'Feel Better'] |
K-Means Clustering | K-Means Clustering is an Unsupervised Machine Learning Algorithm, Which is used for the Classification Problem.
Content
1. Definition
2. Working of K-Means
3. Elbow Method
4. Assumptions in K-Means
5. Advantages of K-Means
6. Disadvantages of K-Means
7. Applications of K-Means
8. References
Definition :-
K-Means separates unlabeled data into different groups (also known as Clusters), on the basis of similar features and common patterns.
K-Means Clustering Algorithm
It is an Iterative Algorithm, which divides the whole dataset into K clusters (or subgroups) based on the similarity of the data points and their distance from the centroid of each cluster.
Working of K-Means Algorithm :-
Following are the steps which explains the working of the K-Means :-
Step 1 : Using the Elbow Method, calculate the optimal value of K to choose the number of clusters.
Step 2 : Randomly initialize the K points ( or say Centroids ) on the datasets.
Step 3 : All data points should be assigned to their closest centroid.
Step 4 : Calculate mean value and place a new centroid to each cluster.
Step 5 : Repeat Step 3 and Step 4, till no further reassignment occurs.
Step 6 : Following are few criteria based on which we should stop K-Means Algorithm :-
a . The newly formed centroids do not change
b . Points remain in the same cluster
c . The maximum number of iterations is reached
Working of K-Means Algorithm
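The steps above can be sketched in plain Python. This minimal version initializes centroids from the first K points so the example is deterministic; real implementations initialize randomly (e.g. with k-means++):

```python
def k_means(points, k, max_iters=100):
    # Step 2: initialise k centroids. For a deterministic sketch we
    # take the first k points; real implementations pick randomly.
    centroids = points[:k]
    clusters = []
    for _ in range(max_iters):
        # Step 3: assign every point to its closest centroid.
        clusters = [[] for _ in range(k)]
        for px, py in points:
            nearest = min(
                range(k),
                key=lambda i: (px - centroids[i][0]) ** 2 + (py - centroids[i][1]) ** 2,
            )
            clusters[nearest].append((px, py))
        # Step 4: move each centroid to the mean of its cluster.
        new_centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        # Step 6a: stop when the centroids no longer change.
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

# Two well-separated toy blobs.
points = [(1, 1), (1.5, 2), (1, 0.5), (8, 8), (9, 9), (8.5, 9.5)]
centroids, clusters = k_means(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```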
Elbow Method :-
One of the most important steps in K-Means unsupervised Machine Learning Algorithm is to determine the optimal value of K and we can do so by using the elbow method.
Suppose we run K-Means for K = 1 to 10. For each value of K, we calculate WCSS, where WCSS stands for Within-Cluster Sum of Squares. WCSS is the sum of the squared distances between each data point and the centroid of its cluster.
Within Cluster Sum of Square Equation
When we plot WCSS against the value of K, the curve looks like an elbow.
Elbow Method
After analyzing the graph, we notice that the rate of decrease changes sharply at one point, creating an elbow shape. The K-value at this point is considered the optimal value of K (the optimal number of clusters).
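WCSS itself is straightforward to compute. A small sketch comparing K = 1 and K = 2 on a toy dataset, showing the sharp drop that produces the elbow:

```python
def wcss(clusters, centroids):
    # Within-Cluster Sum of Squares: squared distance from each
    # point to its own cluster's centroid, summed over all clusters.
    total = 0.0
    for cluster, (cx, cy) in zip(clusters, centroids):
        for px, py in cluster:
            total += (px - cx) ** 2 + (py - cy) ** 2
    return total

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]

# K = 1: a single centroid at the overall mean.
one = wcss([pts], [(5.0, 5.5)])
# K = 2: one centroid per true blob -- WCSS drops sharply.
two = wcss([pts[:2], pts[2:]], [(0.0, 0.5), (10.0, 10.5)])
print(one, two)  # 201.0 1.0
```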
Assumptions in K-Means :-
Following are the few assumptions of K-Means Algorithm :-
1. K-Means assumes that clusters are spherical.
2. The prior probability for all K clusters is the same, which means all clusters have approximately the same number of observations. In simple words, it assumes that clusters are of similar size.
Advantages :-
Following are the few advantages of K-Means :-
1. The K-Means Algorithm is simple to implement.
2. It is scalable to large datasets.
3. It can easily adapt to new examples.
4. It generalizes to clusters of different shapes and sizes, such as elliptical clusters.
Disadvantages :-
Following are the few disadvantages of the K-Means :-
1. K-Means is sensitive to outliers.
2. The optimal value of K must be chosen manually.
3. As dimensionality increases, scalability decreases.
4. It does not perform well with clusters of different sizes and densities.
Application :-
Following are few application of K-Means :-
Recommendation System
Customer Segmentation
Crime Hot-Spot detection
Optical Character Recognition
References :- | https://medium.com/@imakash3011/k-means-clustering-ef8e9258d76a | ['Akash Patel'] | 2021-06-17 04:43:02.813000+00:00 | ['K Means', 'Machine Learning', 'K Means Clustering', 'Data Science', 'Elbow Method'] |
How to Overcome the 7 Scariest Risks to Your Retirement | Life is full of surprises, many of the unpleasant variety.
As humans, we’re far more committed to avoiding pain or failure than we are to achieving great results. In professional terms, that’s called being “risk averse.”
The more important a goal, the less willing we are to fail, which makes us less willing to take risks, even calculated ones. The problem is that by doing this, we cheat ourselves of bigger wins.
If you want bigger wins in life, here’s a simplified version of what I learned from NASA on managing risk, and how you can apply it to your retirement plan, since “Failure is not an option!”
What Is Risk And How To Manage It
In the simplest terms, risk is the possibility that something will go wrong. Managing risks requires 5 important steps.
1. Identify your risks
2. Assess each risk’s likelihood and consequence
3. For risks with acceptable likelihood and consequence, accept they may happen and don’t worry
4. For risks with unacceptably high likelihood and/or consequence, if feasible, change your plan to prevent the risk; if not feasible, craft a mitigation plan that reduces likelihood, consequence, or both
5. Reassess each risk’s post-mitigation likelihood and consequence to ensure they’re now acceptable
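As a rough illustration of the first four steps, here is a toy risk register in Python. The risks, 1–5 scales, and threshold are all made up for the example:

```python
# Step 1: identify risks. Each entry is (name, likelihood 1-5,
# consequence 1-5) -- all values here are made up for illustration.
risks = [
    ("Bear market early in retirement", 3, 5),
    ("Healthcare cost spike", 2, 4),
    ("Minor car repair", 4, 1),
]

THRESHOLD = 8  # scores above this are unacceptable

# Steps 2-4: score each risk and decide to accept or mitigate.
for name, likelihood, consequence in risks:
    score = likelihood * consequence
    action = "mitigate" if score > THRESHOLD else "accept"
    print(f"{name}: score {score} -> {action}")
```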
Why Retirement Planning Is So Challenging
Planning for retirement is one of the biggest personal finance challenges you face for two reasons.
First, a comfortable retirement requires a LOT of money, and since most of us don’t make enough to satisfy all of our current needs and wants plus enough to save “a LOT of money,” that saving must come at the expense of satisfying current wants — delayed gratification anyone?
Second, as physics Nobel laureate Niels Bohr once quipped, “Making accurate predictions is very difficult, especially about the future,” more so when that future is decades away, and even more so when you have to predict with plausible accuracy both how the market will fare until (and in) retirement, and how much you’ll need for a comfortable retirement.
Just listen to the financial talking heads and you’ll quickly realize that none of them can predict market returns with any degree of accuracy for any specific day, week, month, year, or even decade.
According to a US Bank study, 41% of Americans say they use a budget. This means 59% don’t have a budget for their current spending, so how could they possibly have a clue about how much they’ll need in retirement?
The 7 Scariest Risks To Your Retirement Plan and a 5-Step Plan to Overcome Them
Let’s start working on the 5 steps of managing the risks to your retirement.
Step 1. Identify The Risks.
Risk #1: Market Risk
As mentioned above, even the experts can’t predict market returns, but if history is any guide, over the course of a multi-decade retirement you should expect 10%+ “corrections” (read “losses”) every 2–3 years, and a bear market (20%+ loss) every 6–7 years.
On average, bear markets lasted 14 months, had an average loss of 33%, and took just over 2 years to return to the pre-bear-market value. When your retirement income depends in large measure on the size of your portfolio and its returns, suffering a 33% loss can be devastating.
Risk #2: Sequence-Of-Returns Risk
In the simplest terms, sequence-of-returns risk is the risk that the market will drop like a rock just as you start your retirement. The result would be that you’d need to sell when the market is low to fund your retirement needs in the first few years, and this loss would have decades to reverberate through your portfolio.
To demonstrate this, let’s consider two scenarios of what your first decade in retirement might look like.
In both cases you retire with a $1 million portfolio, and draw $40,000 a year (for simplicity, let’s assume there is no inflation). In Scenario 1, you experience 6% annual losses in the first 3 years of retirement, followed by 9% annual gains in each of the following 7 years. In Scenario 2, you experience the same returns, but in opposite order. First the 7 years of 9% gains, then the 3 years of 6% losses.
Mathematically, if you neither contribute to your portfolio nor withdraw from it for this decade, the ultimate results would be identical in the two scenarios. However, since you are in retirement in this example and are drawing money out of your portfolio, things don’t work out like that.
As you can see in the graphic, despite having an identical starting point and the same average annual return of 4.26% over the decade in question, you end this decade with a 7.7% loss in Scenario 1 vs. a 7.9% gain in Scenario 2. Just changing the order of returns results in a $155,870 difference in outcome!
Example of how sequence of returns can cause a retirement portfolio to lose 7.7% over a decade with losses at the start vs. gaining 7.9% if those same losses occur at the end of the decade, all with the same average 4.26% annual gain in that decade.
Your sequence-of-returns risk is the risk that your early retirement will be more like Scenario 1 and less like Scenario 2.
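The two scenarios are easy to reproduce. This sketch withdraws the $40,000 at the start of each year and then applies that year's return, which matches the figures quoted above:

```python
def retire(start_balance, annual_draw, yearly_returns):
    # Withdraw at the start of each year, then apply that
    # year's return to whatever remains.
    balance = start_balance
    for r in yearly_returns:
        balance = (balance - annual_draw) * (1 + r)
    return balance

losses_first = [-0.06] * 3 + [0.09] * 7   # Scenario 1
gains_first = [0.09] * 7 + [-0.06] * 3    # Scenario 2

s1 = retire(1_000_000, 40_000, losses_first)
s2 = retire(1_000_000, 40_000, gains_first)
print(round(s1), round(s2), round(s2 - s1))  # 923123 1078993 155870
```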
Risk #3: Interest Rate Risk
During retirement, as well as in the last few years approaching it, retirees mitigate market risk by diversifying a significant portion of their portfolio into fixed-income assets such as treasury notes, certificates of deposit (CDs), and savings accounts.
This results in a risk that if interest rates drop as they have in recent years, retirees can’t count on as much money from interest payments, and when they have to reinvest this portion of their portfolio because a CD or treasury note reached maturity, their income may suddenly drop significantly.
This results in having to spend more of your principal, the money you’re counting on to produce more money for later years.
Risk #4: Inflation Risk (Especially Healthcare Inflation)
Historically, most years see prices increase compared to prior-year prices. This is called inflation, and is most often quoted based on the US Department of Labor’s (DOL) Consumer Price Index (CPI). This CPI is calculated based on monthly data on prices of a basket of goods and services.
In the worst rolling 20-year period between 1926 and 2018, the dollar lost over 70% of its value! In the average 20-year period, it still lost nearly 53%.
DOL also calculates a so-called “CPI-E” where the “E” stands for “elderly.” The CPI-E uses a slightly different basket, with greater emphasis on healthcare costs, for obvious reasons. Since 1981, the annual CPI averaged a 2.8% increase, whereas the CPI-E increased by an average of 3.1%. The extra 0.3% a year is due in large part to the faster price increases in healthcare, which averaged 5% a year since 1981.
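The purchasing-power erosion behind numbers like these is simple compounding, assuming a constant annual inflation rate. A sketch using the CPI-E's 3.1% average and healthcare's 5% as inputs:

```python
def purchasing_power_loss(annual_inflation, years):
    # Fraction of a dollar's purchasing power lost to
    # compounding inflation at a constant annual rate.
    return 1 - 1 / (1 + annual_inflation) ** years

print(round(purchasing_power_loss(0.031, 20), 3))  # 0.457 -> ~46% lost at the CPI-E average
print(round(purchasing_power_loss(0.05, 20), 3))   # 0.623 -> ~62% lost at healthcare's 5%
```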
The risk here is that prices start increasing more rapidly when you’re in retirement, which puts pressure on your budget.
Risk #5: Investor Behavior Risk
As Ben Le Fort points out in his excellent article about how investors sabotage their own portfolio performance, investors as a group do extremely “well” at buying high and selling low — a great way to lose out on good investment returns.
Le Fort quotes a Morningstar study showing that on average, over the 3-, 5-, and 10-year periods ending Dec 31, 2012, investors under-performed the mutual funds they owned by 0.23%, 0.53%, and 0.95%, respectively (see graphic).
Investor underperformance over time relative to the performance of the funds in which they invested.
One wonders why investor under-performance becomes so much worse as the investment period lengthens. It could be a matter of our emotional behavior — fear, greed, and overconfidence — having more time to wreak havoc on our investments, in which case, you’d expect under-performance to become much worse as the investment period grows to 20, 30, 40, or more years.
Alternatively, it could be driven by market volatility over the specific period in question. As shown below (data from Yahoo! Finance), market gyrations were worse over the 10-year period ending Dec. 31, 2012 compared to the 5-year period, and worse for that vs. the 3-year period ending on that same date.
If this is the reason for the above-shown trend, we could expect under-performance over very long periods to be similar or slightly worse compared to its 10-year level.
3-year S&P 500 monthly closing values from 2010 to 2012
5-year S&P 500 monthly closing values from 2008 to 2012
10-year S&P 500 monthly closing values from 2003 to 2012
What This Means For Your Investment Results
To bring all this into perspective, here’s how badly investors hurt themselves through their emotional investing behavior.
The graphic below compares the result for two hypothetical investors who each invest $12,000 at the end of each year from age 25 to 65, where one simply invests and allows the funds’ hypothetical 7.05% annual return to work its magic, while the second tries to time the market and ends up with a 6.1% annual return while investing in those same funds.
Investor long-term returns can lag the returns of the mutual funds in which they invested by almost 25%
Over this hypothetical 40-year period, the active investor’s portfolio reaches a healthy $1.9 million. Not too shabby. However, his buy-and-hold friend’s portfolio reaches $2.4 million investing the same dollar amounts in the very same funds!
The buy-and-hold investor outperforms his active friend by a total of 27%, ending up with over $520,000 more in his portfolio, allowing him to draw an extra $21k a year in retirement according to the 4% rule (though this rule may need to be updated).
Investing “from your gut” can cost you 25% over your lifetime relative to staying the course, even if investing in the exact same funds.
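The gap between the two hypothetical investors is straightforward to reproduce with a future-value loop (this assumes 40 end-of-year contributions, which matches the article's figures):

```python
def future_value(annual_contribution, annual_return, years):
    # End-of-year contributions compounding at a constant rate.
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + annual_contribution
    return balance

buy_and_hold = future_value(12_000, 0.0705, 40)
market_timer = future_value(12_000, 0.0610, 40)

print(round(buy_and_hold), round(market_timer))  # roughly $2.4M vs $1.9M
print(round(buy_and_hold - market_timer))        # roughly a $520k gap
```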
Risk #6: Longevity Risk
This is the risk that you will outlive your money, one of a retiree’s worst fears.
Over the past several decades, our life expectancy in retirement has increased by many years. According to the Social Security Administration (SSA), a worker turning 65 in 1940 was expected to survive about 14 years in retirement. That same number for workers turning 65 in 1990 was over 17 years (averaging male and female life expectancies).
Using the SSA’s life expectancy calculator, that number is currently just over 20 years. This shows that life expectancy at age 65 increased by nearly 43% in the last 79 years. More importantly, half of people reaching age 65 now will survive longer than 20 years in retirement.
The SSA’s actuarial table shows that a couple turning 65 today has about a 50% chance that at least one of the two will survive to age 90, a 20% chance that at least one reaches age 95, and nearly a 5% chance at least one lives beyond age 100. Research also shows that the SSA’s numbers underestimate the life expectancy of people who are more affluent than average.
Risk #7: Health Risk
According to Fidelity, the average 65-year-old couple who retired in 2019 should expect to spend $285,000 for medical expenses in retirement, excluding long-term care. This number doesn’t take into account any inflation in healthcare costs.
Knowing this, you can account for it when crafting your retirement plan. You can use a NerdWallet calculator to estimate your own median expected retirement healthcare costs.
However, the above estimate is the average. The health risk is that you and/or your spouse may suffer some severe health-related crisis or chronic condition that increases your healthcare costs significantly above the average.
A serious illness or accident that puts one of you in the hospital for a long period could cost you hundreds of thousands of dollars in hospital bills, prescription drugs (especially if these aren’t part of the so-called “formulary” of commonly prescribed drugs), and so on. Having a long-term chronic condition that requires in-home support for many years could similarly devastate your retirement plan.
Now that we’ve identified the 7 risks, we can finally move on to Steps 2–5…
Step 2. Assess Each Risk’s Likelihood And Consequence.
Now that we know what the risks are, let’s use a NASA tool, called a “risk matrix,” to figure out how scary each one really is. I identify each risk by its number above, and place it in what I think is an appropriate rubric.
A 5x5 risk matrix, or “fever chart,” showing the likelihood vs. consequence of the 7 scariest risks if you don’t do anything to prepare and overcome them
Step 3. Accept Acceptable Risks.
Here, with all risks in yellow or red regions, no risk should be viewed as acceptable.
Step 4. Craft Risk-Avoidance And Mitigation Strategies.
Here are some mitigations you should seriously consider implementing to address these 7 scariest risks to your comfortable retirement.
Mitigating Market Risk
The first and most important thing you can do to mitigate the risk that your portfolio will drop in value is to make it as large as possible in the first place.
This means investing for retirement as much as you can right now (even if that’s only 1% of your income), and allocating at least half of each raise, each bonus, each cash gift, and each bequest you receive. Because you’re setting aside only half of each of these “found money” windfalls rather than all of them, you’re far more likely to stay on track.
Especially as you near retirement, shift your investments so your market risk is reduced. This doesn’t mean moving your entire portfolio permanently out of the stock market and into bonds, CDs, and savings or money market accounts. That would dramatically increase your risk that inflation will eat away at your portfolio’s value, potentially by more than half.
You can consider your age and likely longevity. Some experts recommend subtracting your age from 120 and using that to determine the percentage of your portfolio that you should keep in equities. At age 65, that would be 55%.
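As a quick illustration of that rule of thumb (a heuristic only, and the 120 base is just one common variant):

```python
def equity_allocation(age, base=120):
    """Rule-of-thumb equity percentage: base minus age, clamped to 0-100."""
    return max(0, min(100, base - age))

print(equity_allocation(65))  # 55
print(equity_allocation(40))  # 80
```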
However, some researchers recommend moving almost completely out of equities when you’re close to retirement, and then gradually moving money back into stocks and stock funds (see also the mitigation for sequence-of-return risk).
Mitigating Sequence-Of-Returns Risk
To mitigate this risk, as mentioned above, you could move nearly all your investments out of equities as you approach retirement, and gradually move back into equities over the first 5–10 years of retirement.
This will minimize the likelihood and consequence of a potential bear market hitting just as you retire. It will increase the likelihood and consequence of inflation risk, but by staying out of the market for only a few years, that won’t have as devastating an impact on your retirement as would being fully invested in equities and experiencing a major bear market just as you start drawing from your portfolio.
Another mitigation is to at least have 1–2 years’ worth of retirement expenses in a highly liquid and low risk asset such as money market fund or high-interest savings account. If the market tanks, you can draw from that, and if it soars, you can draw from your equity position and/or from stock dividends and bond coupon payments.
Mitigating Interest Rate Risk
To mitigate interest rate risk, you can purchase immediate annuities or single-premium deferred annuities (SPDA) when interest rates are higher. Choosing a fixed interest option will keep your interest rate from dropping if market interest rates drop or if the stock market underperforms. You can also allocate at least a portion of the equity portion of your portfolio to high-dividend stocks and funds investing in those.
Mitigating Inflation Risk (Especially Healthcare Inflation)
To mitigate against inflation risk, you can invest in assets that typically grow faster than inflation, such as equities; and ones that are guaranteed to outpace inflation, such as Treasury Inflation-Protected Securities (TIPS).
Mitigating Investor Behavior Risk
This is one mitigation that’s entirely up to you. The first step is to honestly assess how big a drop in your portfolio value you’d be willing and able to ignore and stay the course. Then, allocate your investments between equities and assets with lower risk accordingly.
Next, stop obsessing over the markets and don’t listen to all the so-called market mavens. They don’t know any better than you how well the markets will perform tomorrow, or next week, month, or year.
Warren Buffett wrote in one of his famous annual letters to Berkshire Hathaway shareholders that he doesn’t understand why the same people who would delight in a half-off sale at their favorite store in the mall would panic when the market drops by 50%, giving them a golden opportunity to buy great companies’ stock on sale.
When the markets tank, don’t sell in a panic. In fact, consider if it’s a great buying opportunity, and possibly move more money into equities. Conversely, when markets soar, consider taking some profit by rebalancing your portfolio to your planned equity allocation.
If needed, use a financial advisor to help you restrain yourself from figuratively shooting yourself in the foot, by panic-selling in a bear market, which only serves to lock in your losses.
Mitigating Longevity Risk
To mitigate against the risk that you outlive your money, make sure you draw a percentage of your portfolio that’s likely to allow a safe multi-decade retirement. The famous 4% rule suggests drawing 4% of your portfolio in your first year in retirement, and adjusting that dollar amount each year by the previous year’s inflation rate. While historical data up to the 1990s showed this would offer a safe method for a 30-year retirement, as I wrote elsewhere, a study by David Blanchett shows that a more appropriate initial draw based on forward-looking estimates would be 3%.
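The 4% rule’s mechanics are simple to sketch. In this hypothetical, the initial draw percentage is applied once, and only inflation adjusts the dollar amount afterward:

```python
def draw_schedule(initial_portfolio, initial_rate, inflation_rates):
    """Dollar draws: a first-year draw, then inflation-adjusted each year."""
    draw = initial_portfolio * initial_rate
    draws = [draw]
    for inflation in inflation_rates:
        draw *= 1 + inflation  # adjust by the previous year's inflation rate
        draws.append(draw)
    return draws

# $1M portfolio, 4% initial draw, then two years of 2% and 3% inflation
print([round(d, 2) for d in draw_schedule(1_000_000, 0.04, [0.02, 0.03])])
# [40000.0, 40800.0, 42024.0]
```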
Further, you can increase that percentage by growing the portion of your wealth that provides guaranteed income (e.g., Social Security, fixed annuities, pensions, etc.), and decreasing the fraction of your retirement spending that’s non-discretionary (e.g., mortgage payments, utilities, car loan payments, etc.). The latter will give you more freedom to reduce your spending in down years, preserving more of your portfolio’s value.
A fixed annuity, whether an immediate annuity or an SPDA, will increase your guaranteed income for life, reducing the likelihood and impact of longevity risk. This also helps reduce market risk, and if the annuity has an inflation-adjustment rider, it can mitigate inflation risk as well.
Finally, consider delaying your Social Security benefits, claiming them as late as age 70 if you can afford it, which will increase your benefits the most. Claiming benefits at the early retirement age of 62 will cut your benefits by 30% permanently relative to your full-retirement-age benefits. On the other hand, delaying beyond your full retirement age (FRA) permanently increases your benefits by about 8% for each year of delay.
If your FRA is 67, claiming Social Security at age 70 will increase your benefits by about 77% compared to claiming at age 62.
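The claiming math works out as follows, using the article’s round numbers for an FRA of 67 (the SSA’s actual reduction formula is computed month by month, so treat this as an approximation):

```python
fra_benefit = 1.0                     # normalized benefit at full retirement age (67)
at_62 = fra_benefit * (1 - 0.30)      # 30% permanent cut for claiming at 62
at_70 = fra_benefit * (1 + 0.08 * 3)  # ~8% per year of delay, for 3 years past FRA
increase = at_70 / at_62 - 1

print(f"Claiming at 70 pays about {increase:.0%} more than claiming at 62")
```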
Mitigating Health Risk
The best way to mitigate your health risk is to shift your lifestyle toward a healthier diet and regular exercise (which could be simply walking for an hour each day, but would ideally include workouts several times a week). Make sure you have adequate health insurance, including dental coverage, since Medicare excludes most dental care.
If you have a high-deductible health insurance policy, it may make you eligible for investing in a Health Savings Account (HSA). This account has better tax advantages than even the best retirement plan.
Contributions are tax deductible, you can invest the money and its growth isn’t taxed, and withdrawals used for qualified medical expenses are tax-free. If you contribute to an HSA each year you can, and never withdraw from the account until you retire, you reduce the after-tax cost of your healthcare.
Finally, consider also buying long-term-care insurance to avert devastation of your savings if you need in-home care for years. If you do buy this coverage, purchasing it long before you’re likely to need it, say in your fifties, will make it less expensive, and reduce the risk of developing health problems that cause insurers to decline to sell you such a policy.
Step 5. Reassess Each Risk’s Post-Mitigation Likelihood And Consequence.
If you successfully and effectively apply all the above mitigations, your risk “fever chart” may look more like the final graphic below.
Here, all risks are down to the “safe” green zone. This isn’t to say that your retirement plan is 100% safe from any risk. If nothing else, there could be a worldwide economic meltdown that devastates all asset classes at the same time, forcing you to draw down your portfolio. Another round of stagflation like the 1970s could hit, bringing a combination of high inflation with stagnant or even negative economic growth.
Note added: I first published this article in October 2019, several months before any of us had heard of the so-called “novel coronavirus.” The currently ongoing COVID-19 pandemic had a massive economic impact on many people, even if they never caught the virus. Had the government not spent many trillions of dollars to shore up the economy, it might well have been worse than the Great Depression.
However, following the steps outlined above could dramatically reduce the likelihood of your retirement plan going off the rails, and if bad things happen, the impact on your retirement will be much easier to bear.
The same 5x5 risk matrix, or “fever chart,” this time showing the likelihood vs. consequence of the 7 scariest risks if you do everything you can to prepare and overcome them
The Bottom Line
As I said at the start, life is full of surprises, including many unpleasant ones. This is just how things are. However, if you prepare for the biggest risks and mitigate them, your retirement will likely be more comfortable.
About the Author
Opher Ganel has set up several successful small businesses, including a consulting practice supporting NASA and government contractors. His most recent venture is a financial strategy service for independent professionals (where you can sign up for his biweekly newsletter and get some nifty free PDFs). You can also connect with him by following his Medium publication, Financial Strategy.
Disclaimer
This article is intended for informational purposes only, and should not be considered financial advice. You should consult a financial professional before making any major financial decisions. | https://medium.datadriveninvestor.com/how-to-overcome-the-7-scariest-risks-to-your-retirement-234180930406 | ['Opher Ganel'] | 2020-10-18 17:34:50.662000+00:00 | ['Retirement', 'Investing', 'Risk', 'Finance', 'Money'] |
Blind Date: The Reality of Software Engineers Hiring Process | Blind Date: The Reality of Software Engineers Hiring Process
I feel like we know each other for years
Have you ever been on a blind date? If you have, I am sure you remember the feeling of “what am I doing here?” during the date, followed by the “what was I thinking when I said yes” on the way home. I can hear you saying “some great love stories started with a blind date”. Yes, I know. But is blind dating an efficient way to find love?
Efficiency is a core concept in software engineering. We have sophisticated methods and fancy tools that help us drive efficiency in almost every process. Hiring is an exception.
I assume we can agree that building a strong team is critical for our success and that hiring the right people is a core element in building a strong engineering team.
Somehow when it comes to our hiring process, analytical methods & efficiency are moved aside, and instincts & gut feeling based decisions kick in.
The situation doesn’t make more sense from the candidate’s perspective. You are about to make a very important decision that will impact your career path. On a more immediate level, the decision you are about to make will define what you will be doing for most of your waking hours, and who the people are that you will be spending this quality time with.
Basically, both sides are looking for a good match. Each side has its own criteria for what a good match is, but the hiring process isn’t really tuned for providing confidence on the match criteria.
In this article, I will dive into this topic, based on ~20 years of experience both as a candidate and as a hiring manager. We will begin by describing the desired match criteria, which are basically the signals each side wants to collect during the hiring process. I will then describe what usually happens in reality and how far it is from satisfying the needs of both sides. Finally, I will try to imagine what a better world would look like, a world in which our first day on a new job won’t feel like a blind date.
Ruby on Rails?
What is the candidate looking for?
Meet David. David is a 32-year-old software engineer. He has a computer science degree from a good university plus 6 years of hands-on experience working as a developer. His first job was at a large enterprise, where he spent 4 years working on a B2B SaaS product. In his first job, David had a chance to work with experienced engineers and learn things he still considers the foundation of building software the right way.
After 4 years, David felt it was time to move on. He wanted to work for a smaller company where he believed he could make a bigger impact. He also felt his compensation was a bit low, and he was frustrated that it was driven by the company’s job leveling system rather than by his actual performance (BTW: compensation of long-tenured engineers will be the topic of one of my next articles).
So David returned his badge, posted a nice goodbye message on LinkedIn, and moved on to his next challenge. He joined a small startup that was aiming to “change the world of eCommerce forever”. The CEO’s pitch made perfect sense (something with the word “democratize”) and David felt that joining a “rocketship” company as employee number 25 couldn’t be a bad decision. Especially considering the fact he was granted “10K stock options” of what will probably soon become a unicorn or even the next Amazon/Shopify/Some huge company with a typo in the name.
The reality of working at this small startup was very different than what David expected. He joined a team of 9 developers, all reporting directly to the CTO, who was also one of the founders. Everyone was too busy all the time, rushing to complete projects that David knew nothing about. Generally speaking, there was very little communication and cooperation in the team. Like his teammates, David was assigned to “lead” a project (he was also the only person working on this project). The context of the project and the business requirements were blurry, but he did get a very clear deadline of one month to “have this thing in production”.
The Architecture (Photo by Klara Kulikova on Unsplash)
David tried to get some context and learn the product by going into the source code, but this proved to be a bad idea because what he found there was a huge pile of spaghetti. Turns out that technology stack, architecture, and design patterns are not considered to be important here, or at least not as important as delivering features quickly. The one thing that the team was really good at was generating technical debt. These were just early signs of a much broader problem. It didn’t take too long until David understood that taking this job might have been a mistake. After almost 2 years of trying to make the best out of this situation, David finally decided that he must move on and find a new job.
This time, David decided to take a different approach to his job search. He created a list of the 12 aspects that are most important for him in the next job. For each position he will be interviewed, he will rate each of these aspects (on a scale of 1 to 10) based on signals collected during the hiring process. Here is David’s list:
1. Technology
2. Teammates
3. Direct Manager
4. Ability to Impact
5. Development Process
6. Learning Opportunities
7. Culture
8. Product
9. Company
10. Compensation
11. Work-Life Balance
12. Personal Growth Opportunities
Running away was easy; not knowing what to do next was the hard part
What is the hiring manager looking for?
Meet Sara. Sara is an R&D team leader in a successful SaaS company. Her team is responsible for building and operating an application for analyzing user behavior in web & mobile applications. This app is basically processing clickstream data from millions of users, transforming it into actionable insights, and offering those insights via a sexy web GUI as well as via a GraphQL API. Cloud-native, big data, AI-driven. As cool as it gets.
Like most teams in the engineering org, Sara’s team is structured as a feature team. The team she is leading contains 4 full-stack developers, 1 test automation developer, and 1 DevOps engineer. Based on next year’s strategic initiatives and budget plan, Sara got approval to increase the HC and add an additional full-stack developer to her team. Great news.
What would be a good hire? that’s a great question. Sara thinks about her current team. What kind of engineer would make us better? Of course, the dream is to bring in a 10x engineer that would boost the entire team, but that’s easier said than done. Like most team managers, Sara ended up building a match profile based on the people she currently has in the team. “It would be great if we can hire someone like Ben” she tells the recruiter. “He is a good coder, but more important, he is proactive and sincere”.
Much like David’s list, Sara also has a list of 12 aspects that defines the developer that would be a good match for her team. For each candidate (Including our friend David), Sara is going to rate each of these aspects (on a scale of 1 to 10) based on signals collected during the hiring process. Here is Sara’s list:
1. Technical Skills
2. Team Player
3. Coachability
4. Passion to Make an Impact
5. Understanding of SDLC
6. Passion to Learn & Improve
7. Culture Fit
8. Communication Skills
9. Attention to Details
10. Responsibility & Ownership
11. Deal with Pressure
12. Potential to Grow
git clone? (Photo by Cleyton Ewerton on Unsplash)
What happens in reality?
The short answer is that reality sucks. Both David (the candidate) and Sara (the hiring manager) know what they are looking for. They even have a rating system to measure each opportunity and find out whether it’s a good match or not.
Amazingly, the two lists they created are almost a perfect reflection of each other. So all we have to do is create a hiring process that allows each of them to collect clear signals for each of the aspects on their lists. Sounds trivial, right?
It turns out it's not trivial. In reality, most software engineers hiring processes that I have been part of (both as a candidate as well as a hiring manager) had at least 7 phases that involved at least 5 stakeholders from the hiring company:
1. Initial Screening: Done by a recruiter, based on the way she understood the match profile
2. Team Leader Interview: Technical interview done by the hiring manager
3. Technical Home Assignment: Explained and reviewed by the hiring manager & a senior developer/system architect
4. Engineering Director Interview: Done by the hiring manager's manager
5. HR Interview: Done by the HR partner of the engineering org
6. Reference Calls: Done by HR & the hiring manager
7. Job Offer: Done by HR
This is the bare minimum. I have seen companies that had additional phases or that split the second phase (team leader interview) into multiple technical interviews. But the length of the process is not where the problem is.
The real problem is the lack of mapping between the different phases of the hiring process and the match criteria aspects each side wanted to validate.
On the hiring company side, it is not always clear on which phase, how, and by whom should each match criteria be evaluated. Most of the questions being asked during the different interviews aren’t crafted for getting strong signals on specific match criteria. I have also seen cases where the same question (“tell me more about that system you built in your previous role”) was asked by multiple people in multiple interviews, as well as cases in which the same questions are used for both junior and senior developers. At the end of the process, after these 7 long phases, the hiring manager hasn’t collected enough signals to evaluate the candidate on a significant number of the match criteria she has on her list.
The same problem exists on the candidate side. It’s often not clear in which phase of the hiring process he should get the data he needs for evaluating each of the items on his match criteria list. In most cases, the expected outline of the hiring process isn’t clear. Many candidates (especially the less experienced) try to collect pieces of information shared with them by the different interviewers along the process, plus things they read on apps like Glassdoor, and then use those information fragments to build a complete picture of the job they are interviewing for.
Let me tell you a secret: this picture might not be a great representation of reality.
“A mistake repeated more than once is a decision” (Paulo Coelho)
What can we do to fix it?
That’s a tricky question. I mean, if there was a simple “one size fits all” answer, I guess we would all be adopting it already. Still, there are 8 common areas for improvement which I find both valuable and feasible:
1. Ownership: Having so many people involved in the hiring process creates an ownership problem. Specifically, there is a gray zone between engineering & HR. The way I see it, the hiring manager (the engineering team leader, who is going to be the direct manager) is the sole owner of the entire hiring process, from the minute the profile is defined until the minute the contract is signed. All other stakeholders (recruiter, architect, engineering director, HR) should be synchronized by the hiring manager and provide their feedback to her. She owns the process and she is accountable for the outcome.
2. Planning: Each phase in the hiring process must have a well-defined objective. The objective of a phase should be defined by the set of match criteria (a subset of Sara’s 12 match criteria listed above) for which we need to get strong signals. The hiring process needs to be planned in a way that each of the match criteria is covered in at least one of the phases. After each phase, the hiring manager must ensure that we indeed got strong enough signals for rating the candidate on the criteria that were planned to be covered.
3. Preparation: Come prepared for each phase in the process. This applies to both the hiring manager and the candidate. On the hiring manager side: going through the candidate’s resume 10 minutes before an interview doesn’t count as coming prepared. The bare minimum is to check LinkedIn, Medium, and GitHub. You may find people you both worked with, as well as interesting projects that the candidate was involved in. That may drive some adaptation to the flow of the interview. The same is true for the candidate side: spending 5 minutes on the company’s web site and 5 minutes on Glassdoor isn’t enough. Search for engineering blogs, public repos on GitHub, information about the tech stack on StackShare, and reviews about the product on G2.
4. Transparency: Be as transparent as possible. Create an environment in which the candidate gets as many details as possible as early as possible, and feels comfortable asking for additional details in case you didn’t cover something that he considers an important signal for one of his match criteria. It starts with sharing the outline of the hiring process itself: How many interviews? With whom? What is the purpose of each interview? It then goes on to sharing as many details as possible about the company, the team, and the specific position. In some cases, I would even offer a candidate to spend some time with a team member holding a position similar to the one we are hiring for. This is not an interview, but rather an informal conversation between people who may soon become teammates.
5. Personalization: Select the right questions for each candidate. I am not against having a fixed bank of questions and using them with multiple candidates, as long as the bank is wide enough and you can pick the right questions for the candidate you are interviewing. What do I consider “the right question”? Typically I would go with questions that are either in a domain that the candidate was focused on in his past roles, or go to the extreme opposite and ask about domains he has no experience with.
6. Pairing: Interview pairing is great. It’s not a 2-on-1 interview. I mean, there are 2 people from the hiring company in the room, but one of them is actually doing the interview and the other one is just an observer. The observer’s role is to carefully examine and document signals for the match criteria that were defined as the objective of that specific interview. Immediately after the interview is completed, the interviewer and the observer should discuss the signals that the observer documented, and rate the candidate accordingly. Pairing is also a good method for sharing interviewing knowledge and continuously tuning & optimizing the questions being asked.
7. Respect: Hiring involves a lot of pressure. For most people, a job interview is not a comfortable situation. This is especially true for software engineers. Treat candidates with great respect. They are professionals, and they are investing time and effort in this process. Respect mostly means efficient communication and direct feedback. Just as an example: after each phase, you must tell the candidate how long it will take you to reach a decision, and make sure you (the interviewer) are calling him back to provide the decision. Not HR sending a cold email or something of that sort.
8. Measuring: It’s impossible to improve without measuring. Document every phase of the hiring process you perform. There are multiple systems for doing this (at WalkMe we are currently using Lever), but honestly, even a simple spreadsheet will be OK. Define a set of KPIs for what you consider a good hire. Usually, these KPIs should be related to the performance of the hired employee ~6 months into his new role. Doing this consistently will provide you with the data you need for analyzing and optimizing your hiring process.
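To make the "Measuring" point concrete, here is a hedged sketch of what even minimal tracking could compute. The record fields and phase names are hypothetical, not taken from any particular tool:

```python
# Hypothetical hiring records: (candidate, furthest_phase, hired, good_hire_at_6_months)
records = [
    ("c1", "screening", False, None),
    ("c2", "team_lead_interview", False, None),
    ("c3", "home_assignment", False, None),
    ("c4", "offer", True, True),
    ("c5", "offer", True, False),
]

hired = [r for r in records if r[2]]
offer_rate = len(hired) / len(records)                     # pipeline conversion
quality_rate = sum(1 for r in hired if r[3]) / len(hired)  # the KPI that matters

print(f"Offer rate: {offer_rate:.0%}, good-hire rate at 6 months: {quality_rate:.0%}")
```

Tracked consistently over a year or two, even numbers this simple reveal which interview phases are filtering well and which are just noise.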
“Now, I wanna dance, I wanna win. I want that trophy, so dance good” (Mia Wallace)
Conclusion
There is no other area in software engineering with an impact-to-improvement ratio like hiring: the impact of good hiring is huge, yet the improvement over the years has been tiny. When it comes to hiring software engineers, we are still using almost exactly the same methods we used 20 years ago. It’s about time we start applying a professional state of mind to optimizing the hiring process, just as we do for optimizing our dev process, quality process, release process, etc.
The same applies to the candidates. You are investing so much time and effort in learning technologies and becoming a great engineer. How about investing some effort in improving the way you select your next job so that your first day won’t feel like a blind date?
We can all do it better, and the time to improve is now! | https://levelup.gitconnected.com/blind-date-the-reality-of-software-engineers-hiring-process-f89754b5a1d4 | ['Ofer Karp'] | 2020-12-21 14:54:08.533000+00:00 | ['Leadership', 'Software Engineering', 'Software Development', 'Hiring', 'Human Resources'] |
About Me — Nitish Menon. Unwrapping good in the world — one pun… | About Me
Originally from India but currently residing in Toronto after successfully completing my MBA degree here. Yes, that business guy — but with a whole lot of heart. Promise.
Which is a perfect segue to learn about the content I write about. I generally write at the intersection of marketing, business, strategy, and a few thought pieces inspired by personal events. If any of the above themes excite you, be sure to check out some of my work. I guarantee you won’t be disappointed.
Did I mention earlier that I love puns and sarcasm of all kinds? You can almost always expect them in my stories, or at least wherever possible.
I love exploring new places. This passion of mine has taken me to more than 10 countries across the globe, with many more on the wish list. If you have some cool recommendations on places to visit, be sure to drop them in the comments below. I would love to check them out.
Another cliché, but I’m a huge football enthusiast. No, please don’t call it “soccer”. Absolutely love the English Premier League and Arsenal FC (once upon a time called “The Invincibles”). I also play the sport, not just admire it from a distance. Fun fact: I had the chance to represent my country at a football tournament hosted in Malaysia. Safe to say, I’m not bad at it.
What are the “Strategies to Improve Business Leads? | This is Era of the digital world. Everyone is aware of internet advertising. The promotion of products or services using digital media including text messages, video messages, youtube, facebook, google, bing, amazon, emails, content, and radio channels to approach the targeted clients is known as internet marketing.
Numerous organizations work through electronic media to provide online marketing services to targeted customers. They have proper channels to communicate with clients.
Navicosoft is one of the best online advertising companies, providing quality marketing services on a limited budget.
Evolution of internet marketing Worldwide:
The term internet marketing refers to the publicizing of a product, organization, or venture so as to increase sales or public awareness. Philip Kotler is known as the father of modern marketing principles. He gave the idea of advanced advertising strategies that are interconnected with economics.
Do you have any idea how web marketing evolved? The concept of the internet took shape when Ray Tomlinson sent the very first email in 1971. In the early 1990s, people could only find the required information on the Web 1.0 platform but could not interact with it. Later on, new technologies emerged: Yahoo and the World Wide Web were launched, and in 1998 the Google and MSN search engines began to expand globally. In the early 2000s, Facebook, LinkedIn, and Twitter were born. Nowadays new technologies are emerging faster than ever.
How does an internet marketing company work?
Internet marketing companies have expert professionals who put customers' demands first and work efficiently to boost their business strategies. Their aim is to provide better service and good exposure on the internet. Online advertising has made things easily approachable for clients. Internet advertising companies come in different types, such as SEO agencies, paid advertising agencies, social media agencies, full-service advertising agencies, and inbound marketing agencies.
What are the Tactics of internet marketing companies:
The following are various tactics of online marketing companies:
SEO:
SEO stands for Search engine optimization. This term refers to the practices specifically designed to enhance the availability of your site while searching for products or services on the internet.
SMM:
SMM stands for Social Media Marketing. This marketing strategy plays a significant role in getting a higher ratio of leads thus increasing sales of products. For example, people are highly active on Facebook, Twitter, TikTok, and Instagram.
Email Marketing:
Email marketing is a strategy that is used to communicate with customers by sending commercial emails. This strategy is helpful in winning customers’ loyalty.
Content Marketing:
The best internet marketing services have content writers who write content and blogs for websites and businesses in order to increase sales.
SEM:
SEM stands for Search Engine Marketing. It is a form of online advertising that promotes websites by enhancing their visibility in SERPs through paid marketing.
Pay per click (PPC) is a marketing tool in which marketers pay each time a user clicks their ads.
Is it fruitful to hire an internet advertising company?
It is better to consult an online advertising company than to set up your own marketing team. It provides better business insight, and the customer always gets the latest trends. The experts in an internet advertising company monitor the online reputation of the client's business. Online marketing agencies help industries gather and manage large amounts of data concerning customers, and they provide customers with reliability and accountability. Online marketing companies have data analytics for monitoring brand performance. |
15 post-purchase emails you should be sending (infographic) | Now that we’re almost past Black Friday / Cyber Monday, marketers need to ask an important question: What next?
All the resources and energy you put into Black Friday / Cyber Monday marketing — how do you ensure the momentum continues?
Obviously you can't continue all the deals, discounts and offers you offered during Black Friday / Cyber Monday for the rest of the year.
So what do you do?
The answer, at least partly, lies in what you do after the sales.
Post-purchase emails are one of the best things you'll be doing after the marketing frenzy cools down.
Well-structured post-purchase emails can go a long way in improving customer experience, building brand loyalty, encouraging repeat purchases and increasing the chances of cross-sell and upsell.
Here's an infographic that briefly shows which post-purchase emails you should send out to customers.
Full blog available here.
Infographic courtesy QuickEmailVerification. | https://medium.com/@mayankdb/15-post-purchase-emails-you-should-be-sending-infographic-d4d6632dd6fc | ['Mayank Batavia'] | 2020-12-10 11:08:18.981000+00:00 | ['Customer Engagement', 'Black Friday', 'Email Marketing', 'Marketing Strategies', 'Growth Hacking'] |
Stabbed 27 Times | The victim’s hand shakes as she picks up a glass of water and takes a sip. Lifting her head to drink exposes markings that resemble serrated edges of a knife across her neckline. It’s somewhere in between scab and scar with the faintest bit of fresh still lagging.
The bruises on her face are healing and now a faded shade of yellowish-green. Darkness from exhaustion sit beneath her swollen eyes. Her lower lip quivers.
It looks like you are set to be released from here within the next couple of days, I say to her, avoiding eye contact. Do you have a plan for where you will go?
Her eyes gaze down at the table. I feel helpless sitting across from her. I can help her now that it's over, if she lets me, but there's nothing I can do to help what's been done to her before now.
She breaks down sobbing.
The room is suffocating from the dense swell of tension in the air. I need to go back. If I don’t he will find me. He will find us. I know he didn’t mean to do it. I realize now what I did wrong. I can change. I can make him forgive me. I can.
She pleads. The knot in my stomach tightens.
Her fear is palpable, her pain heavy. I reach my hand across the table to touch hers. She pulls away. I inhale deeply, attempting to compartmentalize my emotions. It’s imperative I have no emotion. I was sent here to do a job. I need to get that job done.
He’s in jail, I inform her. I’m here to talk to you about pressing charges.
She stands up from her chair and walks over to the wall, facing it as if she were a scolded child who is hiding from me, her body swaying slightly. I can’t press charges. If I do the Department of Child Safety will take my children. I won’t lose them. No! Her voice is escalating.
I shuffle through the pages of her case file scanning for facts. Ma'am, the state is filing charges against him whether you press charges or not. He stabbed you 27 times. He is going to prison. You can't stop it from happening but you can increase the length of his sentence.
The Judge will find him guilty. There are witness accounts and forensic evidence.
You can prevent him from doing this to you or someone else again, but only if you press charges along with the state. It’s the difference of 36 months and 25 years.
Our eyes meet for the first time. A tear rolls down the shape of her cheek, pausing at a slice mark above her jawbone before making its final descent onto the hospital gown that conceals the remaining stab wounds.
I can’t do that to him. Her tone is muffled as she wipes the moisture from her face. He loves me. His children need him. This is just a misunderstanding.
She begins to rattle off rationale, justifying his actions. She is prepared like an old pro skilled at making excuses for him and spinning stories.
I cut her off mid sentence and redirect our conversation back to matter of fact.
The man you refer to as your husband, I refer to as your abuser. Your abuser is a known gang member. He murders people. He’s involved in a lot of illegal shit.
She puts on an annoyed face and looks away.
He is being held without bail but that does not guarantee your safety. Anyone he is affiliated with could potentially come after you. I will arrange for you and your children to be placed in police protection upon discharge.
She crosses her arms indicating she’s shutting down or shutting me out. I’m going to lose her.
Three hours ago my boss gave me a job I didn’t want to do. I’ve since changed my mind so please, let me do that job. I assure you going into a safe house will prevent any involvement from the Department of Child Safety but if you do not agree, I cannot protect you.
I can put the paperwork wheels in motion making the decision for her but I want her to choose to do so on her own. Her abuser has already taken her power from her and I will be no better than him if I do the same.
You have until discharge to make a decision, I say as I slide my business card across the table to her. I collect my belongings and stand up. I feel her presence following me as I walk toward the door. I turn around searching her face for an answer.
I will think it over, she says before closing the door behind me. | https://erikasauter.medium.com/stabbed-27-times-58c2cd9e6446 | ['Erika Sauter'] | 2018-11-08 14:51:45.854000+00:00 | ['Relationships', 'Work', 'Domestic Violence', 'Politics', 'Culture'] |
Your Business Plan: 9 Places To Look For A Great Opening Line! | Your Business Plan: 9 Places To Look For A Great Opening Line
There it is. That blank screen with the little blinking line. And everything sounds so mundane.
Jake's Bakery will serve the best cakes in the county. (Yawn.)
I researched the industry and found that it is fail-proof. (Yawn.)
We came together to form a really good business. (Yawn.)
It's kind of like, Hey, what's your sign? Everybody knows why everybody is here, but can't I come up with a better opening line?
Even the most prolific writers get blank screen-itis. To help you get back on your pizazzing path, here are some places to look for inspiration for your business plan.
1. Your competitors websites. Seriously. Somebody put a great deal of time and effort into those websites. What do the headlines say? Is there a neat turn of phrase that you can turn again into your business plan concept?
2. Industry ads. Who better to put on your side than Madison Avenue advertising executives? Real pros have been at work here. They have had to distill major ideas into a few lines, a few catchy phrases. Study your industry publications for jewels that you can pick off their… |
Alumni Spotlight: Abir Mazloum | Mentee, SANAD Lebanon Mentoring Programme.
“Mentoring to me is a support to those who cannot specify their goals and objectives; it’s the right guide for them to do so.” Abir Mazloum, Mentee, SANAD Lebanon Mentoring Programme
While many businesses have struggled to survive this year, Abir has been on a journey to transform her family business. It’s been a year of growth and discovery, one that has seen her develop a new profit line and increase her revenue, geographic reach and clients. “I joined the SANAD Lebanon Mentoring programme at a point when I was uncertain about the direction to take. I had two choices, to either open my own florist shop or further develop my father’s business. I’d seen my father struggle with the business for years and wanted to not only help him, but develop a successful family business,” shares Abir.
Abir considers herself a constant learner and has always sought out opportunities to develop her skills. Although her expertise lies in landscaping and design, she has taken up various online personal development courses, which led to her interest in the mentoring programme. When she attended Mowgli Mentoring's Mentoring Awareness Session, she found that the content aligned with her personal values, and she decided to apply for the programme.
“I felt that mentoring would help me develop objectives for the business and refine the ideas I already had.”
One of the key phases within Mowgli Mentoring programmes is the Kickstart Workshop. The workshop brings together mentors and mentees for 4 days of learning and rapport building, paving the way for the matching of mentor and mentee pairs. This is a critical stage within which the right foundation for thriving mentoring relationships is set, ensuring that the pairs are well prepared and aligned to develop trust-based, impactful relationships.
“When I was matched with Hussein Chouman, I felt that we had completely different ideas. The only thing we had in common was our personal and social values.” Keeping her commitment, she chose to keep going with the programme and soon discovered that the differences were the biggest advantage to their match. “Hussein gave me insights which I felt were quite foreign but provided fresh insights into my business and perspective. He has been a great support system.”
Copyright. Abir Mazloum.
One of the greatest milestones that Abir realized in 2020, with the support of her mentoring relationship, is business growth. Being in the gardening and landscaping business, Abir's business, like many others, was affected by the Covid restrictions. However, she was able to cultivate a growth mindset and become innovative. "The pandemic has helped me to develop my creative thinking; I now think through how we can better serve our clients. I realized that people were struggling with accessing fresh groceries." Responding to a specific need, Abir has gone beyond landscaping and designing decorative gardens to planting fruits and vegetables in her clients' backyards. "At the time, the Lebanese government had a campaign urging us to stay home. We leveraged the #StayHome campaign and similarly created our campaign, "#StayHome, we'll come to you", allowing us to help our clients to grow their own food from their backyards.
Through her mentor’s support, Abir has tapped into digital marketing to expand her business. “When we’re on lockdown, we spend much of our time online on our phones. It therefore made sense that we market our services on social media, a move that has seen us attract over 1,000 followers over the past few months. I now have clients from various regions of Lebanon.”
Abir’s family business currently sits on over 10,000 square meters and employs 9 full-time staff as well as 6 casual staff who are called on demand. | https://medium.com/@mowglitweets/alumni-spotlight-abir-mazloum-d713c26dd12d | ['Mowgli Mentoring'] | 2020-12-17 08:55:33.725000+00:00 | ['Landscaping', 'Mentorship', 'Entrepreneurship', 'Entrepreneurial Journey', 'Mentoring'] |
Introducing Individual NuGet Packages for Syncfusion Blazor UI Components | We at Syncfusion are happy to inform you that new individual NuGet packages for our Syncfusion Blazor UI components are now available from the 2020 Volume 4 release (v18.4.0.30).
The Syncfusion.Blazor NuGet source has been segregated based on the component and namespace. Each NuGet package uses the same namespace as in the NuGet Syncfusion.Blazor source. So, the new migration will not break your applications.
Pros of using individual NuGet packages
The individual NuGet packages are extremely valuable in reducing the size of the application by avoiding the loading of unwanted assemblies.
They will reduce the initial loading time when compared to the whole Syncfusion.Blazor package. You can install the required Syncfusion Blazor components alone in your application and ignore the rest of the components’ source code.
You can utilize the Blazor WebAssembly lazy-loading functionality.
You can use these individual NuGet packages in Blazor server-side applications to reduce the application’s deployment size in the production phase.
Note: You can’t use both the Syncfusion.Blazor and individual NuGet packages in the same application. It will lead to ambiguous compilation errors at build time.
Available NuGet packages
The following NuGet packages are now available on nuget.org.
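As an illustration, a Blazor WebAssembly project could reference only the packages it actually uses in its .csproj. This is a hypothetical fragment: the two package names are taken from the performance metrics section below, the version number matches the v18.4.0.30 release mentioned above, and the lazy-load entry is optional .NET 5 functionality.

```xml
<!-- Hypothetical .csproj fragment: reference only the components in use. -->
<ItemGroup>
  <PackageReference Include="Syncfusion.Blazor.Buttons" Version="18.4.0.30" />
  <PackageReference Include="Syncfusion.Blazor.Grid" Version="18.4.0.30" />
</ItemGroup>

<!-- Optionally defer an assembly until a page needs it (lazy loading). -->
<ItemGroup>
  <BlazorWebAssemblyLazyLoad Include="Syncfusion.Blazor.Grid.dll" />
</ItemGroup>
```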
Performance metrics
The following screenshot shows the assembly loading time of the complete Syncfusion.Blazor NuGet package and its dependency in the web browser.
The following screenshot shows the assembly loading time of the Syncfusion.Blazor.Buttons package by itself and its dependency in the web browser. When compared to loading the complete Syncfusion.Blazor NuGet package to use Buttons, loading the Syncfusion.Blazor.Buttons NuGet package is more efficient.
The following screenshot shows the assembly loading time of the Syncfusion.Blazor.Grid package and its dependency in the web browser.
Note: The previous metrics were taken in a Blazor WebAssembly application with .NET 5.0 in localhost release mode.
Summary
In this blog, we have talked about the availability of the new individual NuGet packages for the Syncfusion Blazor UI components in the 2020 Volume 4 release. We have also looked at the NuGet package segregation, performance metrics, and advantages.
Syncfusion Blazor components offer over 65 high-performance, lightweight, and responsive UI components for the web, including file-format libraries. Make use of these well-matured components and save time developing complex applications.
You can contact us through our support forum, Direct-Trac, or feedback portal. We are always happy to assist you! | https://medium.com/syncfusion/introducing-individual-nuget-packages-for-syncfusion-blazor-ui-components-3f01aa7b5907 | ['Rajeshwari Pandinagarajan'] | 2020-12-21 12:02:18.602000+00:00 | ['Productivity', 'Csharp', 'Blazor', 'Web Development'] |
Time-Series Data Analysis & Machine Learning Algorithm for Stock Trading | DATA WRANGLING & PIPELINE FOR DOWNSTREAM ANALYTICS
Time-Series Data Analysis & Machine Learning Algorithm for Stock Trading
A case study with technical analysis, feature selection, accuracy score & bias-variance trade-off
Image by Sarit Maitra
A lot of interesting work has been conducted in the area of applying ML algorithms to analyze price patterns and predict stock prices. Most stock traders nowadays use smart trading systems to predict prices under various situations and conditions. An intelligent trader would predict the stock price and buy a stock before the price rises, or sell it before its value declines. Though it is quite hard to replace the expertise of an experienced trader, an investment in a machine-learning-based prediction algorithm can translate directly into high profits.
Moreover, the stock market is financially volatile, and it is important to have a very precise prediction of a future trend. Given the risk of financial crises and the goal of scoring profits, it is necessary to have a dependable prediction of stock prices. Predicting a non-linear signal requires advanced ML algorithms. Here, we shall work with classification ML algorithms, with the help of feature selection, to examine the success rate of making a profit.
We have focused on technical analysis in this article. However, ML can also play a major role in evaluating and forecasting the performance of a company and other similar parameters helpful in fundamental analysis. In fact, many successful automated stock prediction and recommendation systems use some sort of a hybrid model combining both fundamental and technical analysis.
This article is written in 2 parts. Part 1, this part, covers data collection, data wrangling, feature creation and important-feature identification, which involves multi-collinearity and the variance inflation factor. Part 2 covers different ML algorithms, hyper-parameter optimization, accuracy metrics, confusion matrices and the bias-variance trade-off.
Let us pull the data and inspect the available features.
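A minimal sketch of that loading step with pandas follows; the two inline rows are hypothetical stand-ins for the real file, which would normally be read from a CSV export or an API with the same columns:

```python
import io

import pandas as pd

# Hypothetical two-row extract; a real run would point read_csv at the
# actual futures file instead of this inline string.
csv_data = io.StringIO(
    "Date,Open,High,Low,Settle,Volume\n"
    "2020-05-05,30.50,31.20,29.80,30.97,125000\n"
    "2020-05-06,30.90,31.00,29.40,29.72,131000\n"
)
df = pd.read_csv(csv_data, parse_dates=["Date"], index_col="Date")
print(df.shape)  # (2, 5)
```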
The data comprise Brent Crude Oil continuous futures since 2000, with 11 variables. Individual futures contracts trade for very short periods of time, and are hence unsuitable for long-horizon analysis. Continuous futures contracts solve this problem by chaining together a series of individual futures contracts to provide a long-term price history that is suitable for trading, behavioral and strategy analysis.
We have a few variables with missing values that are not needed for our further analysis, so let us drop these variables from the data frame. The other missing values in the data are interpolated using the forward-fill ('ffill()') method to propagate the last valid observation forward.
The data frame has the index set to dates and the columns are:
Open: The price of the first trade on the given trading day.
High: The highest price at which a stock traded for the given trading day.
Low: The lowest price at which a stock traded for the given trading day.
Settle: This is the closing price after adjustments, popularly known as the adjusted closing price. Adjusted prices are adjusted for stock dividends, cash dividends and splits, which creates a more accurate return calculation.
Volume: The number of shares traded for the given trading day.
Daily return
The adjusted closing price, which is Settle here, gives us all the information we need to keep an eye on the stock. We can use un-adjusted closing prices to calculate returns, but adjusted closing prices save us some time and effort. The mathematics behind the daily return calculation works like this:
For example, the 5 May 2020 price was $30.97 and the 6 May price was $29.72, so the daily return is [(29.72/30.97)-1] * 100 ≈ -4.04%.
We have used a simple Python function to derive the average daily return and volatility for all the rows.
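Since the function itself is not reproduced in the text, here is a minimal sketch using pandas pct_change(), seeded with the two prices from the worked example above:

```python
import pandas as pd

# Settle prices for 5 and 6 May 2020 (the worked example above)
settle = pd.Series(
    [30.97, 29.72],
    index=pd.to_datetime(["2020-05-05", "2020-05-06"]),
)

# Percent daily return: (P_t / P_{t-1} - 1) * 100
daily_return = settle.pct_change() * 100
print(daily_return.iloc[1])  # ≈ -4.04
```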
However, there is a catch in daily return calculation:
As an example, if we made 25% one day and lost 20% the next, our geometric average daily return is (1.25*0.8)^0.5 - 1 = 0. The arithmetic average, by contrast, is +2.5%, which shows that a positive arithmetic mean daily return does not always mean that the stock is making money.
Volatility
Traditionally, risk is calculated as volatility. Daily volatility is computed as the standard deviation (the square root of the variance) of daily returns. Volatility not only measures risk, but affects the expectation of long-term (multi-period) returns. Traditional risk frameworks that rely on standard deviation generally assume that returns conform to a normal bell-shaped distribution, which implies that about two-thirds of the time (68.3%), returns should fall within one standard deviation (+/-), and 95% of the time, returns should fall within two standard deviations.
Volatility normally erodes returns. We know from random walk theory that standard deviation increases in proportion to the square root of time. With that in mind, let us understand the arithmetic behind this:
we start with $100 and then gain 10% to get $110.
Then we lose 10%, which takes us to $99 ($110 x 90% = $99).
Then we gain 10% again, to net $108.90 ($99 x 110% = $108.9).
Finally, we lose 10% to net $98.01.
It may be counter-intuitive, but our principal is slowly eroding even though our average gain is 0%.
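The four steps above compound like this; the arithmetic mean return is 0%, yet the principal shrinks:

```python
# Alternating +10% / -10% returns: arithmetic mean is 0%,
# but compounding erodes the principal.
returns = [0.10, -0.10, 0.10, -0.10]

capital = 100.0
for r in returns:
    capital *= 1 + r

arithmetic_mean = sum(returns) / len(returns)                 # 0.0
geometric_mean = (capital / 100.0) ** (1 / len(returns)) - 1  # about -0.5% per step
print(round(capital, 2))  # 98.01
```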
Probability of +/-(1%), +/-(3%), and +/-(5%) change in price
We wanted to check the probability of a change in price, which also gives an indication of high volatility. Here we see over a 96% probability that the price change falls within +/-5%.
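As a sketch of how such band probabilities can be computed, the snippet below uses a synthetic normal sample with an assumed 2% daily volatility as a stand-in for the actual Brent returns:

```python
import numpy as np

rng = np.random.default_rng(42)
daily_pct = rng.normal(loc=0.0, scale=2.0, size=5_000)  # synthetic % returns

for band in (1, 3, 5):
    prob = np.mean(np.abs(daily_pct) <= band)
    print(f"P(|change| <= {band}%) = {prob:.1%}")
```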
Data wrangling
Let us add a few external data sets to examine whether we can find any correlation with Brent crude oil. This has been done for illustration purposes.
In real-life business analysis, we may need to explore relevant variables which might have an impact on crude oil pricing, e.g. weather, the US economy, the international economy, the US dollar exchange rate, geopolitical events, supply and demand statistics, and many more.
We have added:
1. USA real gross domestic product
2. USA real disposable personal income
3. Civilian unemployment rate
Resampling
These time series are available in a monthly frequency format, and we need to summarize or aggregate them by a new time period. Therefore, we up-sample the time series data to a higher frequency and interpolate the new observations. The database queries below are quite similar to SQL queries. Here, I have chained together a ffill() and then a bfill() for the remaining NaN values.
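A minimal sketch of that up-sampling chain, with two monthly observations standing in for the macro series:

```python
import pandas as pd

# Two monthly macro observations (stand-in values)
monthly = pd.Series(
    [1.0, 2.0],
    index=pd.to_datetime(["2020-01-31", "2020-02-29"]),
)

# Up-sample to daily frequency, forward-fill, then back-fill
# whatever NaNs remain, mirroring the chain described above.
daily = monthly.resample("D").ffill().bfill()
print(len(daily))  # 30 daily rows from 2020-01-31 to 2020-02-29
```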
Correlation
A correlation matrix is essentially a normalized covariance matrix: a matrix in which the (n, m) position holds the correlation between the nth and mth parameters of the given data set.
It is quite clear from the above values (< 0.5) that none of the additional variables are correlated with the actual price column ('Settle'). So, we drop the idea of adding these variables and will focus on adding different features that we can create from the original price and date columns. A point to note here is that correlation can only directly measure linear relationships between its inputs.
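The screening step can be sketched as follows; the three synthetic series are assumptions standing in for Settle and the macro columns, with 0.5 as the cut-off used above:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic stand-ins: a random-walk price, an unrelated macro series,
# and a series built to track the price.
settle = pd.Series(rng.normal(size=300).cumsum() + 50)
frame = pd.DataFrame({
    "Settle": settle,
    "GDP": rng.normal(size=300),                      # unrelated
    "Tracking": 0.5 * settle + rng.normal(size=300),  # tracks the price
})

corr_with_settle = frame.corr()["Settle"]
kept = corr_with_settle[corr_with_settle.abs() >= 0.5].index.tolist()
print(kept)  # the unrelated series falls below the 0.5 cut-off
```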
Let’s define our problem statement.
Problem statement
As we are aware, the more a share is traded, the more valuable it is; on the other hand, if a share is transacted in low volume, it is not so important to some traders and, by default, its value decreases. Therefore, depending on the power to predict future values, this anticipation of the market can generate profits or losses. A list of algorithms is available for developing a model; however, due to the non-linear nature of stock market signals, some methods have yet to give promising results, while others have not reacted well on the stock exchange.
Technical Analysis
When applying Machine Learning to Stock Data, we are more interested in doing a Technical Analysis to see if our algorithm can accurately learn the underlying patterns in the stock time series.
Feature engineering
We create an ‘outcome’ binary variable, 1 if the trading session was positive (‘Settle > Open’), or else 0. Intuitively, based on the efficient market hypothesis, the price of the stock yesterday is going to have the most impact on the price of the stock today. Thus as we go along the time-line, data-points which are nearer to today’s price point are going to have a greater impact on today’s price.
We examine the number of 0 and 1 values to check whether our dataset is balanced. As we can see, the binary classes are almost evenly distributed.
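A minimal sketch of the label construction and the balance check (three toy rows stand in for the futures frame):

```python
import pandas as pd

frame = pd.DataFrame({
    "Open":   [30.0, 31.0, 29.5],
    "Settle": [30.5, 30.2, 29.9],
})

# 1 for a positive session (Settle > Open), else 0
frame["outcome"] = (frame["Settle"] > frame["Open"]).astype(int)
print(frame["outcome"].value_counts())  # balance check
```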
Moving Average (MA)
The average of the past n values till today.
Rate of Change (ROC)
The ratio of the current price to the price n quotes earlier. n is generally 5 to 10 days.
Relative Strength Index (RSI)
Measures the relative size of recent upward trends against the size of downward trends within the specified time interval (usually 9–14 days).
Relative strength index (RSI) = 100 - 100 / (1 + RS), where RS = (average gain / average loss) over the last 14 days
Williams %R
It is a type of momentum indicator that moves between 0 and -100 and measures overbought and oversold levels. Williams %R is used to find entry and exit points in the market.
Ease of Movement (EVM)
It is a volume-based oscillator which indicates the ease with which the prices rise or fall taking into account the volume of the security.
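The five indicators above can be sketched in pandas as follows. The 14-period window and the 1e8 volume scale in the EVM box ratio are assumptions; published implementations vary in exactly these details:

```python
import numpy as np
import pandas as pd

def add_indicators(frame, n=14):
    """Append MA, ROC, RSI, Williams %R and EVM columns (window n)."""
    out = frame.copy()
    out["MA"] = out["Settle"].rolling(n).mean()
    out["ROC"] = out["Settle"].pct_change(n) * 100

    # RSI: average gain vs. average loss over the window
    delta = out["Settle"].diff()
    avg_gain = delta.clip(lower=0).rolling(n).mean()
    avg_loss = (-delta.clip(upper=0)).rolling(n).mean()
    out["RSI"] = 100 - 100 / (1 + avg_gain / avg_loss)

    # Williams %R: where the close sits inside the n-day high-low range
    hh = out["High"].rolling(n).max()
    ll = out["Low"].rolling(n).min()
    out["WilliamsR"] = -100 * (hh - out["Settle"]) / (hh - ll)

    # EVM: midpoint move scaled by a volume/range "box ratio"
    midpoint_move = ((out["High"] + out["Low"]) / 2).diff()
    box_ratio = (out["Volume"] / 1e8) / (out["High"] - out["Low"])
    out["EVM"] = (midpoint_move / box_ratio).rolling(n).mean()
    return out
```

On a steadily rising toy series the RSI saturates at 100 and Williams %R stays inside its 0 to -100 band, which makes a quick sanity check.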
We write the function below to clean the data set of NaN, Inf, and missing cells (for skewed datasets).
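The function is not reproduced in the text; a minimal sketch, assuming the goal is simply to map +/-Inf to NaN and drop incomplete rows, could look like:

```python
import numpy as np
import pandas as pd

def clean_dataset(frame):
    """Map +/-Inf to NaN, then drop every row still containing NaN."""
    frame = frame.replace([np.inf, -np.inf], np.nan)
    return frame.dropna(how="any")

raw = pd.DataFrame({"a": [1.0, np.inf, 3.0], "b": [4.0, 5.0, np.nan]})
clean = clean_dataset(raw)
print(len(clean))  # 1 row survives
```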
Feature selection as part of a pipeline
Feature selection is usually used as a pre-processing step before doing the actual learning.
Multi-collinearity
Multi-collinearity is a state of very high inter-correlation among the independent variables. It is a type of disturbance in the data; if present, the statistical inferences drawn from the data may not be reliable.
Variance inflation factor (VIF)
VIF measures the impact of multi-collinearity among the X's in an ML model on the precision of estimation. It expresses the degree to which multi-collinearity among the predictors degrades the precision of an estimate. VIF is computed as 1/(1 - R²) for each of the k - 1 independent variables.
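The 1/(1 - R²) formula can be implemented directly. statsmodels ships a ready-made variance_inflation_factor; this NumPy sketch regresses each column on the others with least squares:

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF_i = 1 / (1 - R_i^2), where R_i^2 comes from regressing
    column i of X on all other columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    vifs = []
    for i in range(X.shape[1]):
        y = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        vifs.append(np.inf if r2 >= 1.0 else 1.0 / (1.0 - r2))
    return np.array(vifs)
```

Independent columns score close to 1, while a perfectly collinear column blows up toward infinity, which is the signal used here to prune redundant features.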
Here we see that, out of all 26 variables used, only 11 are enough to explain the variance in the model. No classifier can compensate for the use of irrelevant or inaccurately measured features; the only way to improve performance is to find better features. This is why it is important to choose mathematically correct features that are also separable.
Using variance to reduce features can be troublesome because we don't have an absolute scale on which to pick a good variance threshold. If we want to be more objective, a grid-search mechanism is advisable for choosing one.
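One objective way to run that grid search is to score each candidate threshold with the downstream model; the reduction step itself is only a few lines (the three synthetic columns below are illustrative):

```python
import numpy as np

def variance_threshold(X, threshold):
    """Keep only the columns of X whose variance exceeds `threshold`."""
    keep = X.var(axis=0) > threshold
    return X[:, keep], keep

# Candidate thresholds; in practice each would be scored by
# cross-validating the downstream classifier.
X = np.column_stack([
    np.zeros(100),            # zero variance
    np.linspace(0, 1, 100),   # variance ~ 0.085
    np.linspace(0, 30, 100),  # variance ~ 76.5
])
for t in (0.0, 0.1, 1.0):
    _, mask = variance_threshold(X, t)
    print(t, mask.tolist())
```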
We will test different machine learning algorithms to determine the best model.
Conclusion
Here, we have experimented with Brent Crude Oil continuous futures daily data. The data set runs from the year 2000 to date. We performed the necessary data cleaning and data wrangling, transforming and mapping the data into a format that is more appropriate and valuable for downstream analytics. We carried out technical analysis and feature engineering to see whether our algorithm can accurately learn the underlying patterns in the stock time series. Necessary cleaning was conducted to compute collinearity statistics, which helped us determine the inter-correlations among the independent variables and reduce the number of relevant variables for predictive analytics. We selected features based either on their stand-alone characteristics (e.g. variance) or on their relationship with the target (using correlation or information).
In the next part (Part-2), we shall cover the classification algorithm and determine the accuracy of the developed model.
I can be reached here.
Notice: The programs described here are experimental and should be used with caution. All such use at your own risk. |
Google is paving the way to personal TV nirvana | Pierre Donath is Director Product and Marketing at 3SS
Today’s digital entertainment sector is more fragmented than ever: it’s downright confusing for some. The now perennial threat of cord cutting hovers like a sword of Damocles for incumbent Pay TV operators worldwide. In parallel, premium content brands and broadcasters who are intent on forging direct, profitable relationships with viewers on multiple screens are getting to grips with how difficult their ambition is to achieve. Moreover, they all have become acutely aware — often through bitter experience — how a loyal relationship with a subscriber can quickly lead to a complete turnoff if the service is at all difficult to navigate or if finding the content which the viewer desires is even just a bit inconvenient or not intuitive enough.
What all incumbent and prospective TV providers have in common is the strong desire to create a service which consumers will truly love, recommend to their friends and colleagues, and happily pay for over the long term.
Fortunately, that vision for what makes a next-generation, truly compelling TV service is becoming clearer. To the surprise of many — and to the anxiety of some — none other than Google is emerging as the TV service provider’s best friend on this journey.
To examine what a great TV experience feels like, let’s put ourselves in the shoes of the viewer, or sit on his or her sofa, trusty remote in hand, as it were. What do viewers really want? What does a great TV experience feel like?
We think the answer is pretty clear: People want lots of great content, linear, live and on-demand, originating from broadcasters as well as from apps and the web — all seamlessly offered in one attractive interface, smorgasbord-style, and easily searchable for viewing at any time.
Importantly, TV needs to be ultra-easy-to-use, and ultimately, personal. Enabling the best possible user experience is absolutely paramount.
Sounds simple, but we all know that making it happen isn’t. | https://medium.com/@3SS/google-is-paving-the-way-to-personal-tv-nirvana-90aef3f02c9 | [] | 2019-05-10 12:49:59.106000+00:00 | ['AndroidTV', 'TV', 'Future', 'Operator', 'Android'] |
Ceto: A Sea Monster in the Trading Market | Ceto’s collection
Ceto is a well-known sea monster in Greek mythology, one of the oldest monsters, and the daughter of Pontos and Gaia, half fish, half serpent. She and Phorkys gave birth to a slew of terrifying sea monsters, including the dragon Ekhidna (viper), the sailor-devouring Skylla (crab), the hundred-headed serpent Ladon, the one-eyed Graiai (grey ones), and the horrifying Gorgone’s (terrible ones).
As the mother of sea monsters, Ceto was a creature of immense power that ruled the depths of the oceans. Through this myth, we named our project to emphasize that it is one of a kind on the whole Internet Computer: a project with vision, empowered by the IC network, aiming to provide a new way of trading NFTs and F-NFTs, to provide liquidity, and to swap and mint tokens within a minute.
Ceto is ready to reign once more.
As an FT and NFT/F-NFT trading platform, Ceto can support token swap and order book trading. F-NFTs are the next big thing to govern the future and Ceto is positioned as a leader.
Ceto’s core features
Ceto collectible share distribution:
Ceto collection is divided into 100,000 F-NFT equity shares;
30,000 shares Airdrop for Twitter activities;
10,000 shares Airdrop to public testers to detect bugs and submit feedback;
10,000 shares for operating expenses;
The remaining 50,000 shares are locked for future use.
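As a quick, illustrative sanity check (the names below are just labels for the buckets listed above), the published allocation does cover all 100,000 shares:

```python
# Sanity-check that the published F-NFT allocation sums to 100,000 shares.
allocation = {
    "Twitter airdrop": 30_000,
    "Public tester airdrop": 10_000,
    "Operating expenses": 10_000,
    "Locked for future use": 50_000,
}

total = sum(allocation.values())
print(total)  # → 100000
assert total == 100_000
```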
What are the advantages of owning the Ceto collection’s F-NFT share?
1. Ceto’s F-NFT share will function as a governance token of Ceto’s collection, offering voting power and income rights in the form of a DAO;
2. Buyback mechanism: in the first two months, 50% of the handling fees will be used to buy back the Ceto F-NFT collection and burn it, making it deflationary;
3. Other NFT projects are also eligible for Airdrop
Ceto F-NFT trading will officially launch in Q1 2022 to support more NFT/F-NFT access and trading. | https://medium.com/@cetoswap/ceto-a-sea-monster-in-the-trading-market-8ecabf36d32a | [] | 2021-12-31 02:17:25.258000+00:00 | ['Nft Collectibles', 'Icp', 'Ceto', 'Dfinity', 'F Nft'] |
Moderna Vs. Pfizer mRNA Vaccines for Covid-19: The Key Points | Moderna Vs. Pfizer mRNA Vaccines for Covid-19: The Key Points
Explaining the efficacy and safety profiles, handling protocols, and remaining questions about disease spread and long-term immunity and safety between the two vaccines.
Photo by Nataliya Vaitkevich from Pexels
The fastest vaccine the FDA had previously approved was the Ebola vaccine, which took about five years. For Covid-19, in less than a year, we already have two candidate vaccines — mRNA-based vaccines from Pfizer and Moderna — awaiting approval this year or early 2021. A record-breaking feat indeed. How do the two mRNA vaccines compare, and what makes them so effective? And what questions remain unanswered?
Current knowledge
1. Efficacy
While data is yet to be published as formal peer-reviewed scientific papers, Pfizer claimed a 95% efficacy, and Moderna claimed a 94.5% efficacy in preventing Covid-19 infections in press-releases. As Pfizer has not provided as much clinical information as Moderna, this section will focus on the latter.
Moderna’s phase III clinical trial enrolled 30,000 participants in the U.S. and randomized them into the vaccine or placebo group. After administering the shots on day-1 and day-29, the researchers waited until 95 participants become positive for SARS-CoV-2.
(Assuming 1% of the population gets infected, we can expect 300 infections out of the 30,000 sample size, so 95 is a satisfactory number.)
The trial then unblinded the 95 participants. Results found that 90 of them belonged to the placebo group and only five to the vaccine group. In these 95 cases, there were 15 adults aged >65 and 20 people of color. Moderna’s trial also reported 11 severe Covid-19 cases that all belonged to the placebo group. This means that Moderna’s vaccine is 94.7% (90/95) effective in preventing Covid-19, which applies to severe cases and diverse populations.
19th November update: Pfizer just updated their webpage to provide more clinical information. Pfizer’s clinical trial recruited 43,000 participants with a similar study design as Moderna’s. Pfizer did the unblinding once they reached 170 cases of Covid-19, of whom 162 were in the placebo and eight in the vaccine group. There were 10 severe cases, of which nine belonged to the placebo group. The vaccine was also effective across age, sex, race, and ethnicity groups. Overall, that makes 95.3% (162/170) effectiveness.
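As a rough arithmetic check of the percentages quoted above (this mirrors the simplified case-split calculation used in these paragraphs, i.e., the share of confirmed cases that occurred in the placebo arm; the trials' formal efficacy estimates additionally adjust for person-time at risk):

```python
# Share of confirmed Covid-19 cases that occurred in the placebo arm,
# as reported in the press-release figures discussed above.
def share_in_placebo(placebo_cases: int, total_cases: int) -> float:
    return 100 * placebo_cases / total_cases

moderna = share_in_placebo(90, 95)    # 90 of 95 cases were in the placebo group
pfizer = share_in_placebo(162, 170)   # 162 of 170 cases were in the placebo group

print(f"Moderna: {moderna:.1f}%")  # → Moderna: 94.7%
print(f"Pfizer:  {pfizer:.1f}%")   # → Pfizer:  95.3%
```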
The striking effectiveness of the mRNA vaccine lies in its biology. mRNA stands for messenger RNA, a genetic sequence that carries information that the cell’s ribosome reads. mRNA thus messages the cell’s ribosome to make new proteins without any involvement of outside proteins or microbes (as seen with traditional vaccine types). As it’s our own cells that make it, the protein’s expression and levels are more stable and, thus, enable a more potent immunogenic response.
2. Safety
Both Pfizer and Moderna reported no significant safety concerns about their vaccines. While Pfizer has not released the specifics data about its vaccine’s safety, Moderna has documented a few.
In Moderna’s clinical trial, side effects include fatigue (9.7%), myalgia (8.9%), arthralgia (5.2%), headache (4.5%), pain (4.1%), and redness at the injection site (2.0%) that are temporary. These side effects are also commonly seen with other vaccines, so they are not much of a concern.
19th November update: Pfizer’s clinical trial reported that the only notable side effects with over 2% occurrence were fatigue (3.8%) and headache (2%). These lower side-effect rates compared with Moderna’s are an encouraging finding.
3. Storage conditions
Pfizer’s vaccine needs to be stored at -70°C or below. In contrast, Moderna’s vaccine only requires a -20°C storage condition, the same as a regular fridge freezer. Both vaccines can last for six months in their respective temperatures. Moderna’s vaccine can even be stored at 2–8°C for 30 days, and at room temperature for 12 hours. So, Moderna’s vaccine is much easier to distribute.
As mRNA is single-stranded, it’s much less stable than its DNA counterpart and degrades easily. So, a freezing temperature at -70°C preserves the mRNA integrity. In contrast, DNA is double-stranded and coiled into a helical formation that is more structurally tough.
But Moderna’s vaccine used a lipid nanoparticle coating to increase the mRNA stability. “[It’s] kind of like putting your chocolate inside a candy coating — you have an M&M, so the chocolate doesn’t melt,” explained Margaret A. Liu, professor of microbiology and immunology at the University of California and a former president of the International Society for Vaccines. As expected from a company that developed two candidate mRNA vaccines in the past, this is very innovative.
4. Booster shots
Pfizer’s vaccine requires two shots taken three weeks apart (i.e., days 1 and 22). The same applies to Moderna’s vaccine, which needs two shots taken four weeks apart (i.e., days 1 and 29). This means double the syringes, clinical visits, vaccine vials, time, and workload.
For Pfizer’s vaccine, it also means twice the trouble in distributions, owing to its strict storage condition, especially in developing countries or areas. For Moderna’s vaccine, the one-week extra time between shots is also encouraging as it allows a buffer time for any unexpected events.
Remaining questions
1. Transmission
For one, we still don’t know if these vaccines prevent transmission. “Even if [Covid-19] vaccines were able to confer protection from disease, they might not reduce transmission similarly,” a paper in The Lancet stated.
However, if the vaccines protect the high-risk groups, that alone would significantly reduce the pandemic’s health burden. As one in five SARS-CoV-2 infections shows no symptoms but remains contagious, we should hope that the mRNA vaccines stop transmission as well.
2. Long-term efficacy
As these mRNA vaccines rely on B-cell antibody responses that tend to wane over time, lifelong protection is not guaranteed. But the good news is that these mRNA vaccines also trigger T-cell immunity, although to a lesser extent than B-cell, which could provide a more robust immunological memory. Another upside is that even waning antibodies can still provide protection via other mechanisms such as T-cell-induced activation of antibodies.
So, we can be assured that immunity does not fade easily. Immunity may be less effective but unlikely to turn zero. Plus, we can control a few lifestyle factors to maximize our chances of successful, potent immunization.
3. Long-term safety
This one is controversial. As mRNA vaccines are new inventions, they have never been approved for any infectious diseases. So, there are no prior research or historical records we can rely on, unlike the traditional types like live, inactivated, or protein-based vaccines. But considering that mRNA is so fragile, long-term health problems are unlikely.
“We will have a safety profile for only a certain number of months, so if there is a long-term effect after two years, we cannot know,” said Dr. Tal Brosh, head of the Infectious Disease Unit at Samson Assuta Ashdod Hospital. If we wait for two years, “then we would have the coronavirus for two more years.”
Short abstract
Moderna’s vaccine has less stringent storage and booster shot requirements, which helps make mass distribution easier. Both Moderna’s and Pfizer’s vaccines have similar efficacy at around 95%, which applies to severe Covid-19 cases and diverse populations with no major safety issues. Notably, Pfizer’s vaccine has lower side effect rates than Moderna’s. But whether these mRNA vaccines stop transmission, provide lifelong immunity, or are safe in the long-run remains unanswered.
Still, mRNA vaccines' success is something to be optimistic about as it’s one step closer to normalcy. Lastly, comparing the two mRNA vaccines shouldn't promote competition as “this really isn’t a race,” said Professor Liu. “Just by sheer numbers, we probably need multiple, multiple vaccines.” Who knows if the third vaccine (like AstraZeneca’s one) is the better one.
December 2020 updates: Both Moderna’s and Pfizer’s mRNA vaccines for Covid-19 have successfully attained FDA approval for public use. | https://medium.com/microbial-instincts/moderna-vs-pfizer-mrna-vaccines-for-covid-19-the-key-points-39da4fe65e0d | ['Shin Jie Yong'] | 2020-12-25 01:28:21.709000+00:00 | ['Technology', 'Science', 'Innovation', 'Coronavirus', 'Life'] |
What past research says about the effect of vegetables and fruits on Coronaviruses? GASTROVET · May 2, 2020
There are great confusion and misinformation on the ongoing Covid-19 outbreak nowadays. One common question is how we can protect ourselves from this virus by just getting nutrients or their products through a healthy diet. Or, I mean, is it really possible to get some protection by simply eating or drinking something?
First, we can boost our immune system by eating fresh plant products that have been shown to have beneficial effects, such as garlic, turmeric, berries, and citrus fruits, and omega-3 fatty acid sources such as cold-water fish like salmon, camelina seed, and flaxseed. Another important thing is to stay away from omega-6 fatty acids as much as possible nowadays. Omega-6 fatty acids have been shown to have inflammatory effects in humans. In contrast, omega-3 fatty acids are known to have a good anti-inflammatory effect, so we do not want the inflammatory effect of omega-6 fatty acids working against us during a disease outbreak. For instance, soy oil, corn oil, and sunflower oil are packed with omega-6 fatty acids. As a recommendation, we should get 30% of our daily oil consumption from omega-6 sources and 30% from omega-3 sources; the rest can come from monounsaturated fat sources such as olive oil. This is something we need to do every day to keep our immune system alert for invaders like bacteria and viruses. However, this does not necessarily mean they will protect us from Covid-19; at least for now, we do not have any specific report on that. But it is better to have those dietary ingredients and oils in our diets regardless of Covid-19. We also need to look at past research on the efficacy of plants, including vegetables, fruits, and their products.
Salmon is a good source of omega-3 fatty acids
To really understand how nutrients can fight viral infections including coronavirus infections, we need to look at the past research on the efficacy of plants and products on earlier Coronavirus varieties named SARS and MERS.
When we investigate the published data on the effects of plants on the SARS and MERS coronaviruses, we find roughly two different categories: the first is plants and products that people can get from the grocery store, and the second is medicinal plants and herbs that people cannot obtain easily.
Among the plants we can get easily from markets, there seem to be two main groups: the first is plants that come with a high amount of LECTINS, and the second is plants that come with a high amount of QUERCETIN.
Some good sources of LECTINS are garlic, leek, and wild garlic. Plant lectins have been shown to have antiviral activity against SARS-CoV. Garlic, another good source, has been shown to have antiviral activity against a type of chicken coronavirus. Leek was also shown to interfere with the binding of coronavirus to host cells.
Garlic is loaded with Lectin
Some foods that are rich in quercetin are onions, broccoli, peppers, apples, green tea, black tea, berries, and red wine. The main positive effect of quercetin in humans is that it reduces inflammatory cytokine production. This is important since we know that Covid-19 results in an extreme cytokine release in severe cases. Therefore, although there is no published data on the effect of quercetin on Covid-19, it can be assumed that having these plants in our diets can be protective, at least to some degree.
There are also some less common medicinal plants used against SARS-CoV and proven to be helpful in some ways. Some of them are Chinese skullcap, horse chestnut, and licorice root (an ingredient in coke production). Their mode of action is different from that of the lectins: some of them just inhibit virus replication, and some of them help boost the immune system. However, people cannot get these plants easily, and most of the time it is not easy to incorporate them into diets.
To sum up, it looks like getting quercetin and lectins through the diet can be recommended based on past research against coronavirus infections, but their efficacy against the new Covid-19 requires further research.
If you want to get more details on the effects of plants to fight Coronaviruses you might want to watch the video given in the link below. Stay healthy. | https://medium.com/@pekel1234567/what-past-research-says-about-the-effect-of-vegetables-and-fruits-on-coronaviruses-dc7efb8fe8ff | [] | 2020-05-02 10:12:47.249000+00:00 | ['Medicinal Plants', 'Fight', 'Plants', 'Coronavirus', 'Vegetables'] |
Best smart thermostat: Reviews and buying advice | The best smart thermostat will have an outsize impact not only how comfortable you are in your home, but also on your household budget. Heating and cooling your home accounts for nearly half of the average home’s utility bills, according to the U.S. Department of Energy.
A programmable thermostat can help reduce those costs by turning your HVAC system on when you anticipate being home, and off when you don't think you'll need indoor climate control. A smart thermostat goes far beyond relying on a simple schedule. It will not only enable you to create more sophisticated schedules for every day of the week, it will also give you complete control over your HVAC system, even when you're away from home. We continually test and evaluate smart thermostats and can help you find the right one for your home.
Updated December 4, 2020 to add our Wyze Thermostat review, which—by virtue of its stronger feature set and much lower price—ends the Google Nest Thermostat’s short reign as our top pick in the budget category.
Best smart thermostat: Ecobee SmartThermostat with voice control
Ecobee tops itself before anyone else can: the Ecobee SmartThermostat with voice control is our number-one pick in this category.
Nest usually gets all the attention—and the company deserves credit for shaking up a once sleepy market—but Ecobee’s latest smart thermostat is the best you can buy today. The new model builds on the model that preceded it, which was itself very well executed. Many other smart thermostats rely on measuring a home’s temperature in just one spot: Where the thermostat is located. Trouble is, that spot is usually in a hallway or somewhere else that you never spend any time in. Ecobee lets you place multi-purpose sensors in various rooms in your home, so that the rooms you’re in are the ones that the thermostat instructs your HVAC system to heat or cool to keep you comfortable.
Runner-up: Nest Learning Thermostat (3rd generation), $249 MSRP
The Nest is still the best for users who don't want to think about their thermostat, but it's no longer our top pick.
Don’t count Nest out of the thermostat game. The Google division has worked harder than anyone to build out a comprehensive smart home ecosystem with its own products—the Nest Cam security camera series and the Nest Protect smoke and carbon-monoxide detectors—as well as a wide array of third-party products: Everything from ceiling fans to lighting controls and even smart appliances. The recent addition of the Nest Temperature Sensor makes this device even smarter.
So why does it garner runner-up status here? Nest counts on your buying other Nest products to help determine when you’re home and away, for starters. And anyone investing—or planning to invest—in Apple’s up-and-coming HomeKit ecosystem should steer clear of Nest products.
Best budget smart thermostat: Wyze Thermostat, $49.99 MSRP
By far the best budget-priced thermostat we've tested to date.
Wyze Labs is the market leader when it comes to offering inexpensive smart home products, and its new Wyze Thermostat is certainly no exception. It’s not the prettiest or most elegant device we’ve seen, but it offers more features and supports more types of HVAC systems than the Nest Thermostat, our runner-up in this category, and it costs just $50. If Wyze delivers on its promise to offer remote room sensors, it will be an even stronger value.
Runner-up: Nest Thermostat
The all-new and budget-priced Nest Thermostat is easy to recommend, but it would be an even better value if it supported Nest's remote sensors.
It’s hard to beat the Nest team when it comes to attractive industrial design, and the Nest Thermostat is an elegant device if you don’t need to support more sophisticated HVAC systems or you don’t care that it doesn’t support remote sensors that can eliminate hot and cold spots in your home. But its $130 price tag is a significant premium for design.
Best smart thermostat for high-voltage heaters: Mysa Smart Thermostat, $129 MSRP
A stylish and high-tech choice for making dumb high-voltage heaters a whole lot smarter.
These types of thermostats are designed for baseboard, radiant, fan-forced convector, and similar types of heaters, as opposed to the more common central HVAC systems. As such, there are far fewer choices in this category. So far, the Mysa Smart Thermostat is our top pick, due to its elegant industrial design and its broad support for other smart home devices, including Amazon Alexa, Google Assistant, and Apple HomeKit.
Best controller for a stand-alone air conditioner: Sensibo Air, $199 MSRP
The Sensibo Air's killer feature is a remote sensor that detects motion as well as temperature and humidity, ensuring you get the most comfortable environment in the exact area of the room you're occupying, and saving you money by turning your air conditioner off when you're not in the room.
If you don’t have a central HVAC system, or if your supplement one with one or more stand-alone air conditioners, the Sensibo Air will make those units smarter and more efficient. It’s expensive, but very much worth the cash.
Runner-up: Cielo Breez Plus
While this isn't the most attractive air-conditioner controller we've seen, it is the most versatile and the easiest to set up and use. It's also compatible with more air conditioner models than its competitors.
The Cielo Breez Plus doesn’t have the slick discrete room sensor that comes with the Sensibo Air, but it will still greatly improve the performance of your stand-alone air conditioner, and it’s less expensive than its more sophisticated competitor.
What to look for when shopping

C-wire requirement
Most smart thermostats require more electrical power than a set of batteries can provide. Fortunately, they don't require so much power that they need to be plugged into the wall. They rely instead on low-voltage power provided by your HVAC system. Many smart thermostats require the presence of a dedicated C (common) wire for this purpose, while others can siphon electricity from another source, typically the R (power) wire. But the latter practice is known to cause problems with some HVAC systems, including permanent damage. If you pull out your existing thermostat to install a smart model and find no C wire connected to it, look inside the wall to see if there's one that hasn't been connected. If there's no C wire, our advice is to have one installed. Only a couple of the thermostats reviewed here require a C wire, but all the manufacturers highly recommend using one.
Ease of installation
A thermostat shouldn't be difficult to install, even if you're only moderately handy. The manufacturer should provide comprehensive yet easy-to-understand instructions, with plenty of photographs or illustrations to guide you through the process. The thermostat itself should clearly indicate which wires go where, and most companies provide labels that you can attach to the wires coming out of the wall as you disconnect and remove your old model. The wires themselves should be color-coded, but a good practice is to photograph your old thermostat for reference before you take it down.
Geofencing
This feature uses the thermostat's app and your smartphone's GPS chip to establish a perimeter around your home. When you leave the perimeter, you presumably no longer need to heat and cool your home, or you can at least have the thermostat adjust the temperature so that it's not running unnecessarily. When you cross the perimeter again as you come home, your HVAC system can kick into action so your house is comfortable when you walk in the door.
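To illustrate the idea, here is a minimal sketch of geofencing logic. The coordinates, radius, and mode names are hypothetical; real thermostat apps rely on the phone OS's geofencing APIs rather than raw GPS math like this:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points, in meters
    r = 6371000  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

HOME = (40.7128, -74.0060)   # hypothetical home coordinates
RADIUS_M = 150               # hypothetical perimeter radius

def hvac_mode(phone_lat, phone_lon):
    # Inside the perimeter: heat/cool for comfort; outside: save energy
    inside = distance_m(phone_lat, phone_lon, *HOME) <= RADIUS_M
    return "comfort" if inside else "eco"

print(hvac_mode(40.7129, -74.0061))  # → comfort
print(hvac_mode(40.7300, -74.0060))  # → eco
```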
High-voltage heater support
Most smart thermostats are designed to work with central HVAC systems. If your home is heated by high-voltage heaters (baseboard, radiant, and fan-forced convector, for example), you'll need a thermostat that's specifically designed to work with that type of heater.
Remote access
Remote access enables you to control your thermostat from afar, so that you can check in and adjust the temperature from wherever you have a connection to the internet.
Sensors
Geofencing is great—provided everyone who lives in the home has a smartphone. Motion and proximity sensors offer an alternative means of determining whether your home is occupied and therefore in need of climate control. The original Nest thermostat was often criticized for relying too much on its motion sensor. If no one walked past it often enough, it would decide that the house was empty and it would stop heating or cooling. Some smart thermostats can also tap into door and window sensors as well as the motion sensors for your home security system. And proximity sensors on the thermostat itself can trigger the thermostat's display to turn on when you walk past it, making the screens a handy feature in their own right, if for no other reason than providing a nighttime pathway light.
Samsung: The best smart thermostats can be integrated into broader smart-home systems, such as Samsung's SmartThings.
Smart-home system integration
Every smart thermostat comes with an app so you can control it with your smartphone or tablet, but the best models can also be integrated with other smart-home devices and broader smart-home systems. This can range from being able to adjust the temperature with a voice command via an Amazon Echo or Google Home digital assistant, to linking to your smoke detector so that your fan automatically turns off when fire is detected, preventing smoke from being circulated throughout your home. Other options to consider include IFTTT and Stringify support, Apple HomeKit compatibility, smart-vent connectivity, and tie-ins with home security systems.
System complexity
Each of the thermostats we tested supports multi-stage heating, ventilation, and air conditioning (HVAC) systems, as well as heat-pump systems. If your home is divided into zones that are heated and cooled independently of each other, you'll probably need one thermostat for each zone. A single app should be able to control multiple zones.
User interface
Long gone are the days when a thermostat's user interface consisted of numbers on a dial. The more sophisticated a device becomes, the more difficult it can be to learn to use. The last thing you want to be doing is staring at inscrutable hieroglyphics on the wall when all you really want is to be warmer or cooler. A smart thermostat should convey important information at a glance and should easily adapt to your specific needs.
How we test smart thermostats
We install thermostats in a single-family home with a conventional HVAC system and use each one for a week or more to determine how effective it is at maintaining a comfortable environment. The home's existing thermostat was wired with G, R, W, and Y wires. There was also a C wire in the wall that was connected to the furnace, but that had not been previously used.
While there is no regulated standard for color-coding HVAC wires, industry practice has the G wire connecting the thermostat to the fan. This wire is typically green. The R wire, typically red, is for power. Some systems have separate power wires for heating and cooling and are labeled RH and RC respectively. The typically white W wire is for auxiliary heat; i.e., a second source of heat. The Y wire, which is typically yellow, connects the thermostat to your air conditioner. Finally, the C or “common” wire is used to carry power and is typically blue (think cerulean if you need a mnemonic).
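The typical conventions described above can be summarized in a small lookup table. This is only a sketch of common (unregulated) practice as the text describes it, not a wiring standard:

```python
# Typical (not standardized) thermostat wire conventions, per the text above.
# Label -> (usual color, role)
WIRES = {
    "G": ("green", "fan"),
    "R": ("red", "power"),            # split into RH/RC on some systems
    "W": ("white", "auxiliary heat"),
    "Y": ("yellow", "air conditioner"),
    "C": ("blue", "common (power)"),
}

for label, (color, role) in WIRES.items():
    print(f"{label}: {color:6s} -> {role}")
```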
Our smart thermostat reviews | https://medium.com/@kevin91395093/best-smart-thermostat-reviews-and-buying-advice-d34780897c88 | [] | 2020-12-23 21:49:24.975000+00:00 | ['Lighting', 'Audio', 'Home Tech', 'Electronics']
How to install Hyperledger Fabric 2.2 on Ubuntu 20.04 in Google Cloud Platform | Detailed documentation link: https://hyperledger-fabric.readthedocs.io/en/release-2.2/
Finding a working end-to-end example of the installation and a quick test of Hyperledger Fabric is quite complicated. The only working guide seems to be the one mentioned above. However, if you are a beginner, you may get lost in the tons of details. The process is simple, and the following guide will take you to the stage where you can start exploring development and deployment of smart contracts.
First you will need a virtual machine. 2Core/4GB should be ok:
Select the right boot disk (100GB should be enough, and Ubuntu 20.04 LTS Minimal). Do not forget to add an SSH public key to access the machine ;)
When the instance is ready, SSH into it.
$ sudo apt update
$ sudo apt upgrade # dependencies
$ sudo apt-get install vim git wget curl net-tools apt-transport-https ca-certificates gnupg-agent software-properties-common
Next you will need a docker instance:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose
$ sudo usermod -aG docker $USER
# or: sudo chmod 777 /var/run/docker.sock
# test that you can see an empty list of containers
$ docker ps -a
$ sudo systemctl start docker
$ sudo systemctl enable docker
Docker is up; now for Golang. If you prefer Java, look for hints in the full documentation.
$ wget https://dl.google.com/go/go1.15.2.linux-amd64.tar.gz
$ sudo tar -xvf go1.15.2.linux-amd64.tar.gz

# optional
$ rm go1.15.2.linux-amd64.tar.gz
$ sudo mv go /usr/local

# add it to ~/.bashrc
$ export GOROOT=/usr/local/go
$ sudo ln -s /usr/local/go/bin/go /usr/bin/go

# final check
$ go version
Finally Hyperledger Fabric installation steps:
# this is the core installation script (I was suspicious about where it points, but you can inspect it first)
$ sudo curl -sSL https://bit.ly/2ysbOFE | bash -s

# paths (update ~/.bashrc as well)
$ cd fabric-samples/
$ export PATH=$PATH:/home/server/fabric-samples/bin

$ cd test-network/
# create channel (in my case "uni")
$ ./network.sh up createChannel -c uni
# to shut down and clean up the docker containers, use: ./network.sh down

# starting a chaincode on the channel
# I'm not sure if this step is necessary
$ ./network.sh deployCC -c uni
All done! Congratulations, your Blockchain server is up and running!
Now, how do we test that it actually works? The demo network contains some assets belonging to two organisations. First we assign the session to an organisation (Org1):
# identity Org1
$ export FABRIC_CFG_PATH=$PWD/../config/
$ export CORE_PEER_TLS_ENABLED=true
$ export CORE_PEER_LOCALMSPID="Org1MSP"
$ export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
$ export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/[email protected]/msp
$ export CORE_PEER_ADDRESS=localhost:7051

# initialize the ledger with assets
$ peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile ${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C uni -n basic --peerAddresses localhost:7051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"function":"InitLedger","Args":[]}'

# response
# -> INFO 001 Chaincode invoke successful. result: status:200

# get list of assets
$ peer chaincode query -C uni -n basic -c '{"Args":["GetAllAssets"]}'
# response, something like this:
[{"ID":"asset1","color":"blue","size":5,"owner":"Tomoko","appraisedValue":300},{"ID":"asset2","color":"red","size":5,"owner":"Brad","appraisedValue":400},{"ID":"asset3","color":"green","size":10,"owner":"Jin Soo","appraisedValue":500},{"ID":"asset4","color":"yellow","size":10,"owner":"Max","appraisedValue":600},{"ID":"asset5","color":"black","size":15,"owner":"Adriana","appraisedValue":700},{"ID":"asset6","color":"white","size":15,"owner":"Michel","appraisedValue":800}]
# Chaincodes are invoked when a network member wants to transfer or change an asset on the ledger.
# Use the following command to change the owner of an asset on the ledger by invoking the asset-transfer (basic) chaincode - transferring asset6 from Michel to Christopher
$ peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile ${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C uni -n basic --peerAddresses localhost:7051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses localhost:9051 --tlsRootCertFiles ${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"function":"TransferAsset","Args":["asset6","Christopher"]}'
# get list of assets
$ peer chaincode query -C uni -n basic -c '{"Args":["GetAllAssets"]}'
# response:
[{"ID":"asset1","color":"blue","size":5,"owner":"Tomoko","appraisedValue":300},{"ID":"asset2","color":"red","size":5,"owner":"Brad","appraisedValue":400},{"ID":"asset3","color":"green","size":10,"owner":"Jin Soo","appraisedValue":500},{"ID":"asset4","color":"yellow","size":10,"owner":"Max","appraisedValue":600},{"ID":"asset5","color":"black","size":15,"owner":"Adriana","appraisedValue":700},{"ID":"asset6","color":"white","size":15,"owner":"Christopher","appraisedValue":800}]
# to switch identity, just replace the env. variables:
$ export CORE_PEER_TLS_ENABLED=true
$ export CORE_PEER_LOCALMSPID="Org2MSP"
$ export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
$ export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
$ export CORE_PEER_ADDRESS=localhost:9051
$ peer chaincode query -C uni -n basic -c '{"Args":["ReadAsset","asset6"]}'
# response: {"ID":"asset6","color":"white","size":15,"owner":"Christopher","appraisedValue":800}
That’s it! I would like to add more to this topic once I have a working demo of a Smart Contract in Golang.
Update: hyperledger-ansible-1.0 for GCP (Source code (tar.gz))
The Role of IoT and GIS in Transforming Lives | Image: Spar3D
The Internet of Things (IoT) and Geographic Information System (GIS) are potent technologies standalone. They have helped build the world around us and have become indispensable to our lives, whether we realize it or not.
IoT is a collection of devices that work in conjunction with each other to transmit data and carry out functions without human intervention. It can be deployed in many industries and has many uses. From our bedroom technologies and voice-controlled devices to air traffic control and military equipment, IoT has been a gamechanger when it comes to interconnecting devices to increase efficiency.
GIS, on the other hand, is a location technology that facilitates simple functionalities such as GPS tracking of your food order to complex functions like studies of environmental trends over hundreds of years.
When used together, these powerful technologies facilitate the building of relationships between devices and objects that are able to operate and be tracked with respect to each other. It looks like something out of a sci-fi movie and is truly fascinating.
Also Read: What is Internet of Things and Its Top Applications
The data produced and collected when IoT is used in conjunction with GIS is valuable, as it gives deep insights into the effectiveness, cross-device functionality, and seamlessness of the system created by these technologies.
Today, IoT and GIS are used together for a variety of purposes, mostly at scale. Here are some industries and projects leveraging these two powerhouse technologies:
Smart Lives
With everything in your life being monitored through IoT and GIS devices, it is imperative to understand how this data benefits us. Remember how your phone recommends the fastest route to work at 8 am with your most commonly used mode of transport? IoT and GIS are to thank here.
Now, you’re able to switch up your thermostat and refrigerator settings while sitting at your work desk in an office miles away. You are able to find the nearest restaurants, see their availability in real-time, and connect with them at the tap of a button. All of this has GIS and IoT being used at some point or the other.
Smart Vehicles
Vehicles that are IoT- and GIS-enabled can transmit data to authorities for urban traffic management and road safety. These cars can predict traffic, provide alternate routes, and also warn drivers of road conditions and potential threats.
IoT and GIS used in vehicles can help reduce the number of accidents by reducing the split-second decisions people have to take in times of crisis. It also helps authorities increase road safety, find problem areas, and effectively help people.
Smart Infrastructure
Infrastructure can be made more efficient by using IoT in conjunction with GIS. It can help analyze energy use patterns across different buildings, make resource use more efficient, and help build strategies for sustainability by analyzing data and trends over time.
Smart Urban Development
Planning Smart Cities that use IoT and GIS can help create ecosystems that thrive and are sustainable. Deploying GIS and IoT at this scale can help reduce pollution and the stress on the environment. It can also help build strategies to safeguard against future threats, create an efficient system for public services, and reduce financial wastage.
The use of IoT and GIS has proven to be beneficial individually, but together, these technologies create a future-proof tomorrow.
Watch Video: Know All About Digital Twin | https://medium.com/@thegeospatialnews/the-role-of-iot-and-gis-in-transforming-lives-728eb4dd4b02 | ['The Geospatial'] | 2020-05-02 11:12:07.292000+00:00 | ['Future Technology', 'IoT', 'GIS', 'Technology', 'Smart Cities'] |
Comparing Distributed SQL Performance — Yugabyte DB vs. Amazon Aurora PostgreSQL vs. CockroachDB | Update: A new post “The Effect of Isolation Levels on Distributed SQL Performance Benchmarking” includes performance results from running these workloads at serializable isolation level in YugabyteDB.
We are excited to announce the general availability of Yugabyte 2.0 this week! One of the flagship features of the release was the production readiness of the PostgreSQL-compatible Yugabyte SQL (YSQL) API. In this blog post, we will look at the performance and scalability of YSQL as compared to two other PostgreSQL-compatible distributed SQL databases — Amazon Aurora PostgreSQL and CockroachDB.
SQL benchmarks demonstrated that YSQL can scale to 10x the maximum throughput possible with Amazon Aurora. Additionally, for a similar hardware profile, YSQL delivered nearly 2x the throughput of Amazon Aurora at half the latency.
In case you are wondering what distributed SQL is — it brings together the SQL language and transactional capabilities of an RDBMS and the cloud-native capabilities such as high availability, scalability, fault tolerance and geo-distribution that are typical to NoSQL databases.
Benchmark Setup
The table below summarizes the design points of these databases. Note that we are explicitly not considering multi-master setup in Aurora PostgreSQL because it compromises data consistency.
In this post, we look at the following performance and scalability aspects of these databases:
Write performance
Scaling writes
Scaling reads
Scaling connections
Distributed transactions
All the benchmarks below were performed in the Oregon region of AWS cloud. The benchmark application can be found here. Yugabyte DB 2.0 was set up on a three-node cluster of type i3.4xlarge (16 vCPUs on each node) in a multi-AZ deployment. This deployment is shown as a UI screenshot below.
CockroachDB (version 19.1.4) had an identical setup to Yugabyte DB. Aurora PostgreSQL was set up on 2 nodes of type db.r5.4xlarge (16 vCPUs on each node). One node was the master, the other node a standby for fast failover in a different AZ. This setup is shown below.
[Edit] Note that CockroachDB only supports Serializable isolation, while Yugabyte DB and Amazon Aurora support both Serializable and Snapshot isolation. Amazon Aurora even supports the lower isolation level of Read Committed which is also its default. The benchmarks in this post use the default settings in all DBs — which is sufficient for the correctness of these workloads (simple inserts and secondary indexes). The SQLInserts and SQLSecondaryIndex workloads of this benchmark client were used for these results.
Write Performance
The benchmark was to insert 50M unique key-values into the database using prepare-bind INSERT statements with 256 writer threads running in parallel. There were no reads against the database during this period. The benchmark results are shown below.
While the above numbers are already impressive, we are just getting started with the performance of YSQL. Yugabyte DB’s core storage engine, DocDB, which powers both YSQL & YCQL, is capable of much higher throughput. The semi-relational YCQL API, which runs on top of DocDB similar to YSQL, is more mature and hence performs better as shown below.
There are additional improvements that we are working on in the YSQL query layer to achieve even better performance (to match that of YCQL).
Scaling writes
What happens when we need to scale? We had noted in the table above that AWS Aurora cannot horizontally scale writes. The only way to scale writes in Aurora is vertical scaling, meaning the node has to be made beefier. The maximum write IOPS Aurora can scale to is limited by the largest available node in terms of vCPUs.
Beyond 1 Million Writes/Sec with Yugabyte DB
Since Yugabyte DB is both high-performance and horizontally scalable, an experiment to scale it to a million write ops/sec was in order. The setup was a 100 node Yugabyte DB cluster with c5.4xlarge instances (16 vCPUs and 1TB of storage per node) in a single zone. This cluster, named MillionOps, is shown below.
This cluster was able to perform 1.26 million writes/sec at 1.7ms latency!
168K Writes/Sec Ceiling of Aurora PostgreSQL
The above benchmark numbers (28K writes/sec) were observed on a 16 vCPU (db.r5.4xlarge) machine. The largest instance available for Aurora has 96 vCPUs (db.r5.24xlarge), which has 6x more resources than the one used for the benchmark shown above with 16 vCPUs (db.r5.4xlarge). Assuming that the writes scale with the machine size, the maximum write throughput of an Aurora database with multiple tables is capped at 168K ops/sec. Even though Amazon Aurora can store up to 64TB of data, this throughput bottleneck will pose a practical challenge in exploiting the available storage. After hitting this write throughput ceiling, the only choice is to manually shard the data at the application layer, which is a complex undertaking.
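The ceiling arithmetic above can be written out explicitly. Here is a quick sketch (TypeScript used purely as a calculator), using only the figures quoted in this section and the linear-scaling assumption:

```typescript
// The Aurora write-ceiling estimate from the paragraph above, assuming writes
// scale linearly with vCPUs on a single master.
const measuredWritesPerSec = 28_000; // observed on 16 vCPUs (db.r5.4xlarge)
const benchmarkVcpus = 16;
const largestInstanceVcpus = 96;     // db.r5.24xlarge, the biggest Aurora node

const auroraWriteCeiling = measuredWritesPerSec * (largestInstanceVcpus / benchmarkVcpus);
console.log(auroraWriteCeiling); // 168000 writes/sec
```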
By contrast, a Yugabyte DB cluster scales linearly with the number of nodes. A 12 node cluster of Yugabyte DB would be able to exceed the above write throughput of 168K ops/sec. A graph comparing the write scalability of these two databases is shown below.
Scaling Reads
Both these databases can scale reads, however:
Read scaling in Aurora compromises consistency by serving stale reads.
Application design could get more complex if they have to query read replicas in Aurora.
Let us look at how read scalability is achieved in these databases.
The Aurora PostgreSQL documentation outlines the following in order to scale the database.
We have already looked at the write throughput ceiling as a result of instance scaling. Let us examine read scaling in Aurora. Reads and writes have separate endpoints in Aurora. In order to scale reads, it is the responsibility of the application to explicitly read from the multiple read endpoints.
Firstly, this means that applications are required to explicitly include which endpoint to connect to in their design. This decreases the velocity of developing applications because the choice of endpoint becomes part of the application architecture, and making it may not be a trivial exercise, especially when considering failover scenarios.
Secondly, and more importantly, the bigger issue is that reading from a replica returns stale data — which could compromise consistency. In order to read the source of truth, the application has to read from the master node (which also handles all the writes). Because a single node needs to serve consistent reads, this architecture would limit the read throughput to whatever can be served by the largest node (similar to the analysis we did with writes).
By contrast, Yugabyte DB treats all nodes in an identical manner. This improves things in the following ways:
The application simply needs to connect to a random node in the cluster, and the rest is handled by the database. All the nodes of the database can be put behind just one load-balancer.
When performing reads, all nodes of the cluster are able to participate and hence the read throughput is much higher.
Eliminating the Load Balancer with Cluster-aware JDBC Drivers
As an attempt to simplify things even further, we’re working on a cluster-aware version of the standard JDBC driver, called Yugabyte JDBC. These drivers can connect to any one node of the cluster and “discover” all the other nodes from the cluster membership that is automatically maintained by Yugabyte DB.
Events such as node additions, removals and failures are asynchronously pushed to these client drivers, resulting in the applications staying up-to-date with the cluster membership. With cluster-aware JDBC drivers, you no longer need to manually update the list of nodes behind the load-balancer or manage the load-balancer's lifecycle, making the infrastructure much, much simpler and more agile.
Scaling Connections
Scaling the number of connections is a common concern with PostgreSQL. There is a limit to the number of connections to an Aurora PostgreSQL database. From the AWS documentation, the table below summarizes the recommended number of connections to the database based on the instance sizes.
The table shows that the maximum number of connections recommended, even in the case of the largest Aurora PostgreSQL database, is 5000 (though the theoretical maximum mentioned in the docs is 262,142). In cloud-native applications which have many microservices and massive scale, this quickly becomes a limitation.
With Yugabyte DB, the number of connections is specified per node in the cluster. The default number of connections per node (also configurable) is 300, in our example setup of 3 nodes we would get a maximum of 900 connections. But scaling for connections is easy. By choosing 6 instances with 8 vCPUs (instead of 3 instances with 16 vCPUs), we have effectively doubled the number of connections to 1.8K while keeping the resources the same! Similarly, by choosing 24 instances with 8 vCPUs (rough equivalent of the largest Aurora cluster with 96 vCPUs), the deployment can scale to over 10K connections.
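The connection math above can be sketched as a one-line helper. The 300 connections/node default and the 3- and 6-node layouts come from the text; the 450/node figure in the last call is a hypothetical raised limit, included because the per-node default is configurable:

```typescript
// Cluster-wide connection ceiling: per-node limit times node count.
function maxConnections(nodes: number, connsPerNode: number = 300): number {
  return nodes * connsPerNode;
}

console.log(maxConnections(3));       // 3 × 16-vCPU nodes -> 900
console.log(maxConnections(6));       // 6 × 8-vCPU nodes  -> 1800
console.log(maxConnections(24, 450)); // hypothetical raised limit -> 10800, past 10K
```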
Distributed Transactions — Latency vs Scalability
Yugabyte DB is a horizontally write scalable database. This means that all nodes of the cluster are simultaneously active (as opposed to just one master, as is the case with Aurora). To achieve horizontally write scalability, the data is seamlessly split up into small chunks, called tablets, which are then distributed across all the nodes of the cluster.
When Yugabyte DB needs to perform a distributed transaction, it needs to perform writes across the different tablets, which end up being RPC calls to remote nodes. The upshot of this is that the database might have to perform RPC calls over the network in order to handle an end user’s transaction, which can affect both the latency and throughput as seen by the end user. With Amazon Aurora, the entire transaction is handled on the master node with no remote RPC calls.
This becomes an architectural tradeoff that is fundamental to the two designs, and thus needs some careful thought before picking one versus the other. But what do the raw performance numbers look like? In order to determine that, we performed a benchmark to insert 5M unique key-values into a database table with a secondary index enabled on the value column. There were no reads against the database during this period.
Analysis of Tradeoffs Using a Benchmark
Below are the results of this secondary index benchmark across these distributed PostgreSQL databases. These benchmarks write 5 million transactions (each of which write two keys as a transaction) with 128 writer threads. These benchmarks were performed with the standard setup outlined above.
Yugabyte DB needs 3–4 remote RPC calls before it can perform a distributed transaction involving multiple shards of the main table and the index (which is also modeled as a table). This results in a correspondingly higher latency and lower throughput. The write latency of a transaction in the above benchmark in Yugabyte DB is 22ms, while that of Aurora PostgreSQL is only 6ms. Additionally, the write throughput of a 3 node (16 vCPU) YSQL cluster is only 5.3K, while that of Aurora PostgreSQL is 20K.
Let us look at what happens when the time comes to scale the writes of the above workload. We have already discussed in a previous section that Aurora PostgreSQL can only scale to a maximum of 96 cores, or the write ceiling of the Aurora PostgreSQL database is capped at 120K transactions/second across all the transactions performed by the app and indexes on the various tables in the database. With Yugabyte DB, a 63 node cluster would deliver 120K transactions/sec, a 106 node cluster would deliver over 200K transactions/sec.
This means that Aurora PostgreSQL is a great choice if your database instance would never need to handle more than 120K transactions/sec. If future proofing for increasing scale is important, then Yugabyte DB is a better choice.
Note that the analysis in this section only applies for write transactions, reads are not affected by this analysis.
Future Work
We have a number of items we are working on.
Improve the performance of YSQL to be on par with YCQL, which is very achievable.
Change the connection handling architecture of YSQL. It currently spawns one process per connection, which can be a performance bottleneck. YCQL, on the other hand, spawns one thread per connection and can therefore handle connection spikes much better.
We intend to make cluster-aware JDBC drivers the default for Yugabyte DB.
Run a TPCC benchmark against YSQL.
If you are interested in any of the above or other similar kinds of work, please reach out to us — we’re hiring!
What’s Next? | https://medium.com/yugabyte/comparing-distributed-sql-performance-yugabyte-db-vs-amazon-aurora-postgresql-vs-cockroachdb-4bbfdc4c5878 | ['Karthik Ranganathan'] | 2019-09-24 18:28:42.656000+00:00 | ['Scalability', 'Postgres', 'Database', 'Sql', 'Kubernetes'] |
Data-driven components in Vue.js | In the previous articles, we discussed how to adopt TypeScript in a lean way and how to modularize the application logic in Vue.js applications. But in both articles, we barely touched Vue components. It’s time to change that. In this article, we will pick up where we left and will leverage our Type definitions and our modularized logic to build a lean, maintainable, and reusable invoice component.
Let’s get started!
Sketching the functionality
In our previous articles, we have defined a simplified data model for an invoice application, and we have built the core logic for handling operations on an invoice. If you haven’t checked these articles yet, now it is the time to do it.
Today we are going to build a few components to render and manipulate an invoice.
Below we have a rough mockup of what we want the component to look like:
Mockup for Invoice component
Please keep in mind that our goal here is to discuss code structure. We will overlook concerns such as UI and UX.
Planing the components
So, how do we go about breaking up the requirements into manageable components? And, perhaps more importantly, what will the interface (props, events emitted, slots) of these components look like?
Here is one possible high-level breakdown of the components:
Components breakdown
The two main components here are the Invoice and LineItem components. The Invoice component takes an invoice object of type Types.Invoice and, whenever this object changes, emits the updated invoice. The same thing happens for the LineItem component.
The ProductSelector component will encapsulate the logic for selecting a product and will emit the chosen product.
If we use appropriate names for the props and events emitted, we can use Vue’s v-model directive to bind data to our components. Let's see how that works in code.
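Concretely, the binding can be sketched like this. This is an illustrative fragment, assuming the component follows Vue 2's default v-model contract (a value prop paired with an input event) and that invoice is a local data object on the parent:

```html
<!-- The two lines below are equivalent; v-model is shorthand for the
     prop/event pair, so the parent's `invoice` object stays in sync. -->
<Invoice v-model="invoice" />
<Invoice :value="invoice" @input="invoice = $event" />
```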
Invoice component
We can start by implementing the Invoice component fully, assuming the other parts are available. This approach will generate a wish-list of components that we will implement one by one.
Invoice component definition
Our invoice model doesn’t currently have the concept of Invoice Number or Invoice Due Date. It would be straightforward to add it to the invoice type and modify the invoice module, but to make this article simpler, we are just hard coding some values there for now.
Notice how we are taking a prop of type Types.Invoice and are emitting input events whenever the invoice is changed. Now our modularized invoice logic is paying off. Look how simple the code in our Invoice component is: it just ties the events from the underlying components to the Invoice module.
We are using the Emit decorator from vue-property-decorator . It will emit the return value of the decorated function, which makes this code really concise. If you are not used to it, it is possible to achieve the same thing by doing:
Using explicit $emit instead of Emit decorator
Notice also how we are invoking the LineItem and AddLineItem components, that we have not yet implemented. Let's take care of that.
AddLineItem component
Let’s start with the AddLineItem component. In the Invoice component, we have defined that the AddLineItem component should emit an add event whenever a line item is added. This is the component definition:
AddLineItem component definition
This component is also rather simple. We have a button that will trigger our EditLineItemModal component, passing a new LineItem object to it. This new line item object is built in the newLineItem method. Notice how here we are using a Types.Partial<Types.LineItem> type.
Types.Partial is a helper that we will add to the types folder to allow having incomplete objects of a certain type. In this case, we don't have a product to assign to the LineItem object, that is why we are using a partial. This how the Types.Partial helper is defined:
Partial type definition
A Partial object will have the same properties of the passed-in type, but all fields can be undefined or null . This helper should be used with caution because we cannot know if the properties are present or not.
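For readers who want to see the helper in a runnable form, here is a plain-TypeScript sketch. The Product and LineItem shapes are assumptions based on this article, and rate is simplified to a number where the real code uses a Decimal:

```typescript
// Sketch of the Types.Partial helper: every property becomes optional and may
// also be null, letting us model objects that are still under construction.
namespace Types {
  export interface Product { id: string; name: string; }
  export interface LineItem { product: Product; rate: number; quantity: number; }
  export type Partial<T> = { [P in keyof T]?: T[P] | null };
}

// A line item under construction: no product chosen yet.
const draft: Types.Partial<Types.LineItem> = { product: null, quantity: 1 };
console.log(draft.product);  // null
console.log(draft.quantity); // 1
```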
Let’s move on to the EditLineItemModal component now.
EditLineItemModal component
EditLineItemModal definition
We are using a SimpleModal component here to encapsulate the modal behavior. It is the same code as available in https://vuejs.org/v2/examples/modal.html. We are not going to reproduce the modal code here, but it is available at the repo.
This component has three fields to define a LineItem: the product field, encapsulated in the ProductSelector component, and two input fields for the rate and quantity .
One thing to notice here is how we are making a local copy of the passed-in prop. As we have Ok and Cancel buttons, we cannot update the prop itself when a field is changed, because the user might hit cancel. So we do a deep copy of the item prop into the localLineItem object anytime the item changes and emit the local line item when the user clicks Ok .
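Stripped of the component mechanics, the copy-then-commit pattern looks roughly like this. The field names are illustrative, and the JSON round-trip clone is only adequate for plain data (a real Decimal field would need a smarter clone):

```typescript
interface DraftItem { productName: string | null; rate: number; quantity: number; }

// Make an independent copy so edits never mutate the passed-in prop.
function deepCopy<T>(obj: T): T {
  return JSON.parse(JSON.stringify(obj)) as T;
}

const itemProp: DraftItem = { productName: "Widget", rate: 10, quantity: 2 };
const localLineItem = deepCopy(itemProp); // what the modal edits
localLineItem.quantity = 5;

console.log(itemProp.quantity);      // 2 — the prop is untouched until the user hits Ok
console.log(localLineItem.quantity); // 5 — emitted on Ok, simply discarded on Cancel
```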
Also, as the rate is a Decimal object, we had to wrap its value in a getter and setter so that we can transform it to and from a number, which is what the HTML input element can handle. If you have several places in your application where you need to handle Decimals, you might want to create a DecimalInput component that takes Decimals in and emits Decimals out, so that you can use v-model directly with your Decimal object.
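To illustrate that getter/setter bridge, here is a framework-free sketch. The Decimal class below is a minimal stand-in, not the real library type the app would use:

```typescript
// Minimal stand-in for the Decimal type (a real app might use decimal.js).
class Decimal {
  constructor(private readonly value: number) {}
  toNumber(): number { return this.value; }
}

// The editor keeps a Decimal internally but exposes a number to the <input>.
class LineItemEditor {
  private _rate: Decimal = new Decimal(0);

  // What v-model on the <input> reads…
  get rateAsNumber(): number { return this._rate.toNumber(); }
  // …and what it writes back, rewrapped as a Decimal.
  set rateAsNumber(n: number) { this._rate = new Decimal(n); }
}

const editor = new LineItemEditor();
editor.rateAsNumber = 12.5;
console.log(editor.rateAsNumber); // 12.5
```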
ProductSelector
The ProductSelector component is a thin wrapper around the select element.
ProductSelector component definition
We are hardcoding the products here to simplify our example. But in an actual application, this component would have the ability to search the products, loading them as needed from an API. The main takeaway here is that we are encapsulating the product selection in a component, so we can easily change its internals, without affecting the components that use it. If you need to implement a selector similar to this one, take a look at Vue Multiselect.
We have now completed the components needed to build the AddLineItem functionality. Let's move on to the LineItem component.
LineItem component
LineItem component definition
The LineItem component shows the line item details, along with the line item total amount. There are also two buttons, one to edit the current line item and one to remove it from the invoice.
We are reusing the EditLineItemModal component we wrote for the AddLineItem . We emit a LineItem object whenever the item is edited. We also emit a remove event when the user clicks the Delete link. Once again, we are using our module's logic when needed, in this case, to calculate the total amount of the line item.
Using the Invoice component
Now our Invoice component is fully developed, and we can use it in our application. Let's add it to the existing HelloWorld component.
Using the Invoice component
Here, we are creating a local invoice, using the Invoice module, and passing it to the Invoice component we just wrote.
The Invoice component is fully usable — we can add, remove, and edit line items, and the total amount is calculated correctly. In a real application, instead of just tying the Invoice component with a local data invoice object, we would probably link it to a Vuex store that would eventually trigger network requests to send the data to some API. Anyway, we have neatly encapsulated the invoice manipulation logic in the component, which delegates the business logic handling to the modules.
Validation
If you are reading closely, you might have noticed that we haven’t added validation to our EditLineItemModal form. This could lead to a bad state in our application because this component is taking a Partial line item object as a prop, and it might as well emit a partial LineItem. Let's fix that now.
Disabling OK button if localLineItem is invalid
This is a bit naive validation, but it is enough for our purposes. Now it is not possible to save a line item if the product or rate are not set or if the quantity is not a number.
In a real application, we should use more robust validation libraries such as Vuelidate or Vee Validate.
Wrapping up
We have developed a few components to create our invoice functionality. We started by defining a rough wireframe for the invoice component and have broken it down into smaller pieces.
We created small and maintainable components that are derived from our type definitions. As promised, the components are a thin layer that wires the user interactions with our core logic. As long as we keep the interface (props/events) untouched, we can change our components freely, and the overall functionality should still work.
I hope you have liked this approach. Let me know your thoughts in the comments. | https://medium.com/swlh/data-driven-components-2ab02ccbf204 | ['Vinicius Teixeira'] | 2020-06-02 16:17:12.622000+00:00 | ['Typescript', 'Vuejs', 'JavaScript'] |
FIFA 19’s extreme illusion of narrative consequence | FIFA 19 The Journey’s protagonists: l-r, Alex Hunter, Kim Hunter, and Danny Williams
The lockdowns and varying tiers of Covid-19 restrictions in 2020 enabled me to play and “finish” more video games through the year than I have done in a long while, but what drove me to pick up a sports game two years late?
When I first received the game in 2018, I was wrapped up in a compulsion to play the then recently released Red Dead Redemption 2 and almost neglected the sports sim. Save for a few cursory exhibitions, my only interaction with the game was dipping into the start of “The Journey” mode, picking up Alex Hunter’s story fresh off celebrating a Bundesliga title win in my FIFA 18 save, which in turn followed the first season of his dramatised career in FIFA 17.
Finishing “The Journey” was my only reason to drift back to FIFA 19 as 2020 draws to a close, and after completing lengthy narrative-led games like Final Fantasy VII Remake and The Last of Us Part II, it feels like light-hearted fluff, a palate cleanser. And with my compulsion to “finish” the games on my ever-growing backlog, revisiting and completing “The Journey” feels like ample playing time to tick it off the list. Yet, despite enjoying the easy-going run through, the game itself leaves a lot to be desired, especially as there’s a clearly demonstrable audience clamouring for the narrative fantasy simulation of progressive seasons, moulding your favourite teams, your dream-fulfilling avatar and winning trophies.
This mode, a much-trumpeted narrative adventure within the straight-faced and almost entirely uncritical gaze of the FIFA games, chronicled the career of a teenage soccer sensation across 3 of the series’ yearly instalments. Like Goal and other media representations of football before it, the story stretches credulity to breaking point, yet provides a somewhat compelling reason to practice training drills and play through a long series of matches. But for anyone who’s actually a fan of football, the sense of immersion is completely broken from the start.
To recap the story, a teenage Alex Hunter is released from an unnamed football club before attending a trial game and being signed up by a Premier League club of the player’s choosing. He’s joined by his also released childhood friend Gareth Walker at said club, and a playing time rivalry begins. Hunter falls down the pecking order after Tottenham’s Harry Kane or PSG’s Angel Di Maria is also signed for your chosen club. You’re then loaned out to one of 3 Championship (the second-tier of English football) teams, to recapture your goal-scoring form, being reunited with Danny Williams, a rival from the opening trial game. This loan is a success, with Hunter returning while an antagonistic Walker forces a move to your club’s biggest rivals (in my save, he went directly from Liverpool to Manchester United, which hasn’t happened since 1938). The rivalry reaches boiling point as both teams reach the FA Cup final, to finish the first season. In the second instalment, Hunter receives a speculative approach from Real Madrid which turns out to be fake, and after already handing in his transfer request, is shipped out to MLS team LA Galaxy. After moving to America, Hunter is reconnected with his absent father and finds out about his “secret” half-sister Kim Hunter, who turns out to also be a soccer prodigy, about to make her debut for the US Women’s National Team. Returning to Europe, Danny Williams is now at your former English club, whilst you get the choice of Bayern Munich, PSG or Atlético Madrid for the second half of the season, before a knee injury takes Hunter out of contention. Shifting perspective to Williams, whose career, despite being a Premier League level player, is somehow at risk of falling apart. Recovering from injury, Hunter’s favourite coach’s job hangs in the balance dependent on winning silverware, a league or a cup, to end this season. 
In the final instalment, Real Madrid’s interest in Hunter finally materialises, and he joins his fifth team in three seasons, the kind of journeyman career you can only dream of. This game splits perspective between Hunter at Real Madrid, Williams at your chosen English club, and Kim Hunter with the US team as they approach the FIFA Women’s World Cup 2019. The season progresses at a steady pace, suggesting to the player to shift between the characters at scheduled intervals, and culminating with Hunter and Williams facing each other in the UEFA Champion’s League final (added to this instalment of FIFA after the expiry of an exclusivity agreement with Konami’s rival Pro Evolution Soccer series), whilst Kim Hunter reaches the World Cup final.
Unlike playing your own “Career” mode with your chosen club as a created player or manager in FIFA, or the more organisational immersion of a Football Manager, “The Journey” manufactures struggle, regardless of your training performances or results on the pitch, with storyline sections don’t recognise what is going on. Hunter is punished and taken out of the starting line-up, because of insinuated off-field distractions, irrespective of the results you’re getting. You’re even set targets during matches, urged to “Break the Deadlock” though you could already be 3–0 up.
Story-line transfers are ignored across the instalments, as the realism of updated rosters is then blended with “The Journey”’s fictional inclusions. Harry Kane, whose transfer forms a key moment of the initial game’s story, returns to the Tottenham’s line-up in 18 and 19, a sadly unexplained change to reality, that’s reminiscent of 08–09 Robbie Keane, and something that’s never mentioned or commented on.
In fact, the soccer world’s reality is often stranger than the fiction in these games — with 19’s focus on Real Madrid, developer EA Vancouver were surely disappointed when not only Cristiano Ronaldo departed for Juventus before the game’s release, but head coach and former “Galactico” Zinedine Zidane resigned, both leaving after winning the club a third successive Champion’s League trophy. Zidane was to be replaced with then-Spain manager Julien Lopetegui, announced ahead of the 2018 World Cup. On receiving this news the Spanish football association immediately dismissed Lopetegui, whose reign at Real Madrid in turn then felt fated to be calamatious, he was sacked after barely more than 2 months, and eventually Zidane returned.
Lopetegui appears on the side of pitch during your games for Real Madrid throughout “The Journey”, becoming a curious ghost in this simulation, emblematic of a brief rudderless chapter in Real Madrid’s history that they’d sooner forget. That it wasn’t updated out of the game, ultimately offers a somewhat more interesting fantasy; what if he was given a bit more time? what if he had a new talismanic attacking player like Alex Hunter to replace the departed Ronaldo? It’s a what if scenatio that would definitely lead you down more rabbit holes than the narrative that you play through.
“The Journey” does attempt to give the narrative cut-scenes weight, hinting at text box choices having significant consequence, with icons that appear in the top corner of the screen to reflect that what you are seeing is the result of something you decided, or a goal you scored, before. But when the story’s major beats follow a set structure, it’s hard to see beyond these choices being anything other than narrative illusion, as crucially you never lose the opportunity to play simulated games of football in between those cutscenes.
What these illusions hint at is the idea of a sports simulation game with modular story design, where a player’s ongoing sporting career could be intertwined with dramatic role-playing. Whilst “The Journey” falls short of this idea, the fully realised alternative, a procedurally generated narrative drama truly reflective of in-game performance, would become a very attractive and game-altering prospect beyond what has been offered, a scripted and almost on-rails single player experience.
Sports simulation games need something to elevate the standard of Career Modes, that remain persistent favourites, and which already rely on an extension of role-playing imagination from the player. Yet, it’s hardly the top priority for a company already benefiting from producing incrementally changed yearly instalments, against the backdrop of extreme financial incentives from micro-transaction heavy modes like “Ultimate Team” and in turn its own e-sports popularity.
A series like Football Manager endures with its own version of addictive sports role-playing within a spreadsheet-like user interface and heavy data management. Every iteration revolves around an ongoing and somewhat believable alternate reality, persisting in spite of the player, that you can compare and contrast with how the real football seasons play out. Assembling your dream line-up, running your favourite club to success, or taking relative minnows up the football pyramid, all work mostly within the rules of the real footballing world. And whilst FIFA has included more attempts at verisimilitude with the likes of visualised transfer and contract negotiations, there’s a lack of depth or accurate representation for the life of a player or manager, that fails to compare with what Football Manager offers, at least within a text-only basis.
Not that Football Manager is itself without problems; there’s an over-reliance on stock responses as you deal with the media or team talks, that whilst accurately echoing football’s clichéd platitudes, quickly grow stale, there’s also the “dynamics” and “morale” systems, an illuminating injection of psychology, that can also cause your team to swing from undefeated to unbreakable losing streaks in a matter of minutes, amongst many other niggling issues that carry over from instalment to instalment, in spite of developer Sports Interactive’s constant refinement of the series.
My desired end point would be some combination of the two games, a dose of Football Manager’s realism interjected into FIFA’s satisfyingly solid and robust gameplay, and an evolved career progression that can integrate unique narratives. Perhaps some addition of an open world, where series like NBA 2K have begun to dabble, combined with a more rounded narrative like the charmingly laid-back Nintendo Switch title Golf Story by Sidebar Games. Konami’s Pro Evolution Soccer, with the team-building “Master League” mode, already offers something of a progressive and league climbing career journey, though within a fictionalised alternate structure and lacking a rounded narrative touch, relying on the player to fill in any role-playing gaps.
EA shifted focus after “The Journey”, replacing it with “Volta Football” in the latest two instalments, an updated take on their previous FIFA Street spin-offs, including its own narrative adventure for a player-created character, but crucially drifting away from the tight structure and reality of a football season. Whilst I’m sure it’s engaging, taking the game further from the “pure” simulation of professional football is a disappointing blow for fans who want to create their desired alternate reality.
Here’s hoping that with the new console generation, and an already impressive visual upgrade, EA can return their focus to a realistic career-story mode that gives the player real choices and a reflective, personalised, progressive experience. | https://medium.com/@naterobt/fifa-19s-extreme-illusion-of-narrative-consequence-b111491e40ce | ['Nathan Roberts'] | 2021-01-02 11:48:20.020000+00:00 | ['Games', 'Football', 'Football Manager', 'Game Design', 'FIFA'] |
Have You Thought of These Budget Cuts? | Saving money is something the majority of people could do better. There are always additional areas in life we could cut back on or spend more wisely. However, it’s very easy to turn a blind eye to unnecessary costs that add up quickly. Here are a few budget cuts that could help you find extra dollars each month and reach your financial goals faster.
Photo by Artificial Photography on Unsplash
Subscriptions
Tim Denning gets really fired up about subscriptions, so I’ll let him go off in this post . Have you really taken the time to look at everything you’re subscribed to? Did you know that the cost of a cable subscription is probably cheaper, and encompasses way more than what you’re paying for with individual subscriptions? I have Hulu Live right now for one show, and I realize it’s an absolute waste of money. If you just can’t give up a certain show, meal from Hello Fresh, etc, understand you have more options than just saying goodbye.
Could you share it with someone? Does that Patreon podcast go live on iTunes a couple of days after subscribers hear it? Can you recreate that boxed meal at the grocery store?
Photo by Charles Deluvio on Unsplash
Pet toys
Hot take for pet owners out there: most of what you buy for your dog is actually for you.
My dog has a bone, a ball, and a rope. He becomes the house hype man every night at exactly 9 pm. After zooming around and barking, he’ll bring me his ball and ask me to throw it a few times. About 20 minutes later, he’s exhausted and goes back to sleep.
Once a week, he’ll bring the rope toy instead of the ball. The bone is reserved for dinner time. After trying to beg for our food and getting rejected, he gives out a heavy sigh and chews on his bone for a couple of minutes. Anything else that I’ve ever bought him just gets destroyed or takes up space. He’d much rather throw an old sock around than have a new toy.
Next time you’re thinking of going to Petco because your cat looks like she had a rough day, just give her some snuggles and call it good. Your animal is a minimalist. They would rather have you partaking in the FIRE movement so you can retire early and pay more attention to them.
Plants
Everyone loves a good house plant. Did you know if you have one, you can have dozens? Here is a short and sweet article explaining plant propagation. You can even start your first plant for free by asking a friend for clippings off of one of their houseplants. Propagations make excellent gifts, too. Imagine if someone grew you your birthday gift!
Photo by Micheile Henderson on Unsplash
Plant food
Continuing with the plant theme, I waisted some money on plant food this summer. My tomatoes started experiencing bud rot, and I was devastated. I ran to my local hardware store and spent $11 on an organic bud rot food, which ended up being mostly chicken poop.
I told a friend about my garden struggle. She said “oh, I just crush up eggshells and Tums and mix them into the soil. Works like a charm.”
Eggshells and Tums, two things that were already in my house and also didn’t smell like chicken poop.
Lesson learned. Always research home remedies for garden care before spending money.
Containers
I get the appeal of a perfectly perfect set of mason jars, filled with dry goods, labeled symmetrically, lined up in your pantry. It just isn’t worth the money when you toss usable containers in the recycle bin every week!
Buy the same pasta sauce for a month if you need the containers to match, but please don’t overlook the packaging you’re already bringing into the house. It’s best for the environment and your wallet to save and reuse jars and containers and reuse them in the kitchen.
Electricity
Did you know electricity is cheaper during off-peak hours? Doing your laundry at night can actually save you a few cents per load. Think about it, when electricity is in high demand, companies are able to charge more. Chores before the sun comes up or after you put the kids to bed is an easy way to pay yourself.
Photo by Michael Longmire on Unsplash
Bank fees
When I had some extra time on my hands at the beginning of quarantine, I was casually combing through my bank statements. I realized that a new account of mine was charging $5/month in “bank fees”. Never ever pay for these. Demand a different option from your bank, or move your money somewhere else.
I’ve heard that sometimes it’s as simple as calling and saying “I want this removed”.
I chatted with an associate from Bank of America using their online instant messenger service. I just asked what other options I had, and I learned that because of the direct deposits I was making, I qualified for an account with a better interest rate and no bank fee. They switched my account over, didn’t even have to change any account numbers, and all of my automatic withdrawals and direct deposits kept functioning as normal.
Remember, banks make money by investing the money you are holding in them. They’re good, they don’t need your $5 a month. Don’t let them get away with that.
Every day, we are confronted with hundreds of ways to not spend money. Each time you avoid temptation from an ad, walk right past the snack aisle, or say no to Starbucks, you are paying yourself. I hope these ideas add funds to your wallet and progress your financial goals. | https://themakingofamillionaire.com/have-you-thought-of-these-budget-cuts-b2d1c1e81dde | [] | 2020-09-02 18:13:31.841000+00:00 | ['Money', 'Saving Money', 'Budgeting', 'Spending Habits', 'Finance'] |
Avoid unnecessary function indirection | When a function calls two or more functions and processes each output before sending it over to the next, essentially you are composing logic. When a complex logic becomes difficult to manage and read, one may choose to break it down into smaller functions and call them from the main function. I have seen in the code base of the developers where a function merely does little to nothing before calling the function and returning an output. Makes me wonder is there a need for a function accepting arguments and merely passing to another function?
Why would a person do something like that? Is there a real advantage?
Let’s take a look at an example of function indirection. Suppose we want to render products with categories on the page and display the first name of a user. We created the helper functions that return the needed information to render on the page. To achieve this, we are directing our execution of logic and the reading order from the OnPageLoad function to these helper functions. Each of the function is just a wrapper doing as little as preprocessing the input and returning the response. Do we at all need them?
We don’t. So how we can improve our code so that while reading the flow of execution a reader does not have to jump between the lines in the modules and keep scrolling up and down forcing us to spend time on a worthless exercise.
So what are we missing here? It is not treating a function as a first-class citizen of the language is the actual problem here. Not all the language support functions as first-class, but Javascript does.
What does a first-class citizen mean? Is there a second-class citizen? Nah, it is just a metaphor.
A function in Javascript is an object and when it is assigned to a variable, the variable carries the reference to wherever it is passed or could be invoked later, did you forgot callbacks? When you have an opportunity to make the codebase readable and light you should. The more the lines of code, the more is the surface area for the bugs. Keep in mind!
As you can see the equivalent line of code below each function shows how to treat functions as first-class citizens which removes unnecessary indirection. After this refactoring, do you need these helper functions at all? You don’t unless you want to rename the function name. The implementation in OnPageLoad doesn’t change at all. | https://medium.com/@afiz-momin/avoid-unnecessary-function-indirection-cb2d0a509d9b | ['Afiz Momin'] | 2020-12-24 06:44:39.914000+00:00 | ['Function', 'Clean Code', 'First Class Function', 'JavaScript'] |
Revised and expanded: Reflections on Tony Hsieh and what it means to have a vision for your community | The first part of this essay is a segment of one I published last week, called “
Reflections and gratitude for Tony Hsieh, vision, and persistence.” Given the accounts of his last year that have come out since I published that, I wrote a coda, as it were, that tried to put those earlier statements in the context of what appears to be a more tragic story. In the process, I also reconnected with a crucial part of what makes the Downtown Project — and most of the successful community revitalization efforts I’ve seen actually effective. Hint: it’s not a Great Man.
Since Medium sort of penalizes you for going too long form, I’m going to direct you to the earlier article, and then come back and read this one as it was intended, a follow-up to what I wrote before. Thanks.
I never met Tony Hseih. That was on purpose.
The Downtown Project in Las Vegas crossed my conscience at a moment when I had become frustrated, embittered by the failures of my professions. As an urban planner and economic developer with a long history in downtown and community revitalization, I had hit a dead end, concluding that the tools I had learned and used to make communities better, healthier, more resilient, had failed. My professional belief system, as it were, was falling apart. And I had nothing to replace it….
{Read the rest of that essay here }
Perhaps that’s the challenge of the next DTLV: fully building and integrating that community of love within places that enable that kind of community, perhaps a new/old type of community, to happen.
Thanks, Tony. Godspeed.
Container Park, Downtown Project
____
Additional edit, December 9:
As more information about Tony’s last few months has come out, I’m finding myself trying to reconcile the tragedy of his experience with what I wrote above. I don’t like to edit my writing after it has been published — I feel like that’s dishonest, in some sense — but I need to add a few things.
First, it’s hard now not to see the Project and Tony’s vision as at least in part the product of his… whatever it was. His need for people, or for distraction, or for a Something that transcends our usual communites. I’m no psychologist. And some of the trappings of the Project — the drinking/party culture — wasn’t my gig, as a middle-aged Midwesterner with two kids and a pretty traditional life back home that I wasn’t trying to escape from. What I resonated to was the energy, the intent that lay under the EDM and the Giant Beer Pong and the party scene. Those were’t the community activities that I personally would have been looking for, but the intensive effort to build a real community, a different, more intensively-interrelated community…that was something that I didn’t see in the dozens or hundreds of efforts to indirectly, ineffectively, “build community” through fancy streetscapes and parks and incentive programs.
Second, Tony’s death rightly or wrongly now becomes linked with the handful of suicides that took place within and adjacent to the Downtown Project’s tech startup world during the first five years of the Project. We have learned a lot more about the mental health of entrepreneurs since then, due in part to the bravery and honesty of people like Brad Feld. And I learned since then myself about how dark those periods feel when you are in them. It’s pointless, and yet also necessary, to say that self-destructive behavior, intentional or unintentional, is horrific and necessitates soul-searching. How does one build honest care for others’ mental health into a community, a network? Into startups and entrepreneurship? We understand more than ever how crucial that is, how peoples’ mental and emotional state affects everything else (thank you 2020, I think). But we have to ask ourselves next: what does that look like in a resilient community?
Finally, I was uncomfortable with “love” and “quasi-family” in the conclusion I published before, even as I was writing it. At the time I chalked my reaction up to Gen X cynicism and decided I was going to push myself in that direction (part of my ongoing personal struggle to write in Non-Geek.) Now I think I was reacting to the fact that those are actually not the right words. A family can be a dysfunctional mess of baggage, and “love” is too vague, too blurry, too many meanings. What’s the better way to describe this?
At one tough point in the DTP’s history, leadership shifted the language they used from talking about “community” to “connectedness” — I and others were told that this was because the general public was regarding them as government, asking them to install garbage cans and fix public infrastructure problems. I think “connectedness” was the better term — the term that better fit what all of these people were working to build. The point was to create an environment — physical and social alike — that enabled people to find and build connection with people that they might not encounter otherwise. Tony articulated that in his book, Delivering Happiness, and that was clearly a guiding principle. The result: even an outlier like me could come into the community and be welcomed, connect, be energized, build relationships. If you’re from the eastern US, you know how incredibly hard it can be to enter and be immediately welcomed in most environments. And our nice streetscapes and parks don’t necessarily make that easier. Downtown Las Vegas was the first place in all of my life where I felt like people geniunely welcomed me, wanted to get to know me at a level beyond how my business might help theirs. And I will never stop being thankful for that.
But here’s most important part that was missing in what I wrote before. Although Tony was the face, the spearhead, much of the money and perhaps even the wellspring of the Downtown Project Vision, it wasn’t his vision alone. Hundreds of people bought into it, invested their time and hearts and, yes, money into building it. Some of those people are still there, some have moved on to other places. But I can tell you from my own experience elsewhere that being part of a shared vision remakes your way of living, changes how you interact with the world from then on. And I’ve seen that in the people who passed through the Downtown Project.
My consistent message in all of the years that I wrote and talked about the Downtown Project was that if we were looking at Tony, we were missing the bigger and (from my perspective) the much more interesting part of the story: how those hundreds of people shaped the community — and the Vision, as it played out in reality — by their own actions.
We too often talk about community revitalization, urban planning, grand visions and big projects, according to the Great Man theory of history. Dan Gilbert in Detroit. Richard M. Daley in Chicago. Frederick Olmstead in Central Park. We make it about Them, as though they had single-handedly wrought the iron and carved the stone. That not only makes the story boring, but it makes it false.
A community vision is profoundly different from a personal vision. Even if the vision for the community starts with one person, it will pass through hundreds of hands, most of whom will add a shape or a flavor or a new element or a nuance along the way. And if the first person, through whatever control they have, insists on the vision staying under glass, exactly the way they conceived it, those are the visions that become fossilized relics.
So, ironically, perhaps Tony’s greatest gift was to place his vision, the one that he cared for so much, in the hands of the community that surrounded it. And by doing that, giving up control over it, opening the possibility that it would not go where he intended. Maybe he didn’t imagine it turning out as it has. Maybe he didn’t realize all of the ways it could go awry. But when we make a gift, we don’t really know what the recipient is going to do with it anyways. And often the people to whom we gave the gift don’t use it the way we intended.
But sometimes, sometimes, the gifts we give echo far beyond where we imagined.
I think that will be the case for the Downtown Project, and indeed, for many people, it already has.
So, thanks Tony. Godspeed. | https://medium.com/@dellarucker/revised-and-expanded-reflections-on-tony-hsieh-and-what-it-means-to-have-a-vision-for-your-c39adf628eaf | ['Della Rucker'] | 2020-12-10 19:38:25.009000+00:00 | ['Riptonyhsieh', 'Urban Planning', 'Urbanism', 'Vision'] |
LocalBitcoins’ Q2.2020 in a nutshell — Infographic | See also:
Вторая четверть 2020 для LocalBitcoins
Segundo tremestre de LocalBitcoins en el 2020
It is still summer in the Northern Hemisphere, but this year has been action-packed for Bitcoiners so far and we feel like it is passing so quickly! Let’s now take a moment to look back at the highlights of the second quarter of 2020 in LocalBitcoins. See the most important Q1.2020 numbers here.
Between April and June, approximately 2.8 million trades were successfully completed in LocalBitcoins — hurray, and keep on keeping on! During that time, we had a bit more than 5000 resolved disputes with an average resolution time of 36 hours.
Those completed trades moved an average of 19,800 BTC per month, which shows an increase in comparison with Q1.2020. Our users have also sent and received nearly 800,000 external transactions each month in Q2.2020, excluding transactions between LocalBitcoins wallets!
Bank transfers were the most popular payment methods of this period, followed by the e-wallet Qiwi, widely used in Russia. Bank transfers are a very convenient payment method for buyers and sellers and involve relatively low risk. But LocalBitcoins also supports many other payment methods to make Bitcoin trading easily accessible to everyone, including those who don’t have or don’t want to use a bank account for trading.
The list of top trading countries in LocalBitcoins during Q2.2020 shows that peer-to-peer Bitcoin trading remains strong in Russia, Venezuela, and Colombia. The honorable mentions of Q2.2020 go to Argentina that made it to the top 12 list this time, and to Nigeria that also ranked higher in comparison with Q1.2020!
1.Russia
2.Venezuela
3.Colombia
4. United Kingdom
5. United States
6. Nigeria
7. China
8. India
9. South Africa
10. Peru
11. Spain
12. Argentina | https://blog.localbitcoins.com/localbitcoins-q2-2020-in-a-nutshell-infographic-388f40f24296 | [] | 2020-07-31 09:45:27.658000+00:00 | ['Bitcoin', 'Bitcoin Wallet', 'Cryptocurrency', 'Bitcoin Exchange', 'Buy Bitcoins'] |
Interviewed by Finnemore Consulting | December 2020: I sat down (virtually) with another stalwart of the EdTech sector, Nick Finnemore of Finnemore Consulting, to discuss the MIS market, EdTech payments, the rise of the Big 8 and what inspired me to start Omega Pegasus…
Nick spent many a year in the upper echelons of Capita (SIMS), where he and I interacted far too many times. We sat down in December 2020 to talk about the MIS Challenge (Stats), and what inspired me to start the project. We chat about the MIS sector in general and what the future looks like, including the rise of the ‘Big 8’ corporations who are hoovering up all the MIS and other vendors in the market.
We also talk about Payments within EdTech. Most interestingly of all, and an important note, is this was filmed just DAYS before the big announcement of Montegu purchasing Capita SIMS (their whole education unit in fact) AND the investment into ParentPay! Which is really interesting to see just how much was predicted!
Another article covers this news in more detail: SIMS, NuMIS and The Big 8
The interview includes exclusive details of some other projects also in the mix!
Full interview below: | https://medium.com/@grazreed/interviewed-by-finnemore-consulting-f6ac96c17eee | ['Graham Reed'] | 2021-03-08 14:18:46.807000+00:00 | ['Mis', 'Edtech', 'Interview'] |
The Art of Problem Solving | The Art of Problem Solving
Problems start with asking questions, interest in unknown phenomena, or simply curiosity. They are general and inclusive at first, but become exclusive when the questioner adds her own details and constraints, and forming a special case.
Example
Image Classification is a general problem, instantiated in many sub-problems considering different situations. In its simplest form, such as Face Recognition, there are many disjoint labels, and each image could only accept one of them. However, in cases like Scene Understanding, images can be labeled with many tags, called Multi-label Classification. Additionally, if we want to classify product images into Amazon hierarchy, the label set is very large, and structured in a hierarchy; In this case, we call the problem Extreme Classification. There may be many more challenges in real world problems; the dataset may be imbalanced or we may have missing values. Hence, adding constraints to the general problem results in different individual cases.
Fig 1. Sub-Problems. Photo by Hello I'm Nik on Unsplash.
Solving a specific problem starts with getting to know the general problem it is derived from, its different aspects, and what others have done till now, by studying the prior works and literature. It should be followed by taking enough time to think and let your mind process what you have found about different ideas and their pros and cons. And finally, you may come with a new idea for your own specific problem.
Example
Recently our company decided to have an automatic Description Generator that takes the product title, image, and tags as input, and writes a description, describing the product. This could be a helpful tool for shop owners without copy-writing skills. That time, I knew that Natural Language Generation (NLG) is pacing fast, but I didn’t exactly know how NLG models work. So I decided to start with the literature and read GPT papers, where I found out how to fine-tune a pre-trained model on a new dataset. Since we had experience working with eCommerce data, we easily collected a large dataset, cleaned it with standard NLP models, and used it for fine-tuning. We have also trained a multi-label image classifier to predict some labels for the image, and feed them together with the title and tags to the NLG model, which highly boost our results when we have no text data. | https://towardsdatascience.com/the-art-of-problem-solving-36f2b7e16b61 | ['Ali Osia'] | 2021-09-13 14:10:52.522000+00:00 | ['Research', 'Academia', 'Problem Solving', 'Ideas', 'Problems'] |
Make Yourself a Web Page Widget to Monitor Your Crypto Portfolio Value (Only Very Simple PHP /HTML Skill Required) | This article is purposefully super-simplistic, but I was thinking that there are probably a lot of people out there who are into crypto, have a web site somewhere, and would like to put up a private (or, hell, maybe public) page somewhere to display your crypto-portfolio value in real-time. Yet, you may not know how to code this up yourself.
This is massively easy to do by using the free Coinmarketcap API. When you add some styling to it, there’s no limit to how fancy you can make your portfolio widget, and/or how much extra math and calculations you can add to it.
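As one example of that extra math (this bit is just my own sketch, not something the widget below does), you could store what you originally paid next to each balance and let PHP work out your profit. The `'cost'` field name is my own invention; call it whatever you like:

```php
<?php
// Sketch only: record your cost basis (total USD paid) next to each balance,
// then subtract it from the current value to get profit.
$myCoins = array(
    'BTC' => array('balance' => 0.0093, 'cost' => 95.00),
);

$thisCoinPrice = 13000.00; // pretend this came back from the API

$value  = $myCoins['BTC']['balance'] * $thisCoinPrice;
$profit = $value - $myCoins['BTC']['cost'];

echo 'Profit: $' . number_format($profit, 2); // Profit: $25.90
```

If you like that idea, you'd just fold the subtraction into the table loop and add a "Profit" column.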
For mine, I basically just did a little Bootstrappy table & have it set to output my own meager “portfolio.” On the page where I have it, it renders out like so:
Man, so easy now! I was getting sick of updating a spreadsheet with coin values whenever I wanted to know how ridiculously rich I’ve become. Hasn’t happened yet… lol. But, I’m having a good time.
All you need is a web site to screw around with — just any normal server that will run Wordpress, for example. In the sample code below, I’ve taken out the bootstrap stuff, so it should simply render a plain old HTML table. You’ll probably want to add custom classes or other cool stuff (e.g., making negative percentages red, and positive ones green).
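On that red/green idea: here's one quick way you might do it. (Heads up: `formatChange` and the `gain`/`loss` class names are just placeholders I invented; point them at whatever CSS you've got.)

```php
<?php
// Wraps a percent-change value in a span whose CSS class depends on its
// sign, so your stylesheet can color .gain green and .loss red.
function formatChange($percent) {
    $class = ((float)$percent >= 0) ? 'gain' : 'loss';
    return '<span class="' . $class . '">' . $percent . '%</span>';
}

echo formatChange('3.25');  // <span class="gain">3.25%</span>
echo formatChange('-1.10'); // <span class="loss">-1.10%</span>
```

Then, anywhere the table echoes a percent_change value, wrap it in formatChange() instead of printing it raw.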
To begin, you’re going to need to tell the code how much of each currency you own. I’ve used an array to hold that — called $myCoins in the code below. Hopefully, you can see how to customize that for yourself, using the symbols of your own currency, and putting in the balances where those go. Note that, anytime you buy more crypto, and/or change your holdings, you’ll need to update your balance in this $myCoins part of your script.
Anyway, here’s the basic code, and I’ll include some more comments below. :-)
<?php
$myCoins = array(
    'BTC'   => array( 'balance' => 0.0093 ),
    'ETH'   => array( 'balance' => 0.235724420 ),
    'XRB'   => array( 'balance' => 2.524402070 ),
    'MIOTA' => array( 'balance' => 33.000000000 ),
    'XRP'   => array( 'balance' => 49.000000000 ),
    'XLM'   => array( 'balance' => 105.894000000 ),
    'TRX'   => array( 'balance' => 599.400000000 )
);

// ok now hit the api...
$coinbasePublicAPI = 'https://api.coinmarketcap.com/v1/ticker/';
$coinData = file_get_contents($coinbasePublicAPI);
$coinData = json_decode($coinData, true);

echo '<table>';
echo '<tr>';
echo '<td>NAME</td>';
echo '<td>SYMBOL</td>';
echo '<td>PRICE</td>';
echo '<td>HOLDINGS</td>';
echo '<td>VALUE</td>';
echo '<td>1hr</td>';
echo '<td>24hr</td>';
echo '<td>7day</td>';
echo '</tr>';

$numCoinbaseCoins = sizeof($coinData);
$portfolioValue = 0;

for ($xx = 0; $xx < $numCoinbaseCoins; $xx++) {

    // this part compares your coins to the data...
    $thisCoinSymbol = $coinData[$xx]['symbol'];

    // if you have it, this var is true...
    $coinHeld = array_key_exists($thisCoinSymbol, $myCoins);

    // comment the next line out & you will see ALL of the coins
    // returned (not just the ones you own):
    if ( !$coinHeld ) { continue; }

    echo '<tr>';

    // name:
    echo '<td>' . $coinData[$xx]['name'] . '</td>';

    // symbol:
    echo '<td>' . $thisCoinSymbol . '</td>';

    // price:
    $thisCoinPrice = $coinData[$xx]['price_usd'];
    echo '<td>$' . number_format($thisCoinPrice, 2) . '</td>';

    // holdings:
    echo '<td>';
    if ($coinHeld) {
        $myBalance_units = $myCoins[$thisCoinSymbol]['balance'];
        echo number_format($myBalance_units, 10);
    }
    echo '</td>';

    // track running total value of coins:
    if ($coinHeld) {
        $myBalance_USD = $myBalance_units * $thisCoinPrice;
        $portfolioValue += $myBalance_USD;
    }

    // value:
    echo '<td>$' . number_format($myBalance_USD, 2) . '</td>';

    // 1h market change:
    echo '<td>' . $coinData[$xx]['percent_change_1h'] . '%</td>';

    // 24h market change:
    echo '<td>' . $coinData[$xx]['percent_change_24h'] . '%</td>';

    // 7d market change:
    echo '<td>' . $coinData[$xx]['percent_change_7d'] . '%</td>';

    echo '</tr>';
}

echo '<tr>';
echo '<td colspan="4"><strong>TOTAL</strong></td>';
echo '<td colspan="4"><strong>$' . number_format($portfolioValue, 2) . '</strong></td>';
echo '</tr>';
echo '</table>';
?>
… and that’s all you need. Just customize that initial $myCoins array, and it should render your table. In all likelihood, your portfolio is more impressive than mine, as I’m pretty new to all of this and am still kind of learning about crypto investing.
Notes
The above script hits the Coinmarketcap.com API. The API methods and other notes are here: https://coinmarketcap.com/api/
They ask that you hit the API no more than 10 times per minute, so, maybe don’t put this on a web site that gets crazy traffic 24/7.
The above routine hits the main API just once, and so it only pulls in the top 100 coins. If you’re investing in a coin that’s way down the list, you’ll need to customize the above script to iterate through multiple hits to the API, which can be done by adding a “start” parameter to the end of the URL, as in: https://api.coinmarketcap.com/v1/ticker/?start=100 You’d want to set up a loop on the API hit and build out a larger dataset from the results before parsing it all out to the screen.
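The pagination loop described above can be sketched as follows. This is written in JavaScript for illustration rather than PHP; the fetch function is injected so the paging logic stays testable, and the page size of 100 matches the API behavior described in the text.

```javascript
// Build the full coin list by paging through the ticker endpoint
// 100 coins at a time using the "start" parameter, e.g.:
//   https://api.coinmarketcap.com/v1/ticker/?start=100
function fetchAllCoins(fetchPage, totalCoins) {
  const pageSize = 100;
  let allCoins = [];
  for (let start = 0; start < totalCoins; start += pageSize) {
    const url = 'https://api.coinmarketcap.com/v1/ticker/?start=' + start;
    const page = fetchPage(url); // returns a parsed array of coin objects
    allCoins = allCoins.concat(page);
  }
  return allCoins;
}
```

Remember the 10-requests-per-minute guideline from the API docs when choosing how many coins (and therefore pages) to pull.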
OTOH, I suppose that, to get them ALL (I think they have about 1,500 coins on there), you would need to hit their API more than 10x, and so it’s not a good source for doing any huge development or projects. Apparently, they’re coming out with a paid API for stuff like that. See the site for details.
If you run Joomla or Wordpress, see another piece I’ve posted on how to run PHP in a module or widget.
Besides adding styling, etc., you may want to build out your coin listing to include richer information. For example, instead of the simple array I showed, maybe yours could look like:
$myCoins = array(
'BTC' => array ( 'balance' => 0.0093, 'wallet' => 'Coinbase', 'notes' => 'whatever', 'buy-in-price' => '8005.22' ),
'ETH' => array ( 'balance' => 0.235724420, 'wallet' => 'Trezor', 'notes' => 'whatever', 'buy-in-price' => '555.88' ),
'XRB' => array ( 'balance' => 2.524402070, 'wallet' => 'Binance', 'notes' => 'whatever', 'buy-in-price' => '1.25' ),
'MIOTA' => array ('balance' => 33.000000000, 'wallet' => 'GDAX', 'notes' => 'whatever', 'buy-in-price' => '0.25' ),
'XRP' => array ( 'balance' => 49.000000000, 'wallet' => 'Kucoin', 'notes' => 'whatever', 'buy-in-price' => '1.25' ),
'XLM' => array ( 'balance' => 105.894000000, 'wallet' => 'Paper wallet', 'notes' => 'whatever', 'buy-in-price' => '2.50' ),
'TRX' => array ( 'balance' => 599.400000000, 'wallet' => 'Bittrex', 'notes' => 'whatever', 'buy-in-price' => '0.054' )
);
… and then your widget or report could be much more exciting. I actually like the idea of a little database application to track the balances instead of having to update the code anytime your balance changes. But, to me, that’s about as easy as anything else, and it’s fast … and of course, my balance is rather pathetic. But, the idea is that you’re not limited to just storing coin balances; you can store other info there, too, and use that to calculate and/or display results on your widget or financial report, or whatever you’re building.
Simplistic, I know… but a bit of fun, and hopefully helpful to a few people who’d like to pull some Coinmarketcap.com data into their site. Enjoy. :-) | https://medium.com/hackernoon/make-yourself-a-web-page-widget-to-monitor-your-crypto-portfolio-value-only-very-simple-php-html-c29cf8bc09b2 | ['Jim Dee'] | 2019-11-11 18:33:10.422000+00:00 | ['Cryptocurrency Investment', 'Ethereum', 'Cryptocurrency', 'Bitcoin', 'Investing'] |
2. List-Based Collections | -
2. 1. Array
Contiguous area of memory consisting of equal-size elements indexed by contiguous integers.
Adding values
You can add elements to the end of an array using the append method.
// create an empty array of integers
var numbers: [Int] = []

for i in 1...5 {
    numbers.append(i)
    print(numbers)
    // [1]
    // [1, 2]
    // [1, 2, 3]
    // [1, 2, 3, 4]
    // [1, 2, 3, 4, 5]
}

print(numbers)
// [1, 2, 3, 4, 5]
To insert an item into the array at a specified index, call the array’s insert(at:) method.
var numbers: [Int] = [1, 2, 3]

numbers.insert(0, at: 0) // numbers will be [0, 1, 2, 3]
numbers.insert(9, at: 1) // numbers will be [0, 9, 1, 2, 3]
You can also append another array using the += operator.
var numbers: [Int] = [1, 2, 3]

numbers += [4, 5, 6] // numbers will be [1, 2, 3, 4, 5, 6]

// or just one value
numbers += [7] // numbers will be [1, 2, 3, 4, 5, 6, 7]
Removing Values
To remove an item from a specific index call the remove(at:) method.
var numbers: [Int] = [1, 2, 3]

numbers.remove(at: 0) // numbers will be [2, 3]
Multi Dimensional Array
// Create a two-dimensional array with nested brackets.
var points: [[Int]] = [[10, 20], [30, 40]]

// Access all individual elements.
print(points[0][0])
print(points[0][1])
print(points[1][0])
print(points[1][1])

// append an item to one of the arrays
points[1].append(50)

print(points)
2. 2. Linked List
A linked list is a data structure that stores data as a chain of nodes, where each node holds a value and a pointer to the next node, so the nodes are connected in a single line.
The types of Linked List are as follows
- Singly Linked List
- Doubly Linked List
- Circular Linked List
Features of Linked List
The common characteristics of the basic list data structure are as follows.
- List data structures store data side by side; they do not prevent storing duplicate data.
- The basic principle of a linked list is to dynamically allocate node structures one at a time and link them together as needed.
public class Node {
    var value: String
    var next: Node?
    weak var previous: Node?

    init(value: String) {
        self.value = value
    }
}

public class LinkedList {
    fileprivate var head: Node?
    private var tail: Node?

    public var isEmpty: Bool {
        return head == nil
    }

    public var first: Node? {
        return head
    }

    public var last: Node? {
        return tail
    }

    public func append(value: String) {
        let newNode = Node(value: value)
        if let tailNode = tail {
            newNode.previous = tailNode
            tailNode.next = newNode
        } else {
            head = newNode
        }
        tail = newNode
    }

    public func nodeAt(index: Int) -> Node? {
        if index >= 0 {
            var node = head
            var i = index
            while node != nil {
                if i == 0 { return node }
                i -= 1
                node = node!.next
            }
        }
        return nil
    }

    public func remove(node: Node) -> String {
        let prev = node.previous
        let next = node.next

        if let prev = prev {
            prev.next = next
        } else {
            head = next
        }
        next?.previous = prev

        if next == nil {
            tail = prev
        }

        node.previous = nil
        node.next = nil

        return node.value
    }

    public func removeAll() {
        head = nil
        tail = nil
    }

    public var description: String {
        var text = "["
        var node = head
        while node != nil {
            text += "\(node!.value)"
            node = node!.next
            if node != nil { text += ", " }
        }
        return text + "]"
    }
}
2. 3. Stack
The element inserted last is removed and processed first.
Push and pop happen at the same end: the back of the array.
LIFO: Last In First Out
Applications: function calls, recursive calls
struct Stack {
    fileprivate var array: [String] = []

    mutating func push(_ element: String) {
        array.append(element)
    }

    mutating func pop() -> String? {
        return array.popLast()
    }

    func peek() -> String? {
        return array.last
    }
}
2. 4. Queue/Dequeue
The element inserted first is removed and processed first.
Insertion and deletion happen at opposite ends.
Enqueue: at the back of the array
Dequeue: from the front of the array
FIFO: First In First Out
public struct Queue {
    fileprivate var list = LinkedList<Int>()

    public mutating func enqueue(_ element: Int) {
        list.append(value: element)
    }

    public mutating func dequeue() -> Int? {
        guard !list.isEmpty, let element = list.first else { return nil }
        list.remove(node: element)
        return element.value
    }

    public func peek() -> Int? {
        return list.first?.value
    }

    public func description() -> String {
        var result = "["
        var i = 0
        while (true) {
            let node = list.nodeAt(index: i)!
            result = result + String(node.value)
            if (node.next == nil) { break } else { result += "," }
            i += 1
        }
        return result + "]"
    }
}
2. 5. Priority Queue
Insert (x): Insert new element x (Enqueue)
func insert(_ x: Int, _ n: Int) {
    data.append(x)
    var size = n + 1
    while size > 1 && data[size / 2] < data[size] {
        let temp = data[size]
        data[size] = data[size / 2]
        data[size / 2] = temp
        size = size / 2
    }
}
Extract_Max (): Deletes and returns the element with the highest priority value (Dequeue) | https://medium.com/journey-to-tech-company/2-list-based-collections-fa813f981ffe | ['Yohan Hyunsung Yi'] | 2018-05-16 21:20:09.508000+00:00 | ['Swift', 'Programming', 'iOS', 'Swift Programming', 'Algorithms'] |
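The Extract_Max operation itself is not shown above; the following is a hypothetical sketch, written in JavaScript for illustration. It assumes the same layout as the insert() routine: a 1-indexed max-heap stored in a `data` array whose slot at index 0 is an unused placeholder.

```javascript
// Max-heap extract-max sketch (hypothetical; mirrors the 1-indexed
// `data` array used by insert(), where index 0 is an unused placeholder).
function extractMax(data) {
  const n = data.length - 1;   // number of real elements
  if (n < 1) return undefined; // heap is empty
  const max = data[1];         // root holds the maximum
  data[1] = data[n];           // move the last element to the root
  data.pop();
  // Sift the new root down until the heap property is restored.
  let i = 1;
  const size = data.length - 1;
  while (2 * i <= size) {
    let child = 2 * i; // left child
    if (child + 1 <= size && data[child + 1] > data[child]) {
      child += 1;      // pick the larger child
    }
    if (data[i] >= data[child]) break;
    [data[i], data[child]] = [data[child], data[i]];
    i = child;
  }
  return max;
}
```

Insert sifts new elements up; Extract_Max removes the root and sifts the replacement down, so both run in O(log n).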
Open AIs Revolutionary New NLP Model GPT-3 | We have recently heard that GPT-3, a Natural Language Processing (NLP) model, has been made available by OpenAI and is touted to be an invention bigger than blockchain. The extent of this transformation is such that this deep learning model can make capabilities once shown only in sci-fi accessible enough to be implemented by college students.
Technically speaking, GPT-3 is a highly adaptable algorithm that generates models, but let’s not get bogged down by semantics. GPT-3 has been defined as:
a task-agnostic Natural Language Processing model that requires minimal tuning.
So it can do specific tasks like text generation and qualitative queries with minimal user adjustment. Historically, models have been either task-agnostic or have required only minimal tuning, but GPT-3 has both properties and hence is quick to train and easy to tune. NLP is a branch of AI that enables interaction between computers and humans using natural language. An NLP model achieves this by reading, deciphering, understanding, and making sense of human words in the given context.
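The “task-agnostic with minimal tuning” idea is usually exercised through few-shot prompting: instead of retraining, you show the model a handful of worked examples directly in the prompt. A minimal sketch of how such a prompt can be assembled (in JavaScript; the formatting convention here is an assumption for illustration, not OpenAI’s API):

```javascript
// Build a few-shot prompt: a task description, a handful of worked
// examples, and the new input the model should complete.
function buildFewShotPrompt(task, examples, query) {
  const shots = examples
    .map(ex => `Input: ${ex.input}\nOutput: ${ex.output}`)
    .join('\n\n');
  return `${task}\n\n${shots}\n\nInput: ${query}\nOutput:`;
}
```

The model then completes the text after the final “Output:”, having inferred the task from the examples alone.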
GPT-3 has been trained on roughly half a trillion words (the Common Crawl dataset) and, based on that, has a 175-billion-parameter model. Parameters are the different rules the model sets under different conditions as it learns how language works. The complexity of the problem GPT-3 is trying to solve is demonstrated by the fact that the model consumed $12 million worth of compute to produce those 175B parameters. | https://medium.com/swlh/open-ais-revolutionary-new-nlp-model-gpt-3-c8b000432800 | ['Prashant Chamarty'] | 2020-07-22 21:28:50.990000+00:00 | ['Deep Learning', 'Machine Learning', 'Artificial Intelligence', 'Natural Language Process', 'Gpt 3']
form validator with Javascript | A form validator is a basic feature when building a website with sign-in and sign-up pages. Typically the backend requires the user’s name, email, and password, and the user types the password twice so it can be checked for typos.
The form contains a username, an email, a password, and a password-confirmation field.
HTML
The container includes a form that can be submitted, with four inputs and their labels.
It also links the CSS stylesheet and the JS script.
A small tag is the section where the error message shows up.
CSS
Set the label to display: block so each label shows on its own line.
When an input is focused, change its border color and hide the outline.
Hide the message at first; when an error happens, the message shows up.
The border color also changes when the status of the input changes.
When the button is focused, its style changes.
JS
Select the DOM elements first, with getElementById() or querySelector().
When the form submits, it’s time to run all the validation functions.
First, call preventDefault() so the form does not submit before all the checks have run.
When an error happens, the showError function runs and the small tag’s message shows up.
On success, the showSuccess function runs: the class name becomes success and the message stays hidden.
Check every input value; if it is empty, show an error message with showError(), otherwise run showSuccess().
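A minimal sketch of this required-field check, written here as a pure function so it is easy to test (in the real page it would call showError/showSuccess on each input’s DOM element; the field shape is an assumption for illustration):

```javascript
// Return a list of validation results: one entry per field, with the
// message that showError would display for empty values.
function checkRequired(fields) {
  return fields.map(({ name, value }) => {
    if (value.trim() === '') {
      return { name, ok: false, message: `${name} is required` };
    }
    return { name, ok: true, message: '' };
  });
}
```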
There are also other validation functions you can use; see the W3C documentation.
The “required” attribute in HTML is a good option as well; more details here.
Check the input length.
If the username or password is too short or too long, it is not accepted; numeric bounds restrict the length.
The getFieldName() helper converts the first character of the field name to uppercase for the message.
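A sketch of the length check and the getFieldName helper described above (hypothetical; the bounds and message wording are assumptions):

```javascript
// Capitalize the first character of the input's id for display,
// e.g. "username" -> "Username".
function getFieldName(input) {
  return input.id.charAt(0).toUpperCase() + input.id.slice(1);
}

// Validate that a value's length falls within [min, max]; returns an
// error message string, or null when the length is acceptable.
function checkLength(input, min, max) {
  if (input.value.length < min) {
    return `${getFieldName(input)} must be at least ${min} characters`;
  }
  if (input.value.length > max) {
    return `${getFieldName(input)} must be less than ${max} characters`;
  }
  return null;
}
```

In the page itself, a non-null return value would be passed to showError, and null would trigger showSuccess.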
Check the email format. It’s very convenient to do this with a RegExp; the pattern below comes from Stack Overflow.
function validateEmail(email) {
const re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
return re.test(String(email).toLowerCase());
}
Check whether the two passwords are the same. If not, show an error message.
Here is some related material on form validity:
MORE
There are a lot of validator libraries developers can use. For example, FormValidation is a JavaScript library.
It offers another way to validate the email with the library.
I also found a good explanation of basic JavaScript validation from freeCodeCamp.
Another form validator with Javascript article | https://medium.com/@diyifang/form-validator-with-javascript-f44114cf5496 | [] | 2020-12-08 06:18:55.873000+00:00 | ['JavaScript', 'Frontend Development', 'Frontend', 'Front End Development', 'Web Development'] |
How to get a good Job in Machine Learning? | Computer scientist Arthur Samuel is rumored to have said that machine learning is an aspect of his field that gives "computers the ability to learn without being explicitly programmed." That’s why machine learning is also considered an element of artificial intelligence, or AI, which deals more generally with how computers can figure things out for themselves. Essentially, the idea is that, given a good set of starting rules and opportunities to interact with data and situations, computers can program themselves, or improve upon basic programs provided for them.
In the mid-1980s, computer scientists hoped to reshape computing and the ability of computers to understand and interact with the world. (As an aside, Python came into existence in 1989, and Google went on to use it extensively.) There was a huge infusion of interest, enthusiasm, and cash at that time, but AI did not change the world as we knew it then. Over time, AI was found to be suitable for a relatively narrow set of computing tasks, such as creating viable configurations for complex computers. But AI neither set the world on fire nor redefined its boundaries and shape.
More than 30 years later, AI in general and machine learning in particular are enjoying a spectacular renaissance. These technologies are being successfully applied to all kinds of interesting problems in computing, with a broad range of success. Notable accomplishments for machine learning include email filtering, intrusion detection, optical character recognition, and computer vision. Machine learning and AI have proven quite effective at applying computational statistics to data analytics to make predictions and spot trends. Machine learning is hot, hot, hot, and booming right now. Because some companies build or use technologies that employ machine learning and AI, there has been considerable demand for skilled and knowledgeable researchers and developers. But if anything explains a sudden, sharp spike in demand for such people, it's the increasingly pervasive use of predictive analytics across many fields of business. Most of the Fortune 500, and a great many other companies and organizations outside that fold, are now using predictive analytics to seek a competitive edge or to improve their overall ability to deliver goods and services to customers, clients, or citizens. Individuals trained in machine learning are now in considerable demand across the entire employment spectrum. That explains the six-figure salaries that are increasingly the norm for those who land such jobs. Of course, for many who already work in IT or who are heading in that direction, this raises the question: "How can I get a job in AI or machine learning?" The answers are straightforward, if somewhat labor-intensive and time-consuming.
The traditional approach: Get a degree
The field is intriguing for many who may also have a bachelor’s degree in computer science, engineering, or some similar discipline under their belts. In fact, it’s hard to find a reputable graduate computer science program that doesn’t include machine learning amidst its targeted subject matters. If you want to aim for the stars by taking the back-to-school route toward machine learning proficiency, the options below can help:
Make the most of MOOC offerings
For those who can't break away from life and work to pursue a full-time degree on campus, massively open online courses, aka MOOCs, offer a variety of alternatives. MOOCs can encompass actual degree programs at reputable universities, certificate programs that provide ample training but don't confer a full-fledged degree, or mapped-out curricula in machine learning or AI that cover the ground in as much depth as one might wish to learn the subject matter.
You can Visit my previous Medium Blog on useful MOOCs and certifications on Machine Learning
A quick search on machine learning at the MOOC Search Engine produces millions of hits that include the following:
Udacity offers hundreds of courses of varying length, complexity, and depth in this area.
edX's machine learning offerings include a certificate program from Microsoft, as well as numerous graduate-level courses and curricula from well-known colleges and universities.
MIT offers a plethora of online courses in this area, for paid-for college credit or free online audit.
Stanford also offers a collection of machine learning courses for credit or audit.
Hands-on is where learning gets real
There's no substitute for rolling up your sleeves and digging into development work if you want to really understand the principles of AI and machine learning. Expect to devote yourself to your mouse and keyboard, as you start small with toy data sets and basic applications, then work your way up to more serious, real-world problem-solving and solutions. The capstone project for the Microsoft Professional Program in Data Science (not a degree) runs for four weeks, for example, and challenges you to develop a solution to a data set using machine learning to test your skills.
Anyone who digs into this subject matter should anticipate spending upward of 15 hours a week on programming tasks, in addition to attending lectures, completing reading assignments, writing papers and all the other tasks that modern learning demands of students nowadays.
When you're ready to rock, let the world know
Once you've finished that degree, obtained your certificate or knocked off a significant chunk of curriculum, you can start positioning yourself to current or prospective employers as someone with skills and knowledge in machine learning and AI. Unless you also have picked up some hands-on, real-world experience in reaching this professional milestone, remain humble about your skills and abilities in this arena. Warnings aside, the prospects for those who can see themselves through the time, effort, and expense of mastering machine learning and AI should be bright.
Long-term prospects
Lots of people question the long-term prospects of work in artificial intelligence or machine learning. After all, won’t that work be automated along with everything else AI will automate? It’s a valid question, but for now, it’s important to consider artificial intelligence in the same vein as industrial revolutions of the past: something that allows people to gain new capabilities and create whole new economies. ATMs are correlated with an increase in bank tellers.
Yet, ATMs may be responsible for long-term structural unemployment. The future, as ever, is murky. Yet we can learn from the history of ATMs that automation doesn’t automatically mean job loss, though it certainly means that new technologies can upend established truths.
Compensation and roles
Data scientists have one broad split in the categorical definition here: data analysts also fall under their purview. The main difference is that data analysts lean more toward communication data and doing one-off queries of established data models, which tend to be defined by data scientists. This article dives deeper into the split between a data analyst and data scientist roles.
The difference can be quite material. In the United States, the average salary for data analysts is about $60,000. The average data scientist will earn about $30,000 more a year.
Meanwhile, data engineers will also earn an average of about $90,000 a year, similar to their data scientist peers. However, engineers focused specifically on implementing machine learning earn significantly more, easily going above $100,000 a year, and at its upper tiers, a $200,000-a-year average among top-paying companies. Well-known names in the AI field will sometimes get millions of dollars in cash compensation and stock, though they tend to be AI practitioners who are doing cutting-edge work and research at top universities or laboratories around the world.
Broadly speaking, if you want to develop your career in artificial intelligence, you can get started with a software development background and pick up the machine learning theory, or you can start off with the machine learning theory and communication skills and gradually pick up the programming chops to work in machine learning.
Skills required
In order to work with artificial intelligence/machine learning, you generally need four skill sets:
The software engineering chops to implement models in practice. You’ll often work with tools like Python, Pandas, Scikit-Learn, TensorFlow and Spark. The ability to ably work within that toolset will determine your ability to process, “wrangle,” clean, and manage your data so you can use it to process the large streams of data required in a production-level model.
The knowledge of machine learning theory so you know what model to implement and why, and the downsides or upsides of applying certain approaches to certain data problems.
The ability to use statistical inference to quickly evaluate whether or not a model is working.
Domain-level knowledge and the ability to communicate insights from data to business stakeholders. It’s important not only to be able to gain insights from data, but also to be able to push the right answers in front of business-level units so you can help drive solutions.
In practice, machine learning engineers will lean more on their software engineering chops, while data scientists rely more on their knowledge of machine learning theory and statistical inference, along with the ability to communicate those data insights.
Resources
Here are some resources that can help you pick up the skills you need to place your best foot forward when it comes to applying to the AI jobs that are out there (mostly a hybrid of data science or machine learning engineering roles).
Software engineering for artificial intelligence
Machine Learning in Python Course
This free, curated course will run you through the basics of how to use powerful Python frameworks to wrangle data and build basic models for it. You’ll start working with critical data science tools, such as Pandas and scikit-learn, and get a real feel for how to put machine learning theory into practice.
Apache Spark on Databricks for Data Engineers
This tutorial for Apache Spark helps introduce how to work with big data sets for data engineers and machine learning engineers.
Learning TensorFlow
Working with TensorFlow will be an important part of understanding and implementing artificial intelligence models. This website offers a bunch of beginner-level tutorials that can help you quickly understand this powerful deep learning framework.
Publicly Available Big Data Sets
This collection of different big data sets will give you open-source data you can play around with as you look to build big data pipelines of your own.
Machine Learning/Artificial Intelligence Theory
A Tour of the Top Ten Algorithms for Machine Learning
This Medium article summarizes the different machine learning algorithms you can use for your data, complete with visualizations on how they treat your data.
Modern Theory of Deep Learning
This highly technical piece talks about the possible statistical and mathematical roots of why deep learning models seem to function so well.
Statistical Inference
A Concrete Introduction to Probability with Python
This interactive Python notebook by AI legend Peter Norvig will help you reason with basic probability concepts and play around with them, gaining a critical skill set and perspective into statistical inference.
Bayesian Statistics for Dummies
This handy tutorial simplifies Bayes Theorem, a crucial part of reasoning with changing probabilities and an important perspective to have with ever-shifting machine learning models.
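As a concrete illustration of the theorem that tutorial covers, here is a worked posterior calculation (the numbers below are made up for the example):

```javascript
// Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B), where
// P(B) = P(B|A) * P(A) + P(B|~A) * P(~A).
function posterior(priorA, likelihoodBgivenA, likelihoodBgivenNotA) {
  const evidence =
    likelihoodBgivenA * priorA + likelihoodBgivenNotA * (1 - priorA);
  return (likelihoodBgivenA * priorA) / evidence;
}
```

For instance, a test that fires 90% of the time when a condition is present and 10% of the time when it is absent, applied against a 1% prior, yields a posterior of only about 8.3% — the kind of counterintuitive result Bayesian reasoning is good at exposing.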
Statistics for Evaluating Machine Learning Models
This tutorial goes over the statistical foundation for calculating confidence intervals, a foundational part of machine learning evaluation.
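The kind of interval that tutorial derives can be sketched numerically: a normal-approximation confidence interval for a model’s measured accuracy (a simplification; the linked tutorial covers when this approximation is appropriate).

```javascript
// Normal-approximation confidence interval for a classification
// accuracy measured on n examples: acc +/- z * sqrt(acc*(1-acc)/n).
// z defaults to 1.96, the critical value for a ~95% interval.
function accuracyInterval(accuracy, n, z = 1.96) {
  const halfWidth = z * Math.sqrt((accuracy * (1 - accuracy)) / n);
  return [accuracy - halfWidth, accuracy + halfWidth];
}
```

For example, 80% accuracy measured on only 100 test examples gives roughly [0.72, 0.88]: the interval, not the point estimate, is what you should compare between models.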
Job boards/places to find ML work
All of this theory is great, but where do you actually go to find job postings related to AI? Here are some places where you might find artificial intelligence work, ranging from specific communities to AI-focused mailing lists or job boards.
Ask HN: Who is hiring? (October 2018) | Hacker News
Hacker News, a technically focused community wrapped around the YCombinator accelerator for startups, has monthly “Who is hiring” threads that tend to bring up a lot of work in artificial intelligence. Just ctrl+f for “machine learning engineer” or “data scientist” roles with different companies. As a bonus, hiring managers tend to post directly, which should help you get in touch with the right people faster.
AngelList
AngelList is a repository of startup jobs, and there are several listings for machine learning jobs. Look around and apply with one click.
Data Science Jobs & Careers | Data Elixir Jobs Board
Data Elixir is a data science specific mailing list, and it also offers a job board for positions in industry that deal with artificial intelligence and data science. There are often positions for machine learning engineers as well.
KDnuggets Jobs
KDNuggets is filled with data science and artificial intelligence resources and it serves as a useful place for job postings as well, with job postings dedicated to data engineers and machine learning engineers as well.
Artificial Intelligence Job Board | crunch data
This AI job board curates some opportunities in the field. While it can be hit or miss when it comes to curation of the job posts presented, there are enough postings that are relevant to make up for it.
Interview/networking Tips
In order for you to get into a position to do artificial intelligence work, you’re likely going to have to network and do informational interviews with people in artificial intelligence roles. Then you’re going to have to interview.
This interview guide to data science roles will help with more comprehensive information. You’ll want to practice interview questions with lists such as these machine learning questions.
Thank You!
Happy learning.
If you have any doubts, feel free to comment, and if you like the approach and find it helpful, click clap to show your gratitude. | https://medium.com/machine-learning-with-abs/how-to-get-a-good-job-in-machine-learning-3277c2c11de0 | ['Arpit Bhushan Sharma'] | 2020-04-30 22:25:49.761000+00:00 | ['Machine Learning', 'Jobs And Money', 'Data Science', 'New Technology 2019', 'Good Job']
Make Your Application Essay Harvard-Ready with AI | How AI Can Make Your College Application Essay Harvard-Ready
What AI says about what makes successful Harvard and Yale admissions essays work.
With deadlines looming for college applications, as many as two million college applicants nationwide will spend the holiday season putting their final touches on that most important and feared aspect of the college admissions process: the college application essay.
Because of COVID, the essay is more important this year than ever before. Many applicants have been unable to sit for standardized tests like the SAT, placing an even greater emphasis on grades and essays in this admissions cycle.
Given my long-running interest in AI storytelling, I started with a simple question: can AI improve the college application essay?
The answer is a resounding YES, and not in years or decades, but right now. You should follow this advice before submitting your application essay in the coming weeks!
And I have the data to prove it.
I analyzed more than 100 successful application essays that I found online. The dataset includes 55 successful Harvard application essays, 50 successful Yale admissions essays, and more than 30 publicly available "before" and "after" pairs: essays submitted by actual applicants prior to any editing or coaching, and the same essays post-coaching.
I ran each essay through Grammarly’s AI to compile helpful statistics and to look for commonalities. My questions included:
Is there something different about successful Harvard and Yale admissions essays not shared by their less prestigious brethren?
Are Harvard essays better scoring than Yale essays?
Are there any actionable insights for applicants to improve their essays?
Do paid essay coaches improve the score of the essays based on “before” and “after”?
While I have no connection to or relationship with Grammarly, I chose the platform because it is among the leading consumer writing AIs on the market, and it’s mostly free. According to TechCrunch, that company is valued at more than $1 billion, and it has invested at least $200 million in its technology, so I figured it was worth a shot. Additionally, I chose EssayMaster as the source of essays because they have many successful essays accepted by top schools and because I advised the founding editor there, who was a long-time head of admissions at a university.
My Awful Harvard Admissions Essay
But, maybe, the real reason I went down this rabbit hole is that I just had to know if my application essay, the one I wrote to get into Harvard, was as perfect as I imagined it was 25 years ago.
Twenty-five years ago, I submitted my essay to Harvard, and nobody but me edited it, no machine or human. I didn’t even show it to my parents. And it worked: I got in.
But based on my analysis, I would not have gotten in today. My essay scored atrociously on Grammarly, with an overall score of 83. When compared to modern successful examples, it is not even in the same league. If a similar applicant with my grades and test scores submitted the same trite, poor-scoring garbage to Harvard today, that poor soul would almost certainly be denied, and, based on the data, probably even Yale wouldn’t take her.
I still remember the name of my essay, and I even managed to find it: “Hiking to Understanding.” I’m afraid the essay did not improve from its cringe-worthy title, and today, I’m horrified by the adverb-laden text. After reading On Writing by Stephen King, I’ve learned to hate adverbs, although I fail to hate them enough apparently 😊.
My sister whose application essay scored better than mine
But, that terrible score, that 83, would be fine, so long as my sister, Catherine, who is eleven years younger than me and went to Georgetown, had a worse score. So I asked her for her essay. When I saw her proud title, “The Four Corners of Me,” I thought I had a chance.
She scored a not particularly respectable 90. As it turned out, compared to modern Harvard and Yale goers we both stunk, but I stunk far worse. So now I have that to deal with at family get-togethers.
Needless to say, I would never have submitted an 83, because today I would not be foolish enough not to avail myself of AI-assisted editing. In fact, I would go so far as to say that submitting an application essay without any reference to an AI is an anachronism, like gas-powered automobiles.
The truth is AI can improve your admissions essay, and I will tell you how, but first, it’s important that you know this one thing about what the AI is doing: it is beyond your comprehension.
As it turns out, that’s not an insult.
For the purposes of the admissions essay and for this article, all you must know about deep-learning algorithms is this: the reason why the computer composes one sentence and not another or says one thing is wrong and not another is completely incomprehensible to a human seeking to deconstruct the algorithm, even in principle. That simply is the nature of machine-learning.
No less an authority than Wired Magazine has observed that the nature of the technology is that it “produces outcomes based on so many different conditions being transformed by so many layers of neural networks that humans simply cannot comprehend the model the computer has built for itself.”
Ok, so now that you know you can’t understand it, how can AI improve an admissions essay?
Five Easy Steps to Improve Your College Application Essay with AI
Here are the goods. Based on the data, you should do these 5 things to optimize your essay:
1. Score at least a 95 on Grammarly for “Overall score.”
The successful Harvard and Yale essays in the dataset had an average score of 97.4 and a median score of 98. Meanwhile, the average “before” score for an essay in the EssayMaster dataset is 88.1. This is a significant difference, but it should surprise no one that applicants to Harvard and Yale generally write better than the average applicant; however, the data also shows this gap can be closed. Interestingly, the average “after” score for an essay is a 97.6 — a score in line with what a student is expected to have for Harvard or Yale admission. Wise applicants should run their essays through Grammarly (the basic service is free) to see how they score and then work to improve that score.
2. All college admissions essays should score “Very Engaging.”
This is an important baseline. Every single successful college admissions essay accepted by Harvard or Yale in the dataset was “very engaging” based on Grammarly’s score. You have all the time in the world to write your essay; if yours is not scoring “very engaging,” you should consider why and see if you can improve it. Needless to say, my essay did not score at that level. My 1995-written essay was a bit bland by Grammarly’s metric, apparently the kiss of death given that every single essay in our dataset accepted by Harvard and Yale scored “very engaging.” My sister’s successful Georgetown essay, unfortunately, also cleared this bar, scoring “very engaging.” Kudos Catherine!
3. Get the delivery “Just Right”
About 87% of accepted Harvard and Yale essays had a delivery that scored “Just Right”; the rest were “Slightly Off.” Though less important than being engaging, getting the delivery correct and tonally accurate matters for a successful essay. With that said, the 13% of essays that were “slightly off” still got into Harvard and Yale. Not surprisingly, a higher proportion, nearly one-third, of the “before” essays were “slightly off.” There are free resources, like this admissions essay help course, to learn how to improve an essay’s delivery yourself.
4. Use 50–55% unique words and ~33% rare words but don’t thesaurus-ize!
The percent of unique words measures how many words appear in the essay exactly once, divided by the total number of words. The percent of rare words measures the share of words that are less frequently used in English. The Harvard and Yale essays had an average of 54% unique words compared to the other essays’ 48%. The minimum percentage of unique words in the Harvard and Yale essays was 40%, versus 34% for the other essays. Rare words told a similar story: the percent of rare words used in a Harvard or Yale admissions essay was 33%, versus 31% for the “before” essays.
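As a rough illustration of the unique-word metric described above, here is a minimal sketch in Python. This is not Grammarly’s actual implementation (its tokenization and scoring are proprietary); it simply counts the share of words that appear exactly once:

```python
from collections import Counter

def unique_word_percentage(text: str) -> float:
    """Percent of words that appear exactly once, over total words."""
    words = text.lower().split()
    counts = Counter(words)
    singletons = sum(1 for word in words if counts[word] == 1)
    return 100.0 * singletons / len(words)

# "the" repeats, so 7 of the 9 words are unique: about 77.8%
print(unique_word_percentage("the quick brown fox jumps over the lazy dog"))
```

A real scorer would also strip punctuation and normalize word forms, but even this toy version shows how a higher share of one-off words pushes the metric up.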
But do NOT thesaurus-ize.
Stephen King’s advice is more true today than ever before:
“Any word you have to hunt for in a thesaurus is the wrong word. There are no exceptions to this rule.” — Stephen King
If the word doesn’t come naturally to you, you could be committing a horrific error in language and make it the easiest possible “No” for an admissions officer.
5. Be at least “clear.” 66% of Harvard and Yale essays scored “very clear.”
Of Harvard and Yale essays, 66% scored “Very clear” on Grammarly’s clarity metric, while 11% were “mostly clear,” and 23% were “clear.” That being said, this appears to be the least useful metric reported by Grammarly, given that a greater percentage of the “before” essays were very clear. The takeaway is this: So long as you are “clear” or better, then you are in good company.
So, are Harvard essays better than Yale essays … and other urgent questions?
We started this journey with a few urgent questions. Here are the answers:
Successful Harvard and Yale essays score better than other applicants’ essays by about 10 points on Grammarly. They use more unique and rare words, and they have “just right” delivery.
On the question of whether Harvard essays score better than Yale essays, Harvard beats Yale 98 to 98. That is, Harvard essays are NOT better scoring than Yale essays: the medians of both were 98. Yale has slightly more unique words and Harvard slightly more rare words.
There are some pretty obvious things applicants can do to improve their essays. Most importantly, have an overall score of at least a 95 on Grammarly, and aim for a “very engaging” score and a “just right” delivery score. Don’t sweat the clarity score, so long as it is “clear” or better.
On the question of whether paid essay coaches improved essays, it was self-evident that they did, at least by the measure of Grammarly AI. The before set scored an 88 and the after set scored a 98 for the essays in the dataset.
As far as the role of AI in the practice of writing, it appears we are in a goldilocks zone, prior to the ultimate ascendancy of automated storytelling, where the best writers will not only be skilled at their craft but also proficient masters of AI.
I’d expect that for the next decade or two, the state-of-the-art in storytelling will consist of an AI-assisted human edit. In no more than five to ten years, computers will reliably suggest reasonable next sentences and topics for future paragraphs, and, it will end in the singularity of Deep Story AI, where human-produced writing is clearly inferior to machine-produced creativity.
In a future where sophisticated machines are producing stellar admissions essays, then the only capable scorer of such nuance will be other machines. At that point, the audience for machine writing will be machine scoring.
If the perception is that admissions committees operate in a star chamber today, just wait until AI renders their candidate decisions incomprehensible, even in principle. Perhaps that day has already arrived.
P.S. — Grammarly scored this article an 84 with an engagement score of a bit bland. Sorry about that. I guess not much has changed in 25 years. 🤦 | https://medium.com/towards-artificial-intelligence/how-ai-can-make-your-college-application-essay-harvard-ready-90f9dde79a90 | ['Geoff Cook'] | 2020-12-26 22:02:31.762000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'College Admissions', 'Technology', 'Writing'] |
So close to a REAL volcano | In April 2019 we went to Guatemala to see the country but also to volunteer at a school there. The school was called Sosep Santa Maria de Jesus. When we were done teaching we would have the rest of the day to do whatever we wanted. One of these days we went to see Volcano Pacaya. I have been to see many volcanoes from far. But all of the ones I have been to were dormant. This was my first time seeing a real volcano up close and seeing oozing lava and hearing eruptions. You can see from the picture the lava. We got so close to it we could actually roast marshmallows. We spent a long time roasting and eating marshmallows. Once in a while there would be an eruption sound and lots of rocks and pebbles would fall and you would see the red lava come out a little bit. There was a really cute dog who followed us there all the way just so he could eat our marshmallows.
I don’t know how to put up videos here yet, so I cannot post the one I took. But here is a video of the volcano I found on YouTube. Mine is very similar, and that’s exactly how loud the sounds are. At first you get a bit scared, but then it’s just so cool and fun. I think it is one of the coolest experiences I have had so far! By the way, I took this picture!
https://www.youtube.com/watch?v=AANiWbtyrh8 | https://medium.com/@mayashouse/so-close-to-a-real-volcano-60936c561420 | ['Mayas House'] | 2021-02-26 05:07:00.621000+00:00 | ['Ecotourism', 'Voluntourism', 'Guatemala City', 'Volunteering', 'Traveling'] |
Species for Sale: Manta Ray | by Thomas Gomersall
Manta rays (Genus: Manta) are giant cartilaginous fish found in tropical and subtropical seas worldwide. They are planktivores that migrate along well-established routes and gather in large numbers at predictable zooplankton hotspots to feed, ingesting large volumes of water and filtering plankton from it using rigid sieving pads called gill plates. They are slow breeding, naturally uncommon animals, with females normally producing one pup per year and the largest populations numbering in the low thousands (Couturier et al, 2012; Cornish, 2020).
Item on Sale:
Photo credit: Paul Hilton WWF
Removing the gills of a manta ray. Photo credit: Andy Cornish WWF
Along with the closely related devil rays, manta rays are targeted for their gill plates, which are sold dried for traditional Chinese medicine (TCM) under the trade name pengyusai (O’Malley et al, 2017). Although gill plates are rarely prescribed by TCM practitioners (who have even acknowledged the lack of evidence for their effectiveness), they became highly sought after over a decade ago thanks to traders in Guangdong aggressively marketing them as a health tonic ingredient. As a result, some fishermen who had once not targeted manta rays and would even release ones they accidentally caught switched to actively hunting them (Hilton, 2012; Cornish, 2020, O’Malley et al, 2017).
Dried seafood shops line the streets of Sheung Wan, where gill plates are sold. Photo credit: WWF-Hong Kong
Hong Kong is one of the five biggest Asian importers of manta gill plates, which are often sold in dried seafood shops or on online shopping platforms like Taobao and can reach over US$430 per 500g (Hau et al, 2016). Sheung Wan has by far the highest concentration of shops selling gill plates and the availability of gill plates here is even higher than in Guangzhou; the biggest gill plate consumer (O’Malley et al, 2017; Hau et al, 2016). Interestingly, despite these large volumes, demand and consumption amongst Hong Kongers is not that high (Cornish, 2020) and a 2016 study found evidence suggesting that the main buyers of gill plates here may actually be mainland Chinese, with e-commerce sales mainly targeting this demographic and three out of five gill plate sellers in Hong Kong likely to be advertising to mainland Chinese customers (Wu, 2016).
Price:
Manta ray at fish market. Photo credit: Andy Cornish WWF
Manta ray habitats and migration routes often overlap with fisheries, which, combined with their tendency to congregate at predictable sites, makes it easy to catch large numbers of them (Couturier et al, 2012; Germanov and Marshall, 2014). Given their very slow reproductive rate and natural uncommonness, an annual global catch of 3,400 individuals — not accounting for unreported and subsistence catches — is unsustainable, and some of the largest fisheries, like Indonesia’s, have seen major declines (Hilton, 2012; Lewis et al, 2015).
Photo credit: James Morgan
Since not much is known about the ecology of manta rays, it is unclear what the wider environmental effects of their decline are. But one possible consequence is a disruption of marine nutrient cycling. Studies have found that mantas feed in deeper, nutrient-rich waters by night then return to the nutrient-poor shallows and surface waters during the day (Braun et al, 2014; Peel et al, 2019). As water usually doesn’t mix much between different ocean layers on its own, in areas where they are more common, mantas could play important roles in transferring nutrients between them through defecation or simply by their large mass shifting water as they move.
“When [manta rays] travel vertically, because they’re such large animals, they are pulling water behind them and that is pulling some of the nutrient-rich water from deeper waters into shallow waters,” says Andy Cornish, leader of WWF’s global shark and ray programme in Hong Kong.
How can Hong Kong help?
Despite Hong Kong’s historical role in the gill plate trade, recent developments have made it easier than ever for the city to combat it. In 2014, manta rays were listed on Appendix II of the Convention on International Trade of Endangered Species (CITES), meaning that export permits for their parts can only be granted if the exporting country can prove that fishing is sustainable. But as manta fisheries are inherently unsustainable, no country is known to have attempted to legalise a sustainable trade, the implication being that any international gill plate trade today is illegal. This means that manta gill plate imports into Hong Kong are much more tightly regulated than before the CITES listing (Cornish, 2020).
“While a CITES Appendix II listing is designed to allow for a sustainable legal trade, manta rays have such low rates of reproduction that they should really be fully protected,” says Cornish. “I would be very doubtful that any country could undertake the CITES sustainability assessment necessary to show that [manta ray fisheries] could be sustainable.”
Photo credit: Paul Hilton WWF
Authorities also need to monitor the Hong Kong stockpile of manta gill plates that were imported prior to the CITES listing in 2014. Although these can be sold legally, as with elephant ivory, there is still the risk that the gill plates of recently killed manta rays could be laundered with the old gill plates. If the stockpile does not significantly decrease over time, this might suggest that it is being illegally supplemented with new gill plates, in which case authorities will need to increase their efforts to prevent their entry into Hong Kong and enhance the monitoring of stockpiles (Cornish, 2020).
There’s more to learn about rays and their relatives, the sharks, which will be the focus of an upcoming WWF-Hong Kong Ocean Celebration event at Hoi Ha Wan Marine Life Centre on 23–24 May. Bookings available here.
References:
· Braun, C.D., Skomal, G.B., Thorrold, S.R. and M.L. Berumen. 2014. Diving behavior of the reef manta ray links coral reefs with adjacent deep pelagic habitats. PLoS ONE, vol. 9(2): e88170
· Cornish, A.S. (PhD), interviewed by Thomas Gomersall, 2020, WWF-Hong Kong.
· Couturier, L.I.E., Marshall, A.D., Jaine F.R.A., Kashiwagi, T., Pierce, S.J., Townsend, K.A., Weeks, S.J., Bennett, M.B. and A.J. Richardson. 2012. Biology, ecology and conservation of the Mobulidae. Journal of Fish Biology, vol. 80: 1075pp.–1119pp.
· Germanov, E.S. and Marshall, A.D. 2014. Running the gauntlet: Regional movement patterns of Manta alfredi through a complex of parks and fisheries. PLoS ONE, vol. 9(12): e115660.
· Hau, C.Y.L., Ho, K.Y.K. and S.K.H. Shea. 2016. Rapid survey of mobulid gill plate trade and retail patterns in Hong Kong and Guangzhou markets. BLOOM Association Hong Kong: 1pp.–20pp.
· Hilton, P., ‘There is a catch’. South China Morning Post, 7 October 2012, https://www.scmp.com/magazines/post-magazine/article/1053711/there-catch (Accessed: 5 March 2020).
· Lewis, S., Setiasih, N., O’Malley, M.P., Campbell, S., Yusuf, M. and A. Sianipar. 2015. Assessing Indonesian manta and devil ray populations through historical landings and fishing community interviews. PeerJ Pre-Prints, vol. 3: e1642.
· O’Malley, M.P., Townsend, K.A., Hilton, P., Heinrichs, S. and J.D. Stewart. 2017. Characterization of the trade in manta and devil ray gill plates in China and South-east Asia through trader surveys. Aquatic Conservation: Marine and Freshwater Ecosystems, vol. 27: 394pp.–413pp.
· Peel, L.R., Daly, R., Keating Daly, C.A., Stevens, G.M.W., Collin, S.P. and M.G. Meekan. 2019. Stable isotope analyses reveal unique trophic role of reef manta rays (Mobula alfredi) at a remote coral reef. Royal Society Open Science, vol. 6: 190599.
· Wu, J. 2016. Shark fin and mobulid ray gill plate trade in mainland China, Hong Kong and Taiwan. Traffic Report: 1pp.–78pp. | https://medium.com/wwfhk-e/species-for-sale-manta-ray-f5c8eedec8c2 | ['Wwf Hk'] | 2020-05-19 08:28:07.764000+00:00 | ['Sustainable Seafood', 'Hong Kong', 'Manta Ray', 'Gill Plates', 'Conservation'] |
What are the most common mistakes in UX Design ? | The UX process is often complicated, involves multi-functional areas and time consuming. Hence we have listed below a list of mistakes one makes while designing a product.
User Experience Error
1. Typography:
Poor use of typography invariably ruins a good design. The fonts used should be legible, web friendly and should have correct line spacing. One should test how it appears on different devices like desktop, laptop, tablet and smartphones both under light and darkness. One can choose fonts from Google’s font
2. Too Much Text:
Too much text leaves a browser confused and uninterested in navigating further. One should aim for minimum text with relevant information without compromising the gist.
3. Design Heavy:
Some designers put excessive importance in designing leaving too many design elements on the site. However some places require simplicity like forms, tariffs etc.
4. Poor Contrast:
Website with poor contrast leaves a bad taste for the browser. Poor contrast with thin fonts even makes it worse. Hence designers should look to this aspect while designing, particularly those following modern minimalistic pattern.
5. Cluttered Layout:
Any design that appears cluttered sure to fail. Be it for web or any other media. Unclutter it at any cost to get the most out of it.
6. Slow Loading:
Websites that load faster are a delight. In contrast a slow loading site may fail even to register the click. You can check your speed of your Website here.
7. Ease of navigation:
If the navigation is confusing then browser will visit elsewhere. There should be clarity in it. Buttons should be in place where a visitor expects it to be.
8. Animation Heavy:
Any site with heavy dose of animation makes the user experience sour as first of all many don’t prefer animation and secondly many does not understand them.
9. Satisfying Yourselves:
Websites are designed for users you don’t even know. When you design it blindly gratifying only your own satisfaction then it becomes a problem. That is why user tasting and feedback is paramount. You can find a good example by looking this website design where they achieve a good User Experience.
10. Unintuitive Buttons:
Navigation takes place through buttons. Hence buttons that are too small, in the wrong place or difficult to locate because of color or contrast should be avoided.
11. User tasting:
Websites must be user tasted extensively before being thrown open to browsers. This process prevents a lot of UX issues. A designer can not determine it by himself the different usability.
12. Responsiveness:
Website must look the same across platforms and devices. If the website does not look the same in different devices then user may simply leave. It is important that all elements of a page must appear across all devices. Google has its own mobile friendly test tool. You can find that here.
13. Turning Blind to Feedback:
Any feedback is like gold dust. Since we are designing it for the users improvement will come only through listening to users and users only.
14. Designing Without Content:
Many design the pages putting LOREM IPSUM as content. Though it can be a reference point, it can no way replace the actual content and during putting the content there may arise unforeseen problem where one has to change the complete layout.
15. Bad Forms/forceful subscription:
Many websites use this tactics when landing on their pages where users are mandated to either subscribe or fill up the form. This is an extremely negative practice because of which browser stays away.
16. Review/Testimonials:
A good website always have scope for review or testimonials. However not having review page is considered negative. | https://medium.com/@cornerstone-digital/what-are-the-most-common-mistakes-in-ux-design-1d71fec9624a | [] | 2020-12-19 06:53:18.977000+00:00 | ['UX Design', 'UX Research', 'User Experience', 'Ux Strategy', 'User Experience Design'] |
You never knew the Indian RAW and Israeli Mossad were this close! | The relationship of RAW and Mossad reached a greater height when Ajit Doval was appointed as the National Security Advisor for India.
In the Facebook Live video of Israeli PM Benjamin Netanyahu from Modi’s visit to Israel, when Bibi introduced Yossi Cohen, their NSA who is also the current chief of Mossad, I remember PM Modi saying, “Doval is in close contact with your NSA”.
While going through the news, I read about the 7 agreements India and Israel signed during Modi’s visit, and one among them was intelligence sharing.
RAW and Mossad have the same motto, which is zero tolerance towards terrorism. Indian intelligence promised to share intelligence on any kind of threat to Israel from Asia, and Mossad promised the same for threats from the Middle East.
The bromance of Mossad and RAW started in the early 70s with Operation Kahuta, in which India spied on Pakistan’s nuclear base. But thanks to Morarji Desai, the PM who gave the game away, the operation collapsed, which eventually resulted in the assassination of all the undercover agents in Pakistan.
It is also well known that during PM Modi’s G20 summit visit to Turkey, RAW gathered intelligence about a probable attack on Modi and many other world leaders. RAW managed to get help from British intelligence (MI6) and Mossad for the complete security of Narendra Modi until he landed back in India.
Ajit Doval and Yossi Cohen. Real life James Bonds :)
Footnotes,
Mossad, MI5 roped in to shield Prime Minister Narendra Modi in Turkey?
India-Israel intelligence co-operation | https://medium.com/@vikyathrten/you-never-knew-indian-raw-and-israeli-mossad-were-this-close-d8d85fd941ad | ['Vikyath Kumar'] | 2020-12-09 12:29:01.002000+00:00 | ['Mossad', 'India', 'Research And Analysis', 'Israel'] |
The Twisted Psychology of Giving | The Twisted Psychology of Giving
Do we give because we care? Or because it makes us feel good? And what happens if we come into a pile of money?
What would you do with a big influx of cash? Image: Pixabay/Maklay62
If you knew your favorite uncle might someday leave you a fortune, would you be willing to commit today to giving a chunk of it to charity? If you weren’t sure whether the old man had you in his will, or how generous he’d be, odds are good you’d make a solid pledge, research suggests. But as soon as ol’ Uncle So-and-So was six feet under and your bank account six figures stronger, all bets would be off.
Much about the psychology of giving remains a mystery, but a picture is emerging that’s a little twisted. First …
Why We Give
Donating to charity has been shown in some studies to make people feel good. Sounds logical. But despite lots of expert explanations about how and why giving makes us happy, the evidence is pretty thin (especially in terms of any lasting happiness). But much of the research is based on experiments in lab settings, involving fake money or points.
Social psychologist Elizabeth Dunn of the University of British Columbia did a series of experiments a decade ago that indicated money was linked to happiness, but only if people gave it away versus spending it on stuff for themselves. Giving once might bring a brief bout of joy, Dunn figured, but “if it becomes a way of living, then it could make a lasting difference.”
However, studies like these, as with many studies in psychology, typically struggle to show cause and effect: Does a giving spirit create happiness, or are happy people more generous?
Such research also doesn’t typically address the philosophical elephant in the room: Do we give because we simply care, or do we give because it makes us feel good (or because of the tax deductions)? Some research shows that people are more likely to donate if they get swag in return — something that suggests at least a bit of selfishness, and which reduces the effective amount of their gift, of course.
Don’t expect a conclusion on the altruism question anytime soon.
Meanwhile, what’s really fascinating — and a little easier to pin down with research — is how a person’s perception of money and giving change when they suddenly have more of it.
The Windfall Effect
When people come into big money — from, say, the lottery or an inheritance — research suggests they’ll be more willing to donate a portion of that windfall than they had been willing to part with their regular earnings.
But that’s not the whole story, as David Reinstein, a University of Exeter economics lecturer, has shown.
At the end of a web survey, Reinstein and colleagues told a few hundred UK residents they had a 50/50 chance of winning 10 pounds (about $13). Not exactly a windfall, but scientists don’t typically have a lot of money to throw around (hence all the experiments with points or fake money).
Before they knew if they’d won some cold hard cash, some of the participants were asked if they’d like to donate part of their potential mini-windfall to one of two well-known charities. Others were told they’d won the cash and then were asked if they’d like to donate.
Those who were asked about their generous nature prior to winning were about 50 percent more likely to give some away, and their donations per participant were twice as much as the other group’s.
“People are more generous before they know how much money they will receive,” says Reinstein, co-author of the study published in the January issue of the Journal of Public Economics.
(For the record: The participants who won were actually paid; the donations were actually made; and the losers actually got nothing.)
What Gives?
I asked Reinstein why getting the influx of cash tends to turn people from generous to stingy. His team, drawing from their own work and other research, offers several possible explanations, and Reinstein created three cartoons to summarize the ideas:
1. People adapt quickly to having money in hand and are reluctant to part with it. Economists call it “loss aversion” or “endowment effects.”
2. People are more generous with less tangible income and wealth, and the perception of money changes when people actually have more of it.
3. “People pursue reputation and self-esteem by committing to behave pro-socially in the future,” Reinstein said. In some cases, “this can lead people to commit to give greater amounts from outcomes when committing conditionally” for some situation that may or may not happen.
But will 10 pounds have the same effect on a person’s goodwill as scoring the Mega Millions jackpot? Reinstein acknowledges that gap, but he says other real-world research, involving greater sums of money, suggests the motivations his team has uncovered “would carry over” to real windfalls.
There’s a point to all this.
“These findings have real implications for fundraisers who are trying to find the best time to ask someone to commit to giving a donation,” Reinstein says.
And that time is, of course, before people come into serious money.
Reinstein hopes to test his ideas in a big way. Through his website Give if You Win, he’s encouraging businesses and charities to get together and create programs in which white collar employees are encouraged to pledge a percentage of their year-end bonuses before they know how big those bonuses will be (or if they’ll even get one).
He figures such pledges could make the culture of big corporate bonuses more palatable. “When bankers succeed, so will charities,” he says. | https://medium.com/luminate/the-twisted-psychology-of-giving-386418f53abe | ['Robert Roy Britt'] | 2019-02-20 17:07:14.315000+00:00 | ['Self Improvement', 'Charity', 'Science', 'Culture', 'Psychology'] |
Coming Soon: Trade Decentraland items with WAX Tokens | We are excited to announce that items for virtual reality platform Decentraland will soon be tradable with WAX Tokens on OPSkins!
This addition is different from other ERC-721 tokens that we’ve added in recent months because Decentraland is a decentralized virtual reality platform which allows users to create, experience and monetize their content, games, and applications. Decentraland’s mission is to create a virtual world that is owned by its inhabitants.
Decentraland is made up of thousands of 10 square meter parcels of virtual space. These parcels are represented by LAND, a non-fungible token, and are identified using cartesian coordinates. LAND owners can build and publish 3D content and applications, such as Ethereum games, on their LAND. Each parcel of LAND is unique since its geographic location and distance from the center of the virtual world play a large role in determining how much foot traffic (and visual traffic) it gets.
The LAND is tracked on the Ethereum network using non-fungible ERC721 tokens, which can be bought and traded on OPSkins.com with WAX, bitcoin, Ethereum, and fiat currency.
“Blockchain-based non-fungible virtual items add an exciting and lucrative new concept to the cryptosphere,” said Malcolm CasSelle, CIO of OPSkins and President of WAX. “Decentraland is helping further establish the Crypto Collectible marketplace, which OPSkins and WAX are excited to be key players in developing. The integration of Decentraland onto the OPSkins platform creates additional utility for WAX Tokens and incentivizes more players to join the ecosystem.”
Decentraland’s non-fungible ERC721 tokens will be the latest to be added to OPSkins and tradable with WAX Tokens — the first was CryptoKitties in December 2017, and there are more to come. OPSkins is becoming the go-to marketplace to trade Crypto Collectibles. OPSkins boasts over two million weekly transactions on its platform, has millions of active customers and adds 200,000 new users each month. WAX allows anyone to create a virtual goods marketplace, onboarding millions of gamers to the growing cryptocurrency space.
Join the discussion on Telegram at https://t.me/wax_io | https://medium.com/wax-io/coming-soon-trade-decentraland-items-with-wax-tokens-5a7a45c9bf7b | ['Wax Io'] | 2018-03-27 19:07:10.266000+00:00 | ['Blockchain', 'Cryptocurrency', 'Ethereum', 'Cryptocollectibles', 'Bitcoin'] |
Biggest Red Flags In A New Relationship. | As we all know, when we are so deep into someone, we focus only on them and how to make them like us back. Along the way, we ignore the biggest signals the universe keeps sending us, signals that could actually protect us from all the negativity and toxicity that would otherwise catch us off guard.
Here are the red flags you shouldn’t be ignoring when getting into a new relationship:
Trust your intuition; trust your instincts, because they are always true.
Whatever you’re feeling about this person deep down is probably true, even when you don’t wanna admit it to yourself. If you find yourself suffering from feelings of regret, emptiness and confusion, you’re defintely with the wrong person.
Someone who makes you question your worth or leaves you feeling unwanted and left behind is someone you should avoid at all costs.
If they repeatedly lie, act cold and distant, only talk to you when they need something, or blame you for things that are out of your control, you should stay as far away from them as possible.
If they constantly make you feel like you are the bad guy, when in fact you have not done anything wrong.
Stay away from people who get angry at you over the smallest, most unreasonable things, especially those who get angry with you for being upset over something they've done to you. They usually do that so they won't have to apologise.
Stay away from people who manipulate you into getting their way by bringing up your past. It is so toxic.
Stay away from narcissists, they have a hard time respecting boundaries and accepting different points of view. You will find yourself pressured into talking or doing things that are out of your comfort zone.
When you are the only one taking the initiative, you feel like you are doing all the chasing, making plenty of efforts that rarely seem to be reciprocated. This is when you know that you are with the wrong person.
When you are the only one interested in knowing every single detail about your partner, you find yourself the only one asking questions and wanting to get to know more about them. This is when you know that you are with the wrong person.
These constant mistreatments and negative actions will affect your mental well-being and can often make you question yourself; they will also destroy the foundation of your self-esteem.
You decided to read this for a reason, trust that reason.
Whoever is reading this now, I hope you can find the peace in being alone and protecting your mental health rather than being with the wrong person.
Finding real love and the right person means letting go of those who intend to hurt you.
-Ilhem | https://medium.com/@unicornsxreal/biggest-red-flags-in-a-new-relationship-dffa18add410 | [] | 2020-12-19 21:02:10.008000+00:00 | ['Writers On Writing', 'Goodreads', 'Social', 'Relationships', 'Romance'] |
6 German Songs to Give Your Language Skills a Boost | Learning a new language is a lot like learning to sing, and German is no different. In fact, German songs are a great way to help you get to know the language a bit better.
The more you listen, the more words and phrases will stick in your ear.
Here are six of my favorite German songs from a wide range of styles, along with a neat phrase from each one to show you the language-learning aspect. I’ve ordered them roughly from easiest to hardest, though of course that’s subjective.
The Beatles — Sie Liebt Dich
Did you know the Beatles released two of their own songs in German while they were living in Hamburg? They strongly preferred singing in English, so that’s why we don’t have a full German collection of their songs. Pity!
Oh, ja sie liebt dich, schöner kann es garnicht sein
Oh, yeah, she loves you, it can’t be better than that
Frank Schöbel — Schreib Es Mir In Den Sand
This incredible, overlooked gem was used a couple of years ago in the film This Ain’t California, a fictionalized documentary about skateboarders in East Germany. It’s actually a cover of a wildly popular Hungarian song called Gyöngyhajú lány or The Girl with Pearls in her Hair.
Tage und Träume… mit dir…
Days and dreams… with you…
AnnenMayKantereit — Nicht Nichts
AnnenMayKantereit is a relatively famous group from Cologne that somehow hasn’t achieved much popularity outside of Germany. Once you listen to lead singer Henning May’s voice, you’ll wonder why as well. This is the first song of theirs I heard, and the unique feeling of frustrated helplessness in the lyrics will never get old.
…und ärger mich, weil ich immer liegen bleibe…
… and get mad at myself, because I just keep lying there…
Von Eden — Land In Sicht
This song was written and performed for the film Feuchtgebiete (Wetlands), in which the lead singer of Von Eden also has a key role. The band even made a quick guitar tutorial and put it on YouTube — of course, it’s all in German!
…und meinetwegen!
… and all because of me!
Liederjan — Die Weber
This is an old folk song that a professor of mine introduced me to in a literature course. It’s based on a nineteenth-century poem in honor of some Silesian weavers who rose up in protest of industrialization. Perhaps that sounds dry, but when put to music you’ll probably start feeling the revolutionary spirit yourself.
Der den letzten Groschen von uns erpreßt…
Who squeezed every last penny from us…
Nena — 99 Luftballons
You might already know the English version of this eighties hit, 99 Red Balloons. Personally I think the song is much better in German and you can really connect to the Cold War feeling through the lyrics.
Dabei waren dort am Horizont nur 99 Luftballons…
On the horizon there were just 99 balloons….
Import German Songs into LingQ
Do you want to make your German studies more efficient? Well, instead of passively listening to German music, I’ll teach you how you can read the lyrics and save and review new vocabulary in LingQ.
Just so we’re clear, you can import any German song (and its lyrics) and create interactive lessons that you can view on both your desktop and mobile (thanks to LingQ).
Here’s how:
First, click the import button at the top right of your screen once you’ve logged into LingQ (desktop).
Paste the lyrics into the designated field. Be sure to add the audio too so you can listen to your favorite song and study the lyrics at the same time.
Now, here’s a trick that will help make your translations even better. Most German lyric websites offer English translations. You can add these translations into LingQ so you don’t have to worry if the translations in your lesson are a bit off.
To do this, go to the Clips tab and add your translation under each clip. Make sure the translations drop down menu has English selected (see the red circle).
Now, when you save and open your lesson, each sentence will show a Translate Sentence button underneath. Click it and you will see the translations you’ve entered from the Clips tab. Please note, you must be viewing your lesson in sentence mode to get this feature.
There you have it.
Instead of passively listening to music, you now have a full-blown lesson that helps you read and learn the lyrics faster, without having to bounce back and forth between online dictionaries. LingQ has everything (and more) to help you focus and build momentum as you study.
Oh, it’s also on mobile too. Listen to your favorite German songs on your headphones and study at the same time!
Keep studying hard, and may the music never leave your ears! Give LingQ a free try today.
Learn German Now | https://medium.com/the-linguist-on-language/6-german-songs-to-give-your-language-skills-a-boost-c1a1f1f36453 | [] | 2018-12-07 00:59:29.956000+00:00 | ['German', 'Language Learning'] |
I Mother No One | I Mother No One
For Mothers Lost, Mothers Yet Still Mothering, and Mothers Who Mother Others
It is Mother’s Day and I am outside walking the dog, listening to the sounds of the closest highway, hearing them, there is nothing that I can say that I have not already said about this day. But, I will say what I can. Mothers, you have a gift. You were given the knowledge to raise and keep up with little versions of you. How tiresome that must be on a daily basis. How incredible the strength must be to last for days on end. Knowing that you would be someone that someone else would look up to is a pressure and a weight that I cannot even bear.
Mothers, I appreciate you.
As I walk the hills of my apartment complex, I envision the days that my mother and I had our outs. But, we survived and are surviving. I am grateful for the chance to say that we moved through a tumultuous time and we are rising to the top. It is 2019, and I have entered my 39th year, and I still mother no one in the actual defining terms of a mother — one who gives birth to someone. But, I did mother. I do Mother. I am mothering younger versions of me, my cousins, and others and I get to see what this life could have been, but only part-time. And that is best for me. The older I get, the more I know this to be true.
Part-time mothering of others is significantly different from Full-time mothering of your own.
Fake Balloons|Photo Credit: Tremaine L. Loadholt
On this day, I wish you peace, love, light, a home-cooked meal that does not come from your hands and toil in the kitchen, and the overwhelmingly powerful gift of appreciation. You deserve it. If you are mothering the way you should — you deserve it. If your children can say positively that you are their mother and they say it proudly — you deserve it. If you have given your all, including everything left after it — you deserve it. If you messed up, lost track, received help, and are on your way to the betterment of both you and your children — you deserve it.
I wish I could make each and every one of you smile, offer a hug, a kind word on more than just one day of this year, but here are a few…
For those of you yet still mothering, those who mother others, those who are growing from the pain of not being mothered, and all others who fall in the category of mothering and the mothered…
We are sending you a heartfelt Happy Mother’s Day. | https://medium.com/a-cornered-gurl/i-mother-no-one-9ab6ff45eb6f | ['Tre L. Loadholt'] | 2019-05-12 13:35:05.785000+00:00 | ['Mothers', 'Mothers Day', 'A Cornered Gurl', 'Letters', 'Love'] |
Going Full Greta | As with many others, Greta Thunberg is my current number one hero. She makes me think. And, importantly, she walks her talk. Compared to most Americans I have a teeny tiny carbon footprint but Greta has got me thinking about how I might make it even smaller. I’ve realized there is more I can do but I’ve also realized that I am not sure I can go ‘Full Greta’ and I’ll explain why in a minute.
Greta does not use fossil fuel powered vehicles for transportation. For many Americans that would be unthinkable. At my last job I had a co-worker who lived just two blocks from the office yet she drove to work each day. When her car broke down she was beside herself. She didn’t know what to do. She was calling all her friends trying to get rides to work. IT NEVER OCCURRED TO HER to simply go out her front door and walk two blocks. IT NEVER OCCURRED TO HER! Sadly, this is the mindset of so many Americans. They simply will not go anywhere without driving.
I have not owned a fossil fuel powered car — or any vehicle — in six years. I’ve gone for long stretches of time without a car several times in the past as well. I walk! And my legs do not pollute. And they don’t require license plates, a license, insurance, gas, or constant repair bills. They have not only saved the planet from some pollution but they have saved me countless tens of thousands of dollars. And walking has also greatly improved my health.
Of course I must admit that going car-less has not always been about environmental activism. Usually it was brought about by debilitating poverty. It was not until I was forced to go car-less that I began understanding the environmental impact of doing so. Now I’m quite happy and healthy being car-less. As poverty continues to skyrocket in America many others will be forced to give up their expensive car addictions. This will probably help the environment a little but I don’t think this is the best way to address the climate crisis. The results are not as powerful when we are forced into action as they are when we consciously and purposefully take action through choice.
Greta also does not utilize air travel, which is by far one of the most polluting forms of travel carbon-per-person-wise. The last time I was on a plane was in the early 1990s. Back during the first 30 years of my life I was an ardent lover of air travel. I flew a lot and enjoyed the heck out of it. But then the joy seemed to drain away and I quit flying. So there’s not much I can do to shrink my carbon footprint by cutting back on flying since I don’t fly anyway.
Greta tries to never buy new clothes. She wears hand-me-downs from her older sister and also gets clothes from thrift stores. When I was a kid I absolutely hated being given hand-me-down clothes from my older brother. I simply refused to wear them. I didn’t want his vibes on me. And for most of my life I would not even consider buying clothes at a thrift store. Buying used clothes? Ew, gross! Right?
But over this last decade I have been slowly softening my hard-headed stance about this. It all started about 8 years ago when I needed a blazer for a certain event. I had burned all my ties and business suits way back when I quit the corporate world and I had zero money to buy a new blazer. So I bit the bullet and went to the local thrift store. To my surprise I found a blazer that was in mint condition that fit me perfectly. I bought it for $2 and when I got home I found a $5 bill in one of the pockets. I dare anyone to find a deal like that at any popular clothing retail fashion store.
I have since shopped at the thrift store on a somewhat regular basis, mostly for household items and books and potential birthday presents for people under the age of 10. But I have also bought some shirts and trousers and jackets. I even bought the first umbrella I’ve ever owned at that thrift store. But to be honest I have to say that I vehemently draw the line at socks and underwear. I buy those at the nearby evil Wal-Mart.
There is one part of Greta’s environmentally friendly lifestyle that I personally have trouble with, though, and that is the fact that she is a vegan. I just don’t think I can take that step. Over recent decades I have radically decreased my intake of meat — around 90%. I don’t eat pork and I rarely eat beef or chicken but one of my favorite foods is organic, grass-fed bison meat. It’s expensive, though, so I can rarely afford to buy it. But I still manage to have around 5 to 7 bison burgers a year — usually on special occasions or holidays. I feel good knowing the local rancher and his family who raise the organic, grass-fed bison and I know the spiritual ways in which they handle the entire harvesting process. And their ranch is only about 95 miles away so not much gas is used in transport.
I know that I could take that step and give up meat entirely but there are two animal products that I simply cannot imagine giving up and those are organic, cage-free chicken eggs and organic real butter made from grass-fed cows. I consume those products almost every single day.
Egg yolks are probably my very favorite food in the world. I only eat the yolks, never the whites. 97.41869% (approximately) of all the nutrition found in an egg is in the yolk. And 99.03574% (approximately) of all the flavor of an egg is found in the yolk. The whites are useless and go in the compost bucket.
There is no more orgasmic culinary experience than plopping a hot, yet still liquid, egg yolk into one’s mouth and letting its yumminess explode with flavor throughout the mouth.
And real, natural, organic butter made from grass-fed cows is perhaps the most crucially important food we can eat to maintain cardiovascular health.
I like the fact that we don’t have to kill the chicken to get the egg and we don’t have to kill the cow to make the butter. The problem, of course, is that we think we must have huge factory farms to produce animal products to scale for an exploding population. (‘Scale’ is my least favorite word in the American Business Lexicon.) So I always try to ‘source’ organic, cage-free chicken eggs from local farmers, several of which I know personally who live just outside of town.
I just don’t think I can go 100% ‘full Greta’ but I can get close. I can try. The important thing that I am grateful for about Greta is that she has prompted me to make closer observations of my own actions in order to find new ways to help be a part of mitigating the effects of the climate crisis. Very importantly, she is helping to expand mass awareness of how everyone can help. I give her my own personal Nobel Prize.
But there is something that I’ve never heard Greta talking about…
For years, many woo-woo masters have talked about how our outer environment reflects our inner environment. As the saying goes, ‘As within, so without,’ or something like that. The pollution in our outer environment is a reflection of the pollution within us so we cannot fix the outer pollution until we fix the inner pollution; our psychological pollution, our emotional pollution, our collective attitudes and beliefs and self-loathing, our fear, our guilt, our hate, our prejudices, our greed…
I’ve been working on cleaning out my inner pollution for decades. Every time I clean out some inner pollution I find more inner pollution that needs to be cleared out. I clear it out then find another layer of inner pollution that needs to be worked on. There are so many layers of inner pollution. Sometimes I feel like a big old fat onion. But I keep working on it.
So as we begin to observe our outer behavior in order to help heal the planet’s environment we must also be sure to more closely observe our inner workings to see what can be healed within. I feel that we will achieve much greater and faster success by working both without and within simultaneously. One thing is for certain and that is that we simply cannot continue to go on in an unobserved path of somnambulist denial. | https://whitefeather9.medium.com/going-full-greta-a532991ec2d7 | ['White Feather'] | 2019-10-20 16:30:51.142000+00:00 | ['Vegan', 'Environment', 'Self', 'Climate Change', 'Food'] |
Tips on Passing Your AWS Certification | 1. Actually Build Something Using AWS
Just like any good student of computer science, I decided to purchase a handful of online courses to help me prepare for the exam. While the courses provided a structured learning experience and were effective at outlining the theory at a high level, I found myself struggling to learn effectively by following along with the simple “hello world” examples.
I found the hands-on exercises presented by the instructors to be far too simple and not representative of what you would be doing in the real world. Who runs Nginx/Apache on EC2 without SSL/TLS or a JavaScript function on Lambda without a handful of libraries in node_modules ?
This is not to say that instructors/courses are ineffective at preparing you for the exam — they are perfectly adequate for you to pass — but if you are like me and learn best with a hands-on, in-depth approach, then you might need to do a bit more than just pull httpd from ECR to your Fargate cluster and call it a day.
So right after learning the high-level theory for a particular service, I skipped the instructor-provided labs and decided to get some real-life hands-on experience by either migrating a particular app of mine from Heroku to AWS or simply deploying one already on AWS in a different way. In the interest of having everything under one roof, I also migrated from Godaddy DNS to AWS Route 53.
By the end of my practice/study period, I had migrated all my apps out of Heroku (four to be exact). So in addition to slashing my hosting costs by over 60%, I also gained valuable hands-on learning. A win-win situation in my book. Here is my AWS bill compared to Heroku:
Heroku (left) vs. AWS (right). I went from spending $126 USD/month to $42 USD/month.
For each app migration, I made a conscious effort to use a particular service set from AWS that would cater to the needs of the app. For example, for a large enterprise-grade Rails app, I made sure to use RDS for the PostgreSQL database, ECS on EC2 for application servers, and CodePipeline for CI/CD. For a small agency website, I figured a t2.micro EC2 instance with certbot would suffice.
I embraced serverless when I migrated a React app hosted on Heroku to AWS using nothing more than an S3 bucket, a CloudFront distribution, and an SSL/TLS certificate sponsored free of charge by ACM.
WHAT DID HE SAY? | What did he just say?
It felt like I was suddenly pulled out of a strange state of trance.
“All this is your fault!” He yelled, voice thick and unnaturally piercing.
I stared at him, but he just sent another shot down his throat.
“All this, this goddamn colony, these odd fucking jobs that I keep doing, all my fucking life, all this is your fault! Your idea. A freaking experiment that doesn’t work, over and over again.”
A shiver ran through me, uncontrollable, willful.
“Your life? I thought it was our life. Here at the colony, and before — I thought we were in this together. So now it’s just about you?” I found my voice finally, not recognizing the serenity in it. Was it really me who spoke?
“You are not listening, just as usual,” he growled, eyes bloodshot, bottle clutched firmly in his hand. “It’s not what I said. You never listen. You just hear what you wanna hear.”
He poured another shot, spilling the spirit on the dusty floor.
“You ruin everything. You are impossible to talk to. You said — you said — you’d fix it. That it’d be better here. That you’d take care of everything. It’s better there, you said. And you just keep having these stupid ideas. But none of it fucking works! If you could do anything, but nope, you can’t do shit. It all just gets worse, and worse, and worse.”
“You regret coming here with me?”
“That’s not what I said. See, you never listen!”
“Then what? You want to go back?”
“Are you kidding me? We can’t just go back! We don’t have a freaking home anymore. We’ll have to sell the spacebus if that’s ever gonna be enough. And what about everything else? Not gonna work.”
“We could try if you don’t like it here…”
He pointed to the insides of the room with the bottle, as if it contained the answer to the purpose of life, as if the gray of the walls held all the colors of the universe.
“Here!” — He breathed the spirit’s ugly flavor right into my face, finally managing to look me in the eye, — “You wanted this. Enjoy!”
Ungracefully, he turned on his heels and left, the bottle still with him.
I braced for pain. For the guilt to consume me, like it always did.
It was me who risked it all and brought us here. A new place with new rules, where it was hard to breathe and harder to live.
Culture shock. New biorhythms. Constant self-doubt. I had to think before I said something, and act before I thought about it. I’d made many mistakes before I gained anything, and lost a lot with a single misstep.
There were jobs here, though: in the mines, in the docks, anywhere. I worked a few at a time, just to make the ends meet, yet I couldn’t find what I was looking for. So I kept searching, and he kept bringing up our old life, and it always ended the same way…
What did he just say? It felt like I was pulled out of a strange state of trance.
“I’m sorry, I shouldn’t have yelled.”
I turned to face him just in time to take the empty bottle out of his hands. He was swaying, blood-shot eyes unfocused.
He turned around and went into the depth of the spacebus.
When I entered the sleeping area, I heard him snore. Strange how some things never change.
He said sorry. Isn’t it easy? That’s what we’ve always done. That’s what I’ve always done. Just forgive, right?
And suddenly, I couldn’t.
I went to bed, I shut my eyes, I took a deep breath. I never felt more sure in my whole life.
Sign up for more stories at https://irynaunguryan.substack.com/ | https://medium.com/@iryna-unguryan/what-did-he-say-d452dcabba31 | ['Iryna Unguryan'] | 2020-12-21 15:17:58.834000+00:00 | ['Relationships', 'Self Discovery', 'Fiction Writing', 'Lovestory', 'Spirituality'] |
How to Buy Education Tokens (LEDU) on Mercatox | Education tokens (LEDU) are now trading on Mercatox. This article will explain how to purchase tokens on the Mercatox exchange. See also How to Buy Education Tokens (LEDU) on Exrates, IDEX and Livecoin.
Sign Up
1) First, if you’re not registered on mercatox.com click the “Sign up” button to view the following sign up window.
Deposit
2) The next step is to deposit ETH or BTC into your exchange account.
3) When depositing ETH you will be asked to send ETH to an address provided by the exchange. If depositing BTC you will be asked to send BTC to an address provided by the exchange.
4) Open your ETH wallet and send the amount you would like to deposit to that address. If you are using BTC, open your BTC wallet and send the amount you would like to deposit to the address provided.
5) Once you have ETH on your Mercatox.com account then you are ready to purchase LEDU tokens.
Purchase
6) LEDU tokens on Mercatox.com are available in BTC and ETH pairs, meaning you can use BTC or ETH to purchase. Locate the LEDU/ETH pair and click on trade to purchase Education coins if you want to use ETH. If you want to use BTC locate the LEDU/BTC pair.
7) You will be taken to the trading page where you can buy LEDU tokens. Simply enter the amount you wish to purchase and click the BUY ETH — -> LEDU to purchase LEDU tokens.
8) Once you’ve completed your purchase the token balance will be visible in your Mercatox exchange account.
Withdraw
10) Now that you’ve purchased LEDU tokens you can withdraw them to your ERC20 token wallet. Simply enter the withdrawal address where you wish to send the tokens and amount.
Get LEDU Coin
Get LEDU coins now on Exrates, Livecoin, Mercatox and IDEX or join the LEDU OTC Trading program for large purchases. Read more about LEDU coins on our project page and ask any questions you might have in our Telegram group chat. | https://medium.com/ledu-tokens/how-to-buy-education-tokens-ledu-on-mercatox-88a73d4c9b9a | ['Dr. Michael J. Garbade'] | 2019-01-11 19:33:42.338000+00:00 | ['Ethereum', 'Token', 'Bitcoin', 'Blockchain', 'Cryptocurrency'] |
Interesting views Imad. | Interesting views Imad. For me, vitriol should never be given a platform, if you can't say something positive keep your rancid mouth shut. Opposing views are fine and make the world turn round. If you have to resort to misleading and lies, then you had a weak argument and should never be given a voice. Just like those Aryan twins. Keep up the good fight Imad. Always. J. 🙏☘✨☘🙏 | https://medium.com/@jamesgbrennan/interesting-views-imad-e58d6808bdb4 | ['James G Brennan'] | 2020-11-27 13:56:16.676000+00:00 | ['Free Speech', 'Resistance Poetry', 'Cancel Culture', 'Poetry', 'Social Media'] |
GraphQL: Making Sense of Enterprise Microservices for the UI | GraphQL: Making Sense of Enterprise Microservices for the UI
This blog details how Adobe Experience Platform engineering uses GraphQL with over 40 internal contributors across 40 API endpoints at Adobe to improve their agility and velocity.
GraphQL has become an important tool for enterprises looking for a way to expose services via connected data graphs. These graph-oriented ways of thinking offer new advantages to partners and customers looking to consume data in a standardized way.
Apart from the external consumption benefits, using GraphQL at Adobe has offered our UI engineering teams a way to grapple with the challenges related to the increasingly complicated world of distributed systems. Adobe Experience Platform itself offers dozens of microservices to its customers, and our engineering teams also rely on a fleet of internal microservices for things like secret management, authentication, and authorization.
Breaking services into smaller components in a service-oriented architecture brings a lot of benefits to our teams, but some drawbacks need to be mitigated before those benefits can be fully realized. More layers mean more complexity. More services mean more communication.
GraphQL has been a key component for the Adobe Experience Platform user experience engineering team: one that allows us to embrace the advantages of SOA while helping us navigate the complexities of microservice architecture.
Adjusting to a Microservice-Oriented World
HTTP Overhead
One of the issues facing UI teams is increased HTTP overhead. Consider first an example situation where a UI needs to fetch every “A” object, along with its related “B” objects. Each “B” also optionally has a “C”. Before GraphQL, the flow looked something like this:
Figure 1: UI network calls before GraphQL
There are a number of user-experience challenges with this approach:
The calls are larger in number, compared to a single call to a monolith. While some techniques (like multiple subdomains) can mitigate this effect, modern browsers also limit the number of concurrent calls.
These network segments are long and traverse the public network, causing increased latency and decreased client application performance.
GraphQL installations allow enterprise teams to access multiple object domains with a single call, significantly reducing the resulting HTTP overhead. The call flow instead looks more like this:
Figure 2: UI network calls using GraphQL
In this example, a single, expensive network call is made over the public network. Once inside the data center, GraphQL can orchestrate all the dependent calls that need to be made inside the data center, where network I/O is much less expensive.
Data Stitching and Call Orchestration
This brings us to two related topics:
How do objects from various services get reformatted and joined together for presentation in a UI?
Which parts of an application house (and ideally centralize) this logic?
For example, if I want to show a list of datasets in Adobe Experience Platform, I may also want to show the related XDM schema, along with the first and last name of the user who created it. These objects come from three different systems and need to be formatted and joined for presentation in a table in the UI. Additionally, I want to optimize the call orchestration. Once I have dataset metadata, I can fetch schema and user profile information in parallel.
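A server-side resolver for this flow might look like the following sketch. The three service clients here are hypothetical in-memory stubs standing in for real internal microservices, not actual Adobe APIs; the point is the orchestration shape, not the specific calls:

```javascript
// Sketch of server-side call orchestration for the dataset view.
// The "services" below are hypothetical stubs: in a real deployment each
// would be an HTTP client for an internal microservice inside the data center.
const datasetService = {
  get: async (id) => ({ id, name: "Web events", schemaId: "s1", createdBy: "u1" }),
};
const schemaService = {
  get: async (id) => ({ id, name: "ExperienceEvent" }),
};
const userService = {
  get: async (id) => ({ id, firstName: "Ada", lastName: "Lovelace" }),
};

// One public network call from the UI triggers this function; once the
// dataset metadata (carrying schemaId and creator id) arrives, the two
// dependent lookups run in parallel rather than sequentially.
async function resolveDataset(id) {
  const dataset = await datasetService.get(id);
  const [schema, createdUser] = await Promise.all([
    schemaService.get(dataset.schemaId),
    userService.get(dataset.createdBy),
  ]);
  // Stitch the three responses into the single shape the UI consumes.
  return { ...dataset, schema, createdUser };
}

resolveDataset("d1").then((result) => console.log(result.createdUser.firstName)); // → Ada
```

The UI pays for one round trip over the public network, while the fan-out happens where network I/O is cheap.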
Before GraphQL, this code was often spread throughout various pieces of a large UI application. This is likely due to a number of reasons:
Our UI is maintained by many teams and dozens of engineers. Coordinating and detecting this sort of duplication poses a difficult and manually-enforced collaboration problem.
UIs are often built for a specific flow. Ubiquitous objects (like user profiles, pagination metadata, or a core domain object) are used in various contexts throughout the experience. Anticipating these contexts is difficult.
GraphQL’s graph-oriented consumption pattern and its strong typing solve this problem. Every time our engineering team sees a dataset in an API response, it always looks the same. This makes a huge difference not only in terms of code duplication and application maintenance, but it offers a level of standardization we haven’t been able to embrace in the past. Here is a GraphQL query example fetching related data from three services:
query dataset($id: String!) {
dataset (id: $id) {
name
state
createdUser {
firstName
lastName
}
schema {
id
name
createdDate
modifiedDate
}
}
}
REST offers us a standard way to interact with domain objects in terms of URL familiarity, statelessness, and HTTP verbs. However, GraphQL gives engineering teams a level of semantic standardization (and validation!) that is hugely beneficial. Knowing how to build components against common data shapes accelerates UI development and makes fixing bugs and extending our applications quicker and easier.
Here’s a snapshot of the graph we interact with as UI engineers working on the AEP UI. Traversing and documenting this complexity manually just isn’t scalable. By integrating our upstream APIs with GraphQL, we get this sort of discoverability out of the box.
Figure 3: Example of Adobe Experience Platform data graph (partial)
API Heterogeneity
APIs created by different teams tend to accrue slight differences despite our efforts to discourage it. UI engineering teams looking to use multisource data often encounter these differences:
Pagination schemes
Authentication
Authorization
Varied representations of common data objects
REST API standardization is an important effort, and GraphQL helps us identify inconsistencies that teams can fix. It also gives us the agility to abstract away these differences for UI engineers consuming the graph.
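As a sketch of what abstracting those differences can look like (the type and field names below are hypothetical, not from our production schema), a graph layer can expose a single pagination shape no matter which scheme, offset/limit or opaque cursor tokens, each upstream REST API happens to use:

```graphql
# Hypothetical unified pagination shape. The resolvers behind it translate
# `first`/`after` into whatever each upstream API expects (offset/limit,
# cursor tokens, page numbers), so consumers only ever see this one form.
type DatasetConnection {
  items: [Dataset!]!
  totalCount: Int!
  nextCursor: String # null when there are no more pages
}

type Query {
  datasets(first: Int = 20, after: String): DatasetConnection!
}
```

With every collection exposed this way, UI teams can share one pagination component instead of special-casing each service.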
Even in the sunny case that an enterprise organization’s standardization efforts are perfect, there will still be events out of our control that cause latency between the practical and the ideal. We’ve added some top-notch folks to Adobe in the past few years, and these new teams and their APIs will take some time adjusting to our technology and standards. In the meantime, the UI teams can bridge those gaps with GraphQL integrations.
It’s worth noting that UI engineering teams often build views for API data that doesn’t yet exist. We’ve had great success at Adobe in defining our own GraphQL schemas before the upstream data services are live. This gives us the flexibility to move forward with mock data today, allowing us to connect the final plumbing later.
Summary of GraphQL Advantages
Easy to learn and use
Amazing toolset and IDE integration
Vastly improves UI engineering team agility and velocity
Faster client application performance
Cross service discoverability, standards, semantics, and validation
Operations
0 downtime, 0 customer outages
CPU usage low (5–10%) on K8s cluster, using 0.5 cores across instances
Low memory usage (<200MB)
Engineering Collaboration
Over 40 internal contributors, across UI engineering, data services engineering, and operations
10–30 commits per week
3000–5000 additions per week
40+ API endpoints integrated and in use
“GraphQL for Adobe Experience Platform has been running in some form since the spring of 2019. We recommend using GraphQL for enterprises and their UI engineering teams, based on our experience.”
GraphQL Challenges
As various UI teams started picking up GraphQL throughout Adobe, we ran into a few challenges along the way. None of them were insurmountable, and we learned a few things in the process.
One Graph Principle
As our team in Adobe Experience Platform started investigating GraphQL, we were surprised to find out that several other groups throughout the company were at various stages of adoption as well. Several of these implementations also contained objects we wanted to include within our graph. Our desire for a single unified graph dictated that we needed to converge. However, within a large organization like Adobe, it would be very difficult to bring two separate GraphQL APIs together into one.
A few options emerged to solve this issue. We explored publishing reusable GraphQL modules via npm; we looked at a fork model where teams could push and pull changes from a single upstream repository; and we considered a few other approaches. Ultimately, we decided to use Apollo Federation, which allowed us to have a single gateway with federated schemas. Each team could continue developing its own GraphQL API independently, but house it behind a single gateway that understood how to federate queries between these instances. This topic deserves its own separate post, so keep an eye out for a follow-up article.
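To illustrate the idea (the type and field names here are illustrative, not our actual schema), Apollo Federation lets independently deployed subgraphs contribute fields to the same entity, which the gateway joins at query time:

```graphql
# Subgraph owned by the dataset team.
type Dataset @key(fields: "id") {
  id: ID!
  name: String!
}

# Subgraph owned by the identity team (a separate deployment in practice).
# It extends the same entity, so the gateway can resolve `createdUser`
# by joining on `id` across the two services.
extend type Dataset @key(fields: "id") {
  id: ID! @external
  createdUser: UserProfile
}

type UserProfile {
  firstName: String
  lastName: String
}
```

A client querying the gateway for a dataset's name and creator never sees the service boundary; the gateway plans the cross-subgraph calls itself.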
Thinking in Graphs
Another challenge for us was helping our teams rethink their objects as graphs. While many of our internal UI models were already structured in this manner, we were also very used to the data shapes provided by our API service teams.
Typically with an API response, you get a foreign key identifier. It's up to the developer to make another API call, get the response, and stitch it to the previous response to create the model for the view. We had scenarios where a developer stitched six responses at once into complex models, which made the stitched responses complex to maintain.
Using GraphQL, the UI developer asks for the data in the shape they need. This eliminates the complexity and maintenance required for multiple API responses across UIs. The tendency for developers to include foreign keys in their GraphQL models was a challenge. We had to discourage this practice in favor of graph composition. As a group, we identified these common pitfalls, and through presentations and meticulous code review we were able to establish patterns and learn as a group.
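As a sketch of the pattern we settled on (field names are hypothetical, and the two definitions below are shown side by side as alternatives, not one valid schema), compare a foreign-key-style type with its graph-composed equivalent:

```graphql
# Discouraged: surfacing the raw foreign key pushes the join work
# (a second call plus stitching) onto every UI consumer.
type Dataset {
  id: ID!
  createdByUserId: String!
}

# Preferred: compose the related entity into the graph. The resolver
# performs the join once, and the UI simply selects the fields it needs.
type Dataset {
  id: ID!
  createdUser: UserProfile!
}
```

The second shape is what lets a single query traverse from dataset to user profile without the client ever handling an identifier.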
In another case, an upstream API had a recursive object structure. It took us a considerable amount of time to figure out how we would represent the result because it was not apparent to the UI how many levels of depth would be required. We ended up completely reshaping the response to an array type to avoid the temptation to make everything a generic JSON type.
Versioning
GraphQL best-practices are clear when it comes to versioning. Don’t do it. As a group, we were tempted on several occasions to take the path of versioning things like a typical REST API. We opted for a process that allows for schema evolution through a combination of schema deprecation markers and robust logging. This helped us know when to drop an old object or field. Finally, this gave us the data on which teams to talk to in order to accelerate this process.
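In practice, that evolution leans on GraphQL's built-in `@deprecated` directive (the field names here are illustrative); combined with logging of which clients still select the old field, it tells us when a field is safe to drop:

```graphql
enum DatasetStatus {
  DRAFT
  ACTIVE
  ARCHIVED
}

type Dataset {
  id: ID!
  # Old field kept during migration; request logging tells us when its
  # usage drops to zero and it can be removed from the schema.
  state: String @deprecated(reason: "Use `status` instead.")
  status: DatasetStatus!
}
```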
What’s Next
Expect a follow-up blog from us on our GraphQL innovations.
Follow the Adobe Tech Blog for more developer stories and resources, and check out Adobe Developers on Twitter for the latest news and developer products. Sign up here for future Adobe Experience Platform Meetup.
References | https://medium.com/adobetech/graphql-making-sense-of-enterprise-microservices-for-the-ui-46fc8f5a5301 | ['Jaemi Bremner'] | 2021-03-05 16:39:09.445000+00:00 | ['Open Source', 'Platform', 'GraphQL', 'Microservices', 'Adobe Experience Platform'] |
Music Review: Lou Baron’s “Don’t Promise Forever”
Song: “Don’t Promise Forever”, carrier song from the album “Back To Romance”
Artist: Lourdes Duque Baron
Producer: Timeless Entertainment Inc.
Music Producer/Arranger Andrew Lane.
Review:
This experimental project expands its already broad lyric and instrumental palette for a jazz album, fusing Lou’s voice with jazzy instruments without succumbing to the tropes of either Lane or Baron. Jazz is a genre that is not easy to pull off; the true jazz experience happens onstage in front of an audience. But in this case, “Don’t Promise Forever” broke the rule. The song kicks off with Lou’s unique voice and Lane’s penned piece, which bring a certain spell and dramatic quality. This is followed by piano, solo bass, and guitars, which bridge the path from the start, and the whole piece cools off with unspeakable comfort and endearment.
THE SINK HOLE HYPOTHESIS
Space is that region where matter and energy can exist (i.e., the region where bosons and fermions are alive!). For the reason that parallel beams of light were also found to intersect each other, space has a non-Euclidean geometry.
From the next line onwards, everything I write is only my thoughts. If the known theories fail to reason with my words, please excuse this as pure sci-fi.
As denoted in my previous and very first post , I prefer a double bounded geometry to be the universe’s.
The inner bound is where matter and energy coexist; the outer bound, I don’t really know. Let it behold a secret trade between the dark matter/energy and the ‘pure’ matter/energy.
ATT&CK 101
This post was originally published May 3, 2018 on mitre.org.
Why ATT&CK was Created
MITRE started ATT&CK in 2013 to document common tactics, techniques, and procedures (TTPs) that advanced persistent threats use against Windows enterprise networks. ATT&CK was created out of a need to document adversary behaviors for use within a MITRE research project called FMX. FMX’s objective was to investigate use of endpoint telemetry data and analytics to improve post-compromise detection of adversaries operating within enterprise networks. Much of that work is documented here: Finding Threats with ATT&CK-based Analytics and the Cyber Analytics Repository.
Based on our research, we decided we needed a framework to address four main issues:
1. Adversary behaviors. Focusing on adversary tactics and techniques allowed us to develop analytics to detect possible adversary behaviors. Typical indicators such as domains, IP addresses, file hashes, and registry keys were easily changed by adversaries and were only useful for point-in-time detection; they didn't represent how adversaries interact with systems, only that they likely interacted at some time.
2. Lifecycle models that didn't fit. Existing adversary lifecycle and Cyber Kill Chain concepts were too high-level to relate behaviors to defenses; the level of abstraction wasn't useful to map TTPs to new types of sensors.
3. Applicability to real environments. TTPs need to be based on observed incidents to show the work is applicable to real environments.
4. Common taxonomy. TTPs need to be comparable across different types of adversary groups using the same terminology.
We strongly believe that offense is the best driver for defense. An organization’s ability to detect and stop an intrusion improves greatly by maintaining strong offense and defense teams that work together. Within FMX, ATT&CK was the framework used to build adversary emulation scenarios. The emulation team used these scenarios to inject real-world inspired activity into the network. Then the team used the tests to verify that the sensors and analytics were working to detect adversarial behavior within a production network. The approach resulted in a rapid improvement in detection capability, and, most importantly, in a measured and repeatable way.
ATT&CK became the go-to tool both for the adversary emulation team to plan events and for the detection team to verify their progress. This was such a useful process for MITRE’s research program that we felt it should be released to benefit the entire community, so MITRE released ATT&CK to the public in May 2015. ATT&CK has since expanded significantly to incorporate techniques used against macOS and Linux, behaviors used by adversaries against mobile devices, and adversary strategies for planning and conducting operations pre-exploit.
What is ATT&CK?
ATT&CK is largely a knowledge base of adversarial techniques — a breakdown and classification of offensively oriented actions that can be used against particular platforms, such as Windows. Unlike prior work in this area, the focus isn’t on the tools and malware that adversaries use but on how they interact with systems during an operation.
ATT&CK organizes these techniques into a set of tactics to help explain and provide context for each technique. Each technique includes information that's relevant both to a red team or penetration tester for understanding how a technique works, and to a defender for understanding the context surrounding events or artifacts generated by a technique in use.
Tactics represent the “why” of an ATT&CK technique. The tactic is the adversary’s tactical objective for performing an action. Tactics serve as useful contextual categories for individual techniques and cover standard, higher-level notations for things adversaries do during an operation, such as persist, discover information, move laterally, execute files, and exfiltrate data.
Techniques represent “how” an adversary achieves a tactical objective by performing an action. For example, an adversary may dump credentials to gain access to useful credentials within a network that can be used later for lateral movement. Techniques may also represent “what” an adversary gains by performing an action. This is a useful distinction for the Discovery tactic as the techniques highlight what type of information an adversary is after with a particular action. There may be many ways, or techniques, to achieve tactical objectives, so there are multiple techniques in each tactic category.
The ATT&CK™ Matrix
The relationship between tactics and techniques can be visualized in the ATT&CK Matrix. For example, under the tactic Persistence (this is the adversary's goal — to persist in the target environment), there are a series of techniques including AppInit DLLs, New Service, and Scheduled Task. Each of these is a single technique that adversaries may use to achieve the goal of persistence.
The ATT&CK Matrix is probably the most widely recognizable aspect of ATT&CK because it’s commonly used to show things like defensive coverage of an environment, detection capabilities in security products, and results of an incident or red team engagement.
Cyber Threat Intelligence
Another important aspect of ATT&CK is how it integrates cyber threat intelligence (CTI). Unlike previous ways of digesting CTI that were used primarily for indicators, ATT&CK documents adversary group behavior profiles, such as APT29, based on publicly available reporting to show which groups use what techniques.
Usually, individual reports are used to document one particular incident or group, but this makes it difficult to compare what happened across incidents or groups and come to a conclusion on what types of defenses were most effective. With ATT&CK, analysts can look across groups of activity by focusing on the technique itself. When deciding how to focus defensive resources, analysts might want to start with techniques that have the highest group usage.
Examples of how particular adversaries use a technique are documented on that technique's ATT&CK page, representing each group's procedure for using it. The procedure is a particular instance of use and can be very useful for understanding exactly how the technique is used, for replicating an incident with adversary emulation, and for specifics on how to detect that instance in use.
Where ATT&CK is Today
ATT&CK has expanded quite significantly over the past five years, from Windows to other platforms and technologies. It’s in use by many different government organizations and industry sectors, including financial, healthcare, retail, and technology. The public adoption and use has led to significant contributions back to ATT&CK to keep it up-to-date and useful for the community. We want to continue this trend, so MITRE has big plans to keep growing ATT&CK to ensure its future as a valuable public resource.
Continuing This Series
Now that we’ve covered some of the basics, you can look forward to future blog posts that go into more detail on topics covered within this post. We’ll discuss the use of ATT&CK with cyber threat intelligence, behavior-based detection analytics, and adversary emulation, as well as additional areas. | https://medium.com/mitre-attack/att-ck-101-17074d3bc62 | ['Blake Strom'] | 2020-06-24 22:19:52.077000+00:00 | ['Cybersecurity', 'Mitre Attack', 'Information Security', 'Threat Intelligence', 'Threat Hunting'] |
Coding at Night
Coding at night has been very effective for me
Image source: chenspec on Pixabay
As a programmer, I generally prefer to start coding after dinner and continue into the wee hours of the morning before heading to sleep, on almost all weekends. While working in the salon, which left me so tired during the day, I would spend the late nights learning to program. If you ask a random programmer when they do their best work, there's a high chance they will admit a lot of it is done late at night. Some earlier, some later. Some people are naturally not morning people, so they sleep till noon and work afternoons and late nights, still maintaining a healthy amount of sleep.
I love coding at night because there is no constant interference, nothing to disturb your aloneness. Coding during the day means having to deal with interruptions in the form of people, calls, texts, and life in general. But in the wee hours of the night, there is no one to disturb me, no social notifications to bug me, and I can code just the projects I want to.
In the tranquility of the night, the background noise of endless activity around you, like cars passing by and people talking, becomes completely muted, so much so that there is pin-drop silence. If that is not the perfect atmosphere to work or chill, I don't know what is. You might say that we can have a similar atmosphere in the daytime by using noise-cancelling headphones and getting your groove on listening to your favorite music.
But constantly listening to music on headphones should be avoided to keep your hearing sense in good health. It is actually recommended not to use headphones continuously for more than an hour and to take breaks in between. The quiet atmosphere at night actually feels a lot better for mentally stimulating tasks like coding.
The brain works best late at night because, it turns out, late at night or really early in the morning the brain gets tired enough that it can only focus on one task rather than many. That leaves room for flexible and creative thinking.
Studies have proven beyond doubt that night owls/early birds tend to be more intelligent and creative than others.
When you code at night, interacting with humans is at a minimum. There’s nothing better you can do than become a programmer. Not only will you not have to see people during the night because everyone’s asleep, but you can also avoid them during the day because you are asleep!
Again, coding at night always puts me in the flow: I start working on the problem with full focus, leaving behind the world around me. At such wee hours, I am much more likely to get in the flow of things, developing the project without thinking about what is happening around me.
No matter the time you prefer to work, always keep in mind that developers need an adequate amount of sleep just like everyone else. If I don’t sleep, I tend to screw up more, so I always make sure that I have sufficient hours of sleep and a proper sleep schedule to prevent feeling burnt out and weakened during the day.
The core reasons I work at night or very early in the morning come down to deep thought, flow, and focus on one's work.
The main lifestyle factors that affect coding at night are:
Freelancer or employee
Scholar of some sort
Have projects
Spouse and/or kids
A popular trend is to get up at 4:00 a.m. and get some work done before the day’s craziness begins, just to avoid distractions. You might ask, “what’s so special about the night?”
I think it runs down to the maker's timetable, the tired brain, and the luminance of computer screens.
You might ask, “why do we perform our most mental work when the brain wants to sleep, and do simpler tasks when our brain is at its sharpest and brightest?” Because being tired makes us better coders: when your brain is tired, it has to focus. There isn't enough leftover brainpower to afford losing concentration. Being tired dulls you just enough that the task at hand is all there is.
You might ask, “if I keep staring at a bright source of light in the evening, what happens?” Your sleep cycle gets delayed.
Programmers work at night because it doesn’t impose a time limit on when you have to stop working, which gives you a more relaxed approach. Your brain doesn’t keep looking for distractions, and a bright screen keeps you awake.
Plan. Break down your tasks. Get a timetable of what to do each day and keep doing it.
There’s magic in the nighttime. The peace and quietness, the internal serenity. There’s just you, your work, and an infinite abundance of time. You are alone.
As a society, we know that smart, talented people work at night. Often in a lonely place, they solve problems mere mortals could only dream of.
I hope this article will disclose to many people that a late-night work schedule is the key to creativity and productivity for many open source programmers. | https://medium.com/better-programming/coding-at-night-276875b562d2 | ['Ohagorom Onyinyechukwu J'] | 2020-10-27 14:36:26.876000+00:00 | ['Programming', 'Women In Tech', 'Software Development', 'Productivity', 'Learning To Code'] |
Survivor’s Diaries: Chapter Two
I tried not to gawk because the restaurant was just so much more glamorous than anything I was used to. The chandelier hanging from the ceiling looked like a spider web sheathed in light, illuminating the room with a soft incandescent glow. Everything else was muted; taupe, hazel, and beige palettes colored the walls, floors, and ornamental décor. Each table was adorned with a tiny candlelit flame.
Perhaps I should have sported a more formal hairstyle; a sleek chignon or French twist perhaps.
I sighed. Too late now.
“Doesn’t it take forever to get a reservation at this place?” I whispered as he tucked in my chair. I was also pleased with myself that I had made it to the table in my five-inch platform sandals without tripping once.
“Yeah, usually, but I take clients here all the time and I know the owners,” he shrugged.
I nodded, acting like that was normal.
He smiled at me as he sat down. I looked away instinctively.
“How was your week?” he asked.
“Good, actually,” I perked up. “I’m settling into my new place nicely. Finally got everything unpacked. I’m so happy to be on my own. Don’t have to follow anyone else’s rules but my own. . .” I trailed.
He nodded attentively. “And how are you liking San Francisco? Is it different than what you expected, the same. . .?”
“San Francisco is cold,” I whined. “It’s beautiful, don’t get me wrong. I just got back from Europe and still think it’s the most beautiful city in the world; the vistas, the cable cars, the ocean views. . .but I just need to order an electric blanket or something because I’m freezing at night, even with the heat on. Maybe it’s just me.”
He laughed. At this point I began to redirect the conversation towards him. I got him talking about his job, which he loved, his family, which he also loved. But soon after he tried to steer the dialogue back in the other direction.
“I don’t want to talk about me,” he frowned, finishing his last thought. “I want to know more about you.”
“There’s really not much — ”
“not much to know about you,” he laughed, finishing my sentence. “Why do you keep saying that?”
“Because it’s true.”
There was a long pause.
Jesse shook his head and squinted. “But that can’t possibly be, I can tell there is so much more to you than meets the eye.”
I just smiled. He tilted his head and glared. By the looks of it he was determined to break down the steel fortress I had erected between us. But instead of wondering what was going through his head, I took the time to admire the size of his Adam’s apple and the tousled inky locks of his hair.
“Where did you grow up?” he asked.
Okay, that was an easy one. “Los Angeles.”
“Would you ever go back to LA?”
“No,” I answered a little too quickly.
“Why not?” He probed.
“LA is too hot.”
Well, this was partially true. LA was too hot. The honest answer to this question wasn’t bad or anything; the truth was, I just didn’t have the happiest memories growing up there.
For one, my home life was consumed with my parent’s rancorous marriage and eventual divorce. Most nights I fell asleep to the sound of them throwing the kitchen sink at each other.
Second (and more importantly) I was bullied a lot in school. We moved around frequently when I was younger, so I was always the new kid with no friends.
The worst was when I entered high school in East LA. I already didn’t fit in, being one of the few white girls and probably the only white girl with bleach-blonde hair. I was a naive freshman and didn’t understand the unwritten rules of behavior and social mores for someone like myself: a young, extremely slight female with no friends, protectors, or established prior reputation that might otherwise keep threatening actors at bay.
One day, a guy on the lacrosse team offered to carry my bags and walk me home. I obliged, more out of politeness than anything else. He was pretty, but too old for me and I wasn’t ready to start dating yet. But somehow, his girlfriend found out and days later cornered me about it with a group of her friends after school as I was leaving class. They pulled me behind the auditorium that day and roughed me up pretty good.
I wasn’t permanently maimed or anything. They mainly just pushed me down, kicked me, pulled my hair, and called me names. Most of the damage came from me falling. I’m so small that the slightest push sends me tumbling. When I got home I told my mom I fell down the stairs at school. She had no trouble believing that.
But even after that day those girls never left me alone.
None of these incidents has anything to do with the city in which they transpired of course; acrimonious marriages and toxic bullying exist everywhere. But LA, with its palm trees, warm weather, and beach boardwalks formed the potent backdrop for these events and consequently I simply don’t possess any desire to return if I can avoid it.
So, yeah.
LA is too hot.
Jesse continued to question me but this time with laser-like precision. He seemed to be entering full-on lawyer mode because what proceeded felt more like a cross-examination than anything else. He began with very open-ended questions at first, then narrowed his focus based on my answers, redirecting when he saw new ground to explore and latching on when he sensed withholding, points which I navigated acrobatically and delicately.
I didn’t know why men were always like this, at least to me. For some reason they always want to know everything about you, so they can place you in one of the neat little boxes in their head. He probably already had me pegged. He was a lawyer, he was smart. . .
Damaged girl, troubled girl, sad girl. . .
I worried a lot that guys noticed this right when they saw me and pounced, thinking they could take advantage of it. But I didn’t get this vibe from Jesse. He seemed genuinely curious — for whatever reason — in every dimension of my life.
By the time he was done we had pretty much covered everything: my childhood, adolescence, college years, and life up until now, I suppose. We talked about my plans for graduate school and so on.
I thought I was doing well, answering every question honestly and accurately if not completely and fully.
He observed me very attentively the entire time, nodding every so often, squinting here and there, and noticing when I looked away, sighed, broke eye contact, or ran a hand through my hair.
The rest of the night continued like this, with me talking and him listening. To my surprise, I didn’t mind. After a while I started to relax my guard. I unclasped the bottom of the chair with my hands and uncoiled my ramrod straight, Jackie Kennedy-esque posture. I leaned forward into the table so much so that I could feel the warmth from the flame in between us.
I barely touched my wine the entire night. I was acutely aware of how low my weight was and knew half a glass would have me feeling sick or falling asleep.
A few weeks earlier at a university party, a friend of a friend pressured me into having a shot of tequila, which I obliged after getting tired of declining, but shortly after became very, very ill. The poor guy instantly regretted it and drove me to the hospital because he thought I was dying.
I was glad he did though, in the end. I felt much better after I received the required fluids and glucose. Once I was lucid a few hours later, the doctor came back and lectured me about the perils of alcohol poisoning, instructing me very bluntly that I was not to drink any more alcohol of any kind until I had gained at least fifteen pounds.
Hence, I brought the wine glass to my lips many times throughout the night without allowing any liquid to enter my mouth.
As the evening progressed, I began to admire Jesse’s personality more and more. He was attentive, in an authentic way, funny without trying, and very smart. He wasn’t the least bit sarcastic (I hated sarcasm), was self-assured and confident without being arrogant.
All guys are like this though in the beginning, I reminded myself. They reveal their true colors later once they have discarded you.
For now though, I was enjoying this. I was even smiling and laughing without having to fake it.
But the cross wasn’t over yet.
I was in the middle of talking about my postsecondary academic career — a subject which I was more than happy to discuss at length — when he interrupted mid-sentence.
“Wait but there’s a gap so you left LA when you were 19 but you didn’t move here until you were 21 so what happened in between then?”
My lips parted and my breath came out with a whooshing sound.
He had found it. I had never had someone so accurately pin down my own personal dark ages.
The hole pulsed in my chest, stinging around the edges, like a warning.
His eyes flashed. He knew he had found a weak spot.
I could feel the blue flames licking the outer edges with more force now.
I realized I needed to breathe. I exhaled slowly and recovered, resisting the temptation to wrap my arms around my torso, like a basket case.
Looking down I answered, “When I was 19 I left LA to go stay with my extended family in Kansas.”
All true.
He nodded slowly.
My face must have looked awful because he didn’t press any further.
How pathetic.
“So,” he concluded, leaning back, “a beautiful, free-thinking, intelligent, and headstrong young woman with a bright future ahead of her.”
I smiled weakly and relaxed as the burning blaze subsided, the fingers of the flames retreating from the circumference of the hole and back into embers, then ashes at the change of subject.
The scalding feeling had dissipated, and was replaced by a hollow emptiness I was used to. It was painful in its own way, but I would take it over the inferno any day.
“I’m really glad you came with me tonight,” he said, reaching for my hand.
I reciprocated. We stayed like this for a while, with our fingers intertwined.
He then outstretched his hand completely and placed it up against mine. It reminded me of the Disney scene where Tarzan places his hand against Jane’s in the rain, the day they met.
“Your hands are so small,” he mumbled. “And your wrists too.” He wrapped his hand around my wrist and then my forearm.
I measured my wrist once, out of curiosity. The circumference was less than four inches.
I was used to men remarking on my smallness. I’m average height, for the record. I just have a really small frame. It’s almost childlike.
For some reason men always feel the need to tell me this. It was annoying. Every man who has ever seen me naked without fail has said something along the lines of: “wow, you are like really small.”
To which I usually squinted, glowered, or rolled my eyes. Yes, I know I’m small, I have had to inhabit this body for my whole life thank you very much.
Tonight though, my forearms looked more skeletal than usual, especially next to his. It looked neither healthy nor attractive.
His touch felt nice, though. I was usually very apprehensive when being touched by strange men, but not here.
He got the check (I don’t even want to know how much that dinner cost), and drove me home, holding my hand the entire way. Neither of us said anything, which gave me much needed time to process my emotions.
I felt oddly light and buoyant.
Happy? No, of course not, don’t be ridiculous. Stupid girl. But I felt okay.
Distracted.
Hollow, but not burning.
Definitely not on fire.
Then I sighed as reality set in.
I didn’t know what this meant. And I didn’t want to get hurt even more than I already was. I couldn’t take it.
He walked me to my door and I was relieved he didn’t ask to come in.
Then he stepped closer and grabbed both of my hands this time. “Thank you, for coming with me tonight.”
I nodded, leaning in, until our foreheads were almost touching.
I couldn’t believe I did this. I was so shy. “Too shy to function” is what my friends called me. But I felt strangely confident here.
“I love your eyes,” he whispered, tucking a lock of hair behind my ear.
I smiled and began to look down as I always did when a man complimented me physically.
But I couldn’t because he grabbed my jaw with both hands and tilted mine back upward, pulling me into a kiss. It was a perfect kiss — sweet, soft and gentle. He ran his hands through my hair as he did, then kissed my cheek and my neck several times before pulling away.
“Good night, Fiona.”
He embraced me, and kissed my hair one last time. | https://medium.com/@fionathenymph/survivors-diaries-chapter-two-d0fa0cb06717 | ['Little One'] | 2021-03-21 00:36:42.839000+00:00 | ['Pain', 'Trauma', 'Violence Against Women', 'Fear', 'Sadness'] |
Not Distributed in Topics

You are not going to find love
You might get a hint of it
Out there
In the simpering folds of panic’s desperate worry
An inveigling taste perhaps
While the heart waits and waits for it’ll never properly
Want
And the heel’s played again without any
Arclight
Or even an abiding angel to its name
Pass the peas and the skeletons
Get swept up and then over
But you
You are not going to find love
Not out there
With your hurried mind and your misshapen soul
Your perpetually packed bags and your lost hindsight
Not in the top shelf of a well-lit bar
Or the warped plaster and mold of a dingy dive
You
You are not going to find love
Not in delirious conversations about Barthelme in the back of two-in-the-morning cabs
Not in all the happy-hours or parties you’ve ever been bored at
Not in the flick of methamphetamine’s furious fire
Not in a poolroom’s popcorn machine or an alley’s noir-like glint or an airport security checkpoint or a crowded elevator ride
Not even inside what you consider to be your deepest and most romantic thoughts
Love
Love is for the birds and the newscasters and the dogs and the pamphleteers
Love is for Randy Newman and Elvis and The Munchkins and the underpaid groundskeepers and those who sneak on the bus for free
Love is not for you
To find
It is just something that happens
Like a spot of vomit on the sleeve
Or a trapeze artist sipping Beefeater on a trampoline
Before wishing goodbye to all the black dresses of her past
Silent and barefoot and brave
Some squashed eyeball of a thing no longer wandering
Love
Out of it for good
Love
Missed beneath the seashells or a dumpster dripping rain
Love
You’ll never have enough and always too much
Love
With no cash value or interest earned at all
Love
Love
I’m quite sure
It must happen
All of the goddamn time | https://davycee.medium.com/not-distributed-in-topics-7ef45d4cb6bf | ['Davy Carren'] | 2019-03-10 00:00:17.453000+00:00 | ['Poetry', 'Love'] |
Lessons on Building a Backend API in Phoenix — Plugs

…I said PLUGS
This post is a loose continuation of my previous post on implementing OAuth in a Phoenix backend API. Read it if you want to. I’m not your boss. Actually it’s more likely that you’re my boss, but I digress. Authentication is kind of pointless without leveraging session data to authorize users for given actions, so the logical next step upon implementing it is to create an authorization layer.
…but
There’s kind of an in-between step that I didn’t cover in the previous post — setting the current user in the session. This should happen in order to make data regarding the current session’s user available to controller actions that need it, and it’s worth going over. I’ll get to authorizations in the next post.
Let’s say we’re making a backend API for a basic blog, with a PostController that’s responsible for the CRUD actions for a Post resource. A Post belongs to a User , such that a signed-in user can create, edit, and delete their own posts. These actions are going to need information about the current session’s user in order to make the right associations and apply the right authorizations, so let’s add a test for PostController#create/2 to check that the current user was added to the session:
Normally, a controller’s tests probably wouldn’t be testing this functionality directly, but I think writing this test will help us better understand some of the concepts we’re covering. I’m using Plug.Test because I need it for init_test_session/2 , which I need in order to use functions like put_session/3 to manipulate the session for the test without getting errors barked at me about fetch_session/2 never having been called. Just try running this test without init_test_session/2 and you’ll see what I mean. Took me forever to figure out. Maybe it shouldn’t have, but it did. I create a User , initialize a session with an id token stub on line 19, and add their UUID to the session on line 20. Then I make a POST request with some dummy data to the route associated with PostController#create/2 , and assert on line 23 that the user stored in the session ( conn.assigns stores the session’s shared user data) matches the stubbed user.
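The embedded test snippet doesn’t survive in this version of the post, but based on the description above, a minimal sketch of it might look something like this (the `create_user` helper, route helper, and `:user_id` session key are assumptions, not the post’s actual code):

```elixir
defmodule BlogApiWeb.PostControllerTest do
  use BlogApiWeb.ConnCase
  use Plug.Test

  alias BlogApi.Accounts

  test "create/2 sets the current user in the session", %{conn: conn} do
    # Assumed helper for creating a persisted user
    {:ok, user} = Accounts.create_user(%{email: "test@example.com"})

    conn =
      conn
      # Stub out a session so put_session/3 works without errors about fetch_session/2
      |> init_test_session(id_token: "stub-id-token")
      |> put_session(:user_id, user.id)
      |> post(Routes.post_path(conn, :create), post: %{title: "Hi", body: "Hello"})

    # The plug we're about to write should have placed the user in conn.assigns
    assert conn.assigns[:user] == user
  end
end
```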
Make sure to add use Plug.Test to the top of your test file and call init_test_session/2 if you want to stub out a session in your tests!
Currently, running this test will produce a failure something along the lines of
Assertion with == failed
code: assert conn.assigns()[:user] == user
left: nil
right: %BlogApi.Accounts.User{
__meta__: #Ecto.Schema.Metadata<:loaded, "users">,
...
because we’re not yet setting the user in the session. In order to get this test passing, we’re going to have to create a Plug.
Plugs and Pipelines
Phoenix Plugs can be used to operate on the conn struct that gets passed as the first argument to every controller action. Custom Plugs can be created to operate on conn before it is passed to a controller action. Phoenix automatically implements a number of Plugs for you when you create a new project. If you look at your router.ex file, you can see the Plugs that requests are being piped through. For instance, I have an :auth pipeline that implements a lot of the same plugs as the boilerplate :browser pipeline that gets generated when you create a new Phoenix project:
pipeline :auth do
plug :accepts, ["json"]
plug :fetch_session
plug :fetch_flash
plug :protect_from_forgery
plug :put_secure_browser_headers
end
This pipeline functions such that any scope that calls pipe_through :auth will pass conn through every one of these Plugs before it reaches the controller action. For instance :fetch_session will ensure that the current session data is available via conn to the controller actions that are associated with routes that pipe_through :auth :
scope "/auth", BlogApiWeb do
pipe_through :auth

get "/signout", SessionController, :delete
get "/:provider", SessionController, :request
get "/:provider/callback", SessionController, :create
end
This way, my SessionController#create/2 action can utilize functions like put_session/3 without having to explicitly fetch_session in the controller. Which, quick aside: if you ever see the following error, it’s because you are trying to act on a session in a controller action that hasn’t been piped through :fetch_session .
** (ArgumentError) session not fetched, call fetch_session/2
Pipelines are one way to implement Plugs, but this applies the plug to every single route in the scope , and I haven’t been able to find a way around that. I wanted to write custom Plugs that I could selectively apply to specific controller actions. I had a hard time figuring out how to do that even outside the pipeline, but eventually got it.
Function Plugs and Module Plugs
Plugs can be created as functions or as modules. Function Plugs are usually defined in a controller, and apply only to actions in that controller. These are functions that take conn and params as arguments and return conn after operating on it in some way. Defining Plugs in a module allows them to be utilized by multiple controllers, or implemented in a pipeline. These are modules that define two required functions, init/1 and call/2 . According to the docs,
init/1 …initializes any arguments or options to be passed to call/2 .
I haven’t implemented any custom functionality with it, so I cannot elaborate at this time. call/2 is the meat of the Plug. This is basically just a function plug — it takes conn and params as arguments and returns conn after operating on it in some way.
Function Plugs must be implemented within the controller in which they’re defined — and Module Plugs can be implemented in any controller — using the following syntax, given a Function Plug authorize_change/2 or a Module Plug AuthorizeChange :
# Function Plug:
plug :authorize_change when action in [:create, :update, :delete]

# Module Plug:
plug BlogApi.Plugs.AuthorizeChange when action in [:create, :update, :delete]
The when action in condition is optional, but that is how you selectively apply a plug to a specific controller action or set thereof.
Setting the Current User
Now that we have an only moderately vague idea of what a Plug is and does and how to make one, let’s use one to add the current user to the session and get our test passing!
By convention, Plugs are stored in lib/<project_name>_web/plugs/ . We’ll create a file called set_user.ex and define a new module plug there:
This is fairly straightforward. If there’s already a user in the session, return conn (remember, Plugs always have to return conn ). Otherwise find the user based on the UUID that was stored in the session by SessionController#create/2 and store them in the session, or stick nil in there instead if no one’s logged in. Now all we have to do is add this Plug to our :api pipeline, and pipe our /api scope through it in router.ex : | https://medium.com/@travis_13686/lessons-on-building-a-backend-api-in-phoenix-plugs-and-authorization-part-i-a1d128d4e933 | ['Travis Petersen'] | 2019-08-09 23:36:43.059000+00:00 | ['Web Development', 'Elixir', 'Phoenix', 'Backend Development', 'Programming'] |
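Neither embedded snippet survives in this version of the post, but based on the behavior described above, a sketch of set_user.ex might look like the following (the Accounts context and the :user_id session key are assumptions carried over from the earlier examples):

```elixir
defmodule BlogApi.Plugs.SetUser do
  import Plug.Conn

  alias BlogApi.Accounts

  def init(opts), do: opts

  def call(conn, _opts) do
    cond do
      # A user is already in the session's assigns: nothing to do
      conn.assigns[:user] ->
        conn

      # A UUID was stored by SessionController#create/2: look the user up
      user_id = get_session(conn, :user_id) ->
        assign(conn, :user, Accounts.get_user(user_id))

      # No one is logged in
      true ->
        assign(conn, :user, nil)
    end
  end
end
```

And wiring it up in router.ex would then be one extra line in the pipeline:

```elixir
pipeline :api do
  plug :accepts, ["json"]
  plug :fetch_session
  plug BlogApi.Plugs.SetUser
end
```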
The design bonanza: moving from reactive to proactive through cross-team collaboration

A while ago I was given an exciting opportunity to redesign Just Eat’s consumer-facing app with the initial goal to present a vision piece to senior executives. The vision was set to be a bolder and fresher take on the then-existing app. My colleague and I created an end to end customer journey prototype with the goal to get buy-in from leadership, Product and Tech Managers. But we only had limited time to do this.
Working on fast-paced vision pieces is something I find very exciting and rewarding, but it can leave me wishing for a little more time to explore different routes. I wondered if there was a way for me to have some of the thinking and a bit of the exploration groundwork done beforehand, so when I needed to work on vision pieces I would be ahead of the game.
I spotted an opportunity to introduce a way of working where I would bring designers from across our organisation to work on various design themes through a series of recurring workshops. The output of these sessions could be concepts that will either feed into a vision piece or used as a strong foundation for bigger projects.
The birth of the bonanza
I chose to name it ‘Design Bonanza’ — the word bonanza sounds fun and I wanted to make sure that the sessions had a light and playful atmosphere for the participants. I also like the meaning of the word Bonanza.
The aim is to get a large number of design concepts out of the session.
For a bit of context at Just Eat, our organisation is split into pillars, with specific design teams in each; the Customer pillar looks at the product you order food with, Restaurant focuses on the relationship between Just Eat and the restaurants on our platform, the Courier pillar creates experiences for drivers, and the Operations pillar builds all our internal tooling.
I chose to make the sessions optional so the number of participants can flex, but in each session we expect a mix of UX, UI and Product designers from across the business. Designers are asked to work in pairs, preferably a mix of disciplines and pillars. Each session happens every eight weeks and the focus of the sessions alternates between a customer or restaurant problem.
The first part of the Bonanza is a 90-minute workshop where all attendees unpack the problem and later on ideate potential solutions. In the second part, the pairs work on their solution for three hours at their desks. We reconvene two days later to share the final concepts.
Example of unpacking the problem exercise, ‘Rethinking the Menu Experience’ Design Bonanza
Running the sessions
Depending on the problem, different exercises are required. But the sessions usually run in this order:
1. Introducing the Problem
This part can be very quick, for instance, “Hello everyone, today we’ll be looking at dark mode”. The introduction is done, move on to the next step.
However, I have found that bringing an expert on the subject can be of great benefit. What has worked very well in a previous session was to bring a Product Owner and Business Analyst, not only were they able to share domain knowledge and existing pain points, but they became instantly more involved in the process — suddenly what was concept work can now become the beginning of a design solution.
2. Unpacking the problem
There are quite a few ways to do this — I chose to go with exercises I had done in previous workshops. Empathy mapping seems to work very well for more UX-led problems, and as for UI themes, competitor analysis or parallel worlds will get the room engaged.
3. Ideation
Example of crazy 8’s, ‘Gamifying rewards & loyalty Design Bonanza
A fairly straight forward step, crazy eights is a popular and efficient option to get lots of ideas in a short space of time. Depending on the problem you can skip the crazy eights and run a more focused version of it getting pairs to create storyboards. I find that having storyboards is good practice because the designers have something more concrete to take back to their desks and work on final concepts.
The benefits of running a Bonanza
A break from your day to day role
The bonanza gives designers the chance to work in a different area of the business, on a different product to their day to day role.
Team building by cross-collaboration and enhanced knowledge
We have show & tell sessions where designers see the work from other products, but getting to spend some time on design solutions and ideation for problems in other pillars gives designers a deeper insight into other parts of the business — especially if you’re paired with someone who already has a good understanding of that area.
Getting ahead of the game
The outputs of past bonanza sessions have proven to be invaluable — in both demonstrating vision pieces for executive presentations, and also to kick-start early-stage projects by having concepts to base our user research on.
On a personal level, I have really enjoyed running these sessions, mostly because conducting a workshop is a soft skill I hadn’t had a chance to exercise much as a UI designer. It also helped me with my confidence in public speaking. It’s been a big learning curve for me, here are my main learnings:
Five things to takeaway
Bring the experts
An expert on the theme can help to bring insight and keep the designs a bit more grounded on what is technically possible. Even if this is for a vision piece, you want that vision to have the hope of seeing the light of day. The experts may or may not be designers so leave it open to them whether they would rather watch or participate with one of the pairs.
Make it fun
People are not forced to come to these sessions so keeping it fun is crucial. If it is a fun atmosphere, that makes it easier for people like me who are newbies at running workshops to do so.
So much to do, so little time
Sometimes there can be such a thing as too much fun, and you get lost in discussions. Make sure you have set times for each activity, and sometimes you might have to stop some conversations and move onto the next step. Savage but necessary.
Keep it topical, keep it focused
There are always buzz topics that will increase participation — these can either be market trends such as dark mode or gamification that will get designers excited and wanting to be involved, or newly announced business goals that these kinds of outputs will help kick start projects. Find the main focal point of that area and build your exercises around it.
Keep track
I keep a spreadsheet with links to slides, design files and miro boards used in the bonanza, as well as who ran it and when it happened.
In a separate tab, I try to keep a healthy backlog for future sessions. This is where our designers can add ideas for the next themes.
In summary
So if you were to come to me and say “OK Bruno, I want to run a Bonanza, what do I need to know?”. Here is a short description:
Part 1 — Brief and Design (1h 30m)
What is the problem we are trying to solve? Can you bring an expert on that subject?
Get people to collaborate, a mix of discipline and pillars.
The first activity should be about getting into that world.
Part 2 — Desk time (min 2h)
Block two hours in people’s calendars to get what they did in session one to a high-fidelity.
For ease, I create a copy of our design system for this part, so we are not messing with the core library.
Part 3 — Playback
Get people to add their designs to the same deck you used for Part 1, that way there are fewer decks and you have all the designs in one place.
Then get people to talk about the work, give feedback on it, discuss, etc.
And that is it. Obviously feel free to tweak the details to the needs of your team, such as cadence, who is involved, and whether attendance is mandatory or optional. The current format is what works best for Just Eat.
I hope you enjoy running a Design Bonanza for your teams. | https://medium.com/justeatdesign/the-design-bonanza-moving-from-reactive-to-proactive-through-cross-team-collaboration-47c65c6d51d5 | ['Bruno Molena'] | 2021-04-28 10:43:59.185000+00:00 | ['Workshop', 'Team Collaboration', 'Collaboration', 'Design Vision', 'Design Process'] |
KDPS FUNDING TO REMAIN VIRTUALLY UNCHANGED IN 2021

Earlier this week, the Kalamazoo City Commission held its 2021 City Budget Work Session. At this session the various city departments presented their projects for 2021 and the Commission was presented with the proposed budget. Two major things stood out: firstly, that there will be no service reductions across City departments due to the economic impact of COVID-19; and secondly, that despite numerous humiliating incidents throughout 2020 and thousands of residents demanding to ‘defund the police’ among other demands, KDPS’ budget will remain the same.
For context, the Kalamazoo Department of Public Safety’s budget takes up a whopping 47% of the city’s annual budget and is, by far, the best-funded department in the city. However, despite these rich resources the department seemed to fail to respond adequately to the major challenges it faced.
Kendall Warner | MLive.com
Early in the summer, KDPS officers were dispatched in riot gear to forcibly move a crowd of peaceful protesters gathered on the mall near Michigan Avenue. This move was met with harsh criticism from the community, but the excuses from then-Chief Karianne Thomas were ultimately accepted by the Commission. Over the next few days and weeks, KDPS violently escalated matters by tear gassing protesters who were lying on the ground, tear gassing peaceful protesters for breaking a curfew, welcoming in the National Guard to help lock down the City, and fueling rumors of dangerous ‘outside agitators’ looking to start trouble. Of course, when these out-of-state agitators did come into the city in August looking to start trouble, KDPS were nowhere to be found.
On Saturday Aug. 15 the Proud Boys, a violent white nationalist group, descended on Kalamazoo. The group of mostly out of state fascists was allowed to march freely, without KDPS intervention, towards a counter protest in Arcadia Park. Naturally, when the two groups met violence occurred, yet KDPS were nowhere to be found. Once counter protesters had pushed the Proud Boys back into a parking garage, officers showed up in riot gear to disperse the crowd of counter protesters. The Proud Boys, after violently assaulting City residents, were allowed to leave the parking structure by KDPS while illegally concealing their license plates. At the same time, officers were busy arresting a local Black journalist and a legal observer. Not a single dangerous ‘out of towner’ was caught or charged; they were allowed to leave after causing chaos.
Samuel J Robinson | MLive.com
In the aftermath, the organizer of the legal law abiding counter protest, Rev. Nathan Dannison said that “It looked like they [KDPS] were protecting the Proud Boys.” A month later City Manager Jim Ritsema did not reappoint Rev. Dannison to the Citizen Public Safety Review and Appeal Board, aka the police accountability board.
This is no surprise considering the cozy relationship Jim Ritsema has with KDPS.
In a City Commission meeting after the Proud Boys incident, Jim said “In hindsight, sure, we shouldn’t have arrested people from Kalamazoo and should have arrested Proud Boys,” a statement that many residents in Kalamazoo would agree with. After all, the counter protest was legal, peaceful and locally organized, and the Proud Boys were out of town fascist invaders. However, the day after this mild criticism, Jim apologized privately to KDPS saying “I should have said I am not in a position to make those determinations”.
Eventually, in September it was announced that KDPS Chief Karianne Thomas was to step down. It was reported that Mayor David Anderson said she decided to retire on her own and that the feeling was ‘mutual’. Earlier this week we learned that this was a straight-up lie. Chief Thomas was in fact fired, and without cause, to ensure she’d receive a fat payout of public money. The man in charge of hiring and eventually firing her was Jim Ritsema.
The lack of any serious budget changes comes as a disappointment to many who were protesting on the street this summer. The call for police reform and reallocation of funding was a major component of the Black Lives Matter movement, yet it seems to have had almost no impact on the city’s budget.
If the protests against police brutality, and KDPS’ failures in the summer, are not enough for their budget to be decreased, then what is? Bigger, potentially violent protests? More disruption? Perhaps it is an indication that the City is only willing to listen to those who follow the official channels.
At the end of the meeting, Vice Mayor Patrese Griffin encouraged residents to get in touch with the City Commission with their feedback and comments on the 2021 budget.
For those wanting to still have their say, the public can attend the virtual 2021 City Budget Public Session on Jan 4. The budget itself will be finalized before the end of January. | https://medium.com/@thekzoocall/kdps-funding-to-remain-virtually-unchanged-in-2021-299052f88457 | ['The Kalamazoo Call'] | 2020-12-18 17:21:04.989000+00:00 | ['Kalamazoo', 'Police', 'Michigan'] |
Top Job Posting Sites For Employers In 2021?

Some of the best job posting websites allow you to post a job for a low price, and some allow you to post for free through monthly memberships that offer free trials. The best of these sites also provide free job postings, free trials, apps, and many other services.
If you are planning to advertise an open job widely, you can always count on posting to multiple job sites for each submission. So in this article we have prepared a list of the top job posting sites for employers, each providing a large-scale network for job postings where you can post the same listing again and again.
Indeed
Indeed is one of the best job posting websites for publishing a free job post. This job search engine allows you to search for any job in the world, provides no-cost job postings, has a large resume database, and draws more than 180 million unique users each month.
Monster
Monster is one of the most reliable online job board platforms, with a large number of jobs and an extensive resume database. On Monster, the number of postings you get depends on the pricing plan you buy, and the more you buy, the cheaper each post becomes.
Internships.com
This job board helps employers get the right college interns for each job posting. Internships.com also offers part-time and summer job programs and helps you create a talent pipeline from the ground up. Here you can post any open job for free, and you can build a candidate database after registration.
AngelList
This job board is a primary hub for startups and is very popular among startups raising funds. AngelList is a different kind of job board platform, compelling advertisers to explicitly state the salary.
GetWork
GetWork is a fast-growing online platform that allows employers to post any open job. It has thousands of trusted hiring-partner companies along with 18k+ institutes and universities. On GetWork you can connect with one lakh (100,000) active applicants and TPOs in just one click.
Joe Rogan is a Colossal Hypocrite

I was not a fan of Fear Factor. It seemed wrong to make people do such unpleasant things for money. Not only that, it does not really put Americans in a good light to see us chasing money in that way. Joe Rogan even admits he got the gig because he did not take it seriously.
In the years since, Rogan has risen to fame as a podcaster. My sons had turned me on to him a few years ago. I have found him interesting and EARNEST. The importance of being earnest and honest cannot be underestimated…IMO. He has given a platform to many an outlier personality.
I do not agree with many of the man’s opinions, BUT I do support the many personae non gratae and taboo subjects he brings onto his show. This is apparently where “journalism” has gone. People who have lots of money and can ward off lawsuits and law enforcement because they have deep pockets. It is why Joe Rogan has been able to explore many illicit substances on his show.
However, one would be foolish if one were to surmise being in California did not HELP Joe achieve his success and explore these taboo subjects. He has been an extremely vocal proponent of cannabis. Of course, cannabis is legal in California so there is no problem. Yes that WAS true.
Now Joe has made a BIG DEAL about moving to Texas, because he thinks California sucks. He thinks Texas is better. It certainly is if you have money and are willing to talk trash on California. It seems Texas law enforcement is just as hypocritical as Joe Rogan. Texas law enforcement has no problem breaking down the doors of hapless dope smokers. They have been doing it for decades. However, Joe seems to be able to smoke cannabis IN Texas with NO CONSEQUENCES.
How can this be? Are not Texas law enforcement sworn to uphold Texas law? A state where more than half of the prison population is made up of non-violent drug offenders. Texas is really pretty hard core about incarcerating drug offenders. Texas law enforcement KNOW exactly where Joe Rogan is, yet they do not break down his door.
Is it because he picked the “sanctuary city” of Austin? Most likely that is why he chose Austin, as the city council has given the police department a directive to make pot enforcement a low priority. They didn’t make it legal as in NO priority, just LOW priority. That means if Joe pisses anyone off in law enforcement, they can still bust down his door any time they want. The truth is Joe REALLY can’t be Joe in Texas. He cannot completely throw caution to the wind, can he? Can Joe be Joe and thumb his nose at authorities in Texas?
I have a Fear Factor challenge for Mr. Rogan. Why don’t you start stumping for legalization in your new state. Better yet, use your podcasting reach and power to free many of the cannabis offenders rotting in Texas jail. I dare you to do so Mr. Rogan. Show your fans and America what a GREAT state Texas REALLY is. Otherwise you can wear the big H on your forehead! Such hypocrisy! I am deeply disappointed in someone I thought was intellectually consistent and earnest. I guess I was wrong. | https://medium.com/@atrigueiro/joe-rogan-is-a-colossal-hypocrite-7bc4de533299 | [] | 2020-12-14 03:26:17.834000+00:00 | ['Cannabis', 'Marijuana Legalization', 'Marijuana', 'Joe Rogan'] |
Being White Is a Handout
The anti-handout beliefs and rhetoric from white Americans are a painful paradox exposing the lack of awareness and active denial among white people.

Paul Thomas · Nov 24, 2020 · 7 min read
Photo by Jeremy Yap on Unsplash
My 4.5-year journey as an undergraduate and my first five years teaching high school English were spent mostly in the Reagan era.
While this was many decades before terminology such as “fake news” or “post-truth,” I literally lived during those years a painful and now embarrassing conversion from white denial and ignorance (believing in reverse discrimination, for example) to racial awareness and seeking a life dedicated to racial equity grounded in my own awareness of white privilege.
I had been raised in racism and white denial that pervaded my home and community so when I returned to my hometown high school to teach, I felt compelled to help my students make a similar conversion as mine but not have to endure the stress of experiencing that growth as late as I did.
Reagan in part depended on bogus American Myths (such as bootstrapping and a rising tide lifting all boats) and thinly veiled racist stereotypes, such as the infamous welfare queen myth evoked by Reagan and Republicans with great effect.
No one called this fake news then, but I invited my students to investigate and interrogate these overstated and unfounded claims as we examined race through nonfiction in the first quarter of my American literature course.
That unit began with canonical American thinkers — Ralph Waldo Emerson, Henry David Thoreau, and Margaret Fuller — contextualized with Howard Zinn’s confrontation of the Christopher Columbus myth of discovering America. From there, we moved to race in the U.S. by reading and discussing texts by Martin Luther King Jr., Malcolm X, W.E.B. DuBois, Marcus Garvey, and Booker T. Washington in order to emphasize the diversity of thought among Black leaders throughout the early and mid-twentieth century.
The culmination of the unit was anchored by a consideration of the life of Gandhi (linked to Thoreau and King).
What was my agenda in this unit?
The writing goal was to explore nonfiction writing, specifically argumentation. But I also asked my students to begin to form their beliefs about the world based on credible evidence and not cultural myths and stereotypes.
One brief activity I used, and continue to use, is to have students brainstorm what percentage of the U.S. they believed to be classified as white before asking them to identify what percentage of the world was classified as white.
In the 1980s, students living in rural upstate South Carolina tended to wildly miss these statistics in their guesses; then, about 70% of the U.S. was white, with about 12% constituting Black Americans. The world statistic really forced them to rethink race, and whiteness, since I had found a chart that portrayed about 1 in 10 people in the world being white.
These statistics created a great deal of disorientation for students even as I helped them recognize that about 4–5 out of 10 people in the world were Chinese or Indian (a context they had never considered).
One of the most memorable moments of these lessons over the years was a Black student who grew livid with me, calling me racist, because she entirely rejected that only 12% of the U.S. was Black.
Her anger was grounded in an experience similar to the one I was highlighting for students in general; for many people, the U.S. then looked very white (a gaze that allows people not to see that the world is not as white), but this Black student believed that the U.S. was far more Black than it was because she was hyper-focused on SC, where 25–30% of the citizens were Black (significantly disproportionate to the entire country).
The anger and disorientation grew for my students as I asked them to research data on welfare; they discovered that the average person on welfare was white and that people on welfare tended to have fewer children than the general population — all of which contradicted the myths they had lived by, heard from their parents, and witnessed in the political propaganda of the Reagan era.
These teaching experiences with mostly rural white and Black students very much like me are now about three decades behind me, but I think about this teaching often — and it is discouraging.
It is discouraging because I watched and listened as Lindsey Graham and others refused to extend jobless benefits during the pandemic because he framed that as a handout, a disincentive for working.
It is discouraging because I am watching the move to forgive student loans begin to crumble against a similar mantra about fairness and the usual “handout” rhetoric.
There are two ways that people (mostly white) need to investigate the handout myth, just as my students confronted race and racism in the 1980s.
First, the arguments against student debt relief are grounded in misinformation and racism in similar ways that arguments against welfare have been since Reagan (and including the Bill Clinton era).
Just as antagonism against welfare by white people was rooted in false perceptions that it was a handout to Black people with lots of children, the specter of student loan relief being a handout to Black people cannot be ignored in white rhetoric against that relief:
According to the Department of Education, Black college graduates have nearly twice as much student loan debt as the typical white grad. The National Center for Education Statistics reports that the typical Black borrower owes 114 percent of their original student loan debt 12 years after graduating with a bachelor’s degree. White students, on the other hand, usually owe 47 percent of their original debt. Not only is this crisis exacerbated by higher Black unemployment, wage disparities and the racial wealth gap, but loan companies charge Black students higher interest rates. So, Black grads have less money before they attend college; earn less money after college and have to pay back loans at higher interest rates.
Second, as Harriot adds, “There’s no such thing as a ‘government handout.’”
Student debt relief would address a failure of public funding, a lack of political will that decides how tax money is spent.
There is no shortage of money in the U.S. for social programs such as fully publicly funding K-16 education for all, but there is a lack of political will to allocate money for the common good as opposed to, for example, more military spending or militarizing the police forces across the country.
Allocated tax money is not a handout since it is the pooled money of all Americans that then must be designated in ways that serve those Americans.
A final point that cannot be emphasized enough, however, is that those most enraged by anything they deem a “handout” are mostly white conservatives who, like my students before our lessons on race and racism, have failed to interrogate the truth about their white privilege: Being white is a handout.
The white handout looks like this:
And these:
The anti-handout beliefs and rhetoric from white Americans are a painful paradox exposing the lack of awareness and active denial among white people.
Privilege is an unearned advantage, starkly displayed in the data above. But for many white Americans that handout of being white is invisible since they cannot experience life in any way other than white.
White privilege, the handout, is no guarantee of success or a perfect shield against pain and suffering (or even inequity), but struggling while white is almost always less severe than struggling while Black.
This discussion here, however, is not white bashing; I understand that white people have not asked for that advantage, but I also recognize that a great deal of white anger is grounded in an unexamined fear of losing the handout, of having to live in a world of racial equity — ultimately a fear of achieving the meritocracy many whites falsely believe exists.
If handouts do in fact erode people’s work ethic, the ultimate paradox is that the white people who believe this must conclude that their own white privilege, that handout, should be eradicated.
I, again, think about the hard lessons my white and Black students wrestled with in rural SC throughout the 1980s and 1990s; they often grew into smarter and kinder people. They always gave me hope.
That hope is weakening for me, however, under the weight of 70-plus million Americans choosing the myths and the lies, and refusing to investigate the evidence.
If handouts aren’t good or fair for America, then it is well past time to end the greatest handout of all, white privilege. | https://medium.com/@plthomasedd/being-white-is-a-handout-507572e9cd6 | ['Paul Thomas'] | 2020-11-24 11:53:20.118000+00:00 | ['White Privilege', 'Race', 'Handouts', 'Racism', 'Student Debt'] |
Three Tips to Overcome the XY Problem
You’ve had problems with files, probably due to their file extensions. Let’s say you want to use the last three characters of a filename to determine the file type, so you ask some people questions about it.
You search for code to find the last three characters. Your coworkers probably have some suggestions, so you ask them as well.
You’re stuck on your solution, without looking back at the problem.
You’ve run into the XY problem. Let’s go into the details of it. | https://medium.com/@robert_35655/tgree-tips-to-overcome-the-xy-problem-fc2d64460f5 | [] | 2020-12-16 16:23:34.461000+00:00 | ['Business Development', 'Software Engineering', 'Soft Skills', 'Software Development', 'Business']
Share All My Sorrows
Marsha Stevens-Pino’s “For Those Tears I Died” means more than you know
June 23, 1969. Five days before the Stonewall riots were to begin.
A sixteen-year-old hippie girl sits down and writes a song to express her love of Jesus, and to share that love with her baby sister and with her friends at school.
She never thought anyone outside of her own family and friends would hear the song, let alone people all over the world. She never dreamed we would still be singing it today.
Marsha Carter grew up in a household troubled by alcoholism. When she found Jesus, she found a freedom and a love that she had to share. A lover of music, she searched for songs of Jesus but found nothing that spoke to her heart. So she decided to write her own. And she made history.
You said You’d come and share all my sorrows,
You said You’d be there for all my tomorrows.
I came so close to sending You away,
But just like You promised You came there to stay;
I just had to pray!
Marsha’s song “For Those Tears I Died (Come to the Water)” would be recorded by history as the beginning of a new music genre, the birth of Contemporary Christian Music (CCM). Marsha’s band “Children of the Day” were pioneers in the “Jesus People” movement, a group of hippies who loved Jesus. When I was growing up in the church, we looked back at the Jesus People movement with a kind of blurry reverence that faded them all together into a monolith. We saw them as heroes of the faith, but we knew nothing about them as individuals.
When Marsha divorced her husband Russ Stevens and came out as a lesbian, much of the church and the music industry she had helped to birth and build turned against her. People tore the pages containing her songs out of their hymnals and her record company tried to withhold royalties from her. But some people continued to enjoy her music while never talking about her. I had not heard of her until a few years ago, despite “For Those Tears I Died” having been one of my favourite songs when I was a teenager in the 90s. As with Ray Boltz, we just didn’t talk about gay Christians. We pretended they didn’t exist.
And Jesus said, “Come to the water, stand by My side,
I know you are thirsty, you won’t be denied;
I felt ev’ry teardrop when in darkness you cried,
And I’m here to remind you that for those tears I died.”
Marsha says she has met people who sang her song in countries where it was against the law to be a Christian, where they were literally risking their lives to sing it. But the stories that really touch her heart are those from queer young people, those who learn that their mom or grandma’s favourite hymn was written by a lesbian. If the woman who wrote that beautiful worship song is gay, maybe they are okay too. Maybe Jesus does love them.
The poetry of Marsha’s lyrics has always meant so much to my soul, and the simple beauty of her melody catches in my ear and stays with me. After listening to it once, I find myself singing it for days. And I don’t mind.
There is something countercultural in the song. It was not like anything that came before it. And, while songs that followed have copied from it, Marsha’s music is and will always be the first, the original. It stood out in its time; it was queer.
Today, Marsha runs Balm Ministries with her wife Cindy. They are affiliated with the Metropolitan Community Church (MCC) and Marsha has written theme songs for every MCC General conference since 1985. She is still sharing the love of Jesus with those the organized evangelical church mostly ignores.
The love of Jesus shines from Marsha’s face. Despite everything her church family did to her, she remains a light of love. She forgives them as Jesus forgives. She is truly a hero of the faith. She is a Jesus person in every way. | https://medium.com/prismnpen/share-all-my-sorrows-60812aec6253 | ['Esther Spurrill-Jones'] | 2020-12-30 08:03:24.024000+00:00 | ['Music', 'LGBTQ', 'Christianity', 'History', 'Creative Non Fiction'] |
Securely Controlling Hardware Devices with Blockchain | CoreLedger · Apr 9
Blockchain technology sometimes gets thrown around as a universal fix-all, with use cases ranging from tokenizing high-value artworks to supply chain tracking. But some of the most important and relevant blockchain use cases sound very boring because of their simplicity, though in reality they are neither simple (technologically speaking) nor boring. Think, for example, of the security implications of using blockchain to communicate instructions to a hardware switch.
Smart-home devices and IoT appliances are quickly becoming commonplace in our homes, but not enough time is spent talking about security. This is one area where blockchain technology can provide a straightforward and obvious value add, allowing you to securely control everything from a simple on/off switch to more complicated connected hardware, such as the lighting in your home, a garage door or smart lock, or really any other connected switch you might want. But before we get into the actual applications, let’s discuss the technicalities, because there are some important differences in a blockchain powered solution.
The Problem with Conventional Smart-Switch Controllers
Let’s start with current smart solutions and see how they work via conventional methods. If you want to remotely control a smart light switch, then you typically need a (hopefully) secured web service which grants you access after you’ve provided login and password. You can then send the switch a command, and the web service will instruct the hardware to turn the light on or off.
The conventional way to control a connected smart device is fundamentally insecure
The problem with this setup is that the web services provide so-called “write access.” A client sends instructions, which are then translated into hardware instructions via the web service and then executed by the attached device, in this case a light bulb.
Now, all write access is by definition insecure. No web service is perfect; there are always holes in the security. Whenever you grant access to a device from “the outside,” there is a chance that someone can hack this access and abuse the service for nefarious purposes. The vulnerability comes from a single point of failure: the point where authentication is checked and where instructions from the external client are accepted.
Controlling Connected Hardware Devices with Blockchain
The security issues with IoT and smart devices are well documented and, while the average consumer might be OK with the risks, enterprises, governments, and other security-conscious entities certainly won’t be. Blockchain technology can solve the security issues inherent in the single point of access by wrapping the client command in a smart contract.
The smart contract contains the state (on/off, etc.) of the switch and controls the permissions for changing that state. Doing so requires a transaction signed with the contract owner’s private key. The transaction is then submitted to any blockchain client. A hacker does not know which client it will go to, and even if they were able to intercept the signed transaction, they could only mount a denial-of-service attack by preventing the message from reaching the blockchain client and being mined. Anything that is mined is synced to multiple nodes within the network.
The ‘On’ command is wrapped up in a smart contract and confirmed by multiple third parties. Goodbye single point of failure!
Now, blockchains such as Ethereum usually require a certain number of “confirmations” from other clients. This ensures that the transaction has been mined and correctly embedded into the blockchain, meaning you have multiple third-party “witnesses” ensuring the transaction’s authenticity. Even if a hacker manages to hijack a single client or intercepts requests to sync the state and modify them, they wouldn’t be able to hijack all of them simultaneously and trick the network into believing that the switch is off when it should be on.
Using a smart contract, the switch itself doesn’t have any external access. Let there be light!
Because the smart contract acts as the vehicle for the command, and the command takes effect only after multiple third-party confirmations, the lightbulb end of all this no longer needs any external accessibility, turning it into a read-only node that just handles outbound connections and syncing with other blockchain clients.
Again, there are many clients, all of them controlled by individual entities who ensure the safety and security of their devices with individual verification. A hacker might be able to trick one, or even a few, of them into communicating a wrong state… but to do that to the entire network? That’s practically impossible.
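To make the flow above concrete, here is a minimal, runnable sketch of the pattern. It is deliberately a toy model rather than real Ethereum code: an HMAC over a shared secret stands in for the ECDSA signatures an actual chain uses, and the "confirming nodes" are plain strings. All of the names here (SwitchContract, submit, confirmations_required) are invented for this illustration.

```python
import hashlib
import hmac

# Toy stand-in for an asymmetric signature scheme. A real deployment would
# use ECDSA key pairs as on Ethereum; HMAC with a shared secret is used here
# only so the sketch stays dependency-free and runnable.
def sign(private_key: bytes, message: bytes) -> bytes:
    return hmac.new(private_key, message, hashlib.sha256).digest()

class SwitchContract:
    """Minimal model of a smart contract guarding an on/off switch."""

    def __init__(self, owner_key: bytes, confirmations_required: int = 3):
        self.owner_key = owner_key            # stands in for the owner's address
        self.required = confirmations_required
        self.state = "off"
        self.pending = {}                     # command -> set of confirming nodes

    def submit(self, message: bytes, signature: bytes, node_id: str) -> None:
        # Reject anything not signed by the contract owner.
        if not hmac.compare_digest(sign(self.owner_key, message), signature):
            raise PermissionError("bad signature")
        self.pending.setdefault(message, set()).add(node_id)
        # Apply the command only once enough independent nodes confirm it.
        if len(self.pending[message]) >= self.required:
            self.state = message.decode()

owner_key = b"owner-private-key"
contract = SwitchContract(owner_key, confirmations_required=3)
tx = sign(owner_key, b"on")
for node in ("node-a", "node-b", "node-c"):
    contract.submit(b"on", tx, node)
print(contract.state)  # -> on
```

The two properties from the text are visible here: only a command carrying a valid owner signature is accepted at all, and the switch state changes only after multiple independent confirmations, so compromising a single node is not enough.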
The added security of this setup comes from the fact that the system which changes the switch now just has to read instructions from a smart contract, rather than opening up to instructions from the outside world.
How secure is this really?
Extremely. Security is provided by proven cryptographic means, where the private key we mentioned above allows you to create an unforgeable signature, which can then be verified as genuine by any third party without knowing the private key itself. This makes signing a one-way process. A blockchain by itself is essentially a secure database combined with a range of programming options, called smart contracts. Despite their name, smart contracts are nothing fancier than simple computer programs. These programs can be programmed to trust instructions only from an authorized actor. This actor authenticates themself with a signature from their private key, which can be kept entirely secret. In this way, there is no breach of security when sending instructions to the blockchain. So far there is no proven way for an attacker to intercept and modify the message that is sent to the smart contract. This is in contrast to a client-server system with a central database and a multitude of security layers, all of which are needed to fend off attackers.
Blockchain protocols, if properly implemented, are a perfect decentralized information bus. Blockchain technology even has a built-in fault-tolerance mechanism to solve the problem of concurrent transactions (one person switching off, another switching on). Usually this is used for double-spending prevention, but the same mechanism can be applied to the switch problem.
A final security advantage is the decoupling of writing and reading from the hardware switch. A conventional client-server system always has to protect its database behind layers of security, since it contains all the important information. By contrast, a blockchain distributes its information around a network. In a client-server system it is sufficient to attack a single node to alter a system’s state. To accomplish the same thing in a blockchain system, you must attack and subvert every individual node, which is practically impossible.
Other factors to consider
Let’s assume that the same blockchain technology described above, such as Ethereum (it doesn’t even have to be the expensive Ethereum Mainnet), is being used by a financial institution. You can assume that the blockchain clients will always be up-to-date with regards to their security. Let us also assume that the network has a sufficient number of blockchain clients, because it is used for things other than just setting switches. Both of these are safe assumptions on any moderately popular blockchain. The only vulnerability of the setup as described is the security of the private key itself, which can be lost or stolen. Fortunately, there are many ways to keep a private key safe, recover it in case of a loss, or even replace it.
What can you use this for?
There are many reasons why someone might want the security of a blockchain-powered remote access system, such as controlling an alarm system. Of course, there are a plethora of applications for the simple on/off switch, but thanks to smart contracts you can also set up more elaborate instructions, with numbers, strings, dates, and addresses communicated to the switch. For example, you could run an entire hotel’s door lock system on this foundation, with each lock reading the public addresses of the activated keys off the blockchain. The keycard would initiate a challenge-response process, which takes just milliseconds to process, in order to authenticate the user. And there you have a smart lock, just as fast as conventional methods and far more secure. It’s also much more versatile: you could apply the same system to a car-sharing service, storage lockers, or a high-security facility. This solution can also be used for other applications. For example, it could be part of a supply-chain monitoring process where the “switch” indicates whether a shipment has reached a warehouse or not. One additional benefit of a blockchain-based system is that any additional security measures, such as requiring multiple signatures to complete a command, can easily be built into the smart contract. For example, you could require that 2 out of 3 keys sign in order to authorize an action. For enterprise clients, governments, or other highly complex or highly secure applications that require remote control, this is a vital factor for security and flexibility.
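A sketch of what the hotel-lock challenge-response might look like follows. Again, this is a toy: a production lock would verify an asymmetric (e.g., ECDSA) signature against public addresses synced read-only from the chain, so the lock would never hold any keys itself. A symmetric HMAC is used here only to keep the example self-contained and runnable, which is why this toy lock must also store the keys; every name in it is hypothetical.

```python
import hashlib
import hmac
import os

def address_of(key: bytes) -> str:
    # Stand-in for deriving a public address from a key pair.
    return hashlib.sha256(key).hexdigest()

class SmartLock:
    """Toy door lock that trusts a list of addresses read off the chain."""

    def __init__(self, authorized_keys):
        # Only needed because HMAC is symmetric; with real signatures the
        # lock would store the public addresses alone.
        self._keys = {address_of(k): k for k in authorized_keys}

    def challenge(self) -> bytes:
        self._nonce = os.urandom(16)  # a fresh nonce defeats replay attacks
        return self._nonce

    def verify(self, claimed_address: str, proof: bytes) -> bool:
        key = self._keys.get(claimed_address)
        if key is None:
            return False              # address not on the authorized list
        expected = hmac.new(key, self._nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

# Keycard side: prove possession of the key for a fresh challenge.
room_key = b"room-101-key"
lock = SmartLock([room_key])
nonce = lock.challenge()
proof = hmac.new(room_key, nonce, hashlib.sha256).digest()
print(lock.verify(address_of(room_key), proof))    # -> True
print(lock.verify(address_of(b"stranger"), proof))  # -> False
```

The round trip is a hash and a comparison, which is why the text can claim millisecond-scale authentication; the expensive part (publishing the authorized addresses) happens on-chain, ahead of time.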
Smart home devices, connected hardware, and IoT are already fundamental parts of our lives, and that will only increase. As they do, however, the need for security increases. Using smart contracts and blockchain technology is a simple way to secure the access and control of a connected hardware switch, with extremely versatile applications that are perfect for enterprise applications. This use-case fulfills blockchain’s promise to secure our lives and bridge the physical/digital gap, and the possibilities for this hardware control application are endless.
CoreLedger’s mission is to help businesses of all sizes quickly and affordably access the benefits of blockchain technology. From issuing a simple token, to enterprise- grade token economy solutions, we have all the tools you need to integrate blockchain into your business.
Interested in our results-focused, real-world approach? Then visit our website for more information, or get in touch with us directly to discuss your project. | https://medium.com/coreledger/securely-controlling-hardware-devices-with-blockchain-642af4308529 | [] | 2021-04-09 08:54:50.534000+00:00 | ['Blockchain Technology', 'Connected Home', 'Smart Contracts', 'IoT', 'Smart Home'] |
Negotiating the Contested Psychological Territories of Cyber Warfare | ‘I made Steve Bannon’s psychological warfare tool.’
Christopher Wylie is the coder who created the algorithm that made Cambridge Analytica and AggregateIQ psychographic profiling firms (the IP of both being owned by Robert Mercer) capable of mutating the national souls of two Western democracies by mining and weaponizing Facebook personal data hoards. Cyberspace is so new, we haven’t adapted. We can’t parse it. It eludes, and elides, our multi-million-years-old sensorium and ability to formulate a rational analysis of benefit or threat. This is a test.
As a species — as a story-telling, narrative-constructing kind of animal, collectively — we, as a whole, as a group, are woefully ill-equipped to parse and process military psi-ops perpetrated against us by ‘bad actors.’ Even less so can we perceive the vast, virtual territories contested in cyber-wars commissioned by aggressively tyrannical regimes or corporate marketing. We subscribe to the fictive meta-narratives contrived by our cyber- ‘handlers’ to capture our attention, if not our credulity, in a manner more magico-religious than consumer-savvy.
This is why U.S. intelligence agencies have for decades so successfully deployed psychographic methods (‘military psi-ops’) in third-world countries and unstable nations they want to ‘convert,’ and why ultra-right-wing agents (Robert Mercer, Steve Bannon, Nigel Farage, Kushner and Parscale on behalf of the stupid, ignorant Trump, their vessel, who himself, personally doesn’t even go online except to Tweet — doesn’t even do email, doesn’t know how) so successfully deployed military psi-ops against the domestic civilian populations of the U.S. and the U.K. (for the first time ever), leading up to the Brexit and Trump victories. Like shooting fish in a barrel.
We (collectively) can’t perceive the threats or benefits as clearly as, say, we may assess the threats or advantages of a lightning storm as opposed to a mild rain. Yet cyberspace is a virtual landscape we more and more inhabit. Like the magical landscape of shamans or journeyers in spirit, the landmarks, cyber-features, peaks, valleys and sink-holes must be discerned for safe passage.
In the traditional magical landscape of Eurasian shaman stories and wonder tales, the beautiful ‘silk meadow’ is a terrain inhabited by demons. Its languid, waving movement lulls the senses and conceals the lurking supernatural threat and malign ill will hiding under the surface. Much of cyberspace is the same. | https://medium.com/@yewtree2/negotiating-the-contested-psychological-territories-of-cyber-warfare-3365bd8c4f94 | ['Yvonne Owens'] | 2019-12-29 20:26:01.095000+00:00 | ['Cybersecurity', 'Cambridge Analytica', 'Cybercrime', '2016 Election', 'Psychology'] |
Understanding Starsky Robotics’ Voluntary Safety Self-Assessment
Enhancing Highway Safety for the Long-Haul
Eight months ago, I joined Starsky Robotics as our Director of Safety Policy. I’ve previously worked on FAA certified avionics, automotive electronics systems, and most recently focused on safely integrating unmanned aerial vehicles into the U.S. national airspace. Safety is a critical concern for each of these industries and is at the core of our approach to automation at Starsky. To that end, Starsky’s first step after raising our Series A was bringing me on board to ensure our commitment to safety is an actionable part of everything we do when putting driverless trucks on America’s highways.
Our safety strategy begins with an understanding of the term “safety” itself. At Starsky, we use the International Organization for Standardization’s definition, which is commonly employed by OEMs designing automobiles: safety is the absence of “unreasonable risk.” No human designed system can ever be perfectly safe. This means that instead, enhancing safety is first and foremost about understanding, quantifying, and mitigating risk. Developers should take steps to identify and mitigate as many risks as possible when designing any safety-critical system. Starsky’s threshold for deployment is to achieve “no unreasonable risk” — that is, we must ensure that an unmanned truck is as safe or safer than trucks currently on U.S. roadways. This idea provides the basis for our engineering and testing process.
Today, we are releasing our Voluntary Safety Self-Assessment (VSSA), which was written to provide insights into our approach to safety in developing unmanned trucks. Eight other AV companies have published VSSAs to date. Starsky is the first company to publish a VSSA specific to automated trucks (and by an order of magnitude the smallest team to release one). We hope you find our report informative and interesting — we want readers to come away having learned something about Starsky and safety engineering. It may be dry reading, but what it lacks in form it makes up for in substance.
You can read our full report here, but if you’re looking for the short version, I’ve got you covered.
Starsky was founded on the idea that long-haul trucking is a difficult job. In many respects, trucking is more a lifestyle than a profession — long hours, immense responsibility, and often, months on the road away from home and family. Today, the 50,000-person long-haul driver shortage is one of the most significant pain points for the trucking industry and has a very real impact on the cost of goods.
Driving large trucks requires skilled workers. However, the most painful part of over-the-road trucking — long, monotonous stretches of highway that keep drivers away from home — are actually the easiest parts of the driving task to automate. Our system allows well-trained, highly-skilled truck drivers to sit in an office and control a truck remotely (known as teleoperation). Drivers can use their skills to remotely drive a truck from a distribution center to a highway, where automation can take over. At this point, the truck driver is only monitoring the truck to help with complex, context-based decisions — instances where automated decision-making is quite difficult. When the truck exits the highway, the driver regains control and uses remote control to get the truck to its final destination.
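The handoff flow described above can be sketched as a tiny state machine. The states and events below are my own simplification for illustration; they are not Starsky's actual control software.

```python
# Allowed control-mode transitions for the trip described in the text: a
# remote driver handles the start and end of the route, and autonomy takes
# over only on the highway.
TRANSITIONS = {
    ("remote_control", "enter_highway"): "autonomous",
    ("autonomous", "exit_highway"): "remote_control",
}

def next_mode(mode: str, event: str) -> str:
    # Any event without an explicit rule leaves the mode unchanged.
    return TRANSITIONS.get((mode, event), mode)

mode = "remote_control"                    # driver teleoperates to the highway
mode = next_mode(mode, "enter_highway")    # autonomy takes over
assert mode == "autonomous"
mode = next_mode(mode, "exit_highway")     # driver regains control at the exit
print(mode)  # -> remote_control
```

Keeping the transition table explicit makes the safety claim easy to audit: there is no path into the autonomous mode except via a defined highway-entry event.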
Starsky’s teleoperation center
Our VSSA describes our thinking about safety processes and the implications of our very specific application of automation. Starsky trucks do not need to drive everywhere in all environments and conditions. They only need to drive themselves on deliberately selected highway routes that we can survey and authorize. In other words, our Operational Design Domain (ODD) is very specific.
Our initial ODDs will define the exact routes where our trucks are allowed to drive under particular lighting, weather, and traffic conditions. Unlike most companies working on passenger vehicles, we don’t need to understand how to operate automated systems in complex environments like urban centers: our skilled teleop drivers provide the smarts for these complicated situations. Starsky is unique in this respect — we highly value the importance of using well-trained remote drivers as a key part of the decision-making process for an unmanned vehicle.
This means we can define narrow, deterministic tasks that our system must accomplish (“stay in lane”, “adjust speed relative to traffic ahead”, etc.). We can create performance standards for each narrow task and execute tests that objectively demonstrate that our system meets these standards. Our job is to understand when our system can meet these criteria and when it cannot (if we cannot, for example, adequately identify lane lines on a specific road in adverse weather conditions, we will not operate our trucks on that road in that environment). We use this understanding to hone our ODD rules about when, where, and under what conditions we allow our trucks to drive autonomously.
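As an illustration of this kind of ODD gating, here is a hedged sketch. The rule fields and the example route are invented for the sketch; Starsky's real ODD definitions are not published in this document.

```python
from dataclasses import dataclass

# Hypothetical ODD rule: a route is authorized only under specific lighting
# and weather conditions. The field names are assumptions for illustration.
@dataclass(frozen=True)
class OddRule:
    route_id: str
    lighting: frozenset
    weather: frozenset

def autonomy_permitted(rules, route_id: str, lighting: str, weather: str) -> bool:
    """Engage autonomy only when some authorized rule covers the current
    route and conditions; anything outside the ODD stays with the driver."""
    return any(
        r.route_id == route_id
        and lighting in r.lighting
        and weather in r.weather
        for r in rules
    )

rules = [
    OddRule("I-10-segment-3", frozenset({"daylight"}),
            frozenset({"clear", "light_rain"})),
]
print(autonomy_permitted(rules, "I-10-segment-3", "daylight", "clear"))  # -> True
print(autonomy_permitted(rules, "I-10-segment-3", "night", "clear"))     # -> False
```

The design choice worth noting is that the check is an allowlist: conditions are denied by default, and each authorized rule exists only because testing demonstrated the system meets its performance standards there.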
Our VSSA also explains our strategy for handling issues or potential system failures by detecting these events and implementing fallback behaviors to put the system in a safe state. When the truck detects any problem that would compromise safe driving, the system will achieve a Minimal Risk Condition (MRC), such as pulling over to the shoulder. When possible, the MRC will prioritize keeping the remote-human driver in the loop to allow the truck to pull off the road in a safe, controlled manner.
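A sketch of the fallback logic this paragraph describes. The priority order is an assumption made for illustration, based on the text's stated preference for keeping the remote driver in the loop: otherwise pull to the shoulder automatically, and only stop in lane as a last resort.

```python
# Hypothetical Minimal Risk Condition (MRC) selection. The condition names
# are invented for this sketch.
def minimal_risk_condition(teleop_link_up: bool, shoulder_available: bool) -> str:
    if teleop_link_up:
        # Preferred: the remote human driver pulls the truck off the road.
        return "remote_driver_pulls_over"
    if shoulder_available:
        # Next best: an automated, controlled pull to the shoulder.
        return "automated_pull_to_shoulder"
    # Last resort when no shoulder exists and the link is down.
    return "controlled_stop_in_lane"

print(minimal_risk_condition(True, True))    # -> remote_driver_pulls_over
print(minimal_risk_condition(False, True))   # -> automated_pull_to_shoulder
print(minimal_risk_condition(False, False))  # -> controlled_stop_in_lane
```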
The VSSA we publish today is reflective of our current technology and practices. We will improve our systems based on an iterative process of adding features, expanding our ODD, and continuous improvement procedures. Like our technology, our VSSA will be updated as we continue to evolve and improve our system. Readers should expect our VSSAs to continue to communicate detailed, specific information about our safety procedures and design process.
Starsky is committed to keeping policymakers, stakeholders, and the public in the loop throughout our development effort. Through the entirety of this process, a robust safety culture and the implementation of systems engineering practices are key to achieving our goals. We’re excited to release our VSSA as part of a critical partnership with government and the public as we work to make driverless trucks real. | https://medium.com/starsky-robotics-blog/understanding-starsky-robotics-voluntary-safety-self-assessment-1d70318a1459 | ['Walter Stockwell'] | 2018-12-19 17:28:25.319000+00:00 | ['Safety', 'Autonomous Cars', 'Transportation', 'News', 'Self Driving Cars'] |
How Skystra is Structured as a Remote Company | Disclosure: I am the founder of Skystra.com. Everything detailed here happens exactly as described. The only edits are to names or places, protecting the privacy of those involved.
The customer is the boss.
That’s the saying, or something close to it. Pretty cliché, isn’t it? But it’s absolutely true. As a medium-sized business, we rely heavily on customers, both new and, especially, renewing ones.
So what does the customer being the boss really mean though? A lot of companies say it, and then proceed to stomp all over their customers with unfriendly processes and rigid policies. At Skystra, we take a very fluid view on this.
In the first part of this series, I’ll go over how the customer is positioned within our company, how that positioning affects the organizational chart and decision making, and the general influence of the customer being where they are.
Normally, most organizational charts are top down. And I truly have nothing against that model. Human psychology being what it is, a form of hierarchy is always needed, even in the most holistic organizational structures. There has to be someone, or a group, ultimately calling the shots. But it’s where this group is in relation to the customer and their experience that matters the most.
In a top-down structure, the management team may have a good understanding of how their company works and how it delivers services to customers. In my opinion, the biggest fundamental problem with top-down structures is that every new level added makes it harder for the leadership team to stay in touch with the lifeblood of their company: customers.
And of course, every new layer in the organizational chart adds additional challenges, like more managers, training programs, QA programs and more.
At Skystra, I won’t tell you our organizational chart is perfect, not even for us. However, it comes after a lot of revisions and iterations to our actual structure over the years, from a completely bootstrapped startup to a solid medium-sized business that finally has everything running in the same direction.
The key: the customer is in the middle of every decision we make.

Secondary key: all teams are led by experts, not by managers. We firmly believe we can create and run systems that help with overall process management, but we can’t build experts.

The chart makes it obvious: our customers are in the middle of everything. Without customers, we would not be around.
Our leadership team is made up of experts in their own fields. Each department and the teams within it are led by those with expertise in that field. There are no managers for the sake of being managers.
This structure allows us to have honest, and sometimes hard, discussions about the direction of the company. However, it is always in service of the customer. There are always going to be various considerations about how to approach a decision, but in the end, the customer matters most. | https://medium.com/@skystra/how-skystra-is-structured-as-a-remote-company-f1f8a63bcd10 | ['Skystra Cloud'] | 2020-12-17 23:02:21.436000+00:00 | ['Remote Working', 'Company Culture', 'Customer Centricity', 'Small Business', 'Remote Work'] |
Replace Object with Map in your JavaScript code | Photo by Timo Wielink on Unsplash
In this post, I would like to share my research on Map, a collection type introduced in ES6. Recently, I have been considering refactoring my JavaScript code in the hope of improving the performance of my web application. Replacing old Objects with Maps could be one way of achieving that goal.

It is impossible to discuss Map without comparing it to Object, because the two are as similar as siblings: both hold key-value pairs in their collections. In most cases, they can be used interchangeably.

However, if you are serious about optimizing your application and keeping your code as concise as possible, you should consider replacing some of the old Objects in your code with Maps. Especially if your application handles a large set of data and frequently adds and removes entries, Map may improve your application in many ways.
1. Map.size vs Object.keys(object).length
Map’s “size” property reports the number of entries in the Map directly. It can be used just like the “length” property of Array. | https://medium.com/@tofusoup429/replace-object-with-map-in-your-javascript-code-d205caec0334 | ['Steve Kim'] | 2020-12-04 09:42:27.862000+00:00 | ['JavaScript', 'Javascript Development', 'ES6', 'Map']
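A quick sketch of that size comparison (the data below is invented for illustration; runnable in Node or any modern browser):

```javascript
// A Map knows its own size; an Object has to be measured with Object.keys().
const scores = new Map([
  ["alice", 90],
  ["bob", 75],
]);
console.log(scores.size); // 2

const scoresObj = { alice: 90, bob: 75 };
console.log(Object.keys(scoresObj).length); // 2

// Map.size stays up to date as entries are added and removed,
// while Object.keys() builds a fresh array of keys on every call.
scores.set("carol", 88);
scores.delete("bob");
console.log(scores.size); // 2 ("alice" and "carol" remain)
```

For collections that grow and shrink frequently, reading `size` avoids repeatedly materializing a key array just to count it.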
You Free Me | Photo by Zoltan Tasi on Unsplash
You free me.
Free me to be myself.
For on my own,
ㅤ I am uncertain.
With another,
ㅤ I am uncertain.
But with you,
ㅤ I am able to be…
Me
Though I am less than I could be…
Though I am less than I think I should be…
You…
You free me.
Free me to be myself.
My true self.
The self I am meant to be.
The self I…
Am
You free me.
Though I doubt,
ㅤ though I deny,
You insist.
Though I wrestle,
ㅤ though I run,
You persist.
You remind me…
Who
Who I am.
And in the reminding,
ㅤ you encourage me.
In the insisting,
ㅤ you direct me.
In the persisting,
ㅤ you free me.
Free me from myself.
Free me to be me.
Enable me to be,
ㅤ to become,
The one true me.
Yes,
ㅤ as we wrestle,
You free me. | https://medium.com/poets-unlimited/you-free-me-bb9f3f6348da | ['Steve Frank'] | 2019-09-26 21:09:49.592000+00:00 | ['Relationships', 'Love', 'Doubt', 'Self', 'Poetry'] |
Dear Office Dead-Weight,
You disgust me…I think.
Photo by Spencer Russell on Unsplash
I had high hopes for you
being shiny and new; potential
to care, unlike this jaded crew.
We could collaborate! Open
an idea floodgate — I was certain
that would be our fate.
Who is this guy? I wondered
inside, his smile is big — it meets
his eyes. His demeanor is cheery
(a good trait in theory)
And yet — I find myself weary.
You complain too much, work
too little; at best, your statements
are brittle and they always belittle.
Was it fair for me to hope
you’d care? Was it stupid to dare
to believe you’d do more than
breathe air?
It pays the bills you say, when
asked about your workday,
but what are you doing
to earn that pay? What little work
you do, comes to me to redo
while you enjoy another brew.
Are you actually a genius?
sly with your idiot sweetness, while I
set impossible bars with my keenness
and drown in the work, while you
play with your penis? Sure —
this industry’s lame, but the product
carries my name — and that’s one
thing I refuse to shame. | https://wormwoodtheweird.medium.com/dear-office-dead-weight-29ae49ca9b80 | [] | 2020-03-10 22:14:38.264000+00:00 | ['Life Lessons', 'Business', 'Self-awareness', 'Perspective', 'Poetry'] |
“Engage Hudson 2.0” Launches in Ohio | The City of Hudson, Ohio launched the newly redesigned “Engage Hudson 2.0” service, which is powered by SeeClickFix. City officials partnered with SeeClickFix to make it easier for citizens to request non-emergency improvements, and for city staff to take action.
According to Assistant City Manager for Operations, Frank Comeriato, Hudson’s information technology team is always on the lookout for innovative, user-friendly technologies for the city. “SeeClickFix is easy, flexible, and offers both photo and video features that better inform local government staff,” said Comeriato.
Officials say they are especially excited about SeeClickFix’s ability to geocode requests, which will save the city time when crews are out searching for issues that need fixing. After all, potholes don’t have addresses.
“Hudson residents deserve prompt, high-quality service,” said Comeriato. “The new app will ensure that all resident concerns are routed to the appropriate service department and addressed as efficiently as possible.”
SeeClickFix is now integrated with Hudson’s internal work order management system, Dude Solutions Asset Essentials. It will allow SeeClickFix to serve as the public-facing tool, channeling requests directly into the city’s existing workflow.
Hudson and SeeClickFix share a passion for continuous technological improvement. “We can see that SeeClickFix is as forward-looking as we are, so we can rely on the solution to adapt to residents’ needs in the future,” said Comeriato.
Engage Hudson is available for iOS and Android devices, and can also be accessed via the city’s website. | https://blog.seeclickfix.com/engage-hudson-2-0-launches-in-ohio-d2cd9c1bd012 | ['Fahoum Fahoum'] | 2020-06-10 14:28:12.560000+00:00 | ['Partner Launch', 'Civictech', 'Civic Engagement', 'Government', 'Govtec'] |
A Mid-Autumn Day’s Matinee | My play Translation by was just published here on Medium — you could say it’s “in previews” if you want to seem like a real theatre geek. In the traditions of Shakespeare in the Park and midweek midday matinee performances, I am unlocking it Wednesday (+ 3 others) for all to read, enjoy and share.
Translation by — Doing my best not to spoil it (yet still sell it), I can say it is a comedy of errors centering on a group of diverse players trying to ready a translation of a work… and not fully succeeding. | https://medium.com/the-coffeelicious/a-mid-autumn-days-matinee-8c8b6a864b13 | ['Ernio Hernandez'] | 2017-11-22 12:30:39.225000+00:00 | ['Reading', 'Fiction', 'Writing', 'Culture', 'Play']
287 Disturbing Words to Conceive While Engaged in a Staring Match with Our Turtle in the Road | What would you do if you walked the first 40 years of your life on your right foot only? Then, one day, plop! Your left foot falls out of your leg, onto the ground. Suddenly, you’re walking with both a left leg and a right one.
Wouldn’t it freak you out? Suddenly, something which had always been a consistent law in your life changes. It’s like learning about the existence of interconnectedness, the quantum multiverse, and the fact that we are all one whilst living life as an extremely selfish asshole in a hyper-capitalist country. Imagine the disconnect. I wonder how long it would take that person to realize their fucked up karma and the laws of this universe.
It’s enough to make one giggle.
But seriously, what are we going to do when time starts flowing two directions? The Earth is headed towards a dimensional blossoming, a fourth density awakening. That would involve some unexpected addition to the human experience. Don’t cha think?
Truth and Lies become interesting elements when experiencing reality fourth dimensionally. Donkeys and Bazookas don’t bake easily into the same pie together. At least, they don’t historically. Who knows? Maybe we won’t need the bazookas as long as we have enough whip cream to climb the staircase to Candy Land Lane. South Paw can be tricky to navigate this time of year.
The snakes in the bathroom make it difficult to take a shit. I don’t like the way their glowing red eyes pierce my soul as they look through me when I’m sitting on the toilet.
Upstairs and downstairs, together? What do I think? I think it’s a horrible idea! Whoever thought of it should be taken out and shot.
It was God’s idea.
Oh. Um. Well, then. I suppose we’re all just gonna have to play along with it.
I have no idea what they’re thinking. Either of them. God or myself.
Alexa told me I should keep a thought journal. I think that’s profound advice from a kitchen appliance.
Upstairs and downstairs together. Holy Hell, God is an asshole. You have no idea how mentally disturbed you people are downstairs.
Wait a minute. Are we downstairs now?
366 words later, I’m wondering what could the 287th word possibly be? It is it. | https://medium.com/illumination/287-disturbing-words-to-conceive-while-engaged-in-a-staring-match-with-our-turtle-in-the-road-87120611a44f | ['Markus Scorelius'] | 2020-12-18 01:06:43.247000+00:00 | ['Ascension', 'God', 'Nonsense Verse', 'Free Verse', 'Storytelling'] |
Cloudera’s CCA 175 — The 2020 Update: What Changed, What Didn’t, and How to Prepare for the Exam
The new version of Cloudera’s CCA-175 certification removes all of the legacy tools and puts the focus entirely on Apache Spark
Cloudera’s logo
A few weeks ago, Cloudera re-launched their Spark and Hadoop Developer Exam (CCA 175) with an updated exam environment and a set of key updates to the exam’s contents.
The exam continues to have a hands-on approach with a set of 8 to 12 performance-based tasks that the test-taker needs to perform on a Cloudera Quickstart virtual machine using the command-line, with access to most Big Data tools (Hive, Spark, and HDFS are all accessible via their corresponding commands).
If you’re planning on taking the exam in the upcoming weeks, below are the key elements to keep in mind and the most crucial tools and functionalities that you should focus on while preparing for it:
It’s all about Spark
For some reason, Cloudera kept giving at least two Sqoop-related tasks in the CCA 175 exam until the last months of 2019. Sqoop, although it was widely used at the start of the past decade, has been more of a legacy tool for a few years, and a very limited number of companies still expect candidates to be familiar with it.
Luckily, Cloudera took the cue and removed Sqoop-related tasks from the exam. Flume-related tasks (which weren’t as frequent) have also been removed, making the exam completely focused on the usage of Big Data’s most prominent tool, Apache Spark.
The key update of the 2020 version is that Spark 2.4 is now provided by default within the exam environment (instead of Spark 1.6), accessible via both Spark Shell (for Scala) and Pyspark (for Python), or even via Spark-submit if you prefer using scripts while performing the exam’s tasks. Spark 2.4 introduced a set of very useful features that make performing well at the exam a much easier task, notably when working with Avro files.
Get familiar with Hive and HDFS
Certain tasks within the exam still necessitate basic knowledge of Apache Hive and HDFS. Mainly, you need to be comfortable with the process of creating Hive tables with the different options related to the location of the data (if you’re asked to create an external table) and its format.
Also, at certain points, the examinee will need to use HDFS commands (mostly ls and tail ) to determine the format in which the input data is stored and to get an initial look at the data that they’ll work with.
The exam’s new version is definitely Spark-oriented, but not knowing basic Hive and HDFS commands can cost you multiple tasks and eventually prevent you from receiving the certificate.
Read, process, write
The exam’s tasks are actually different versions of the same problem. You have to read data from HDFS in a certain format (mostly text or parquet) using Spark, process it via filters, maps, aggregations, and joins, and then write it back to HDFS in a different format and using a certain compression codec.
Mastering this process will guarantee you a pleasant experience when taking the exam because you’ll simply be asked to do it multiple times on different datasets.
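A sketch of that read/process/write flow in the Spark shell (Scala, the exam's default environment). The input path, column names, output format, and compression codec below are invented for illustration; each exam task will specify its own. This cannot run outside a live Spark session, so treat it as a pattern, not a finished script:

```scala
// spark (the SparkSession) and the $-column syntax are available
// by default in spark-shell; sum/agg helpers need this import.
import org.apache.spark.sql.functions._

// 1. Read: load the input data from HDFS (here, Parquet).
val orders = spark.read.parquet("/user/exam/input/orders")

// 2. Process: filter, aggregate, and rename as the task requires.
val result = orders
  .filter($"status" === "COMPLETE")
  .groupBy($"order_date")
  .agg(sum($"order_total").as("daily_revenue"))

// 3. Write: save back to HDFS in the requested format and codec.
result.write
  .option("compression", "gzip")
  .json("/user/exam/output/daily_revenue")
```

An equivalent chain works in Pyspark; only the dataset, transformations, and requested output options change from task to task.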
Time management is also an important factor, so make sure to start by reading all of the tasks at the beginning of the exam, and then start right away with the ones that you’re most comfortable with. | https://towardsdatascience.com/clouderas-cca-175-the-2020-update-what-changed-what-didn-t-and-how-to-prepare-for-the-exam-716413ff1f15 | ['Mahdi Karabiben'] | 2020-03-24 10:34:23.149000+00:00 | ['Apache Spark', 'Hadoop', 'Cloudera', 'Big Data', 'Certification'] |
The King of Positivity
Here is where I feel alive, where I can be my true self. I take in the energy vibrating out from the stage and radiating from the close community. I am surrounded by free spirits, travelers, seekers, and ordinary people, all drawn to the Junebug Ranch campground in Tennessee every year to replenish their souls and disconnect from the world for a weekend. We are all here to experience the music and the community that has been created by the Muddy Roots Music Festival. For me and many others, the Muddy Roots Music Festival has become an annual quest to feel alive, commune with friends, and enjoy a musical escape from the stresses and worries of the outside world.
We come from all over, by plane, train, camper, packed in small cars and on foot, each with a unique story about what brought us to this place every year and why we keep coming. Over the years, I have become deeply curious to hear and share some of these stories. In 2018, I obtained a press pass and walked around the festival collecting stories from musicians, friends, campers, and staff.
I knew the first person I wanted to approach was James Hunnicutt, since he has been a performer and attendee of Muddy Roots from the very beginning.
James Hunnicutt is one of the most fiercely loyal, genuine people you will ever meet. He spreads unconditional love and positivity everywhere he goes and touches the lives of everyone around him. He is loved by nearly everyone at Muddy Roots. I caught up with him and asked him a few questions at Muddy Roots this year to learn more about his inner workings.
What ignited your journey into the world of music?
“Well, I started playing in 1986.” Hunnicutt told me he would constantly fiddle with his mom’s guitar when he was young, as she was a musician herself. When I asked him what inspired him to join the metal scene, he explained, “I had been listening to a lot of hard rock, heavy metal, and new wave. I heard ‘Master of Puppets’ by Metallica and it changed my life.” When James heard that record, he had a revelation. He thought to himself, “Holy shit, I want to do this.” The record inspired James to start taking guitar more seriously. It’s also when he delved into his incredible passion for music. He took this passion and ran with it. “I started my first band in ’87. We were like a junior high thrash cover band, trying to do Metallica, Slayer, and Iron Maiden…very much rebelling against a lot of the music I listened to as a kid.” Hunnicutt grew up listening to a lot of old country, rock and roll, and jazz. While these are all genres he loves now, in his teenage years, Hunnicutt was all about metal and punk. “When I got into the Misfits, they really bridged that gap back to the older music.” Hunnicutt realized how punk rock was influenced by old blues and rock and roll and began to see their similarities rather than the differences. “Just like people in our culture — when we focus on the differences too much — we lose the commonalities we share.” Hunnicutt started playing solo 20 years ago, but didn’t become a full-time soloist until about 10 years ago. He’d pick up a guitar at the end of his band’s performance and fans would always ask him to play some Elvis songs. He left his job in 2007 to pursue music full time and went on to perform 200–300 shows a year from 2008 to 2014. While he was doing something he loved, the sleepless nights filled with booming music and large crowds eventually caught up with him.
After burning the candle at both ends for such a long time, Hunnicutt took a break from music.“I feel like I’m selling myself short, and like I’m selling everyone else short, if I don’t love what I’m doing.” Luckily, Hunnicutt is back to loving music, and he is better than ever. At Muddy Roots this year, he performed a solo set and helped form a new band, the Mudfits. Hopefully, there will be many more performances to enjoy!
One of the many reasons Hunnicutt is well-loved by so many is the positive mindset he carries with him everywhere he goes. I often wondered how he keeps this uplifting vibe through the chaos of the world surrounding him.
How do you stay so consistently positive?
“It’s tough. It’s just like anything, like riding a bike. The more you do it, the better you get at it. It’s very much a practice.” Hunnicutt even promotes a movement called PMA (positive mental attitude). He gives out guitar picks with the message “PMA” to remind others about the importance of positivity. He does, however, emphasize that there will be times of darkness where being upset is perfectly okay. “With the PMA thing, I think some people misunderstand. They think you just decide to be happy and you’re happy all the time. I get upset and there are many things that depress me. I get incredibly angry because there’s a lot going on in the world. It’s hard because you think, ‘damn, we are regressing as a species right now’. But for me, more than anything, doing what I love with music is huge because I can do something constructive that I enjoy. It helps me get through all those dark times.” He then went on to discuss the spiritual path he has chosen and how that affects his everyday life. “Even more important and more prominent than music is meditation. It’s my spiritual path, that’s the strongest thing in my life. I’m not a Buddhist per se, but Buddhism really resonates with me.” Meditation inspired Hunnicutt to fall in love with the world, but more importantly, himself. Some wise words from Hunnicutt: “The most incredible person you will meet is yourself, and the most important relationship to have is with yourself.”
Hunnicutt has been to every Muddy Roots, so he has seen it all. He has a special connection with the festival, so I wanted to hear some of his best experiences at Muddy Roots.
What is your favorite memory from Muddy Roots?
“I look at [Muddy Roots] as this beautiful experience that we get to revisit and add more memories to each time we’re here.” Hunnicutt emphasized that he has no one favorite memory. He looks at Muddy Roots as one giant show, and he believes each year is a different song. “The music grows but the experience of being here is like this mountain of awesome, and each year we come back and put some more awesome on top of it. It’s this beautiful snowball effect.” One of Hunnicutt’s favorite moments is playing at Muddy Roots for the first time because it was “the beginning of something beautiful.” He related it to a first love because there’s “something special and endearing about the first time.” His first year playing, the power went out and bands were delayed by a few hours. He was pissed and upset at first, but then he took the opportunity and ran with it. He thought, “Fuck that, I can play unplugged. I’ve done it before. It’ll be me and an acoustic guitar, and even though it’ll be quiet, I’m gonna do it.” Everyone huddled up in the mud and rain and began singing “Teardrops”, one of James’ most well-known songs. Now, it’s a tradition for the crowd to sing the chorus with James. He crowed, “It happens every year, but it never gets old.” Fans have even created movements to go along with the song. “Teardrops has become such a beautiful sing-along with everybody, and every time that happens, it blows me away.” The first time the crowd sang along to Teardrops, James “completely lost it” because hundreds of people were singing his song to him. James came full circle by explaining that “sometimes the difficult shit that happens is the greatest fuel for your happiness and growth.” | https://medium.com/@anjolietowle/the-king-of-positivity-eadaeec87bef | ['Anjolie Towle'] | 2020-12-22 00:29:26.900000+00:00 | ['Articles', 'Music', 'Festivals', 'Interview']
Assessing your CX maturity | Find where you are in the journey towards the best customer experience
Assessing the maturity level of anything in life is not a straightforward process. Bananas go yellow and darken as they ripen, but the smell, taste, feel, and other variables are needed to paint the full picture. Similar to ripening fruit, we can assess business maturity across multiple channels and variables. Unlike ripening fruit, the goal isn’t to take a delicious bite but to know where you are before taking additional steps.
Consultants popularized maturity assessments as they looked for ways to assess their clients’ current states. In addition to tailoring recommendations for the next steps, a well-built assessment also provides a framework to bring teams together and build mutual understanding. Leveraging a maturity assessment consists of having informed stakeholders go through easy-to-follow checklists and rate items on a scale (e.g., 1–5). A facilitator brings the team together and ensures conversations happen as needed to keep everyone on track. Once the stakeholders finish rating, the results are totaled and compared, empowering stakeholders to articulate a unified story about their company’s maturity.
Multiple free maturity assessments covering CX are available online. However, some frameworks are better than others, and teams must decide which will best help them achieve their goal. To make the appropriate selection, teams should have sufficient domain expertise across the relevant business area(s), industry, problem area, and target organization. Cross-function teams will provide the best mix of breadth and depth of experiences. As always, the better the team and the better the tools available, the more likely the outcome is to be successful.
As part of my efforts to coordinate a client’s upcoming quarterly planning meeting, I wanted to ensure the team would dedicate time to focus holistically on Customer Experience. My research for CX Maturity Assessment templates took me to many places, but most importantly, it took me to Qualtrics. The Qualtrics maturity assessment does a great job of presenting items in a way that was easy for all of my client’s stakeholders to understand, regardless of their current role/previous experience. Below you can see the first page of the document:
Clients usually hire consultants because they are looking for help tackling a tough job, or because all they know is that they have a problem (don’t ask what or where). Having the right tools in place will go a long way towards building mutual understanding around the problem. A maturity assessment is but one of the many tools available for the job, with plenty of quality templates available online. On the off chance you cannot find one that fits your team’s needs, I hope there’s enough in this article to help you build your own. | https://medium.com/@roberto-portolorena/assessing-your-cx-maturity-9c91ad94c3cb | ['Roberto Porto Lorena'] | 2021-03-09 13:47:04.839000+00:00 | ['Maturity Model', 'Best Practices', 'Strategic Planning', 'Customer Experience', 'Metrics']
LEU for JFT: Our Platform | LEU FOR A BETTER JFT: MEMBERS RUN THIS UNION
Recognizing the need for new leadership, we are putting forth an energized slate of candidates for election to the Executive Council of Jefferson Federation of Teachers. We want to recommit JFT to being a membered-powered union that is dedicated to better treatment and better pay for all workers in Jefferson Parish Schools.
What is LEU?
Formed in 2018, Louisiana Educators United is an informal group, centered in Jefferson Parish, where activist teachers in Louisiana organize for better unions and better conditions for all students and teachers. During the coronavirus crisis, our weekly LEU organizing meetings became a stronghold for teachers to share, strategize, and organize as we faced the most challenging year of our careers. Without the support of union leadership, we led rallies in front of the JPS administration building that pushed back school reopening when Jefferson Parish Schools was dangerously unprepared. We forced leadership to meet more of teachers’ safety demands as the COVID-19 crisis unfolded in the schools of Louisiana.
Our slate is running for Jefferson Federation of Teachers Executive Council to re-energize JFT with new leadership and a new vision as we EDUCATE, ADVOCATE and ORGANIZE on behalf of all members. Candidates on our slate are emerging from the spirit of activism in Louisiana Educators United, and we are ready to bring that same energy to benefit our union.
EDUCATE
LEU for JFT’s goal is to work with all stakeholders to create a school system in which teachers and students can thrive. We are all career educators who see educating students as our most essential work. We believe a strong union leads the way to positive, equitable and inclusive schools.
LEU for JFT will create a Jefferson Federation of Teachers that is a powerhouse of information, including a robust website with FAQ sections, an informative and active social media presence, and fast responses to all phone calls and emails. Our goal is to educate all school workers about their contract, their rights, and how their union can help.
Most importantly, we want members to educate us. We will always listen to members.
LEU for JFT will open JFT offices to welcome members, extend office hours beyond the school day and increase availability of resources to members. We will use our resources to provide opportunities for teachers to educate each other and partner with Jefferson Parish families to educate the community about the issues in our schools.
We will offer support, committees and programming to meet the needs of all our members — elementary, middle and high school teachers, special education teachers, ELL and DLL teachers, paraeducators, guidance counselors, social workers, itinerant teachers, nurses, librarians, clerical workers, and cafeteria workers.
Our members have unique concerns and needs, and all our members deserve a union that recognizes and supports what they do.
ADVOCATE
LEU for JFT will advocate for better pay and better treatment for all members.
We will advocate for better working conditions and greater teacher autonomy. We will advocate for the end to unpaid overtime and unpaid labor during the school day. We will take back our planning periods from unnecessary cluster meetings and demand to be paid for both in-person and virtual professional development as is stated in our contract.
We will advocate for safe and up-to-date buildings with fully functioning heating and cooling systems that provide adequate ventilation. We will insist on proper plumbing and other necessary updates to make schools cleaner and safer.
We will advocate for sane schools, in which administrators at the school and the 501 level work to support and encourage teachers, not intimidate, harass or bully teachers. LEU for JFT commits to work systemwide to create beneficial, collaborative processes for professional development, planning, observations and teacher evaluation.
We will advocate for children by working to reduce class sizes and the amount of time devoted to teaching to the test and administering standardized tests. We will advocate for enrichment in the arts and music in our elementary schools. In middle and high schools we will advocate for vocational education, restorative discipline practices, the reduction of the number of administrators and security guards, and an increase in the number of social workers, guidance counselors, and school psychologists.
We will demand to be part of decision making processes about grades, curricula, instructional technology, leadership and all matters that affect our everyday work lives.
ORGANIZE
The most important work of any union is to organize the members. We need to build a strong union that appeals to all eligible JPS teachers and staff. Our goal is to substantially increase membership for the greater good of all. Recruiting, electing, training and supporting effective building representatives as our building-level frontline is the best way to build a responsive union members can rely on.
We need to be able to support and help elect members of the Jefferson Parish Schools board and other elected officials who are unapologetically pro-teacher, pro-students and pro-public schools. We need to organize with families because we know that teachers have their children’s best interest at heart. We need to organize with community groups and other unions that support teachers and share our goal of creating positive, equitable and inclusive public schools.
We must be able to back up our advocacy for better pay and better treatment. Every union must have a strike fund and be ready for the day when it is time to stand up for the rights of students, families and teachers. The next crisis will come and we have to be prepared.
We cannot be caught off guard as we were when the coronavirus crisis forced JFT members into unsafe, chaotic and traumatizing working conditions. Teachers were literally asked to risk everything to keep our schools open, and teachers’ voices were not listened to and not heard during the mismanaged transition to in-person learning.
Now is the time to organize for what is right: professional pay and professional respect. When you vote for our slate of candidates (LEU for JFT) you are voting so that going forward we will have a union ready to face crises big and small. Together, we will create a union that is ready to demand what we deserve after risking our lives and our sanity for our students in the 2020–2021 school year. | https://medium.com/@louisianaeducatorsunited/leu-for-jft-our-platform-cb72f578c4d5 | ['Louisiana Educators United'] | 2021-03-18 13:36:42.803000+00:00 | ['Platform', 'Louisiana', 'Teacher Unions', 'Teachers Unions', 'Jefferson Parish'] |
I have always read and heard from others who categorize ego as a bad or negative emotion yet it… | I have always read and heard from others who categorize ego as a bad or negative emotion yet it remains to be a significant trait in many humans. What is ego? Why is it so bad to be egoistic? After all, the internet says that ego is just a person's sense of self-esteem or self-importance and why would that be harmful? Befriending my mind was one of the most amazing decisions I took in my life and the deeper I dive in, the more beautiful it gets. I am able to unlock some of the rusted doors to make way for new wisdom to enter. Psychology, Cognitive Behavioral Therapy, The Buddhist philosophy of life, are making me realize how powerful our mind is and what all can be achieved if we learn to control our minds. Practicing mindfulness has led me to slowly and gradually drift from living in "what is" rather than my former style of living in "what if?"
Ego is considered as one of the negative emotions because it blinds us into believing that we are superior in one way or the other and resists us from being humble, being kind. Excess ego makes us forget that we are all connected, interdependent and part of the same planet. Self analysis is helping me spot the circumstances that push my ego switch which has been restricting my progress. I have highlighted the situations where I felt my ego taking the driver's seat:
Every time I have a conflict of thought with someone
Someone tries to correct me
Someone doesn't take my suggestion/advice
Someone questions my ability
Someone gives a feedback on a belief that I hold dear
Someone tries to threaten my dignity
Someone lying to me
My ego wasn't just preventing me from seeing the outer world more compassionately but it was blocking me from seeing within. It was time to regulate the supply of ego and to keep it in check that it doesn't obstruct the flow of wisdom that I was so eager to receive. "I", "Me", "Mine" had kept me protected in a shell that made me live in a false illusion that I didn't need anyone, that I was independent and do not need to rely on anyone other than myself. How can I say I am independent when my whole existence was dependant on another human being before my birth? I relied on my mother to provide nutrition and warmth. After birth, I was still dependant on them to teach me how to live. This realisation has not only changed my perspective about interdependence but it has also gotten me connected with other beings on a spiritual level. I am now able to listen to someone talk without having the urge to respond, without preparing a mental debate or a sarcastic joke just to prove a point. I am able to be present, able to pause my meaning making machine for time being and just be present in the conversation.
My apologies are more genuine than before
I am more attentive and present in conversations now
My growing knowledge is giving me a sense of connectivity rather than making me feel a sense of superiority
If someone cuts me on the road, my ego won't tell me to honk back in aggression. I won't back down to apologise the moment I realize that I have made a mistake. I won't deny helping someone just because they had wronged me once. I won't feel embarassed to admit if I don't know something. This doesn't mean I won't pat myself every now and then to boost my self-esteem but it sure will never nudge me away from staying grounded. | https://medium.com/@aandezhath/i-have-always-read-and-heard-from-others-who-categorize-ego-as-a-bad-or-negative-emotion-yet-it-451ba6063899 | ['Anila Andezhath'] | 2020-12-21 00:49:11.165000+00:00 | ['Positive Thinking', 'Trainyourbrain', 'Peace Of Mind', 'Ego', 'Mindfulness'] |
Fire to Night | while mist gathers frost
the molecules dense
I retreat with sun
& the heaters inaugurate
singed dust scent
welcoming fire to night
against chill outer pane,
space populates with light:
strobing box
the stove bulb hot,
candle’s umbra;
each resonate a facet of life
sectioned off in moody lumens,
pondering night
in probing halos,
strings of Halloween lights:
the dusk-ripened orange of Samhain
and purple alight magic to eye,
these textures of transience brought in
with nod to Death;
I stead permeable in time | https://medium.com/get-inside/fire-to-night-9f583101dde9 | ['Jessica Lee Mcmillan'] | 2020-10-13 17:41:59.179000+00:00 | ['Poetry', 'Fall', 'Life', 'Light', 'Halloween'] |
Ampleforth + DeversiFi | Today we’re happy to announce that Ampleforth (AMPL) is now open for trading on DeversiFi! There are a few things that make this a special milestone.
DeversiFi
DeversiFi is a non-custodial exchange platform, based on the 0x protocol. This means you can trade directly with the security of your own wallet and the trade settlement happens onchain.
One big challenge of Decentralized Exchanges (DEXs) is achieving deep liquidity. Since the fees have traditionally been higher for DEXs, it’s been tough for these decentralized alternatives to compete with centralized ones.
DeversiFi, however, shares liquidity with Bitfinex. This guarantees that you’ll see the combined order book depth of both DeversiFi and Bitfinex through the same trade engine.
AMPL’s first 0x Exchange
This marks the first time AMPL has existed on a 0x-based exchange. AMPL has already operated on “Automated Market Maker” exchanges like Uniswap, Kyber, and Bancor.
DeversiFi offers another way to trade AMPL in a decentralized way, but using an order book rather than liquidity pools.
We’ve avoided other 0x exchanges up to now, because of lack of order book depth. However, DeversiFi’s liquidity sharing makes this no longer an issue.
AMPL on Layer-2
In April, DeversiFi 2.0 will launch in conjunction with Starkware ZK-Rollups. Starkware is a Layer-2 solution that leverages Zero Knowledge Proofs to drastically reduce fees and increase throughput. DeversiFi estimates they’ll be able to support over 9000 transactions per second, rivaling centralized exchanges.
It will be exciting to see AMPL’s novel supply adjustment mechanism working in tandem with an offchain Layer-2 like Starkware. We’re looking forward to scaling Ampleforth with this latest technology.
Brandon Iles, ampleforth.org
You should follow me on Twitter: @brandoniles | https://medium.com/ampleforth/ampleforth-deversifi-206ede4a3143 | ['Brandon Iles'] | 2020-03-23 16:35:40.120000+00:00 | ['Ampleforth', 'Ampl', 'Dex', 'Ethereum'] |
BEST SMARTPHONES UNDER 15000 IN INDIA | The under Rs.15,000 value point is significant to the Indian cell phone industry since it is available to many individuals. Throughout the long term we have seen different producers endeavoring to catch this portion by offering better cameras, quicker processors, and better batteries. To be the best in this portion, producers have overwhelmed the sub-Rs. 15,000 value section with a scope of good cell phones.
The Qualcomm Snapdragon 660 which was before selective to cell phones above Rs. 20,000 is currently accessible under Rs. 15,000. You can likewise discover mightier processors like the Qualcomm Snapdragon 675, and MediaTek Helio P70 at this value point, giving clients incredible equipment at a deal cost.
Great camera innovation has likewise streamed down to the sub Rs. 15,000 section throughout the long term giving individuals the alternative to catch great photographs without paying a bomb for lead gadgets. A portion of the ongoing cell phones we have found in this value point pack in as much as a 48-megapixel sensor. With countless alternatives on the lookout, getting a cell phone isn’t simple. Be that as it may, we’ve assembled a rundown of the best telephones under Rs. 15,000 for you. Obviously, we’ve confined ourselves to telephones that have fared well in our survey cycle.
1. Realme 7
The Realme 7 gets three principle updates over the Realme 6 — another SoC, a greater battery, and another essential camera sensor. It includes a mirror-split plan, which makes some intriguing examples when light hits it. The Realme 7 is really thicker (9.4mm) and heavier (196.5g) than the 6, because of its bigger battery, and this is truly perceptible in day by day use.
The Realme 7 is the main telephone to make a big appearance with the MediaTek Helio G95 SoC. This is a refreshed adaptation of the Helio G90T, which was found in the Realme 6, yet it is anything but a significant update. Execution is pretty good. The Realme 7 uses Realme UI, in light of Android 10, which worked easily. Face acknowledgment and the side-mounted unique mark sensor are likewise fast. The Realme 7 is acceptable with games as well. Fight Prime looked extraordinary at the most elevated designs settings and interactivity was smooth.
The Realme 7 gets a stout, 5,000mAh battery and you can charge the battery decently fast as well, on account of the 30W Dart Charge quick charging.
The new essential back camera in the Realme 7 offers a recognizable improvement in pixel-binned pictures, comapred to the Realme 6. It displays improved unique reach and introduction, with better subtleties as well. Low light photographs look cleaner as well, with less grain. Shots caught utilizing Night mode look additionally satisfying, contrasted with what the Realme 6 can deliver. The Realme 7 can shoot recordings at up to 4K, however without adjustment. Tones are somewhat on the hotter side.
2. Motorola Moto G9 | https://medium.com/@shivanshudagar999/best-smartphones-under-15000-in-india-c83db7b9e3a | [] | 2020-12-22 20:45:00.331000+00:00 | ['Blogging', 'Mobile', 'Tech', 'Technology', 'Smartphones'] |
The problem with Vitamin C in skincare | Vitamin C is probably the most hyped up skincare ingredient in the skincare world. It is truly the hero ingredient for everybody who is looking to brighten their complexion, get rid of dark spots and even out the skin tone. But, in many cases it really just is not that effective! This blog post is all about the problems with vitamin C and how best to make it work.
The main problem with Vitamin C is that it is an unstable molecule. What this is means is that it oxidises quickly on exposure to light, heat and air, making it physiologically ineffective. Product packaging and potency will determine whether you are getting the real deal, or buying an imposter product. Dr. Zamani gives us a guide on potency here.
Ingredient potency
“When looking for a suitable Vitamin C product, finding the right concentration is important. A potency of 10–20 per cent means that results for the skin will be seen quicker and more uniformly across the skin…A concentration of between 3 and 10 per cent will still be effective, in an L-ascorbic acid or ascorbic acid form.”
Maximum skin absorption of Vitamin C occurs at 20% strength. Increasing the potency beyond this point does not equal better skin absorption! Furthermore, using agents with higher potency levels such as 10% strength and 15% strength will give you more apparent results in a shorter amount of time; but you will risk irritating the skin. With skincare it is best to start with the least potent ingredients so that you can give your skin an opportunity to adapt towards these new and serious ingredients. You can do more harm than good if you just go straight in with 20%, you can always work towards the big boys — please bare this in mind, particularly if you have sensitive skin.
Packaging is more than just what meets the eye!
Before you go and spend £200 on a vitamin C product, because we both know you can find one for that price tag, you should know something about packaging.
Vitamin C can become totally useless if not packaged and stored in the right way. Make sure that you keep it away from light to avoid oxidisation and weakening the potency of the ingredient. Also ensure that the packaging is tightly sealed as too much air has the same effect! Dr Zamani recommends to opt for formulas in “air-tight packaging, pumps or single-use, individually wrapped products”.
Individually wrapped products are great for experienced Vitamin C users. You can be the most sure that the ingredients are stable in this form. DIY kits are a great example, as they isolate the vitamin C powder away from the essence to mix it into, until you are ready to use it.
The different types of Vitamin C, and what to look out for
Now, I highly encourage everybody to really analyse the ingredient list at the back of all products, just to be aware of what you are putting on your face. Ingredient lists go in the order of highest potency: the first ingredient is most present in the product, and the last ingredient is the least present. Vitamin C is not necessarily labelled as ‘vitamin C’ in the ingredient list however, as it comes in many different forms. Dr. Mahto has your vitamin C dictionary, and explains it all here:
Investing in a product that combines vitamin C with ferulic acid and Vitamin E can result in even better skincare results. “Vitamin C can be combined with the anti-ageing, UV damage fighting antioxidant vitamin E, or hyaluronic acid, which penetrates into the dermis boosting the elasticity and hydration of the skin. The protective barrier on the skin locks in moisture, which gives the skin a youthful appearance. It is also often combined with ferulic acid, a powerful antioxidant that combats the free radicals in your skin.”
How should I use vitamin C? A cream? A wash? A mask? A serum?
How you use an ingredient is just as important as the effectiveness of an ingredient; it doesn’t matter how much a product can do for your skin if you are only going to wash it down the sink, right? Serums are your best bet for really getting the most of the super-ingredient, serums usually are the form that include a concentrated and active vitamin C. Serums are also great because it means that you can layer your skincare with minimal problems, whereas using something a little more direct may cause some interference with what you can and cannot mix on top of the vitamin C.
Now, you have had the run down of all of the problems with vitamin C and how to get around them; it might feel like you just do not know what products there are left that actually work! Here are some of my personal favourite vitamin C products, and what I have found actually work.
People with sensitive skin, or people who are trying vitamin C for the first time.
No7’s youthful vitamin C fresh radiance essence is an amazing product to start with. At £19.50, you can pick this up from your local Boots or online from several retailers. The reason that I recommend this so much is because it is a DIY product.
You have to mix and shake a 5% vitamin C strength in with a gel like elixir and the concoction lasts you for a two week ‘course’. Due to the low potency of the product, it is great for sensitive skin and for people who are trying Vitamin C for this first time. Working up to higher potencies is the key to success!
The one thing I really urge with this product is to keep it away from light. As it is a two week course, it does not have too much time to go completely unstable, however you can always prevent this through keeping it away from the light.
People with oilier skin
The Phloretin CF gel is a pricey yet very effective form of vitamin C. At £150.00, Phloretin is dermatologist recommended as a superior brand, but it really does work. This product not only features a 10% strength vitamin C, but it also has Ferulic acid that enhances the benefits of this whole product. Due to its water base, it does not feel too thick or heavy on oily skin, and glides on to the skin, seeping in almost instantly.
The packaging helps to keep the product stable, as it is in a dark and opaque bottle, but always keep vitamin C away from the light anyway, especially if you spent £150 on a bottle!
Best for dry skin
Drunk elephant’s C-firma day serum is an excellent product that combined 15% L-absorbic acid (vitamin C) with Vitamin E. The product also has exfoliating fruit enzymes and hydrating Sodium Hyaluronate to leave the skin looking not only bright and even, but also very smooth thanks to the AHA’s. Alongside the vitamin E, plant oils within the product also provide nourishment to the skin, making it a dream for anybody with dry skin. Although this product is £67.00, it is an amazing investment to get your skin on the right track — I always recommend this product because it just has never ending benefits!
That’s all folks! I hope that this blog post has helped you to understand Vitamin C a little bit better, and hopefully help you to spend your money more wisely.
See you soon,
Roubs,
Xx | https://medium.com/@roubamustafa/the-problem-with-vitamin-c-in-skincare-454768168413 | ['Rouba Mustafa'] | 2019-09-14 20:43:38.496000+00:00 | ['Uneven Skintone', 'Beauty', 'Skincare', 'Vitamin C', 'Brightening'] |
Christmas Greeting From Garrett On the Go: Christmas 2020 | Christmas Greeting From Garrett On the Go: Christmas 2020
May this Christmas be one where we get rid of apathy and get involved! Let us utilize the gift that God brings every Christmas to heal as He heals through Jesus Christ !
This Christmas and every day, Thanks Be to God! | https://medium.com/@alexginnyc/christmas-greeting-from-garrett-on-the-go-christmas-2020-737dc87e08ab | ['Alex Garrett'] | 2020-12-26 09:07:57.728000+00:00 | ['Hope', 'Christmas', 'Inspiration', 'Faith', 'God'] |
My 2019 Museum Digital Engagement Resolutions | Every year, I take some time to think about what I’ve accomplished and what I’d like to accomplish in the next year.
Focus More on Instagram Stories
Even with my personal social media, I tend to be the type that takes the photo and then shares it hours or even days later when I’ve had a chance to think about what I want to say. This type of approach doesn’t really work well with the Instagram Stories feature, which is meant to share things as they occur in a playful manner.
However, Instagram stories are not going away- they are not a fad. In fact, they are increasing in engagement when the rest of the social media world seem to be decreasing.
I resolve to make an effort to share more through Instagram stories. To achieve this goal, I’m going to:
Make an Instagram stories strategy that is separate from Instagram; by treating it as its own channel, I’ll be able to produce content that is perfect for that style of interaction and use it to the fullest
Set calendar alerts to remind myself every morning to check engagement; stories are difficult to track because we don’t have great analytics, so I’m going to make it a priority to check on them every morning
Crowdcourse content and reshare content; I’m increasingly tied to my desk (or working remotely with my ten week old baby), so I’m going to double my efforts to get co-workers to send me content, and celebrate visitor content by sharing more
Bonus: create branded GIFs… this is one of my champagne dreams for the year. I’d love for us to have some fun historic GIFs we could share. Fingers crossed this happens.
Use Analytics More to Inform Strategy
I hate to admit this since I’m a huge fan of data, but when things get busy, analytics are the first thing I drop. Why? Because really diving into the quantitative and qualitative data takes time, time I don’t often have. Instead, I’ll do a surface check of the big numbers and move on. This year, I’d like to start paying more attention to the details of our analytics to craft strategies that are better informed.
Of course, I’m still tight on time so here’s how I’m going to do this:
Take a little time at the beginning of this year to review all the different measures we could possibly look at and the institution’s broader goals to determine what are the most important
Set a calendar event for myself every week, and a longer one each month, and an even longer one every quarter, where I dedicate time to creating a report that sums up what we are seeing from analytics and how it relates to our goals
Bonus: write up some qualitative thoughts and insights from each of these meetings as notes to myself that I can refer to. The double bonus would be taking the time to share those with the broader museum and social media community.
Build Something
I like to have one project each year that taps into my creative side and gives me the chance to flex my digital muscles. In 2017, I was able to work with my cousin, an artist, to create a video game for an exhibition, and I crafted a new front-end for a database. Last year, I started learning more about Photoshop so I could create engaging layouts and designs for our website and social media, and I developed an interactive skin for a different database (yes, this seems to be a recurring theme in my work).
This year, we’re hoping to tackle some big projects that will improve engagement in exhibitions, and I want to build something that gets visitors excited about what we do.
This goal isn’t as concrete, but here’s how I’m thinking I can do this:
Find a project that pushes my skills; I want a challenge, something I can learn from, but something that is also doable. There’s a digital card project that fits, as well as a a project using an API that is possible.
Write up the entire experience; when I was in grad school I would share the progress of my digital learning- I miss doing that, so I’m going to make this a priority with my next digital project.
Bonus: hide an easter egg in it for other nerds 😉
Work More with Influencers
This is a world that I’m still not totally comfortable with; even though we’ve seen some good results from doing events and programs for influencers. The influencer world is not going away, it is merely changing. Now, we’re seeing a trend towards micro-influencers and authenticity. That means finding people who are trusted within their little online communities; which is a tough thing to do.
My goal this year is to identify at least 10 influencers locally and 5 nationally that I can target. How will I do this?
Start following relevant hashtags to find the micro-communities on social media and follow along to determine who fits our brand and approach, and would be good to bring to the museum.
Create interesting private events for influencers, or public events led by influencers, that will attract new audiences. When we’ve done this in the past, it has been highly successful, so I want to make it more of a priority.
Find a diverse range of influencers to fit the diverse range of visitors we have- not only do we want locals and tourists, we want researchers and scholars. This means finding people who aren’t traditionally ‘influencers’ to come out and share their experience at the museum.
Bonus: create a low-maintenance experience for influencers that would be VIP, cool and unique, but that doesn’t require a lot of staff time and is easy to repeat. | https://medium.com/@kmeyersemery/my-2019-museum-digital-engagement-resolutions-5e11344fdd6 | ['Kate Meyers Emery'] | 2019-01-24 16:16:38.017000+00:00 | ['Museums', 'Social Media'] |
The Great Realignments of Sports Viewing | The Great Realignments of Sports Viewing
By Ryan Miller
Media conglomerates and new OTT platforms are firmly entrenched in a contest for content in what has come to be known as the “streaming wars.” While this battle is currently being waged over original content, there lies a much more lucrative opportunity in live sports content looming over the horizon.
Media rights to the major American sports leagues are set to expire beginning in 2021, which, coupled with shifting viewer behavior, could mean that we are on the precipice of a fundamental realignment rivaling the magnitude of The Great Schism. Historically, the only companies with the requisite buying power to acquire sports rights have been traditional linear TV entities, but as emerging OTT and online streaming platforms continue to scale and mature, there exists an opportunity for them to disrupt the traditional model when the current contracts for sports broadcast rights expire.
The Lucrative Yet Fragmented Sports Market
As it stands, the U.S. is home to four of the top six professional sports leagues by revenue worldwide: the National Football League ($6 billion annually), National Basketball Association ($2.5 billion+ annually), Major League Baseball ($1.5 billion annually) and the National Hockey League ($200 million annually). Broadcasting rights for these leagues comprise a significant portion of the $22.4 billion the United States sports market is set to generate in 2019, according to the SportsBusiness Global Media Report, representing nearly half (44%) of the total global sports rights market.
The National Football League’s deal extends across four major ownership groups: the CBS Corporation, Comcast, Fox Corporation and The Walt Disney Company/Hearst Corporation. Why not strike a deal with just one multimedia conglomerate to broadcast all the NFL games, you may ask? For one, a $6 billion price tag, plus the exclusivity fees that would inevitably be tacked on to the agreement, is ultimately too steep for one network to shoulder independently on an annual basis.
Additionally, there would be increased regional fragmentation for the dissemination of content. As it currently exists, the NFL — and its major American sports counterparts — broadcast live sporting events on a geographical basis in order to get the maximum coverage out of their networks. What that means is that although there may be two games being played simultaneously on a network, you’re going to be served the one that aligns with the market you’re in. Regional and local sports rights further complicate this coverage conundrum. Though the NFL’s relatively light schedule allows for all games to be broadcast across national networks, sports with longer seasons — everyone else — partner with other stations to ensure 100% coverage throughout the year. For example, you might be able to watch the Knicks when they’re on ABC, but you’ll need to subscribe to a supplemental network such as the independently owned Madison Square Garden network in order to watch the Knicks games not broadcast on ABC.
This regional fragmentation poses a major problem for most consumers, as 64% of fans live far away from where their team plays, according to Digital TV Research Limited. Media entities are recognizing the importance of catering to this increasingly distant and digital consumer, and the need to do so with a finite amount of resources at their disposal in the linear realm.
The Upcoming Realignment
Technological evolution has shepherded an entire generation away from pay TV in the era of “cord-cutters” and “cord nevers” — with eMarketer reporting that 50 million people will have dropped cable or satellite TV subscriptions by 2021. More importantly, eMarketer also notes that a loss in viewership of that order will slash the ratio of pay-TV subscribers to cord-cutters and nevers from 4:1 to 2:1. While these audience trends are certainly not a death knell for pay TV, they are forcing more programmers, especially ones that focus on live content, to build streaming products to serve shifting consumer attention.
Broadcast networks are building or already have streaming add-ons in their portfolios, available both as an independent purchase and as a supplemental payment tacked on to your pay TV package. ESPN+, NBC Sports Gold, and B/R Live are Disney, Comcast and Turner’s OTT solutions respectively, and each offers a host of unique programming in addition to live sports coverage that’s simulcast across their family of networks.
These solutions were conceived not only to expand and supplement these major corporations’ distribution reach, but also to appeal to a disappearing younger audience. A recent McKinsey study showed that more Gen Xers than millennials follow sports closely (45% versus 38%) and, according to a Magna Global study of Nielsen TV ratings of 24 sports, all but one (women’s tennis) have seen the median age of their TV viewers increase during the past decade. Though a dip in sports viewership amongst younger audiences is evident, it’s not solely due to disinterest. The way in which Millennials and Gen Z consume content is radically different from that of their predecessors.
The Shift in Sports Viewing
With the human attention span continuing to dwindle in the digital era, fans are seeking innovative viewing experiences that go beyond planting themselves in front of the home entertainment center for hours on end.
Social has become an increasingly important part of the sports conversation, from both a discussion and a content perspective. The aforementioned McKinsey study identifies that 60% of Millennials use at least one social media platform for sports coverage — a 20-percentage-point increase over Gen X’s 40%.
Statistics like the one above reinforce the fact that sports coverage is fundamentally changing and, more importantly, demonstrate the problem linear TV has in keeping audience attention. According to Deltatre, 2021 is the year OTT is set to explode from a sports investment perspective, with an estimated $6.8 billion to be spent in the space. A study conducted by Telaria and Adobe justifies this shift in investment: 42% of people keep the cord exclusively for live programming, and 30% of people would cut the cord if there were a streaming solution for live content.
In turn, we’re seeing a response from leagues, broadcasters, and other online viewing platforms offering direct digital solutions. In addition to the streaming add-ons currently offered by traditional media providers, Axios has identified other types of sports streaming services — skinny bundles, pure plays, and tech platforms — that could stand to steal some of the market share.
Perhaps the most troubling to the linear TV empire are the OTT solutions offered directly through the leagues. Each of the four major American sports leagues offers a subscription-based content-on-demand model. Take the NFL, the biggest U.S. sports league, for example. Currently, an NFL Game Pass subscription, which grants access to all of the out-of-market games, costs $99 annually and is significantly more cost-effective than paying for a cable or satellite subscription just for NFL coverage. Satellite and cable providers have long provided services such as NFL Sunday Ticket, but even that price point rings in a little high for dedicated fans at $293.94. When DirecTV purchased NFL Sunday Ticket — coverage of every out-of-market game — in 1994, it cost the company $25 million; that price tag has since surged to a staggering $1.5 billion annually. Cord cutting is having a significant impact on DirecTV’s sports distribution strategy, per the Wall Street Journal, causing the AT&T/WarnerMedia-owned entity to rethink the value of the deal. Though these services enable professional sports leagues to cut out the middleman, it seems improbable this model will see widespread adoption, as it requires a tremendous amount of additional infrastructure investment in order to ensure 100% reliable service delivery as the offering scales.
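For a rough sense of these gaps, here is a hypothetical back-of-envelope calculation using the figures cited above (the prices and fees are as reported in this article, not independently verified):

```python
# Back-of-envelope comparison of the NFL figures cited above.
# All dollar amounts are as reported in the article, not verified.

game_pass_annual = 99.00        # NFL Game Pass, per year (out-of-market games)
sunday_ticket_annual = 293.94   # DirecTV NFL Sunday Ticket, per year

# What a dedicated out-of-market fan saves by going direct-to-league
consumer_savings = sunday_ticket_annual - game_pass_annual

# DirecTV's rights fee for Sunday Ticket: $25M in 1994 vs. $1.5B annually now
fee_1994 = 25_000_000
fee_today = 1_500_000_000
fee_multiple = fee_today / fee_1994

print(f"Annual consumer savings with Game Pass: ${consumer_savings:.2f}")
print(f"Sunday Ticket rights fee growth since 1994: {fee_multiple:.0f}x")
```

Run as written, this works out to a savings of $194.94 per season for the fan, while the rights fee has grown 60x, which illustrates why the economics look so different from the consumer's side and the distributor's side.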
In addition to direct-to-consumer league solutions, there exist other options for sports packages outside linear TV. Skinny bundles have become increasingly popular amongst sports fans uninterested in paying for full cable packages. Both Hulu — which now has live sports, in case you haven’t heard — and YouTube offer add-on subscriptions to both their free and premium models for sports fans.
Big tech companies such as Facebook, Twitter, and Amazon’s Twitch are also venturing into sports broadcasting. Facebook experimented with sports live streaming in 2018 through a 25-game trial of MLB afternoon broadcasts, with mixed results. Latency and quality proved to be issues, but the significance of creating an interactive environment was not lost on MLB officials.
As of the 2019 season, Twitch has secured the rights to broadcast Thursday Night Football games on its platform. Perhaps most interesting about the agreement is that not only is the game available for viewing through the Thursday Night Football channel, but Twitch users also have the ability to broadcast over the stream on their own channels. This again demonstrates the big leagues’ awareness of the increasingly social, interaction-heavy nature of modern sports viewing.
Maximizing fan engagement through OTT solutions leads to a 24% uptick in subscriber acquisition, according to Deltatre, suggesting that experimentation with innovative formats could be worth it for major sports leagues. At the end of the day, tech companies are also equipped with bigger cash flows to spend on acquiring sports rights, should they choose to compete with the declining pay-TV providers.
The Future of Sports Viewing
With the media rights free-for-all in the not so distant future, what does all this mean in terms of posturing and positioning from each of these content distribution platforms?
It’s obvious that mega corporations like Disney, Fox, and Comcast have a legacy advantage. Not only have they provided reliable service to the major sports leagues for years; most of the audience still resides there. These partnerships are forged in years of lucrative success and are extremely tight-knit. According to Bloomberg Businessweek, “one reason the NFL has been so reluctant to dive into streaming is that it understands its importance to the legacy networks and wants to protect them.”
Perhaps best positioned to dethrone the linear sports options are the big tech companies. While skinny bundles offer the sports content consumers want at a much more desirable rate, they lack the requisite innovation to position themselves for the future.
Tech companies like Twitter are building integrations that keep fans engaged, like its partnership with Turner Sports and the NBA, which offers live-streamed games on the social platform with the option for viewers to vote on a single player to follow during the stream.
While Twitter’s collaboration encouraged fan participation through the polling system, the player-lock capability is something that e-sports have specifically homed in on in recent times. 2019 saw Riot’s League of Legends debut its pro view for LEC and LCS matches, while Blizzard’s Overwatch League mirrored that move with its professional viewer. As opposed to the broadcast locking in on a player, these tools let viewers control their viewing experience in totality, allowing them to seamlessly jump between different players’ POVs and the live game feed.
Amazon’s Twitch already has the infrastructure to support live-viewing content, hosting millions of visitors each day, though that traffic is more widely distributed than a single live-sports stream would be. Twitch also has the ability to add interactive features much like Twitter, with in-stream overlays, polls, and peripheral content. Earlier in 2019, Twitch debuted new functionality in conjunction with Reese’s that gamified the viewing experience by adding interactive elements directly on screen for users. While this use case might not translate one-to-one for sports, there are companies considering this experience specifically from a sports perspective.
A sports-specific example of such an extension is Court Vision from the LA Clippers, powered by technology company Second Spectrum. Court Vision uses machine learning, data visualization, and augmented reality to enhance the broadcast. It also offers a variety of camera angles and audio commentary options. Perhaps the most interesting functionality of Court Vision is the hyper-customization aspect that allows viewers to track game information in real time, such as advanced statistics and heat maps for respective players.
While it’s nigh impossible to forecast who will emerge victorious from this battle that begins a year from now, all the early signs point to increased diversification in where content is hosted and where eyeballs are moving.
Brand Takeaways
If sports viewing goes totally digital, there will be a significant increase in ad opportunities. No longer will brands be limited to :15 and :30 spots or title sponsorships. Brands will not only have the opportunity to provide additional value to the experience — through custom integrations and interactive overlays — but will be able to target individuals on a one-to-one level, subsequently increasing media efficiency.
Sports moving to online platforms also affords brands opportunities to push products at the point of sale, making ecommerce strategies more seamlessly implementable. Twitch recently rolled out Amazon Blacksmith — allowing streamers to showcase their favorite products so that viewers can purchase directly through the in-stream overlay. This level of accessibility could easily be translated to sports broadcasts with plenty of opportunity during breaks in the action to bring you a word — and product recommendation — from your sponsors.
Live sports moving to a digital-first platform would provide brands an opportunity to surround the conversation more holistically. All ancillary sports content already lives on the web — from post-game recaps, to pre-game custom content segments and even player-promoted social posts. This approach to content creation gives brands an opportunity to surround the conversation from kickoff to the final whistle, or carve out a slice of the action if they’re seeking a more efficient media play.
One obstacle that will prove challenging to brands should live sports ultimately move to OTT solutions is the employment of ad-blocking tech. Per Statista, 25.8% of internet users employed ad blockers in 2019, with that figure only set to rise over the next few years. Though ad blocking is not a new phenomenon, it’s certainly another consideration for brands when determining how to most effectively run on OTT. Ultimately, the brands best positioned to take advantage of this potential shift to online viewing are the ones pursuing innovative formats that do not detract from viewing experiences.
As 2021 draws closer, the prospect of linear TV behemoths relinquishing any of the broadcasting rights they currently possess would only expedite the mass exodus from linear TV that has already begun. Skinny bundles will no longer be “skinny” but “standard,” and cable boxes may become relics of ancient history. There are grave implications for the future of television should sports start to emigrate elsewhere, but what is certain is that the way we consume sports will never be the same. | https://medium.com/ipg-media-lab/the-great-realignments-of-sports-viewing-f46941229e33 | ['Ipg Media Lab'] | 2019-11-22 17:07:28.241000+00:00 | ['Sports', 'Innovation', 'TV', 'Streaming', 'OTT']