title | text | url | authors | timestamp | tags
---|---|---|---|---|---|
6 Most Useful Homescreen Widgets for iOS 14 — (December 2020) | 6 most useful Homescreen Widgets for iOS 14
These are my favorite and most useful iOS 14 home screen widgets for iPhone and iPad. I am sure you will like them too. So let's start:
1. Documents by Readdle
Documents by Readdle is a very well-known app. Many people have used it for years to manage documents on their iPhones and iPads. The app was recently updated for iOS 14 with very useful widgets. It allows full customization: you can display different information and perform different actions. You can quickly access your favorite actions such as Music, Video, or Browser, or even recently opened files and folders. The widget comes in three different sizes (small, medium, and large).
If you already have the Documents by Readdle app, you can add the widget by entering jiggle mode on your iOS 14 device. If you don't have the app, you can download it from here. | https://medium.com/macoclock/6-most-useful-homescreen-widgets-for-ios-14-december-2020-fb65ec3f46df | ['Umar Usman'] | 2020-12-27 11:12:10.636000+00:00 | ['iPhone', 'Technology', 'iOS', 'Apple', 'Ios 14'] |
Autoscaling in Oracle Cloud and OKE (Oracle Container Engine) as load generator | In this post, we will look at Autoscaling in Oracle Cloud. From the horse’s mouth:
Autoscaling enables you to automatically adjust the number of Compute instances in an instance pool based on performance metrics such as CPU utilization. This helps you provide consistent performance for your end users during periods of high demand, and helps you reduce your costs during periods of low demand.
Let’s try this with an example:
The application will be a simple Nginx on a compute instance.
1. Create an Instance Configuration of the compute. This will allow us to use a pre-defined configuration when creating compute instances as part of an instance pool.
2. Create an Instance Pool based on the Instance Configuration; this will allow us to provision multiple instances with the same configuration.
3. Attach a private Load Balancer to the instance pool. This will ensure that when more instances are added to the instance pool, they will also be automatically added to the Load Balancer's backend set and traffic routed to the new instances when they are deemed to be in a healthy state.
4. Create an Autoscaling Configuration and set the necessary policy to scale out and scale in.
5. To generate the necessary amount of traffic, we will use OKE (Oracle Container Engine) and the jmeter-operator.
Let’s get started.
Deploying the infrastructure
Clone the terraform-oci-oke project and keep the default topology (3), number of node pools (1) and number of worker nodes per subnet (1). We will scale them later. For now, we just need the basic infrastructure (VCN, gateways, bastion, subnets etc).
Set the following parameters:
newbits = {
"bastion" = "8"
"lb" = "8"
"app" = "8"
"workers" = "8"
}

subnets = {
"bastion" = "11"
"lb" = "12"
"app" = "13"
"plb" = "14"
"lb_ad1" = "32"
"lb_ad2" = "42"
"lb_ad3" = "52"
"workers_ad1" = "33"
"workers_ad2" = "43"
"workers_ad3" = "53"
}
Note that you do not need to use the same values as mine. You just need to ensure your subnets do not overlap.
Once the basic VCN and OKE have been provisioned, create an "app" subnet and a corresponding security list. The app subnet should be regional and private.
Ingress Rules for app subnet
Egress rules for app subnet
app subnet
Next, create a security list and a subnet for a Load Balancer. Note that this will be different from the load balancer security list and subnet created for OKE. The load balancer subnet should also be regional and private.
Ingress rules for Private Load Balancer Subnets
You can set the egress rules the same as for the app subnet.
Private Load Balancer subnet
Next, create a private Load Balancer and ensure you select your vcn created when you provisioned OKE and the private load balancer subnet you created above.
Load Balancer Health Check
Create an HTTP listener
Add the compute, instance pool and autoscaling
Create a compute instance and pick the smallest available shape:
and ensure you add your ssh key and you assign it to the app subnet:
Click on Advanced Options and paste the following cloud-init script:
#cloud-config
package_update: false
packages:
- nginx
runcmd:
- systemctl enable nginx
- systemctl start nginx
- firewall-offline-cmd --add-service=http
- systemctl restart firewalld
Once the instance is running, click on “Create instance configuration”:
Then, click on “Create Instance Pool” to create one:
You will be presented with the following screen. Enter 1 for the minimum number of instances, check the “Attach Load Balancer” checkbox and select your load balancer and the backend set. Set the port to 80 and under Availability Domain Selection 1, select your VCN and set the subnet to app.
Click on additional selection twice and select AD2 and AD3 for the additional selections. If your region has only 1 Availability Domain, you can skip this step.
Finally, create an Autoscaling Configuration from the instance pool:
Configure your Autoscaling configuration as follows and leave the cooldown period at its minimum of 300s:
and configure your autoscaling policy as follows:
Autoscaling policy
Set the minimum number of instances to 1, the maximum to 6 and the initial to 1. For the scaling rules, we will set ridiculously low thresholds so we can get the autoscaling event to trigger.
Your configuration is all done.
Using OKE as a load generator
We now need to generate enough traffic to the instance to cause a CPU spike above 2% for 3 mins so an autoscaling event will occur and cause the instance pool to scale out and add at least 1 more instance. For that, we will use Apache JMeter, or more specifically, the JMeter Operator running in OKE.
First, scale out the OKE cluster by increasing the number of workers per subnet to at least 10 and run terraform apply to make the change. You can also do this in the OCI Console.
Login to the bastion and clone the JMeter Operator project:
git clone https://github.com/kubernauts/jmeter-operator
cd jmeter-operator
Edit the jmeter-deploy.yaml as follows:
apiVersion: loadtest.jmeter.com/v1alpha1
kind: Jmeter
metadata:
  name: tqa-loadtest
  namespace: tqa
spec:
  # Add fields here
  slave_size: 30
  jmeter_master_image: kubernautslabs/jmeter_master:5.0
  jmeter_slave_image: kubernautslabs/jmeter_slave:5.0
  grafana_server_root: /
  grafana_service_type: ClusterIP
  grafana_image: grafana/grafana:5.2.0
  influxdb_image: influxdb
  grafana_install: "true"
  grafana_reporter_install: "true"
  grafana_reporter_image: kubernautslabs/jmeter-reporter:latest
  influxdb_install: "true"
The fields I changed are highlighted in bold above. Ensure the slave_size matches the number of worker nodes.
Download the autoscaledemo.jmx:
and change the BASE_URL_1 from 10.0.14.4 to the IP Address of the private load balancer:
<Arguments guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
  <collectionProp name="Arguments.arguments">
    <elementProp name="BASE_URL_1" elementType="Argument">
      <stringProp name="Argument.name">BASE_URL_1</stringProp>
      <stringProp name="Argument.value">10.0.14.4</stringProp>
      <stringProp name="Argument.metadata">=</stringProp>
    </elementProp>
  </collectionProp>
</Arguments>
Ensure all your worker nodes are active and follow the steps on the jmeter-operator page to install it.
At this point, your environment should be stable and look like this:
Stable pool
Stable Instance Configuration
Stable backend set
Metrics stable
Initialize your JMeter cluster and run a test:
./initialize_cluster.sh
./start-test.sh
Enter the Jmeter Namespace: tqa
Enter path to the jmx file autoscaledemo.jmx
Now, let JMeter run and check after 5 mins. The autoscaling event is triggered and an additional instance is created.
Instance Configuration with Target Count=2
2nd instance being provisioned
Eventually, things stabilize and the 2nd instance is provisioned:
Instance pool with 2 instances
and the 2nd instance is automatically registered as an additional backend with the load balancer:
When the test run has completed and the traffic stops, the average CPU utilization also decreases, triggering another autoscaling event. This time, the instance count will be reduced:
Scaling Down
Instance Pool Scaling Down
And eventually, the instance is terminated: | https://medium.com/oracledevs/autoscaling-in-oracle-cloud-and-oke-oracle-container-engine-as-load-generator-2e9ccb44b44 | ['Ali Mukadam'] | 2019-07-04 00:38:27.835000+00:00 | ['Jmeter', 'Oracle Cloud', 'Kubernetes', 'DevOps', 'Autoscaling'] |
RL — Trust Region Policy Optimization (TRPO) Explained | Modified from source
Policy Gradient methods (PG) are popular in reinforcement learning (RL). The basic principle uses gradient ascent to follow policies with the steepest increase in rewards. However, the first-order optimizer is not very accurate for curved areas. We can get overconfident and make bad moves that ruin the progress of the training. TRPO is one of the most cited papers addressing this issue. However, TRPO is often explained without a proper introduction of the basic concepts. In Part 1 here, we will focus on the challenges of PG and introduce three basic concepts: the MM algorithm, trust regions, and importance sampling. If you are already comfortable with these concepts, scroll down to the end for Part 2, which details TRPO.
Challenges of Policy Gradient Methods (PG)
In RL, we optimize a policy θ for the maximum expected discounted rewards. Nevertheless, there are a few challenges that hurt PG performance.
First, PG computes the steepest ascent direction for the rewards (the policy gradient g) and updates the policy in that direction.
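(The update rule itself appears as an image in the original article; written out, it is the familiar gradient-ascent step, with α the learning rate:)

g = \nabla_\theta J(\theta), \qquad \theta_{k+1} = \theta_k + \alpha \, g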
However, this method uses the first-order derivative and approximates the surface as flat. If the surface has high curvature, we can make horrible moves.
Modified from source
Too large a step leads to disaster; too small a step and the model learns too slowly. Picture the reward function as the mountain above. If the new policy goes one inch too far, it falls off the cliff. As we resume exploration, we start from a poorly performing state with a locally bad policy. Performance collapses and takes a long time, if ever, to recover.
Second, it is very hard to choose a proper learning rate in RL. Let's assume the learning rate is tuned for the yellow spot above. This area is relatively flat, so the learning rate can be higher than average for a nice learning speed. But with one bad move, we fall down the cliff to the red spot. The gradient at the red spot is high, and the current learning rate will trigger an exploding policy update. Since the learning rate is not sensitive to the terrain, PG suffers badly from convergence problems.
Third, should we constrain the policy changes so we don't make overly aggressive moves? In fact, this is what TRPO does. It limits parameter changes in a way that is sensitive to the terrain. But providing this solution is not obvious. We adjust the policy through low-level model parameters. To restrict the policy change, what are the corresponding thresholds for the model parameters? How can we translate a change in the policy space into the model parameter space?
Fourth, we sample the whole trajectory for just one policy update. We cannot update the policy on every time step.
Why? Visualize the policy model as a net. Increasing the probability of π(s) at one point pulls the neighboring points up as well. The states within a trajectory are similar, in particular when they are represented by raw pixels. If we updated the policy at every time step, we would effectively pull the net up multiple times at similar spots. The changes reinforce and magnify each other and make the training very sensitive and unstable.
Considering there may be hundreds or thousands of steps in a trajectory, one update per trajectory is not sample efficient. PG needs 10 million or more training time steps for toy experiments. For real-life simulation in robotics, this is too expensive.
So let's summarize the technical challenges of PG:
Large policy change destroys training,
Cannot map changes between policy and parameter space easily,
Improper learning rate causes vanishing or exploding gradient, and
Poor sample efficiency.
In hindsight, we want to limit the policy changes and, better still, guarantee that any change improves the rewards. We need a better and more accurate optimization method to produce better policies.
To understand TRPO, it is best to discuss three key concepts first.
Minorize-Maximization MM algorithm
Can we guarantee that any policy update always improves the expected rewards? It seems far-fetched, but it is theoretically possible. The MM algorithm achieves this iteratively by maximizing a lower bound function (the blue line below) that approximates the expected reward locally.
Modified from source
Let's get into more details. We start with an initial policy guess. We find a lower bound M that approximates the expected reward η locally at the current guess. We locate the optimal point for M and use it as the next guess. We then approximate a lower bound again and repeat the iteration. Eventually, our guess converges to the optimal policy. To make this work, M should be easier to optimize than η. As a preview, M is a quadratic equation
but in the vector form:
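(The exact expression is shown as an image in the original. As a sketch, a quadratic lower bound of the kind TRPO uses, with g the policy gradient at the current policy θ_old and F a curvature matrix such as the Fisher information matrix, looks roughly like:)

M(\theta) \approx g^{\top}(\theta - \theta_{\text{old}}) - \frac{1}{2} (\theta - \theta_{\text{old}})^{\top} F (\theta - \theta_{\text{old}})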
It is a convex optimization problem, and how to solve it is well studied.
Why does the MM algorithm converge to the optimal policy? If M is a lower bound, it never crosses the red line η. Now suppose the expected reward of the new policy were lower under η. Then the blue line would have to cross η (the right-hand figure below), contradicting the fact that M is a lower bound.
Modified from source
Since we have finitely many policies, as we keep iterating, this leads us to a locally or globally optimal policy.
Now we find the magic we want.
By optimizing a lower bound function that approximates η locally, it guarantees policy improvement every time and leads us to the optimal policy eventually.
Trust region
There are two major optimization approaches: line search and trust region. Gradient descent is a line search method. We determine the descent direction first and then take a step in that direction.
Modified from Source
In the trust region, we determine the maximum step size that we want to explore and then we locate the optimal point within this trust region. Let’s start with an initial maximum step size δ as the radius of the trust region (the yellow circle).
m is our approximation to the original objective function f. Our objective now is finding the optimal point for m within the radius δ. We repeat the process iteratively until reaching the peak.
To control the learning speed better, we can expand or shrink δ at runtime according to the curvature of the surface. In the traditional trust region method, since we approximate the objective function f with m, one possibility is to shrink the trust region if m is a poor approximation of f at the optimal point. On the contrary, if the approximation is good, we expand it. But calculating f may not be simple in RL. Alternatively, we can shrink the region if the divergence between the new and current policy is getting large (or vice versa). For example, in order not to get overconfident, we can shrink the trust region if the policy is changing too much.
Importance sampling
Importance sampling calculates the expected value of f(x) where x has a data distribution p.
In importance sampling, we have the choice of not sampling the value of f(x) from p. Instead, we sample data from q and use the probability ratio between p and q to recalibrate the result.
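(The defining identity is shown as an image in the original; it is just a re-weighting of the expectation:)

\mathbb{E}_{x \sim p}[f(x)] = \sum_x p(x) f(x) = \sum_x q(x) \frac{p(x)}{q(x)} f(x) = \mathbb{E}_{x \sim q}\left[ \frac{p(x)}{q(x)} f(x) \right]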
In PG, we use the current policy to compute the policy gradient.
So whenever the policy is changed, we collect new samples. Old samples are not reusable. So PG has poor sample efficiency. With importance sampling, our objective can be rewritten and we can use samples from an old policy to calculate the policy gradient.
But there is one caveat: the estimation using q,
has a variance of
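(The expression is an image in the original; reconstructed in standard notation, it is roughly the following, where the second term uses the fact that the re-weighted mean equals the expectation of f under p:)

\operatorname{Var}_{x \sim q}\left[ \frac{p(x)}{q(x)} f(x) \right] = \mathbb{E}_{x \sim q}\left[ \left( \frac{p(x)}{q(x)} f(x) \right)^{2} \right] - \left( \mathbb{E}_{x \sim p}[f(x)] \right)^{2}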
If the p(x)/q(x) ratio is high, the variance of the estimate may explode. So if the two policies are very different from each other, the variance (a.k.a. error) can be very high. Therefore, we cannot use old samples for too long. We still need to resample trajectories fairly frequently using the current policy (say, every 4 iterations).
Objective function using importance sampling
Let's go into the details of applying the importance sampling concept in PG. The equations in Policy Gradient (PG) methods are:
We can reverse this derivative and define the objective function as (for simplicity of illustration, γ is often set to 1):
This can be expressed as importance sampling (IS) also:
As shown below, the derivatives of both objective functions are the same, i.e. they have the same optimal solution.
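(The original presents these equations as images. As a reconstruction in roughly standard notation, with \hat{A} the advantage estimate and θ_old the policy that generated the samples, the two objectives and their common gradient at θ = θ_old look like:)

J^{PG}(\theta) = \mathbb{E}_{\pi_{\theta_{\text{old}}}}\big[ \log \pi_\theta(a \mid s) \, \hat{A} \big], \qquad J^{IS}(\theta) = \mathbb{E}_{\pi_{\theta_{\text{old}}}}\left[ \frac{\pi_\theta(a \mid s)}{\pi_{\theta_{\text{old}}}(a \mid s)} \, \hat{A} \right]

\nabla_\theta J^{PG}(\theta) \Big|_{\theta = \theta_{\text{old}}} = \nabla_\theta J^{IS}(\theta) \Big|_{\theta = \theta_{\text{old}}} = \mathbb{E}\big[ \nabla_\theta \log \pi_\theta(a \mid s) \, \hat{A} \big]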
The presence of two policies in the optimization objective gives us a formal way to restrict the policy change. This is a key foundation of many advanced Policy Gradient methods. It also gives us a way to evaluate possible policies before we commit to the change.
Now we have finished the three basic concepts we need for our discussion and are ready for the last part, on TRPO.
For those who want to look into the articles in this Deep Reinforcement Learning Series. | https://jonathan-hui.medium.com/rl-trust-region-policy-optimization-trpo-explained-a6ee04eeeee9 | ['Jonathan Hui'] | 2018-10-21 21:48:43.844000+00:00 | ['Programming', 'Data Science', 'Deep Learning', 'Artificial Intelligence', 'Machine Learning'] |
Patience Comes Through Practice | Patience Comes Through Practice
Patience is not a gift but a skill; skills require practice.
Photo by Little John on Unsplash
Patience is important, but like any skill it must be learned, so impatience is common (and can have serious consequences).
Impatience differs from impulsiveness. In impulsive behavior, the issue of patience does not arise: a person who habitually acts on impulse does so not because they are impatient but because they have poor impulse control. Controlling impulses is important, and developing that skill may require patience, but the two things — impulsiveness and impatience — are different. Impatience is a feeling (a kind of frustration), and impulsiveness is a habit of action (one worth breaking).
You can be impatient with others as well as with yourself. In either case, the feeling seems to arise from frustration at not having control over what you want to happen — either no control of another or no control of yourself. That feeling of frustration is what we call “impatience.” We are frustrated that we cannot make happen what we want to happen as fast as we want it.
A person can deal with impatience by taking two steps: first, understand better what is involved in the process at hand; and second, understand better how to work the process.
Better understand what the process is
If you develop a better understanding of the process at hand, you know the rate of progress to expect and also become able to see the signs of progress — to see that things are indeed happening. By knowing what to expect, you can understand what must happen and why it takes the time it does.
Example: Suppose you want to stifle some particular bad habit you've noticed in yourself. Without understanding the process of behavior change, you might well be impatient, so that the first couple of failures make you impatient with yourself.
But if you understand how a change in behavior progresses, you will be more patient regarding the process — and also see progress that you might otherwise not notice. Initially, you don’t notice the undesired behavior at all; then you become conscious of the behavior after the fact; then you become aware of the behavior right after you did it (the “doh!” moment); then you become aware of the behavior even as you’re doing it; and, finally, you become aware of the behavior just before you do it, and at that point you can interrupt the cycle. From that point, by avoiding the behavior, you extinguish the habit.
Understanding these stages helps you be more patient with yourself — you understand the sequence (and its necessity), and you can see your progress by noticing where you are in the sequence. Rather than being dejected by a “failure,” you can see that you begin recognizing the “failures” sooner and sooner after the moment they occur and thus see you are getting closer to success. (And it’s better to view those early efforts as “practice” rather than “failure.”)
Impatience in this area arises when people don’t understand that the speed of growth/change is much slower than the speed of insight. Insight is instantaneous: you might realize in a flash “I must get stronger,” or “I must change my eating (spending, study, etc.) habits,” or “I must stop saying ‘You know?’ so often.” Those realizations occur with the speed of insight, but actually making the change (as opposed to seeing its necessity or desirability) occurs much more slowly, with the speed of growth. Having realistic expectations reduces impatience. If you decide to (say) change your handwriting and learn how to write italic, you should understand that the skill (unlike the pen) will not arrive via two-day delivery.
Mastering (or even getting good at) a skill — cooking, playing a musical instrument, learning a language, performing a card or coin trick — will take time. If you expect expert performance within a few days, you are likely to become discouraged and quit. But if you focus, not on your performance, but on your progress, you can see the skill’s growth — and seeing that something is happening will reduce impatience.
As you learn skills and observe your process of learning, you also learn how you acquire skills, the steps that you go through. You will see the stages of development, and how errors wither with patient practice.
Human learning, whether the knowledge is abstract or concrete, theoretical or practical, is much like training a neural net: patient correction of missteps by noticing the error and repeating slowly and more carefully the correct response until it is second nature. The process of human learning is like training a neural net because it is training a neural net, the original neural net: your brain.
The book Changing for Good, by James Prochaska, John Norcross, and Carlo Diclemente, is an excellent guide to changing one’s behaviors. (There are other books with the same title, so note the authors.) The authors studied groups of people who wanted to make changes (such as quitting smoking), and looked at the differences between those who were successful and those who were not. They learned that such change is a process that has six stages, and at each stage one must accomplish some key tasks to move successfully to the next stage.
In summary: understanding the nature and signs of progress can help with impatience. You avoid the feeling that “nothing’s happening” because you better perceive what is happening.
Better understand how to work the process
The second way of dealing with impatience is to get a better understanding of what you should be doing to move the process along and then do those things, which gives you things to do and thus fills the time so that it does not feel like empty waiting.
My Go teacher once told me that I was moving too quickly, and said that I must learn to move more slowly. He did not mean that I should simply look at a watch and wait for (say) two minutes to pass before I made my move. He meant that there were things I needed to be doing before I moved, and if I did those, then one sign of that would be that I would not be moving so quickly. (Another sign, presumably, would be that my moves would be better.)
For example, I should take time to look for immediate threats and judge how serious they are and how to respond — and whether they could be turned to my advantage. I should think about what I could gain from different moves I might make. I should make sure to look over the entire board and not focus too narrowly on the local fight underway — it may be that a move elsewhere would gain more than a good response in the local fight.
This study and judgment of possible moves takes time. One reason beginning Go players play quickly is that they focus on one small part of the board and ignore (or don’t notice) the larger picture. Whole-board awareness, vital for success, requires time — time to learn and time to execute.
Another example: in making a big decision, one can readily become impatient and simply make the damn decision, but if you understand the process — as laid out in, say, Decision Traps and/or Winning Decisions, then you understand the steps the process requires. You know what must be done, and you can tell by what you’ve done how much progress you’ve achieved toward making the decision and what still remains to be done to ensure the optimal decision. Knowing what you must do in the process (and doing it) keeps you from feeling impatient: time is not spent in waiting but in doing.
The same holds true in negotiation. Novices are impatient to begin negotiating because they do not yet understand what preparation involves. Because they don’t grasp what preparation entails, they don’t know what they should be doing, and so the time before the face-to-face negotiation begins is for them merely waiting, and that empty time results in impatience.
Roger Fisher and William Ury of the Harvard Project on Negotiation describe the process of negotiation and the tasks it requires in their book Getting to Yes: Negotiating Agreement Without Giving In. The time prior to negotiating in person, rather than being empty, is filled with urgent and important tasks to accomplish.
Knowing a process (what it involves and requires) minimizes impatience because (a) you are busy accomplishing the key tasks the process requires to move along, and (b) you can see the signs that show that you are making progress. Feeling impatient indicates a lack of understanding of the process you (or others) are undertaking.
In summary, impatience often comes from not seeing or not understanding what is involved, with the result that it feels as though the time is empty and there is nothing one can do. A deeper understanding will provide plenty to do, and one’s attention and energies, fully occupied, leaves no “empty” time: impatience is squeezed out of existence. | https://medium.com/age-of-awareness/patience-comes-through-practice-2a8c15cfc619 | ['Michael Ham'] | 2020-12-26 01:18:07.902000+00:00 | ['Patience', 'Self-awareness', 'Self Improvement', 'Change', 'Learning'] |
Quiet People in Meetings Are Incredible | As a corporate man by day, and an entrepreneur by night, I’ve attended my fair share of meetings over the last decade or so.
Meetings can be an odd experience. Before you know it the meeting can get out of control. Leaders with pinstripe suits or hair that is turning grey quicker by the day can lose the plot. They flex their ego with words. In other words, they talk a lot. As the meeting wears on the duel continues. Leaders throw words around. Those looking for their next promotion do the same.
Looking smart is key. You use spreadsheets and insert phrases into customer’s mouths that they never said to ensure you’re seen as right or most in touch with the customer.
It’s all bullshit. The meeting is a waste of time. No resolution is reached.
But it’s not all bad. Meetings have taught me one valuable lesson: watch the quiet people.
There are these hidden people that attend meetings. They say nothing. You can attend ten meetings in a row and never hear them say a word. Their words are a privilege reserved for the royal family. You find yourself dying to know what they would say. They act like a fly on the wall. With every meeting, they get smarter, by saying nothing at all.
They observe the loud beasts, rather than become a beast.
I used to be loud in meetings. These quiet people changed my mind. Now I try to sit quietly in most meetings and not say a word. I’m a long way from mastering this skill but it has already taught me so much. | https://medium.com/the-ascent/quiet-people-in-meetings-are-incredible-7bb05ef9acd1 | ['Tim Denning'] | 2020-09-18 20:45:11.674000+00:00 | ['Leadership', 'Startup', 'Work', 'Meetings', 'Life'] |
The Myth Of Medusa Is A Trauma Journey | Greek mythology is laden with symbolic significance for humanity; in this particular story, you can see that the myth of Medusa is closely related to the development of a traumatic event.
Medusa was seduced by Poseidon, the god of the sea, and impregnated in the temple of Athena. The rage of the virgin goddess Athena came down upon Medusa, however: she cursed Medusa into monstrous form, with a head full of snakes instead of hair.
Even this start to the story speaks so much of life experience. A woman born of beauty is seduced by a powerful man into doing something that angered a broader power — then punished. Does that speak to you of victim-blaming?
In any case, she was sent back to the land where she grew, to be with the Gorgons.
“Her face was so hideous and her gaze so piercing that the mere sight of her was sufficient to turn a man to stone.”
Later in the story, Polydectes, the king of Seriphos, was trying to get rid of a young warrior, a half-God, Perseus, so he asked him to fetch the head of Medusa, thinking that he’d be killed.
However, Perseus enlisted the help of Athena, Hermes, and Hades. Athena gave him the knowledge that you cannot look directly into Medusa’s eyes, or you’ll turn to stone. She gave Perseus a bronze shield with which to protect himself. Hades, God of the underworld, gave him an invisibility cloak and Hermes gave him winged sandals.
These three Gods have a pertinent message: immaculate purity, subconscious underworld, and the messenger.
Going into the land of trauma healing takes clarity of mind, purity of mind. You have to go in there with the intention to heal, and not to re-traumatise.
I recently listened to a fantastic session with David Treleaven, as part of the Embodiment Conference 2020.
He was talking about the double edge sword of mindfulness; a sensitivity to trauma, and it’s so true. My journey through trauma and mindfulness has been fraught with close calls in re-traumatisation. I’m glad this conversation is developing.
He mentioned three main aims of mindfulness around trauma:
recognise symptoms
respond skilfully
prevent re-traumatisation
This is work for the practitioners and teachers.
Treleaven says that Medusa is much like trauma because you can’t look directly at her. The same is true of trauma; often, trauma survivors want to orient towards the trauma and the senses that remind them of the trauma, because they want to make sure they’re safe.
The sad fact is that most of the time they redirect themselves back into re-traumatisation, and can develop an obsessive compulsion with providing safety, through the trauma sensations, to no avail.
He suggests three ways to bring trauma sensitivity into mindfulness:
Focus on sounds
Bring the awareness to bodily sensations besides the breath. (how do your feet feel on the ground, what does your chest feel like, how are your arms feeling)
Bring the breath into focus, and also notice the physical aspects of the breath.
When Perseus went into the labyrinth to find Medusa, he found her sleeping. He took in a bronze shield to see her without meeting her gaze; he also took an invisibility cap and winged feet.
To me, this speaks to healing trauma — the dissociative practices required to understand and contextualise traumatic memory. Whether you achieve this in mindfulness or psychotherapy, it’s essential to revisit aspects around the traumatic memory, to build up a picture of safety.
You can’t just go in and stare into the eyes of the beast; the trauma. It’ll turn you to stone. The turning to stone aspect speaks of the overload, or stress response, in which your body exists.
A trauma response is an over-stimulated and dysregulated nervous system response. It means your body is in overdrive to keep you safe; to survive.
What often occurs for trauma survivors is they can’t switch that off. They live in that mode. It’s exhausting, and it causes a lot of despair.
What’s interesting about this story is Medusa probably comes from the ancient Greek word for “guardian.” Snakes are an ancient symbol for intellect and wisdom, and the bronze is often a symbol of wealth, safety, and courage.
One of Athena’s animals; the Goddess of wisdom, handicraft, and warfare, was the snake.
Medusa represents trauma or that which is freezing to a human being. A knowledge that cannot be directly known.
He has to take a shield. He needs the lateral mirror, a fractal understanding.
After this, Perseus gave Medusa’s head to his benefactor Athena, as a votive gift. The Goddess set it on Zeus’ aegis (which she also carried) as the Gorgoneion. She also collected some of the remaining blood and gave most of it to Asclepius, who used the blood from Medusa’s left side to take people’s lives and the blood from her right side to raise people from the dead. The rest of Medusa’s blood — a vial containing two drops — Athena gave to her adopted son, Erichthonius; Euripides says that one of the drops was a cure-all, and the other one a deadly poison. — Greek Mythology
This speaks well to the two sides that an awareness practice can have on the trauma survivor.
Socrates said:
An unexamined life is not worth living.
Yes, and he also went on to elaborate on how to examine correctly.
One side of trauma awareness is a direct key to a different life, an understanding of nature, and a deeper understanding of the essence of you. The other is an immersion into the darkness; a deadly poison, which takes you back into the traumatic memory, and even reinforces the problematic behaviours that come directly from struggling to survive from a traumatic event.
It’s a balance, and it’s essential to get it right.
There are many gifts within trauma, yet you cannot approach a trauma experience head-on, you have to contextualise it, and approach it from a safe and secure environment.
To finish up with, Treleaven shared a profound exercise.
Make a fist with your left hand, and grip it tight. Then try and prise it apart, try to break the grip, whilst still maintaining the energy of the closed fist. Impossible right? Futile.
Now, take your left and hand, still gripped in a fist, and place it in your cupped right hand. Notice how your hand feels. Notice how supported you feel, and notice what your left hand wants to do. Does it want to unfold?
The very least you may feel is safe, and supported, right?
Well, this is how to heal from trauma, not by doing, but by safety, disconfirming experiences of connection and trust, and security of knowing that you belong.
We can learn much from the old mythology; all the symbols are still there. | https://medium.com/invisible-illness/the-myth-of-medusa-is-a-trauma-journey-9e71190b03c0 | ['Peter Middleton'] | 2020-10-19 07:27:47.805000+00:00 | ['Literature', 'Self Improvement', 'Trauma', 'Mental Health', 'Mythology'] |
Failing To Form A More Perfect Union: Our Fault | Photo by Kelli Dougal on Unsplash
The failure of progressives to tell the story of the world we want is largely to blame for the dismal state of the world we’ve got. And we’re running out of time to fix it.
This sorry tale begins on an up-note with Gouverneur Morris, a sadly obscure founder of our nation. The wealthy 35-year-old New Yorker was, in effect, the first communications consultant to the (not-yet-fully-formed) U.S. government. In the summer of 1787, in Philadelphia, after the initial drafting work on the Constitution was done, the Convention’s leaders tasked its “Committee of Style,” of which Morris was a member, with polishing the language of America’s founding document. The final draft was led by Morris, who added a single introductory paragraph to the top of James Madison’s dirt-dry plan for self-government.
THE CONSTITUTION OPENS WITH A TIMELESS STORY
Morris could have opted merely to summarize the Constitution’s main talking points. Instead, his Preamble was a study in compact storytelling that created coherence and added inspiration, tying the Convention’s turgid compromises to the timeless narrative of humanity’s desire for belonging, justice and freedom. The Preamble has no legal validity, but it remains the only thing most of us recall from the Constitution and it arguably confers all of the document’s moral authority. In case your constitutional memory is as foggy as mine was:
“We the people of the United States, in order to form a more perfect union, establish justice, insure domestic tranquility, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity, do ordain and establish this Constitution for the United States of America.”
Morris’s Preamble had the audacity and vision to narrate the missions our new government had to accomplish for us, the people. His primary mission statement, which became one of the most famous phrases of post-Revolution rhetoric, asserts eloquently that we reorganized America “in order to form a more perfect union.”
WE’RE TO BLAME FOR POLARIZATION AND PARALYSIS
Does anyone really need to ask how that’s working out? As we approach the 2018 midterms, the nation is beset by a panoply of the founders’ worst political nightmares — uncontrolled factionalism, regionalism, corruption. The U.S. government has recently been shut down (again) by partisan bickering. Non-urban areas war with cities. The president is enriching himself off the presidency. The country is progressively isolated from the globalizing world that America largely made. With racism driving policy from the White House, race remains a corrosive divider despite a bloody Civil War, the 14th Amendment and the bloody Civil Rights Movement. And…
At the heart of this polarized mess is our whole, long-standing approach to political and social campaigning. For decades now, everything has been presented as a short-term binary choice: Two parties; pick one. Here’s a policy; support it or oppose it. Here’s a poll; are you for or against? All our data, all our messages, all our talking points, all our ideas play on simple divisions that promote polarity and imperil our country’s mission to form a more perfect union. We use the power of computers to slice and dice the citizenry into ever-tinier and more-isolated factions. None of us, apparently, ever ask our pollsters, “But what do all these segments have in common?” Our entire approach requires polarization and rejects unity.
Trump didn’t create this catastrophe. We did.
WE LOST OUR STORY AND OUR STORYTELLING
The right-wing-ish pundit David Brooks, for whom I have no patience most days, wrote a smart New York Times op-ed last year in which he said:
“One of the things we’ve lost in this country is our story. It is the narrative that unites us around a common multigenerational project, that gives an overarching sense of meaning and purpose to our history.”
There are mounds of scholarly research identifying the Old Testament’s Book of Exodus as the root of our national story. The story of Exodus may no longer resonate with all Americans, but it is a simple way to understand the critical and eternal connection among stories, and their influence on behavior and beliefs, in both the long- and shorter term.
Exodus is the prototypical nation-building narrative — the ur-myth that establishes the power of story in political movements. For better and worse, Exodus united the Israelites for a few thousand years . . . and, beginning in the 16th century, the same mythic tale supported and preserved Pilgrims as they fled persecution to the New World, Europeans as they colonized the Americas and nearly wiped out the native inhabitants, enslaved Africans who were kidnapped to the Americas and needed a mythic story to keep hope alive, North American colonists who required inspiration in their revolution against the world’s most powerful nation, and, finally, crusading civil rights leaders who needed to spread the faith that they could successfully push back on America’s racist, genocidal foundation.
All the world’s talking points and taglines never could have driven such a multi-millennial, multi-ethnic project of freedom-building and culture change. But the story of Exodus could and did.
Proof is all around us that American politics — especially on the left — has forgotten the power of storytellers armed with a unifying narrative. One proof point is the fractured, degraded nature of our union. Another, more mundane proof point is the website of The Office of Hillary Rodham Clinton. There, if you click on “Learn about Hillary’s vision for America,” all you will find is 41 disconnected talking points linked to 41 disconnected policy memos. More binary choices. Not a unifying story or an actual vision in sight.
THE TIME FOR STORIES IS NOW
Our many failures as communicators have been vastly compounded by our general failure to grasp the huge shift in audience attitudes and behavior that the advent of digital has brought about. In the pre-Internet era, any campaign — commercial or political — could buy attention and force people to listen to its messages. But the web ushered in a post-advertising age with a complete arsenal of message-blocking and ad-avoiding weapons available to everyone for free. Today, virtually the only “messages” people hear or see are the ones they choose to hear or see. Everything else is banished from their feed.
In this era, the only way to engage an audience is to tell stories that the audience perceives as authentic and compatible with their view of the world. In the post-ad age, the audience is the beginning and end of everything we do. The audience is the most significant arbiter of what gets attention. And what gets attention, according to all metrics, is storytelling.
Given all this, if we are to consistently win campaigns and drive long-term culture change, we need to migrate the practice of political and social campaigning:
from talking points, policy memos and ads to storytelling worth sharing
from campaign-centric to audience-centric
from just talking to listening before talking
from top-down pronouncements to stories that are authentic and emotionally resonant
from slicing audience into ever-tinier segments to creating ever-more-inclusive communities
There’s no time to waste and, once we make this turn, there will be no time to rest. We can correct the way we view, categorize and talk to people. We can quickly end the right-wing’s hold on national government. We can actually build a lasting movement that will, over time, banish official racism, reverse growing inequality, provide justice, change the culture and form the more perfect union we’ve been seeking for centuries.
But first, we’ve got to change our own ways right now.
Read more from A More Perfect Story on Medium | https://medium.com/a-more-perfect-story/failing-to-form-a-more-perfect-union-our-fault-1ee609910262 | ['Kirk Cheyfitz'] | 2019-10-03 14:17:20.263000+00:00 | ['Social Justice', 'Storytelling', 'Content Marketing', 'Politics', 'Communication'] |
Genetic Data Tools Reveal How Pop Music Evolved In The US | The history of pop music is rich in details, anecdotes, folk lore. And controversy. There is no shortage of debate over questions about the origin and influence of particular bands and musical styles.
But despite the keen interest in the evolution of pop music, there is little to back up most claims in the form of hard analytical evidence.
Today that changes thanks to the work of Matthias Mauch at Queen Mary University of London and a few pals who have used the number crunching techniques developed to understand genomic data to study the evolution of American pop music. These guys say they have found an objective way to categorise musical styles and to measure the way these styles change in popularity over time.
The team started with the complete list of US chart topping songs in the form of the US Billboard Hot 100 from 1960 to 2010. To analyse the music itself, they used 30-second segments of more than 80 per cent of these singles — a total of more than 17,000 songs.
They then analysed each segment for harmonic features such as chord changes and for the quality of timbre, whether guitar or piano or orchestra based, for example. In total, they rated each song in one of 8 different harmonic categories and one of 8 different timbre categories.
Mauch and co assumed that the specific combination of harmonic and timbre qualities determines the genre of music, whether rock, rap, country and so on. However, the standard definitions of music genres also capture non-musical features such as the age and ethnicity of the performers, as in classic rock or Korean pop and so on.
So the team used an algorithmic technique for finding clusters within networks of data to find objective categories of musical genre that depend only on the musical qualities. This technique threw up 13 separate styles of music.
An interesting question is what these styles represent. To find out, the team analysed the tags associated with each song on the Last-FM music discovery service. Using a technique from bioinformatics called enrichment analysis, they searched for tags that were more commonly associated with songs in each music style and then assumed that these gave a sense of the musical genres involved.
For example, they found that style 1 was associated with soul tags, style 2 with hip hop, style 3 with country music and easy listening, style 4 with jazz and blues and so on.
Finally, they plotted the popularity of each style over time.
The results make for fascinating reading. They found that the frequency of style 4 (jazz, blues etc) declined from 1960 onwards. Styles 5 and 13, which relate to rock music, fluctuate throughout this time. And style 2 (rap) is rare before 1980 but expands rapidly after that and becomes the dominant genre for the next 30 years before declining in the late 2000s.
The data allows them to settle some long standing debates among connoisseurs of popular music. One question is whether various practices in the music industry have led to a decline in the cultural variety of new music.
To study this issue, Mauch and co developed several measures of diversity and tracked how they changed over time. “We found that although all four evolve, two — diversity and disparity — show the most striking changes, both declining to a minimum around 1984, but then rebounding and increasing to a maximum in the early 2000s,” they say. Beyond that, their conclusion was clear. “We find no evidence for the progressive homogenisation of music in the charts,” they say.
Instead, they say that the evolution of music between 1960 and 2010 was largely constant but punctuated by periods of rapid change. “We identified three revolutions: a major one around 1991 and two smaller ones around 1964 and 1983,” they say.
The characters of these revolutions were all different with the 1964 revolution being the most complex. This consisted of an increase in popularity of styles 1, 5, 8, 12 and 13 which were enriched at the time for soul and rock-related tags. At the same time, styles 3 and 6 declined, enriched for tags such as doowop.
The 1983 revolution is associated with an increase in popularity of songs with tags such as new wave, disco and hard rock and a decline in soft rock and country tags.
The 1991 revolution is associated with the rise of rap-related tags.
Another question hotly debated by music commentators is how British bands such as the Beatles and The Rolling Stones influenced the American music scene in the early 1960s. Mauch and co are emphatic in their conclusion. “The British did not start the American revolution of 1964,” they say.
The team say the data clearly shows the revolution underway before The Beatles arrived in the States in 1964. However, British bands certainly rode the wave and played an important part in the way the revolution occurred.
That’s fascinating work. Because musicians copy, repeat and modify song styles they like, this leads to a clear pattern of evolution over time. So it should come as no surprise that techniques developed for the analysis of genetic data should work on music data as well. “The selective forces acting upon new songs are at least partly captured by their rise and fall through the ranks of the charts,” they say.
And there is much work to be done. Mauch and co point out that these number crunching techniques are quite general and so could be widely applied to cultural phenomena, provided the data is available.
For the moment, they have their sights set on further music analysis. Their next task is to gather data going further back in time. “We are interested in extending the temporal range of our sample to at least the 1940s — if only to see whether 1955 was, as many have claimed, the birth date of Rock’n’Roll,” they say.
Worth waiting to find out!
Ref: arxiv.org/abs/1502.05417 : The Evolution of Popular Music: USA 1960–2010 | https://medium.com/the-physics-arxiv-blog/genetic-data-tools-reveal-how-pop-music-evolved-in-the-us-48ad60bf495b | ['The Physics Arxiv Blog'] | 2015-02-27 18:04:43.422000+00:00 | ['Culture', 'Music', 'Data Visualization'] |
A Gentle Introduction Into The Histogram Of Oriented Gradients | Machine learning is a unique field that is rapidly evolving as the years progress; once optimal algorithms are often replaced within ten to twenty years to make way for more efficient methods. The Histogram of Oriented Gradients detection method (HOG for short) is one of these antique algorithms, being almost a decade old; however, one thing that differentiates it from others is that it is still heavily used today with fantastic results.
The Histogram of Oriented Gradients method is mainly utilized for face and image detection to classify images, like the one that you see above. This field has a numerous amount of applications ranging from autonomous vehicles to surveillance techniques to smarter advertising. This article will do a deep dive on the implementation behind the HOG detection method and understand why it is still so commonly used today. I hope you all are as interested as I am to get into the nitty-gritty of HOG, so let’s get started!
Note: This article does require a high level of understanding about machine learning and mathematical concepts like normalization, gradients, and polar coordinates, which can be a bit confusing once we get into it.
Preliminary Steps
Feature Descriptors
You may hear this term pop up a lot throughout this article: a feature descriptor is simply a representation of an image that extracts the useful information and disregards the unnecessary information from the image.
In the case of the HOG feature descriptor, we also convert the image (width x height x channels) into a feature vector of length n chosen by the user. Although these feature vectors are hard to interpret as images, they are perfect for image classification algorithms like SVMs and produce good results.
Example of HOG feature descriptor on images. Credit: Analytics Vidhya
Now, you might be wondering how the HOG feature descriptor actually sorts through this unnecessary information. It does so by using histograms of gradients as the features of an image. Gradients are extremely important for detecting edges and corners in an image (through regions of intensity change), since these regions often pack much more information than flat regions.
Preprocessing
A key mistake that people often make when performing HOG object detection is forgetting to preprocess the image so that it has a fixed aspect ratio. A common aspect ratio (width:height) is 1:2, so your images can be 100x200, 500x1000, etc.
For a particular image that you choose, make sure that you identify the section that you want so that it correctly fits the aspect ratio and allows for easier accessibility in the long run.
An example of pre-processing the data. Credit: learnopencv.com
Calculating the Gradients
To build the HOG feature descriptor, as discussed above, we need to calculate the horizontal and vertical gradients that provide the histogram used later in the algorithm. This can be done by simply filtering the image with these kernels:
Kernels like these are often used in image classification mainly in convolutional neural networks in order to find the edges and important points in a particular image. If you are interested in diving deeper into understanding kernels, here is an article that describes these kernels visually.
Then, the magnitude and the direction of the gradient can be found by using the following formulas (note that this is simply a conversion from Cartesian to polar coordinates):
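(The formulas are shown as images in the original; they are the standard Cartesian-to-polar conversion of the horizontal and vertical gradient components g_x and g_y:)

g = \sqrt{g_x^{2} + g_y^{2}}, \qquad \theta = \arctan\left( \frac{g_y}{g_x} \right)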
Now, while these formulas above may seem a bit daunting, just remember that much of this implementation will be handled by the computer, so there is no need for actually computing these gradients by yourself. If you are more interested in delving deeper into the mathematics behind calculating these gradients, I suggest that you check out this article here.
If you did not understand how the gradients were calculated, that's perfectly fine. The main takeaway about gradients is that their magnitude increases wherever there is a sharp change in intensity. The picture below highlights a good example of this: the gradients fire around the edges of the image. The unnecessary information, like the background, is removed and only the essential parts remain.
To summarize: gradients have a magnitude and a direction. For a color image, the magnitude at each pixel is the maximum of the gradient magnitudes over the three color channels, and the angle is the one corresponding to that maximum gradient. This produces images like the ones above, where the important information is kept and the unnecessary parts are disregarded.
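As a rough sketch of these two steps, here is how the classic 1-D HOG kernels [-1, 0, 1] (horizontal) and its transpose (vertical) can be applied with NumPy, with the result converted to magnitude and direction. The function name and the grayscale-image assumption are mine, not something prescribed by the article:

import numpy as np

def gradients(img):
    # img is assumed to be a 2-D float array (a grayscale image)
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal kernel [-1, 0, 1]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical kernel [-1, 0, 1] transposed
    magnitude = np.hypot(gx, gy)             # sqrt(gx^2 + gy^2)
    direction = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned angles in [0, 180)
    return magnitude, direction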
Making A Histogram From These Gradients
To move on to the next step of the HOG algorithm, make sure that the image is divided into cells so that the histogram of gradients can be calculated for each cell. For example, if you have a 64x128 image, divide your image into 8x8 cells (this will involve a bit of tweaking and guessing!).
Feature descriptors allow for a concise and succinct representation of particular patches of an image; taking our example from above, an 8x8 cell can be described using 128 numbers (8x8x2, where the 2 comes from the gradient magnitude and direction values). By further converting these numbers into histograms, we get a representation of the image patch that is much more robust to noise and more compact.
For the histogram, make sure to split it up into nine separate bins, each corresponding to angles from 0–160 in increments of 20. Here’s an example of how an image with the respective gradient magnitudes and directions can look like (notice the arrows get larger depending on the magnitude).
Now, how do we decide where each pixel goes inside the histogram? It's simple. A bin is selected based on the gradient direction, and the value placed inside the bin is the gradient magnitude. Note that if a pixel's direction falls between two bins, its magnitude is split between them in proportion to its distance from each bin. After performing this process for every pixel in the cell, a histogram is formed, and the bins that carry the most weight can easily be seen.
Placing the pixel in the bins depending on their direction and magnitude. Credit: learnopencv.com
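Here is a minimal sketch of this voting scheme for a single cell, assuming the magnitude and direction arrays from the previous step; the proportional split between neighbouring bins (with angles near 180 wrapping around to the first bin) follows the standard HOG formulation:

import numpy as np

def cell_histogram(magnitude, direction, n_bins=9, bin_width=20):
    # Build the 9-bin orientation histogram for one cell (e.g. an 8x8 patch).
    hist = np.zeros(n_bins)
    for m, a in zip(magnitude.ravel(), direction.ravel()):
        a = a % (n_bins * bin_width)         # keep angles in [0, 180)
        low = int(a // bin_width) % n_bins   # bin on the lower side of the angle
        high = (low + 1) % n_bins            # neighbouring bin (wraps around)
        frac = (a - low * bin_width) / bin_width
        hist[low] += m * (1 - frac)          # vote split in proportion to distance
        hist[high] += m * frac
    return hist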
Block Normalization (Optional)
Lighting variations are another major factor that can mess up how these gradients are calculated. For example, if the picture was darker by 1/2 of the current brightness, then the gradient magnitudes and subsequently the histogram magnitudes would all decrease by half. Therefore, we want our descriptor to be devoid of lighting variations so that it is unbiased and effective.
The typical process of normalization occurs by simply calculating the length of a vector through its magnitude and then simply dividing all elements of that vector with the length. For example, if you had a vector of [1,2, 3], then the length of the vector, using basic mathematical principles, would be the square root of 14. By dividing the vector by this length, you arrive at your new normalized vector of [0.27, 0.53, 0.80].
This process of normalization can be performed depending on your preference (whether you want to perform on the 8x8 block or even a larger 16x16 block). Just remember to first turn these blocks into element vectors so that the normalization highlighted above can be performed.
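As a concrete illustration of that step, here is a small sketch of L2 normalization (not a full block-normalization routine):
import numpy as np
def l2_normalize(block_hist, eps=1e-6):
    # Divide the concatenated histogram vector by its length; eps guards against division by zero.
    v = np.asarray(block_hist, dtype=float)
    return v / np.sqrt((v ** 2).sum() + eps)
print(l2_normalize([1, 2, 3]))  # roughly [0.27, 0.53, 0.80], matching the example above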
Image Visualization
Example of final HOG visualization. Credit: Twitter
In many instances, the HOG descriptors are often visualized with the image on the right in order to get an accurate representation of the shape of the person. This visualization can be extremely useful in understanding where the gradients shift and knowing where the objects are inside of the image.
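If you want to produce a visualization like this without writing the whole pipeline yourself, recent versions of scikit-image expose all of these steps through a single function; the parameters below mirror the 8x8 cells and 9 bins discussed earlier, and person.jpg is again a placeholder:
from skimage import color, io
from skimage.feature import hog
image = color.rgb2gray(io.imread("person.jpg"))
features, hog_image = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2",
    visualize=True,
)
# features is the final descriptor vector; hog_image can be shown with matplotlib.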
Applications of HOG
The Histogram of Oriented Gradients object detection method can undoubtedly lead to great advancements in the future in the field of image recognition and face detection. From boosting AR tools to improving vision for blind individuals, this field can have a large impact on multiple industries.
The application that I’m the most excited for is in the field of autonomous vehicles. There are many different approaches to computer vision to detect vehicles and objects, so it makes sense that there aren’t many companies implementing HOG in their self-driving vehicles. I hope that some companies see the viability of HOG and utilize this in the future.
There aren’t many resources online explaining how to use HOG for image detection, so comment below if you all would be interested in getting a future article on the coding implementation of HOG!
TL;DR
The Histogram of Oriented Gradients method (or HOG for short) is used for object detection and image recognition.
HOG is based off of feature descriptors, which extract the useful information and discard the unnecessary parts.
HOG calculates the horizontal and vertical component of the gradient’s magnitude and direction of each individual pixel and then organizes the information into a 9-bin histogram to determine shifts in the data.
Block normalization can be further utilized to make the model more optimal and less biased.
HOGs can be used everywhere from the field of autonomous vehicles to the field of AR and VR (mostly anything involving image detection).
Additional Resources | https://medium.com/analytics-vidhya/a-gentle-introduction-into-the-histogram-of-oriented-gradients-fdee9ed8f2aa | ['Karthik Mittal'] | 2020-12-21 12:27:13.809000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'Image Classification'] |
Fixing design in squads | Sprint therapy
You may be thinking now that this was just half-baked agile — and you would be right, but it wasn’t the only problem. One solution might have been to double down on resource and recruit more designers, so that there were at least 2 per squad. That option wasn’t available to us. With fewer than 8 designers facing off with around 60 engineers in 7 squads, we would have to think laterally and cross-tribe. The plan was delivered in 5 stages.
Stage 1: Detach the design process from the development process
The short term priority was to give the design process visibility. We created a new Jira project per tribe, which gave us access to a backlog, sprint planning, velocity and retros. Some of these agile ceremonies were happening in the squad boards, but they were engineering focused, and gave little constructive data or feedback for improvement for designers.
Organising design at a tribe level allowed us to pool talent and resources. Designers might have experience working from research to prototyping and UI production, but they are rarely deeply skilled at everything. Some designers are more comfortable working at the research and strategic end of the process, some love to solve problems with screen design and some work with developers to polish up interfaces. With a single designer embedded in each team, it would usually mean some tasks would be rushed or skipped entirely.
We split our backlog into research, discovery, flow analysis, wireframing and UI tasks. This not only gave us a good idea of who would be best working on it, it also allowed us to estimate on each design step in isolation. This gave stakeholders and PMs much better visibility of the end to end design process, and how long it would take, allowing them to prioritise more effectively. We made it clear to stakeholders that we wouldn’t be following a design process dogmatically, only introducing design tasks where they were relevant and necessary.
At a tribe level, we were able to introduce regular informal feedback sessions — peer review is much less intimidating than a guild-level session.
Difficulties with stage 1
There was some concern that having separate design boards might cause confusion by splitting the documentation. We linked the design issues in Jira with their corresponding issues in squad boards. Many developers came to prefer this, as long design conversations and design proposals were confined to a separate channel, rather than cluttering the technical channels.
Stage 2: Initiate design sprints, one sprint ahead
The first few design sprints were galling for the PMs, as they needed to start thinking ahead, supplying us with fully researched, prioritised and briefed work to complete two weeks ahead of the developers.
We were flexible on the content of project initiations, but not on briefs. Briefs took the form of user stories with hypotheses and supporting data. If a brief wasn’t fully formed, we could turn it into a discovery task, to help the PM identify goals and outcomes, to ask questions and banish assumptions using qualitative and quantitative evidence.
For some PMs, this was an education exercise, because they were unaccustomed to interrogating ideas to this level of detail or communicating that detail to designers. Despite some reluctance, rarely could they argue against the case we made for fully fleshed out briefs.
Being thorough with our briefs and discovery phases helped focus everyone’s attention on the problem rather than the solution. On more than one occasion, it helped us find evidence that users didn’t want the feature to begin with, so we could be comfortable terminating the project long before developing it.
We started with no idea of velocity, and the first couple of two week sprints were completed without strict estimation.
The squads needed to work on non-feature work for a sprint or two before the design work came in. We suggested they work on technical debt until the features were ready.
Difficulties with stage 2
Squads accustomed to continuous deployment found it difficult not to work on new features, and some attempted to shortcut the process by starting dev early and by bypassing discovery and design altogether. Both attempts were highly unsuccessful mainly due to unclear goals and outcomes. We were later able to highlight and improve these in retros.
To compensate for the perception of inflexibility in the process, we agreed to hand over work as soon as it was ready, rather than at the end of sprints, and engineers agreed to wait until work was handed over before grooming solutions. In the end though, the new rhythm became clearer, and the engineering squads followed our lead and began to work in sprints as well.
Stage 3: Plan, plan, and plan again
We began to run sprint planning ceremonies where all the product managers and directors in the tribe could review, discuss and prioritise our fully fleshed out design backlog. We encouraged attendance by gently reminding PMs that their work may not be prioritised if they tried to feed their work in by other channels.
As designers, we were simply there as facilitators of the discussion. I had initially imagined these sessions might be confrontational. But the product manager peer group was engaged, reasonable and focused on getting the best outcomes for the whole tribe. In the sessions, poorly thought out ideas were called out by peers and pushed down the priority list or back to discovery, which compelled PMs to properly frame problems and brief their work.
With full and consistent briefs emerging from squads, it was much easier for the product director to keep a view on outcomes, and they became a more effective, predictable and less reactive arbiter and representative for business strategy.
After two sprints, we introduced story point estimation and after one more sprint we had a very good idea of the velocity of the team. This gave us a very clear idea of the effort that could be spent, and sprint planning ceremonies began to go very smoothly indeed — we’d often finish well before the 1hr time limit.
Problems with stage 3
There was a feeling from some PMs that detaching the design process from the engineering process would result in a loss of control over the programme of work by PMs. We asked for their faith here, and agreed with them that if the new operating model was to prove less effective, we would roll back. Once they had become accustomed to the planning sessions, they realised they still had full control over priority and schedule.
There was the perception that designers were being removed from squads to join a new team. Managers are always fiercely protective of resources. Again this perception was dismissed after the first sprint planning meeting. PMs became much more aware of resources across the tribe, and encouraged each other to justify that resource to their peers, rather than ringfence it.
Priorities in startups change quickly and constantly. Some PMs weren’t happy to wait two weeks to discuss new stories or assess if the sprint needed reprioritising. At their request we added a half-sprint replanning ceremony. But this wasn’t well attended and we decided to stop them. I interpreted this as a change in behaviour amongst the PMs, who had become used to thinking ahead, and deploying resources more proactively.
Stage 4: The ceremonies: Technical reviews, show & tells, wireframe and UI Handovers
In the original embedded design process, engineers felt connected to the design solution because they were co-located. The designer would show designs to developers on the local machine, informally and one to one, allowing constant feedback and iteration in the design. Technical considerations were fed constantly into the design.
But some of the changes were suspect, made for the sake of convenience and speed of development. Decision making was opaque and undocumented. This worked while the team was originally colocated in Spain, but it became increasingly unsustainable as the team decentralised.
Moving to sprints allowed us to schedule more formal reviews and checkpoints for technical and product oversight.
Within the team, we requested ad-hoc peer review sessions. Focussed on idea generation rather than critique, these were useful in getting over periods of creative block and stagnation, and were sanity checks on more unusual ideas.
For the product managers, we booked in weekly show & tell sessions, to show work in progress, review work against goals, and accommodate feedback early on.
For all of the engineers, we organised technical reviews of low fidelity wireframes. Discussions centred around feasibility, effort and the quality of the solution, and new ideas were welcome.
Once the solution had been agreed, we worked on hi fidelity UI specifications, ending with a UI handover to the FE team. Discussions at this point could be centred around usability, content, style and branding.
We asked for a UI QA step before deployment. The product of this would be screenshots of the release candidate, a list of small snags in the UI. These were prioritised and remedied before release.
With UX designers able to operate graphics software, low fidelity wireframing was previously seen as an unnecessary step. But we demonstrated that design ideas and proposed solutions were better assessed as early as possible in the sprints and at the lowest fidelity possible. It allowed back end teams to analyse any new data requirements for the design and to get started earlier. In many cases this meant the data structures and APIs were ready before the front end team started work.
The front end teams could analyse interactions and carry out spikes to discover the effort required to solve them. Any changes that came through technical limitations could be incorporated in low fidelity. The hi fidelity UI step could then be produced much more quickly, with iterations focused only on solving creative problems rather than business ones.
To facilitate communication of designs, we used a combination of Jira, Figma and Zeplin.
Difficulties with stage 4
A technical review isn’t always the best forum for feedback. In general, we found our engineers tend to be an introverted bunch, preferring to prepare responses than in reviews. There’s also not much time to review things in depth. We found it useful to send out links the day before reviews and handover sessions, allowing the engineers to prepare their questions and comment directly on the artwork.
Showing work in progress and low fidelity work through Figma gave access to engineers of unfinished designs. With unlimited access to this work in progress, they became confused about what was ready to develop and what wasn’t. A couple of times, they started development before the designs were ready and had to roll back when the designs were handed over with major iterations.
Although Figma’s cloud file hosting and sharing makes Zeplin redundant in a lot of ways, we found it useful to separate the “in progress” and “handover” environments. Designs that were ready to develop were “committed” to Zeplin.
Stage 5: Retros
No process is ever perfect. Retros have outsized importance when introducing operational changes. Before implementation, we used retros to help persuade reluctant PMs that the process was malleable and adaptable. In the retros themselves, people were encouraged to share their views without blame, and the whole team accepted responsibility for the problem. We made some major tweaks to the process at first, focusing on making estimation more accurate, introducing flexibility into the delivery, and improving communication by adding lots of new sync meetings. | https://medium.com/swlh/fixing-design-in-squads-f9291309c0a8 | ['Andy Birchwood'] | 2020-04-13 09:38:54.038000+00:00 | ['Agile Design', 'Design', 'Design Operations', 'Design Process'] |
Look-alike models for Social Good | Background
Community Hubs is a not-for-profit organisation with the aim to help migrant and refugee families, especially mothers with young children to connect, learn and receive health, education and settlement support. The primary unit of impact are hubs — schools where the CHA sessions are run.
The goal of my analysis is to ease the selection of schools to establish new hubs by identifying Australian schools similar to existing CHA hubs. I will employ some basic look-alike modelling for this purpose. There are two ways of solving this problem — unsupervised and supervised. However, there are no negative instances (all schools where hubs don’t exist currently are not negative cases). Hence, the supervised method is not very suitable. To keep things simple, we will measure the ‘similarity’ of every school with the current hubs, sum it and rank the schools in decreasing amount of similarity.
Diving in
Like any other data science problem, selecting features to measure the similarity is based on the business problem. I started looking for features which could identify schools in LGAs (Local Government Areas) where there are more migrant/refugee families (generally non-English speaking), especially those with young kids (census data, early childhood development).
I narrowed my search to these data sources:
Existing Hub Profile
ACARA school profiles with information such as enrolments (girls, boys, total), staff (full-time, teaching, non-teaching), SEA quarter of the students (Socio-Economic Advantage), language background other than English, type and location of schools (primary, secondary, combined)
ACARA school profile sample
Australian PHIDU Social Atlas for Local Government Areas (LGA) with indicators about Families, Migrant Statistics (Skilled, Humanitarian, Family), Birthplace (Non-English Speaking Countries), Early Childhood Development, Child care, Income Support
Preprocessing
The hub's data has school name, latitude, longitude so the first step is to merge it with other ACARA schools to get other information — enrolment, staff. I used the Google Maps Reverse Geocoding API to identify the address (postcode, state, suburb etc.) from the latitude, longitude. This list was joined with the schools through postcode and school name. The process identified most of the schools associated with the hubs. Some hubs were still not identified due to mismatch in school names (e.g. words like primary or suburb name might be present in school names).
Code to reverse geocode hubs
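The gist embedded above is not reproduced here, but the reverse-geocoding step looks roughly like the sketch below, assuming the googlemaps Python client and a hubs DataFrame with latitude and longitude columns (the API key and column names are placeholders):
import googlemaps
import pandas as pd
gmaps = googlemaps.Client(key="YOUR_API_KEY")
def reverse_geocode(lat, lng):
    # Ask the Geocoding API for the address components of a coordinate pair.
    results = gmaps.reverse_geocode((lat, lng))
    out = {"postcode": None, "state": None, "suburb": None}
    for comp in (results[0]["address_components"] if results else []):
        if "postal_code" in comp["types"]:
            out["postcode"] = comp["long_name"]
        elif "administrative_area_level_1" in comp["types"]:
            out["state"] = comp["short_name"]
        elif "locality" in comp["types"]:
            out["suburb"] = comp["long_name"]
    return pd.Series(out)
geo = hubs.apply(lambda r: reverse_geocode(r["latitude"], r["longitude"]), axis=1)
hubs = hubs.join(geo)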
ACARA school profiles track year on year changes, hence the dataset is filtered to keep the latest profile for every school. Some additional features, girls % enrolment, boys % enrolment, year start (e.g. Prep, K, U)/end, teaching/non-teaching staff by enrolment were created since the hubs are operated with the help of volunteer and school staff. School names were also cleaned (removed suburbs, primary/east/west etc.) since it might be easier for CHA to approach other branches of existing hubs.
Since PHIDU datasets are on a LGA (Local Government Area) level, to merge them with schools/hubs information geographic boundaries were fetched from data.gov.au. The library geopandas was used for all geospatial analysis. The latitude/longitude information for all ACARA schools was fetched using Google Maps Geocoding API (note: we have the latitude/longitude information for hubs but not all schools). Finally, the datasets were merged using the spatial join ( sjoin ) functionality of geopandas.
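A minimal sketch of that spatial join, assuming a schools DataFrame with latitude/longitude columns and an LGA boundary file from data.gov.au (file and column names are placeholders; older geopandas releases use op= where newer ones use predicate=):
import geopandas as gpd
lga = gpd.read_file("lga_boundaries.geojson").to_crs("EPSG:4326")
# Turn the schools table into a GeoDataFrame of points.
schools_gdf = gpd.GeoDataFrame(
    schools,
    geometry=gpd.points_from_xy(schools["longitude"], schools["latitude"]),
    crs="EPSG:4326",
)
# Attach the LGA (and hence PHIDU) attributes of the area each school falls inside.
schools_with_lga = gpd.sjoin(schools_gdf, lga, how="left", op="within")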
Modelling
To find the similarity, we need a distance metric — Gower is chosen for this analysis to account for numerical (e.g. enrolment, PHIDU, staff) and categorical features (e.g. suburb, postcode, school name).
Gower dissimilarity is calculated as an average of the dissimilarity of all the features. If the feature (f) is numerical, the ratio of the absolute difference of the values and the range of the feature is used. For categorical values, the feature is similar if both the hub and the school have the same value.
Gower distance between a hub (i) and a school (j)
Such similarity is calculated for every hub-school pair and summed for every school to get a ranked list of schools.
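In code, the per-pair similarity and the final ranking look roughly like this; num_cols, cat_cols and the example column names are illustrative, the schools and hubs DataFrames are assumed, and feature ranges are assumed to be non-zero:
import numpy as np
def gower_similarity(hub, school, num_cols, cat_cols, ranges):
    # Average of per-feature similarities: range-scaled distance for numerical features,
    # exact match (1 or 0) for categorical ones.
    sims = [1 - abs(hub[c] - school[c]) / ranges[c] for c in num_cols]
    sims += [1.0 if hub[c] == school[c] else 0.0 for c in cat_cols]
    return float(np.mean(sims))
num_cols = ["Language Background Other Than English (%)"]  # example numerical features
cat_cols = ["Suburb"]                                       # example categorical features
ranges = {c: schools[c].max() - schools[c].min() for c in num_cols}
# Sum each school's similarity to every hub, then rank schools by the total.
schools["similarity"] = [
    sum(gower_similarity(hub, school, num_cols, cat_cols, ranges) for _, hub in hubs.iterrows())
    for _, school in schools.iterrows()
]
ranked = schools.sort_values("similarity", ascending=False)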
Since Gower similarity is a sum-based metric, any feature will have an equal impact. But some features might not be relevant and can sometimes deteriorate the ranking. Hence, feature selection is important. For this analysis, we use information value and weight of evidence to determine which features are most important in defining a “hub-worthy” school. The top 20 features:
Features importance to select “hub-worthy” candidates
Our goal is to find a ranking which identifies the existing hubs out of all schools pretty quickly and early. In other words, we should aim to maximise the recall/sensitivity (The number of hubs identified in Top K schools). I selected K as # of hubs * 1.5 ~ 105 to give ample opportunity for the model to identify all the hubs in the top 105 schools.
Finally, the model iterates through the list of features ranked by their information value and selects the ones which increase the recall. Two models were created: one with location (Suburb/LGA) and school name information, and one without it. If the business wants to venture into new suburbs without any existing hubs, they can use the model without the location features. The algorithm selected the following features:
Model with location/school name features (65% recall):
Suburb, school_name_processed, % Children developmentally vulnerable in the communication domain
Model without location/school name features (30% recall):
% children in jobless families, Language Background Other Than English (%), % Permanent migrants under the Humanitarian Program (2000 to 2006), Pensioner Concession Card holders, Health Care Card holders, Jobless families with children under 15 years, Children developmentally vulnerable in language and cognitive domain, People receiving an unemployment benefit, People receiving an unemployment benefit for less than 6 months, School_Year_Start
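The greedy selection loop described above can be sketched as follows, where evaluate_recall is a placeholder for whatever function scores recall at K for a candidate feature set:
def greedy_select(features_by_iv, evaluate_recall):
    # Walk the features in descending information-value order and
    # keep each one only if it improves recall at K.
    selected, best_recall = [], 0.0
    for feature in features_by_iv:
        recall = evaluate_recall(selected + [feature])
        if recall > best_recall:
            selected.append(feature)
            best_recall = recall
    return selected, best_recall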
Post Analysis
We can ascertain if the ranking does actually work by analysing if the Top 105 ranked schools exhibit similar behaviour to hubs for the selected features. We will analyse the model without location/school name information.
Feature summary for all schools:
Feature summary for all ACARA schools
Feature summary for the Top 105 ranked schools:
Feature summary for Top 105 ranked schools
Feature summary for existing hubs:
Feature summary for existing hubs
As a final spot check, I looked at the existing hubs and their ranks in Model 1 vs Model 2 and verified that they are ranking higher. | https://medium.com/weareservian/lookalike-modelling-for-dummies-12166420dd36 | ['Shraddha Patel'] | 2020-06-04 04:37:31.436000+00:00 | ['Python', 'Feature Engineering', 'Modeling', 'Data Science', 'GIS'] |
FPL Gameweek 1: And so it has begun.. | State of things
Last week, we came to the conclusion of using the FDR strategy — Focus on choosing players from teams with a higher chance of victory. With the higher chance of victory, we also gain valuable points in defense, our points should generally be higher across the field and we allow the possibility of significant points from goals scored as well.
After taking a look at the data we accumulated and analyzed, we came to the conclusion that it would be a good starting strategy to pick players from either Everton, Wolves, Tottenham, Leicester, or Chelsea. Of course, we’re all learning along here as we try different methods (especially me), so there are several things we can observe from our results.
Here is also a link to the Gameweek 1 results by fixtures.
Fantasy Data Gameweek 1 Team [FPL]
For our 15 man squad, we chose:
3 Players from Wolves (Jota, Traoré, Patricio), 3 Players from Everton (Keane, Richarlison, Doucouré), 3 Players from Tottenham (Kane, Alderweireld, Moura), 3 Players from Leicester (Schmeichel, Barnes, Söyüncü)
Since we can only choose a maximum of 3 players from each team and the choosing of these above players left only 1 spot in a FWD Position and 2 spots in DEF, we chose Azpilicueta from Chelsea and Van Dijk, Firmino from Liverpool for the remaining spots.
Pros of our selections:
12 of the 15 players we chose played in a winning team. The only loss came from Tottenham, but this was bound to happen as we chose players from both Tottenham and Everton. The result could have gone the other way around. The reason for choosing both was the fixtures over the first 6 Gameweeks. We did sacrifice some points here in Gameweek 1 but we may have a better chance over time with our strategy. Our defensive players did quite well, accumulating the most points in the team with 22 points vs 5 from midfielders and 6 from attackers. This presents a hypothesis that we generally do not need to have the best attackers to score points. A strong defense is also valuable. Granted, we are dealing with very limited data at this point. This is something to explore.
Cons of our selections:
Captain points matter so much — we haven’t yet taken this critical point into consideration. For each week captain points are doubled. Taking a look at the best team of the week below, we can clearly see that if we had chosen Salah as our captain, we would have been able to accumulate over 40 points from a single player! That’s 2 points more than our current team total. As the saying goes, When in doubt, choose Salah. We also didn’t consider the free transfers allowed per week. In Fantasy Premier League, the rules state that we are allowed one free transfer per week. We had strategized with the first 6 Gameweeks in mind without any transfers.
Highest Scoring Players of Gameweek 1 [FPL]
This scenario allows for more dynamic team selections. Needless to say, FPL has so many things one must consider and it’s easy to miss something. We can start to consider this into our future team selections. One could also argue whether it’s a better option to keep the same team for 6 Gameweeks to see their progression or to switch players often, this is a study worth considering.
This past week, I came across a wonderful lecture by Joshua Bull of the University of Oxford. Joshua is a post-doc who researches in the area of Mathematical Biology at Oxford but in his free time, enjoys playing Fantasy Football. He attempted to tackle FPL with data as well last year and goes into great detail over all the strategies he attempted during the 19/20 season. He also discusses the struggles of trying to create a magic formula for a winning team each Gameweek. Something we just witnessed above with our own team. It was a great listen and I definitely recommend it for anyone interested in this. If you’re reading this, I’m guessing you are. Oh, and it should be mentioned that Joshua also won the 19/20 Fantasy Premier League season. So don’t miss it!
So why these players?
Using the data from FBRef, we can see that there are a large number (Well over 150+) of varied stats we can choose from. It can get quite difficult to analyze each and every stat. So as a base starter, we will choose a few stats that we can consider to be key separators to the rest when it comes to choosing the most effective players.
npxGxA90 - Non-penalty Expected Goals & Assists per 90. This is another key stat that reveals the most creative and productive player in a team.
SCA90 - Shot-creating actions per 90. This stat reveals the most creative player in the team. The true playmaker of the team. This stat is valuable when trying to find key players that might otherwise go unnoticed with npxGxA90. These are those players that are next in line to add points after the most effective players.
PrgDistPass - Progressive Distance Passing. These are the players that have passed the longest and the most towards the opponent's goal. This could be a useful stat to find the most attacking players, possibly from defense. The effectiveness of this stat is still subjective but we did use it for the past Gameweek.
KP - Key Passes. This stat helps us find the most effective passers.
AerialAcc - Aerial Duels Won % - These players have the highest chance of scoring from set-piece plays due to their great heading ability.
Clr - Clearances. The number of times a player cleared a ball out of their own goal-end. Possibly a valuable stat for defenders in FPL Value.
PassesCompAcc - Pass Completion Accuracy. These are the most accurate passers in the game. The passes of these players have a higher probability of reaching their team-mates.
Ast90 - Assists per 90. The most amount of Assists over 90 minutes by any player.
Gls90 - Goals per 90. The most amount of Goals over 90 minutes by any player.
SoT - Shots on Target. The players who score higher here are the most clinical shooters.
Using these stats above, let's take a look at just the players from the teams we chose. We excluded new signings: they have a possibility to start, but it is just as likely that they won't in a new team, especially in Gameweek 1.
Defenders
One of the key areas in our team selection was in defense. As we’re able to see in hindsight the number of points we accumulated here. A key criterion to consider was the number of matches each player played last season. So as a base requirement, we are only highlighting those players that have played the most amount of games for their team. Barring injuries, this allows for a solid chance that they will feature in most games this season as well.
Taking a look at the # of clearances a player made last season and combining it with the most capable aerial players gives you an interesting stat. This combined selection allows us to consider defenders that have a higher chance of scoring both from set-pieces while also being strong in the back.
No doubt, Player of the Year Van Dijk stands out. Last season, he scored 178 FPL points and even more the year before! (208)
So it was a no-brainer to add him to our team. We also see that some other solid additions would have been Evans, Saiss, Söyüncü, Keane, and Alderweireld.
Evans would have been a solid choice but he was injured for Gameweek 1. So we added all the other players except Saiss to our list. We will regret not choosing Saiss as he scored 15 points this past Gameweek. Part of it is luck. But looking at this data, It could have been any one of these players. Another hypothesis we could consider is opposition’s defensive ability by pairing these teams against each other. This is an area worth exploring for any of you out there.
Clearances vs Aerial Duels Won % (2019/2020 Season) [FBRef]
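A chart like the one above can be rebuilt from an FBRef export with a few lines of pandas and matplotlib; the file name, column names and the 30-match cut-off below are all assumptions:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("fbref_defenders_2019_20.csv")
regulars = df[df["Matches_Played"] >= 30]  # keep only the players who featured most often
plt.scatter(regulars["Clr"], regulars["AerialAcc"])
for _, p in regulars.nlargest(8, "Clr").iterrows():
    plt.annotate(p["Player"], (p["Clr"], p["AerialAcc"]))
plt.xlabel("Clearances (2019/20)")
plt.ylabel("Aerial duels won %")
plt.show()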
Another player you might have noticed in the list is César Azpilicueta. Looking at the key passes players made last season combined with the progressive distance they covered, we see these following players stick out. Considering both Alexander-Arnold (£7.5) and Robertson (£7.0) cost the most among all the defenders, we were left with 2 options, Digne and Azpilicueta. It was a toss-up between the two. However, Chelsea as a team seemed more formidable to Everton (who faced Tottenham in Gameweek 1), so we chose Azpilicueta. In the end, Azpilicueta didn’t start and Digne scored a goal. Cmon Azpi! We failed to consider the Right back competition at Chelsea with Reece James. Azpilicueta played all last season (36 Matches) until an injury forced him out at the end of the season. A parameter that was missed. Albeit another reason for Digne’s omission was the selection of Keane who cost only £5 (£1 less) and did fairly well himself. Alas, we will tinker with the squad and consider more metrics as time progresses.
Key Passes vs Progressive Distance Passes (2019/2020 Season) [FBRef]
Mids and Forwards
Take a look at this following spread where we combine the most creative attackers with the most effective attackers. You’ll see some familiar expensive faces. Salah (£12), Mane (£12), Firmino (£9.5) but some valuable cheap options too in Jimenez (£8.5), Barnes (£7), Jota (£6.5), Maddison (£7.0), and Richarlison (£8.0). Instead of going for the most expensive options, we chose a strategy of continuing to choose from our preferred teams. Jota seems like an insanely good value-for-money considering how he ranks in this spread. Unfortunately, he seems to be down the pecking order at Wolves and only came on as a substitute. Salah should have been a no brainer with his 20 points but we also have to take into consideration that if we had picked Mané, we would have scored just 2 points this past Gameweek. So more expensive isn’t always the better option.
Shot-Creating Actions per 90 vs Non-penalty Expected Goals and Assists per 90 (2019/2020 Season) [FBRef]
Barnes played exceptionally well in his first Gameweek. He had 5 shots overall (1st among all LEI players) and 2 Shots-on-target as well. It was only a pity he couldn’t join in on the fun with Vardy and Castagne.
Also Richarlison did this over the weekend.
When did Richarlison turn into Fernando Torres? via Streamable
Additionally, taking a look at the Pass Completion Accuracy tied into the total Assists per 90, we can see that in addition to the popular choices, Traoré is a good cheap buy and a good addition from Wolves. We’re hoping he produces more over the coming Gameweeks. We also found another cheap but great pick in Lucas Moura who believe-it-or-not compared fairly well to Salah himself on SoT value. | https://medium.com/fantasy-tactics-and-football-analytics/gameweek-1-and-so-it-has-begun-b38e090ccc74 | ['Tom Thomas'] | 2020-10-05 13:33:22.575000+00:00 | ['Fantasy Sports', 'Premier League', 'Data Science', 'Soccer', 'Fantasy Football'] |
Manage Up: Take Charge of Your Own Growth | How Do You Manage Up Effectively?
Photo by the author.
The thought of managing up could be intimidating. How do you manage your own manager?
How about rephrasing it as “choosing to communicate effectively upwards”? Notice the word “choosing,” which makes us feel powerful through autonomy and a sense of control. So, next time the thought of managing up gives you the creeps, simply rephrase it as “choosing to communicate effectively upwards.”
The key to managing up is in the decision to act by taking charge of building your relationship with your manager, behaving in ways that build trust, aligning your priorities with their priorities, and doing work that shares their goals and mission. It requires opting out of the drama triangle and taking on the role of the creator instead of acting as the victim.
So, while we cannot change others, let’s do our best to influence them by practicing these seven strategies for managing up effectively.
1. Manage up to build trust
Spend time understanding your manager by learning about the challenges of their job, what motivates them, what keeps them occupied, what they care about, and what their priorities are. A simple way to gather this information is to simply ask.
You don’t need to approach it as a formal one-on-one or define a meeting agenda to learn this information about your manager. Use informal chats to inquire and gather this data on a regular basis, show curiosity, and ask questions. While talking to them, ask some of these questions:
Why is this important to you?
What are your top priorities?
What’s one thing that’s at the top of your mind?
What about this bothers you?
I am curious to learn about the challenges of your role. Would you be open to sharing some?
What about your work inspires you?
This information will enable you to establish a better line of communication with your manager by doing work that aligns with their goals, sharing your concerns in a manner that highlights a potential risk to their plan, and collaborating together to find better solutions to problems.
Managing up in this manner will build trust by enabling alignment on ideas and a desire to do better through openness to constructive conflicts.
2. Manage up to define success
We are so focussed on our individual goals that we fail to recognise what success as a team looks like. Without an understanding of what it means to succeed as a team, you may achieve your individual goals but still fail as a team.
By knowing the success criteria of your team, you can realise and contribute to opportunities beyond your own goals. Once you understand what success looks like, you can be part of that success by:
Negotiating and setting your own priorities.
Seeking clarity in your work.
Questioning the effectiveness of your work.
Identifying and taking on additional responsibilities.
Saying no to work that does not align with the success of your team.
Managing up by being part of your team’s success as opposed to focussing on your individual goals can supercharge your growth.
3. Manage up to get the support you need
Want more responsibility and autonomy? Be your own advocate and ask for it.
You may also need support to do your job better. Unless your manager is deeply involved in your day-to-day activities and takes time to connect with you outside work, it may not be possible for them to identify what kind of support you need.
If you do not take charge of your own effectiveness and efficiency at work, you will be limited in the outcome of your work and the impact it can generate.
So, take the lead to identify the support you need by discussing:
What prevents you from achieving your goals.
What you can do to overcome these obstacles.
How they can help.
Managing up by recognising your own needs and finding the best ways to fulfil those needs can make you effective in your job, enabling you to do more, better.
4. Manage up to make your work visible
Your manager may know your intent, but unless you show them the impact of your work, they may not realise the value you bring to the team.
Expecting your manager to pull updates from you can leave you at the mercy of their time and willingness to show interest in your work. By utilising a push model, you can provide them with necessary updates at the right time — including both good as well as bad news.
By keeping them in the loop on important issues, you not only build trust but also provide them with an opportunity to attend to issues before it’s too late.
Isn’t push more efficient than pull for both you and your manager?
By taking the initiative to showcase your work and seeking help on important issues, you can get your manager’s support to do more forward-looking work.
5. Manage up to help them see their blind spots
Most managers are oblivious of how others perceive them in the workplace and the impact their behaviour and actions have on others. These blind spots can make them take certain decisions or act in ways that are detrimental to the growth of the team.
By helping your manager see their blind spots, you can enable them to recognise and make changes in their way of working with others.
Be careful, though. While conveying the message is crucial, it should not be disrespectful. Your manager will engage in a constructive discussion as long as it doesn’t hurt their ego.
Let’s take an example. Instead of saying, “You micromanage,” try, “I think I am now ready to have less supervision and guidance on my work. I hope you can trust me to update you when there’s significant progress and reach out for any support I may need. Meeting multiple times during the day distracts me from my work and prevents me from making progress on my tasks. What do you think?”
The first message can make them defensive, while the second one allows them to think and reflect on the intent.
Managing up by sharing feedback is not only beneficial for the manager, but it can be highly useful for the team, which benefits from the changes that such feedback can provide.
6. Manage up to align on work preferences
Do you need to work from home on a specific day of the week, be home by a certain time in the evening to look after your child, or have any other work preferences?
If your manager does not ask about your work preferences, which they won’t in most cases, it’s your responsibility to communicate them.
Without keeping them apprised of your situation and how you plan to manage work, you can cause confusion, misalignment of expectations, and prevent them from helping you do your work effectively.
Openly discuss your work preferences and learn about your manager’s expectations.
Seek advice as opposed to being demanding.
Discuss how meetings can be organised for maximum participation.
Discuss how and when you will be available to your team members.
Discuss how you plan to manage other dependencies.
Your manager won’t anticipate your needs and cannot help you unless you talk to them about the subject. They may have work preferences too. So, remember to ask them about it and be flexible to agree on a common solution that works for both of you.
Managing up by being transparent about your specific needs can shift your mind to focus on the work as opposed to worrying about its implications.
7. Manage up to communicate effectively
Not knowing your manager’s preferred mode of communication for different kinds of issues can be highly ineffective for the team. You may end up sending an email for an urgent issue without knowing that your manager only checks their email twice a day or send them a text update on information that’s best conveyed through email.
By using the preferred mode of communication, you can get their attention on the right things at the right time.
Managing up by learning your manager’s preferences and aligning on communication guidelines can help you to be highly productive by utilising the right channels to communicate and get the desired attention.
Imagine a team in which everyone communicates upwards effectively and makes an effort to take charge of their own growth. You can be the one to bring about this change in your team. Make the choice now. | https://medium.com/better-programming/managing-up-149a93854ca9 | [] | 2020-05-19 14:55:10.635000+00:00 | ['Personal Development', 'Management', 'Startup', 'Self Improvement', 'Programming'] |
I’m Growing Weary of Growth | Want to stand out from the crowd? Deliver value? Want people to actually read what you have to say? Try addressing some of the points below.
No, it’s not your job to write something just for me. It’s your job to deliver something of value.
How do you grow? For something non-quantifiable, what benchmark(s) do you get to use? When we think of growth, does it always have to be positive? Maybe there are times when we grow in a bad way. After all, how many times this year have we heard, “things are growing worse?”
My kids are growing, and that’s good. But it also means they’re getting older, and that’s bad (for me, anyway).
People talk about “growth” in relationships, but isn’t that just a cute way of saying “I give a shit and put in the work?” That’s the default option, not something medal-worthy.
How does a guy on an assembly line “grow” his career? If he enjoys the work, isn’t that enough?
Do we even need to grow, or is that a construct that’s been forced on us by the self-help industrial complex? Do we need to “hack” everything?
What happens when we grow weary of growth? | https://medium.com/2-minute-madness/im-growing-weary-of-growth-e471bc874ed5 | ['Kevin Alexander'] | 2020-12-19 10:49:33.314000+00:00 | ['Self Improvement', 'Writing', 'Growth Hacks', 'Relationships', 'Life'] |
From App Engine Flex to Kubernetes | Do you want to move your app to Google Kubernetes Engine? Then read on.
Treat this like a getting-started guide. It will not go into details/concepts but rather help you get started on GKE quickly.
The general outline of the steps are as follows:
Prepare your Docker image. Guess what? You should be able to run the same app engine docker image on GKE without any problems! (if you use the same service account). However, you might wanna consider a cleaner option.
Prepare and setup your Kubernetes cluster. Google sets up Kubernetes and all of its services on the default node pool.
Create a Kubernetes Deployment. In short, a deployment is a construct for a stateless application. For something which requires state, for example a MySQL instance, you would use a StatefulSet.
Expose your deployment with a service. If you are using a "LoadBalancer" service, then you are creating HTTP access. HTTPS requires a little more work which I will discuss later.
Setup metrics and monitoring using Stackdriver. This is key! One of the things that I was worried about was that GAE shows a ton of good charts which are not available in GKE directly. But you can do all of that, probably better with Stackdriver.
Google already has a documentation on the exact same topic: https://cloud.google.com/appengine/docs/flexible/python/run-flex-app-on-kubernetes. I recommend taking a look at that first. There are some nuances which I would like to highlight.
Anyway, let’s get started!
Preparing your Docker image
Assuming you are an app engine flex user, you should already have a Docker container. The catch is that GKE will not read/use your app.yaml file. Thus, any configuration/settings/environment variables that you have set up need to be moved to your Dockerfile.
One important thing to note is that if you are using other Google services like Google Cloud SQL for example, you may need to export a credentials file with it. You can create one by following this doc.
# Credentials for GCloud Access
ENV GOOGLE_APPLICATION_CREDENTIALS=/path/to/json/key
You can run the command below to build and save your Docker image in Google Container Registry (GCR)
cd ~/path/to/my-awesome-project/publish-dir-which-has-Dockerfile
gcloud builds submit --tag "gcr.io/my-awesome-project/0.1.0"
Creating your Kubernetes Cluster
I am a fan of gcloud console and would be using it for the most part. You will need to install GCloud SDK and Kubernetes CLI (kubectl) to move along. If you have gcloud SDK, you can install kubectl simply by doing.
gcloud components install kubectl
A Kubernetes cluster can contain multiple node pools and can host multiple services. It is initialized with a ‘default’ node pool.
NOTE! You can’t change the machine type once the cluster is created. Neither can you change the boot disk size, image, service account, etc.
Also, if you are creating a single node cluster, please note that there are bunch of kubernetes services that will live on that node like kube-proxy, logging-service, etc which consumes up to a GB of RAM, so please allocate accordingly.
gcloud beta container \
--project "my-awesome-project" clusters create "kc1" \
--zone "asia-south1-c" \
--username "admin" \
--cluster-version "1.10.11-gke.1" \
--machine-type "n1-standard-1" \
--image-type "COS" \
--disk-type "pd-standard" \
--disk-size "30" \
--num-nodes "1" \
--enable-cloud-logging \
--enable-cloud-monitoring \
--no-enable-ip-alias \
--network "projects/my-awesome-project/global/networks/default" \
--subnetwork "projects/my-awesome-project/regions/asia-south1/subnetworks/default" \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--no-enable-autoupgrade \
--enable-autorepair \
--maintenance-window "21:30"
This will set up your new Kubernetes cluster 'kc1' with a single node using the standard configuration. If you want to change that, go through this doc: https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool
Before we move on, run this command to control your new cluster from the terminal
gcloud container clusters get-credentials kc1
Creating your Kubernetes Deployment
Once you have uploaded your Docker image to GCR, you can create a Kubernetes Deployment in a single command.
kubectl run my-deployment \
--image=gcr.io/my-awesome-project/0.1.0 \
--port=8080
This will run your application. However, there are a couple of configuration entries which are crucial in getting this right. You can edit your deployment using this command.
kubectl edit deployment my-deployment
This opens up the deployment YAML in vi. Look out for the entries rollingUpdate and minReadySeconds. On saving, the settings are applied immediately.
spec:
  ...
  # Please ensure that your server starts in 10 sec. Else the readiness check fails
  minReadySeconds: 10
  ...
  strategy:
    rollingUpdate:
      maxSurge: 1
      # If you have just 1 instance to start, you need to set this
      # to zero to have a zero downtime rolling upgrade
      maxUnavailable: 0
    type: RollingUpdate
As you have newer versions of your app image, you can update your deployment using the command below. This should be a zero downtime rolling update if the deployment config is right.
kubectl set image deployment/my-deployment my-deployment=gcr.io/my-awesome-project/0.2.0
Create a service
In order for your app to be visible to the outside world, you need to expose your app as a service.
You can use type ‘LoadBalancer’ to expose your service HTTP as is mentioned in the Google doc. Here I am going to outline the steps for HTTPS.
One way to achieve this is by creating a NodePort service. This blog is a good discussion on the available options. I am following the route described below.
Deployment > Service > Ingress > GCloud HTTP(S) Load Balancer
Expose the deployment via a service
Create an ingress to the service
Add an HTTPS frontend using GCloud HTTP(S) Load Balancer
In order to do this you need to first create a NodePort service using the following
kubectl expose deployment my-deployment --target-port=8080 --type=NodePort
Now create an ingress.
kubectl create -f gke/my-ingress.yaml
Here is a sample my-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: my-deployment
    servicePort: 8080
And that’s it! This will automatically create a HTTP(S) load balancer with a default HTTP Frontend. You can view this on the web.
Google Cloud Console > Network Services > Load Balancing > Ingress
But wait! what about HTTPS? You can add multiple Frontends (IP and port) using the web interface (I haven’t done this via CLI).
Open your ingress and choose edit. You should see a screen like this
Load Balancer edit screen on Google Cloud Console
You should be able to add a new frontend and choose HTTPS as shown.
Upload your SSL cert. If you do not have one you can create one from the UI.
Don’t forget to change ephemeral to static IP and update DNS settings!
Note:
Please note that the new cert takes close to 30 min if not more to get deployed.
Even when the UI shows a bright green tick, the service may still show as unavailable.
Your browser caches SSL certs so try a different browser or close/reopen your browser and check.
Monitoring and Metrics
Voila! Your service is finally running and available on HTTPS! But wait. How do you know how it's doing?
Well, for starters, you can always monitor traffic from LoadBalancer’s monitoring console.
Load Balancer Monitoring tab in Google Cloud Console > Network Services
However, just traffic stats isn’t enough for any application. That’s where stackdriver comes in.
In the stackdriver console,
Start by creating a dashboard
Here you can add charts easily by selecting a metric. For example, start with the resource type: Google Cloud HTTP(S) Load Balancer and metric: Backend request count
This would show you all the requests and their response code. You can group across dimensions like country, etc to get exactly the information you need
Stackdriver interface to edit metrics
The following metrics have helped me quite a lot in understanding the health of the app
External Request Count from GKE LB
Backend Request Count from GKE LB. (There are cases where these 2 may emit different response codes)
Total end to end latency
Backend latency
Container metrics: CPU, Memory, Disk, Network
Pod metrics: Memory, CPU, network per pod
At the end you may get something like this. :)
Stackdriver Dashboard Sample
And now you can finally sit, watch and relax with a cup of coffee!
To conclude: setting up Kubernetes can be a daunting task initially. But once you start getting the hang of it, it is a fantastic system to work with. I know I haven't explained why I moved to GKE, but that is probably a story for another day.
If you need a quick response, send a tweet to @suvodeeppyne, else feel free to leave comments below. | https://medium.com/google-cloud/from-app-engine-flex-to-kubernetes-a7a7aad9e66f | ['Suvodeep Pyne'] | 2019-02-13 15:57:40.082000+00:00 | ['Stackdriver', 'Docker', 'Kubernetes', 'Load Balancing', 'Google App Engine'] |
How To Get People To Do Stuff: #2 — Break Through A Confirmation Bias | A confirmation bias is a form of “cognitive illusion”. People tend to pay attention to what they already believe and filter out information that doesn’t fit with their opinions and beliefs. You can breakthrough these biases, however. Watch the video to find out how:
For more information check out:
Daniel Kahneman’s book Thinking Fast And Slow
and my new book (when it comes out in March 2013 — available for pre-order now at Amazon) How To Get People To Do Stuff
In order to get through a confirmation bias, start first with something you know the person or your audience already believes. That way they will let the information/communication in through their attention gate. Once you’ve made it past the confirmation filters you can then slip in a new idea.
What do you think? Have you tried this to break through a confirmation bias? | https://medium.com/theteamw/how-to-get-people-to-do-stuff-2-break-through-a-confirmation-bias-737d952948e8 | ['The Team W'] | 2016-09-21 22:12:08.264000+00:00 | ['Attention', 'Psychology', 'Influence', 'Presentations', 'Persuasion'] |
Scientifically Speaking | My career as a scientist has revolved around one goal. To stay in the workplace.
Photo by Amplitude Magazin on Unsplash
Three decades ago, after doing well academically throughout high school, I expressed interest in choosing science as a major in college. My mother was not very supportive. I was surprised at her reluctance. As an educated Indian woman, she was my biggest supporter. But she was also the most pragmatic person in my life.
Her hesitation did not stem from a belief that girls innately lacked the ability to do well in scientific fields. Her concerns were more practical. She could foresee the potholes on the road that I had chosen.
For a woman, balancing home and career is hard enough, even when it is not tied to working in the laboratory. While girls in India may receive the same level of science education as the boys, the abysmally small number of women who continue to pursue a career in science bears witness to this fact.
Fortunately for me, fate intervened. I went to college, got married, moved to the United States, and then pursued my dream of getting a Ph.D. I landed a job before motherhood beckoned. By the time the baby arrived, I had already savored the excitement of working on innovative research in search of better medicines for human diseases.
The challenge of solving complex problems, the thrill of working in laboratories with state-of-the-art equipment, and the honor of being in the presence of eminent scientists who I could respect for their scientific knowledge and their humility, was a part of my everyday work life. But reconciling the demands of parenting while watching a chemical reaction in the lab or scheduling breastfeeding in the midst of a molecular biology experiment presents unique challenges.
For women scientists who walk this tightrope, role models are few, genuine supporters rare, and wholehearted well-wishers practically non-existent. Moreover, there is a gender disparity not only in salary but also with respect to opportunities for advancement. It is a miracle that the few women who stick around, actually stay the course.
The ability and drive to succeed in scientific fields is ably demonstrated by distinguished women who have succeeded spectacularly. A handful have won Nobel prizes and many more have tasted commercial success.
While the romance of making it big strikes a chord with some, for most of us trying our best to just get through each day, it is not the lure of phenomenal success that keeps us going. What helps are words of encouragement, a non-discriminating environment, and support from colleagues and superiors that is not condescending.
During the years that I worked in the United States, sometimes I had the dubious distinction of being the only immigrant woman in a meeting. When I returned to India, I was not a racial minority but was the only woman in many conference rooms. It was impossible to ignore the additional burden I carried.
By my presence and influence in that room, I tried to set an example for other women like me who may have started off on par with men while receiving an education, but found themselves unsupported when they tried to follow through.
I saw myself as a symbol of what was indeed possible, although not easy.
Almost twenty-three years since I began my career as a research scientist, I still have days when there are too many unfinished chores and unfulfilled commitments. Why do I continue this juggling act?
Photo by Sai De Silva on Unsplash
When my daughter was in preschool, I remember taking her to my workplace on “Bring your child to work day”. I was fascinated by the concept of taking your children to your workplace, thereby intersecting home and work, the two distinct spaces that formed the continuum of my life.
Given the gender disparity in the sciences, and as a pharmaceutical scientist myself, I wanted my daughter to realize early on that for her, a career in science was very much a possibility.
I remember thinking “It would be worthwhile if I could influence one woman to tread the scientific path, even if that is my own daughter. Who knows, she may even win a Nobel?”
In a few months, she will graduate with a quantitative but non-scientific degree. Does this mean I have failed in my attempt to inspire the person most likely to follow in my academic footsteps? Was I not passionate enough? Should I have pushed her?
When I think back to my own teenage years, I recollect my ambition arising from within myself, with no prompts from family. It was my inherent passion that got me started on my journey and has kept me going despite obstacles that thwart most people.
Perhaps I have not succeeded in getting my child into a scientific career, but I have set an example by my persistence.
As I stand on the sidelines watching my daughter embark on her career, I can only wish her well. And hope she has what it takes to run the long race. In life, as in science, perseverance is a better indicator of success than talent. | https://medium.com/swlh/scientifically-speaking-9223fd728664 | ['Ranjani Rao'] | 2019-10-12 12:00:39.671000+00:00 | ['Women', 'Equality', 'Work Life Integration', 'Science', 'Careers'] |
The Demise of the SIEM Fuels The Rise of Security Data Lake | Exciting attractions lie ahead; read below for the “movie review”. (Image source: http://www.redkid.net/generator/sign.php)
A decade ago, log management was commonly used to capture and retain events for compliance and security use cases. As adversaries and their TTPs grew more sophisticated, simple logging evolved into security information and event management (SIEM), and the power of rule-driven correlation made it possible to turn raw event data into potentially valuable intelligence. Although challenging to implement and tune properly, the ability to find the so-called “needle in the haystack” and identify attacks in progress was a huge step forward.
Today, SIEMs still exist, and the market is largely led by Splunk and IBM Q-Radar. Many customers have finally moved into cloud-native deployments, and are leveraging machine learning and sophisticated behavioral analytics. However, new enterprise deployments are fewer, costs are greater, and — most important — the overall needs of the CISO and the hard-working team in the SOC have changed. These needs have changed because security teams have almost universally recognized that they are losing against the bad guys. The reduced reliance on the SIEM is well underway, along with many other changes. The SIEM is not going away, but its role is changing rapidly, and it has a new partner in the SOC.
Why is the role of the SIEM rapidly diminishing?
1. It is too narrowly focused: the mere collection of security events is no longer sufficient because the aperture on this dataset is too narrow. While there is likely a lot of event data to capture and process in your environment, you are missing out on vast amounts of additional information such as OSINT (open-source intelligence), consumable external threat feeds, malware and IP reputation databases, and even reports from dark web activity. There are endless sources of intelligence, far too many for the architecture of a SIEM.
2. Cost (data explosion + hardware + license costs = bad outcome): with so much infrastructure, both physical and virtual, the amount of information being captured has exploded. Machine-generated data has grown at 50x, while the average security budget grows at 14% y-o-y. The cost to store all of this information makes the SIEM cost-prohibitive. The average cost of a SIEM has skyrocketed to close to $1 million annually, and that is only for license and hardware costs. The economics force teams in the SOC to capture and/or retain less information in an attempt to keep costs in check, which reduces the effectiveness of the SIEM even further. I recently spoke with a SOC team who wanted to query large datasets searching for evidence of fraud, but doing so in Splunk was cost-prohibitive and a slow, arduous process, leading to the effort to explore new approaches.
The results are terrifying — a recent Ponemon Institute survey of almost 600 IT security leaders found that, despite spending an average of $18.4M annually and using an average of 47 products, a whopping 53% of IT security leaders “did not know if their products were even working”. Clearly, a change is in order!
Enter the Security Data Lake
Security-driven data can be dimensional, dynamic, and heterogeneous; thus, data warehouse solutions are less effective in delivering the agility and performance users need. A data lake is considered a subset of a data warehouse; however, in terms of flexibility, it is a major evolution. The data lake is more flexible and supports unstructured and semi-structured data in its native format, which can include log files, feeds, tables, text files, system logs, and more. You can stream all of your security data; none is turned away, and everything will be retained. This can easily be made accessible to a security team at a low cost, for example about $0.03 per GB per month in an S3 bucket. This capability makes the data lake the next evolution of the SIEM.
If you are building a security data lake, you will be able to focus on more strategic activities:
Threat hunting: Sophisticated adversaries know how to hide and evade detection from off-the-shelf security solutions. Highly skilled security teams will follow a trigger — which can be a suspicious IP or an event — and go on the attack to find and remediate the attacker before damage occurs. The experience of the threat hunting team is the most critical element for success; however, the team is highly reliant on vast amounts of threat intelligence so they can cross-reference what they are observing internally with the latest threat intelligence to correlate and detect a real attack.
Sophisticated adversaries know how to hide and evade detection from off the shelf security solutions. Highly skilled security teams will follow a trigger — which can be a suspicious IP or an event — and go on the attack to find and remediate the attacker before damage occurs. The experience of the threat hunting team is the most critical element for success, however, they are highly reliant on vast amounts of threat intelligence so they can cross-reference what they are observing internally, with the latest threat intelligence to correlate and detect a real attack. Data-driven investigations: Whenever suspicious activity is detected, analysts begin an investigation. To be effective, this must be an expeditious process. With the industry average of 47 security products in use in the typical organization, this makes it difficult to gain access to all of the relevant data. However, with a security data lake, you stream all of your reconnaissance into your data lake and eliminate the time-consuming work of collecting logs. The value of the process is to compare newly observed behavior with historical trends, sometimes comparing to datasets spanning 10 years. This would be cost-prohibitive in a traditional SIEM.
The data lake automates the processing of the data when loaded (known as parsing), making it even easier for the security team to focus on the most critical elements of their job, preventing or stopping an attack.
It can also hold large volumes of historical data, often going back a decade, to determine whether a specific pattern is typical or an anomaly.
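To make this concrete, a hunt over a security data lake often boils down to a SQL query across raw logs. The sketch below is purely illustrative; the table and column names are assumptions, not a reference to any specific product:
-- flag source IPs with an unusually high number of failed logins over a time window
SELECT source_ip, COUNT(*) AS failed_logins
FROM auth_logs
WHERE event_type = 'login_failure'
  AND event_date >= '2020-10-01'
GROUP BY source_ip
HAVING COUNT(*) > 100
ORDER BY failed_logins DESC;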
Interesting companies to power your security data lake:
If you are planning on deploying a security data lake or already have, here are three cutting edge companies you should know about. I am not an employee of any of these companies, but I am very familiar with them and believe that each will change our industry in a very meaningful way and can transform your own security data lake initiative.
1. Team Cymru is the most powerful security company you have yet to hear of. They have assembled a global network of sensors that “listen” to IP-based traffic on the internet as it passes through ISPs and can “see”, and therefore know, more than anyone in a typical SOC. They have built the company by selling this data to large, public security companies such as Crowdstrike, FireEye, Microsoft, and now Palo Alto Networks, with last week’s acquisition of Expanse, which they snapped up for a cool $800M. In addition, cutting-edge SOC teams at JPMC and Walmart are embracing what I espouse in this very article and leverage Cymru’s telemetry data feed. Now you can get access to this same data; you will want their 50+ data types and 10+ years of intelligence inside of your data lake to help your team better identify adversaries and bad actors based on traits such as IP or other signatures.
2. Varada.io: The entire value of a security data lake is easy, rapid, and unfettered access to vast amounts of information. It eliminates the need to move and duplicate data and offers the agility and flexibility users demand. As data lakes grow, queries become slower and require extensive data ops to meet business requirements. Cloud storage may be cheap, but compute becomes very expensive quickly as query engines are most often based on full scans. Varada solved this problem by indexing and virtualizing all critical data in any dimension. Data is kept closer to the SOC — on SSD volumes — in its granular form so that data consumers can leverage the ultimate flexibility in running any query whenever they need. The benefit is a query response time up to 100x faster at a much cheaper rate by avoiding time-consuming full scans. This enables workloads such as the search for attack indicators, post-incident investigation, integrity monitoring, and threat hunting. In short, Varada can help your team gain access to the data they need, get consistent and interactive performance, and stop worrying about managing usage costs or dealing with data ops.
3. Panther: Snowflake is a wildly popular data platform primarily focused on mid-market to enterprise departmental use. It is not a SIEM and has no security capabilities. Along came engineers from AWS and Airbnb who created Panther, an open-source security platform for threat detection and investigations. The company recently connected Panther with Snowflake and is able to join data between the two platforms to make Snowflake a “next-generation SIEM” or — perhaps better positioning — evolve Snowflake into a security data lake. It is still a very new solution, but it’s a cool idea with a lot of promise for Snowflake customers.
In summary, the average security organization spends $18M annually and has been largely ineffective at preventing breaches, IP theft, and data loss. The fragmented approach has not worked. The security data lake, while not a simple, off-the-shelf approach, centralizes all of your critical threat and event data in a large, central repository with simple access. It can still leverage an existing SIEM, which may use correlation, machine learning algorithms and even AI to detect fraud by evaluating patterns and then triggering alerts. However it is configured, the security data lake is an exciting step you should be considering, along with the three innovative companies I mentioned in this article.
I would love to hear your thoughts about how a security data lake can help you and your team and what it means for your existing SIEM investment. You can reach out directly at [email protected] or https://www.linkedin.com/in/schoenbaum.
If you’d like to read more of my articles — focused on cybersecurity and advice for investors and executives on how to improve company go-to-market can be found here: https://schoenbaum.medium.com/ | https://towardsdatascience.com/the-demise-of-the-siem-fuels-the-rise-of-security-data-lake-dace0df83306 | ['Dan Schoenbaum'] | 2020-11-19 04:23:56.659000+00:00 | ['Siem', 'Ciso', 'Data Lake', 'Data Science', 'Security'] |
The Friday Evening of Life | Image: Boulderpress.com
I had to smile when I learned from a friend that he believed me to be a peaceful, thoughtful man, and was surprised to hear I was taking part in a protest march at my age. You’re not even in your own country! Haven’t you been involved in enough shit for one lifetime?
So, I smiled, because yes, I’ve fought, screamed, and caused trouble all round the world. I’ve been hunted like an animal, and thrown into several different prisons with varying degrees of hostility. They didn’t throw me there because I was thoughtful, or peaceful, so in truth, my friend is right, I’m a broken man.
Growing old, I had lost the will to fight anymore. I was never fearless, nor could I manage to turn away from a fight. That, at least, stems from my childhood experiences. I never got used to taking a beating, but neither did I turn and run. In the end, the other kids followed me. Not because I was a bully, but because I wouldn’t let anyone bully them.
But I’ve long traveled away from youth.
Reflecting, I’m mad sometimes for all the chances I never took, the places not visited, the midnights that passed unnoticed. That doesn’t mean I wouldn’t take a chance again, or stay awake long enough to watch the woman I love fall asleep. I ran to be elusive. More often than not, I missed the things worth stopping for.
I wondered, oftentimes, knowing my lifestyle, if I would ever meet the turn of the century, let alone be around in 2020. Looking back, I wonder if what I wanted was ever attainable? Or that in reality it was never out there anyway.
I felt a new attention today, caught almost, as if the need to be off and running still hasn’t diminished. That getting up in the morning and joining a protest march in another country felt wholly worthwhile.
As I look out at the world today, everything going on that I don’t quite understand, I might be forgiven for thinking myself a pitiable soul. I’m not a very bright bean — I see this very well as more and more happens in the world, even among my friends.
I see this very well, now that I am too old to be a clever boy — but I’ve got something. And that something is an incomparable sense of what is important to me in the world. Equal opportunity for everyone. An end to racism. To cast a vote for a government the rest of the world can look to and respect for their moral leadership.
But on a personal level, I was always looking for something else, something I alone could control. It took a while, but I have come to learn that this thing is nothing more than the simplicity of kindness and honesty and family. We don’t all have to find strength in fighting through the wilderness of absurdity to arrive at a straightforward and undeniable fact in this world— that family is love.
Given a second chance, I did learn it in the end. I was raised by a family not always accepting, not always tolerant, but a family that was a fighting force for love. We were not tongueless, we inhabited each other’s lives in ways too difficult to explain. Family can sometimes seem like an ugly language. It can make us want to hide behind walls. But it is a language that, once learned, brings great insight into togetherness.
I am about to accept two newly born grandchildren into my life and into a conflicted divided society.
When I was born, I was not welcome. I was not the result of a love story. I was not intended for this world, and that fact has left me — in so many ways, mostly creative —with wanderlust in a mind as dark as a self-confinement cell.
I’ve made egregious mistakes in my life, lied, bullied, fought everything except to face the truth of my life, even with the help of people who truly cared and loved me. It didn’t matter what came, how, or where, I never wanted to be found.
These two grandchildren, and any that follow, will have the most wonderful lives ahead of them, not because of fortune or fate, but because they entered this world belonging.
With each, I will learn a new language; one that will build as these children grow, with building blocks, toy towns, Christmas signs and Santa Clauses, with sleds piled high, marvelous worlds, changing then, to the language of geographical maps and travel, new experiences, and when every language is taught, it will be time to learn the language they will teach.
If life were a week long, I sit here on a Friday evening, never having stayed home enough. The shutters are almost down. The road I have traversed has never been unlit, even on the outskirts, but full of noise and lights and people. There have been carnivals, merry-go-rounds, dippers that sometimes dipped deep — but rose high.
I cannot wait to be a grandparent. I have so much to say, to share, to close out a chapter with so much joy.
The last part is where I am at. It is simplistic enough.
It is family. It is everything.
So I wake up, and march with others feeling the same way I do. The world has to be better for everyone — your world and the world of my grandchildren. | https://medium.com/literally-literary/the-friday-evening-of-life-62cebecb6a48 | ['Harry Hogg'] | 2020-06-08 05:49:44.091000+00:00 | ['Grandchildren', 'Nonfiction', 'Love', 'Relationships', 'Life'] |
Confessions of an Avid Reader | My childhood Books! (some of them)
I have lived without TV many times in my life. Either I didn’t have a television or I didn’t have cable, which never seemed worth the expense to me. When DVDs became available, I started a movie collection for times I wanted visual entertainment. Back in the days when you could still get something to fuzz in on your screen using an antenna, I would occasionally watch something on the four channels.
I admit to being able to get wrapped up in great movies and captivating television shows, but ultimately, if I went to that proverbial desert island without video, I’d survive.
However, if you plopped me on that gorgeous island with a stocked cabana, handsome bartender (ssh, don’t tell my husband), and all the scotch I ever wanted to drink (and the ability to stay sober), if you don’t also provide a vast library, I’ll opt for returning to the mainland in a nanosecond.
I have, you see, never, ever been without a book.
When we were children, the most exciting part of the school year was when the Scholastic Book truck would arrive. Each time, I marveled that our tiny, turn of the (last) century, stone school in the middle of rural western Pennsylvania mining country continued to be found by the driver. He must have consulted a map in an atlas of the state!
Our parents allowed us three kids to pick one book each, so the decision was agonizing. What if that truck didn’t find us the next time? Which one would I choose? Would it be poetry, adventure, riddles?
(some of the) author’s childhood books
I’ve gotta make an aside: I wrote that last sentence then traipsed downstairs to see what books I was reading as a grade schooler. Here are the categories:
Disney books to movies
Poetry
Westerns
Assorted single topics
But the greatest number of books? Mysteries! That is so not a surprise! Wanna know what some of them are? Sure you do, I know you do, okay, here they are:
The Midnight Beast (original title Lion at Large) by Richard Parker
Mystery of the Witches’ Bridge (original title, duh, The Witches’ Bridge) by Barbee Oliver Carleton
The Swiss Family Robinson (original title The Swiss Family Robinson — just seeing if you’re paying attention) by Johann David Wyss
The Lost World by Sir Arthur Conan Doyle — think my 1965 paperback edition of this classic is worth anything? No, I suppose not.
Magic Elizabeth by Norma Kassirer. Here’s one for you, when the internet first started — yes, there was a time, Millennials, where the world wasn’t at our fingertips — I tracked down Norma and was able to thank her for such a fun book.
Okay, stop pressuring me. Yes, I will add them to my to-re-read shelf and do so before the end of the year. Those kids’ books will make an interesting diversion to my GoodReads Challenge, eh?
Back to the truck
[No, wait, I have one more side note for you. I find it quite hilarious that I have a book from 1967, when I was eight, entitled Too Much Noise. I am sound-sensitive. I sleep with a fan going at home and earplugs anytime I’m not in our abode. Loud noises hurt my ears and children squealing at that wretchedly high pitch of theirs equals inch-long nails on a chalkboard. Mom always said I was the worst sleeper — guess this mini tome backs that statement!]
Now seriously, back to the truck and agonizing decisions.
Believing in the less-is-more mentality, that frugalness restriction on behalf of our parents simply ensured that our books were precious and treated with tender care. The four of us (there’s a sister 10 years younger than me, so she wasn’t part of the truck experience) still have our childhood books.
Books remain like gold to me and I treasure them.
Right now I’m reading:
William Shatner’s Up Till Now. Don’t miss it if you want to laugh out loud when you’re supposed to be nodding off to sleep. He’s uniquely hilarious.
John Grisham’s The Innocent Man. I’ve been hit and miss with Grisham over his long career. This one, his first non-fiction, is a page turner and quite disturbing.
Pierce O’Donnell’s In Time of War: Hitler’s Terrorist Attack on America. Did you know the Germans once landed submarines near Long Island and Florida, letting out several would-be terrorists? I didn’t.
Antony Beevor’s The Second World War. It took a year to get through Beevor’s D-Day: The Battle for Normandy, but was so worth it that I’m determined to finish this saga as well.
There’s also a little cozy in there I started last night, but I have a feeling I won’t finish, so I’m not providing the name. It’s one of those books that has, for me, a pedestrian writing style — the character went to work, she went to the store, she went home. I scream out, “What’s the point of the story?” Then I calmly go to Scrivener (writing software), open my novels, and read the first chapters and think: Yes, at least you would know what my books are about. You may not like them, but you would get the gist of the story.
There you have it — four books and a Bible study (Women of the Bible). That’s how I roll. In the midst of these, I stopped to read Preston & Child’s, City of Endless Night.
Sunday, it was 86 degrees Fahrenheit with correspondingly thick humidity, so my typical afternoon doing yard work was out of the question. I flung myself into my favorite easy chair, cool glass of tea at hand, and indulged. In this thrilling, speedy novel, they redeemed themselves from the last two flops. Whew.
a favorite reading spot on a cooler afternoon
Many of my friends are disciplined in their reading. They actually read one book at a time! Can you imagine the self-control they must have to do that! Astounding! Alas, I have tried to stem my yearnings for devouring various genres at the same time. But with the multiple selves I have inside, it seems to work best if I keep at least one book going for serious me, one for laughing me, and one for historian me. I’ve not yet had a problem keeping the stories straight — I don’t think I’m in any danger of suddenly placing Captain Kirk in WWII Europe. So what the heck, I keep at it.
Then I go into elimination mode.
My household rampages to clear out stuff usually begins with the easiest to part from: clothes we don’t wear, shoes that no longer have any support to them, coffee cups with chips in the rim. These are the easy things — they are donated or tossed, depending on their overall condition. My trash, your treasure, right?
Paperwork is next. I’ve moved so many times over the decades that even this is not that big of a deal anymore. I delightedly shred like an Enron employee — having either chosen to scan that item or have it simply disappear from my life. My load lightens.
I love to purge the unnecessary and the redundant, removing the heaviness of too many objects in my life — but watch out, I occasionally purge people.
Then it comes time to go through bookcases.
Ah, what agony! What indecision! What pain!
Leave room for more! author pic
When I left Red Lodge, Montana to move back to Pennsylvania, I assessed the loads stacked on three bookcases and donated or sold over 300 volumes. It hurt. But I lived. I’ve moved five times since I got to Pittsburgh and not having those small, but ever-so-heavy boxes hiding my perfect bound treasures has eased the process.
I’ve been in this home with my husband for ten years. Collecting creeps up on us when we rarely pack up lock, stock, and barrel and go elsewhere. Formerly, each spring and fall, I reviewed my collection, removing book after book from the shelves and assessing each. What ones won’t I read again, what ones haven’t I yet read, what ones are like beloved friends I couldn’t possibly expel from my life?
My annual book-purge evolved into a continual process with an empty box always sitting by one bookcase. It’s there to remind me when I finish a physical book to pause and think: Will I re-read this? Is there someone I want to pass it along to? Both answers no? Into the box it goes, designated for the Mt. Lebanon Library’s used bookstore.
Then there are e-books.
I love my Kindle because I can get a book right NOW, and I can take a hundred books on vacation and not have to pay to check their own suitcase. I love highlighting a word to learn or confirm its meaning. I love that the Kindle app allows for creating collections. It was also a big help when you could start deleting books. That was an organizational purge I longed to make. There are, sadly, some truly bad books out there and although I’ll try reading almost anything, sometimes they have to go.
I started this year (as I did last year) with a plan.
My goal was to read every book on hand — physical and Kindle — without buying any new ones. By December (and now) I realized I failed. I mean, Michael Connelly releases a new Bosch each autumn and John Sandford is sure to unleash a new Virgil Flowers — please, won’t you, John?
Sigh, I may never get ahead on this goal, but I sure will entertain myself trying.
And you?
Are you a crazed and avid reader? Do you read more than one book at a time in different genres? Do you have one genre you stick to? Do you — horrors! — skip reading entirely and take to the screen instead?
Tell me I’m not alone in my desire to never be without a book. | https://rosemarygriffith.medium.com/confessions-of-an-avid-reader-a9d359fa4040 | ['Rose Mary Griffith'] | 2020-10-12 17:45:44.422000+00:00 | ['Reading', 'Books', 'Fiction', 'Parenting'] |
The Moon Land | The Moon Land
A poem about being a poet
Photo by karolina gac on Unsplash
intrigue me,
write that story which is lost somewhere
or get me that book which is abandoned in the library.
help me dream, because I am lost.
take me to that moonland which is somewhere.
tell me it’s okay to be who I am
a loner, an artist, a thinker, a poet
it’s not easy to hold so many pieces.
tell me it’s okay to be sad in the new world.
I know you don’t read me
for a few minutes just be silent with me,
love me. | https://medium.com/scribe/the-moon-land-19c5b828f615 | ['Priyanka Srivastava'] | 2020-09-27 12:06:25.027000+00:00 | ['Poetry', 'Writers On Writing', 'Sadness', 'Anxiety', 'Writing'] |
Is your data yours? Privacy in todays interconnected world. | Photo by Alex on Unsplash
Data privacy & civil right
Is your data yours? Privacy in todays interconnected world.
Two faces of mass surveillance.
Since the beginning of the 21st century, we have witnessed an increasing trend toward the strengthening of state surveillance. Due to events beyond citizens’ control, fundamental rights have been ceded to the government for the benefit of the community. The state surveillance system was strengthened from 2001, after the attack on the Twin Towers. This new security theory was called the “Terrorism Surveillance State” and was applied internationally, mainly in the United States. This surveillance state relied heavily on strongly controlling everything that happened within borders, which in some cases exceeded the limits that separate surveillance from xenophobia (or put into question the individual freedom of citizens). Border control was strengthened, and extreme immigration measures and airport controls were reinforced with material prohibitions based on past terrorism attempts. Such prohibitions include not allowing passengers to enter the main cabin with liquids greater than 100 ml, requiring them to take off their shoes, and placing them in highly accurate full-body scanners at the security check stage.
However, the emergence of data collection and the cross-referencing of data from different sources was different from these previous linear actions of prohibition and control, showing greater effectiveness in preventing acts of terrorism. The seemingly endless race to collect data to prevent such acts sparked major political debates that questioned the scope of these measures as opposed to the basic values of freedom. In other words, the balance of the sovereigns — the citizens — seemed to be tipped against them. After visualizing the imbalance, intellectuals began to question and discuss the benefits and shortcomings of state surveillance. Dr Reinhard Kreissl’s arguments in his paper “Terrorism, mass surveillance and civil rights” open the debate on privacy, civil rights and security: Should civil rights and privacy be sacrificed and traded in for more security? Will more surveillance and more data for the law enforcement agencies significantly increase the level of security in a society?
Questions raised by Dr Reinhard Kreissl not only apply specifically to mass state surveillance of terrorism (note that we added the term “mass” after the appearance of data as a source of surveillance), but also apply to the present day in the midst of the global crisis caused by Covid-19. In my opinion, we are beginning to witness the legitimization of “pandemic mass surveillance”, renewing the term used by Dr. Kreissl, but not his questions about the invasion of privacy, and now, twenty years later, we are asking ourselves the same questions.
What is “mass surveillance” about?
In the year 2020, technology has advanced by leaps and bounds compared to the year 2001. Access to data has multiplied exponentially and the massive — and necessary — use of the Internet has made our data available to many more people than we would like. Therefore, the data that we share is not private and if we add the cross-checking of data, it is not anonymous either.
Photo by ev on Unsplash
The levels of interconnection of the data are extraordinary, and you will probably be nervous to learn that it is not necessary for you to give your name, surname or ID for someone to identify you with a very low margin of error. The Markup shows us some clear examples of the (de)anonymization of data that we believe to be anonymous. Laboratories, governments, statisticians, etc. have access to medical data for research, traceability and tracking disease evolution. These records are shared without name, surname and ID number, so at first glance they seem anonymous, right? But what happens if we cross them with census data (also anonymous) and with data from any online subscription service you’ve got? VOILA! With a very low margin of error, a fairly accurate profile can be made of you, one that includes information such as your ID number, first and last name, your political preferences, your address and any other information you might think of. It is here that a major problem with privacy arises. The data that we constantly generate, when paired with different sources that hold different and complementary datasets about one individual, is no longer private.
The health crisis, as we mentioned earlier, has shown the emergence of a new type of surveillance: pandemic surveillance. To some, it may be a great mechanism for alleviating the virus. For others, it is a new excuse for governments to justify excessive control over citizens. However, the real debate arises when we look at the issue from a macro perspective. What regulation is needed so that this issue does not become an excuse to prolong control? We agree that Big Data is a great tool for exceptional situations such as terrorism and pandemics, among others. This acceptance does not mean that, outside these particular conditions, excessive surveillance remains acceptable. Privacy is perhaps one of the greatest pillars of freedom; from John Stuart Mill’s point of view it is a fundamental pillar, as long as it does not impact negatively on the freedom of others. Following this idea, we can put under the magnifying glass all those forms of excessive surveillance that are not found under exceptional circumstances such as the pandemic that is currently underway.
Yascha Mounk, a specialist in political theory and democracy, has several times questioned state surveillance without control and regulation mechanisms. Not only does he view it as a threat to privacy, but he also warns of the damage it does to liberal democracy. Mounk raises three areas in which we must work to attain responsible surveillance:
● All the measures taken in particular situations should be temporary and never indefinite.
● Additional mechanisms of democratic and institutional control, valid within the time horizon defined for exceptional measures, are needed.
● All measures taken must be specifically aimed at combating a particular end: terrorism, pandemic, etc.
The areas raised by Mounk are clear and reasonable in terms of the protection of freedom. Decisions that negatively affect civil rights must have a beginning and an end. And if they do not, they should be duly accepted by the citizens. Fighting a pandemic or terrorism is an end in itself, and surely a large majority is willing to contribute to ending these undesirable situations. However, guarantees must also be demanded for this voluntary surrender of freedom. At the end of the day, it is our freedom that is at stake. | https://medium.com/discourse/is-your-data-yours-privacy-in-todays-interconnected-world-ad600811cba5 | ['Gui Bentley'] | 2020-05-07 12:26:00.972000+00:00 | ['Freedom', 'Surveillance', 'Politics', 'Data', 'Society'] |
Physicians are working like Robots for Robots. But, convincingly, shouldn’t it be the other way around? | It is not a surreptitious truth that anyone who pursues a professional life must dedicate part, if not all, of their lifespan to learning the required skills. Moreover, pronounced, valid to those seeking the medical profession. Long lore period, sleepless nights, and stressful days are very well known to every medical doctor. That does not only hold in the United States but is also valid across the globe. That is why the medical profession is considered a lifestyle more so than just a career. It is the lifetime investment of every physician. It is precisely based on the latter fact that most medical doctors, by omission, only know what they have invested their lives in!
Indeed, some physicians may step out of their comfort zone and pursue alternate avenues to adapt to their ever-changing domain, i.e., the Healthcare industry. However, the majority take a passive stance, particularly regarding the information technology of their industry.
Physicians working like Robots
Experts in every industry logically and by virtue have taken control of the leadership within their respective domains. They have harnessed the best of what technology can offer to their needs by taking the lead on business requirements, design validation, and quality assurance.
Unfortunately, we cannot say the same for physicians. Health information technology is currently driving the physician practice route towards an unfamiliar sphere. Information technology and its contributors are merely dictating how physicians must practice and how much they should earn. In other words, the value of their work is not decided between them and the patient but merely determined by data analysts’ algorithms. The 21st-century physicians are the disciples of mathematical algorithms. Working like a Robot is the expected attitude of physicians in developing countries today.
Physicians, by nature, are skilled craft persons. They are proud of their capacity to customize the care for each of their patients. But peculiarly, they can be reluctant to simplify their efforts or make them more error-proof if doing so jeopardizes this artistic part. Physicians are also strong analytical thinkers and use these skills in their regular practice. When an idea for change surfaces, they are adept at what De Bono calls “black hat thinking,” an intellectual style that emphasizes criticism of the proposed solution. This style is useful, but it will smother innovation and change if it governs every meeting.
The ultimate consequence of this black-hat attitude has not worked in physicians’ interest. It has created a vacuum for other industry leaders, notably the information and insurance industries, to disrupt the healthcare domain. Doing so has ushered physicians into a follower stance rather than a leadership role, as if physicians should simply carry out the pre-programmed algorithms.
Physicians working for Robots
Health information and big data have been the pinnacle of developing healthcare technology. The development is moving faster than the healthcare community can keep up with, establishing a chasm that invites makeshift scrutiny and leaves gaps in the healthcare vacuum. The skill of deciphering information into the right diagnosis and furnishing the corresponding treatment alternative is an everyday habit for all physicians. It involves gathering the pertinent data for each patient, integrating it with pre-existing knowledge, constructing a clinical judgment, and initiating the most suitable treatment in line with the patient’s expectations and needs. The clinical assessment pertains to the decision-making process, also called clinical reasoning. It allows clinicians to reach a clinical conclusion on treating a disease in an individual patient contingent on objective findings and the patient’s subjective understanding. Today, data industries are contemplating strategies to replace or, at the very least, replicate physician clinical practice.
Artificial Intelligence needs an algorithm to diagnose, just like a physician needs medical knowledge. Developments in modern data analytics and computational leverage grant the recourse to obtain new perspicacity and transport data along with subsidized value to clinical practice in real-time. Such systems are denominated as “Clinical Decision Support” (CDS).
The computer doesn’t go to medical school. Still, with the inception of new machine learning capabilities, unstructured data is becoming more usable than ever, particularly with respect to deep learning systems shadowing and learning physician clinical judgment. Therefore, while physicians opt into being followers, it is essential to realize that they are not merely working like robots; they are also working for them, as the instrument of the machines’ deep learning education.
Robots working for Physicians
Healthcare sentiments need to change!
Medical practice requires a meaningful transition from a one hundred percent rigid, purely science-based remedy to a domain of compassion that embraces the finest scientific and technological inventions within the confines of its core integrity. Medical science is about treating patients as individuals, not rectifying a set of query outcomes and procedures. Technological innovations such as data analytics, pharma, imaging, and biotechnologies are merely complementary scientific grounds to medicine, not vice versa. Medicine is the science practiced between a physician and one patient. That contrasts with the cookie-cutter medicine that traditional population health and overreliance on technology would offer. In the present epoch, technology has become a part of human existence. It can bring comfort to our lives and deliver safety and efficiency to our patients. But we will be heading toward tragic outcomes if the technology cannot be governed or adapted by the industry’s professionals. Thus, technology has to be developed under the oversight of industry experts, constructed to make life simpler for patients and clinicians alike. It should never, for any reason, be otherwise. | https://medium.com/technology-hits/physicians-are-working-like-robots-for-robots-fc7e318d5333 | ['Adam Tabriz'] | 2020-12-25 02:41:53.077000+00:00 | ['Robots', 'Medical Practice', 'Data Science', 'Artificial Intelligence', 'Medicine']
Contribute to No Mud No Lotus | Contribute to No Mud No Lotus
Slow-cooked words for hungry minds
Photo by DAVIDCOHEN on Unsplash
We are interested in evocative, thought-provoking, mind-pleasing words: short to medium-length stories (about 500 to 2,000 words) on subjects of travel, writing, or inspiring essays. Submit well-crafted, fully edited and copyedited stories to us at [email protected]. | https://medium.com/nomudnolotus-writer/contribute-to-no-mud-no-lotus-ebc4e0540c12 | ['Camille Cusumano'] | 2020-10-06 21:51:39.930000+00:00 | ['Book Review', 'Nonfiction', 'Essay Writing', 'Creative Writing', 'Travel'] |
Top Hive Commands with Examples in HQL | Hive Commands — Edureka
What is Hive?
Apache Hive is a data warehouse system built to work on top of Hadoop. It is used for querying and managing large datasets residing in distributed storage. Before becoming an open-source project of Apache Hadoop, Hive originated at Facebook. It provides a mechanism to project structure onto the data in Hadoop and to query that data using a SQL-like language called HiveQL (HQL).
Hive is used because the tables in Hive are similar to tables in a relational database. If you are familiar with SQL, it’s a cakewalk. Many users can simultaneously query the data using Hive-QL.
What is HQL?
Hive defines a simple SQL-like query language for querying and managing large datasets, called Hive-QL (HQL). It’s easy to use if you’re familiar with SQL. Hive also allows programmers who are familiar with the MapReduce framework to plug in custom mappers and reducers to perform more sophisticated analysis.
Uses of Hive:
1. Apache Hive operates on data residing in distributed storage.
2. Hive provides tools to enable easy data extract/transform/load (ETL).
3. It imposes structure on a variety of data formats.
4. By using Hive, we can access files stored in the Hadoop Distributed File System (HDFS) or in other data storage systems such as Apache HBase.
Limitations of Hive:
* Hive is not designed for online transaction processing (OLTP); it is only used for online analytical processing.
* Hive supports overwriting or appending data, but not updates and deletes.
* In Hive, subqueries are not supported.
Why is Hive used in spite of Pig?
The following are the reasons why Hive is used in spite of Pig’s availability:
Hive-QL is a declarative language like SQL; Pig Latin is a data flow language.
Pig: a data-flow language and environment for exploring very large datasets.
Hive: a distributed data warehouse.
Components of Hive:
Metastore:
Hive stores the schema of the Hive tables in a Hive metastore. The metastore holds all the information about the tables and partitions that are in the warehouse. By default, the metastore runs in the same process as the Hive service, and the default metastore database is Derby.
SerDe :
Serializer/Deserializer (SerDe) gives instructions to Hive on how to process a record.
Hive Commands :
Data Definition Language (DDL )
DDL statements are used to build and modify the tables and other objects in the database.
CREATE, DROP, TRUNCATE, ALTER, SHOW, DESCRIBE Statements.
Go to the Hive shell by giving the command sudo hive, and enter the command ‘create database <database name>’ to create a new database in Hive.
To list out the databases in Hive warehouse, enter the command ‘ show databases’.
The database is created in the default location of the Hive warehouse. In Cloudera, Hive databases are stored in /user/hive/warehouse.
The command to use the database is USE <data base name>
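As a quick illustration, the sequence of commands in the Hive shell would look like this (the database name ‘retail’ is only an example):
-- create the new database
create database retail;
-- list all databases in the warehouse
show databases;
-- switch to the new database
use retail;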
Copy the input data to HDFS from local by using the copy From Local command.
When we create a table in Hive, it is created in the default location of the Hive warehouse (“/user/hive/warehouse”). After creation of the table, we can move the data from HDFS into the Hive table.
The following command creates a table within the location “/user/hive/warehouse/retail.db”.
Note : retail.db is the database created in the Hive warehouse.
Describe provides information about the schema of the table.
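For instance, a simple transactions table could be created and inspected as follows; the table and column names here are only assumptions for illustration:
-- an illustrative transactions table
create table txnrecords (txnno INT, txndate STRING, custno INT, amount DOUBLE, category STRING, product STRING, city STRING, state STRING, spendby STRING)
row format delimited
fields terminated by ','
stored as textfile;
-- display the schema of the new table
describe txnrecords;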
Data Manipulation Language (DML )
DML statements are used to retrieve, store, modify, delete, insert and update data in the database.
Example :
LOAD, INSERT Statements.
Syntax :
LOAD data <LOCAL> inpath <file path> into table [tablename]
The LOAD operation is used to move the data into the corresponding Hive table. If the keyword LOCAL is specified, the load command takes a local file system path. If the keyword LOCAL is not specified, we have to use the HDFS path of the file.
Here are some examples for the LOAD data LOCAL command.
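A minimal sketch, assuming a local file named txn.csv and the illustrative txnrecords table from above (the file paths are assumptions):
-- load from the local file system (the file is copied into the warehouse)
LOAD DATA LOCAL INPATH '/home/cloudera/txn.csv' INTO TABLE txnrecords;
-- load from HDFS instead (no LOCAL keyword); the source file is moved into the table's directory
LOAD DATA INPATH '/user/hive/input/txn.csv' INTO TABLE txnrecords;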
After loading the data into the Hive table, we can apply Data Manipulation Statements or aggregate functions to retrieve the data.
Example to count number of records:
The count aggregate function is used to count the total number of records in a table.
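For example, with the illustrative txnrecords table from above:
-- total number of rows in the table
SELECT COUNT(*) FROM txnrecords;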
‘create external’ Table :
The create external keyword is used to create a table and provide a location where the table’s data will reside, so that Hive does not use its default location for this table. An EXTERNAL table points to any HDFS location for its storage, rather than the default storage.
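A minimal sketch, in which the HDFS location and column names are assumptions:
-- dropping this table later removes only the metadata, not the files under LOCATION
CREATE EXTERNAL TABLE ext_txnrecords (txnno INT, custno INT, amount DOUBLE, category STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hive/external/txnrecords';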
Insert Command:
The insert command is used to load data into a Hive table. Inserts can be done into a table or a partition.
INSERT OVERWRITE is used to overwrite the existing data in the table or partition.
INSERT INTO is used to append data to the existing data in a table. (Note: the INSERT INTO syntax works from version 0.8 onward.)
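For example, assuming a target table txnsummary with a compatible schema already exists:
-- replace whatever is already in the target table
INSERT OVERWRITE TABLE txnsummary SELECT * FROM txnrecords;
-- append to the existing data instead (Hive 0.8 and later)
INSERT INTO TABLE txnsummary SELECT * FROM txnrecords;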
Example for ‘Partitioned By’ and ‘Clustered By’ Command :
‘Partitioned By’ is used to divide the table into partitions, and a partition can be further divided into buckets by using the ‘Clustered By’ command.
When we insert the data, Hive may throw errors if the dynamic partition mode is strict and dynamic partitioning is not enabled. So we need to set the following parameters in the Hive shell.
set hive.exec.dynamic.partition=true;
This enables dynamic partitions; by default, it is false.
set hive.exec.dynamic.partition.mode=nonstrict;
Partitioning is done by the category, and each partition can be divided into buckets by using the ‘Clustered By’ command.
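Putting the pieces together, here is a sketch of a partitioned, bucketed table and a dynamic-partition insert; the table name, columns, and bucket count are illustrative, and the two set commands above are assumed to have been run:
-- a partitioned, bucketed table
CREATE TABLE txn_by_category (txnno INT, custno INT, amount DOUBLE)
PARTITIONED BY (category STRING)
CLUSTERED BY (custno) INTO 4 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
-- older Hive releases may also need: set hive.enforce.bucketing=true;
-- dynamic-partition insert: the partition column comes last in the SELECT list
INSERT OVERWRITE TABLE txn_by_category PARTITION (category)
SELECT txnno, custno, amount, category FROM txnrecords;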
The ‘Drop Table’ statement deletes the data and metadata for a table. In the case of external tables, only the metadata is deleted.
Load the data using Load data local inpath ‘aru.txt’ into table employee1, and then check the employee1 table by using the Select * from employee1 command.
To count the number of records in the table, use Select count(*) from txnrecords;
Aggregation :
Select count (DISTINCT category) from tablename;
This command will count the different categories of the ‘cate’ table. Here there are 3 different categories.
Suppose there is another table ‘cate’ where f1 is the field name of the category.
Grouping :
The GROUP BY command is used to group the result set by one or more columns.
Select category, sum(amount) from txnrecords group by category;
It calculates the total amount for each category.
The result from one table can be stored into another table.
Create table newtablename as select * from oldtablename;
Join Command :
Here one more table is created with the name ‘mailid’.
Join Operation:
A join operation is performed to combine fields from two tables by using values common to each.
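For instance, an inner join between the two tables might look like this; the custno and mail column names are assumptions for illustration:
-- match each transaction with the customer's mail address
SELECT t.custno, t.amount, m.mail
FROM txnrecords t JOIN mailid m ON (t.custno = m.custno);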
Left Outer Join:
The result of a left outer join (or simply left join) for tables A and B always contains all records of the “left” table (A), even if the join-condition does not find any matching record in the “right” table (B).
Right Outer Join:
A right outer join (or right join) closely resembles a left outer join, except with the treatment of the tables reversed. Every row from the “right” table (B) will appear in the joined table at least once.
Full Join:
The joined table will contain all records from both tables, and fill in NULLs for missing matches on either side.
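The outer variants follow the same pattern; only the join keyword changes (again a sketch with assumed column names):
-- keep every row from txnrecords, even without a matching mailid row
SELECT t.custno, m.mail FROM txnrecords t LEFT OUTER JOIN mailid m ON (t.custno = m.custno);
-- keep every row from mailid instead
SELECT t.custno, m.mail FROM txnrecords t RIGHT OUTER JOIN mailid m ON (t.custno = m.custno);
-- keep rows from both sides, filling in NULLs where there is no match
SELECT t.custno, m.mail FROM txnrecords t FULL OUTER JOIN mailid m ON (t.custno = m.custno);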
Once done with hive we can use quit command to exit from the hive shell.
With this, we come to an end to this article. I hope you found this article informative.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Big data. | https://medium.com/edureka/hive-commands-b70045a5693a | ['Shubham Sinha'] | 2020-05-08 07:53:53.897000+00:00 | ['Hql', 'Hive Commands', 'Big Data', 'Hive', 'Big Data Analytics'] |
This 6-Month-Old Baby Published her First Novel | Eugenia C. Gritts is the youngest person in history to be a published author. According to her family, she’s been a prolific writer since before she was even born. In fact, her early drafts were communicated through her mother’s stomach and written in Morse code.
The novel, More Milk Please, has just been released by a small publishing house in their small town of Dildo in Newfoundland. As such, Eugenia will be going on a small book tour across North America to promote the immense struggles newborns face on a daily basis.
She dives into the serious problems for newborns. Words, for one, are surprisingly difficult. Eugenia understands them, but her tongue muscles are too weak to make the correct sounds which she finds incredibly frustrating. She’s trying to say things like “please bestow the soother upon me.” Instead, people just hear “goo goo ga ga.”
In her novel, she also touches on other daily difficulties such as trying to find a nipple under a dark towel or wiggling as much as possible while her mother wipes all of the poop out from between her butt cheeks.
There are also plenty of light moments in the novel. Eugenia tells us about her fondest memories like giggling at strangers in the grocery store. Once, she even lost her nose when her grandpa was visiting.
Eugenia C. Gritts is already quite accomplished and working on a second novel about struggling to fart. It’s clear she’ll be rivaling Shakespeare with her prolific writing in no time.
If this 6-month-old baby can be published without any help from anyone (except her wealthy parents, well-connected agent, and expensive editor) then any of us can! | https://medium.com/the-haven/this-6-month-old-baby-published-her-first-novel-7ec718bd4891 | ['Victoria A. Fraser'] | 2020-12-22 10:48:43.464000+00:00 | ['Satire', 'Writing', 'Humour', 'Comedy', 'Humor'] |
Automation in IT | The idea of automated everything gave rise to the industrial revolution and, evolving along with the world, has reached the current stage of the post-industrial society with even more processes and services to be automated.
How can you automate your business processes in a modern company? What tools should you choose and what kind of automation would match your needs?
Your process automation strategy should be tailored to your company, whether you choose end-to-end Business Process Automation or limit it to the automatic registration of sold items.
Since we are an IT company, our passion is IT automation and its evolution. But, first, what is IT Automation?
IT Process Automation
IT process automation (ITPA), or runbook automation (RBA), is the ability to orchestrate and integrate tools, people and processes through workflow.
Runbook Automation is a category of products that provide the connective tissue between lower-level management applications for networks, systems, applications, etc., and higher-level tools such as a service desk.
ITPA solutions are designed to automate IT processes so that they could improve operational efficiency, mitigate operational risk, reduce costs, offset the negative effects of complexity and improve standards enforcement. They are supposed to coordinate the work, sequence the timing of subordinate workflow, and deliver multiple tasks and services across IT groups and models.
What is an IT Process?
An IT process is a set of tasks (not one action) that an IT manager or operations staff member takes to deal with a certain situation or in response to operational requirements. The idea of an IT process specifically excludes business processes and focuses only on the IT-related spheres, such as preparation and execution of backup procedures, change management including audit capabilities, or establishing new user resources.
What IT Processes Can We Automate?
It is often said that the IT department is usually the least automated division in a company and that IT processes cannot be automated due to their complexity and ad-hoc nature. It is more or less a myth that can be overcome by spending a bit more effort and gaining a better understanding of what can be automated and in what way. Some of the areas where automation can add more value include:
Change management;
Configuration management;
Provisioning;
Routine maintenance;
Identity & access management;
Backup and restoration
Disaster recovery
Data movement
Most of these tasks are repetitive and consume many hours from IT teams. In fact, some of these tasks are relatively easy to automate. Nevertheless, we should be aware that in complex IT environments — particularly when systems are virtualized or cloud-based — implementing IT process automation can be challenging.
Robotic Process Automation
But let’s assume that you are an ordinary office worker and do not care about clouds and configurations on your computer. All you care about is routine work of data entry or batching you on which you spend a couple of hours a day. Every day of your working life!
For end-users, Process Automation comes in another — Robotic — flavor. But do not imagine a metal humanoid figure occupying your desk, In fact, it is another side of IT Process Automation — a software product that interfaces and interacts with applications on behalf of humans. RPA could be programmed to handle repetitive routine tasks without the need for human input.
The difference between RPA and ITPA lies in how each is used as well as in their complexity. Robotic Process Automation is used more at an end-user level to assist office workers who are not necessarily well-versed in programming and other complex IT tasks. ITPA, on the other hand, is leveraged primarily for more complex workflows, such as automating incident management to handle incoming alerts, analyze, verify and prioritize them, notify the appropriate parties and when the desired action is taken, complete the workflow and close the ticket. This is a much more complicated process and is usually overseen by experienced IT professionals
Overall, it can be said that RPA is automation for the end-users while ITPA is more behind-the-scenes process.
Benefits of Process Automation
reduced human errors,
faster response to mission-critical system problems
more efficient allocation of resources. However, achieving these benefits isn’t always trivial.
Which solution is right for you?
There's no simple answer. To find the right approach, it would be necessary to conduct a thorough needs analysis to determine what features would be required. Still, there is a vast range of options and tools — from simple scripts to complex AI and ML solutions. | https://medium.com/sciforce/automation-in-it-31550abbda41 | [] | 2019-02-19 13:21:00.734000+00:00 | ['Process', 'Artificial Intelligence', 'Machine Learning', 'Automation', 'Robotics']
Mask Up! | Mask Up!
It’s not that hard, people.
Photo by engin akyurt on Unsplash
They say no one likes masks
But I think that’s a lie.
It’s a chic fashion statement
That says, “I don’t want to die.”
They say masks are a burden,
But it depends whom you ask.
I call it a hug for my face.
It’s like a Snuggie, my mask.
They say masks don’t protect you
Or the people you meet.
I guess some people think science
Has become obsolete.
They say masks are just silly.
What I think that they mean
Is their IQ matches the digits
Found in COVID-19. | https://medium.com/no-crime-in-rhymin/mask-up-923d712d4b05 | ['Jen Kleinknecht'] | 2020-10-15 02:01:35.295000+00:00 | ['Poetry', 'Rhymes', 'Masks', 'Covid 19', 'Coronavirus'] |
Like Beef. | Like Beef.
Sometimes there really are no words. As I do research for other things, publications in a newspaper move me, and I had to create a publication just to keep them separate from the rest of my work.
Oh Really?
Baron Von Liebeg, a celebrated German chemist, states that there is much nutrition in one pound of pure chocolate as there is in a similar quantity of rare beef. Pure Chocolate is good, drink and beverage in one. GHIRADELLI’S- 1984
This advert NOT sponsored.
Reference/Resource/Bibliography
Because my references and resources are about 100 sources and very long, often times, this tanks my stats. I’ve opted to record them on a different page as to not ax my required reading time. You can find them recorded accurately here.
Like this story? Want to see others like it? Check out more in Internet Archaeology (True Crime) or Historical News. You can catch technology/cyber security influenced articles in Infoseconds. | https://medium.com/historicalnews/like-beef-c4dd2e3c0688 | [] | 2020-12-17 04:04:49.842000+00:00 | ['History', 'Food', 'Vegan', 'Chocolate', 'Time'] |
Your Users Are Telling You a Story, Are You Listening? | You may have heard more and more about “Big Data” recently; the term has been growing steadily in popularity over the past couple of years. But what exactly is “Big Data”? And how can you use it to your advantage?
We’re creating data quicker today than ever before. 90% of all the data in the world has been created in the last two years.
In the early days of the internet, information was created by individuals and companies for an audience to consume. As Web 2.0 grew, the audience created their own content for sites like Flickr and Facebook. We’ve now reached a time where connected devices are recording data at an unimaginable rate.
A modern car records about 25GB of data per hour from its sensors; information about the road conditions, the driver and the climate to highlight just a few metrics.
This is a huge amount of information. In fact, it's about the same as 12,500 copies of War and Peace. Just consider that for a moment: you can read War and Peace in about 48 hours, but in that same timeframe the equivalent of another 599,999 copies has been created…
We're now in a situation where there is too much information for people, and the tools we've developed, to process. Not so long ago it used to be prohibitively expensive to keep this much information. Storage is now available so cheaply that companies are able to save and use the data to gain a competitive edge.
Big Data is the term used to describe the new tools and processes that allow business to understand this information. It can help you solve specific issues, like security and fraud, or find new business opportunities by guiding your decision making.
Online retailers, for example, can track every product a customer looked at before making a purchase. This data allows far greater personalisation than a simple “users also bought…” system.
You and I could both buy the House of Cards boxset but we might have arrived at it for completely different reasons. You might like the political aspects whereas I’ll watch anything directed by David Fincher. Our preferences will be clear from the journey we took to arrive at the boxset. If we can understand what our customers are subconsciously telling us, we can increase total spend and total completed transactions.
The education industry, which traditionally works on a yearly feedback cycle, is beginning to see the benefits of realtime feedback. Online learning platforms can help educators understand their students by tracking their behaviour. An abundance of information can be used to spot trends and correlations which would otherwise go unnoticed.
We've never known how long a student takes to answer a question or whether they change their answer once they've read a subsequent question. This type of information is absolutely invaluable. It gives an insight into the way a student is thinking. It allows students to be treated as individuals.
Businesses have been aware of the benefits of analytics for some time, but Big Data has some key differences, encapsulated by the 3 vees:
“Volume”. We’re dealing with more data than ever before, but we can now analyse it all instead of just taking samples.
“Variety”. We used to deal with data that was very structured and fit nicely into a spreadsheet or database. Data now comes in a range of formats, from text, location data, images and tweets. It’s all valuable so we need to process it all the same.
“Velocity”. We can no longer produce reports once per quarter. Big Data allows us to use near-real time insights to be more nimble than our competitors.
It is not just having more data than the competition that will give you an advantage. Historically a lot of this information lives in silos in different parts of an organisation. The most valuable insights may only be found by consolidating the data and viewing it holistically. Companies need leadership teams that set clear goals and ask the right questions. True advantage will come by combining your domain expertise with data insights.
If you’re not listening to what your users are telling you, someone else will. | https://medium.com/future-proof-briefings/your-users-are-telling-you-a-story-are-you-listening-6249a99394f3 | ['Josh Sephton'] | 2016-03-10 11:40:37.270000+00:00 | ['Big Data', 'Data Science', 'Customer Experience'] |
Everything you need to know about AutoML and Neural Architecture Search | AutoML and Neural Architecture Search (NAS) are the new kings of the deep learning castle. They’re the quick and dirty way of getting great accuracy for your machine learning task without much work. Simple and effective; it’s what we want AI to be all about!
So how does it work? How do you use it? What options do you have to harness that power today?
Here’s everything you need to know about AutoML and NAS.
Neural Architecture Search (NAS)
Developing neural network models often requires significant architecture engineering. You can sometimes get by with transfer learning, but if you really want the best possible performance it’s usually best to design your own network. This requires specialised skills (read: expensive from a business standpoint) and is challenging in general; we may not even know the limits of the current state-of-the-art techniques! It’s a lot of trial and error and the experimentation itself is time consuming and expensive.
This is where NAS comes in. NAS is an algorithm that searches for the best neural network architecture. Most of the algorithms work in this following way. Start off by defining a set of “building blocks” that can possibly be used for our network. For example, the state-of-the-art NASNet paper proposes these commonly used blocks for an image recognition network:
NASNet blocks for image recognition network
In the NAS algorithm, a controller Recurrent Neural Network (RNN) samples these building blocks, putting them together to create some kind of end-to-end architecture. This architecture generally embodies the same style as state-of-the-art networks, such as ResNets or DenseNets, but uses a much different combination and configuration of the blocks.
This new network architecture is then trained to convergence to obtain some accuracy on a held-out validation set. The resulting accuracies are used to update the controller so that the controller will generate better architectures over time, perhaps by selecting better blocks or making better connections. The controller weights are updated with policy gradient. The whole end-to-end setup is shown below.
The NAS algorithm
It’s a fairly intuitive approach! In simple terms: have an algorithm grab different blocks and put those blocks together to make a network. Train and test out that network. Based on your results, adjust the blocks you used to make the network and how you put them together!
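As a toy illustration of that loop (this is not the actual NASNet code: the real controller is an RNN trained with policy gradient, and the evaluate function below is just a stand-in for training a child network on a proxy dataset), the search can be sketched like this:

import random

# A tiny "vocabulary" of building blocks the controller can pick from.
BLOCKS = ["3x3 conv", "5x5 sep conv", "3x3 avg pool", "identity"]

def evaluate(architecture):
    # Stand-in for: build the network, train it to convergence on a small
    # proxy dataset, and return its validation accuracy.
    return random.random()

# Preference weights play the role of the controller: blocks that appear in
# well-scoring architectures become more likely to be sampled again.
weights = {block: 1.0 for block in BLOCKS}

best_arch, best_score = None, -1.0
for step in range(100):
    arch = random.choices(BLOCKS, weights=[weights[b] for b in BLOCKS], k=4)
    score = evaluate(arch)
    for block in arch:
        weights[block] += score        # crude stand-in for the policy-gradient update
    if score > best_score:
        best_arch, best_score = arch, score

print(best_arch, best_score)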
Part of the reason this algorithm succeeds, and the paper demonstrates such great results, is because of the constraints and assumptions made with it. The NAS-discovered architecture is trained and tested on a much-smaller-than-real-world dataset. This is done because training on something large, like ImageNet, would take a very long time. But the idea is that a network that performs better on a smaller, yet similarly structured dataset should also perform better on a larger and more complex one, which has generally been true in the deep learning era.
Second is that the search space itself is quite limited. NAS is designed to build architectures that are very similar in style to the current state-of-the-art. For image recognition, this means having a set of repeated blocks in the network while progressively downsampling, as shown on the left below. The set of blocks to choose from to build the repeating ones is also quite commonly used in current research. The main novel part of the NAS-discovered networks is how the blocks are connected together.
Check out the best discovered blocks and structure for the ImageNet network on the right below. It is interesting to note how they contain quite a random-looking mix of operations, including many separable convolutions. | https://towardsdatascience.com/everything-you-need-to-know-about-automl-and-neural-architecture-search-8db1863682bf | ['George Seif'] | 2019-05-04 13:21:52.199000+00:00 | ['Technology', 'Towards Data Science', 'Deep Learning', 'Artificial Intelligence', 'Machine Learning']
The Emergence of Religion & God as an Evolutionary Strategy | Evolution of Religion
Now that we have our evolutionary primer, we can apply this logic to the emergence & prosperity of religion. There’s something peculiar about human civilization: we’re just not built for the world we live in. Most of our evolutionary track occurred in a very different world than that which we currently occupy. Any number of the ills of modern society, from undue violence to chronic loneliness can — in one way or another — find origin or explanation in our evolutionary track. We’ll only focus on one element of this:
The Invention of the Stranger
150 & Me
For the majority of human history (hunter-gatherer societies), we lived in pretty small groups. Current estimates are around 150. At first, this might just sound like an arbitrary fact, but it carries a lot of weight. It's known as Dunbar's number & research suggests it's the amount of social connection we can handle, given our brain size. By all evidence, it appears there's a ratio between brain sizes & group sizes in primates (including humans), and the number is approximately 150.
The ratio of the size of the neocortex — the region of the brain associated with cognition & language — to the size of the body is linked to the size of social groups.
As a consequence 150 estimates how many social interactions we can handle.
150 → 10'000
Around 12'000 years ago, the human race went through a major transition. For exceedingly long periods the human race lived in hunter-gatherer societies (80'000 years +), then about 12'000 years ago the Neolithic Revolution began & agriculture was invented. All of a sudden groups could take on specialized roles & survive in much larger numbers.
12'000 years ago, human settlement groups exploded from around 150 people to anything up to 10'000 people. What else happened about 12'000 years ago? A major shift in religious beliefs & practices, moving towards what we know of modern religions today.
The Invention of the Stranger
This explosion in numbers undoubtedly brought a whole myriad of contemporary issues. Although societies could become far more efficient through specialization, neither the human race nor any other species had ever lived in groups larger than it could cognitively manage: thus the invention of the stranger.
This breaks our wiring, we’re not built to handle casual strangers. Other people should either be in-tribe (who can be trusted) or out-tribe (who should probably be eliminated).
These ramifications are still omnipresent; some geneticists consider our innate altruism to be a byproduct of this revolution. Selfish genes receive no benefit from altruistic behaviour. 'Undue' altruism could then possibly be a consequence of the ability to connect with arbitrary strangers, classifying them as one-of-us.
Of course, the negative ramifications are equally evident. Ever wonder why we're able to walk past a homeless person & feel totally detached? Able to objectify a person who is clearly suffering greatly? Again, a mechanism for handling strangers (this time, less noble).
Religious Law
When this Neolithic Revolution began, we needed new mechanisms to structure society. Consider the cascade of new challenges:
Who can you trust? When should we cooperate? How do hierarchies emerge & how/why do those of ‘lower rank’ accept their position? Does society have leadership/governance? Do individuals have a commonality in goals & life objectives?
The list is endless.
Though religion greatly predates this revolution, firmer, contemporary religious beliefs emerged as the mechanism to resolve these issues.
From a pragmatic view, consider how religion addresses these issues:
Provide a ground for trust/commonality.
Create shared goals & objectives that create community & comradery.
Provide a framework for law — religion was inseparable from law for most of its history — that all could appreciate as just.
Provide an explanation of hierarchy, serving those in higher positions in society by keeping 'lower rank' individuals happy with their place in society, because this is all they should need by God's law.
Provide a framework for common goals & working together. Setting a moral obligation.
Create meaning in life that supersedes our current suffering — encourage self-sacrifice for the benefit of the group.
And so much more.
It’s an endlessly interesting topic, but here are a few small examples of behaviour that highlights the benefits:
Fighting for nation & God is easily cast as noble; one should be happy to lay down their life (the ultimate selfish-gene sabotage) for the greater good. Undoubtedly a prerequisite for a powerful society.
Religion defines trust, which allows small communities to focus on productive outputs, allowing for great specialization & maximum productivity on behalf of the group. We also see this in modern society today: consider how small groups of a particular religion dominate certain sectors (finance/law etc) by giving each other business, lower rates etc.
Provided a greater divide with others: if we are of the same religion, we are ‘brothers’. That makes it so much easier to cast non-believers as inferior/unenlightened — now consider most modern war justifications?
Donations & helping others are often a core doctrine of religion — obviously promoting the prosperity of the group.
'Serve your god & everything you ever want will return to you' — chasing utopia, in another life, brings us sanity & peace with our own situations whilst requiring we sacrifice for a greater good — again promoting group selection.
If we have this commonality, all other discrepancies are secondary.
Religion as law greatly serves those in power, increasing societal structure & obedience.
Now, this addresses the practicalities of theology, it doesn’t speak of its origin.
What if Religion & God are an emergent property of the evolutionary track of cognitively complex beings?
Religion as an Evolving Organism
Although providing a great organizational framework & although it may serve those who lead a society, religion was almost certainly not a top-down autocratic system — but rather emergent from the interactions of society.
Religion & God may be despotically gifted from one generation to another; however, it's highly improbable that a central authority dictated much of its policy. Hypocritical as it may appear — as belief in a deity is defined as a top-down centralized system — Religion probably emerged through the interaction of individuals in a bottom-up fashion: emergent from interaction, convergent to a meta-stable state.
The reason we began with an evolutionary primer is to apply this thinking to religion. It's foolish to analyze anything statically; things only make sense in the context under which they originate — as a consequence, everything is only understood in contrast with its history, competitors, era & general environment. Evolutionary thinking is also about converging to an optimum; survival of the fittest (& broader selection mechanisms) are applicable to so much more than biology: trends, cultures, social norms, ideas, innovations & yes, religion. | https://medium.com/swlh/the-emergence-of-religion-god-as-an-evolutionary-strategy-9e6149d56a41 | ['Zach Wolpe'] | 2020-10-05 10:21:05.681000+00:00 | ['Religion', 'Artificial Intelligence', 'Machine Learning', 'Complex Systems', 'Simulation']
The Hidden Price of Becoming A Successful Developer | You will lose a lot of your “friends”
People always say — stay away from the mediocre, you are your circles, tell me who your friends are kinda thing.
But how do you actually know? To be honest, I didn’t know it back then, all I thought was that as long as you are happy then you are in good company, I didn’t know the cost behind every decision, the hidden cost, and the long term damages if we continue to live a mediocre life.
There is something wrong in this world, people tend to settle for whatever there is instead of gradually asking for more, no other reason other than we deserve to live a life that we want.
If you are serious about becoming a developer, then you need to start prioritizing: that means giving up weekends, Friday nights, cocktails, vacation, happy hour, and endless conversations.
The journey will ask everything from you and will take everything you’ve got, with time, effort, and narrow focus to do great work.
Your current job (if you still need to work while learning), the time you need for study, and personal errands are already enough to fill your every day for the next 12 months.
Because you really can’t expect anything new if you keep doing the same thing if you keep choosing the same choices you have been doing all your life — and it will all start to change with one different decision after another.
Some of them won’t understand, and most of them will never even try to understand, but it’s okay — work on you, and choose you, it is time.
You’ll lose some, you’ll win some — but what matters is you will know yourself better, you will see who stays and holding the torch with you.
At the end of the tunnel, you will find yourself. Shining brighter than ever — so keep working on yourself because the people that deserves you will meet you at the top. | https://medium.com/for-self-taught-developers/the-hidden-price-of-becoming-a-successful-developer-caf833464ca8 | ['Ann Adaya'] | 2020-11-30 11:32:07.072000+00:00 | ['JavaScript', 'Software Development', 'Software Engineering', 'Work', 'Programming'] |
From 9–6 to 24–7–365. | From 9–6 to 24–7–365.
These are the numbers that represent our work-life mentality and technology’s capability.
Photo by Fauzan Ardhi on Unsplash
Technological platforms do not need a break. We do.
We need to strike a balance. We have to start thinking about ways to leverage technology, so our work continues to progress without our actual presence.
In short. Automate. Then take a break. | https://medium.com/technology-hits/from-9-6-to-24-7-365-e938ee60ec71 | ['Aldric Chen'] | 2020-12-16 06:17:16.227000+00:00 | ['Life Lessons', 'Work Life Balance', 'Technology', 'Productivity', 'Automation'] |
Big data analytics: Predicting customer churn with PySpark | Exploratory analysis
Just from a cursory look at the data, we noticed that there were rows where the userId was missing. Upon further investigation, it looks like only the following pages have no userId:
+ — — — — — — — — — -+
| page|
+ — — — — — — — — — -+
| Home|
| About|
| Submit Registration|
| Login|
| Register|
| Help|
| Error|
+ — — — — — — — — — -+
This leads us to believe that the page hits from these ID-less rows are from people who are non-registered, not signed in yet, or not yet playing music. All these pages will eventually lead to playing music. However, since these rows contain mostly empty values, we will drop them all before starting our analysis.
Just from the list of feature names in the previous section, we can get an idea of how rich the data is. It’s safe to say that every single interaction a user has with the app is logged.
Without going any deeper down the data rabbit hole, we first need to define churn. For our project’s sake, a churned user is someone who has visited the page for “Cancellation Confirmation”. This is where users confirm their cancellation request. Henceforth, any user who has visited said page will be classified as churned, otherwise they will be considered non-churned.
Luckily for us, printing the categories in the page feature shows that it has just such a value.
+-------------------------+
|page |
+-------------------------+
|About |
|Add Friend |
|Add to Playlist |
|Cancel |
|Cancellation Confirmation|
|Downgrade |
|Error |
|Help |
|Home |
|Login |
|Logout |
|NextSong |
|Register |
|Roll Advert |
|Save Settings |
|Settings |
|Submit Downgrade |
|Submit Registration |
|Submit Upgrade |
|Thumbs Down |
|Thumbs Up |
|Upgrade |
+-------------------------+
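With that page value in hand, the churn label can be attached to each user. A minimal PySpark sketch (the DataFrame name df and the exact column names are assumptions about the project's code, not necessarily the ones used there):

from pyspark.sql import functions as F

# Mark every user who ever hit the "Cancellation Confirmation" page as churned (1), else 0.
churn_labels = (
    df.withColumn("churn_event",
                  F.when(F.col("page") == "Cancellation Confirmation", 1).otherwise(0))
      .groupBy("userId")
      .agg(F.max("churn_event").alias("churn"))
)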
We can gain better insight by looking at some selected columns of the activity of a random user.
+--------------------+-------------+---------+-----+---------+
| artist|itemInSession| length|level| page|
+--------------------+-------------+---------+-----+---------+
| Cat Stevens| 0|183.19628| paid| NextSong|
| Simon Harris| 1|195.83955| paid| NextSong|
| Tenacious D| 2|165.95546| paid| NextSong|
| STEVE CAMP| 3|201.82159| paid| NextSong|
| DJ Koze| 4|208.74404| paid| NextSong|
| Lifehouse| 5|249.18159| paid| NextSong|
|Usher Featuring L...| 6|250.38322| paid| NextSong|
| null| 7| null| paid|Thumbs Up|
| Harmonia| 8|655.77751| paid| NextSong|
| Goldfrapp| 9|251.14077| paid| NextSong|
+--------------------+-------------+---------+-----+---------+
Or we can observe when songs are played the most during the day, which turns out to be at noon, during lunch hour.
xkcd-like scatter plot
Feature Engineering
Now that we have clearly identified the churned users, it’s time to put our data science hats on and try to identify the contributing factors to the churn rate. How does one behave if they were unsatisfied with the service? I will try to answer that question with the 6 engineered features below:
Average hour when a user plays a song
Maybe people who aren't enjoying the service tend to play music at different times of day. It's clear that the difference between these two groups is negligible: the actual values are 11.74 and 11.71 for churned and non-churned users respectively, which is a measly difference of 1 minute and 48 seconds. However, we will keep it in our model since it might prove to be useful once we combine it with the other features we are about to create.
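A sketch of how this feature could be computed, assuming df is the cleaned event log and the ts column holds epoch milliseconds (which is how these logs are usually stored):

from pyspark.sql import functions as F

# Average hour of the day at which each user plays songs.
avg_play_hour = (
    df.filter(F.col("page") == "NextSong")
      .withColumn("hour", F.hour((F.col("ts") / 1000).cast("timestamp")))
      .groupBy("userId")
      .agg(F.avg("hour").alias("avg_play_hour"))
)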
2. Gender
Our music streaming service might be more appealing to one gender over the other. It’s worth investigating if one gender is over-represented among the churned users. We will create a dummy variable for gender with a value of 1 for Male and 0 for female.
By grouping the users by gender and churn, we can clearly see an over-representation of males in the churned group. 62% of churned users are male, versus only 51% of non-churned users. So males are more likely to cancel their subscription.
+-----+------------------+
|churn| avg(gender_dum)|
+-----+------------------+
| 1|0.6153846153846154|
| 0|0.5144508670520231|
+-----+------------------+
3. Days of activity
People who have just signed up for the service might not have formed a real opinion on the service yet. For starters, they haven't had enough time to try all the different features. People also need time to get comfortable with any new software. In theory, this should make newer users who haven't been subscribed for long more likely to cancel, or churn.
It turns out that our theory was accurate: new users are significantly more likely to cancel than older users.
4. Number of sessions
It makes sense that users who log on more are enjoying the service since they are using it more often. The more times someone logs on, the more likely they should be to keep using our service.
That turned out to be accurate. Similar to the Days of activity feature, people who are either new subscribers, or don’t use the service that often are more likely to cancel.
5. Average number of songs per session
It's not just the quantity of sessions that matters, but the quality as well. Even users who stream music sporadically can still be consistently listening to plenty of songs per session, indicating a good experience despite the low number of times they've used it.
While the difference is small, it is certainly visible. The effect might not be as significant as the previous two calculated features, but it’s visible enough to keep in our analysis and find out how relevant it is.
6. Number of errors encountered
One of the most annoying messages we can receive is an error message. Especially one which interferes with something we are engrossed in, like a streaming movie or music. Users who experience more errors on average should be more likely to cancel their service, since they’ve had a buggier experience. Let’s look at the verdict.
It turns out that churned users experience fewer errors. This might be explained by the longer time-frame in which non-churned users are active. So the longer we use the service, the more likely we are to run into errors. All things considered, the error rate is low.
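The remaining per-user aggregates can be built in the same spirit. A rough sketch, again with assumed column names and one possible definition of days of activity:

from pyspark.sql import functions as F

user_features = df.groupBy("userId").agg(
    # Span between first and last event, in days (one possible definition of "days of activity").
    ((F.max("ts") - F.min("ts")) / (1000 * 60 * 60 * 24)).alias("days_active"),
    F.countDistinct("sessionId").alias("n_sessions"),
    F.sum(F.when(F.col("page") == "Error", 1).otherwise(0)).alias("n_errors"),
    F.max(F.when(F.col("gender") == "M", 1).otherwise(0)).alias("gender_dum"),
)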
Building our model
With our features being properly engineered, we will train three models and pick the best one:
Logistic Regression Random Forest Gradient-boosted Tree
In our attempt to build modular code, we’ve written a function which builds a pipeline for each model and runs it. It also takes a parameter grid if we want to train our model on the data with different hyper-parameters.
After finding the best fitting model from the combination of hyper-parameters, it is returned by our function. We can then apply each model on our test data and calculate how accurate it is by using a standard evaluation metric. In this case we will be using the F1 score.
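A minimal version of such a helper, built on Spark ML's Pipeline and CrossValidator, might look like the sketch below. The feature assembly step, column names, and fold count are assumptions rather than the project's exact choices:

from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

def fit_model(classifier, train_df, feature_cols, param_grid=None):
    # classifier is e.g. RandomForestClassifier(labelCol="churn", featuresCol="features")
    assembler = VectorAssembler(inputCols=feature_cols, outputCol="raw_features")
    scaler = StandardScaler(inputCol="raw_features", outputCol="features")
    pipeline = Pipeline(stages=[assembler, scaler, classifier])
    if param_grid is None:
        param_grid = ParamGridBuilder().build()   # default hyper-parameters only
    evaluator = MulticlassClassificationEvaluator(labelCol="churn", metricName="f1")
    cv = CrossValidator(estimator=pipeline,
                        estimatorParamMaps=param_grid,
                        evaluator=evaluator,
                        numFolds=3)
    return cv.fit(train_df)   # returns the best fitted model found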
The F1 score is the harmonic mean of Precision and Recall: F1 = 2 * (precision * recall) / (precision + recall). It's a good measure for general problems since it isn't heavily skewed by false positives or false negatives alone, something that is especially useful when we have an uneven class distribution, which is our exact situation here.
The end result being:
The F1 score for the logistic regression model is: 0.825968664977952
The F1 score for the random forest model is: 0.8823529411764707
The F1 score for the gradient-boosted tree is: 0.8298039215686274
By using the same metric, we can easily compare all three models and conclude that in this 'smaller' dataset, the most accurate model to predict user churn is the Random Forest classifier.
Training on the full dataset
The next step was to take our functioning code and run it on a cluster on AWS. We ran into several issues while trying to run the analysis on the full dataset. While the issues seemed small in nature, it took a very long time to resolve them given how much time it takes the Spark cluster to process the data.
Firstly, the version of Pandas on the cluster was outdated. We circumvented the issue by simply changing the problematic method from ‘toPandas()’ to ‘show()’. While the output isn’t as visually appealing, it serves its purpose.
Secondly, Spark was timing out after a certain amount of time which made the results prior to the timeout useless. The only viable solution I could find was to increase the cluster size in order to shorten the time required to process our code.
This was a surprising problem since Spark is a big data framework, so long processing times are supposed to be expected.
Thirdly, performing a grid search on hyper-parameters proved especially challenging given the timeout issue and just how much time it takes to process the data on one model. So, for the sake of our own sanity, we used the default parameters for our classifier of choice.
Lastly, the biggest issue happened at the model evaluation step. Even though the model was trained without any apparent problems, I kept getting the following error message and could not find a solution for it.
Exception in thread cell_monitor-17:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/opt/conda/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/lib/python3.6/site-packages/awseditorssparkmonitoringwidget-1.0-py3.6.egg/awseditorssparkmonitoringwidget/cellmonitor.py", line 178, in cell_monitor
job_binned_stages[job_id][stage_id] = all_stages[stage_id]
KeyError: 1256
I was not able to properly gauge the accuracy or find the confusion matrix of the random forest classifier. Unfortunately, we can only presume that the model would perform even better when trained on the full dataset using a cluster on AWS.
Feature importance
From a business standpoint, model accuracy is not the only thing decision makers are interested in. They also want to know exactly what pushes people to cancel or unsubscribe from their service. The following graph shows exactly how big of a role each feature plays in that decision.
+--------------+----------+
|feature |importance|
+--------------+----------+
|days_active |0.4959 |
|n_sessions |0.2146 |
|avg_play_hour |0.1355 |
|avg_sess_songs|0.1009 |
|n_errors |0.0386 |
|gender_dum |0.0145 |
+--------------+----------+
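For reference, a ranking like this can be pulled out of the fitted model roughly as follows; best_model and feature_cols are assumed names for the fitted pipeline and the list of assembled feature columns:

rf_model = best_model.stages[-1]                     # the RandomForestClassificationModel stage
importances = rf_model.featureImportances.toArray()  # one weight per assembled feature
for name, weight in sorted(zip(feature_cols, importances), key=lambda t: -t[1]):
    print(f"{name}: {weight:.4f}")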
It turns out that the number of active days and the number of sessions streaming music are the most important features when predicting user churn. | https://towardsdatascience.com/big-data-analytics-predicting-customer-churn-with-pyspark-19cd764f14d1 | ['Tarek Samhan'] | 2019-03-25 01:00:26.190000+00:00 | ['Data Science', 'Udacity', 'Machine Learning', 'Big Data', 'Spark'] |
Integration in elementary terms | CALCULUS FOR DATA SCIENCE AND MACHINE LEARNING
Integration in elementary terms
Formulas to accelerate integration
In this post, we will summarise some of the most useful techniques to calculate integrals. This will allow you to avoid the limit notation, but there will always be some difficult integrals.
Primitive functions
A function F satisfying F’ = f is called a primitive of f. Of course, a continuous function f always has a primitive.
Primitive integral, self-generated.
But now we are searching for a primitive that is an elementary function, a function that can be written from familiar functions like logarithmic, trigonometric…
Integration methods
The basic methods of integration are theorems which will allow us to express primitives of functions in terms of primitives of other functions.
As we already introduced, when we calculate the derivative of a function we drop the constant values. When we integrate the resulting function, we can't go back to the original function; we will always lose this constant. That's why we write the result of an integral with a +C.
For example, in the following image, we see that if we calculate the derivative of this function and then the integral of the result, we do not obtain the original function again; we lose the 4 value.
Example of why we use a constant C, self-generated.
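As a concrete illustration (the specific function x^2 + 4 here is an assumed stand-in for the one shown in the original image), differentiating drops the constant and integrating cannot bring it back:

F(x) = x^2 + 4, \qquad F'(x) = 2x, \qquad \int 2x \, dx = x^2 + C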
In this sheet, you will have all the basic integrals.
Some advanced theorems on integration methods
Integration by parts
If f’ and g’ are continuous, then
Integration by parts, self-generated.
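Written out, the caption's formula is the standard integration-by-parts identity, which follows from the product rule (fg)' = f'g + fg':

\int f(x) \, g'(x) \, dx = f(x) \, g(x) - \int f'(x) \, g(x) \, dx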
This technique is used when the function to integrate can be expressed as a product of functions f and g. There are some tricks to apply this theorem:
Consider the function g’ to be the factor 1, so we can always use integration by parts.
Use integration by parts to find the integral in terms of itself.
The substitution formula
If f and g’ are continuous then
The substitution formula, self-generated.
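Written out, the substitution formula reads:

\int f(g(x)) \, g'(x) \, dx = \int f(u) \, du, \qquad u = g(x)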
To apply this formula, we have to be able to separate the function to integrate into (f ∘ g)·g'. Then we use the following decomposition:
Let u = g(x) and du = g'(x)dx.
Find a primitive (as an expression involving u).
Substitute g(x) back for u.
Conclusion
In this post, we explained two ways to simplify the calculation of integrals. With this, we're done with basic calculus. The next posts will be more practical, starting with machine learning regression models. | https://medium.com/ai-in-plain-english/integration-in-elementary-terms-2ad14e464105 | ['Adrià Serra'] | 2020-10-05 20:47:29.545000+00:00 | ['Data Science', 'Deep Learning', 'Artificial Intelligence', 'Machine Learning', 'Calculus']
The A.I. Among Us | The A.I. Among Us
Understanding the Deepness of Deep Learning.
As an integral part of many industries, Artificial Intelligence is progressing rapidly and becoming a part of our daily lives. Just by using the internet, there is a chance that your data will become part of an algorithm used for the future of technology. But how far are we from true artificial intelligence? Could A.I. be among us?
Photo from https://www.forbes.com/sites/cognitiveworld/?sh=5f48e1fe729a
Well, today, we have already developed advanced machine learning technology with extensive behavioral algorithms that has the ability to cater to what we need to do and what we like to see. Through high computing power and massive amounts of data, these machines learn patterns and algorithms to mimic how human brains think.
One popular example of the application of A.I. is Natural Language Processing (NLP), where the machine learns to be proficient in understanding speech or text delivered in natural language. This is used by virtual assistants and chatbots like Siri, Alexa, Cortana, etc. A related example is Tesla's Autopilot and Full Self-Driving Capability, which uses A.I. / Deep Neural Networks to perform semantic segmentation, object detection and monocular depth estimation, learning from diverse scenarios in the world. Interestingly, A.I. was also used to study the most strategic games we have, like Jeopardy, Chess and Go, wherein the machine successfully acquired the skill to beat even the world champions.
The documentary of Google's DeepMind AlphaGo shows us how machines can be so intelligent that they can overpower even our geniuses, like Lee Sedol, in the world of the game "Go", which is considered the most complex board game ever devised by man. There were moments when AlphaGo made "weird" moves that people thought unnecessary, but in the end they turned out to be the right moves for victory. Lee Sedol managed to win one game out of five; he managed to somehow adapt to how AlphaGo plays the game. However, in the last game, AlphaGo won again, because the machine is designed to learn from past iterations and improve its algorithm to avoid losing the game.
Looking at the brighter side, these machines can help and teach us to be better thinkers. Cracking the code of the games mentioned does not aim to kill their essence but rather shows us that Artificial Intelligence has great potential to be applied in any field. With this, we have the capacity to build more machines that can solve more serious world problems, like an A.I. that can detect cancers better than human doctors, and even today's enemy, COVID-19; an artificial intelligence model can now detect asymptomatic Covid-19 infections through cellphone-recorded coughs. This shows why Big Data matters and why it is powerful. | https://medium.com/swlh/the-a-i-among-us-8f010214ebc4 | ['Angel Mariano'] | 2020-11-13 15:17:14.878000+00:00 | ['Deep Learning', 'Artificial Intelligence', 'Machine Learning']
Crawling Into Time | Photo by Annie Spratt on Unsplash
my dog paces
panting frantically
she’s a golden retriever
and she has to pee
ann sits on the bed
reading a red book
hardcover
the cats lay on
opposite ends
of the floor
glaring at each other
the wind rustles the trees
outside the open window
where i can
hear the cars
every so often
in the dim lights
i look over
and see
on the nightstand
next to my side
of the bed a drawing of
ann’s lower back
hips
ass
and thighs
in tight lace
underwear
a birthday present
for me
from two years back
the dog stops pacing
climbing into bed
successfully acquiring ann’s
attention
meanwhile i sit here
tapping on this laptop
crawling into time
where i can feel
pieces of peace
waves sliding through
my body
at a snails pace
it is
quite nice
but enough of this
now
it’s late
i’m going to slip
into the bed
maybe read a book
maybe start writing a grocery list
maybe see what ann’s wearing
underneath the covers
maybe close my eyes
and think up a better
ending
to this one | https://medium.com/scribe/crawling-into-time-2fe181ec894c | ['Austin Briggman'] | 2020-02-28 07:54:01.689000+00:00 | ['Writing', 'Moments', 'Endings', 'Observation', 'Poetry'] |
How to write Crystal Clear Code | Today, we’re going to explore an exciting programming language called Crystal — a compiled Ruby-syntax-like LLVM backend statically typed fully object-oriented language. The reason why I’m so excited about Crystal is the fact that it brings together the benefits of both compiled and dynamic languages.
Brief history
Crystal was designed and developed by Ary Borenszweig and Juan Wajnerman in 2014. The purpose was to create a language that includes all the benefits of Ruby, but without its associated downsides. Namely, the language must be as elegant and productive as Ruby, but as fast and safe as a compiled/statically typed language.
Everything is an object
In Crystal, everything is an object. The definition of an object boils down to these points:
It has a type;
It can respond to some methods;
Data types in Crystal
Symbol is a number (Int32) but with a human-readable name — for example, :hello. Symbols can be used as identifiers or keys of any kind, such as keys in a hash map.
Tuple is a fixed-size, immutable sequence of values. tuple = {26, "Arsen", '👨💻'} is an example of Tuple(Int32, String, Char). You can also assign labels to values, which will create a NamedTuple. For instance, tuple = { age: 26, name: "Arsen", avatar: '👨💻' } looks much better and allows us to reach values by using a symbol as a key tuple[:age] # gives 26 .
Proc represents a pointer to a function and a context. The return type is inferred from the proc's body. For example, proc = ->(x : Int32) { x.to_s } is a proc with one input argument and a return value of String type. To invoke a proc, you call its call method: proc.call(1).
A Proc can also be created from an existing method by prefixing it with the -> operator.
Type system
Despite being a statically type-checked language, Crystal keeps the type system almost invisible for developers. Most of the time, it actually feels like you’re using a dynamically typed language instead.
Crystal achieves this by allowing a variable to have not only one type, but N-types at the same time.
In the example above, the variable a could be an Int or a String. In Crystal, this is known as a Union Type. The most interesting part is how methods are called on a union type. The rule is simple: all types in a union must respond to the method you're trying to call.
It means that if you try to call a + 1, you will get a compile-time error because Strings do not respond to + methods.
If you want to call a type-specific method, a runtime check is required, for example an if a.is_a?(String) branch, inside which the compiler narrows a down to String.
Methods
To define a method, you need to provide a name and specify a list of parameters. You can provide types for the parameters but this is completely optional.
The return value is the last computed expression within the method. In this case, the return type is inferred from the returned value.
Nullability
Types in Crystal are non-nilable. To express nullability, Crystal uses a type called Nil, which means that if a method returns a value or nil, it will provide a union of some type and Nil.
Error handling
Lastly, Crystal deals with errors by raising and rescuing exceptions. To raise an exception you use the raise method, for example raise "OH NO!", and a begin/rescue/end block to catch exceptions.
This concludes my quick overview of the Crystal programming language, but if you want to learn more about it, please visit the official site. | https://medium.com/revolut/how-to-write-crystal-clear-code-ff2d3fe03a49 | ['Arsen Gasparyan'] | 2017-12-21 13:06:39.779000+00:00 | ['Engineering', 'Software Development', 'Programming', 'Mobile', 'Revolut'] |
Recurrent / LSTM layers explained in a simple way | Introduction
For all the previously introduced layers, the same output will be generated if we repeat the same input several times. For instance, suppose we have a linear layer with f(x) = 2*x. Each time we ask it to predict f(3), we will get 6. So if we ask 10 times in a row to predict the output when the input is 3, the NN will always give 6:
F(3)=6; F(3)=6; F(3)=6; F(3)=6; F(3)=6; …
Now imagine we are training an algorithm to detect repetitions, so we want F(3) = 0 the first time (no repetition detected), then F(3) = 1 the second time. We can't achieve this behavior with non-recurrent layers, since by definition we will always get the same output for the same input. A hack solution for this is to take a vector of 2 variables, so we can treat the first variable differently than the second variable. So F([3;0]) = 0 (no repetition is detected) but F([3;3]) = 1 (repetition is detected). The downside of this hack is that we can operate only on a predefined fixed sequence length.
Solution
In order to solve the previously introduced problem, recurrent layers were invented. They are a family of layers that contain an internal active state. In the simplest form, we can write Recurrent layers in the following way:
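In equation form (a plausible reconstruction of the missing figure, using the symbols described below):

H_t = f(H_{t-1}, X_t), \qquad Y_t = g(H_t)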
H is a hidden active internal state that usually starts at 0.
f is a function that updates the internal state between sequence steps.
g is another function that uses the current internal state to calculate the output.
After each input X, H is updated using f, then the output Y of the recurrent NN is generated from this updated state using g.
So when we send the same input X=3 several times, we might get different outputs Y, because the internal state changes after each step.
For example, let's consider the following simple example, where f updates the state as H = 2*H + X and g computes the output as Y = 5 - H:
First we start with H = 0.
After the first X = 3, we get H = 3 and output Y = -3 + 5 = 2.
After the second X = 3, we get H = 2*3 + 3 = 9 and Y = -9 + 5 = -4.
After the third X = 3, we get H = 2*9 + 3 = 21 and Y = -21 + 5 = -16.
If we reset H = 0 and then ask again for X = 3, we get H = 3, Y = 2 again.
And so on, we can see easily here how we can get different outputs with the same input repeated. At any moment, we can reset the internal state (H=0) then the same sequence will be generated.
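The same toy cell can be written out in a few lines of code, using only the update rule H = 2*H + X and the output Y = 5 - H from the walkthrough above:

class ToyRecurrentCell:
    """Toy recurrent 'layer': f(H, X) = 2*H + X, g(H) = 5 - H."""
    def __init__(self):
        self.h = 0
    def step(self, x):
        self.h = 2 * self.h + x   # f: update the internal state
        return 5 - self.h         # g: compute the output from the new state
    def reset(self):
        self.h = 0

cell = ToyRecurrentCell()
print([cell.step(3) for _ in range(3)])  # [2, -4, -16]
cell.reset()
print(cell.step(3))                      # 2 again after resetting the state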
Basically recurrent networks behave like non-recurrent networks if we reset the internal state after each step (sequence length = 1).
Properties
For recurrent networks, the same input sequence will give the same output sequence (the internal state is reset after each sequence, not after each input), while for non-recurrent networks, the same input always gives the same output.
Recurrent layers are very useful for everything related to sequences, where an element of the sequence matters less by itself than its position within the sequence.
Consider text processing: if we have the letters H,O,U,S and we want to predict the next letter, we will predict E. If we have the letters L,I,S => we will predict T. Although both sequences have the letter S as the last known letter before prediction, the history of the sequence is more important for the prediction than just the last letter. This sequence history is encoded in the internal state H, so when the letter S finally arrives, we will have different histories (or internal states) that allow us to generate E as the next letter in the first case, and T as the next letter in the second case.
If you enjoyed reading, follow us on: Facebook, Twitter, LinkedIn | https://medium.com/datathings/recurrent-lstm-layers-explained-in-a-simple-way-d615ebcac450 | ['Assaad Moawad'] | 2020-07-28 07:44:24.164000+00:00 | ['Machine Learning', 'Lstm', 'Artificial Intelligence', 'Recurrent Neural Network', 'Backpropagation'] |
The Vulnerability Industrial Complex | The Vulnerability Industrial Complex
On turning our insatiable need for hope and inspiration into profit
What does it say about America that it wants to build a giant wall to keep other humans out?
As with anything to do with the current administration, the subtext isn’t subtle: America doesn’t want to share. To this end, Trump is planning to turn the US into a country-sized gated community where members abide by exclusionary rules.
This civil engineering endeavor reminds me of another guarded concrete barrier that separated humans in 1960s Germany, both physically and ideologically. The Berlin Wall severed family ties and shattered many lives in the name of some elusive common good.
A wall was an aberration then and it is an aberration now.
And yet, Trump’s wall strikes me as one of the most brutally honest metaphors about America today.
Ours is a country that fancies itself as an exclusive membership club, bound together by a monotheistic religion devoid of compassion, decency, or indeed any kind of supernatural force.
In the United States, there’s only one god.
We worship the nineteenth letter of the English alphabet adorned with two vertical strokes because it’s the only tangible deity there is. | https://asingularstory.medium.com/the-vulnerability-industrial-complex-cf13ad2d0fbf | ['A Singular Story'] | 2019-09-15 12:01:02.581000+00:00 | ['Politics', 'Social Media', 'Mental Health', 'Culture', 'Self'] |
📺 High tech — high life. 🔌 Quality of life in the… | 🔌 Quality of life in the Post-Apocalyptic world got better.
Fellow Cryptolords!
In the post-apocalyptic world, High Quality of Life is a myth. Or not?
Today we're getting down to the ground to talk about Life Quality. From the beginning of the Worldopo project, the Life Quality concept, let's say, did not correspond to the actual quality of life. We want to change that!
New Buildings to sustain your Settlement
In the new concept, we assume that hexagons represent land with people living on it. Real, living, and breathing people — like you or me. Like any person, they all need food, apartments, and someplace to have a few shots of whisky when winter comes. Of course, being happier makes one work better and make funnier jokes!
Thus, three new Life Quality Buildings come in: the Grocery to purchase cereals or any other food; the Apartment House to stay for a night; and the Bar Building to have that whiskey!
Happiness level = Productivity level.
Because Wolrdopo is a Business-oriented simulator, we want to make the world much more immersive and interactive than before. The first reason behind touching Quality of Life is that it brings cash circulation inside the Settlement Economic system to sustain other activities.
The second reason is the Realistic Economy: happier people work much better in Settlement buildings, like Hyperfactory or Financial Dep. The more comfortable people are — the more money you will get. In other words, Life Quality buildings influence the efficiency of your New World Business Empire!
Only together, we can get rid of misery and poverty in the post-apocalyptic world!
☝🏻 Follow us for mo’ updates!
👉🏻 Telegram: https://t.me/worldopo_ann
👉🏻 Facebook: https://www.facebook.com/worldopo
👉🏻 Twitter: https://twitter.com/Worldopo1
😉 See ya’ll around! | https://medium.com/worldopo/high-tech-high-life-2463073362ba | [] | 2020-09-29 07:02:03.615000+00:00 | ['Game Development', 'Games', 'Development', 'Cryptogames', 'Crypto'] |
The Risks & Rewards of Emerging Technologies within Public Services | By Brandie Nonnecke, Director, CITRIS Policy Lab & Camille Crittenden, Executive Director, CITRIS and the Banatao Institute
Investments in digital infrastructure in the public sector have lagged for years. The COVID-19 pandemic has torn back the curtain to reveal a dilapidated IT framework that undergirds many of the services that millions rely on for education, food, and public safety. Within the first three months of the pandemic, over 44 million Americans filed for unemployment, overwhelming current government software systems and public service workers. Now is the time to remediate patchy systems and strengthen the tools and platforms needed to meet the demand for public services likely to continue well into the future.
The pandemic only highlights a long-standing need to improve public sector processes. With decades of rising workload demands, worker shortages, and budget constraints, many public sector institutions have been ramping up deployment of emerging technologies to support productivity. Machine learning-powered tools are increasingly used to support decision-making in classrooms and child welfare offices, chatbots can field common questions from the public and offer appropriate resources in law enforcement and food assistance programs, and robotic process automation (RPA) bots assist to streamline social service applications.
While emerging technologies such as natural language processing, machine learning, and RPA promise to make the public sector more efficient, effective, and equitable, they also pose ethical challenges in implementation. Emerging technologies, especially AI-enabled tools, can present risks to the public by reinforcing biases, making costly errors, and creating privacy and security vulnerabilities from data collection and collation. For public sector workers, implementation of inefficient, inaccurate, or ineffective technologies can overburden and undermine their efforts.
The public sector is at a pivotal moment in its digital transition. While the pandemic has acted as a catalyst, jumpstarting the rollout of emerging technologies in services, full integration into the sector is still in the early stages. The appropriate modernization of the sector requires proactive and thoughtful consideration of the benefits and risks of deployments and careful analysis of the effects of these early applications to inform appropriate technology and policy strategies. Doing so will better ensure that future applications maximize benefits and mitigate harms to the public sector workforce and the beneficiaries of its programs.
The CITRIS Policy Lab, with support from Microsoft, has released a report investigating the effects of emerging technologies within three public service sectors: K-12 education, social welfare services, and law enforcement. The report explores implications of emerging technologies on efficiency, effectiveness, and equity in each sector and provides specific technology and policy recommendations for each. These analyses are used to formulate broad recommendations to guide adoption of emerging technologies in ways that mitigate harms and maximize benefits for workers and the public. Among the recommendations: implement frequent reviews to ensure technology deployments are adequately meeting the needs of the workforce and public; develop appropriate training mechanisms to equip workers with the technical skills necessary to use and evaluate the effects of new technology; and adjust procurement processes to confirm that gains in efficiency and effectiveness from implementation do not outweigh equity concerns.
The public sector is rife with antiquated IT infrastructure in dire need of being updated. The COVID pandemic and related economic disaster have accelerated the need to implement better technology-powered solutions. Fortunately, innovative tools incorporating machine learning, virtual reality, and robotics are ready to be put into service in the sector. With appropriate consideration of their effect on the workforce and the public, emerging technologies can be leveraged to provide more efficient, effective, and equitable outcomes for public service professionals and the democracy they serve. | https://medium.com/citrispolicylab/the-risks-rewards-of-emerging-technologies-within-public-services-e56bcc72b845 | ['Citris Policy Lab'] | 2020-09-09 17:22:18.508000+00:00 | ['Artificial Intelligence', 'Emerging Technology', 'Future Of Work', 'Public Sector', 'Covid 19'] |
A Terminal Worthy of a Pilgrimage | This ingenious, large-scale, and unprecedented solution could not be replicated in an enclosed terminal, but it informed the principle of displacement ventilation and was assisted by the introduction of “air towers” at the ground level. Viewing early photographs of the Hajj Terminal in use, one’s eye goes to either the pilgrims in their distinctive national garb or to the billowing tent structures overhead. However, look closely and you will see a series of octagonal pylons with rows of nozzles around them. These “air towers” deliver fresh air at the occupied level to assist the stack effect of the tents and provide additional temperature mitigation for the pilgrims.
Air towers cool the expansive area at ground level while hot air escapes from the tops of the tents. Photo © Jay Langlois | Owens-Corning
From the 1990s on, such air towers have become commonplace in contemporary airport terminal design as a means of delivering conditioned air from below within high-ceiling spaces. The Hajj Terminal is the first large-scale use of this technique.
When pilgrims arrive at the Hajj Terminal and pass through immigration and customs, they might spend a dozen hours waiting for organized transport to Mecca or Medina. Most of the terminal’s surface area consists of these waiting areas. The open area is divided into five large modules measuring 135 by 300 meters. These modules provide a clear path from customs to curb and back to check-in upon departure. Each module is provided with essential services and discrete group waiting areas, some with cooking facilities. This arrangement allowed for a sense of national identity within the global community of the faithful gathering for the Hajj.
Many designers today are planning outdoor areas for new terminals. The ultimate source of this idea is standing in Jeddah.
Setting aside the religious dimension, contemporary airport terminal design attempts to break down the immense scale of the building into areas that are more comprehensible and inviting, without losing the sense of general orientation and wayfinding that is also necessary to navigate such spaces. This “passenger experience” dimension of the Hajj Terminal design is not fully appreciated.
Born of necessity, it turns out that the open-air format of the Hajj Terminal provides healthier conditions during the current pandemic, allowing the sunlight, natural humidity, and air movement to create a safer environment than a densely packed interior space. Many designers today are planning outdoor areas for new terminals, and a few small terminals that are mainly open air in benign climates have received renewed attention. The ultimate source of this idea is standing in Jeddah.
These design moves are closely bound up with the form, function, and technology of the superstructure, but are transferrable to more conventional systems for fully enclosed and conditioned terminals. And that is what happened. Most of these strategies are taken for granted today, but were unknown or uncommon when the Hajj Terminal was designed. The utterly unique and singular purpose of the terminal for the pilgrimage has meant that the Hajj Terminal’s influence has gone largely unacknowledged. The terminal became known only for the tents, while its other innovations were obscured by a superficial understanding of what lies beneath them.
Pilgrim tents during Hajj. Photo © SOM
Finally, the matter of the tents — the salient visual feature of the Hajj Terminal. Along with the tapering steel columns and cable system, they form perhaps the most perfect synthesis of visionary architecture, advanced structural engineering, technical mastery of new materials — all in a form that is deeply resonant of the place and representative of its function. There have been superficial imitations in terminals large and small, but no subsequent use of a fabric roof structure that so elegantly plays many roles. In addition to forming a relatively lightweight covering for a vast area, and performing a key role in mitigating the high ambient temperatures of Jeddah's climate — almost no matter the season in which the Hajj falls — the conical form of the Hajj Terminal cells alludes in the subtlest way to the canvas tents erected by and for the pilgrims in vast camps around Mecca and Medina since the earliest centuries of the Hajj. Some 45,000 tents are still erected to house pilgrims not staying in the increasing number of hotels and hostelries.
In my view, it is critical to an appreciation of the achievement of the Hajj Terminal to know that the designers first envisioned a fully enclosed and mechanically cooled terminal. They did not start with a superficial mimicking of the pilgrim tents. Numerous post-war architects had dreamed of realizing in permanent form the concepts of lightweight exhibition structures of the 1920s and 1930s, such as those designed by Le Corbusier. Even considering the large tensile structures for sports venues of the 1960s and 70s designed by Frei Otto, the technical prowess achieved in Jeddah of scaling up multiple innovations to colossal size in relatively new materials and systems is impressive.
In the end, the structural rationale of the terminal, growing from the functionality and flexibility of the square grid, and rising through the anticlastic stretching of the high-tech fabric, arrived at an allusion to the pilgrim tents. The reference could not have been lost on the waves of pilgrims. The aptness could not be more profound. The synthesis of architecture, function, structure, environment and culture no more complete. These remain central aspirations for terminal designers today. There may be no building more seminal to its type than the Hajj Terminal, yet largely unknown today. The result was both deeply resonant and entirely new.
General view of western, unfinished half of Hajj Terminal. Photo © Jay Langlois | Owens-Corning
Several years ago I had the good fortune to be able to make my own secular pilgrimage to the Hajj Terminal. It did not disappoint. The terminal’s immense size makes an instant impression. Although there are no other buildings around it to lend a sense of scale, the terminal seems to hover high above the desert. It is true that the “furniture” — the functional partitions and passenger services at the ground plane — needs upgrading for operational, commercial, and climatic reasons, after decades of use. But the brilliant, truly awe-inspiring framework remains.
The one great surprise for me was to see that the west half — all 230,000 square meters of sheltered area — was never finished with the passenger facilities at the ground level. The superstructure of tents looms over the rough ground — not like a ruin that has been diminished by time, but a luminous shelter waiting to be inhabited by the next generation of pilgrims. Would this not be an ideal canvas on which to design a model pandemic-resistant terminal for this new age? As they say in Arabic, “inshallah.” | https://som.medium.com/a-terminal-worthy-of-a-pilgrimage-2e2402713b63 | [] | 2020-07-28 19:45:49.127000+00:00 | ['Architecture', 'Hajj', 'Transportation', 'Design', 'Airports'] |
Structured Streaming: Essentials | This is the second chapter under the series “Structured Streaming” which center around covering all the essential details to set up a Structured Streaming query. Peruse the previous chapter here for getting introduced to Structured Streaming.
Sources
Sources in Structured Streaming refer to the streaming data sources that bring data into Structured Streaming. As of Spark 2.4, the built-in data sources are as follows,
Kafka
Kafka source reads data from Kafka brokers and is compatible with Kafka broker versions 0.10.0 or higher. Follow this link, which focuses on Spark-Kafka integration, for more details.
File source
File source reads files in a directory as a stream of data. For example, reading log files from an HDFS directory that were collected at regular intervals using Flume.
We will leave the nitty-gritty details of file sources and how to use them in production for later parts of this series.
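For a first taste, here is a minimal sketch of a file source read; the schema fields and the HDFS path are illustrative assumptions, not from any real project.
import org.apache.spark.sql.types.StructType

// Illustrative schema for CSV log files landing in a directory
val logSchema = new StructType()
  .add("timestamp", "timestamp")
  .add("level", "string")
  .add("message", "string")

val logsDF = spark.readStream
  .format("csv")
  // file sources need an explicit schema unless schema inference is enabled
  .schema(logSchema)
  .option("header", "true")
  .load("hdfs:///data/incoming/logs")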
Socket source
Reads text input from a socket connection. The socket source should be used only for testing purposes.
Schema: The socket source has two possible schemas:
1. A single column named "value" of String type.
2. Two columns: "value" of String type and "timestamp" of Timestamp type.
val socketDF = spark.readStream.format("socket")
.option("host", "localhost")
.option("port", 9999)
//set to false or ignore this parameter to set the schema to type 1
.option("includeTimestamp",true)
.load()
Rate source
The rate source is also intended for testing, but it automatically generates sample data at a configurable rate.
Schema: The rate source has two columns: "value" of Long type and "timestamp" of Timestamp type.
val socketDF = spark.readStream
.format("rate")
.option("rowsPerSecond", 10)
.load()
Schema Inference
For file sources, it is recommended to provide the schema explicitly so that it stays consistent across runs. To infer the schema automatically instead, set the configuration parameter "spark.sql.streaming.schemaInference" to true (it is false by default).
For sources like Kafka, socket, and rate, the schema is fixed and cannot be changed.
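As a sketch, enabling inference for a file source is a one-line configuration change (the JSON path below is an illustrative assumption):
// Allow Structured Streaming to infer the schema of file sources (off by default)
spark.conf.set("spark.sql.streaming.schemaInference", "true")

val inferredDF = spark.readStream
  .format("json")
  .load("hdfs:///data/incoming/events")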
Sinks
Structured Streaming uses sinks to write the output of each batch to a destination. Spark ships with the following sinks as of Spark 2.4,
File sinks
Writes the output to files in a directory. The supported file formats are ORC, JSON, CSV, and Parquet.
writeStream
.format("json")
.option("path", "destinationdir")
.start()
Kafka sinks
To publish the output to Kafka topics.
writeStream
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
.option("topic", "updates")
.start()
Foreach and ForeachBatch sinks
Both sinks are used to run arbitrary computations on the output records, and each has its own merits. More details on each sink will be covered later in this series.
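As a small, hedged sketch of what a foreachBatch sink can look like (assuming streamingDF is any streaming DataFrame, and the output path is illustrative), each micro-batch arrives as a regular DataFrame along with its batch id:
import org.apache.spark.sql.DataFrame

streamingDF.writeStream
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    // batchId can be used for deduplication; here we simply reuse the batch API
    batchDF.write
      .format("parquet")
      .mode("append")
      .save("destinationdir")
  }
  .start()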
Memory sinks
The memory sink is designed for debugging; the complete output is stored in memory as an in-memory table. It is therefore recommended only for low volumes of data.
writeStream
.format("memory")
.queryName("tableName")
.start()
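Once the query is started, the in-memory table can be queried by the name given above, for example:
// Query the in-memory table registered by the memory sink
spark.sql("SELECT * FROM tableName").show()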
Console sinks
Like the memory sink, the console sink is designed for debugging, and the output of each trigger is collected into the driver's memory. Instead of storing it as an in-memory table, it prints the output to the console.
writeStream
.format("console")
.start()
By implementing StreamSinkProvider, it is possible to develop custom sinks.
Output Modes
The output mode defines how the result is written to the sink: whether only new rows are added, or existing rows are updated with new information. Spark supports three output modes, which need to be specified when setting up the sink; if none is specified, the default is append.
Append
Append mode writes only the new rows to the output table since the last trigger.
Update
Update mode writes only the rows which have been updated since the last trigger. If the query doesn’t contain any type of aggregations, it behaves like append mode.
Complete
Complete mode writes all the rows of the result table to the sink on every trigger.
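As a quick sketch, the output mode is set on the writer. The word-count style aggregation below (reusing the socketDF defined earlier) is illustrative and needs complete (or update) mode, since append cannot rewrite rows that were already emitted:
socketDF.groupBy("value").count()
  .writeStream
  .outputMode("complete")
  .format("console")
  .start()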
Triggers
Triggers define how often Structured Streaming should check the source for new data and update the result.
· Fixed interval: The query is executed in micro-batch mode at the specified interval. However, if the previous micro-batch hasn't finished within the interval, the streaming engine starts the next batch only after the current one completes.
· One-time micro-batch: The query is executed only once and then shuts down by itself. It is useful when data needs to be processed only after a long interval, for example scheduling the application to run just twice a day to avoid unnecessary cost.
· Continuous processing: Aims for latency as low as ~1 ms. It is still experimental and not recommended in production.
· Default: If no trigger is explicitly specified, the streaming engine checks for new data as soon as the previous micro-batch completes.
df.writeStream
.format("console")
.trigger(Trigger.ProcessingTime("2 seconds"))
.start()
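A one-time micro-batch run looks almost identical; only the trigger changes (the checkpoint and output paths are illustrative):
import org.apache.spark.sql.streaming.Trigger

df.writeStream
  .format("parquet")
  .option("checkpointLocation", "checkpointdir")
  .option("path", "destinationdir")
  .trigger(Trigger.Once())
  .start()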
In the next chapter of this series, we will dive into fault-tolerance semantics and how it is achieved in Structured Streaming.
If you have any queries, feel free to contact me on Twitter, and follow Data Kare Solutions for more big data articles.
Explain Like I’m 5: The React Virtual DOM | The Fix
The virtual DOM, on the other hand, is a lightweight representation of the DOM. Think of the virtual DOM as a current blueprint of your house on paper. You can see all aspects of your house through this blueprint, but drawing a new room on the blueprint alone won’t magically fix the real room. In the same way, the virtual DOM is a great representation of the DOM, but it cannot directly change what you see on the screen. So how is this done?
The house analogy of what happens with the virtual DOM is like so:
You want to renovate a room in your house.
You have access to the blueprint of the house.
You then make a new copy of the blueprint and make revisions on this new version. The new blueprint now has the new room representation.
Now you compare the original blueprint and the new blueprint and see what sort of construction needs to be done.
You then go into the real house and carry out those changes relevant to that room alone without having to rebuild the entire house.
Since you no longer have any use for the old blueprint, you can throw that away. Now your most recent blueprint is your only blueprint, which represents the most updated version of your house after the fix.
In this example, the original blueprint of the house is the virtual DOM. You needing to make a fix in the house is akin to when an element needs updating (as triggered by a change in state). You making a revised blueprint based on the changes needed is akin to the virtual DOM updating completely. You comparing the original blueprint with the updated blueprint is akin to React comparing the pre-updated virtual DOM with the updated virtual DOM. In this process, React determines exactly what aspect of the virtual DOM has changed. This is referred to as diffing. Finally, you going into the house to make the exact changes needed is comparable to React updating only the objects that need changing on the actual DOM. | https://medium.com/better-programming/explain-like-im-5-react-virtual-dom-297ed1b8f7b | [] | 2020-11-30 15:30:46.015000+00:00 | ['JavaScript', 'Software Development', 'React', 'Computer Science', 'Programming'] |
Simple Regression Example | Simple Regression Example
K-Fold Validation, Normalization, MAE
Overview
Basics:
Goal
Input & Output
Encoding & Decoding
Architecture
Regularization
Validation
Code:
Import
Model Definition
K-Fold Validation
Validation
Final Model
Goal
We will try to predict the median house price given 13 different parameters. The parameters are attributes such as crime rate, property tax rate, square footage etc.
Input & Output
You can learn more about the data set here and here.
The input variables in order are:
1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxides concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centres
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per $10,000
11. PTRATIO: pupil-teacher ratio by town
12. B: 1000(Bk - 0.63)² where Bk is the proportion of blacks by town
13. LSTAT: % lower status of the population
14. MEDV: Median value of owner-occupied homes in $1000's
If number 12 is what I think it is, that's really fucked up. Anyway. The output should be the median price in the 1000's. Whereas we used activation functions on the output layer in classification, we do no such thing for regression, since the output should be an unbounded real-valued number.
Encoding & Decoding
In this stage, we don’t necessarily encode so much as normalize the data. The reason we do this is because each attribute has different ranges, which makes learning more difficult.
So we normalize each attribute or feature by making its mean 0 and a standard deviation 1.
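Here is a minimal sketch of that step, using the Boston Housing data that ships with Keras; note that the test data is normalized with statistics computed from the training data only, so no information leaks from the test set.
from tensorflow.keras.datasets import boston_housing

(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()

# Compute mean and standard deviation on the training set only
mean = train_data.mean(axis=0)
std = train_data.std(axis=0)

# Feature-wise normalization: mean 0, standard deviation 1
train_data = (train_data - mean) / std
test_data = (test_data - mean) / std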
Architecture
We will use a relatively small network of 2 hidden layers, each with 64 units. The considerations for this are the size of the data set and the normalization step.
Having a small data set makes overfitting more likely, and a small network is one way to deal with overfitting. And normalizing the data set makes it a bit easier to learn.
These two considerations are the necessary and sufficient conditions for this choice of architecture.
For the loss function, we will use the MSE or the Mean Squared Error.
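A sketch of that architecture in Keras might look like the following, using the normalized train_data from above; the rmsprop optimizer is an assumption on my part, and MAE is tracked as the metric we will monitor.
from tensorflow.keras import models, layers

def build_model():
    # Two hidden layers of 64 units, and a single linear output unit for the price
    model = models.Sequential()
    model.add(layers.Dense(64, activation='relu', input_shape=(train_data.shape[1],)))
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(1))  # no activation: the output is an unbounded real value
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model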
Regularization
Our primary method of regularization is early stopping again. But this time, we will use a metric of MAE, the Mean Absolute Error, which is the average absolute difference between the predictions and the actual prices.
Validation
Generally, with a bigger data set, we partition it into training and validation sets. But what would happen here is that our validation set would end up being too small. And smaller sets often give validation scores with high variance, which isn't great for validation.
So we use a k-fold validation method. Here we partition the data into k groups, where k is often 3 to 5. We train on k-1 groups and evaluate with the remaining group, then repeat this process k times, changing the evaluation group every time.
During each iteration, we get a validation score. After all the iterations, we average the scores to get a final validation score, like a GPS measurement error reduction. | https://medium.com/an-idea/simple-regression-example-5889645d7293 | ['Jake Batsuuri'] | 2020-09-28 02:45:39.021000+00:00 | ['Data Science', 'Deep Learning', 'Keras', 'Python', 'Machine Learning'] |
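Putting it together, a minimal k-fold loop (with illustrative values for k, epochs, and batch size, and reusing build_model from above) could look like this:
import numpy as np

k = 4
num_val_samples = len(train_data) // k
all_mae_scores = []

for i in range(k):
    # Slice out the validation fold for this iteration
    val_data = train_data[i * num_val_samples:(i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples:(i + 1) * num_val_samples]

    # Everything else becomes the training data for this iteration
    partial_train_data = np.concatenate(
        [train_data[:i * num_val_samples], train_data[(i + 1) * num_val_samples:]], axis=0)
    partial_train_targets = np.concatenate(
        [train_targets[:i * num_val_samples], train_targets[(i + 1) * num_val_samples:]], axis=0)

    model = build_model()
    model.fit(partial_train_data, partial_train_targets,
              epochs=100, batch_size=16, verbose=0)
    _, val_mae = model.evaluate(val_data, val_targets, verbose=0)
    all_mae_scores.append(val_mae)

print(np.mean(all_mae_scores))  # final validation score: average MAE across the k folds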
Good News About Black Holes! | Good News About Black Holes!
You will still become infinitely long and infinitely thin, but the information that is you will (probably) be retained
Photo: Ute Kraus via Wikimedia
Like most sensible people who’ve got their priorities straight, I worry about black holes a lot! I worry about the fact that if I were to jump into one, I would be moving slower through time the faster I moved through space. This is not only crass, antisocial behavior, but it’s one of those things that will probably turn out to be bad for your health as well. I worry about the fact that someone watching me would never see me pass the event horizon. They would just see me slow all the way down to a complete stop and then disappear in a red glow, which would be immensely disconcerting for them. I worry, particularly, about “spaghettification,” which is exactly what it sounds like except not at all because what it sounds like is a nice bowl of noodles and what it is, it turns out, is the stretching of your body into infinitely thin strands. The astrophysicist Charles Liu describes this as being rather like “toothpaste being extruded out of a tube,” which is all very well and good if you’re toothpaste!
And the thing I really worry about, the thing that keeps me up at night, is the idea that if I fall into a black hole, no trace of me will ever come back out again, even when the black hole dies.
This is a silly thing to worry about, it turns out. Thanks to “entanglement entropy” and something called the Page curve, physicists are beginning to think that if I were to fall into a black hole, the information that is me would, in fact, be retained in the universe, just in a (somewhat worryingly) garbled state.
Part of this is down to wormholes, though the physicists who are anxiously trying to figure out where exactly I would go if I were to fall into a black hole are just as anxious to assure everybody that when they say wormholes, they don’t actually mean wormholes. That would be crazy! What’s really going on, they say, is something called nonlocality—or, to put it in a more existentially disturbing way (as this article on the new findings does), “a failure of the concept of ‘place.’”
This has of course put my mind at ease considerably w/r/t the question of where the information that is me would end up if I were to fall into a black hole. (Answer: Not nowhere! But also not “somewhere” in any way that makes sense if you’re naively accustomed to thinking of the universe as something that is at its core, geometric.)
But I’d be lying if I said that I am not, now, worried about some new things. | https://medium.com/sharks-and-spades/good-news-about-black-holes-6ea98a8e8b98 | ['Jack Shepherd'] | 2020-11-19 16:49:05.122000+00:00 | ['Humor', 'Philosophy', 'Spirituality', 'Science', 'Space'] |
Want To Hire Best PHP Developers? Follow These Tips | The hunt for PHP developers in the virtual world always heeded high on the priority list. Although being one of the oldest languages, PHP still has the potential to provide a high-notch result in the IT sector. Almost every project seems to be dependent on PHP development. After all, this is some real business we are talking about; making mistakes is not acceptable. Choosing one from the ocean can be insanely tricky; therefore interviewing a lot of candidates to hire the best among them for your PHP development company is the wise decision to make.
Now you must be wondering who these PHP professionals are and why there is so much hype around them. They are the ones who bridge your business and your consumers, fulfilling clients' requirements along the way. A smart and strategic developer will not only excel at coding but will also take care of growing your online business. From brand-new businesses to experienced ones, every business owner relies on developers, so precautions should be taken while evaluating one. As I said before, picking someone who can make your business grow globally in the virtual world is an imperative procedure.
While browsing through lists of PHP developers you'll inevitably come across numerous programmers. Some of their profiles might convey what sort of developer they are, while others may not, since not everyone is good at expressing themselves. In such cases, individual interviews give you a far better basis for selecting the best one for your enterprise.
As an entrepreneur, it is a lot to take in. Working on every area simultaneously can seem daunting, and as a result you may end up neglecting a few essential aspects (the technical ones). Don't frown; we have noted down these tips to help you pick the best PHP developer from the virtual realm. Let's dive in!
1. Don’t roll with obvious questions:
The task of hiring a PHP developer has never been easy, although equipping a long list of questions will seize your worry. Because the right collection of questions will help you to discover the best one, your research for the questions will be a process in your mind. However, avoid these obvious questions about PHP:
PHP’s release date
Who edited the parser that developed PHP
Full-form of PHP
Who wrote the first script of PHP
You might feel an obligation to ask those questions, but it's not necessary, because they only reveal a candidate's memorization skills, not practical experience. These sorts of queries will not land you a proficient PHP developer.
To hire the best programmer or expert, stick with the practical use and workings of the PHP language. Gauge the candidate's knowledge of how PHP is currently evolving. Study the recent progression of the language and prepare relevant questions for the applicant.
Their awareness of updates and new technologies over time gives you a chance to assess how well they absorb new information in the field. Present a critical scenario or a problem that developers often face in real-world programming, instead of asking a bunch of questions that can be answered with an online search.
2. Apprehend earlier projects:
The best way to judge someone's performance is by evaluating their earlier projects. Whether you're seeking an individual PHP developer or a reputable PHP development company, you should be aware of their previous projects, which will give you an impression of how they operate. It's necessary to understand a candidate's methods before you test them with immediate programming in PHP, so first go through their previous projects.
Analyze the projects and the remarks of previous clients on them. In this way, you can form a picture of a potential team member for your company. Contact the technical staff and project managers who have observed their work so far; their feedback will help you summarize the candidate's techniques and their way of addressing technical problems in PHP or any other language.
Inquire further about their portfolio: how much time they took to finish each project, what kinds of errors they encountered while programming and how they tackled them, and what additional techniques and tools they applied. Their explanations will show their professional expertise, and you can gauge the hours spent on a project, their skill in handling technical glitches, and their familiarity with other tools.
3. Communication approach:
To establish a solid relationship you must have firm ground for communication. The professional you are planning to hire for your PHP development company must be comfortable with the various ways of corresponding with colleagues, clients, and senior management. Whether it's audio, video, Skype, phone, email, or messages, the way someone communicates professionally says a lot about their personality.
Connecting via Skype is more efficient than lengthy emails or message threads full of misunderstandings. You can't expect a great understanding of the work from a person over text alone, can you? That's why it's vital to hire someone who can readily communicate through phone calls or Skype video calls with different consumers. Apart from the programming language, the shortlisted applicant must also be fluent in the universal language, English.
The appointed developer not only has to take guidance but also has to communicate with clients to help them understand a few fundamentals of the project. That communication has to be professional; eventually, all of this influences the reputation of your organization.
4. Field Knowledge:
If you’ve previously questioned them about the pioneer projects, you’ve surely figured out the capability to dodge the fundamental errors of programming. The PHP developers should be skillful in resolving the innate flaws of the program, apart from coding or designing a plan. This skill showcases its internal knowledge of the field.
Every second in the webspace the technology keeps on evolving and the only way to be competitive with the leading enterprise; is by keeping your employees up-to-date with the emerging technologies. Hire a PHP expert who possesses information about all the latest trends and has the ability to learn new skills all the way. Once you have the response to this, then test their understanding about how PHP avails different clients and companies.
This is critical to ensure that all PHP developers have the skills to deliver a project with proficiency and precision, so after examining the knowledge of technical aspects go for the fundamentals errors of the tech, it’ll straightway affirm the theoretical or practical expertise of the candidate. Be vocal about Parse Error, Fatal Error, Warning Error, or Notice Error in detail, question them why these sorts of errors occur and how they’ll resolve it.
5. Attitude toward profession:
A person’s attitude towards his/her profession determines the sincerity and respect for work. Similarly, a developers’ perspective impacts the project outcome, whether it’s pessimistic or optimistic. Where skills are the deciding factor while hiring, but in the end, attitude is a deal-maker or breaker. And to assess their strategy towards the critical situation puts them in a difficult space in regards to technology and analyzes their outlook on the circumstances, especially how they react to it. You might not find the right or anticipated answers but you’ll know their mind-set towards such a situation.
It’s a well-known phenomenon that no matter how skillful PHP developers are, if he/she lacks in flexible abilities such as interpersonal skills, possessing an upbeat character, or having a learning attitude; it gets considerably finicky to work synchronically. So, it’s essential to hire a person who has an adaptive and flexible attitude for work, towards people, and their surroundings without creating a fuss about everything.
Conclusion:
Evaluating their skills, understanding, awareness, attitude, and all the technical perspectives is important and necessary. Finding someone worthy for your business, who can expand your web development company to another level, is a complex duty but a vital one. The right employee will take you to milestones you've only imagined.
Recruitment of PHP programmers is crucial when you have to assess them on a technical basis; every one of them excels in their own way, but finding the best among them is your test. Define standards against which to evaluate a PHP developer while you interview them; this way you can compare each candidate and see whether they fit those parameters or not.
Finding PHP developers might be a tricky thing, because the most challenging responsibility is to seize one efficient PHP developer in a million, but did we mention it is doable? All you have to do is ask as many relevant questions as you need to resolve every apprehension you have concerning the project, the programming, their personality, innovative technologies, and all the other quintessential details relevant to the job. To choose the meritorious one you have to follow a particular protocol, including these few tips, and bam, you've got an excellent PHP developer on board with you.
What’s Important Is Feeling | Story me short, summer 2k15. Here are six of the best new short story collections — silent mixtapes for those pockets of downtime between eating, beaching and sleeping.
1. What’s Important Is Feeling by Adam Wilson
No one knows who slept with her first. Besides, sleep isn’t the right word. What we did: pressed lips to closed lips, tried to slip in some tongue; buried her beneath us on carpeted floors and futon mattresses; felt her dry skin against our sweat-wet hands; said, “Don’t cry”; wiped tears with our T-shirts; kept on because she said, “Don’t stop.”
So raw, so good. A boy works at a suburban diner, falls in love with a girl who plays with her pancakes, and invites her into his (embarrassingly bad) band. He grows weed with a friend — “a weakling of a plant that turned out to be male and unharvestable” — and talks about girls. In another story, a homeless man breaks into a college apartment and begins to cry. These stories are vulgar, young, and tinted with Instagram’s Nashville filter. Most are told in first person, by narrators under thirty. Summer reading, definitely. Porches, dead time, and soft thunder. It’s impossible for me to describe just how good these are. I was up all night.
2. Can’t and Won’t by Lydia Davis
When I fell asleep, I dreamed about the guillotine; the strange thing was that my little niece, who sleeps downstairs, also dreamed about a guillotine, though I hadn’t said anything to her about it. I wonder if thoughts are fluid, and flow downward, from one person to another, within the same house.
A reminder that it doesn’t take more than a sentence to make a story. Little drops of water in a bucket. Most of these stories are only a paragraph long, and some just a few sentences. Tiny fragments, a few of them poems. Together, they’re a catalog of thoughts — some are dreams (marked “dream” in tiny letters at the bottom). Lydia writes about the feeling of slogging through a bad novel, getting catfished, an eight-sentence story about a lazy bodyguard. The book also includes a few anecdotes based on letters by Flaubert, sprinkled in just for fun. This book is like a pointillist painting. The message is that everything is important.
3. Don’t Kiss Me by Lindsay Hunter
Back at his house he asks if I want to play Princess Leia and I am touched, I know he’d rather have her for a girlfriend than me, It’d be an honor, I tell him, and he hands me my lightsaber and then knocks it out of my hand with his. You’re dead cinnamon-bun dumb-hair, he says, looking up at me through his bangs, my hand is throbbing from where his lightsaber hit, again I die for him, I shudder and quake and cry out and fall at his feet and die. Now I’m going to maybe spit in your hair, you don’t know, keep your fat eyes closed, he says. Okay, I whisper.
WORD. MAGICK. Ugh, this book is very funny and totally unique. Strong characters say weird shit in those fat margin-to-margin paragraphs that make you want to stop breathing. These short, punchy stories experiment with form and voice, and they’re exhilarating, bordering on poetry. You’ll find all kinds of wacky here: a story about an apocalypse at P.F. Chang’s, an all-caps soliloquy about a candle shoppe, and a game of “Lips” where two girls touch lips until one of them screams. You will probably never be bored again in your life after these.
4. The Miniature Wife and other stories by Manuel Gonzales
To be honest with you, Jim, my face is unevenly shaved because my three-inch wife has climbed up the porcelain sink, hoisted herself up to the medicine cabinet, opened the heavy mirror door, and has dulled all of my razor blades.
What would happen if you were stuck on a hijacked plane for twenty years? What if loud, annoying noises gave you visible cuts and bruises? Check into these stories-slash-thought-experiments for some totally serious (but also insane) answers. Manuel’s stories are trippy: A scientist who works in “miniaturization” manages to unwittingly shrink his wife — she is no bigger than a coffee mug — and builds a dollhouse for her. A composer is paralyzed from the mouth down — he can’t speak with his mouth, so he teaches himself to speak through his ears. In each one, Manuel takes a tiny, ridiculous premise, and describes it with so much detail it becomes plausible. Like a lucid dream.
5. Making Nice by Matt Sumell
Thing is she didn’t think that pots and pans should go in the dishwasher, so I pointed out that there’s a setting on the dishwasher for pots and pans, just look, it’s right there, open your fuckin’ eyeballs. Well she didn’t like that very much and started in with this business about me being a loser and headed nowhere […] Either way it made me upset, and I slammed the refrigerator door so hard the milk exploded, then I turned around and told her to shut it or I’d punch her mustache off her face and watch it fly across the room like a hairy bug.
Say what you want about angsty male narrators who think in long curmudgeonly sentences, but this one is very funny. These stories are offensive, angry, hilarious and sad. Inhibitions are killed and left to rot. All of these stories are told by the same character, a thirty-year-old named Alby whose mother recently died of cancer. The callous jokes only make the poignant moments — Alby alone with his mother, in the hospital at home, emptying her piss bag or doing her nails — more heartfelt. Think: vulgar family jokes that are funny and sad at the same time.
6. We Live in Water by Jess Walter
Wayne eases the door closed, steps down the hall to the boys’ room. Little and Middle, nine and eleven, splayed on twin beds like they were dropped fifty feet. The Little one could be it out of temperament alone. He’s a hoarder, a brooder. Dark eyes like his mother. Looks up from his Legos like you interrupted church. Kid didn’t say a word until he was four and then it was a full sentence: “I want more applesauce now.”
These are the most tender stories you will read in your life. Stories about people who unravel and cry at the past. Most of these stories are told in close third person, and the protagonists are drunkards/cheats/divorcées with broken families and good intentions. Many weave their way back and forth between past and present, giving you clues as they go. The kind of book that pairs well with slow Billy Joel, or Dylan. Gotta be honest and say this is my favorite on this little list. | https://humanparts.medium.com/what-s-important-is-feeling-8dba150884e5 | ['Harris Sockel'] | 2015-06-07 02:43:24.200000+00:00 | ['Books', 'Short Story', 'Fiction'] |
Robert E. Sherwood — American Playwright & Screenwriter | A literary sketch of Sherwood seen through the eyes of John Mason Brown’s biography — The Worlds of Robert E. Sherwood
Like many another, Robert E. Sherwood came back from the First World War a changed man, as John Mason Brown writes in his 1965 biography:
“ He was a troubled and uprooted young man. He was not only convalescing physically, he was convalescing from a war. Like hundreds of thousands of other young men, he was a different man, come back to a different country to start a different life. With difficulty he was adjusting himself to the lower-keyed realities of peace. For the first time he was confronted with the full meaning, to him and his family, of his father’s failure and retirement. Skene Wood was gone. So was the Lexington Avenue in which the Sherwoods had lived for thirteen years. So was the routine of going off to school or college which before the war had been part of his life. The question Sherwood faced, once he had recovered from his immediate past, was his future. He wanted to write and he needed a job.”
Having suffered from a gas attack in 1918, and fallen onto wooden spikes in a German trench, Sherwood, on his return to the US, made his way to New York for a medical check-up and the hoped for writing job. The medical went well, with Dr James Alexander (a noted lung specialist) declaring his lungs much improved and that by the autumn he should be “…nearly normal…” which no doubt brought a wry smile to Sherwood’s face.
When it came to a writing job he pretty much fell into it, but with results far less painful than falling into a German trench.
After visiting the lung specialist Robert Sherwood answered an advertisement for a position as a writer at Vanity Fair. He was interviewed by Robert Benchley, who was impressed by Sherwood, as John Mason Brown writes:
“ On May 21 Robert Benchley, two days after he had taken over as managing editor of Vanity Fair, was writing in his diary about ‘meeting Bob Sherwood who presented his six feet five or ten in candidacy for a job he may get as Miss Bristed is leaving.’ The postscript, as expected, is that he got the job, and a week later was working at the office on a three-month trial basis at $25 a week.” That $25 would equate to around $300 in today’s values.
It was better pay than the army and a job that opened many literary doors for Sherwood during the eight months he worked at the magazine, and as Mason Brown writes in his compelling and beautifully written biography:
“ Vanity Fair was the Gideon of the sophisticated. Frank Crowninshield [editor from 1914–1935] was its boutonniere of an editor; Condé Nast [ Condé Montrose Nast], owner of the far more profitable but equally glossy Vogue, its ducal publisher; and Sherwood’s two associates, in whose office he was given a desk, were Robert Benchley and Dorothy Parker with whom he at once formed an inseparable trio.”
These were the founding members of the Algonquin Round Table.
By 1954, a little over a year before his death, Robert E. Sherwood had become something of a national treasure, with some of the most popular American plays, and screenplays, ever written under his belt, with a keen reputation rightly earned as an advisor to and speechwriter for President Roosevelt during World War II.
In 1948 Sherwood’s best selling biography, Roosevelt and Hopkins, was published to great acclaim, and gives an intimate and personal account of a crucial time in American history.
Abe Books
Sherwood was often criticised for his honesty, especially when he made a speech in April 1954 at his old school, Milton Academy in Massachusetts. He was consumed with weariness.
John Mason Brown:
“ Sherwood’s day had been exhausting. He had spoken that morning at the Naval War College in Newport on the political aspects of strategy, and speaking had become for him an ordeal. He had guessed he would be nervous at Milton, and he was. To take his mind off his speech, he had asked his old friend, tutor, and favorite master, Albert W. Hunt, to dine with him at the Ritz-Carlton in Boston. Though they had talked pleasantly of old times, his nervousness grew. It was even greater when they had driven to Milton. There, to subdue it, he took a giant’s slug of gin, uncontaminated by ice, ten minutes before he walked onto the platform.
“ It was not only conquered pain or the fatigue of a tiring day which pinched his face and accentuated the puffs under his eyes as he towered above the podium. It was the accumulated weariness of days and nights and years of living and giving and working as if his superb energies were endless. It was the secret fear that haunts the creator, the fear that his gift had left him because he had not had a successful new play produced on Broadway in fourteen years.
“ It was the sorrow and the emptiness of the let down after the death of Roosevelt and after the dizzying tensions and excitements of the war years spent close to the White House and to great events. It was the heartbreak of seeing the heroic efforts and sacrifices of a Second World War end in disenchantment, with Hitler and the other marchers stopped but with victory not even bringing a true peace. It was the deep alarm with which he recognised the threats of nuclear power and his realization that this time he spoke not only in a different world but in a different age.
“ Sherwood had tried to tell the unflinching truth as he saw and felt it in 1940 when Paris had fallen. He tried to do the same thing now as, with his long, nervous fingers spilling communication, he opened his manuscript and began to read, having put on his reading glasses. These turned the audience for him into a blur except for the attentive students in the front rows.”
Forgive me for quoting from the prologue of Mason’s biography at length, but it’s an extraordinary piece of writing that places the reader directly at Sherwood’s shoulder, and throughout it there are chilling echoes of Sherwood’s work, not least his 1946 screenplay for William Wyler’s, The Best Years of Our Lives (based on Mackinlay Kantor’s novella, Glory For Us), which is full of the weariness and the heroic sacrifices that many back home are not — at least initially — even prepared to acknowledge. And it’s Sherwood’s committed, honest writing that turns Kantor’s slim verse volume into a many faceted film that is perhaps the pinnacle of Sherwood’s writing career. It is one of those Hollywood movies that gets it right, giving the viewer a genuine insight into the ‘deep alarm’ of early post-war America.
The ‘deep alarm’ is a theme that runs through all of Sherwood’s work, as it does through the novels of John Steinbeck: it is the alarm of a generation.
And by the time of his speech at his old school Sherwood came out as the committed pacifist his 1946 screenplay suggested, giving the assembled students, parents and many veterans, in the school's assembly hall, a stern lecture on the dangers of the unrolling nuclear age, not least the cobalt bomb, rounding off with an emotional roll call of three of the school's students (one his nephew) who had died in two world wars, and the more recent Korean War. He finished by saying, in a slight reversal of his message, that war was a dirty business and that those involved in war must fight in the dirtiest way possible, "…never forgetting that world disarmament is the ultimate goal."
Sherwood was a better dramatist than polemicist, and apologized when the letters of complaint started coming in.
He was pretty much at the end of his tether.
John Mason Brown was born in Louisville, Kentucky, on the 3rd of July 1900, graduating from Harvard in 1923. As a journalist he worked for the New York Evening Post for twelve years. During WWII Brown served as a lieutenant in the U.S. Navy, with his memoir, To All Hands, vividly describing his time aboard USS Ancon during the invasion of Sicily.
John Mason Brown. Image: wordpress.com
After his war service Brown’s column, ‘Seeing Things’, appeared in the weekly magazine, The Saturday Review, a wide ranging column he continued to write until his death in New York in 1969.
Brown’s biography of Sherwood is his lasting literary legacy, not only of a friend, but of a fine playwright, biographer and screenwriter. | https://stevenewmanwriter.medium.com/robert-e-sherwood-american-playwright-screenwriter-868b00c1447e | ['Steve Newman Writer'] | 2019-11-27 15:14:31.795000+00:00 | ['History', 'Screenwriter', 'Biography', 'Books', 'Playwrights'] |
How to Write Killer Haiku | Haiku is a deceptively simple art form. A single poem is just 17-syllables long. In the original Japanese, a haiku could be whispered in a single breath.
How hard could it be to write a great haiku?
Pretty damn hard.
The compact form forces you to be economical with your language. You cannot dilly-dally like you can in prose with extravagant descriptions. Haiku are about precision.
The strict limits of haiku force you to be clever, clear, and concise. When executed correctly, a haiku becomes a stunning orange butterfly in the summer twilight — both transient and indelible.
Anyone who believes it is easy to write a haiku has probably never written a good one — they certainly have never written a great one.
What do I know about haiku?
I’ve spent the better part of the last three years writing haiku almost every day. I have a spreadsheet of thousands of haiku. Some of them are good; many of them are awful. I’m always working on writing better haiku.
I have published a book of over 400 haiku, and I am in the process of publishing a second book of haiku. I publish a haiku-heavy zine each month. I read dozens of haiku every day.
It is not just my favorite form of poetry — the haiku is also my favorite form of storytelling.
I’m a haiku maniac. I believe you can write a great haiku about anything. It doesn’t have to be about hummingbirds or waterfalls. I’ve written about pirates, serial killers, aliens, time travel, and a bunch of other weird stuff. But haiku can also be hummingbirds and waterfalls if that’s what moves you.
Here are five tips and techniques I’ve picked up for writing better haiku.
Write for the Reader
Of all the different types of literature, poetry tends to be the most masturbatory. Poets become obsessed with writing their true feelings. But poems that are birthed from the unfiltered ego of the poet are opaque and dull.
The best haiku are written for a reader. The true art of haiku is finding something universal in a personal experience or something personal in a universal experience.
If you want readers to connect with your work, you can’t make it all about you. That doesn’t mean you should write for everyone — you just need to write for someone.
Learning how to step outside of your own perception makes you a better writer and a more interesting poet.
Create Rules for Yourself
We often think that creativity is all about freedom. However, we are at our most creative when rules or limits constrain us. Limits force us to stretch in uncomfortable ways. With English language haiku, many poets ignore the 17-syllable construction because it does not accurately reflect the way haiku work in Japanese. Instead, these poets focus on the concept of a poem spoken in a single breath.
I choose to use the 17-syllable limit because I have no idea what a poem spoken in a single breath means. It’s too open-ended for me. By limiting myself to 17-syllables in the 5–7–5 pattern, I am forced to get creative with my word choice, my story structure, and my message.
You may find different limits work better for you. It doesn’t matter what rules you set for yourself, that is the beauty of poetry. All that matters is that you have some rules.
In a poem this compact, a single word choice, or even a single punctuation mark can change everything. Take this poem:
The moment we met
I was afraid you just felt
Sexual tension
I originally wrote it this way:
The moment we met
I was afraid — you just felt
Sexual tension
The em dash changes the entire tone of the poem. I decided to remove the em dash in the final version because in this instance I preferred the ambiguity. It is the limits of the haiku that gives poets room to be creative. Every syllable matters. Every punctuation mark matters. Everything matters in a way it never can in prose.
Write About Moments
Haiku are for small stories. The most profound haiku are about moments. They are about the beating of the wings of the falcon, the cloud drifting in front of the moon, or the primal fear of seeing a strange shadow in a house you thought was empty.
If you can break a story or idea down into a single stirring moment, you will have created something beautiful.
When a haiku fails, it is usually because it is trying to do too much.
Leave Something Unsaid
The best writing advice I've ever received was from my high school drama teacher. He was talking about improv, but it applies perfectly to haiku. He said, "Always leave them wanting more, instead of wishing they had less."
Always leave them wanting more, instead of wishing they had less.
Allow the reader to make some inferences to how the moment ends. Give them the freedom to fill in the details around the event. Don’t waste your precious space trying to say everything. Instead, focus on your artistry and trust your reader to write the rest of the story.
You will create a stronger bond with your reader when you trust them with everything you left unsaid.
Rewrite and Let Go
We all want our work to be perfect. You can drive yourself to despair if you’re not careful. Let your haiku sit for a few days after your first draft. When you revisit them, you will likely find you want to make a few tweaks.
Go ahead and rewrite your haiku. Then let them go into the world. What’s the point of being a writer if you never share your gift with anyone?
Even Emily Dickinson showed some of her poems to a few close friends.
Don’t agonize over a few syllables for days or weeks. Write your haiku and release them into the world to live lives of their own.
Once you’ve released your haiku, it’s time to begin crafting some more.
The world is waiting for you. We need your voice — especially if it’s a little weird. | https://medium.com/weirdo-poetry/how-to-write-killer-haiku-fd62c83a0a68 | ['Jason Mcbride'] | 2020-08-23 18:01:47.242000+00:00 | ['Poetry', 'How To', 'Haiku', 'Fiction', 'Writing'] |
How to Build a Successful Design Team — Part 1: Effective Leadership | One of the best pieces of advice I’ve received and which I’ve carried with me throughout my entire career, is to hire the best people you can; hire people that are smarter than you. This has become a cornerstone of my team-building process: look for great people that are very talented, and never let yourself feel threatened by them.
The key principles of effective design leadership
Once you have those exceptional people in place, working for you and with you, your work as a leader is far from done. Below are some of the lessons I’ve learned over the years (sometimes the hard way), and how they can help you be an effective design leader, even during these challenging times.
Lead by example
My top rule is one that sounds simpler than it is. It’s easy to talk the talk, but you also have to walk the walk as much as possible. This applies to your own actions as a designer and covers everything from aesthetics to ethics. I don’t always live up to this (I’m human), but I continually try.
Establish and build trust
Have trust in people, and invest in trust, so your team trusts you as well. It makes leading people much easier. They may not agree with you, but they know that you’re experienced and will be more open to listening to what you’re saying. It’s an incredibly important part of being a design leader.
Explore broadly and discuss openly
I’ve always had a need to feel very good about the ideas I’m putting forward, so I explore many different directions. A lot of the time I come back to the first or second thought, but in order to feel I’m doing the right thing, I have to look at 20 other things and discuss them openly. I’ve always fostered that in my teams.
Provide clear direction to people, but give them autonomy, too
At our studio, Ammunition, we have really benefited from having independent people who work well together, take direction, but also feel they have ownership and responsibility. It’s a very tricky but important balance. Give people direction and let them figure out how to get there. Give them enough rope, so they can learn and grow. A lot of creative leaders find that difficult, but I always marvel at the fact that we have about 40 people that sometimes behave like 80 people. This is because they work together very well and feel responsible for creating something great.
Protect the creative environment
In most corporations with internal design teams there is a lot of chatter and noise, with many people trying to influence your decisions or make demands. It ends up being extremely hard to be creative and take creative risks in these circumstances. You need to protect creative time and space for designers as much as possible, so they can explore and try things and maybe even fail at some of them. If you’re trying to push something new, chances are that the engineering and operational teams of the organization you’re working with will not like it because it means more work, time, and expense for them. But it’s really important designers feel encouraged and confident in taking risks. It’s absolutely critical you give them that protected environment and allow it to flourish, and only you, as the design leader, can ensure that happens.
Have high standards
Make sure your team knows you have high standards and understands your expectations. You have to find the line between high standards and unreasonable expectations, but always be very clear that the work has to be good, right, and done well. I like to be challenged, and I like to challenge people. I’ve been told, time and time again, by leaders of the organizations we work with that “we can do better.” Over time I began to learn that this wasn’t necessarily a criticism of the work we were presenting. Remind people that they, and you, can do better. Don’t rest on your laurels; always be better.
The importance of being design driven
Ensuring that everyone in your organization is aligned with the product strategy and overall goal, not just your designers, is also crucial. This is challenging, and requires leadership and contextualization. Not everyone immediately trusts that the design professionals they work with are concerned about their needs as well, so you constantly have to make sure you listen, understand, and contextualize the work you’re doing.
People also need to understand that being design driven doesn’t mean designers are running the show. It’s also not just about hiring designers and having a design process in place. It’s a soup-to-nuts approach of really thinking about what you’re doing and how it impacts the design of what you create.
When you’re design driven, everyone along the chain of events producing something contributes to the design that’s going out. It takes incredible leadership, constant vigilance, and an understanding of what good design actually means to build across an enormous chain of events and become truly design driven. It’s ultimately why there are so few truly design-driven companies in the world because it’s not completely understood what you need to do, how to do it, and how difficult it is to get there.
Leading a team through COVID-19
Sticking to the above principles during the current pandemic poses a challenge, of course. For us, the most difficult part of the new working from home culture, which COVID-19 has forced upon us, is how to review work, brainstorm, and truly collaborate.
My management style follows a drive-by approach, where I’ll just sit down with someone at their desk and talk through what they’re working on. It’s very difficult to replace that kind of face-to-face interaction.
Reviewing prototypes of the three-dimensional objects we create is also a lot harder. We tend to use 3D printing to develop, refine, and communicate our designs. Instead of producing just one model, we had to produce seven sets for a design review recently. We had to ship them out, coordinate things to ensure everyone had them in time, and also have security protocols in place that we have to follow because we let our prototypes out into the world. It’s created an incredible overhead for our program managers just to get to the point, so we can sit down in front of a model with a small group of people and talk about it. We’re still figuring out how to do that better.
It’s also challenging to provide the level of support designers need to be able to think creatively and explore new angles. There are questions that are hard to save up for a daily 10am call. Sometimes people need help right now, and so you need to make sure they feel empowered at any time to just send you a text and ask if they can discuss something right away. So it’s a challenge to foster the idea of not waiting to get feedback but seeking it whenever they need it. We’re currently exploring how to very quickly do a deep dive and encourage risk taking that’s necessary for people to create something interesting.
The future of design leadership
The current situation, during COVID-19, is training us to be better communicators. The windows of communication and visibility are fewer and far between, which means you really have to know what’s going on and communicate what needs to be done. This can actually be a huge benefit. As a leader, you also have to understand what’s going on in a lot of channels concurrently, instead of following a very linear process from task to task.
As far as the future of design leadership is concerned, there’s a lot of emphasis on designers becoming business leaders. You have to build that into your practice but also maintain a balance. Learn how to be business critical, but at the same time hold onto the values that got you here — creativity, passion, empathy, and the inherent desire to progress. That last one doesn’t always jive well with a broader corporate community that’s tasked with the process of executing consistently. For me, as a designer and business leader, that balance and that understanding of my roots is very important. | https://medium.com/thinking-design/how-to-build-a-successful-design-team-part-1-effective-leadership-c660d9aa0da3 | ['Patrick Faller'] | 2020-07-11 17:35:52.286000+00:00 | ['Leadership', 'The Future Of Design', 'Design Leadership', 'UX', 'Design'] |
2 Applications of AI in E-commerce that are in Demand Today | What is AI ?
AI is the ability of a machine to perform cognitive tasks that the human mind performs, such as reasoning, correlating, and problem solving. AI algorithms consist predominantly of two types: machine learning and deep learning.
Machine learning algorithms are used to detect patterns in large quantities of data and are then used to make predictions and recommendations. Deep learning is a type of machine learning that can process a wider range of data sources than traditional machine learning, requires less data processing by humans and can produce more accurate results.
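As a rough illustration of that pattern-then-predict loop, here is a minimal Python sketch; the behavioural features, data values and model choice are hypothetical placeholders, not details from the article.

from sklearn.ensemble import RandomForestClassifier

# Hypothetical example: learn purchase patterns from past visitor behaviour,
# then predict whether a new visitor is likely to buy.
# Each row: [pages_viewed, minutes_on_site, items_in_cart]
X_train = [[3, 5, 0], [12, 20, 2], [1, 1, 0], [8, 15, 1]]
y_train = [0, 1, 0, 1]  # 1 = purchased, 0 = did not purchase

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print(model.predict([[10, 18, 2]]))  # predicted behaviour of a new visitor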
For a fun, visual introduction to AI, see this video by Jay Alammar here: | https://medium.com/unitx-ai-magazine/two-ways-in-which-ai-can-aid-inventory-management-and-supply-chain-logistics-3063380647b5 | ['Kiran Narayanan'] | 2020-08-07 15:01:01.490000+00:00 | ['Inventory Management', 'Artificial Intelligence', 'Digital Transformation', 'Digital', 'Supply Chain Management']
Reinforcement Learning: Super Mario, AlphaGo and beyond | Most of the literature we find on machine learning talks about two types of learning techniques — supervised and unsupervised. Supervised learning is where we have a labeled dataset. This means we already have data from which to develop models using algorithms such as Linear Regression, Logistic Regression, and others. With this model, we can make further predictions, such as what the cost of a house with a given set of features will be, given data on housing prices. Unsupervised learning, on the other hand, doesn’t have a labeled dataset, but still, we do have abundant data. The model we create in this setting just needs to derive a pattern amongst the data available. We do this with algorithms such as K Means Clustering, K Nearest Neighbors, etc. to solve problems like grouping a set of users according to their behavior in an online shopping portal. But what if we don’t have so much data? What if we are dealing with a dynamic environment and the model needs to gather data and learn in real time? Enter reinforcement learning. In this post, I’ll take a look at the basics of what reinforcement learning is, how it works and some of its practical applications.
In reinforcement learning, there is no supervisor to tell you if you did right or wrong. If you do well, you get a reward; if not, you don’t. If you do terribly, you might even get a negative reward. Reinforcement learning adds in another dimension — time. It can be thought of as being in between supervised and unsupervised learning. Whereas in supervised learning we have labeled data and in unsupervised learning we don’t, in reinforcement learning we have time-delayed labels, which we call rewards. RL has the concept of delayed rewards. So, the reward we just received may not be dependent on the last action we took. It is entirely possible that the reward came because of something we did 20 iterations ago. As you move through Super Mario, you’ll find instances where you hit a mystery box and keep moving forward, and the mushroom also moves and finds you. It is the series of actions that started with Mario hitting the mystery box that resulted in him getting stronger after a certain time delay. The choice we make now affects the set of choices we have in the future. If we choose a different set of actions, we will be in a completely different state, and the inputs to that state and where we can go from there differ. If Mario hits the mystery box but chooses not to move forward when the mushroom begins to move, he’ll miss the mushroom and he won’t get stronger. The agent is now in a different state than he would have been had he moved forward.
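To make the delayed-reward idea concrete, here is a minimal tabular Q-learning sketch in Python; the grid-world size, action count and hyperparameters are illustrative assumptions, not details from the article. The discount factor is what lets a reward earned now credit the actions taken many steps earlier.

import numpy as np

n_states, n_actions = 16, 4              # hypothetical small grid-world
Q = np.zeros((n_states, n_actions))      # value of each action in each state
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

def choose_action(state):
    # Epsilon-greedy: mostly exploit what we already know, sometimes explore.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    # Bootstrapping on the best next value propagates delayed rewards
    # backwards through the sequence of actions that led to them.
    best_next = np.max(Q[next_state])
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])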
You might not be able to totally recall the first time you ever played Mario, but just like any other game, you might have started with a clean slate, not knowing what to do. You see an environment in which you as Mario, the agent, have been placed that consists of bricks, coins, mystery boxes, pipes, sentient mushrooms called Goomba, and other elements. You begin taking actions in this environment by pressing a few keys before you realize that you can move Mario left and right with the arrow keys. Every action you take changes the state of Mario. You moved to the extreme left at the beginning but nothing happened so you started moving right. You tried jumping onto the mystery box after which you got a reward in the form of coins. Now, you learned that every time you see a mystery box, you can jump and earn coins. You continued moving right and then you collided with a Goomba after which you got a negative reward (also called a punishment) in the form of death. You could start all over again, but by now you’ve learned that you must not get too close to the Goomba; you should try something else. In other words, you have been “reinforced”. Next, you try to jump and go over the Goomba using the bricks but then you’d miss a reward from the mystery box. So you need to formulate a new policy, one that’ll give you the maximum benefit — gives you the reward and doesn’t get you killed. So you wait for the perfect moment to go under the bricks and jump over the Goomba. After many attempts, you take one such action that causes Mario to land on top of the Goomba, and it gets killed. And then you have an ‘Aha’ moment; you’ve learned how to kill the threat and now you can also get your reward. You jump and this time, it’s not a coin, it’s a mushroom. You again go over the bricks and eat the mushroom. You get an even bigger reward; Mario’s stronger now. This is the whole idea of reinforcement learning. It is a goal-oriented algorithm, which learns techniques to maximize the chances of attaining the goal over many iterations. Using trial and error, reinforcement learning learns much like how humans do.
Reinforcement learning broke onto the scene in March 2016 when DeepMind’s AlphaGo, trained using RL, defeated 18-time world champion Go player Lee Sedol 4–1. It turns out the game of Go was really hard to master for the machine, more so than games like Chess simply because there are just too many possible moves and too many possible states the game can be in.
Just like Mario, AlphaGo learned through trial and error, over many iterations. AlphaGo doesn’t know the best strategy, but it knows whether it won or lost. AlphaGo uses a tree search to check every possible move it can make and see which is better. On a 19×19 Go board, there are 361 possible moves. For each of these 361 moves, there are 360 possible second moves and so on. In all, there are about 4.67×10³⁸⁵ possible moves; that’s way too much. Even with its advanced hardware, AlphaGo cannot try every single move there is. So, it uses another kind of tree search called the Monte Carlo Tree Search. In this search, only those moves that are most promising are tried out. Each time AlphaGo finishes a game, it updates the record of how many games each move won. After multiple iterations, AlphaGo has a rough idea of which moves maximize its chance of winning.
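The bookkeeping behind that search can be sketched in a few lines of Python; this is only an illustrative toy (real AlphaGo also uses deep neural networks to evaluate positions and guide the search), and the exploration constant and data layout are assumptions.

import math

stats = {}  # move -> (wins, visits), updated after every finished playout

def select_move(legal_moves, total_visits):
    # Prefer moves with a high observed win rate, plus a bonus for moves
    # that have rarely been tried (the "most promising" selection step).
    def score(move):
        wins, visits = stats.get(move, (0, 0))
        if visits == 0:
            return float("inf")
        return wins / visits + 1.4 * math.sqrt(math.log(total_visits + 1) / visits)
    return max(legal_moves, key=score)

def record_result(moves_played, won):
    # After a game finishes, update how often each move led to a win.
    for move in moves_played:
        wins, visits = stats.get(move, (0, 0))
        stats[move] = (wins + (1 if won else 0), visits + 1)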
AlphaGo first trained itself by imitating historic games played between real players. After this, it started playing against itself and after many iterations, it learned the best moves to win a Go match. Before playing against Lee Sedol, AlphaGo played against and defeated professional Go player Fan Hui 5–0 in 2015. At that moment, people didn’t consider it a big deal as AlphaGo hadn’t reached world champion level. But what they didn’t realize was AlphaGo was learning from humans while beating them. So by the time AlphaGo played against Lee Sedol, it had surpassed world champion level. AlphaGo played 60 online matches against top players and world champions and it won all 60. AlphaGo retired in 2017 while DeepMind continues AI research in other areas.
It’s all fun and games, but where can RL actually be useful? What are some of its real-world applications? One of the largest fields of research, and one now beginning to show real promise, is robotics. Teaching a robot to act like a human has been a major research area and also part of several sci-fi movies. With reinforcement learning, robots can learn in a way similar to how humans do. Using this, industrial automation has been simplified. An example is Tesla’s factory, which uses more than 160 robots that do a large part of the work on cars to reduce the risk of defects.
RL can be used to reduce transit time for stocking and retrieving products in the warehouse for optimizing space utilization and warehouse operations. RL and optimization techniques can be utilized to assess the security of electric power systems and to enhance Microgrid performance. Adaptive learning methods are employed to develop control and protection schemes, which can effectively help to reduce transmission losses and CO2 emissions. Also, Google has used DeepMind’s RL technologies to significantly reduce the energy consumption in its own data centers.
AI researchers at Salesforce used deep RL for automatically generating summaries from text, based on content abstracted from the original text document. This demonstrated a text mining approach that companies can use to unlock unstructured text. RL is also being used to allow dialog systems (chatbots) to learn from user interactions and help them improve over time. Pit.AI used RL for evaluating trading strategies. RL has immense applications in the stock market. The Q-learning algorithm can be used by anyone to potentially gain income without constantly worrying about market prices or the risks involved; the algorithm is smart enough to take all of these into consideration while making a trade.
A lot of machine learning libraries have been made available in recent times to help data scientists, but choosing a proper model or architecture can still be challenging. Several research groups have proposed using RL to simplify the process of designing neural network architectures. AutoML from Google uses RL to produce state-of-the-art machine-generated neural network architectures for language modeling and computer vision. | https://hussainather.medium.com/reinforcement-learning-super-mario-alphago-and-beyond-fdd676d1bf43 | ['Syed Hussain Ather'] | 2018-11-12 21:51:40.907000+00:00 | ['Reinforcement Learning', 'Alphago', 'Deep Learning', 'Artificial Intelligence', 'Super Mario Bros'] |
3 Ways I Overcome Social Anxiety | Anyone that knows me personally will be shocked to hear this: I used to have a lot of social anxiety. Although I have managed to overcome most of my anxiety around social situations, it has been a long journey filled with self-reflection, rejection from others, and numerous highs and lows.
To provide you with a bit of insight, I moved about 3 times in primary and secondary school. I moved cities, states, and even countries a couple of times. As an individual with an affinity for belonging, overcoming social anxiety was almost a survival mechanism for me. I need to feel supported socially to thrive in all other areas of life.
So, I had no choice but to choose to overcome the fear of meeting new people and approaching strangers.
Here are some of the things I learned along the way that made me the “social butterfly” that I am labeled as today:
We are all just skeletons
Literally, underneath everyone’s beautiful and intimidating faces are just blood, flesh and skeletons. Fundamentally, we all come from similar places and to some extent share similar life experiences of going through happiness, despair, envy, pride, ego, etc. We are all human beings with likes, dislikes, struggles, pain, and a sense of humor (hopefully).
Understanding this truth helped put a lot of things in perspective for me and eased my anxiety around approaching strangers — to a certain extent. It was a bit easier for me to approach the popular girl or the hot guy because instead of idolizing them, like many teenagers do, I realized that they were also imperfect beings — just with different set of struggles than my own. This realization also enhanced my ability to form stronger bonds with people that are usually labeled as “hard to crack.”
Fear of Rejection
Rejection hurts, no matter the type. Whether it is being rejected from our dream job and college, or from a certain social group. Our feelings are hurt, our self-esteem takes a hit, and it unsettles our feeling of belonging. Researcher at Duke University, Mark R. Leary states that “being excessively worried about [rejection] — to the point that we do not do things that might benefit us — can compromise the quality of our life.”
So yes, I was afraid of being rejected by certain people, but my desire to form a relationship was of higher value to me, which meant I had to suck up the fear of rejection and be vulnerable at times.
The more I threw myself towards being rejected, the better I was at handling the pain that came from it.
Instead of taking others’ behaviors personally, I accepted that it was a reflection of their own understanding of the world. Truth is that most of the rejection didn’t doom me to oblivion, so as hard as it was, instead of naturally withdrawing from putting myself out there again, I continuously focused on building up my self-esteem and my positive qualities.
Identify the root cause of discomfort
Sometimes, before others can reject us, we reject ourselves. Social anxiety, to some extent, is a construct that our mind repeatedly manifests and forces us to believe.
“If I go to the party, no one will talk to me.”
“Everyone will stare at me walking in late and think that I am a loser”
The examples of negative self-talk described above are all creations of your mind — and completely false creations too! Going back to the skeleton analogy, people are too caught up in their own worries to give a sh*t about yours. Pardon my language, but I cannot emphasize this enough!
I used to be afraid to talk to guys, even if they confessed their fondness towards me because I did not consider myself pretty enough. I wanted to apply to work as a fashion advisor but was not confident enough to showcase my skills. I continued to reject myself before others due to my own insecurities.
What is stopping you in social situations? Is it your exaggerated flaws? Do you think that you are not worthy of belonging to a certain social group because of that flaw?
Self-reflecting on your deeper woundings and traumas to identify this discomfort, and then consciously making an effort to not feel that way is key for overcoming social anxiety. | https://medium.com/modernmeraki/3-ways-i-overcome-social-anxiety-a502b8a44d1a | [] | 2020-07-08 23:57:29.547000+00:00 | ['Life Lessons', 'Wellness', 'Self', 'Relationships', 'Social Anxiety'] |
A Conversation Between Jason And His Addiction | A Conversation Between Jason And His Addiction
A One Act Play
Image by Лечение Наркомании from Pixabay
Jason: I need you to let me go.
Addiction: We’re past that. I’m inside you, I am you. I’m your secret self. I’m the shadow you, the one you try to keep hidden from the light but the one that guides all your behaviors, the one that shows up in your dreams, the one that determines your present and maps out your future.
Jason: I am taking my power back.
Addiction: From me? Without me you would have no compass. You’d be stranded, lost. Your brain is so cooked you could never feel pleasure again without me. I drive you towards what feels good. I get you your dopamine hits, I get you the opiate release. I get you the happy feelings and when you’re happy you don’t have a care in the world. When you’re not happy, bad things happen to you.
Jason: I can recover from you. With time my brain can heal.
Addiction: Oh, come on now, Jason. Neuroplasticity? How long will that take? One year? Two years? Longer? Can you live in pain, in the grey skies, in the drab, flat nothingness for that long? How much coffee can one man drink? How many cigarettes can you smoke? How many bars of chocolate can you eat? Not enough, my friend, not enough. And if you don’t get the good stuff I’ll just work on other things. Dangerous things. You like to cut and burn, don’t you Jason?
Jason: You’re a bully and a thief and a liar. You’ve stolen from me. You’ve stolen my health, my self-respect, my money, my time, my friendships, the trust I built with my Mom and Dad. You stole Cassandra from me. You’ve taken my life. You’ve taken everything.
Addiction: Don’t make excuses. You’re the captain of this ship. You took the first drink, and the second and the third. You snorted that first Adderall, you popped that first Vicodin.
Jason: I was in unbearable pain. I did what I thought I had to do to survive.
Addiction: And I am the natural consequence of that decision. So here we are. We both know where we stand. You were losing your mind so you gave me the keys and asked me to drive and I did. And now you’re in no condition to take the wheel.
Jason: I’ve stopped drinking. I just have to wean down off the Xanax and I’ll be through with you.
Addiction: How many days off the alcohol?
Jason: Nineteen.
Addiction: You’re a babe in the woods. You’re my meal ticket. You won’t stop the Xanax. You have another refill.
Jason: I’m throwing this bottle out.
Addiction: Nope.
Jason: I’m done.
Addiction: You want it so bad you’re shaking. I can feel you shaking. You’re so hungry for it. Eat. For both our sakes, just eat.
Jason: I’m moving beyond you.
Addiction: No one ever really does. At best they die before we can reclaim them. It’s simply a matter of time. I am inevitable, Jason. Once the mechanism is in place, there is nothing you can do to restructure it from the ground up. It’s always there, ready to strike. And it’s always more powerful than your conscious mind. It thinks faster than you do, it wants harder than you do.
Jason: I’m ready to treat my depression without the crutch of drugs and alcohol. I’m ready to give this a shot sober. I was dying. You’re not going to kill me.
Addiction: I never wanted to kill you. I feel no shame when I say I am a parasite and I want my host to live as long as possible. But I’m also irrational and I don’t give a shit about your health or wellbeing and I am governed by one simple principle. More.
Jason: And I’ve had enough. You will not be the death of me. I still have people who support and love me and I am not going to let them down. I am going to give my all to this fight and I am not going to stop fighting until I have conquered you.
Addiction: Bravado. Tough talk. Surrender to me. Admit you’re powerless. Give in. Do what the Big Book says.
Jason: The Big Book is not what is going to save me. Standing up to you and fighting back is the way I win this war. I’ve been too passive. I need to be assertive. I need to tell you to go climb a tree. I need to tell you that this is my body, my life and that it special and wonderful and unique to me and that I am tired of handing it over to you. We’re done.
Addiction: Throw the bottle away.
Jason: I’m still weaning down.
Addiction: Go climb a tree, Jason. Put up or shut up.
Jason: I’m taking out the amount I need to complete the taper. Now I’m scratching off the prescription number on the label and now I am dumping the rest of the pills down the drain. And now I’m calling the pharmacy to cancel any refills on my prescription. And now I’m calling my psychiatrist to inform him that I can no longer be prescribed Xanax because I was abusing it. There, what do you think of that?
Addiction: I think you’re going to hit the streets tonight with your remaining cash looking for a hookup. I think you’re desperate and scared and you won’t know what hit you when the time comes. You will be climbing the walls, drooling, aching for a hit of whatever someone is passing around. I could see you grabbing a hot crackpipe and stuffing it between your lips and taking a few puffs while your fingers burn. I could see you grabbing a dirty needle and plunging it into your arm. I could see you snorting coke mixed with baby powder and fentanyl and overdosing in your car. I could see you buying two pints of the cheapest vodka and drinking until you blackout and wind up in a prison cell covered in your own blood.
Jason: That’s not my story today. My story is a story of triumph, of hope, of commitment, of courage. Today I took the first real step towards conquering my addiction. Today I took my power back from the devil. | https://medium.com/grab-a-slice/a-conversation-between-jason-and-his-addiction-9163d6c8bc5d | ["Timothy O'Neill"] | 2020-07-29 23:23:00.989000+00:00 | ['Addiction', 'Self', 'Mental Health', 'Life'] |
Puffin: Only Full-Cloud Isolation Is Truly Secure Isolation | CloudMosa’s mission is to empower the world’s phones through cloud computing and make them universally powerful and useful.
| https://medium.com/cloudmosa-tw/%E5%85%A8%E9%9B%B2%E7%AB%AF%E9%9A%94%E9%9B%A2%E6%89%8D%E6%98%AF%E7%9C%9F%E6%AD%A3%E5%AE%89%E5%85%A8%E7%9A%84%E9%9A%94%E9%9B%A2-355e671a037 | ['Cloudmosa'] | 2020-11-17 02:40:42.805000+00:00 | ['Web Isolation', 'Cloud Services', 'Cloud Computing', 'Cybersecurity', 'Remote']
NOOO! | Why cryptocurrencies fall …
In all the markets of the world, falls and rises in assets are something very natural. There are many factors involved in this phenomenon; in the case of cryptocurrencies, this fluctuation is more volatile than normal, which is in turn understandable for a market that only started in 2009 with the creation of Bitcoin by Satoshi Nakamoto.
The performance of the cryptocurrency sector during 2017 was phenomenal, but the same has not happened so far in 2018: cryptocurrencies have suffered a strong collapse, one that has been difficult to explain. Why is this? What are the factors involved in this phenomenon?
When a well-known company is associated with a cryptocurrency, this news usually causes some kind of repercussion in the markets, in this case it is usually a positive impact, but when it is a rupture, it causes the opposite effect, a decrease in the value of cryptocurrencies, since it is a matter of trust, confidence is given and trust is removed, and when cryptocurrencies are backed by recognized companies this becomes a source of confidence for the investor.
Now, imagine the impact that the political measures a government takes for or against cryptocurrencies could have. These facts, positive or negative, should not necessarily determine the fate or future of an asset, but that is unfortunately what is happening: this news sets off the alarms of cryptocurrency investors and traders. Why? Simply because this is the game of investments, the game behind the markets and economies of the world.
Parallel to this, social media is there to replicate any type of information, whether positive or negative news, and of course the media end up playing an important role in whether the cryptocurrency markets rise or fall in value; that’s when we begin to see the percentages in green or red, happy faces and long faces.
Now, when the percentages are in red, this in turn creates a kind of panic in the markets, and consequently holders begin to sell their assets, a fact that only aggravates a situation that could be handled in a normal way and would not necessarily have such a negative impact, even less so in a cryptocurrency like Bitcoin, which in the long term always has an upward trend. This also reveals that there is worldwide disinformation regarding cryptocurrencies and their behavior, something the community should work on to strengthen the market.
This is, in part, the phenomenon of what has been happening since the beginning of 2018, and the version that runs through most of the media is that all this decline in the value of cryptocurrencies has its cause in what has been happening in South Korea and China, which is not surprising, since Asia is the most influential continent in this sector, both culturally and economically.
At the beginning of 2018 there was a series of political events in South Korea regarding cryptocurrencies, which has not only affected that country’s market but, along with other factors, has ended up destabilizing the entire cryptocurrency market on a world scale. Why is this? Because South Korea and Japan have become the most influential markets in the cryptocurrency sector, and especially in Bitcoin.
It all began with a misinformation, and a series of speculations that were raised for weeks in South Korea about the government’s position on the cryptocurrency trade in that country.
First, a South Korean official from the Ministry of Justice made a general statement suggesting an imminent ban on the cryptocurrency bags in the Asian country, which caused widespread nervousness in the market. As soon as that statement was issued, the Ministry of Finance of South Korea downplayed the claims and said that “no agreement had been reached for a total ban”, undoubtedly an attitude that could only generate uncertainty. Three weeks later, the South Korean Finance Minister, Kim Dong-Yeon, made it clear that the government would not institute bans, however, for public opinion the wrong was already done.
What is clear is that the government of South Korea is developing legislation on cryptocurrencies, which will legitimize the exchanges; however, this ordinance will stipulate that financial service providers, foreigners and underage investors will not be able to trade on the South Korean exchanges. Cryptocurrency traders can no longer operate anonymously in South Korea, and must have verified bank accounts linked to the exchanges in order to trade.
Apart from this situation in South Korea, which has negatively impacted the cryptocurrency market, the Xataka web portal reports that: “China is also proposing new measures against these markets. One of the directors of a banking entity has urged the government to prohibit the centralized exchange of virtual currencies, and to prohibit people and companies from offering services related to that activity.”
And something that seems to confirm this version, is that by February 5, 2018, the portal Cointelegraph cites the South China Morning Post statement:
“To avoid financial risks, China will intensify the measures to eliminate any platform on land or offshore related to virtual currency trading or ICO”, in the same line it states that “The ICO and the cryptocurrency trade did not completely withdraw from China after the official ban … Foreign transactions and regulatory evasion resumed … The risks are still there, fueled by illegal emissions and even by fraud and the sale of pyramids. “
That is, a scenario of attempted control by the South Korean and Chinese governments is very clear, but we must also admit that investors in cryptocurrencies, taking advantage of market distortions, bought Bitcoin at its standard price and then sold it in the South Korean market, where Bitcoin was priced above normal; that was the profit, and it was generating a kind of vice, which was not going to bring any good results for the cryptocurrency community on a world scale, and especially for Bitcoin.
Along the same lines, the Xataka portal notes that bans on cryptocurrency operations have been confirmed in nations such as Indonesia, and regulations in France, which, although they represent small markets in comparison with South Korea, are still relevant when considering the hit that cryptocurrencies have taken in 2018.
In conclusion, the panic that was generated in South Korea ended up affecting and expanding throughout the world: immediately afterwards, cryptocurrency holders began to sell their assets, which only aggravated the situation. Added to this were the intention to deepen the bans already existing in China, those being initiated in Indonesia, and the regulations in France.
All of this adverse phenomenon reveals the lack of understanding that many users have about cryptocurrencies, an education gap that has to be worked on at a global scale, because this is not a fictitious or unpredictable financial market; quite the contrary, it is a very reliable one, thanks to elements such as decentralization and the technological benefits of Blockchain, a technology that promotes security, transparency and trust.
Although with many cryptocurrencies, such as Bitcoin and Ethereum, financial decentralization from governments, central banks and private banks has been achieved, the steps that still have to be taken must be oriented towards the next level of decentralization: the decentralization of panic, ignorance, and dependence on markets as influential as South Korea, Japan and China, just to name a few. This is the only way the cryptocurrency community can achieve true and fair economic stability worldwide.
By Marcelo Durán / Bitdharma | https://medium.com/bitdharma-newsletter/nooo-95d99893fafc | ['Fritz Wagner'] | 2018-11-28 15:29:29.636000+00:00 | ['Ethereum', 'Trading', 'Storytelling', 'Bitcoin', 'Cryptocurrency'] |
Firebase Authentication and React: Protecting Your Routes. | Firebase Authentication and React: Protecting Your Routes.
Using Firebase and react-router to create protected routes in a react app.
Photo by Thomas Jensen on Unsplash
One of the most common requirements in web apps is to prevent unauthorized access to certain routes. For example, you may need to allow only signed-in users to visit the /profile route in your app. This task may seem daunting to beginners.
Photo by Ferenc Almasi on Unsplash
This tutorial requires familiarity with React context and React hooks. If you know how to set up Firebase and Firebase Auth in a React app, that’s good, but if you don’t, it’s not a problem. You can follow along and do that later. But knowledge of the two things mentioned above is a must.
We will create a Firebase auth provider which will allow us to consume the authentication data anywhere in the app. We create AuthContext with the following default values
createContext({ userDataPresent: false, user: null })
The idea is to update the userDataPresent and user values as the authentication state changes in our app. This is done by updating the state using the useState hook. The onAuthStateChanged method provided by Firebase is used to listen for changes in the authentication state and, if a change occurs, update the state accordingly (example below). Changing the state changes the context value that’s being passed down to the consumers, and the consumers are able to react accordingly. Since our FirebaseAuthContext component will sit at the highest level in the component tree, any changes made to the state (and, in turn, the context value) will force a re-render of the rest of the components. This means signing out from protected routes will force a redirect. All of this can be seen happening in the example below.
One thing to note here is that we keep the value returned by the onAuthStateChanged listener in the state after the initial mount (the event listener returns a function to unsubscribe from the authentication events). This allows us to unsubscribe when the component is unmounted. This can be seen happening inside the useEffect hook.
The function changeState is called whenever the listener fires with new auth data.
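The embedded example from the original post isn’t visible here, so below is a minimal sketch of what such a provider can look like, based on the description above; the component and variable names are placeholders, and it assumes Firebase (v8-style API) has already been initialized elsewhere in the app.

import React, { createContext, useState, useEffect } from "react";
import firebase from "firebase/app";
import "firebase/auth";

export const AuthContext = createContext({ userDataPresent: false, user: null });

export default function FirebaseAuthContext({ children }) {
  const [state, setState] = useState({ userDataPresent: false, user: null, listener: null });

  function changeState(user) {
    // Fired by onAuthStateChanged with the signed-in user, or null.
    setState((oldState) => ({ ...oldState, userDataPresent: true, user }));
  }

  useEffect(() => {
    // Keep the unsubscribe function in state and clean up on unmount.
    const unsubscribe = firebase.auth().onAuthStateChanged(changeState);
    setState((oldState) => ({ ...oldState, listener: unsubscribe }));
    return () => unsubscribe();
  }, []);

  return (
    <AuthContext.Provider value={{ userDataPresent: state.userDataPresent, user: state.user }}>
      {children}
    </AuthContext.Provider>
  );
}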
The user variable from the context contains the actual user data. It contains null if the user is not signed in. Check this out for more info on getting signed-in user data from Firebase.
The ProtectedRoute component is the consumer of the context we created earlier. It is basically a wrapper around the Route component provided by react-router.
The idea behind it is to return a Redirect if the user is not signed in; otherwise, return a normal Route with an exact path. The userDataPresent flag is useful for showing some kind of spinner while waiting for the auth state to be determined. The code for the spinner can be put inside the return statement in the else block. Now, ProtectedRoute can be used in place of the normal Route to create a protected route that reacts to changes in the authentication state of the app.
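Again, the original embedded snippet isn’t shown here, so this is one possible sketch of the wrapper described above; the prop names mirror the usage example that follows, and the spinner is left as a placeholder.

import React, { useContext } from "react";
import { Route, Redirect } from "react-router-dom";
import { AuthContext } from "./FirebaseAuthContext";

export default function ProtectedRoute({ children, redirectTo, path }) {
  const { userDataPresent, user } = useContext(AuthContext);

  if (!userDataPresent) {
    return null; // auth state not determined yet -- a spinner could go here
  }

  return user ? (
    <Route exact path={path}>
      {children}
    </Route>
  ) : (
    <Redirect to={redirectTo} />
  );
}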
How I use it
<Switch>
  <Route exact path="/Login">
    <div id="firebaseui-auth-container">Login</div>
  </Route>

  <ProtectedRoute redirectTo="/Login" path="/Home">
    <div>Home</div>
  </ProtectedRoute>

  <Route exact path="/">
    <div>Root</div>
  </Route>
</Switch>
It’s important not to forget about the FirebaseAuthContext. Since it is the Provider, the component must wrap the routes for them to work. | https://medium.com/swlh/firebase-authentication-and-react-protecting-your-routes-18d6da04b4c3 | [] | 2020-09-13 12:55:01.804000+00:00 | ['React Hook', 'Reactjs', 'React', 'Firebase', 'Firebaseauthentication']
Second Step: Body Care | Photo by Luis Reyes on Unsplash
When I started my journey to heal my anxiety some of the things I found helped my anxiety are the things all human beings have to do to take care of their bodies: lay off the sweets and increase movement. These types of changes are basic body care that helps everyone feel better.
But some of the changes I had to make were different, unique to someone with anxiety. Like my relationship with caffeine. Even on my best mental health days, I can only drink one iced latte before I start to feel jittery and jumpy inside and out. And when my anxiety is running high, Bye-Bye latte. Bye-Bye chocolate. See you in better days.
I don’t know why my husband can drink a pot of coffee without being affected and I can’t, but I have learned that is how I roll. So I respect my body and limit caffeine.
I have also learned there are some things that my body loves.
I have found a hot Epsom salt bath does wonders for me (body and soul). I am not sure why my body craves magnesium, but adding the Epsom salts takes a relaxing bath to a whole new level of healing. I learned this helped so much, my doctor suggested I take a mild magnesium supplement every day. Magnesium, who knew?
And let us talk about massages.
I used to consider them a luxury. Now I chalk my monthly massage up to anxiety maintenance. It is part of what keeps my body working at its best. If I skip my monthly massage for any reason (I am looking at you quarantine) my neck and shoulders get super fussy and my overall anxiety levels rise.
What helps your body?
The 40 million of us who have anxiety will not all want and need the same body types of body care. We are each unique and our self-care needs are as unique as we are.
Maybe you need to run regularly.
Or maybe you have a condition that limits intense physical activity so stretching and moving in a pool is your best bet.
Only you (in consultation with your doc) will know what is right for you.
But here is the thing.
We have to pay attention to our bodies to know what aggravates our anxiety and what helps us feel our best.
So here is my challenge to you.
Pick a day. Make notes about what you eat and what you drink. Pay attention to how you move, how much sleep you got, and any other relevant information. Then think about how each piece affected your body.
Did it help your anxiety?
Did it make you flare-up?
Take a week and start trying some new body care techniques.
You probably have a hunch about what will help you, based on what your anxiety-related symptoms are.
If you are super tight all the time, stretch.
If you are frazzled, meditate.
If you are stumped, talking to others who deal with anxiety can give you some good ideas.
Track over the course of the week what helped and what didn’t.
Photo by JESHOOTS.COM on Unsplash
And, I know you may not love this idea, but go see your friendly neighborhood doctor. If you don’t have a friendly neighborhood doctor that understands anxiety disorders, get one. They will be a super tool in your anxiety management toolbox. Have them run a blood test just to make sure your thyroid and blood sugar and vitamin levels and such are all good. If one of these comes back out of whack, adjusting that could be a game-changer for you.
Talk about your body care plan with your doc. Your doctor may have some ideas for you (like my magnesium revelation) or you might help educate them, benefitting the next patient she sees with anxiety. | https://medium.com/whenanxietystrikes/second-step-body-care-683b500d40a9 | ['Dena D Hobbs'] | 2020-08-22 12:01:01.733000+00:00 | ['Mental Health', 'Body', 'Anxiety', 'Christian', 'Anxiety Disorder'] |
The Worst Best Thing That Happened In 2020 | The Worst Best Thing That Happened In 2020
The only good thing about grief is the wonder of it all
Photo by john vicente on Unsplash (cropped by the author)
It’s been almost nine months, yet every morning I lose her again.
I wake to a world without Katie, and that’s still my first cognizant thought. As soon as my eyes open, it’s “Oh, crap. Here we go again.”
Like Bill Murray in Groundhog Day, I hurry through the stuff I already know how to do, alert for some small opening — some pinpoint of light — into a future where I feel some control again, or where I can make some blind headway at least.
If I learned anything from my daughter’s death, it’s that now I know for sure, no one really has any control. Every day, you’re just along for the ride. Some days I slip backward and some days I move ahead, either a lot or just a little. My life is one big Chutes and Ladders game since February 2nd.
I’m beginning to formulate some sort of strategy for staying in the game.
This morning wasn’t so bad. I was moving through it, but then it got worse…
I read a story about toxic masculinity.
Actually, it was about toxic femininity, if that’s even a thing. The story and the ensuing comments explore the issue.
I don’t know what “toxic masculinity” really even means because as a natural health practitioner, massage therapist, and student of life, I know that too much of anything is toxic. So toxic masculinity is as damaging as toxic greed, or toxic guilt, or toxic laziness.
But then I realize no, that’s wrong, because masculinity is also a good thing, whereas greed, guilt, and laziness are not.
I try to think of a time when I thought of Katie’s “boyfriend,” that man who was hanging around her too long, as a toxically masculine character. I admit it didn’t occur to me. I didn’t have the foresight to look for trends or labels to describe him. I had never seen the Power and Control Wheel.
He had yet to make a statistic of my daughter.
Toxic grief: Is that a thing?
At what point does grief get the better of you? I’ve had my moments. Lashing out at stupid people, cussing, drinking too much wine, and then calling people to talk too late at night.
Grief can look pretty ugly. They say it’s only natural, to go with it. Everyone gives you a pass, but no one tells you when it expires. So you push it. My pass is looking a bit ratty. I know Katie would be shaking her head.
Fiercely independent, Katie never asked for money, but let me buy her dinner a couple of times a month if I was lucky. That last day together, Katie and I agreed Kyle was a jerk, the second bad boyfriend in her life. My daughter was 21, almost 22. She was trying to come into her own, I could see that. As we drove to the dinner theatre, she told me how she was working on some jokes to try standup comedy, and how ridiculous it was that Kyle claimed he wanted to do standup, too.
“But he’s not even funny,” I pointed out. “I know, right?” She did her signature exaggerated head bob.
“He keeps trying to steal my jokes and say they’re his,” she complained. She went on to explain the premise and the set-up behind a joke about men being color-blind. Only she could have thought of it, as it was born of her experience. She was so frustrated, I told her I was sorry.
I felt I owed her some accountability, a mom's confession to her adult daughter, a woman now. I apologized for choosing an emotionally absent man to be her father. (Yeah, I made that leap.)
“Oh mom, he has his good points.” She bounced right back, reminding me how her dad taught her how to play the guitar. I left it at that, knowing it was a sore point last December when Kyle and her dad left Katie hanging as they jammed without her.
I know how it feels to be drowned out by louder, drunker people. Like shouting behind thick glass. I know how it feels when masculinity, or insecurity, turns toxic. Katie was learning that, too. No wonder she wasn’t afraid to try standup.
When was she was going to break up with this one, this Kyle? I asked her. “I’m trying,” was what she said. | https://medium.com/portals-pub/the-worst-best-thing-that-happened-in-2020-ed04c8e200c9 | ['Jen Mcgahan'] | 2020-10-27 21:29:10.525000+00:00 | ['Grief', 'Parenting', 'Mental Health', 'Peace', 'Death'] |
My roadmap to strategic design — Part 2 | 18 months ago I wrote my three year roadmap to becoming a more strategic designer — so what’s happened halfway through? Well, I’m not John Maeda but I feel like I have moved in the right direction. In the past year I took a job at Amazon working on Amazon Music and recently was part of a team that launched a completely redesigned product on 9 platforms. I learned a lot about what it means to be a better designer, and found some opportunities to push strategy.
What I learned
Being a strategic designer is working within the constraints of what’s there
Designers fetishize coming up with entirely new innovations; re: Thiel’s Zero to One mentality. There is no denying that design and innovation are synonymous, but design has an equal part to play in the more mundane. It’s just as much about “coming up with a solution where there seems to be none” as it is about “coming up with a solution to a question we didn’t know was there.” Previously I wrote “instead of just doing the mockup everyone asked for, look for the larger levers that can be pulled,” and I think that is even more important than I realized, especially in iterative design. It’s easy to assume “if it ain’t broke, don’t fix it” and even easier to assume that the business and tech requirements necessitate little iteration. I think this is potentially the best opportunity for designers to be strategic. Thinking outside the box is great — but sometimes you are stuck in a box and have to figure out a great solution anyways. I think the best way to do this is to develop relationships — understand the business needs, understand the tech constraints. Only then can a designer really figure out how to develop the best experience that can be built in this timeframe with these resources.
Said more simply, strategic design is designing change. It’s easy to design a perfectly impossible solution and complain that it got slashed by the PM and tech teams. A true mark of a strategic designer is designing a solution that just works: it works within the constraints of the project and still pushes the design.
Design strategy is synonymous with design leadership
Following Jared Erondu’s advice, I have carefully watched those who are the most successful in pushing design in our organization. It’s not about the black cloak designer who brilliantly sketches out the new product on a board and then rushes back to their lair to a roar of applause. It’s about the designers who are developing the trust and confidence of their teams. The crux of design is often the willingness to build it. It could be an amazing experience for customers all over the world, but if it trashes the engineers’ timeline and forces re-architecture every other month — it’s not going to get built, ever. Design is a tool, not an end point. It’s not about doing a show and tell where everyone gets to see wireframes that will become mocks. It’s about using wireframes to facilitate a conversation about what the product should include and how it will be built. Like the best leaders, the best designers are those who communicate most efficiently and really include constraints and partners in the process. It’s about inspiring those around you to believe in the power of design.
There is no strategy without craft
I previously thought design craftsmanship was “minutely detailed visuals,” which is half correct. Daniel Burka told me that the designer’s superpower is taking an idea and making it real — turning it into something that can be seen, held, heard, etc. To really drive strategic design, it’s about getting ideas out in the open, validating, and iterating. No matter how good the idea, if you can’t make it real, it’s unlikely anyone will. Design craftsmanship is knowing what details cannot be sacrificed. A few basic oversights can sink the best designs, and spending the extra time to polish the crux aspects can make all the difference between success and failure.
Strategic design = adding value through design
Strategic design can manifest in a lot of ways: insightful business decisions, organizational structure, team process, etc. Initially I thought strategic design was just design decisions that affected business goals, but I realized that was short-sighted. Strategic design can be anything that adds value through design. I was lucky enough to be part of a small group that introduced a new sprint process to our organization. The biggest value design added was not new product designs, but process design.
What I will learn next
Tactical
I am focusing on the last portion — developing my craft. I have realized how the right tool can make or break a sprint. I got deep into Pixate only to find that it was going to be killed off, and had to pick up Framer over the course of a weekend. Being familiar with a wide array of tools and processes will allow me to pick up and put down tools as they progress and pivot. In any given week I used 4+ different tools — all of which supported my ability to make high-quality designs/prototypes/etc. quickly, curated to the purpose at hand. The only way this is efficient is being really comfortable in my process and confident in my craft. I think this is true of both tools and process. Modern designers are expected to be proficient in multiple pieces of software, but I want to focus on being proficient in multiple processes. This could be team structure, sprint types, or even unique product designs.
Strategic
My biggest goal for the next 18 months is to talk to more people. Every one has a slightly different take on what it means to have an effective design process (whether strategic or craft focused). Strategic design goes hand-in-hand with organizational process. Having a strong understanding of how design operates and fits in people’s and companies’ process is crucial to adding value through design. | https://medium.com/tradecraft-traction/my-roadmap-to-strategic-design-part-2-f54e84ab7ccf | ['Justin Smith'] | 2017-01-25 18:55:57.513000+00:00 | ['Design Strategy', 'Strategy', 'Design Thinking', 'UX', 'Design'] |
7 UX & UI trends on e-Learning | Starting with the premise that any e-Learning platform should make learning easy so the students can focus on the content (which is what matters the most to them) and the fact that the pace of our lives today influences when and how we learn — it’s easy to distill that the interface should act as a real learning facilitator and be adaptable to any learning “space”.
Taking this into account I’ve clustered some interesting characteristics and best practices of some of the best e-Learning platforms I’ve found:
01. PLAYFUL | https://medium.com/dsgnrs/7-ux-ui-trends-on-elearning-bdd623b7baaa | ['Teresa Mira'] | 2020-01-05 19:21:07.067000+00:00 | ['Trends', 'Elearning', 'UI', 'UX', 'Design'] |
vincent_van_gogh — analogueMan. A picture is worth a thousand… | Long ago, before photography … a painter must’ve seemed like a wizard.
To be able to record an image, record light, shadow, color! — on to bare canvas, or even a piece of wood? What a miraculous thing! There are modern-day artists living among us now doing the same thing. What’s different about these folk, what’s the magic? Some say we’re all artists — we just don’t know it.
Take Vincent van Gogh, self-trained for the most part: in the words of Don McLean, “They would not listen, they did not know how” …. well, they’re listening now. Van Gogh’s works are some of the priciest pieces on the planet.
How is it that my HD picture of a lazy French restaurant I took with my digital camera is worth little or nothing compared to van Gogh’s blurry ole painting of the same scene that’s worth millions?
By John Russell (1886) [Detail] Wikimedia Commons
By all accounts life was difficult for van Gogh. But he brought something to the table; he added something to a picture of which even the best camera lens is unaware: himself. His interpretation of a scene or subject, his relationship with it — there — on canvas.
But this was over 100 years ago. Surely we’ve outstripped van Gogh by now with our technical wizardry … haven’t we progressed beyond the antique, analogue ways — to a better, more advanced, digital way? Now-days anyone can take a picture and share it with everyone — at near light speed. There are so many digital photos out there, of a value not necessarily proportional to their resolution.
In van Gogh’s day cameras were a relatively new thing and being a photographer was difficult work — part chemist, part pack-animal, part picture-maker. There was a lot of gear involved and many photographers lived shorter lives due to the harsh chemicals needed to make a photograph.
I’m old enough to remember when being a photographer was still an esoteric pursuit, a peculiar occupation — some sort of sorcery performed in ‘a dark room’ — and for those who had some money because photos weren’t cheap to develop.
These days, just point and click: immediately a digital image shows up on a color screen, as if by magic — all powered by electricity and stored in some type of binary code. If I could go back in time 100 years and show this to an old-timer there would be no doubt as to my magic powers. Were I to go back too far, I might be hanged for practicing witchcraft. | https://medium.com/swlh/vincent-van-gogh-analogueman-5d9d8bd39edf | ['Mac Daniels'] | 2020-12-23 02:22:10.247000+00:00 | ['Van Gogh', 'Art', 'Digital', 'Future', 'clarkart.edu'] |
💀 I’m Receiving Death Threats on Medium | It started late last year…
No matter what topic I perused on Medium, I occasionally came across posts by questionable men selling a wide variety of illegal drugs. Fentanyl, Oxycontin, morphine, cocaine, heroin. You name it, people were selling it:
Initially, I simply flagged the ads — and forgot about them.
Over the next several months, as posts like the above grew exponentially more common (disturbingly, they were often paid, member accounts), I began writing regularly to Medium, urging them to do something about the problem. Medium, however, failed to either respond or act, and nothing changed.
So I took matters into my own hands and, in a variety of posts, began exposing the accounts myself.
At this point, my readers — and I’m very grateful— began flagging the accounts en masse, resulting in a large number of account suspensions.
And so the matter stood — until very recently…
On the evening of April 3rd, I received the following email:
My death threat
The email came from one “Ludwig Bohm.” Startled but curious, I searched for his email address on Medium. This is what I found…
Yes — the search uncovered thousands upon thousands of posts from scores of accounts, all selling dangerous narcotics. The posts had something else in common, too. Regardless of its supposed author, the email contact for every single post is the same, and belongs to — one Ludwig Bohm.
One man is responsible for nearly all of the rampant, unchecked illegal drug activity on Medium. And now that man is threatening my life.
Who knows? The threat could be an idle one.
It could be deadly serious, too.
No doubt unwittingly, Medium’s founders have created a penalty-free playground for drug pushers, rather than a mere happy gathering place for creatives. Until the former can be wholly purged from the platform, it isn’t a safe space for anyone, I’m afraid. Particularly not for those who speak out.
Goodbye for now, friends. With any luck, you’ll be hearing from me again soon…
Author’s Note №1
It would mean the world to me if you’d consider buying me a coffee.
Author’s Note №2
You might like my latest books: The Sea-Wave (for adults) and Kabungo (for kids).
Author’s Note №3
If you like cartoon merchandise (prints, coffee mugs, t-shirts, etc.), check out my online shop.
Author’s Note №4
Death Wish Coffee is the world’s strongest coffee. You’ll love it. ☕☕☕
Author’s Note №5
If you like books, stories, poems, cartoons and coffee, you’ll love my free newsletter. Subscribe today. | https://medium.com/pillowmint/i-received-my-first-death-threat-on-medium-6c752f1cfe75 | ['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites'] | 2020-07-02 21:26:48.319000+00:00 | ['Crime', 'Coronavirus', 'Medium'] |
Annotated RPN, ROI Pooling and ROI Align
In this blog post we will implement and understand a few core components of two stage object detection. Two stage object detection was made popular by the R-CNN family of models — R-CNN, Fast R-CNN, Faster R-CNN and Mask R-CNN.
All two-stage object detectors have a few major components:
Backbone Network: Base CNN model for feature extraction
Region Proposal Network (RPN): Identifying regions in images which have objects, called proposals
Region of Interest Pooling and Align: Extracting features from backbone based on RPN proposals
Detection Network: Prediction of final bounding boxes and classes based on a multi-task loss. Mask R-CNN also predicts masks via an additional head using the ROI Align output.
Region of Interest (ROI) Pooling and Alignment connect the two stages of detection by extracting features using the RPN proposals and the backbone network. First, let’s look at the region proposal network.
Region Proposal Network
The region proposal network takes as input the final convolution layer (or a set of layers in the case of UNet-like architectures). To generate region proposals, a 3x3 convolution is used to generate an intermediate output. This intermediate output is then consumed by a classification head and a regression head. The classification head is a 1x1 convolution outputting objectness scores for every anchor at each pixel. The regression head is also a 1x1 convolution that outputs the relative offsets to the anchor boxes generated at that pixel.
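As a rough PyTorch sketch of that head (the channel count and number of anchors are illustrative defaults, not values from the post):

import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Shared 3x3 conv followed by 1x1 convs for objectness and box offsets."""
    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)
        # One objectness score per anchor at every spatial location.
        self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1)
        # Four offsets (dx, dy, dw, dh) per anchor at every spatial location.
        self.bbox_deltas = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1)

    def forward(self, feature_map):
        t = torch.relu(self.conv(feature_map))
        return self.cls_logits(t), self.bbox_deltas(t)

scores, deltas = RPNHead()(torch.randn(1, 256, 50, 50))
print(scores.shape, deltas.shape)  # (1, 9, 50, 50) and (1, 36, 50, 50)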
ROI Pooling
Given a feature map and a set of proposals, return the pooled feature representation. In Faster R-CNN, the region proposal network is used to predict objectness and box regression differences (w.r.t. the anchors). These offsets are combined with the anchors to generate proposals. These proposals are often at the scale of the input image rather than the feature layer. Thus the proposals need to be scaled down to the feature map level.
Additionally, the proposals can have different widths, heights and aspect ratios. These need to be standardized for a downstream CNN layer to extract features.
ROI Pool aims to solve both these problems. ROI pooling extracts a fixed-length feature vector from the feature map.
ROI max pooling works by dividing the hxw RoI window into an HxW grid of approximately size h/H x w/W and then max-pooling the values in each sub-window. Pooling is applied independently to each feature map channel.
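A minimal single-ROI sketch of that operation in PyTorch is shown below; the output size and the spatial scale (how image coordinates map to feature-map coordinates) are illustrative assumptions. Note the rounding to integer coordinates, which is exactly the quantization that ROI Align removes.

import torch

def roi_pool(feature_map, roi, output_size=(7, 7), spatial_scale=1.0 / 16):
    # feature_map: (C, H, W); roi: (x1, y1, x2, y2) in input-image coordinates.
    x1, y1, x2, y2 = [int(round(c * spatial_scale)) for c in roi]  # quantize
    x2, y2 = max(x2, x1), max(y2, y1)
    roi_feat = feature_map[:, y1:y2 + 1, x1:x2 + 1]

    C, h, w = roi_feat.shape
    H, W = output_size
    output = torch.zeros(C, H, W)
    for i in range(H):
        for j in range(W):
            # Approximate (h/H) x (w/W) sub-window for this output cell.
            ys, ye = (i * h) // H, max(((i + 1) * h) // H, (i * h) // H + 1)
            xs, xe = (j * w) // W, max(((j + 1) * w) // W, (j * w) // W + 1)
            output[:, i, j] = roi_feat[:, ys:ye, xs:xe].reshape(C, -1).max(dim=1).values
    return output

pooled = roi_pool(torch.randn(256, 50, 50), roi=(32, 48, 240, 200))
print(pooled.shape)  # torch.Size([256, 7, 7])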
ROI Align
As you can see from the implementation of ROIPool, we do a lot of quantization (i.e., ceil, floor) operations to map the generated proposal to exact x, y indexes (as indexes cannot be floating point). These quantizations introduce misalignments between the ROI and the extracted features. This may not impact detection/classification, which is robust to small perturbations, but it has a large negative effect on predicting pixel-accurate masks. To address this, ROI Align was proposed, which removes any quantization operations. Instead, bi-linear interpolation is used to compute the exact values for every proposal.
Similar to ROIPool, the proposal is divided into a pre-fixed number of smaller regions. Within each smaller region, 4 points are sampled. The feature value for each sampled point is computed with bi-linear interpolation. A max or average operation is then carried out to get the final output.
Bi-Linear Interpolation
Bi-linear interpolation is a common operation in computer vision (especially when resizing images). It works by doing two linear interpolations, in the x and y dimensions, in sequence (the order of x and y does not matter). That is, we first interpolate along the x-axis and then along the y-axis. Wikipedia provides a nice review of this concept.
Now that we understand how bi-linear interpolation works, let’s implement ROIAlign.
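The original post’s implementation isn’t reproduced here, so the following is a minimal single-ROI sketch of the idea; the 2x2 sample grid per bin, the output size and the spatial scale are illustrative assumptions.

import math
import torch

def bilinear_sample(fm, y, x):
    # Sample fm (C, H, W) at a real-valued point (y, x) from its 4 neighbours.
    C, H, W = fm.shape
    y0 = min(max(int(math.floor(y)), 0), H - 1)
    x0 = min(max(int(math.floor(x)), 0), W - 1)
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    ly, lx = y - y0, x - x0
    return ((1 - ly) * (1 - lx) * fm[:, y0, x0] + (1 - ly) * lx * fm[:, y0, x1]
            + ly * (1 - lx) * fm[:, y1, x0] + ly * lx * fm[:, y1, x1])

def roi_align(fm, roi, output_size=(7, 7), spatial_scale=1.0 / 16):
    # No quantization anywhere: the ROI stays in floating point, and each
    # output bin averages 4 bilinearly sampled points.
    x1, y1, x2, y2 = [c * spatial_scale for c in roi]
    H, W = output_size
    bin_h, bin_w = (y2 - y1) / H, (x2 - x1) / W
    out = torch.zeros(fm.shape[0], H, W)
    for i in range(H):
        for j in range(W):
            samples = [bilinear_sample(fm, y1 + (i + dy) * bin_h, x1 + (j + dx) * bin_w)
                       for dy in (0.25, 0.75) for dx in (0.25, 0.75)]
            out[:, i, j] = torch.stack(samples).mean(dim=0)
    return out

aligned = roi_align(torch.randn(256, 50, 50), roi=(32.7, 48.2, 240.5, 199.9))
print(aligned.shape)  # torch.Size([256, 7, 7])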
Summary
In this post, we implemented a few components of modern object detection models and tested them out (see blog). Going through the work of implementing these components helps me better understand the reasoning behind their development. Of course, one would always rely on the CUDA implementations in actual research work. A logical next step would be to implement the remaining components of two-stage object detection and test them out. | https://medium.com/swlh/annotated-rpn-roi-pooling-and-roi-align-6a40ac5bbe1b | [] | 2020-07-13 11:36:31.903000+00:00 | ['Machine Learning', 'Object Detection', 'Computer Vision', 'Faster R Cnn']
What the Buddha’s near-death experience teaches us about hustling too hard | Despite the Republican Party’s insistence that anyone who struggles to pay the bills must be lazy, Americans are a striving bunch. Not because of anything essential about us, but because of the way our society is organized. Capitalism forces those not lucky enough to be born wealthy to spend more time working for a living than enjoying that living. We have no choice but to hustle.
So, it makes sense that, in American Buddhist circles, teachings about the historical Buddha often solely focus on one chapter of his life. As the story goes, after leaving his noble family’s palace for the first time, the soon-to-be spiritual master encountered four people — one sick, one old, one dead, and one a monk — who so awakened him that he renounced civilized life to pursue a path of becoming enlightened, awakened for good.
This heroic search for meaning resonates with our conditioning, motivating us to deepen our own search. Suffering is all there is, we learn, so we better grit our teeth and get to work. One day, with enough money, time, love, mindfulness, or whatever we’re searching for, maybe we’ll finally feel fulfilled.
But what happened next is just as enlightening, even more so for those of us who strive. For six years, the Buddha wandered forests and jungles practicing intense forms of meditation and austere living. Alongside other yogis, he tortured his body to try to overcome its desires and needs, eventually limiting his daily diet to a few grains of rice. “When I touched my belly, I would feel my backbone, and when I went to urinate, I would fall over,” he would later say.
Eventually, at the edge of death, he realized that starving himself was only causing more suffering. With the help of a peasant woman who gave him milk rice, he recovered and developed the practices and teachings that would soon become known as Buddhism, sometimes called the “Middle Way.”
This second chapter of the Buddha’s life offers two powerful lessons.
First, striving, even in spiritual practice, is yet another form of the grasping that causes our suffering. When we lean into the future, we leave the present moment, which numbs us to our bodies and emotions — which is exactly why we do it.
Feeling things directly is difficult, especially if we’ve experienced severe trauma. So, we develop habits and patterns at an early age to protect ourselves from having to feel intense emotions caused by people we have no control over. But, as adults, the more we try to escape our feelings, the more pressure builds up inside of them, making it more difficult to express them in skillful and creative ways. Our patterns of suffering, samsara, feed into themselves, and around we go, again and again.
Striving is just one of those patterns. It’s a way of convincing ourselves that we’re working hard to end our suffering — to lose 50 pounds, write a novel, reach our goals. But it’s actually how we’ve been conditioned to dodge the hardest but most rewarding work of all, staying engaged right here, right now.
Luckily, the Buddha taught a practice to go along with his theory of human suffering. Meditation calms our mind and produces mindfulness, which helps us notice when we’re trying to escape. Then, with practice, it helps us feel compassion rather than resentment toward our habits. It helps change the way we relate to striving, anger, addiction — to whatever our pattern is.
The other lesson is: we can’t do it all alone. The Buddha didn’t reach enlightenment by himself. He would’ve died trying had it not been for someone else — a peasant woman living in an extremely patriarchal society, at that.
Pretending we can do it all on our own is yet another way we’ve been conditioned as Americans — this time in order to justify and preserve a particular social order. We worship the start-up entrepreneur, the one kid that made it out of a poor neighborhood, or the underdog politician. When people come together to make collective demands, say through a union, they’re denounced as “activists” with an “agenda,” or worse, crushed by the powerful, often using the government.
Ironically, this do-it-yourself mentality is what underpins racism, sexism, and other forms of oppression. Women who want equality should just “lean in,” i.e., strive harder to get to the top. Black people who end up in jail deserve their punishment because they chose to commit a crime — not because of skyrocketing economic inequality and hundreds of years of racism.
Not only did the Buddha need help to save him from an early death, but he would go on to include sangha, or community, as one of the ideals at the heart of Buddhism, known as the Three Refuges.
We can’t do it alone — thinking otherwise is just a fantasy.
Both striving and the do-it-yourself mentality are as American as apple pie. When we talk about the Buddha’s life, we shouldn’t skip the second chapter, which has so much to offer to our lives in these times. | https://jeremymohler.medium.com/what-the-buddhas-near-death-experience-teaches-us-about-hustling-too-hard-428f49c4fa79 | ['Jeremy Mohler'] | 2020-01-22 14:29:08.489000+00:00 | ['Meditation', 'Productivity', 'Buddhism', 'Politics', 'Mindfulness'] |
How A Black Lesbian Poet Helped Me Find My Truth | How A Black Lesbian Poet Helped Me Find My Truth
Your words have more power than you know.
It was the spring semester of 1992, and I was a sophomore at Washington University in St. Louis. I had enrolled in an Introduction to Women’s Studies course, and the professor assigned us an essay by Audre Lorde, self-described as a “black, lesbian, mother, warrior, poet.” Little did I know at the time that my life was about to be altered dramatically and irrevocably — for the better — and I would learn first-hand the power of language to shape the world.
In a brief yet profound piece, “The Transformation of Silence into Language and Action,” Lorde shared what her daughter said about the dangers of speaking: “Tell them about how you’re never really a whole person if you remain silent, because there’s always that one little piece inside you that wants to be spoken out, and if you keep ignoring it, it gets madder and madder and hotter and hotter, and if you don’t speak it one day it will just up and punch you in the mouth from the inside.”
Those words flashed through my mind like a bolt of lightning. After reading them, I uttered to myself for the very first time, “I’m gay.”
The early ’90s were a different era. Public support for the LGBT community (the “Q,” “A,” and “I” didn’t come until later) was scant. Gay students were losing scholarships from the Reserve Officer Training Corps, and the infamous Don’t Ask, Don’t Tell military policy was just around the corner. Only five years earlier, in Bowers v. Hardwick, the Supreme Court had held that the 14th Amendment in no way protected gays and lesbians from anti-sodomy laws; gay sex was still a criminal act in many states. At the time that “I’m gay” was passing across my lips, the media often whispered homosexuality and AIDS together in the same breath, as if synonymous. It’s hard to convey what it feels like to have never thought of your sexuality without also thinking of AIDS or being arrested by the police. For all those reasons, accepting that I was gay wasn’t easy.
But by sharing her daughter’s powerful message, Lorde cracked open my consciousness and wrested from deep within me a truth I had not even been able to say to myself, much less to anyone else. In a single sentence, she altered the course of my life, for which I am eternally grateful. During the next three weeks, I came out to everyone I knew and discovered the freedom and joy, and the occasional pain and rejection, that comes from speaking one’s truth.
Lorde’s impact on my life is a testament to the power of our words to change the lives of others for the better. Her words taught me three powerful lessons about language: | https://divineppg.medium.com/how-a-black-lesbian-poet-helped-me-find-my-truth-d3fb13872f37 | ['Patrick Paul Garlinger'] | 2020-09-08 19:39:18.489000+00:00 | ['Self Love', 'Spirituality', 'Writing', 'Identity'] |
Life Expectancy vs Gross Domestic Product using Data Analytics! | Terms in use:
GDP: Gross Domestic Product (GDP) is the overall monetary or market value of all finished goods and services produced within the boundaries of a nation over a given period. It acts as a broad measure of overall domestic output and as a detailed scorecard of the economic health of the country. GDP is measured in US dollars.
Life Expectancy at Birth Year (LEABY): The term "Life Expectancy" refers to how many years a person may expect to live. By definition, life expectancy is based on an estimate of the average age at which members of a given demographic group will die.
The data used in this project is gathered from the World Bank. This project aims to try and identify the relationship between the GDP and Life Expectancy of six countries namely, Australia, China, Germany, India, the United States of America, and Zimbabwe with the help of Data visualization.
Find the code of this project here and connect with me on LinkedIn. Let’s get to the coding part.
1. Visualizing LEABY per country for the last 20 years (1999–2018)
Violin plots help us to visualize and compare more than one distribution at a time. There are two symmetrical 'KDE (Kernel Density Estimator) plots' along the middle line. The thick black line at the center shows the interquartile range, while the lines that extend from it to the ends represent a 95% confidence interval. The white dot in the middle shows the median of the distribution. Learn more about violin plots here.
Below is the code for creating the violin plots of each of the six countries.
from matplotlib import pyplot as plt
import pandas as pd
import seaborn as sns
df = pd.read_csv("final.csv")
fig = plt.subplots(figsize=(12, 7))
sns.violinplot(data=df, x='Country', y='LEABY', fontsize='large', fontweight='bold')
plt.savefig("violin.png",bbox_inches='tight')
Output:
Violin plot: Life Expectancy at Birth Year (LEABY) vs Country
We can see that the variance is the highest in the data for Zimbabwe and the lowest for the United States. Also, the median life expectancy is the highest in Australia and the lowest in Zimbabwe.
2. Visualizing the correlation between GDP and LEABY
A FacetGrid takes in a function and creates individual graphs for which you specify the arguments.
Next, to see the correlation between GDP and LEABY, let's look at the facet grid of scatter graphs, mapping GDP as a function of life expectancy by country. Below are the matplotlib scatter plots for the plot (LEABY vs GDP).
g = sns.FacetGrid(df, col='Year', hue='Country', col_wrap=4, size=2)
g = (g.map(plt.scatter, 'GDP', 'LEABY', edgecolor="w").add_legend())
Life Expectancy vs Gross Domestic Product
Scatter plots are easy to interpret. From the plot above, we can see the changes in these countries over the last 20 years. China and the United States are the countries that have moved the most along the x-axis, i.e. their GDP has increased over the years, while Zimbabwe has moved the most along the y-axis, i.e. life expectancy there has increased over time. Further, life expectancy in Australia and the United States seems to have stayed roughly constant for the last 20 years.
Furthermore, a great way to visualize a variable over time is by using a line plot. Now, instead of the scatter plots if we use line plots for GDP and Life expectancy individually we could more easily see the changes over time. Below is the FacetGrid of line graphs mapping GDP by country.
g3 = sns.FacetGrid(df, col="Country", col_wrap=3, size=4)
g3 = (g3.map(plt.plot, "Year", "GDP").add_legend())
FacetGrid of line graphs mapping GDP by country
The United States of America has the highest GDP among these countries, and it seems to be increasing steadily over the years. We can also see that the GDP of China has increased drastically in the past 10 years; it is interesting to ask what happened in China to cause this sudden change. There are many reasons behind it, which you can find in the article below.
Now let’s look at a similar plot as above for LEABY. Below is the FacetGrid of line graphs mapping Life expectancy by country.
g3 = sns.FacetGrid(df, col="Country", col_wrap=3, size=4)
g3 = (g3.map(plt.plot, "Year", "LEABY").add_legend())
FacetGrid of line graphs mapping Life Expectancy by Country
It is between 2010 and 2015 that there is a big change in the life expectancy of these six countries. The United States has had the least change in life expectancy over time; could this be an outcome of having a high GDP? But then there is the case of Australia, where the GDP is lower compared to the others, yet life expectancy is still almost equal to that of the United States. It seems that the relation between GDP and life expectancy is not that simple.
How Sybil Attacks Affect Social Media Startups. A Tru Story. | Last night, TruStory experienced its first Sybil attack. In a Sybil attack, one person creates multiple accounts in order to exploit a network. In the case of TruStory, one person created multiple accounts and then had his accounts constantly agree with the arguments created across his other accounts, thus artificially boosting how much it seemed like people agreed with his arguments.
This attack came after the single largest day of growth for the application. The growth was largely driven by a hotly contested claim that was the featured debate: feminism is a greater threat to humanity than climate change.
As one might guess, this claim attracted a crowd with incredibly diverse perspectives. Growth for a social media company always brings in people with varying view points. In particular, the one person performing the Sybil attack went on to create a slew of accounts with controversial, and to be quite frank, outright racist/antisemitic, names:
Accounts created by single person
Here is an example of an argument made by one of the attacker’s accounts:
We want to be clear that the TruStory community did not downvote this user’s arguments because of the content itself. The TruStory community is not against users posting controversial arguments and having those arguments be discussed in a productive manner in accordance to the values of TruStory.
The primary reason the community downvoted this user’s arguments was because the attacker had created accounts purely for the intent of upvoting their own arguments, which in turn would artificially accrue him TRU points and make it seem like more people agreed with his arguments.
This sort of behavior goes against the values and mission of TruStory, as the community wants TruStory to be a place where people can have productive debate on controversial topics without artificial manipulation.
We are creating mechanisms to ensure that users cannot easily create a significant number of accounts to constantly agree with each respective account's arguments.
TruStory is all about bringing people to together discuss controversial issues in a rational manner. Come see what other hot debates are happening now! | https://medium.com/trustory-app/how-sybil-attacks-affect-social-media-startups-a-tru-story-1be8bb34d855 | ['Mattison Asher'] | 2019-10-24 21:39:24.833000+00:00 | ['Social Media', 'Technology', 'Debate', 'Startup Lessons', 'Startup'] |
Supercharging your Mobile Apps with GPU Accelerated Machine Learning using the Android NDK & Vulkan Kompute | Supercharging your Mobile Apps with GPU Accelerated Machine Learning using the Android NDK & Vulkan Kompute
A hands on tutorial that teaches you how to leverage your on-device phone GPU for accelerated data processing and machine learning. You will learn how to build a simple Android App using the Native Development Kit (NDK) and the Vulkan Kompute framework.
Vulkan Kompute in Android NDK (Image by Author)
Some smartphones nowadays pack laptop-level hardware — carrying up to 16GB RAM, high-speed multi-core CPUs, and GPUs that can render high-performance complex graphical applications on 4k displays.
Image by Author
Tapping into that power — especially the GPU processing power — for on-device data processing capabilities becomes growingly important as mobile hardware only continues to improve. Recently, this has been opening exciting opportunities around edge computing, federated architectures, mobile deep learning, and more.
This article provides a technical deep dive that shows you how to tap into the power of mobile cross-vendor GPUs. You will learn how to use the Android Native Development Kit and the Kompute framework to write GPU optimized code for Android devices. The end result will be a mobile app created in Android Studio that is able to use a GPU accelerated machine learning model which we will write from scratch, together with a user interface that will allow the user to send the input to the model.
Android Studio Running Project in Emulator (Image by Author)
No background knowledge beyond programming experience is required, however if you are curious about the underlying AI / GPU compute concepts referenced, we suggest checking out our previous article, “Machine Learning in Mobile & Cross-Vendor GPUs Made Simple With Kompute & Vulkan”.
You can find the full code in the example folder in the repository.
Android Native Development Kit (NDK)
Android NDK Diagram (Image by Author)
The Native Development Kit (NDK) is one of Android's solutions to address the increasing computational demands from mobile apps. The NDK framework enables developers to write low level, highly efficient C and C++ code that can interoperate with the Java/Kotlin application code through the popular Java Native Interface bindings.
This tooling enables mobile application developers not only to write highly efficient code, but also leverage existing optimized frameworks written in C++ for advanced data processing or machine learning.
Vulkan Kompute
Playing “where’s waldo” with Khronos Membership (Image by Vincent Hindriksen via StreamHPC)
Vulkan is an Open Source project led by the Khronos Group, a consortium consisting of several tech companies who have come together to work towards defining and advancing the open standards for mobile and desktop media (and compute) technologies.
A large number of high profile (and new) frameworks have been adopting Vulkan as their core GPU processing SDK. The Android NDK’s main documentation page has a full section dedicated to Vulkan, together with hands on examples showing how it can be used in Android mobile devices.
As you can imagine, the Vulkan SDK provides very low-level access to GPUs, which allows for very specialized optimizations. This is a great asset for data processing and GPU developers — the main disadvantage is the verbosity involved, requiring 500–2000+ lines of code to only get the base boilerplate required to even start writing the application logic. This can result in expensive developer cycles and errors that can lead to larger problems. This was one of the main motivations for us to start the Vulkan Kompute project.
Kompute is a framework built on top of the Vulkan SDK which abstracts a lot of boilerplate code required, introducing best practices that expose Vulkan’s computing capabilities. Kompute is the GPU computing framework that we will be using in this tutorial to build the machine learning module in our mobile Android app.
Vulkan Kompute Documentation (Image by Author)
Machine Learning in Mobile Development
In this post we will be building upon the Machine Learning use-case we created in the “Machine Learning in Mobile & Cross-Vendor GPUs Made Simple With Kompute & Vulkan” article. We will not be covering the underlying concepts in as much detail as in that article, but we’ll still introduce the high level intuition required in this section.
To start with, we will need an interface that allows us to expose our Machine Learning logic, which will require primarily two functions:
train(…) — function which will allow the machine learning model to learn to predict outputs from the inputs provided
predict(...) — function that will predict the output of an unknown instance.
This can be visualised in the two workflows outlined in the image below.
Data Science Process (Image by Author)
Particularly in app development, this would also be a common pattern for machine learning workflows, for both predictive and explanatory modelling use cases. This often consists of leveraging data generated by your users as they interact directly (or indirectly) with the app itself. This data can then serve as training features for machine learning models. Training of new models can be performed through manual “offline” workflows that data scientists would carry out, or alternatively through automated triggers retraining models.
Android Studio Project Overview
Project File Structure (Image by Author)
We will start by providing a high-level overview of the core components in the Android Studio project, including the Vulkan Kompute C++ bindings, the Android User Interface, the app logic built in Kotlin, and the build files required. If you are interested in a particular area you can jump to its respective section below.
You will need to make sure you install Android Studio, and also install the Android NDK — the rest of the dependencies will be installed and configured automatically when opening the project in the IDE.
Now that you have everything installed, you are able to import the project. For this, you first have to clone the full Kompute repository and import the Android Studio project under examples/android/android-simple/ . You should now be able to see the project load and configure the build. Once it opens you are able to run it in an emulator or in your own physical Android phone. This project was tested in the Pixel 2 emulator, and in a physical Samsung Galaxy phone.
Final GPU Accelerated Kompute App (Image by Author)
When you load the project you will notice the following key components in the file structure, which we will be breaking down further in detail in the following sections:
Android SDK Application — The Android UI, asset resources, build files, and Kotlin/Java components that provide the relevant application logic that interacts with the UI and C++ Kompute ML bindings.
— The Android UI, asset resources, build files, and Kotlin/Java components that provide the relevant application logic that interacts with the UI and C++ Kompute ML bindings. Android NDK Kompute ML Module — The Kompute ML C++ code and bindings configured through Android NDK for GPU accelerated processing.
Android SDK Application
This section covers the Android SDK Java/Kotlin and User Interface components, which should provide an intuition on how the high level business logic interacts with the native C++ JNI bindings.
The user interface consists primarily of input text boxes and display text labels to enable users to interact with the GPU Accelerated ML processing C++ core (as shown in the screenshot below). If you are curious on exactly the views used, you can inspect it in your Android Studio project, or directly open the activity_kompute_jni.xml file.
The core of our mobile app can be found in the app/src/main/java/com/ethicalml/kompute folder, inside the KomputeJni.kt file. This Kotlin file contains the main business logic for our Android app.
If we look at the shortened version of the class in the code block below we will notice the following key points:
fun onCreate(…) — This function is called on initialisation of the Android Activity (when the app is loaded)
— This function is called on initialisation of the Android Activity (when the app is loaded) fun KomputeButtonOnClick(…) — This function is triggered when the main “KOMPUTE” button gets pressed, and triggers the C++ Jni binding functions using the data from the user interface text boxes.
— This function is triggered when the main “KOMPUTE” button gets pressed, and triggers the C++ Jni binding functions using the data from the user interface text boxes. external fun initVulkan(): Boolean — This function is one of the C++ JNI functions that will be bound to the Vulkan initialisation C++ function.
— This function is one of the C++ JNI functions that will be bound to the Vulkan initialisation C++ function. external fun kompute(...): FloatArray — The is the C++ JNI function that will train the ML model and run inference on the inputs provided, returned the inference results.
— The is the C++ JNI function that will train the ML model and run inference on the inputs provided, returned the inference results. external fun komputeParams(...): FloatArray — The C++ JNI function that trains the model and returns the learned parameters weight 1 , weight 2 and bias .
— The C++ JNI function that trains the model and returns the learned parameters , and . companion object { ...("kompute-jni") } — This is the name you will give to your C++ output shared library, which will contain the compiled C++ source with all relevant binding functions.
As you will also notice, the external fun functions do not have any definition — this is because the definition is provided in the C++ JNI function bindings, which will be covered in the C++ JNI bindings section.
Now to cover each of the functions in more detail, we start with the onCreate function. This function is in charge of initialising all relevant components in our application. This includes:
val binding — This is the main object that will allow us to access all the text boxes and elements in the UI.
— This is the main object that will allow us to access all the text boxes and elements in the UI. val successVulkanInit = initVulkan() — This is our first call to a C++ JNI function, which is primarily in charge of initialising Vulkan. If it’s successful it returns true , and we display the respective success message in a android.widget.Toast popup — an error is displayed otherwise.
Next up we have the KomputeButtonOnClick(...) function. This function gets triggered when the user presses the “KOMPUTE” button in the app. The main purpose of this function is to retrieve the inputs from the input text boxes in the UI, then use the input data to perform an ML train/inference step through the JNI C++ bindings, and finally display the resulting outputs back in the UI text labels. In further detail:
val <elementname> = findViewById<EditText>(R.id.<elementname>) — This is the format of the steps that create the variable that holds the respective text box with inputs. In this case <elementname> is the name of the element we are interacting with.
— This is the format of the steps that create the variable that holds the respective text box with inputs. In this case is the name of the element we are interacting with. xi, xj and y — The FloatArray elements created from the text in the respective input boxes, which are then used for the ML model processing.
— The elements created from the text in the respective input boxes, which are then used for the ML model processing. val out = kompute(xi, xj, y) — Here we run the C++ JNI Binding function kompute which trains and processes the data through the KomputeModelML class we create in C++.
— Here we run the C++ JNI Binding function which trains and processes the data through the KomputeModelML class we create in C++. val params = komputeParams(xi, xj, y) — Here we run the C++ JNI Binding function which trains and returns the learned parameters of the Kompute machine learning model.
— Here we run the C++ JNI Binding function which trains and returns the learned parameters of the Kompute machine learning model. <elementname>.text = <value> — The lines that follow this format basically override the text labels in the UI to display the outputs.
The last few functions are only explicitly set as external functions to be bound to the C++ JNI bindings. Furthermore the companion object section provides the name of the shared library that will contain the respective bindings referenced in this activity.
You can find the full file in the KomputeJni.kt file in the repository.
Android NDK Kompute ML Module
This section covers the Android NDK Kompute ML Module files, which includes the build-system, and the C++ source code using the Vulkan Kompute framework.
Vulkan Kompute Architecture Design (Image by Author)
We will be using the core components of Kompute which are outlined in this accompanying diagram. Namely, we will be loading the relevant data in the GPU using Kompute Tensors, processing it with the respective Kompute Operations, and orchestrating this with a Kompute Sequence and a Kompute Manager. We won’t be covering the Kompute architecture in detail but if you want to learn more about the underlying concepts, you can check out the more detailed article on the underlying implementation.
The core components in the Android NDK bindings module consist of the following:
JNI Binding Functions — The native functions that can be called from the Java/Kotlin Android SDK application code.
— The native functions that can be called from the Java/Kotlin Android SDK application code. KomputeModelML Class — The class that exposes the Kompute GPU Accelerated ML model logic.
— The class that exposes the Kompute GPU Accelerated ML model logic. CMake build file — The C++ build file responsible for compiling and linking all relevant libraries.
JNI Binding Functions
The JNI bindings in this case are provided via the KomputeJniNative.cpp file. The skeleton of the class is below — the function code logic has been redacted for simplicity, and will be explained in more detail below.
The JNI binding functions have to match the class functions defined in the Java/Kotlin code. The format for the function is:
Java_<modulepath>_<class>_<functionname>(env, thiz, ...params)
In our case the class is in the com.ethicalml.kompute module, in the class KomputeJni and its respective function — below the name of the functions will reflect this structure.
Diving one level deeper, we can now go through each section of the file. Starting with the imports, we can see below the imports together with comments outlining their core functionality.
In Android applications, we actually need to initialize the Vulkan dynamic library manually (which is something that you normally wouldn’t do outside of Android). The reason why this is required, is because the Vulkan library is not actually linked in Android phones. The reason why Android avoids doing any linking is for backwards compatibility, mainly to ensure the app doesn’t crash if the Vulkan library is not found in older phones.
This means we need to manually find the library in the C++ code and if found, link each function to its respective memory address pointer so our C++ framework can use it. Fortunately, this is something that Kompute does automatically, and we won’t be covering the details in this article as it probably would require an article in itself, but if you’re interested you can read more about it in this post, and you can see how Kompute imports Vulkan dynamically in the Core.hpp header file using the vk_ndk_wrapper_include files.
Below you can see the implementation of the function that exposes the initVulkan logic— Java_com_ethicalml_kompute_KomputeJni_initVulkan(...) . You can see inside this function we run InitVulkan() until the Vulkan library is successfully initialised, or alternatively fails if the maximum number of retries is reached.
Once Vulkan has been initialised, it is possible to call the remaining functions. The first one is the kompute function, which is in charge of training the model and running an inference request. The function receives the input Xi and Xj values, together with the expected predictions that the model will learn from. It will then return the prediction treating Xi and Xj as unseen data. The function will basically call the KomputeModelML class train function and predict function.
The last remaining JNI function that will be exposed to the Java/Kotlin code is the komputeParams function, which is in charge of returning the parameters that the machine learning model learns, namely the weight 1 , weight 2 and the bias parameters.
The only remaining functions are the utility functions that we used in the JNI logic — namely jfloatArrayToVector and vectorToJFloatArray — these functions are self explanatory, so we’ll leave it to the reader to explore further in the source if interested.
KomputeModelML Class
Now that we’ve covered the key functions that are bound to the Kotlin / Java class, we can cover the KomputeModelML C++ class that contains the Kompute GPU Accelerated logic.
The header file for the KomputeModelML class is outlined below, and contains the following key components:
#include "kompute/Kompute.hpp” — header containing all the Vulkan Kompute dependencies that we’ll use in this project
— header containing all the Vulkan Kompute dependencies that we’ll use in this project void train(...) —Trains the machine learning model using the GPU native code for the logistic regression model. It takes the input array(s) X , and the array y containing the expected outputs.
—Trains the machine learning model using the GPU native code for the logistic regression model. It takes the input array(s) , and the array containing the expected outputs. std::vector<float> predict(...) —Perform the inference request. In this implementation it is not using GPU code as generally there tends to be less performance gains through parallelization on the inference side. However there are still expected performance gains if multiple inputs are processed in parallel (which this function allows for).
—Perform the inference request. In this implementation it is not using GPU code as generally there tends to be less performance gains through parallelization on the inference side. However there are still expected performance gains if multiple inputs are processed in parallel (which this function allows for). std::vector<float> get_params() —Returns an array containing the learned parameters in the format of [ <weight_1>, <weight_2>, <bias> ] .
—Returns an array containing the learned parameters in the format of . static std::string LR_SHADER — The shader code that will be executed as machine code inside of the GPU. Kompute allows us to pass a string containing the code, however for production deployments it is possible to convert the shaders to binary, and also use the utilities available to convert into header files.
If you are interested in the full implementation you can find all the files in the repository. Furthermore if you are interested in the theoretical and underlying foundational concepts of these techniques, this is covered fully in our previous post.
CMake Build File
The CMakeLists.txt build file is a very important component in the Android NDK workflow. This section becomes particularly important if you wish to add Kompute into your own project. The cmake file is quite small so we’ll be covering each of the lines separately.
First we need to make sure the Kompute library is available. Usually you would run the INSTALL target of the Kompute build to be able to use/import the respective library. However in this case we need to make sure Kompute is built for the right Android CPU architecture —our simplest option is adding the main repository as part of the build, which means that Kompute will also be built for the right mobile architectures. If you want to include this in your project, you just need to make sure the path is relative to the Kompute cloned folder.
We now set the variable VK_ANDROID_INCLUDE_DIR to the vulkan include directory. This contains all the include files we need for Vulkan — for completeness, Kompute uses the vulkan.h header as well as the vulkan.hpp C++ headers.
We are now able to add the library that will be used by the Java/Kotlin Android Studio project, which in this case is the shared library kompute-jni .
We now are able to add all relevant include directories. This includes the VK_ANDROID_INCLUDE_DIR which we defined above, as well as the VK_ANDROID_COMMON_DIR which contains Android log.h . The single_include is what contains the kompute/Kompute.hpp header from Vulkan Kompute. Finally, we need to import the Kompute Dynamic library wrapper vk_ndk_wrapper_include which is necessary as the Vulkan library is imported dynamically. The logic behind this could become a series of articles in itself, so we won’t go down this rabbit-hole, but if you’re interested you can read more in this post, and you can see how Kompute imports Vulkan dynamically.
To compile the project we’ll want to make sure the VK_USE_PLATFORM_ANDROID_KHR is set, as this is what enables the Android configuration. For this project we also disable the Vulkan debug layers with KOMPUTE_DISABLE_VK_DEBUG_LAYERS.
Finally, we are able to link the relevant libraries to our kompute-jni shared library target. This includes:
kompute — This is the library created in the Kompute build.
— This is the library created in the Kompute build. kompute_vk_ndk_wrapper — This library also gets created by the Kompute build and contains the code to dynamically load and wrap the Vulkan library.
— This library also gets created by the Kompute build and contains the code to dynamically load and wrap the Vulkan library. log — This is the Android log library, which is required by Kompute to override logging.
— This is the Android log library, which is required by Kompute to override logging. android — This is the Android library which is used by the Android project.
That’s it — you are now able to run the application, which will execute the full build. You should then be able to see the Kompute app in your Android Studio emulator, or in you physical phone, where you’ll be able to trigger the processing on your on-device GPU.
Android Studio Running Project in Emulator (Image by Author)
What’s next?
Congratulations, you’ve made it all the way to the end! Although there was a broad range of topics covered in this post, there is a massive amount of concepts that were skimmed through. These include the underlying Android Development workflows, Vulkan concepts, GPU computing fundamentals, machine learning best practices, and more advanced Vulkan Kompute concepts. Luckily, there are resources online to expand your knowledge on each of these. Here are some links I recommend for further reading: | https://towardsdatascience.com/gpu-accelerated-machine-learning-in-your-mobile-applications-using-the-android-ndk-vulkan-kompute-1e9da37b7617 | ['Alejandro Saucedo'] | 2020-10-18 19:50:57.014000+00:00 | ['Android Ndk', 'Mobile App', 'Artificial Intelligence', 'Machine Learning', 'Vulkan'] |
Mini-LED is Coming to the iPad Pro. And I Don’t Care. | My iPad Air 4 revelation
Over the last few weeks, I’ve been testing an iPad Air 4. I love it. Which is odd, because I’ve been using a 12.9" iPad Pro, fully decked-out and with the Magic Keyboard for the last year or so.
I’ll be offering more in-depth thoughts about this switch soon, but the thing I love the most about the Air is the smaller form factor. In comparison to the 12.9" Pro, it’s beautifully portable and a joy to grab and consume content from, no matter where I am or what I’m doing.
Sure, I miss Face ID a little bit, but Touch ID is just as secure and intuitively placed on the power button.
However, there was another element of the iPad Air 4 which concerned me: the screen. Unlike the iPad Pro’s 120Hz ProMotion display, the Air features a 60Hz panel.
Having used the Pro for so long, the difference between the two devices was immediately noticeable. The 60Hz screen made the Air feel slower than its super-fast A14 Bionic processor suggested it should be and, if anything, actually made the new device feel older.
That bothered me. For about ten minutes. Then, I continued to use the Air as my main iPad, and the difference in Hz soon became less evident (even when occasionally switching to the iPad Pro). After a while, it simply wasn’t a ‘thing’ any more.
This fascinates me. ProMotion is a brilliant feature when you first discover it. An iPad with a 120Hz screen feels more buttery-smooth than ever… but, with hindsight, that’s only when flicking through your app screens or scrolling web pages. I know it also comes into play during video playback, but I can’t say I’ve ever noticed a difference.
Just like 60Hz, 120Hz quickly disappears as a tangible user benefit or disadvantage. By comparison, Mini-LED is likely to have less of an immediate impact during that first encounter, which means it’ll be even less likely to impress the uninitiated.
This begs the question: what should Apple be doing with the next iPad Pro if Mini-LED is going to be such a non-event? | https://medium.com/macoclock/mini-led-is-coming-to-the-ipad-pro-and-i-dont-care-e24a5f0ec2b3 | ['Mark Ellis'] | 2020-12-29 06:30:18.529000+00:00 | ['Gadgets', 'Apple', 'Tech', 'iPad', 'Technology'] |
Talking to Latinos Today | Talking to Latinos Today
Henry Gomez | Director of Strategic Planning, Zubi
GTB welcomed Henry Gomez of Zubi and other key panelists to share their thoughts and experiences on Hispanic culture, collectivism, and other key insights of this ever-growing segment.
As a global agency with offices in Spain, Latin America, and Central America, this was a great opportunity to learn from our sister agency. | https://medium.com/gtb-tweets/talking-to-latinos-today-3f44fc64488 | [] | 2019-02-15 14:50:03.309000+00:00 | ['Zubi', 'Henry Gomez', 'Startup', 'Gtb Talks'] |
Fathers, Kittens, and Loss | Slapstick
“A kitten is chiefly remarkable for rushing about like mad at nothing whatever, and generally stopping before it gets there.” — Agnes Repplier
My dad first met Buster and Bella when my family came to visit that Thanksgiving. We’d just moved into our first house, a few miles away from the rental property where the kittens had found us. We still hadn’t unpacked, but I brined and roasted a turkey anyway. My dad cracked up over Buster, who ran and skidded across the shiny hardwood floors, sliding several feet before somersaulting, spinning around and doing it all over again.
“Buster KITTEN,” he chortled. “Like Buster KEATON!”
Yes, the antics of our crazy little kitten-boy had reminded us of slapstick comedy, and the name seemed appropriate. My dad immediately picked up on it and enjoyed pointing it out to other people who didn’t quite get it.
It cracked him up every time he said Buster’s name.
Loss
These days, we have only three cats, and one parent between the two of us.
Buster, technically a “senior’ cat at 10 years old, has grown more reserved, but he still has plenty of cattitude. We like to say the devil’s in the tail, and his is always held high. Bella was born a bob-tail, so we figure Buster got most of the devil. His nickname since the early days has been Gatito Diablo.
My dad never liked cats when I was growing up. We were dog people, through and through — and big dogs at that. It wasn’t until the cat my sister and I adopted in college temporarily moved in with my parents that he suddenly became smitten. That cat was never allowed to leave, and lived to the ripe old age of 21, in the lap of luxury.
I wish he could see Buster now. I imagine he’d still have a chuckle over the name, but instead of enjoying the antics of a kitten, he’d find a comfy chair in a sunbeam, where an elderly man and an elderly cat could have a quiet afternoon nap together. | https://medium.com/writing-heals/fathers-kittens-and-loss-69c84e58a84 | ['Kathryn Dillon'] | 2019-10-21 14:33:10.655000+00:00 | ['Grief', 'Loss', 'Healing', 'Cats', 'Writing'] |
WTF is ‘Digital Product’? | This little ditty is an extract from a wider piece on the state of the digital industry originally published on the Marvel Blog, now available here. Below is a slightly edited extract attempting to bring some definition to Digital Product and Digital Product Work? Why you ask? Well pretty much everyone is talking about digital products these days so… because.
What is ‘Digital Product’? In as human speak as possible — in the style of trying to explain to my mother what I do — here goes…
A Digital Product is a software enabled product or service that offers some form of utility to a human being.
Real world examples: Your favourite car service app such as Uber or Lyft, your mobile banking app with Chase or Barclays, your shoppable H&M or Nike app, the digital dashboard in your Tesla car (lucky you), the interface of your consumer electronics devices such as your phone or smart watch, the controller app for your Sonos speakers, or an investment bank broker’s trading platform. Some products and services are made up mostly or entirely of software such as your Facebook messaging or Tinder ‘dating’ app. This is the software that is eating the world as almost all businesses come to be digitised and run on software.
The digital touchpoint through which a human interfaces with said product or service can sit on many types of platforms and devices. Those can include web, mobile, auto, wearables, VR and beyond. Things are slowly moving beyond the visual interface towards the more natural form of conversational interface. Such examples are speaking with your Amazon Echo or accessing services in written conversation in your messaging app. As you are interacting with the product it does more than simply display information, as say a marketing website does. Complex interactions take place between the part you use (the front-end) which is connected (integrated) into the wider system that runs the service in its entirety (the back-end). This means software is core to it all.
‘Digital product’ or ‘product’ work is the delivery of the digital touchpoints of a product or service.
Delivery can involve end-to-end concept, design, and engineering of the digital product, bringing it through to market launch and beyond. Delivery can draw on multiple disciplines of not only design and engineering but also business strategy, product management, data science, and marketing.
There are multiple players involved as well as not inconsiderable cost. So the aim is to make the process as holistic as possible in order to get the product to market efficiently. So the method by which you deliver the product becomes as important as the product itself. Well-established best practices of product development include integrated teams, Agile (not waterfall), SCRUM, Lean, and Continuous Delivery methodologies. All of this makes digital product delivery an expensive investment. Engagements can run over many quarters and even years, commanding a high premium and margin. The budget for this kind of work usually comes from places other than the marketing department, which usually makes shorter-term, campaign-based investments.
The point is this... All clients' businesses are becoming heavily software driven. So client demand has shifted towards the actual delivery of digital products and services into market, rather than just the strategy behind or marketing of them. In addition, Design has become recognised as a critical factor in creating successful products and services. All of this has placed a premium on those skills and the relatively few companies that can offer them. Thus the red hot M&A market for studios who can offer those capabilities and a whole lot of posturing by those who can't.
For those who wish to nerd out on the topic, here’s pretty great stab at What is Digital Product Design by Paul Devay. | https://medium.com/fktry/wtf-is-digital-product-3ae51ae2664f | ['Jules Ehrhardt'] | 2018-05-14 02:27:12.635000+00:00 | ['Innovation', 'Product Design', 'Digital', 'Design'] |
Customer Segmentation Tutorial: Data Science in the Industry | Step 1: Loading and Cleaning the Data
The data can be loaded with the pandas library, using ISO-8859–1 encoding.
import pandas as pd
data = pd.read_csv('/kaggle/input/ecommerce-data/data.csv',encoding='ISO-8859-1')
data.head()
Calling len(data) yields 541909 , meaning that there are 541,909 rows in the data. Running len(data[‘CustomerID’].unique()) returns 4373 , meaning that there are 4,373 distinct customers. Ideally, our final data will have 4,373 rows.
We need to deal with missing values. Checking len(data) - len(data.dropna()), which returns the number of rows that would be dropped if we deleted any row that had a na value, outputs 135080, meaning that between 1/5 and 1/4 of the data would be missing if we simply dropped any row with a na value.
So where are most of these missing values coming from? The answer: missing CustomerIDs . The code data[‘CustomerID’].isnull().sum() , which checks how many missing values there are in the column CustomerID, returns 135080 . All of the rows that would have been dropped using data.dropna() had a missing CustomerID .
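Spelled out as runnable code, the checks described above look something like this (a small sketch; the counts in the comments are the ones reported in this section):

rows_lost_if_dropped = len(data) - len(data.dropna())      # 135080
missing_customer_ids = data['CustomerID'].isnull().sum()   # 135080, all from CustomerID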
Normally, we would want to impute the data when so much of it has missing values. However, due to the nature of IDs, which are put on a linear scale (as numbers) but are inherently classes, imputation methods like averaging or taking a median won't work. We may want to try KNN, but there isn't much data for it to work off of, and it may be inaccurate.
For the purposes of this project, I’ll be dropping all rows that have na values. However, this may not be wise in a serious business project because of selection bias — say that all the customers whose IDs are na belong to some additional cluster or provide valuable information that is discarded because their IDs are unknown. In a more formal business setting, it may be wiser to try and impute the IDs through a method like Decision Tree, training the classifier to recognize a customer’s ID based on its purchases. For this scenario, a high-variance algorithm needs to be used due to the astronomical number of classes.
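As a rough sketch of that alternative (illustrative only: the scikit-learn classifier and the choice of Quantity and UnitPrice as features are assumptions, not something done in this project), it might look like:

from sklearn.tree import DecisionTreeClassifier

known = data[data['CustomerID'].notnull()]
unknown = data[data['CustomerID'].isnull()]
features = ['Quantity', 'UnitPrice']  # purchase-based features; a richer set would likely help
clf = DecisionTreeClassifier()
clf.fit(known[features], known['CustomerID'])
imputed_ids = clf.predict(unknown[features])

For this tutorial, though, we simply drop the rows: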
data = data.dropna()
data.head()
After dropping the missing values, our data is len(data) = 406829 rows long. That’s still a considerable amount of data.
The InvoiceNo and StockCode columns are irrelevant. We can remove them:
data.drop(['InvoiceNo','StockCode'],axis=1,inplace=True)
data.head()
Next, we’ll be vectorizing the Description column to cluster products into groups. To prepare the data for this, we’ll need to make the words lowercase and remove any punctuation, for example in row 4 with “RED WOOLY HOTTIE WHITE HEART.”
Let’s make a function that cleans the description. It uses regular expressions to remove punctuation and lowers capital letters.
import re
def clean(description):
    return re.sub(r'[^\w\s]','',description).lower()
As a demonstration of the function:
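(A quick sketch of such a demonstration, using the example string mentioned above; the original post shows its own interactive output.)

print(clean("RED WOOLY HOTTIE WHITE HEART."))
# prints: red wooly hottie white heart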
It’s ready to go! Pandas’ .apply function lets us apply a function to a column.
data['Description'] = data['Description'].apply(clean)
data.head()
Great! The text is ready to vectorize. Two last things — we need to reset the index and quickly create a new column. Because we removed all na values, there are going to be some holes in the indexes. If let untamed, it will cause problems for us in the future.
data = data.reset_index().drop('index',axis=1)
In the future, we will need to know the total amount of money spent on a product — rephrased, the unit cost times the quantity of items.
data['Total'] = data['UnitPrice'] * data['Quantity']
We’re ready to move on! | https://medium.com/analytics-vidhya/customer-segmentation-tutorial-data-science-in-the-industry-a6b486f0b0b0 | ['Andre Ye'] | 2020-05-28 16:57:26.282000+00:00 | ['Data Analysis', 'Machine Learning', 'Statistics', 'AI', 'Data Science'] |
50 Powerful Call-to-Action Phrases | Are you looking to increase your conversion rate? Call-to-Actions are an essential part of the conversion process, but what kinds of Call-to-Actions should you put on your website? CTAs should be simple yet effective, and should catch the attention of your visitors. The formula for a successful CTA page title consists of combining such sales buzzwords as “free,” “discount,” “offer,” “gift,” “guarantee,” with action-oriented words like “click,” “download,” “request,” and “send.” Listed below are 50 powerful Call-to-Action phrases that visitors can’t resist clicking on.
1. Download now
2. Click here
3. Join now
4. Download here
5. Start now
6. Click here for details
7. I urge you to…
8. Get a free…
9. Talk to an expert
10. Immediate download
11. While supplies last
12. Money back guarantee
13. Money back guarantee, no questions asked
14. Get it now!
15. Act quickly
16. Free shipping
17. Shipping discount
18. Come in for a free consultation
19. Come see us today
20. Reserve your spot now
21. Come in today
22. Start your trial
23. Start your free trial
24. Offer expires…
25. Satisfaction guaranteed
26. We’d like to hear from you
27. I can’t wait to hear from you
28. Limited availability
29. Limited time offer
30. Best value
31. For more details call…
32. Please don’t hesitate to call
33. We’re waiting for your call!
34. Send for our free brochure
35. Send for our free catalog
36. Subscribe to our email list
37. Subscribe to our newsletter
38. Send in your application today!
39. Apply here
40. Order now and receive a free gift
41. Tell us what you think
42. Take our quiz!
43. Sign up online at…
44. Get started today
45. Request your FREE quote today
46. Just reach for your phone
47. Members only/Subscribers only
48. Contact us
49. It’s important that you respond promptly
50. Download our eBook for more information
Now that you know what attention-grabbing Call-to-Actions look like, you can implement your own. Don’t just copy and paste the CTA phrases from the list above. Tweak the CTA phrases so that they apply specifically to your company and your products and/or services. For example, instead of just saying, “Start your free trial,” you can say “Start your free trial with (insert company name here) so your CTA is more personalized. Use these phrases as a template for your website’s CTAs, sit back, and watch the clicks roll in! | https://medium.com/stevens-tate-marketing/50-powerful-call-to-action-phrases-b369b39d9955 | ['Nicole Wagner'] | 2017-07-21 15:12:45.241000+00:00 | ['Inbound Marketing', 'Conversion Rate', 'Call To Action', 'Phrases', 'Marketing'] |
Introducing Origin Protocol | Since the beginning of the Internet, online marketplaces have created virtual town squares where buyers and sellers can meet, connect, and ultimately transact. Online marketplaces in the sharing economy have especially thrived in the past few years. Companies like Uber, Airbnb, Getaround, Fiverr, and many others have reached regional and global scale, positively impacting millions of users. Collectively, tens of billions of dollars of transactional value flow through these household names every year.
These companies have undoubtedly changed the world for the better, providing suppliers with new income sources and giving consumers a diverse set of new services with just a few taps on their smartphones (food delivery, on-demand massage, carsharing, ridesharing, homesharing, freelance gigs, etc.).
However, as centralized businesses, there are also several significant shortcomings with these sharing economy marketplaces. They charge heavy transaction fees (typically 20–30%), are susceptible to censorship and regulation, and have failed to reward early buyers and sellers that are responsible for their successes (e.g. the first 100 drivers on Uber or the first 100 hosts on Airbnb).
Today, we are introducing the Origin platform, built on Ethereum and IPFS. Our vision is to enable the sharing economy of tomorrow, one that is fully decentralized and free of onerous transaction fees, is censorship-resistant, and rewards network participants for contributing to the platform.
Origin empowers developers and businesses to build decentralized marketplaces on the blockchain. Our protocols make it easy to create and manage listings for the fractional usage of assets and services. Buyers and sellers can discover each other, browse listings, make bookings, leave ratings and reviews, and much more.
We envision a future where open-source code can replace multi-million (and billion) dollar businesses. Smart contracts will become the new virtual town square where buyers and sellers can meet, connect, and transact. We are excited about the decentralized sharing economy and hope you are as well.
Matt and Josh
Founders of Origin
Learn more about Origin: | https://medium.com/originprotocol/introducing-origin-6e7e3a1cd1c9 | [] | 2020-01-14 18:52:47.870000+00:00 | ['Marketplaces', 'Product', 'Sharing Economy', 'Startup', 'Peer To Peer'] |
Q&A with Ashley Fern, Director of Digital Content and Strategy, Betches Media | Q&A with Ashley Fern, Director of Digital Content and Strategy, Betches Media
This week, The Idea spoke with Ashley Fern to learn more about how Betches Media leverages audience feedback and maintains a unified brand image across different channels and satellite communities. Ashley recently launched the company’s newest vertical, Betches Travel. Previously, she helped build the Betches Brides vertical, including its podcast. Subscribe to our newsletter on the business of media for more interviews and weekly news and analysis.
Can you give me a brief overview of your team and role?
I’m the Director of Digital Content and Strategy at Betches Media. I joined a little over a year ago and at the time it was a new role for the company.
I manage Betches Media’s digital strategy across all platforms including our website, social media accounts, podcast, and video. I also work closely with our brand partnerships team.
My job is to connect all of the different content we produce and make sure we’re telling a cohesive story. This means that regardless of whether an audience member is a site reader, podcast listener, or Instagram follower, they will still be aware of what Betches Media is doing across different channels. Our focus is to deliver a consistent message — what you listen to, for example, will be supported by what you might then go on to read on our site.
Underneath me, my department is comprised of the editorial and social teams. Editorial includes our Editor-in-Chief Sara Levine, editorial assistants, and our network of freelance writers. Our social team is made up of social media managers who run our main instagram handle (which is our most well-known social media with 6.9 million followers) and our 12 satellite communities such as @betchestravel, @dietstartstomorrow, and @betches_sup. All together, we have over 8 million instagram followers across our 13 accounts.
What does your day-to-day look like?
My work is very meeting-based. My week usually starts off with a social content brainstorm that is led by our editor-in-chief that I help oversee. That same day, I also have a weekly sales and partnership meeting. I also meet with each person who manages a satellite account to determine what our current priorities and initiatives are. Aside from that, I meet with our three founders and serve as a liaison between upper management and our more specific teams.
What has been your favorite initiative to work on so far?
If you had asked me three months ago, I would’ve told you that it was our Betches Brides vertical, which I launched last March. I created the Instagram account, produced the podcast, and managed the Facebook group that currently has 10,000 members. At the time, a lot of people in our office, including myself, were engaged. And, because we see ourselves as maturing with our audience, launching a brides vertical seemed like a natural and appropriate opportunity.
Most recently, however, I launched Betches Travel, which garnered 40,000 instagram followers in its first week of launch. When we launched this, our strategy was to make sure that every satellite account had a piece of media that was travel-related but catered to that respective vertical. By doing this, we could move audiences from one vertical to another.
What drew Betches Media to travel?
Millennials are obsessed with travel largely because of Instagram. But, we noticed that there really isn’t a platform to get real travel advice. You could go on TripAdvisor, but you don’t know whether the reviews are trustworthy.
We want to create a space that provides honest reviews and experience of ourselves as audience members. A picture may be worth a thousand words but we want to help tell the authentic story behind it.
This has led us to create travel guides as well as hotel and restaurant profiles (and, of course, memes and tweets) based on our own criteria cards. With our first restaurant profile we rated the restaurant not only on its food quality, but also on things millennials care about in the travel space like the cost, the availability of gluten-free and vegan options, and the ease of making food adjustments.
What challenges did you face when launching these verticals?
I think the hardest thing is finding messaging that hits everyone’s budget and demographic.
With some of our early Betches Brides podcasts episodes, for example, we featured vendors that we loved but we received a lot of the feedback that it wasn’t accessible to our audience’s price range. So, we pivoted our messaging to be more inclusive. Our appeal lies in the fact that we’re reaching our audiences not as industry experts but as people that they can relate to.
Luckily, our audience is super engaged. They love responding to everything and voicing their opinions. And, of course, we love listening to their opinions because we don’t want to create content that they don’t find useful.
When we began looking into travel, we posted 20+ polls asking our audiences about their preferences: Do they stay in boutique hotels, Airbnbs, or hostels? Do they fly first class or economy? I tracked all these responses with our consumer insights and analytics manager and began to identify both the things that would draw in our audiences and also the things that would alienate them. We quickly learned that while our audiences are very open to spending money on travel, they want to know how to make it go further.
What is the most interesting thing (product, tool, article, social channel, special project, redesign, etc.) that you’ve seen from a media outlet other than your own?
The New York Times publishes interactive playlists that take readers through an artist’s album and reveal the backstory behind it. The artist’s previous tracks are integrated and the reader gets to learn more about the inspiration behind the album. As someone who loves to read, I love the idea of an interactive book where you get to listen to a song while reading about it. It’s an awesome way to bring words to life.
Rapid Fire
What is your first read in the morning?
Instagram feed or email.
What was the last book you read?
Lie to Me by J.T. Elison
What job would you be doing if you weren’t in your current role?
I would be a party planner. | https://medium.com/the-idea/q-a-with-ashley-fern-director-of-digital-content-and-strategy-betches-media-965c65c1d8e0 | ['Tesnim Zekeria'] | 2020-02-19 18:44:01.506000+00:00 | ['Journalism', 'Travel', 'Digital Strategy', 'Subscriber Spotlight', 'Brand Identity'] |
Looking For Jobs After College? Choose Blockchain Over Cubicles | When I first fell down the blockchain rabbit hole, I didn’t think I’d end up this deep.
Cryptocurrencies and other Blockchain projects offer a wealth of opportunities with extremely low barriers to entry.
If you want to get involved, and I mean involved beyond the fact that you just want to put some money in and magically see it grow 1000x, all you have to do is put yourself out there.
I mentioned it in my last post, “Why I Invest In Cryptocurrency”, but Bitcoin has only been around since 2009.
Even the experts in this field don’t have much of an upper hand on newcomers — what’s 8 or so years?
That’s not even a decade.
Beyond the low barrier to entry, we’re also at a point in cryptocurrency’s development when new types of talent are needed in the space.
Lead Engineers shouldn’t be running press conferences.
I’m going to write another article on “The Problem with ICO’s” this week — so I won’t delve into that topic just yet.
With the market needing new talent, now is the perfect time to put yourself out there and grab whatever opportunity you can see. Your options are limitless. Whatever your field, it can be applied to projects in this space, and who knows — you might just end up changing the world.
You don’t need to have a large amount of money to invest to get involved.
IF YOU WANT TO WORK IN THE INDUSTRY
CREATE VALUE
I can’t explain to you how simply getting my name out in the space through networking, writing, and helping others has benefited me.
The past few months have brought some great opportunities my way, I became a partner at a Blockchain Accelerator, started working with an exciting new project— and I can trace all of these opportunities back to a single moment.
The moment when I decided that I wanted to contribute value to the blockchain community.
A lot of people I’ve met in business have told me to never do anything for free, they’ve told me it will make people devalue my work. I disagree. My willingness to help people, for free, has brought me more benefit than anything else. If anything, it’s borderline selfish of me. The point is, reach out to some companies you’re interested in working with and offer to help. This industry is so young, and most of the companies in it are in startup mode. They can probably use some assistance, and if you offer help, they might just ask you to join the team.
COMMIT TO YOUR GOAL & CREATE ACTIONABLE STEPS
Not too long ago I wrote down all of my crypto related goals in a notebook; it’s sitting next to me now.
Everyday, I look at this paper and write down what I can do in the next 24 hours to further myself towards these goals.
Reverse engineer your goals and tackle them one day at a time, don’t get ahead of yourself.
By doing this I’ve made solid steps towards achieving what I want in an alarmingly short period of time. I hope that doesn’t come off as arrogant — if anything, I’m still in disbelief at the velocity at which I’ve progressed.
It’s insane.
Entrepreneurship is like riding a lion. Everyone thinks, “Wow look at him he’s so brave and cool, I wish I could do that”, while the Entrepreneur thinks, “This is cool but when did I get up here and how the hell do I keep it from eating me alive”.
Keep making incremental steps towards your goals, whether that be increasing your portfolio value or starting your own cryptocurrency — take it one step at a time.
You might be surprised how fast you find yourself gripping the lion’s mane and wondering those same questions — contemplating how far you’ve come.
CREATE A PERSONAL BRAND ONLINE
This is crucial in any industry. Your brand is what people associate you with. It’s your personality, your work, your sense of humor: it’s the perception you create for yourself.
I’m not going to dive too deep into this because honestly, no one can explain it better than Nicolas Cole. He’s a dear friend and the one who first taught me the importance of creating a personal brand. If you haven’t read his work, I highly suggest you check it out.
Also, please do me a favor and leave a comment on one of his articles saying, “I heard Reza Jafery beat you in a typing test”.
It would really make my day.
WORK YOUR ASS OFF
I never said it’s easy.
To have what most don’t, you have to do what most won’t.
You have to put in the time. Research is key, but even more than research, is sticking to your plan. Keep your goals in mind, whether that’s working with cryptocurrencies, investing, or even starting a non-profit dedicated to researching ways blockchain can positively impact the 3rd world.
You just need to do it. It’s incredibly simple. If you’re willing to work smart — and focus on spending your time on activities or studies that either A) enrich your knowledge relevant to your goal, or B) make progress towards your goal — you can make it happen.
Just remember that closed mouths don’t get fed.
You wouldn’t believe the amount of opportunities that have come my way from simple Twitter DM’s or cold emails.
Put yourself out there, figure out what you want to specialize in or be apart of, and chase it with reckless abandon.
IF YOU WANT TO BE A TRADER/INVESTOR
LEARN FROM YOUR MISTAKES
I set off on my cryptocurrency journey halfway through 2017.
I don’t talk about when I started very often because we’re in an age of weighing people’s worth by the number of days they’ve spent peering over charts.
Which is a little ridiculous.
Don’t get me wrong, there are plenty of O.G’s whom I tip my hat to — I wouldn’t have learned as much as I have this quickly, if it weren’t for the thought leaders in this community.
There is also a lot of negativity whenever someone who is obviously new, asks something that only someone who’s obviously new would ask.
We should be lighting the path for the curious, for the new investors, not shaming them.
Which is why I bring up my initial dalliance into cryptocurrency.
I’m proud of the short length of time I’ve been involved in crypto. In less than a year, I’ve managed to see higher returns than any traditional investor could dream of.
I’ve learned about new technology, taught myself technical analysis (with the help of a mentor), and even managed to get involved on a deeper level by working with the companies I mentioned earlier.
The best part of it all is that I feel like I haven’t even scratched the surface. One month ago I decided I wanted to work with blockchain projects and get paid in crypto. Today, that’s a reality.
Because I don’t care how long anyone has been doing this.
This industry is young enough, that a few years of experience doesn’t give anyone much of a head start.
A few years of advantage can be outweighed by working harder, smarter, and bringing in more value to the community than others are willing to.
I started investing less than a year ago and I will outwork you.
“You think that your time spent trading crypto is your ally? You merely adopted crypto. I was born in the bear market, molded by it. I didn’t see green candles until I was already a man.”
I’ve been waiting for an opportunity to pull this quote out.
I was baptized in the Great Bear Market of 2017 (I don’t think anyone has ever referred to it as that, but I’m going to roll with it).
The first time I bought a coin, I bought Ether (ETH), the coin that fuels the Ethereum Network, at its all time high on June 6th or 7th. It was right above $300 at the time. I had been studying Ethereum for weeks, reading all about it, Bitcoin, and other blockchain projects.
I was convinced Ethereum was going to change the world — but I also knew from a brief stint day trading that one should never buy into the tail-end of a uptrend.
“Don’t chase green candles”, I thought.
Which is investor speak for “don’t buy an uptrend that’s already started as you never know when it will stop”. It’s a part of the mantra conveyed by the famous investing quote, “buy when there’s blood on the streets”.
I waited patiently for the price of Ethereum to drop. Weeks passed and the price continued to rise. Soon it was at more than double the price at which I had initially considered investing.
I panicked.
I bought the top.
It lost over 50% of its value the next day.
I was rekt.
Too stubborn to realize the loss and cash out, I decided to spend the next several months studying everything I could get my hands on regarding Blockchain, Technical Analysis (TA), and Investment Strategy.
RESEARCH
I started scalping Ethereum. Selling the highs and buying the lows, using what I was learning from studying TA.
I obsessed over chart patterns, lagging indicators, leading indicators, Fibonacci, Elliott Wave — I fell so deep into the matrix, I forgot I was even plugged in.
Soon, despite Ethereum still being 50% of the value I had bought into, I was back at my breakeven investment. I had doubled the amount of Ethereum I initially owned.
I started investing in ICO’s. EOS, BAT, Bancor, Tezos — I was doing a shamefully small amount of research, but it being the height of the ICO craze, I ended up doing pretty well. Making a decent percentage gain on the ICO’s I had put my Ethereum into, I started getting into Altcoins- which is when my portfolio really started to explode.
While I may have only been around for under a year — I like to think I was forced into the accelerated program — where I had to either learn fast, or lose everything. It wasn’t my initial strategy but circumstances forced me into it, and now I am glad.
Most of what I’ve learned, the knowledge that allowed me to recoup those losses, has come as a result of the losses occurring in the first place.
“Adversity is like a strong wind. It tears away from us all but the things that cannot be torn, so that we see ourselves as we really are.” — Arthur Golden
You’re not too late to get into cryptocurrency; this market is just getting started.
Furthermore, you have a better chance of outworking and surpassing your peers in this market because it’s 24/7.
If you’re willing to put in the time and work harder than everyone else, you will come out on top.
— — — — — — — — — — — — —
If you want to learn more about trading, you can find more of my cryptocurrency articles below | https://medium.com/hackernoon/5-ways-to-win-in-cryptocurrency-e952074a14f | ['Reza Jafery'] | 2018-05-27 09:44:38.691000+00:00 | ['Investing', 'Startup', 'Blockchain', 'Cryptocurrency', 'Bitcoin'] |
Nuhom — Dismantling The Traditional Homebuying Process | Nuhom — Dismantling The Traditional Homebuying Process
A seamless online home buying process that puts the buyer first and saves them an average $7,500 of the agent fee
The Problem
There are 75 million digitally native millennials that are about to start families and will begin buying homes. The real estate industry, however, has not been able to successfully adapt to consumer behavioral shifts and technology. It is true that home buyers use tools like Zillow and Redfin to find their home, but it also isn’t surprising that six out of 10 homebuyers find the home they will ultimately buy without an agent. Technology has empowered house hunters, yet, for the majority of the process, people still buy homes the way our parents and grandparents did, while paying massive agent fees. Although all forms of buying and selling are moving online, home buying remains inconvenient, slow, and not transparent.
What The Company Does
Nuhom has created a seamless online home buying experience that puts the home buyer first. Through a guided digital buyer-first experience, Nuhom allows home buyers to find their homes completely online, allowing modern buyers to find homes, tour homes, make offers, and save money. Nuhom believes that home buyers should be rewarded for their hard work researching and finding homes online. That is why Nuhom gives clients 50% of the buyer’s agent fee back at closing. Traditional buyer agents earn about 2.5% of the home purchase price, which means that clients save an average of $7,500 with Nuhom.
The Market
Real estate is the largest undisrupted form of buying and selling in the U.S. worth more than $1.6 trillion annually. Today, online home buying is less than 1% of the total market, but expected to grow as 75M digitally native millennials are beginning to start families and will demand convenience and speed. Competitors include other tech-driven real estate companies that are also aiming to disrupt the industry such as Zillow, Opendoor, and Redfin. Nuhom acknowledges these competitors and is aggressively taking a buyer-first approach to solve many of the pains that exist today through better design, use of technology, and unique operational models that reward the modern home buyer.
Business Model
Nuhom earns the balance of the commission after providing home buyers 50% of the buyer agent commission. On average, that comes to 1% of the home purchase value. The company is also developing new ways to continue to provide value to clients after the home purchase is complete.
Traction
Nuhom launched at the start of the coronavirus, compelled to provide better value, savings, and safety for home buyers. The company is currently operating and generating revenue while they re-invent the home buying experience. The company has formed partnerships with lenders, real estate attorneys, and home inspectors to help home buyers get financing, and complete the transaction with ease. Nuhom is also receiving guidance from Northeastern University’s venture mentoring network program in marketing, technology, and operations.
Founding Team Background
Co-Founder Rishi Palriwala has experience in the real estate industry having worked as a real estate specialist for Century 21 Avon and as an analyst at John Hancock Realty Advisors. He has also worked at GE and co-founded several other ventures. Co-Founder Afjal Wahidi previously worked as the head of mobile and innovation at Staples and on several previous ventures as well.
What They Need Help With
Nuhom is always looking for advisors, investors, and talent. The company is on the lookout for talented engineers, creative designers, marketers, and data scientists who care about what they build, and who want to help millions of people find the most important place in the world: Home. Connect with the Nuhom team.
Subscribe To The Buzz To Get More Startups In Your Inbox | https://medium.com/the-startup-buzz/nuhom-dismantling-the-traditional-homebuying-process-279fe5c03f0b | ['Bram Berkowitz'] | 2020-11-24 20:03:31.289000+00:00 | ['Venture Capital', 'Startup', 'Proptech', 'Real Estate', 'Home Buying'] |
7 Valuable Principles to Lower Pride That Causes Unhappiness | 7 Valuable Principles to Lower Pride That Causes Unhappiness
Everybody can have small doses of pride, causing an unfulfilled life
Photo by sergio souza on Unsplash
We live in a gray world. You see evidence of high doses of pride with self-defiance at every turn in the news. Businesses, public servants, and individuals not cooperating or collaborating.
People are making their own norm choices.
Maybe you’ve found yourself following what you’ve seen others do even when it’s against better judgment.
The way to stay safe is to participate in abstinence and know your stance or morals before a situation arises that needs confronting.
Ah, but living out freedoms is more exciting than playing it safe, as younger people would think.
When I was in my early 20’s, I thought I was invincible. That’s the only message my brain fed me. I didn’t have enough experience with life to know what could happen when you live parts of your life carelessly and endanger yourself. I was in tip-top shape and prime health, had good friendships, a respectable work title, and thought I had the world at my feet. Can you relate to a time you felt like that?
I believe the 20-somethings today have that same belief as they go about lives ranging from life as usual to recklessly self-defiant. Imagine if under peer influence or social desires, they participate in a large group party like those similar in a fraternity house or depicted in the classic movie Animal House, where all kinds of mayhem can occur.
The source could be individual pride, demonstrated in an experience bigger than oneself.
If participants were to pay attention to their behaviors, they would see signs that they’re heading in the wrong direction.
They may not be self-aware yet of the damage they may incur from their pride and ego.
They could end up paying with problems or accidents that stem from their actions, lasting long (or for a lifetime) after a bad decision and instant gratification.
Even their short lived fun isn’t actually contributing to their happiness as they are feeding the parts of their mind and body that won’t give them authentic joy and happiness.
That’s why they look for the next source of excitement until that unsatiated want is replaced or they grow out of their current ways.
We can actually learn from youngsters in what not to do.
But adults far beyond their 20’s (and in all ages) still make these mistakes because they don’t know how to turn to the better ways and reap a more rewarding life.
Today I want to focus on just one way that can lead to bad decisions, and that’s foolish pride that any one of us can fall victim to.
The way to not stumble in prideful actions, is to stay humble. It’s a simple solution but not easy to carry out consistently.
Good vs. Bad Pride
At any moment, your mind can play tricks on you, or all of a sudden you need to react to a situation. The first line of defense and default response is your pride.
That’s why we get in trouble with our prideful tongue and behaviors. Then it can become too late as you don’t want to admit your mistake out of… guess what? Pride again. Everywhere you turn, that nasty big “P” word can show up.
If you didn’t have a little pride, or little “p” we’ll call it, you wouldn’t have a sense of worth. So a healthy dose of pride is good to celebrate, and pat yourself on the back to motivate you. Feeling good and proud about an achievement is a little “p” pride to smile about that makes life worth living.
But when pride hurts or potentially hurts others, or there’s a feeling of superiority, the gray line is crossed to the Big “P”, harmful pride.
Anger is an emotion associated with harmful Big “P” pride, such as getting mad over other people’s shortcomings when they fail to meet your expectations of perfection, or over their failure to drive the way you think people should.
Usually when we deal with people’s pride, it’s the Big “P” type that makes other people whisper negative things about those people.
If you’ve ever encountered a person who’s fully prideful or has prideful traits, or it’s the person in the mirror, below are some ways to help that person change. Because the bottom line is pride creates unhappiness, strained and unfulfilling relationships, and limiting lives.
Prideful people can have the squeaky wheel gets the grease mentality that doesn’t bring happiness or guarantee longevity. Peace seeking people know how to reach a higher consciousness for true happiness and longer living.
7 Principles to Lowering Pride
1. Seek a better life.
If you seek to be popular led by pride, you will eventually do the wrong things. Many people want to have some degree of popularity within their groups. The truth is that doing what is popular is doing what others want (to hear or see) for a self-serving reason or motivation.
Authenticity is doing what is real to you even if it’s not popular. Standing your ground in what you believe and being wise to share when what you believe can help others even if it may hurt your popularity or reputation.
If you find yourself hurting others, then that’s a sign you’re heading in a pride seeking direction.
In consciousness, you can turn around by focusing on building up other people and this world.
2. Find compassion for others.
Turn any prideful judgment and superior feelings to empathy and sympathy. If you’re not a naturally compassionate person, compassion can be learned.
It’s a choice.
A good way to test, is to see how you feel watching sad news stories. If you feel sympathy for a less fortunate someone or a group you don’t know, you can feel compassion for those you care about. If you feel cynical or unaffected, you may want to do a pride check.
Purposefully imagine putting yourself in their less fortunate shoes. Think of a loss that you had in your life that you survived where you can relate to this group. Maybe your family growing up struggled to pay bills or you did.
If you’ve never faced a loss, think how grateful you are that you haven’t, and in your overflow of gratitude, you can feel compassion for them.
3. Have written reminders for your weak pride areas.
Place reminders where you may be triggered to act prideful based on your past behaviors. If it’s road rage, keep a kind message by your steering wheel.
Reminder notes may seem silly, but if they help once, they can change your pride weakness. They cost virtually nothing compared to hurting others. Remember, what goes around, comes around.
If your brain is generating prideful thoughts, searching for better thoughts in the moment or when you’re tired is not going to be easy especially if you’re distracted with traffic. But you can stop yourself with a reminder in front of you.
4. Serve others.
Remind yourself or seek to learn what a servant heart does. Practice daily kindness. Show up with a like-minded group that serves or thinks of others beside themselves.
A 5-minute daily, kind gesture can change your entire intention for the day. It’s not what you can get, but what you can give.
Purposefully overlooking an offense, counts.
5. Switch to higher ways in conversations.
Meekness is not weakness. Pride can create scoffers who see kind people as weak. Be strong enough to take their hurled insults and their dismissal of your existence. Stay your kind self, despite sneers or mocking criticism.
You could be talking to their ego that they will one day regret. And you may have long gone and moved on.
Restrain your tongue and listen more in the moment. This gives you deeper calm and control for any situation. Use your non-verbal cues, such as leaning in to show you’re listening.
Use body language such as hands clasped loosely in front of you to remind yourself to be conscious of your actions, or to be reserved and polite.
Interrupt for the right reasons. Answering or taking over a conversation when you know where the other person’s thought is going, is respectful bonding if you’re considered the expert or you have good rapport.
Don’t interrupt or cut someone off with your opinion because their opinion is not respected.
Be productive. Seek a goal. When you’re having relationship issues, the goal is to reconcile, not find disagreements or go round and round. If you agree to disagree, then your opinions are kept separate from your valuable relationship. People aren’t receptive to those who have belittled them out of pride.
6. Be gentle.
You can practice gentleness with how you handle your everyday world. Gently close doors and fold clothing. You become caring for your things. When you have a caring attitude, you appreciate instead of take on an entitled attitude that can turn prideful.
Then be a gentleman or a pleasant woman. Find out if that’s how people describe you.
Be free from any offended feelings or unforgiveness. If someone does offend, state the offense as an observation, and that you choose not to react. This throws offenders off guard, as it shows strength and that you had another choice.
You’re not running away afraid, but you are letting it go. You likely won’t be offended on purpose again that same way, as they learned the lesson once.
If necessary, write down multiple offenses so you can see them. I remember in one 20 minute phone call I was offended 14 different times.
After the conversation, I just jotted each offense down to justify my belief that the other person’s pride had reached an off-the-charts unhealthy level (and, I determined, needed intervention).
In humility, kindness, and honesty, you can show prideful offenders higher ways for them to grow.
7. Change prideful ways when they occur.
If you catch yourself acting out in harmful pride, reflect on that incident and state what you did aloud to yourself as the observer.
If you want to change your specific wrong, teach yourself in the moment what you don’t want to repeat.
It may take several times, but at least you’re sincerely trying. If you really want to change, you will. Positive changes in you lead to happiness.
Real happiness is loving what you do serving others and learning higher, selfless characteristics. When you can control your pride and exchange for humility, in every situation, you’re well on your way to a life of clarity that attracts people.
You can become a great leader, partner or parent, and popular for the right reasons. You stop talking about yourself, and people can’t stop talking about you. | https://medium.com/age-of-awareness/7-valuable-principles-to-lower-pride-that-causes-unhappiness-d42d3e80f331 | ['La Dolce Vita Diary'] | 2020-07-11 19:14:15.741000+00:00 | ['Personal Growth', 'Mindfulness', 'Self', 'Productivity', 'Advice'] |
So Easy, a Junior Dev Can Do It | So Easy, a Junior Dev Can Do It
GraphQL is an open-source technology used to make API calls to get exactly and only the data you want. Created by Facebook, GraphQL (GQL) works well with sites pulling large amounts of data by replacing inexact or frequent API calls with a single, precise query for the data you’re looking for. Having a GQL-enabled backend is also useful for differing data needs between a mobile vs. desktop environment.
After researching what it can do and experimenting with it firsthand, it sounds like GraphQL will do for the backend what React has done to the frontend: become a mainstream enterprise technology by simultaneously enabling scalability while maintaining ease of use.
As a relatively new developer, you quickly realize there is a vast ocean of possible technologies to learn, and you secretly wonder how capable you are of learning all of it. Or maybe that’s just me.
The Syntax
GraphQL syntax is made to look like a query language (that’s what the QL stands for). Or maybe a half-SQL, half-JSON hybrid. Regardless, it’s intuitive to look at and understand what you’re asking (and you don’t need to know any special SQL commands to use it).
A hypothetical query. It’s written as a string and sent through a query system like Apollo or Relay.
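The screenshot of that query is not reproduced in this excerpt. A hypothetical query in the same spirit might look like the one below; the field names (products, name, price, imageUrl, link) are made up for illustration rather than taken from any real schema:

query {
  products {
    name
    price
    imageUrl
    link
  }
}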
The Requirements
You need three pieces to make GraphQL requests from the frontend:
1. A GQL-enabled server to query (I’m borrowing Crystallize’s demo API)
2. A wrapper to “translate” the GQL into a readable API request
3. The actual <Query> in your frontend to fetch the data you want
While making GraphQL requests from the frontend requires very little set-up, this blog assumes there is already a GQL-enabled server ready to receive requests.
I chose to use Apollo as the wrapper system to “translate” my API requests, which is currently the most popular wrapper choice among GraphQL developers. The other known alternative is Relay, which was created by Facebook.
I used a handful of npm modules to make and display my query:
react-apollo and apollo-boost to get started with Apollo
reactstrap with bootstrap for a touch of styling
In Apollo, the client is the API backend you’re going to be reaching out to. For this demo, I used https://api.crystallize.com/graphql . This needs to happen “high” in your React app. I did it right in index.js by wrapping all other app components under the <ApolloProvider> tags and setting <ApolloProvider> ’s client attribute to the above API.
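As a rough sketch, assuming a Create React App entry point and a root component named App (both assumptions, not taken from the original repo), that index.js wiring might look like this:

import React from 'react';
import ReactDOM from 'react-dom';
import ApolloClient from 'apollo-boost';
import { ApolloProvider } from 'react-apollo';
import App from './App';

// The apollo-boost client only needs the GraphQL endpoint
const client = new ApolloClient({
  uri: 'https://api.crystallize.com/graphql',
});

// Everything rendered under ApolloProvider can now run queries against this client
ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);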
The next level down, I created a ProductWrapper component where I made the Apollo <Query> to the client with the sample GQL query above. The results were mapped and returned, passing in each child as a prop to a Product component, displaying each product individually.
Note: at the time of development, the sample query also returned an empty product, so I added a filter so that I would only pass down a complete product to the Product component so it wouldn’t break while mapping.
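The original gist is not embedded here, but a stripped-down ProductWrapper along those lines could look like the sketch below. The query fields and the shape of data are assumptions, and the empty-product filter simply checks that a name exists:

import React from 'react';
import { Query } from 'react-apollo';
import { gql } from 'apollo-boost';
import Product from './Product';

// Illustrative query; the real field names depend on the API's schema
const PRODUCTS_QUERY = gql`
  query {
    products {
      name
      price
      imageUrl
      link
    }
  }
`;

const ProductWrapper = () => (
  <Query query={PRODUCTS_QUERY}>
    {({ loading, error, data }) => {
      if (loading) return <p>Loading...</p>;
      if (error) return <p>Something went wrong.</p>;
      return data.products
        .filter(product => product.name) // skip the empty product mentioned above
        .map(product => <Product key={product.name} product={product} />);
    }}
  </Query>
);

export default ProductWrapper;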
With some reactstrap styling to the Product component, I could render an indefinite number of products that matched my query within a responsive framework. I didn’t bother styling more than this since my goal was to implement GraphQL in a minimalist way—just enough so that I know I could.
Names, photos, prices, and product links all pulled from a single query.
The result was a simple, lightweight implementation. You can see (and borrow) my implementation through my GitHub repo.
While every developer and manager needs to assess the technology for themselves, for anyone sending a lot of data to a lot of users this is definitely a scalable solution worth considering.
If a junior developer can figure this out, there’s no reason a senior dev can’t build this into their own applications. For a large-scale app handling a surplus of API calls, this could reduce the server load significantly without sacrificing functionality. I’m bullish on GraphQL. | https://deetercesler.medium.com/using-graphql-with-react-2778750a768d | ['Deeter Cesler'] | 2019-02-23 19:50:08.202000+00:00 | ['React', 'Apollo', 'GraphQL'] |
VC Corner Q&A: Atin Batra of Twenty Seven Ventures | Atin Batra is the Founder & General Partner at Twenty Seven Ventures, a VC firm investing in early-stage EdTech and Future of Work startups across the world. 27V looks for entrepreneurs with an overwhelming drive to build a business that not only creates shareholder value but also leave a positive impact on our fragile world. The firm has invested in 6 companies operating in 6 countries.
Most recently, he was a Principal at the global VC firm Q Venture Partners, where he helped invest in Deep Tech & connected Hardware startups in North America. Prior to that, he led the Swire Properties’ B2B accelerator ‘blueprint’. Originally from India, Atin spends his weekends trail running Hong Kong’s beautiful country parks.
— What is your Twenty Seven Venture’s mission?
Twenty Seven Ventures (27V) is curating a group of thoughtful founders building the foundations of human learning and productivity; entrepreneurs with an overwhelming drive to build a business that not only creates shareholder value but also leaves a positive impact on our fragile world.
The Fund invests in global EdTech and Future of Work startups at the pre-Seed and Seed stages. We invest globally because we believe there is untapped value in founders, across the world, learning from each other facing problems, both similar and dissimilar.
— What was your very first investment? And what struck you about them?
Toggle was my first investment out of this Fund. Toggle is a NY-based company building a full-stack robotics solution for rebar cage fabrication and assembly.
I was struck by how humble the Founding team was. Both Dan and Ian had been successful in their careers before starting the company, and yet they came to work every single day looking to learn more about the construction business.
— What’s one thing you’re excited about right now?
Investing in Founders operating outside of the traditional venture capital hotspots. In this day and age, with software technology pervading all industries, I truly believe there are amazing companies being built by teams not within the bubble that is the Bay Area or Tel Aviv or London.
— Who is one founder we should watch?
Keep an eye out for Amanda DoAmaral, founder of Fiveable. She is the most customer-focused entrepreneur I’ve come across.
I came across Amanda because of a video where she rails against the injustices being wrought by a particular educational curriculum provider. Knew right then that the best interests of her students were always going to be top of mind for her. That’s been the secret to her success so far! And will be crucial going forward.
— What are the 3 top qualities of every great leader?
1. Innately curious, always open to learning.
2. Appreciates the value of relationships and teams.
3. Recognizes that allocation of resources (time, capital, human) is their primary job.
— What’s one question you ask yourself before investing in a company?
Would the smartest people I know want to work for this Founder? Would I?
IMHO, being able to build a stellar team around oneself is a Founder’s most under-appreciated quality.
— What’s one thing every founder should ask themselves before walking into a meeting with a potential investor?
What will the company and founding team learn/gain from this individual/firm?
— What do you think should be in a CEO’s top 3 company priorities?
1. Be customer-focused from day one.
2. Build a diverse team that is committed to the company vision.
3. Efficiently allocate the company’s resources — time, human, capital.
— Favorite business book, blog or podcast?
Really like “Invest Like the Best” podcast by Patrick O’Shaughnessy. It’s a constant presence in my Overcast queue.
— What is your favorite thing to do when you’re not working?
I’m an ultra-marathon trail runner. Running long distances in the mountains keeps me sane.
— Who is one leader you admire?
Given my hobby of long distance running, I’m a sucker for endurance athletes and their stories. 2 in particular that I follow closely are Scott Jurek and Alex Honnold — both of them are at the pinnacle of their respective fields, ultra-running and mountain climbing.
I’m always finding parallels between endurance sports and entrepreneurship; the ups and downs, hitting walls, building a crew +more. Keep reminding the portfolio founders and myself that this journey is an ultra-marathon, not a marathon, certainly not a sprint.
— What is one interesting thing most people won’t know about you?
I’m a thespian at heart — and was a theatre actor/director for the first 20 years of my life. Still deeply love theatre, so much so that my wife and I saw 5 broadway shows in 7 nights during our last trip to NY.
— What is one piece of advice you’d give every founder?
Strictly curate your group of mentors, advisors and investors — they could be as instrumental to your success/failure as the team (employees) you build. | https://medium.com/startup-grind/vc-corner-atin-batra-of-twenty-seven-ventures-7b6d4eca2ff | ['The Startup Grind Team'] | 2020-08-13 16:51:15.503000+00:00 | ['Vc Corner', 'Edtech', 'Funding', 'VC', 'Startup'] |
Managing Flutter State using Provider | Managing Flutter State using Provider
An introduction to Provider, one of a plethora of state management solutions for Flutter.
Who Should Read This
This post is intended for those new to state management in Flutter. The article assumes you are familiar with Stateless and Stateful widgets and can navigate between screens. If not, check out the links below.
What is State Management
State is the data your application needs to display or do something with. Data which may change.
For relatively simple scenarios, where the application consists of a single screen, we could use Flutter’s Stateful widgets and call setState() to rebuild the widget after we’ve modified some data or state. Remember, Flutter adopts a reactive model, rendering the user interface in response to state changes.
But consider the following scenario — a home screen which displays the currently signed-in user. That’s a two-screen application at a minimum and the home screen must somehow react to the action outcomes on another screen i.e. someone has logged in.
The above can be achieved by implementing a Publish-Subscribe pattern — where both screens are decoupled from one another and communication is achieved by one screen broadcasting a message, user sign in details, and another screen subscribing to that message. So let’s see how Provider can help.
So What Is Provider Exactly?
If you browse to the provider pub.dev package page you’ll see no mention of state management. Instead, it’s described as a wrapper around InheritedWidget and a quick search later you’ll see that InheritedWidget has something to do with propagating information down the widget tree. We’re getting closer.
In the context of state management, Provider is a widget that makes some value — like our user sign-in details— available to the widgets below it. Think of it as a convenient tool for passing state down the widget tree and rebuilding the UI when there are changes.
Concepts and Terminology
You need to become familiar with 4 concepts when working with provider.
1. Model — a class you create to encapsulate your application data and optional methods for modifying that data. This is what is made available to other widgets. Think of it as our message to broadcast.
2. ChangeNotifier — provides model change notifications to listeners. Your model above extends this class and it’s how we publish or broadcast messages.
3. Consumer — a provider package widget which reacts to ChangeNotifier changes and calls its build method to apply model updates. This is our subscriber.
4. ChangeNotifierProvider — a provider package widget that provides an instance of a ChangeNotifier to its descendant widgets. Consumer widgets need to know the ChangeNotifier instance they need to observe. That’s the role of ChangeNotifierProvider.
A Provider Sign-In App
We’re going to build the 2 screen app outlined earlier to show Provider in action.
▹ Create a new Flutter app in VS Code or using the Flutter CLI:
flutter create provider_login_app
▹ Next, we need to add the provider package as a dependency. To do this, add an entry for provider to the pubspec.yaml file.
▹ Install the provider package using the Flutter CLI pub get command:
flutter pub get
▹ You can import provider in your code using:
import 'package:provider/provider.dart';
▹ Open your main.dart file, delete the contents and then paste in the code below.
Points to Note:
Line 8: ChangeNotifierProvider is the widget that provides an instance of a ChangeNotifier to its descendant widgets. Here it builds an instance of our model class SignInDetailsModel.
Line 41: Our HomePage consists of a Text widget which is wrapped inside a Consumer widget. The text displayed changes when a user is signed in; the Consumer widget forces a rebuild when a notification is broadcast by our model class.
Line 51: We define our Sign-In screen, consisting of a user name field, password field and a button.
Line 84: Here we are using the Provider.of method to access the model instance. Provider.of obtains the nearest Provider<T> up its widget tree and returns its value. We then call signIn to crudely sign in the user and fire our notification, before finally transitioning back to the home screen.
Line 127: This is our model class, where we encapsulate our user name and sign-in date. Notice how we extend ChangeNotifier and then call notifyListeners() when broadcasting to any Consumer widgets that a sign-in has occurred.
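The full gist (which the line numbers above refer to) is not reproduced in this excerpt. A heavily trimmed sketch of the same pattern, assuming provider 4.x and leaving out the sign-in form widgets and the sign-in date, looks roughly like this:

import 'package:flutter/material.dart';
import 'package:provider/provider.dart';

void main() {
  runApp(
    // Makes a single SignInDetailsModel available to the whole widget tree
    ChangeNotifierProvider(
      create: (context) => SignInDetailsModel(),
      child: MyApp(),
    ),
  );
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) => MaterialApp(home: HomePage());
}

class HomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        // Consumer rebuilds this Text whenever the model calls notifyListeners()
        child: Consumer<SignInDetailsModel>(
          builder: (context, model, child) => Text(
            model.userName.isEmpty
                ? 'No one signed in'
                : 'Signed in as ${model.userName}',
          ),
        ),
      ),
    );
  }
}

// In the sign-in screen's button handler you would call something like:
// Provider.of<SignInDetailsModel>(context, listen: false).signIn(userName);

class SignInDetailsModel extends ChangeNotifier {
  String userName = '';

  void signIn(String name) {
    userName = name;
    notifyListeners(); // broadcast the change to all Consumer widgets
  }
}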
Summary
State Management in Flutter is a complex topic, not least because of the number of possible approaches and packages available.
Here, we have attempted to introduce the reader to Provider and its concepts and also demonstrated how to incorporate it into an application in order to share state between screens.
Whilst only briefly mentioned in this post, the techniques demonstrated here also promote loose decoupling - whereby the home screen and the sign-in screens are unaware of each other’s inner workings. They share a common model and utilise Provider to propagate and consume changes in this model.
https://www.twitter.com/FlutterComm | https://medium.com/flutter-community/managing-flutter-state-using-provider-e26c78060c26 | ['Kenneth Carey'] | 2020-08-06 02:27:25.100000+00:00 | ['Mobile App Development', 'Android App Development', 'Android', 'Flutter', 'iOS App Development'] |
4 Steps to Reach Exceptional Levels of Performance | 2 — If You Cannot Be The Best, Focus on Being Unique
Do you think you cannot be great because you failed to win a gold medal? Or because you lost a contest for the best painting? Did you come last in the annual entrepreneur-of-the-year competition?
Another misconception that many of us have is to think that, to become great, we must become the best in a particular field. This is not necessarily true. Evidence shows that another way to achieve tremendous success is to offer a unique combination of several valuable skills. A combination that is rare or that no one else has.
Scott Adams, the author of “How to Fail at Almost Everything and Still Win Big” says the following: “Everyone has at least a few areas in which they could be in the top 25% with some effort. In my case, I can draw better than most people, but I am hardly an artist. I am not any funnier than the average standup comedian, but I am funnier than most people. The magic is that few people can draw well and write jokes. It is the combination of the two that makes what I do so rare. And when you add in my business background, suddenly I had a topic that few cartoonists could hope to understand without living it”.
By mixing three different skills (i.e. drawing, comedy, and business), Scott Adams made himself truly unique in the market. He then managed to launch a great career as a cartoonist.
You can follow the same strategy in life. If you can’t win by being the best in a particular field, you can focus on being unique. If you combine three different skills that you are fairly good at (i.e. each skill puts you in the top 25% of the population), you can create a profile that is rare and valuable in the workplace (25% * 25% * 25% ~ 1.56%).
With this system, your set of skills puts you in the top 1.5% of the population! You drastically reduced the level of competition and it is easier for you to stand out. | https://medium.com/illumination-curated/4-steps-to-reach-exceptional-levels-of-performance-ab7af2aa6c77 | ['Younes Henni'] | 2020-12-09 12:04:42.064000+00:00 | ['Life Lessons', 'Self Improvement', 'Productivity', 'Education', 'Personal Development'] |
Looking in the mirror: retribution vs. second chances | This post was co-written with More Than Our Crimes collaborator Pam Bailey.
I recently read a first-person perspective in the New York Times Magazine by Reginald Dwayne Betts, who was incarcerated for nine years and then became a lawyer to win freedom for his friends still behind bars. Titled “Kamala Harris, Mass Incarceration and Me,” it opens with this observation: “Many progressives mistrust [Harris] for her past as a prosecutor. As an ex-convict — and also the son of a crime victim — I can tell you it’s not that simple.”
I am one of those people who has deep reservations about Harris due to the bias in favor of incarceration that is built into the role of prosecutor in this country. However, Betts’ reflections on his instinctual desire for revenge for his mother’s rape also provoked some deep self-reflection. He is quite correct: Even people like us, who believe in second chances and understand so personally the inherent injustice and corrosive impact of the American carceral system, have an opposing, almost primal response when a crime is committed against a loved one.
Facts vs. emotions
That tension demands scrutiny by all of us who say we are truly committed to ending mass incarceration. The fact is that — contrary to what policymakers and many media pundits would have us believe — mass incarceration cannot be reversed by omitting individuals convicted of violent offenses from conversations about reform. We account for 55% of people warehoused in state prisons, where 87% of inmates are held. (The only reason D.C. residents like me are not in a state system is that the District does not have its own prison and contracts with the federal Bureau of Prisons instead.)
It’s also true that people convicted of violent offenses are less likely to be rearrested in the years after release than those convicted of property, drug or public-order offenses. (A major reason: Age is one of the main predictors of violence. The risk of committing violence peaks in adolescence and early adulthood, then declines with age. Yet we incarcerate people long after their risk has become minimal.)
Public opinion continues to run counter to these facts. A case in point: A poll conducted by Morning Consult for Vox in 2016 found that nearly eight in 10 U.S. voters said they support reducing prison sentences for people who committed nonviolent crimes. But fewer than three in 10 indicated any support for shorter prison sentences for people convicted of violent offenses.
Betts’ essay is so important because he confronts, with compassion and insight, both sides of this very emotional divide. He was incarcerated as a teen for wielding a pistol as he carjacked a man and tried to rob two women. His mother, however, saw the potential for goodness within him and fought tirelessly for his release: “My mother attended a series of court dates involving me, dressed in her best work clothes to remind the prosecutor and judge and those in the courtroom that the child facing a life sentence had a mother who loved him,” Betts writes.
During his court proceedings, Betts’ mother was raped at gunpoint. The chances are quite likely that the perpetrator also possesses the potential to change positively in the future. And the chances are good that he has a mother who hopes desperately that others will see that potential. “But on the last day of the trial of the man who raped her, my mother told me, she remembers only that he didn’t get enough time. When I asked her what she would have wanted to happen to her attacker, she replied, ‘That I’d taken the deputy’s gun and shot him.’”
Betts goes on to examine his own feelings: “I am inconsistent. I want my friends out, but I know there is no one who can convince me that this man shouldn’t spend the rest of his life in prison.”
I get that. If my mother was raped or killed, I would feel the same way. And if we are all honest with ourselves, that visceral thinking is driven by a desire for retribution, not a belief that offenders can’t change for the good. If, however, we come to know the offenders once they have traveled the path of rehabilitation, that changes.
In Betts’ case, he writes of two men with whom he was incarcerated. One was sentenced to 33 years for murdering a man when he was 17. “We came into those cells with trauma, having witnessed or experienced brutality before committing our own,” he writes. “Prison, a factory of violence and despair, introduced us to more of the same.” But they struggled and changed together, often discussing great thinkers like Malcolm X and searching for other mentors. When Betts got out early, he went to law school because he believed he owed it to him and others to help them get out.
I have one particular friend who was convicted of killing a police officer 30 years ago, and is now serving life without parole — what many have called the new death penalty. I know that police officer has a family who hurt in a deep way like I would if my mother was the victim. But I also know Donnie, and the man he is today is one of the finest, most honorable individuals I have ever known. I will spend the rest of my life, if I am released, advocating for reforms that would give him a second chance.
Role of prosecutors
Betts writes about interviewing Kamala Harris when she was still running for president and hearing her tell stories about Black women coming into her office when she was the district attorney for San Francisco, seeking justice when their children had been victimized. I get that too. Blacks have often joined the call for harsh law-and-order policies. That is, in part, the point of this essay: It’s universal.
But ultimately, that does not change my skepticism and concern about Harris. We have a right to expect more of our public officials, including the ability to rely on evidence rather than emotions. As a state official, Harris argued that a 60-year sentence for a person who committed a series of restaurant armed robberies “should tell anyone considering viciously preying on citizens and businesses that they will be caught, convicted and sent to prison — for a very long time.” And yet our own National Institute of Justice has concluded that “research shows clearly that the chance of being caught is a vastly more effective deterrent than draconian sentences.” It goes on to say: “Prisons are good for punishing criminals and keeping them off the street, but prison sentences (particularly long sentences) are unlikely to deter future crime. Prisons actually may have the opposite effect: Inmates learn more effective crime strategies from each other, and time spent in prison may desensitize many to the threat of future imprisonment.”
I share with Betts the wish that these issues will become the basis for “a national conversation about our contradictory impulses around crime and punishment.” That is why I am co-founding a nonprofit project called More Than Our Crimes. We will never be able to have that type of conversation until each of us seriously examines our own inconsistent values and our seemingly instinctual craving for retribution and vengeance. (In Marc Howard’s book “Unusual Punishment,” he points out that this is somewhat an American phenomenon. The Western European countries he examined do not impose the crazy-long sentences so common in the United States. And yet their crime is not soaring out of control.)
Self-insight can only go so far, however. Betts’ admitted inability to be consistent in the application of his values, and my realization that I am the same, points to a simple truth: It’s human nature to revert to primal instincts when people we love are harmed. This to me says we must not expect people to be “super-human.” Instead, we need to alter the system. Family members of victims should not be part of the process of deciding whether to grant parole or other types of early release years after the original conviction. Those decisions should be made by a person (or, ideally, people) trained in psychology and based on a multitude of factors — but not the nature of the original crime.
“An eye for an eye, and the whole world would be blind.” ― Kahlil Gibran | https://medium.com/the-innovation/looking-in-the-mirror-retribution-vs-second-chances-643ea52fd0ef | ['Robert Barton'] | 2020-11-17 16:33:37.153000+00:00 | ['Justice', 'Equality', 'BlackLivesMatter', 'Psychology', 'Prison'] |
How I Got a Promotion Without Working Any Harder | What I Failed to Understand for My Whole Career
I had two big and detrimental misconceptions about career progression:
1. You get promoted based on how long you’ve been working.
2. You automatically get promoted when you’re good enough.
For my whole career, I sat around waiting for my turn to get recognised with a promotion or for someone to notice I was doing a good job. Either way, I handed the keys to my success to other people.
I know I’m not alone in this. If you’re reading this, it’s likely that you care a great deal about your career, yet you’ve never been taught how to actively work towards your next career step.
When I say “actively working towards a promotion,” I don’t just mean working extra hard, working extra-long hours, or doing extra stuff. In fact, to get a promotion, you don’t have to do any of that! How do I know this? Well, that’s where my story comes in. | https://medium.com/better-programming/how-i-got-a-promotion-without-working-any-harder-9a97f77e8ae4 | ['Tiffany Dawson'] | 2020-11-20 18:06:40.934000+00:00 | ['Remote Work', 'Startup', 'Career Advice', 'Work', 'Women In Tech'] |
How to Upload Files to Firebase Cloud Storage With React and Node.js | API: Node.js + Express.js + Multer
To provide you with a fully functional repo, I’ve included the API in the same project folder.
However, this is not ideal since having all your eggs in the same basket never is. So, keep in mind that in real life, you’ll either use a serverless function or have the upload API run with the rest of your back end.
Setting up the Express server
1. We install the necessary dependencies:
$ npm i express body-parser cors dotenv
2. At the root of the project, we create an api/ folder and within it an index.js file in which we set up the Express server:
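The embedded gist isn’t reproduced here, so below is a minimal sketch of what that initial server setup could look like, based only on the dependencies installed in step 1 (the port number is an arbitrary choice, not taken from the original gist):
const express = require('express');
const cors = require('cors');
const bodyParser = require('body-parser');

// Load the variables defined in .env into process.env
require('dotenv').config();

const app = express();

// Allow cross-origin requests from the React front end and parse JSON bodies
app.use(cors());
app.use(bodyParser.json());

// Start the API server
const port = process.env.PORT || 9000;
app.listen(port, () => console.log(`API listening on port ${port}`));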
Now we want both our front end and our API to run simultaneously while still having a single command to run. For this, inside our package.json file, we update our start script as such:
"scripts": {
"start": "(cd api && node index.js) & react-scripts start",
...
}
By the way, if you want to read more about how to run multiple commands simultaneously, I found this article useful.
Adding the upload endpoint
We create an /api/upload endpoint that is able to receive POST requests. We’ll use Multer, which is a Node.js middleware that allows us to handle multipart/form-data :
$ npm i multer
Multer adds a file (or files if you send multiples files) to the request object, which contains the file(s) uploaded. It also adds a body object to request , which contains any other text field you’d want to add to your request.
So, thanks to Multer, we’ll be able to access our uploaded file via req.file and our remaining data from req.body .
In index.js , we import Multer:
const multer = require('multer');
To send our file to Firebase, we’ll need to make a Buffer object out of it. We’ll do that using the memory storage engine provided by Multer:
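That snippet isn’t shown above; with Multer it is typically a one-liner along these lines (a sketch, with upload as our own variable name):
// Keep uploaded files in memory so req.file.buffer holds the whole file
const upload = multer({ storage: multer.memoryStorage() });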
When using memory storage, our req.file will contain a field called buffer that contains the entire file.
Here’s the starter code for this endpoint:
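The starter snippet itself isn’t reproduced above, so here is a hedged sketch of what it could look like. The 'image' field name matches the fieldname logged later in this article, while the exact error handling is an assumption:
// upload.single('image') makes the uploaded file available as req.file
app.post('/api/upload', upload.single('image'), (req, res, next) => {
  try {
    if (!req.file) {
      res.status(400).send('Error: No file received');
      return;
    }
    // The Firebase Cloud Storage logic is added in the next section
  } catch (error) {
    next(error);
  }
});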
We’re halfway there. Now all that is left is connecting to Firebase Cloud Storage and processing our files as the service needs to upload it properly.
Connecting to Firebase Cloud Storage
First, we need to add the @google-cloud/storage dependency to connect to Firebase Cloud Storage:
$ npm i @google-cloud/storage
2. Going on, we’ll need three environment variables. This is where the dotenv module we installed at the beginning comes into play. It loads environment variables from a .env file into process.env .
Create a .env file at the root of your project and add three environment variables inside it:
GCLOUD_PROJECT_ID : This is your Firebase project ID. You can find it in your Firebase account > Settings > General settings.
GCLOUD_APPLICATION_CREDENTIALS : Remember the private key you stored somewhere at the beginning? This is where we need it. You need to indicate the path to your private key JSON file here. So, it could be something like /api/services/myprivatekey.json .
GCLOUD_STORAGE_BUCKET_URL : Finally, you need the URL of your storage bucket, which is [YOUR_GCLOUD_PROJECT_ID].appspot.com . If you’re not sure, you can see it in your Firebase account > Storage > Files tab (which is the default one): just above the stored files list on the left, you have a URL like gs://your-project-id.appspot.com . This is it.
Finally, make sure you add your new .env file to your .gitignore .
3. Now we can initiate a Storage instance with our Firebase credentials:
const { Storage } = require('@google-cloud/storage');

const storage = new Storage({
  projectId: process.env.GCLOUD_PROJECT_ID,
  keyFilename: process.env.GCLOUD_APPLICATION_CREDENTIALS,
});
4. And create a bucket , which is a container for objects (files), which we associate to our Firebase storage bucket. We’ll use it below in our endpoint to process our file.
const bucket = storage.bucket(process.env.GCLOUD_STORAGE_BUCKET_URL);
5. Next, we continue inside our app.post('/api/upload') endpoint, after checking that we do have an existing req.file . If you log it, you’ll see it contains the following:
{
fieldname: 'image',
originalname: 'Medium Post Cover 3.png',
encoding: '7bit',
mimetype: 'image/png',
buffer: <Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 0b 22 00 00 07 6c 08 06 00 00 00 e9 93 d9 14 00 00 00 04 67 41 4d 41 00 00 b1 8f 0b fc 61 05 00 ... 1834232 more bytes>,
size: 1834282
}
Firebase uses Blobs (Binary Large Objects), which is a data type that can store binary data in a database.
So, we create a new blob in our bucket with the file() method, passing it our file name as reference:
const blob = bucket.file(req.file.originalname);
Then we initiate a writable stream. FreeCodeCamp wrote an awesome guide about Node.js streams in which they answer all the questions you might have about them (and I had a lot).
Streams are a collection of data (like arrays or strings) that might not be available all at once. They allow us to work with large amounts of data (like images or videos) that we need to process one chunk at a time.
And a writable stream is one type of stream that is an abstraction for a destination to which data can be written, so basically, we use a writable stream when we want to write data, which is what we want here.
const blobStream = blob.createWriteStream({
  metadata: {
    contentType: req.file.mimetype,
  },
});
Important: You need to pass the file mimetype as metadata to createWriteStream() otherwise your file won’t be stored in the proper format and won’t be readable.
This returns a WriteStream object on which we can check for events. On the finish event, we assemble the public URL of our newly stored file and send it in the response for the user to either display the file on the front or store the location string in a database.
Note that we use encodeURI on the blob.name to cover cases where the file name had whitespaces or other characters needing to be encoded.
// If there's an error
blobStream.on('error', (err) => next(err)); // If all is good and done
blobStream.on('finish', () => { // Assemble the file public URL
const publicUrl =
`https://firebasestorage.googleapis.com/v0/b/${bucket.name}/o/${encodeURI(blob.name)}?alt=media`; // Return the file name and its public URL
// for you to store in your own database
res.status(200).send({
fileName: req.file.originalname,
fileLocation: publicUrl
});
}); // When there is no more data to be consumed from the stream the end event gets emitted
blobStream.end(req.file.buffer);
And there we have it! Our API is ready for use. Here is the gist with the full index.js file:
index.js | File upload API endpoint handling uploads to Firebase Cloud Storage
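Since the gist embed itself isn’t reproduced here, the following sketch shows how the pieces above could be assembled into one index.js file. Treat it as an approximation of the original gist rather than a verbatim copy; the port is an assumption, while the 'image' field name and the public URL format come from the snippets above:
const express = require('express');
const cors = require('cors');
const bodyParser = require('body-parser');
const multer = require('multer');
const { Storage } = require('@google-cloud/storage');

// Load the GCLOUD_* variables from .env
require('dotenv').config();

const app = express();
app.use(cors());
app.use(bodyParser.json());

// Keep uploads in memory so req.file.buffer is available
const upload = multer({ storage: multer.memoryStorage() });

// Connect to Firebase Cloud Storage with our credentials
const storage = new Storage({
  projectId: process.env.GCLOUD_PROJECT_ID,
  keyFilename: process.env.GCLOUD_APPLICATION_CREDENTIALS,
});
const bucket = storage.bucket(process.env.GCLOUD_STORAGE_BUCKET_URL);

app.post('/api/upload', upload.single('image'), (req, res, next) => {
  if (!req.file) {
    res.status(400).send('Error: No file received');
    return;
  }

  // Create a blob in the bucket and a writable stream to it
  const blob = bucket.file(req.file.originalname);
  const blobStream = blob.createWriteStream({
    metadata: { contentType: req.file.mimetype },
  });

  blobStream.on('error', (err) => next(err));

  blobStream.on('finish', () => {
    // Assemble the file's public URL and return it with the file name
    const publicUrl = `https://firebasestorage.googleapis.com/v0/b/${bucket.name}/o/${encodeURI(blob.name)}?alt=media`;
    res.status(200).send({
      fileName: req.file.originalname,
      fileLocation: publicUrl,
    });
  });

  // Write the in-memory buffer and close the stream
  blobStream.end(req.file.buffer);
});

const port = process.env.PORT || 9000;
app.listen(port, () => console.log(`API listening on port ${port}`));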
All that’s left is a simple front end with a form handling a file input. We’ll use React for this. | https://medium.com/better-programming/how-to-upload-files-to-firebase-cloud-storage-with-react-and-node-js-e87d80aeded1 | ['Claire Chabas'] | 2020-05-17 23:23:23.744000+00:00 | ['JavaScript', 'React', 'Programming', 'Nodejs', 'Firebase'] |
The Basic Commands You Need to Know to Get Started with SQL | The Basic Commands You Need to Know to Get Started with SQL
Getting the fundamentals right is a MUST to mastering SQL.
Photo by Caspar Camille Rubin on Unsplash
SQL is one of the most commonly used tools that you will encounter on your Data Science journey. Before actually working on real-life projects, I had very little respect for SQL’s abilities (out of ignorance) and did not focus much on it, but when I entered the real world of “Data Science”, I realized how important SQL is for extracting, transforming, and loading data from databases, particularly LARGE datasets with a ton of tables.
In this article, I will be going over some basic yet VERY important SQL commands which everyone should be aware of to get a strong start.
I will be writing my queries on Dbeaver, which is a Universal database tool available for download here, but you can use any database tool of your choice as long as the concept is clear. Let us get started.
1) SELECT and FROM
I am assuming you guys have an SQL database tool ( Dbeaver, pgAdmin) open in front of you.
The first two commands I would like to go over are SELECT and FROM.
SELECT, as the name suggests, selects all columns or column(s) of our choice FROM the table we are working on. For example
I have three columns visible here: AlbumId, Title, and ArtistId.
If I go to my query editor and type:
SELECT AlbumId,Title
FROM Album
Where Album is the table, I will get only the AlbumId and Title back. Let us see
You can try this with any column and table of your choice. Now, you guys might be thinking, What if I want to return all of my columns? Well, it’s simple,
Just type:
SELECT *
FROM Album
Here, * denotes all of the columns
2) LIMIT
The Limit command, as the name suggests, “limits” the output of the table, i.e., we can limit the number of rows we would like to see.
Continuing from our previous example, let us say, I would like to see only the first four rows. Using Limit, We can do that.
SELECT AlbumId,Title
FROM Album
LIMIT 4
and I get:
Moving forward.
3) ORDER BY
ORDER BY sorts the table by the column of our choice, alphabetically for text and numerically if the values are numerical. This restructures the entire table. Let us give it a try.
SELECT *
FROM Album
ORDER BY Title
LIMIT 9
The result:
As you might have noticed, the Title column now appears in alphabetical order, starting from A. And the AlbumId and ArtistId appear to have adjusted accordingly.
4)WHERE
The WHERE command is used to apply conditional statements to our table. For example, if I wanted to return only those rows which had an ArtistId of 90, I would use the WHERE command. Let us give it a try.
SELECT *
FROM Album
WHERE ArtistId = 90
ORDER BY Title
LIMIT 9
Pretty simple, no? There are other applications of WHERE as well, such as selecting only those values with specific characters in them or only those where the starting letter is A or B. Let’s take a look.
5) IN and LIKE
The IN and LIKE operators are extremely effective when trying to select specific rows. For example, if I wanted to return only those rows where the Title started with the letter A, how would I do that? Let us see
SELECT *
FROM Album
WHERE Title LIKE 'A%'
i get:
As you can see, all of my Titles start with the letter A. The ‘A%’ pattern in my LIKE statement denotes that the Title should start with A. There are other cases to this as well, such as:
‘%A’ all rows where Titles end with A
‘%A%’ All rows that have A anywhere in the Title name
I would recommend you guys practice on the above statements to get a good grip.
The IN command works similarly to the LIKE command but it takes a more general approach. What do I mean by that? For example, say I wanted to return all the rows where the Title is Facelift. Now, I could do that by using the LIKE command, but there is a better way of doing this.
SELECT *
FROM Album
WHERE Title IN ('Facelift')
Running the above query gives me:
I can use the IN command to apply multiple conditions and return multiple values. For example:
SELECT *
FROM Album
WHERE Title IN ('Facelift','Big Ones','Audioslave')
6) AND & OR
AND and OR are widely used when you are trying to apply multiple conditions and are extremely handy. AND will return values when BOTH the conditions you’re applying are met whereas OR will return all rows for either one of the conditions being met. For example:
Let us say, I would like to return rows where the Title is Audioslave and the AlbumId is 5. Let us give it a try.
SELECT *
FROM Album
WHERE Title IN ('Audioslave') AND AlbumId = 5
The above query returns no rows, and rightfully so, because there is no row with the Title being Audioslave AND the AlbumId being 5.
Let us give the OR command a try.
SELECT *
FROM Album
WHERE Title IN ('Audioslave') OR AlbumId = 5
The OR command does return rows because only one of the conditions needs to be met to generate the results.
Conclusion
So this was a basic introduction to getting started with SQL queries on Dbeaver(or any other database). I went over some very BASIC yet extremely IMPORTANT concepts that are a Must know to get started with SQL. Hopefully, you guys will find this article useful.
If you do, feel free to share it with your peers.
[1]: Moeed Lodhi. Introduction to SQL on Dbeaver
https://www.youtube.com/watch?v=TGb7ImXsAfo&feature=youtu.be | https://medium.com/towards-artificial-intelligence/the-basic-commands-you-need-to-know-to-get-started-with-sql-f50eed7a42f4 | [] | 2020-09-19 21:27:11.678000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Sql', 'Technology', 'Data Science'] |
FPL Gameweek 10: Where do we go from here? Wildcard awaits. | Final Decisions
Before sealing the fate of the rest of the players, let’s take a look at the schedule opportunities we mentioned before and see when might be good idea to bring in which players.
I have decided to apply a combination of both the best teams to choose from and the schedule. Based on this strategy, we need to have a split within the 15 man roster for diversification.
One error we faced over the past few Gameweeks came in losses: multiple players from the same team losing caused us massive drops in points. While the past strategy was valuable for gaining points as well, we have to be more strict with our decision making and can only apply it to the teams that have a combination of the best form and results.
(Putting this here again for easier readability)
|6|Aston Villa|
|5|Liverpool|Manchester City|West Ham|Arsenal|Tottenham|
|4|Chelsea|
|3|Southampton|Leicester|
|2|Manchester United|Wolves|
|1|Everton|Leeds United|
When we take a look at both the best options for us in terms of teams and schedule - there are different ways we could approach this.
Approach #1- Two players each from the top 3 teams. One player from each of the rest of the teams.
Approach #2 - Split the schedule into 4 categories. Initial Team Selection/Gameweek 12 Selection/Gameweek 14 Selection/Gameweek 16 Selection. Pick the best players for each category based on opponents.
What we can do is combine these two approaches for a more robust team.
Initial Team Selection - We can see that Leicester, Southampton and Tottenham have an easier start in the next two Gameweeks.
Gameweek 12 Selection - Multiple teams with easier schedules open up such as Aston Villa, Manchester City, West Ham and Manchester United.
Gameweek 14 Selection - Better schedules for Liverpool, West Ham and Arsenal.
Gameweek 16 Selection - Better schedules for Liverpool, West Ham and Arsenal. Similar to the week before so it would be more valuable to have players from these teams.
Earlier, we also mentioned that we want to split our roster into defensive and offensive choices. We want our defense to be more from the most compact of teams and the offense to be from the ones with the most goals.
Defensive Advantage - Tottenham, Manchester City, Chelsea, West Ham & Arsenal.
Offensive Advantage- Chelsea, Liverpool, Tottenham, Southampton and Leicester City.
FPL Data Team Wildcard; source: FPL
Taking out Kane, Son and Chilwell (Given that we still see Tottenham and Chelsea as valuable teams), we have 12 remaining spots.
Diogo Jota and Timo Werner remain valuable picks as well but let’s evaluate all our options before we bring them back in as well.
Taking out 12 players leaves us with £72.6, an average of £6.05 given to us to play with for our Initial Team Selection. It’s not much but if we can find some value picks that align with both the data and the schedule, we could make it work.
Goalkeeping
Clean Sheet Percentage vs Save Percentage (Clean Sheets>1); source: FPL
First of all, we have some great keepers but given the fact that Kasper Schmeichel hasn’t performed well so far and that the next few Gameweeks look like tough matchups for Rui Patricio, we have decided to not keep either of them. When we take a look at the keepers that have kept at least one clean sheet and combine that with their overall clean sheet to save percentage, one name sticks out from the rest: Edouard Mendy. The Senegalese shot-stopper has been head and shoulders (literally) above the rest after his recent move from Rennes. Add the fact that he is £0.3 cheaper than both our current keepers, and this is a definite buy.
However, Chelsea don’t seem to have an easy schedule when compared to some other teams. Not that they still won’t do well in defense for they have covered that as well. To account for this, we want to bring in someone else with an easier schedule but still an excellent output.
Lukas Fabianski seems to fit this bill. He too is £0.4 cheaper than our current goalkeeper lineup and currently leads FPL in all goalkeepers with 53 points.
BUY: Edouard Mendy (£5.2) | Lukas Fabianski (£5.1) | Remaining £62.3
Defense
Our defense has cost us a lot of points this season. From sudden drops in form for 2019 regulars such as Toby Alderweireld and César Azpilicueta to injuries for Virgil Van Dijk and Çaglar Söyüncü, it hasn’t been an easy road. We need a massive shift in defense with a more consistent output.
We have Ben Chilwell. The question is who to add next. Keep in mind that for all these selections, they will be based on both schedule and best team players available.
Players Dribbled Past by (player) vs. Blocks(>20); source: FPL
Here we have the players that take on the most opposing players from the defensive end of the field, but also who rush back to their area when under threat to make great blocks. We’ve often seen players like Walker-Peters and Ayling rush forward.
An interesting pick: Gabriel Dos Santos (45 pts.), Héctor Bellerín (43 pts.) or Vladimír Coufal (33 pts.)
Given that Dos Santos is £0.1 cheaper and in better form, we’ll choose him. Coufal has an easier schedule and he is ridiculously cheap at £4.6. He goes into the team.
Goals(>1) vs Clearances(>5); source: FPL
Above, we have the players that have made the most clearances in addition to the most goals. Apart from all the attackers displayed, we have a few defenders as well. The ones that stick out are Dos Santos, Ezri Konsa, Jannik Vestergaard and Kurt Zouma. Additionally, Southampton have an easier schedule. Kurt Zouma is the more attractive option with 3 goals compared to Jannik Vestergaard’s 2. However, he is more expensive and we have a player from Chelsea already in Ben Chilwell. We want to diversify, unlike with our previous strategy. Ezri Konsa has two goals as well and is underrated at £4.7, albeit he is in a weaker team.
BUY: Gabriel Dos Santos (£5.1) | Vladimír Coufal (£4.6) | Ezri Konsa (£4.7) | Jannik Vestergaard (£4.8) | Remaining £43.1
Midfield
Shot-creating Actions per 90 vs Goals; source: FPL
If we look at the most active players on the field from all our current teams alone, we get the following bunch. I was unsure if I wanted to keep Diogo Jota around but he continues to be in great form and in a great team. For £6.9, he is definitely worth keeping. This spread shows far more expensive players such as Salah and Kane. Jack Grealish sticks out with the most amount of shot-creating actions in this group. Aston Villa are off this weekend but they do have some easy games coming up. He is a bit expensive at £7.7 but I do think it’s worth the money. Villa are a risky team to pick but Grealish is worth keeping as a sub at least when one of the other players has a tougher opponent.
It is tempting to choose Mohamed Salah but given that we have Jota and the fact that they do have a difficult three games coming up in addition to Liverpool’s injury worries, Jota alone may be sufficient for the moment. Hopefully, we won’t come to regret that decision.
As we said before, Southampton have an easier schedule. This is going to be a risk but I do think it will be quite valuable to have a player who can score two free-kicks in a game. Enter James Ward-Prowse.
Ward-Prowse is not only great at scoring free-kicks. He is also the main corner-kick taker for the Saints.
Corner Kicks vs Goals; source: FPL
This gives us more reassurance to select him. While I was here, I noticed another steal of a pick in addition to Jack Grealish. Jarrod Bowen has been a stellar player for West Ham and it was interesting to see that he too was the main corner-kick taker. From this data and given that it is valuable to consider players from West Ham, we will choose him as well.
KEEP: Diogo Jota ( £6.7)
BUY: Jack Grealish (£7.7) | James Ward-Prowse ( £6.1) | Jarrod Bowen(£6.3) | Remaining (£16.3)
Forwards
We’ve already decided to keep Harry Kane which left us with two choices. We could keep Timo Werner and Patrick Bamford and leave it unchanged. Timo Werner has been a valuable addition to the team. Patrick Bamford has dipped a little in form but Leeds are quite an unpredictable team.
However, when we take a look at the schedule again, it does not favor Chelsea or Leeds. Leicester on the other hand have quite an easy schedule.
If you’ve followed along in this series, you would know of my bias for Jamie Vardy. Especially a Vardy and Kane pairing. Vardy is an excellent player and given that Leicester have two easy games before their schedule becomes tougher, it’s worth to see this pairing happen. As we’ve seen in the charts before, Jamie Vardy is a consistent performer given that he’s healthy and Leicester are performing reasonably well. For that reason alone, I have chosen Jamie Vardy.
However, choosing Jamie Vardy leaves us with having to give up Timo Werner and just £6.0 left. There are very few valuable forwards for that price besides Patrick Bamford, who also happens to be worth exactly £6.0.
Fate must have it that we go with this pairing for this Gameweek. I’m excited to see how this team performs and the tinkering we’ll attempt over the upcoming Gameweeks. I hope you are too.
Last but not the least, here’s our final wildcard team for Gameweek 11.
Gameweek 11 Wildcard Team; source:FPL
Let the games begin.
Thank you for taking your time to read through this. If you’d like to follow along in this series, here is a timeline of the FPL data analytics project.
1. Introduction
2. Gameweek 1
3. Gameweek 2
4. Gameweek 3
5. Gameweek 4
6. Gameweek 5–8
— — — — — — — — — — — — — — — — — — — — — — — — —
And there we go! Hope you enjoyed reading this piece as much as I did writing it. Feel free to write down any thoughts or suggestions. I’m always working on improving my analysis and my articles as time goes on so I appreciate all comments! Collaborating is fun so if you’ve got any interesting projects in mind, feel free to reach out to me personally. If you would like to follow me to keep up with updates in this series, follow me here or — — @__tomthomas
— — — — — — — — — — — — — — — — — — — — — — — — — | https://medium.com/fantasy-tactics-and-football-analytics/fpl-gameweek-10-where-do-we-go-from-here-wildcard-awaits-341f552e543a | ['Tom Thomas'] | 2020-12-05 12:26:29.984000+00:00 | ['Premier League', 'Analytics', 'Soccer', 'Fantasy Football', 'R'] |
The Russian Beard Tax made complete sense | The Russian Beard Tax made complete sense
“Quirky facts” strip out the historical context and are often kinda condescending and racist
Photo by Keenan Barber on Unsplash
I want to talk about Quirky Facts: brief, self-encapsulated bits of information that make you go “cool!” and hit share. These facts are stock-in-trade for my newsletter, The Whippet, and the internet in general, as well as shows like QI.
Here’s a Quirky Fact that you might have seen around:
FB post from the British Museum: “Peter the Great of Russia introduced a beard tax in 1698 and this token was given as proof of payment!”
If you couldn’t show one, you’d be forcibly shaved.
This is totally true, and it seems wacky and random, and the token is great. It reads “a beard is a superfluous burden”. You probably want your friends with beards to know about this.
Now here’s why the tax was introduced, roughly: In the mid-1600s, the head of the Russian Orthodox Church, Patriarch Nikon, instituted a bunch of reforms, including re-translations of important church literature (including the wording of prayers and hymns, how to cross yourself, how churches should be constructed, etc). He said it was to bring things in line with the Greek Orthodox Church, and that was probably true, but the restructure also gave the Patriarch, i.e. him, way more personal power.
The fact that the translations were made without scholarly consultation (and apparently contained a lot of errors) made his motives look suspect.
To enforce the reforms, old-style churches were demolished and soldiers raided homes to look for suddenly-heretical imagery. Understandably, a lot of people hated this. Some resisted for religious reasons and some because it was pretty clearly a political move.
The people who resisted, and kept to the old ways, were called Old Believers.
For Old Believers, shaving your beard off was blasphemous. Nikon’s reforms reversed this. So the tax was a combination of religious persecution — like a tax on headscarves would be today — and an attempt to quash resistance to a political power grab.
Not that quirky. I’m not even taking a stance on the reforms or the religious doctrines or Patriarch Nikon — I’m just saying, it’s a pretty rational attack on your political opponents, not a tax on beards, there’s nothing that funny or random about it.
Of course, I get why people share the cute beard tax token without explaining that context — you probably skimmed it yourself, right? It’s complicated and dry. It’s not WRONG to just share the one-sentence summary. But I’m uncomfortable with it because we end up in this position of going “lol random” at people from different countries or times, and then having no understanding of how power operates.
We also end up with a wrong impression of people from other cultures, including historical ones, as totally irrational. Just in case you don’t already know this, no one in the Middle Ages ever believed the earth was flat — it’s a myth from the late 1800s. It ties into contemporary people’s need to believe we’re much smarter and better off than olden times people. I’m not, like, worried that medieval people’s feelings will be hurt, the problem is what it does to us in the here and now. If we think history is full of irrational, you-so-crazy people instead of people just like us, we never really think that we can learn from it.
So here’s the thing I would love for you to remember: when you come across some quirky old or foreign thing that makes you think “lol so random”, catch yourself — especially if it has to do with laws, leaders or money. The people who did it had a motive, and that motive was almost certainly to do with protecting their access to power or resources.
I’ve used a historical example but for a contemporary, foreign-country one try “China makes it illegal to reincarnate without a permit”, a seemingly absurd law that is 100% about not allowing the occupied Tibetan people to choose their own leader after the Dalai Lama dies.
If you can’t see how power and resources come into it, you’re probably missing information, and it’s important to know that (whether you can be bothered finding out what the missing info is, I’m less fussed about. So long as you know there’s more to it). | https://mckinleyvalentine.medium.com/the-beard-tax-and-the-political-context-thats-missing-from-quirky-facts-955407092ad8 | ['Mckinley Valentine'] | 2020-10-06 03:23:19.618000+00:00 | ['Life Lessons', 'History', 'Psychology', 'Culture', 'Politics'] |
New York’s Democracy Reform Bill Sends Positive Message | New York’s Democracy Reform Bill Sends Positive Message
After decades in which all reforms were stymied, the new legislature enacted sweeping changes to voting laws on its second day in session
New York State Assembly Speaker Carl Heastie and others applaud as Senate Majority Leader Andrea Stewart-Cousins speaks to members of the state Senate during opening day of the 2019 legislative session at the State Capitol. (AP Photo/Hans Pennink)
This article was originally published in Miles Rapoport’s column on democracy issues for the American Prospect. Read other posts in the series here.
By Miles Rapoport
On one remarkable day — the second day of the legislative session — the state of New York took a great leap forward in how elections will operate in the state. The voting reform bills, which passed through both legislative Chambers in a single day, will open up new and wide opportunities for people to vote, and catapults New York from being at the back of the pack (and almost the only blue state there) to close to the front when it comes to expanding voting access to everyone.
For years, New York had been notorious for laws that effectively diminished voting, lest state residents realize that they could actually vote against incumbents. Alone among the 50 states, New York actually conducted two different primaries — one for federal offices, another for state — on two different days, a sure-fire way to confuse voters and fragment the electorate. As well, registration closed weeks before the election days, so if a challenger’s campaign managed to gain some traction, it would already be too late for the candidate’s backers, if not yet registered, to get themselves onto the voting rolls. The campaign finance laws, meanwhile, enabled major financial interests — most commonly, New York City’s real estate moguls — to lavishly fund their favored candidates.
Such laws created a system in which challengers were few, and the percentage of registered voters who came out to vote in primaries often barely exceeded single digits.
Now, those hurdles to voting have been toppled by New York’s new legislators. In one fell swoop, the legislature adopted:
Portable registration for people who move within the state
Pre-registration for 16- and 17-year-olds
Early voting
Same day registration (first step toward constitutional amendment)
Expanded mail-in voting (first step toward constitutional amendment)
Eliminating the “LLC Loophole” which allowed unlimited political donations by limited liability corporations through which major financial interests were permitted to make outsized donations to candidates of their liking
Putting state and federal primaries on the same day, which will increase turnout
Two aspects of this extraordinary development are worth exploring. The first is the apparent speed of the actions, which requires some New York history. The second is how much of what has passed is based on a suite of best practice policies that has increasingly been agreed upon by the robust and growing movement of voting expansion advocates and election reformers.
FIRST, THE HISTORY. This was only ‘fast’ if you believe history began when these legislators were sworn in. But the history of the fight goes back years, is full of frustration and betrayal, and the only constant has been the determination of organizers to never give up.
Organizations like Common Cause, Citizen Action of New York, the Working Families Party, Make the Road New York, and many others have been fighting for expanded voting rights and for public financing of election campaigns for years. But they have run up against the infamous “three men in a room” character of New York politics, where the Senate majority leader, Assembly speaker, and the governor end up settling everything behind closed doors. In addition, the Senate has for years been a Republican bastion, reliably (and sometimes conveniently for some Democrats) blocking both progressive economic policies and democratic reform.
In the last several years, though Democrats have had a technical majority in the Senate, a breakoff group of “Independent Democratic Conference” members caucused and voted with the Republicans. This stymied the Democratic-controlled Assembly, and allowed Governor Cuomo, the depth of whose commitment to election and campaign finance reform has always been suspect, to support the reforms every year, but never quite have them pass.
“In recent years, the voting reform organizations did excellent organizing and coalition building”
In recent years, the voting reform organizations did excellent organizing and coalition building. The Let New York Vote coalition has led the fight for voting reform, and Fair Elections for New York has brought 175 organizations together in support of public funding for elections. They generated strong popular support for change, helped along by the high-profile corruption in the state’s political system, including the convictions of both the longtime former speaker (a Democrat) and the longtime former Senate majority leader (a Republican) on bribery charges. But still, the old-school policies persisted. The frustration among advocates grew year after year, and was one of the key factors that led Cynthia Nixon to launch her challenge to Cuomo in last year’s Democratic gubernatorial primary.
Enter the other elections of 2018. While the elections of Alexandria Ocasio-Cortez and Antonio Delgado to Congress have gotten the most ‘ink’, at the state legislative level, progressive Democrats defeated six of the eight IDC Democrats and another conservative Democrat in the primaries. On top of that, in November, Democrats took eight seats in the State Senate from Republicans, completely undercutting Republican strength in the chamber and giving the Democrats a 39-to-23 advantage, with one Independent. This was the culmination of years of efforts by progressive Democrats, and especially the Working Families Party, to bring change to the New York State Senate, combined with intensive work by new, highly energized groups that grew out of the anti-Trump resistance.
The changes are palpable. The two leaders of the legislative chambers — Assembly Speaker Carl Heastie, Jr., and Senate Majority Leader Andrea Stewart-Cousins, mark a sea change from the old guard. New York and Nevada are now the only states with African American legislative leaders in both chambers, and the incoming leaders have worked with, and shared the frustration of, advocates and organizations on multiple issues. The winds of change are blowing left in New York on a number of issues, but it is significant that the new majorities chose to strengthen the state’s democracy as the first issue out of the block, as did Congressional Democrats in leading with their own democracy reform package, HR 1.
SECOND, THE POLICIES THEMSELVES are very significant and represent a real and increasing maturity among the advocates for voting rights and democracy reform. The modern fight for voting rights began in the civil rights movement of African Americans in the ’50s and ’60s, and hasn’t stopped since. But in addition, since the Florida debacle of 2000, an increasingly diverse and robust movement for democracy reform has been developing around the country.
For many years, this movement tended to be siloed and separate: mostly white good government campaign reformers in one corner, mostly African American voting rights advocates in another. In addition, organizations who advocated a particular reform, like making Election Day a holiday or full voting by mail, argued for their individual reforms as ‘the key’ to a better democracy.
Over the last five years or so, however, a fairly broad consensus has developed that our democracy is broken and dysfunctional in so many ways that a variety, or ‘suite,’ of policies is needed to really make a change. Voting rights supporters are getting behind campaign finance reforms, money in politics advocates are supporting the restoration of voting rights for people with felony convictions, and organizations that have focused on economic issues are joining the election reform fray.
As this maturing and melding process has moved forward, consensus support has emerged for a comprehensive reform package. The elements include: registration reforms like online voter registration, automatic voter registration, same day registration, pre-registration of 16- and 17-year-olds, and portable registration for address changes. On voting itself, the new elements include a combination of conveniently located in-person early voting, expanded mail-in options for voting, making election day a holiday, improving vote-casting by overseas and military voting, along with ensuring the integrity of vote counting through paper-based audit trails.
Two other areas are increasingly part of this consensus. The first is dramatically changing the role of money in politics. Short-term (within the bounds of Supreme Court decisions that have fostered flooding our politics with big money), strong disclosure laws, elimination of loopholes like New York’s LLC, and varying forms of public financing (matching small contributions, providing citizens with vouchers, or full public financing for candidates who reach a threshold of small contributions) are all making headway. Long-term, overturning key Supreme Court decisions like Citizens United and 1976’s Buckley v. Valeo, which have defined unlimited spending as free speech, is critical to reining in the billionaire domination of politics.
The second element of the consensus is ending both racial and partisan gerrymandering. Increasingly, reformers and citizens are recognizing that districts drawn non-competitively discourage participation, increase polarization and drive many people from politics altogether. In 2018, five states passed ballot initiatives that created independent redistricting commissions or sharply limited partisan gerrymandering, and the move toward redistricting reform has gained remarkable strength in just a short period.
With the exception of eliminating the “LLC Loophole” and putting state and federal primaries on the same date, the New York legislature stuck to the voting rights agenda on their first day. But within that, they came a fair way to having the full suite of reform policies that advocates have recommended. Granted, in the case of same-day registration and voting by mail, the reforms require changing the New York State Constitution, but the legislature took a first and highly important step toward making this happen on Day 2 of their session.
Additional reforms may move forward during the rest of the session. Both Susan Lerner of Common Cause New York and Karen Scharff, who headed Citizen Action of New York for 31 years, say that public financing, voting rights restoration, and automatic voter registration all have real shots to be enacted as the session proceeds. Governor Cuomo included funding for a public financing system in his budget proposal, which will receive final action by April 1.
I think it is likely that New York’s great leap forward will help crystallize the thinking of advocates, legislators, governors, and secretaries of state in other states where reform is possible. In states with new political openings since the midterm elections, rather than think about just one reform or another, states now have a template for examining their existing laws, for assessing both where they have made progress and where they haven’t, and for adopting as much of the suite as they can as a holistic democracy reform. New York has taken a major step here, but it is not alone. Colorado in recent years has adopted a wide swath of democracy reforms, as has California. Last year, Washington state adopted a slate of similar reforms. Will other states follow this lead? It seems more possible now than it was even a few days ago.
Winning transformative reform in New York and other states will continue to be a hard-fought and long-term project. And even with all of the voting reforms, America’s voter participation will lag well behind other industrialized countries. The Electoral College is a huge impediment to truly democratic representation. Structural reforms like multi-member districts, ranked choice voting, and others have significant unfulfilled promise. We haven’t scratched the surface of truly reining in the power of money. The Senate itself, while we are at it, has a fundamentally undemocratic structure.
But we were never promised a rose garden, and people fighting for democracy reform know it is a very long haul. However, there are some moments to stop and smell the flowers that do grow along the way. New York’s victory is one such moment. Their democracy champions should savor it, as should we all.
Miles Rapoport is a Senior Practice Fellow in American Democracy at the Ash Center for Democratic Governance & Innovation. Previously, Rapoport was President of the independent grassroots organization Common Cause, & for 13 years, he headed the public policy center Demos. Rapoport served as Secretary of the State in Connecticut from 1995–1999, & served ten years in the Connecticut legislature. | https://medium.com/challenges-to-democracy/new-yorks-democracy-reform-bill-and-the-message-it-sends-99ad5e30854a | ['Harvard Ash Center'] | 2019-01-18 19:50:49.562000+00:00 | ['New York', 'Voting Rights', 'Politics', 'Democracy', 'Elections'] |
Coping with Being a Single Gay Man | Finding a significant other is truthfully not guaranteed for most gay men.
There are some gay men who do die alone, never having taken a partner. This is old news. Every gay who’s been around the block should have figured this all out for themselves by now. The question is, how do you cope with the alienating, lonely existence that is being a single gay man?
Most gay men deal with these feelings in unhealthy ways: self-medicating, having risky sex, etc. These unhealthy coping mechanisms can create a myriad of health problems, which only make the already depressing existence of being a single gay man that much more awful. I have found some better ways to cope with the pain of being single and lonely.
1. Get into a really time-consuming hobby that you are passionate about
This is probably some advice that you’ve heard before which is why I’m beginning here. If you haven’t done this already you’re really not doing yourself any favors. It helps to have a hobby you are passionate about because when your day slows down after work and you’re left to your own machinations, you’ll find yourself getting bored; of course! I’ve found that having a hobby to distract me is the best way to combat feelings of loneliness. It’s most important that this hobby is something creative because it gives you an avenue to express yourself and it busies your mind. You want to make sure this hobby is something you enjoy too, there’s no sense in starting this hobby if you don’t actually like it.
2. Make New Friends and Strengthen Old Friendships
The relationships we form in our lives are what end up mattering the most to us in any aspect. If you don’t already, it’s a great time to start valuing the people you do have in your life and welcoming new connections. Since, it’s clear that we don’t know when Prince Charming is going to show up, if he ever does, it’s best to accept what you have.
3. Change the narrative of what a fulfilling life would look like for you
You don’t have to rely on a romantic relationship to make you feel fulfilled in life. Plenty of people have lived fulfilling lives without one. The feelings of loneliness do not stop at the agreement to be in a committed relationship. Some relationships are filled with drama, infidelity, distrust, and abuse. People bring emotional and psychological baggage into relationships that they’re just truthfully not ready to get into. If you do get lucky enough to find someone to pursue a romantic relationship with, science says that being happy single is a good indicator that you’ll be happy committed as well.
4. Focus on yourself
Whatever does end up happening in your life, the best thing you can do for yourself is take make sure you’re taken care of in all areas of life. Spiritually, emotionally, financially, and physically. It is important that you always keep yourself at the forefront of your mind, you only have one life to live after all. Don’t waste it being less of what you’re capable of.
Know that you are not alone.
This is an issue that is widespread in our community; there are a lot of gay men who are single, despite actively looking for relationships, and it has nothing to do with them being less deserving of love. It’s hard to find relationships when our options are limited and our culture is hypersexualized. Relationships are hard to find in general and a lot of relationships don’t last because people are complicated and difficult to love. I’m not saying give up on finding love, but it may be a good idea to not make it as big of a priority in your life. Focus on yourself, hobbies, friends and family, communities, find a way to get your intimate needs met through other means. The truest form of love you will ever receive won’t come from someone else, but from within you.
“And I’ve learned that we must look inside our hearts to find a world full of love like yours, like mine, like home.” — Charlie Smalls
References and Recommended Readings | https://medium.com/th-ink/coping-with-being-a-single-gay-man-df2eec8e403e | ['Jajuan Moten'] | 2020-06-01 15:53:41.133000+00:00 | ['LGBTQ', 'Identity', 'Gay', 'Self-awareness', 'Personal Growth'] |
How to Map Large Areas | How to Map Large Areas
A Workflow for the Commercial Drone Industry
By Eric Harkins, Founder and CEO, Back Forty Aerial Solutions
If I learned anything from my days working in the restaurant scene, it is this: you have to take time to save time. The commercial drone business is no different, especially when it comes to mapping large areas. Large plots pose a unique set of challenges including maintaining line of sight, optimizing battery life, collecting and organizing large amounts of data, and ensuring safety across such a large area. Because of the sheer amount of data involved, extra care must also be taken to produce a cohesive, high-quality map without blurred images or gaps in coverage. Technical blunders cause unnecessary delays, and the failure to produce a successful map may even result in the need to re-fly a mission altogether, costing drone operators valuable time and money.
However, by taking the extra time to plan properly, equip properly and not rush through the execution, most major issues can be avoided. Additionally, settings expectations with customers beforehand is a wise idea. To this end, we at Back Forty Aerial Solutions developed the following workflow for mapping large areas. It is focused on using DJI equipment, but many of the tips are still applicable to other platforms and planning for efficiency.
This workflow was developed from using a 450 acre area of interest (AOI) we flew in mid-September 2016. You’ll see in this example that there are rolling clouds- we discussed current weather conditions with the landowner prior to flying and showed some examples of what the output could look like, he was fine with it. After some trial and error, we’ve concluded that a Phantom 3 Advanced, paired with the DroneDeploy app, is a cost effective system to use for orthographic mapping.
Pre-Site Planning
Review Satellite & Sectional Maps to Identify Barriers
The last thing an operator wants is to be surprised by a tree line or other unexpected obstruction. This not only creates a potentially unsafe situation, but it also cuts away from precious flight time as an operator scrambles to adjust to an unforeseen obstacle. Because maintaining line of sight (LOS) is so critical, I suggest starting your planning process by pulling up a satellite map, such as Google or Bing, to take a close look at the area you will be flying. Scan the maps for potential barriers to LOS. Check your local VFR Sectional charts for towers and the like. Take this opportunity to get a good sense of the roads and landscape and start thinking about places to set up, take off, land for battery swaps and post up for flight observations.
Scouting ahead should allow you to create a rough estimate of the time you’ll be spending on site. The DroneDeploy mission planner gives a good estimate of flight time for a large area. However, I recommend conservatively tacking on 20% to that time estimate to allow for flying to and from your start/end points, as well as for time spent on battery swaps.
Choose the Right Flight Parameters to Ensure Data Quality
Reviewing flight plan with customer prior to take-off
DroneDeploy’s software makes flight planning and flying with DJI products pretty simple. However, it’s more of an art than a science, and while the DroneDeploy interface makes it easy to plan flights, it is still our responsibility as operators to get the data we need. Follow these simple guidelines to help ensure data quality and maximum battery life.
Maintain Simple Boundaries- Don’t get carried away drawing complicated polygons for your boundaries. Doing so can leave you prone to tight corners and areas of questionable coverage. Instead, maintain fairly simple shapes that allow for smooth, consistent coverage. You can simply crop out unneeded areas when you upload to DroneDeploy.
Overshoot Your Perimeter- Our experience has shown that areas on the outside transects are more susceptible to artifacting/ distortion when processed through DroneDeploy. Because of this, we overshoot our perimeter by 10–15%, aiming for one transect outside of our AOI boundaries. This may sound like a lot, but the extra time it takes to fly the expanded perimeter is significantly less than the time it would take to re-fly an entire mission if the original data was distorted.
Area of interest (outlined in red) showing extra flight plan coverage, visual observation points and planned battery swap location
Decide on Altitude Parameters- Flying at higher altitudes allows for better coverage per battery, which can be especially important when covering large areas. However, the higher the altitude, the lower the ground resolution. Because of this, it is important to agree upon survey parameters and set expectations for ground resolution with your client prior to flying, and balance this out with your battery needs.
Set a Minimum 70/70 Overlap- Maintaining a minimum 70/70 side/front overlap will give processing software a better chance of tying together points between images. I’ve even talked to operators who do as much as a 90/90 overlap. If you are flying for 3d point cloud generation, this 90/90 overlap is where you will be living.
Fly Lengthwise- Whenever possible, orient your flight lines lengthwise, i.e. — parallel to the longest edge of your AOI. This allows for clear edges without wasting time by turning more corners. That being said, if you are flying in moderately windy conditions, do consider the direction of the wind. As a general rule, you should avoid flying across the wind, since it increases the likelihood that the drone’s legs will angle into the camera’s field of view.
This allows for clear edges without wasting time by turning more corners.
Turn Off Automatic Camera Settings — If you are fairly familiar with the DJI camera settings, you may be able to improve image quality and create more consistent exposure by turning off the automatic settings and instead manually setting them in the DJI Go app. However, this may backfire on days with rolling cloud cover, so it is important to be aware of the conditions and choose wisely.
Divide Area into Multiple Missions
Some operators prefer to draw one large mission in DroneDeploy. Although this is a useful way to get an overall estimate of flight time, we prefer to create multiple flight missions that divide the AOI down into manageable chunks. This allows us to more easily maintain LOS, plan battery swaps and maintain simple boundaries and limit the amount of tight corners.
Maintaining LOS and planning battery swaps can be difficult when flying one large mission
Breaking a large mission into smaller chunks allows for shorter transects and easier logistics planning
If you divide your AOI into multiple maps, it is important to keep them organized with consistent naming conventions. We generally start at the NW corner of an AOI and call that N1 Property Name, then move east with N2 Property Name and so on. Picking an arbitrary north/south dividing line, such as a road, can further divide your labels.
Note: Images from multiple flight plans can be uploaded to one map for processing as long as the total images do not exceed the per-map upload cap on your DroneDeploy plan. However, flight logs will only be available for the flight plan to which the images were uploaded.
Packing
Forgotten or unprepared equipment can waste lots of time or, depending on how far you’ve traveled, may even cause the day to be postponed entirely. Maintain a master packing list to be used for all large missions. For each new mission, consult the master list and add or delete items as needed. The night before, make sure you’ve packed, double checked your list and charged all batteries and equipment.
Sample Packing List (based on 450 acre flight):
DJI Phantom 3 Advanced x2 (because “two is one, and one is none”)
64gb Micro SD card x2
Micro SD to USB reader x2 (one in each case and an extra in each vehicle)
Phantom 3 battery x5
Fully charged laptop & charger (for reviewing captured images in the field)
400w inverter (hardwired to the truck battery, we plan on upgrading to 1000w)
DJI Phantom battery charger
3 battery, parallel charger
LiPo guard battery bag x2 (to safely charge batteries within)
2 gallon cooler (to cool batteries after each flight and get them onto the charger faster)
Stay-dry Ice pack x2 (goes into cooler and doesn’t sweat)
ND lens filters (to prevent whitewash on sunny days)
In-Field Operations
Maintain Safety
Safety is paramount, not just for you and your team, but for all of us. When a drone operator makes the evening news for being an unsafe pilot, our reputation as an industry suffers. Wearing bright colors in remote areas, adequately marking your takeoff and landing zones and maintaining LOS at all times can go a long way toward promoting safe flights. Don’t fly in unsafe weather conditions — it’s not worth the risk. Service equipment and routinely inspect it before and after each flight to prevent dangerous malfunctions. Additionally, make sure to keep in constant communication with your team through cell phone, radio or a device like Beartooth.
Bringing a spotter along both helps with safety and also makes missions twice as manageable. A second team member is useful for running interference on any curious passerby that can distract your pilot in command, as well as provide an extra set of eyes to scan the sky for unforeseen obstructions. In addition to helping with these safety measures, the team member can do double duty by moving batteries into and out of the cooler and field checking photos, further improving your speed and efficiency.
Optimize Battery Life
Unless your battery collection looks like the picture below, you will need to have a plan for charging and rotating batteries. Taking a few steps to optimize battery life for each mission can save a considerable amount of time in the field.
#dronegoals
Set up a safe home point, then drive as close as possible to your starting point and launch. Plan your battery swap locations ahead of time and drive to them as your mission progresses. The FAA has stated that this is acceptable in sparsely populated areas. I do however want to address the difference between using an iOS device vs an Android device. Android will NOT allow you to run multiple, simultaneous connections to a DJI drone. Therefore, in flight, you cannot change the home point between the take off point and the controller’s location on an Android device.
Another trick to optimize battery life is to change the RTH battery threshold in the DJI Go App to lower than 30%. Alternatively, you can set your controller as the home point. When your battery starts to get down to 30–40%, stick close and you may be able to squeeze out an extra transect or two, depending on their length. Do make sure you know your limitations though, and always maintain enough battery life to land safely. You are always responsible for your aircraft.
A savvy pilot should always be keeping an eye on remaining battery life. If you know you have 33% life remaining but are about to start a 1500 foot transect that ends away from your intended landing zone you should be planning accordingly. Go ahead and take control at the end of the transect closest to you and land. In other scenarios, when you receive the low battery signal attempt finishing the transect ending closest to you, then switch back into P-mode and manually land. Alternately, if you set controller as the home point you can just hit the home button in the app or on the controller. The mission will resume at the front of the next transect, avoiding the need to re-fly a transect that was interrupted midway through.
During battery swaps, immediately toss the used battery into your cooler with the dry-packs. Set a timer and pay attention to it when it goes off. Think of a NASCAR pit crew — in and out. To avoid confusion, make sure to label batteries, either with a vinyl sticker or a sharpie.
Check Data in Field
Despite the best preparations, data quality issues do still occur. Cameras malfunction and fail to take pictures, GPS doesn’t geotag images correctly or photos end up overexposed. These issues are sometimes out of your control, but two days after you’ve left the job site isn’t the time you want to discover them. To check data in the field, bring multiple micro SD cards to a site and swap them out after each section has been flown. While the operator is flying the next section, another team member can move the images from the previous mission off the SD card and onto the laptop, then format the empty SD card for the next flight. I suggest taking a moment to separate the images taken during RTH/landing from your AOI images. Uploading these by accident can cause issues with processing your imagery.
The really important step here is to get on Wi-Fi — or, if a cell signal is available, pop up a Wi-Fi hotspot — and do a mock upload to the DroneDeploy website. Drive back toward civilization if you have to. You can quickly check for coverage without actually uploading your images. Just hit that “Upload” button and select your images to get to the screen below.
‘Take time to save time’ in action! We now know we need to re-fly a portion of the mission.
Develop Your Own Routine
Successfully mapping a large area can be a daunting task. There is no shortage of challenges that threaten to cause costly delays. Let this workflow serve as a roadmap for your own mapping routine. Take notes along the way, find out what works for you and ultimately refine a plan that suits your business. The extra time you put into these preparations now will be returned to you twofold in the form of efficient, cost-effective mapping missions.
If you’ve read this far you’re probably thinking to yourself, “well Eric, where’s the finished product?” Don’t worry, I didn’t forget!
Behold — the fruits of our labor. Now get out there and fly safe, fly smart, and of course… have some fun! | https://medium.com/aerial-acuity/how-to-map-large-areas-6a27d325751 | [] | 2016-10-28 16:41:58.953000+00:00 | ['Surveying', 'Mapping', 'How To', 'Drones'] |
K-Means: The Marketer’s Algorithm | Introduction —
We shall cover the following aspects in this article:
1. Exploratory Data Analysis (EDA) to build our intuitions
2. K-means Clustering in action
3. What do marketers do with these clusters?
Let’s take a dataset and try to demystify concepts on the go.
Our data set is a collection of customer data at a shopping mall. This type of data is usually collected through a myriad of sources — Customer Relationship Management tools (CRM), transaction data, parking tickets, lucky draw coupons etc.
Our data frame has 200 rows of customer data across 5 features or variables.
Data set snapshot
Data dictionary — containing details of various features
Various feature columns in the data
No null values in the data set
Since there are no null values in the dataset and values in various fields appear to be of the desired datatype, we can directly proceed for EDA.
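If you want to follow these checks along in code, here is a minimal pandas sketch — the file name and the exact column labels are assumptions, so adjust them to match your copy of the data:
import pandas as pd

# assumed file name -- adjust to wherever you saved the mall customer data
df = pd.read_csv('Mall_Customers.csv')

print(df.shape)           # expect (200, 5)
print(df.dtypes)          # confirm each field has the desired datatype
print(df.isnull().sum())  # confirm there are no null values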
1. Exploratory Data Analysis –
Univariate analysis
Understanding the gender split of the mall traffic — there are 112 females and 88 males, so clearly more females visit the mall than males.
People of all ages visit the mall — 20 to 70 years. The age distribution is slightly skewed to the right, meaning there are some considerably older customers of the mall. The modal age group is 30 to 40.
The customers of the mall come from a diverse set of income levels — 15k to over 125k per annum. Customers in the income range of 50k to 75k form the leading demographic. There are also a few very high-income customers that belong to the income bracket of 125k and above.
Spending score is almost symmetrically distributed around the mean. Most of the customers have a spending score in the range of 40 to 60.
Bivariate analysis
Age and Spending score seem to have a negative correlation, i.e. younger customers tend to buy more than older ones.
The plot of income and spending score shows some inherent grouping for a few sets of variable combinations. At this stage, 5 such groups can be visibly identified from the scatter plot; it would be interesting to see how many clusters we actually end up identifying during clustering. Due to the symmetrical nature of the plot, there appears to be no correlation between the two variables.
Age vs annual income plot also has a fairly uniform spread, implying that there exists a huge range of customers at a particular age and at a particular income level. There appears to be no correlation between them for the current data.
Let’s validate our observations by checking the correlation matrix and a heat map.
Correlation Matrix
Heat map of all numerical features
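For reference, a short sketch of how such a correlation matrix and heat map could be produced with pandas and seaborn (continuing with the df loaded earlier):
import seaborn as sns
import matplotlib.pyplot as plt

corr = df.corr(numeric_only=True)   # correlation matrix of the numerical features
print(corr)

sns.heatmap(corr, annot=True, cmap='coolwarm')  # annotated heat map
plt.show()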
Annual income and Customer ID have a correlation of 0.977 which is awesome but is of no particular use as Customer ID is a nominal feature.
Our initial guess about the correlation between age and spending score is confirmed by a correlation of -0.328.
2. K-means Clustering in Action—
By K-means clustering, we want to identify points that are closer to each other based on the distance (read Euclidean distance) between them and group them in one cluster. Doing this, we would be able to achieve a cluster of data points which are closely packed, i.e. share similar characteristics with points within the same group but differ from those outside the group.
Ideally, we want to achieve clusters such that within cluster variance is minimum and between cluster variance is maximum.
Image by author — Points ABC are similar to each other, hence grouped as Cluster 1, whereas DEFG are similar to each other and grouped as Cluster 2. However ABC and DEFG are different from each other.
Let’s put our intuition to test and apply clustering on income and spending score plot.
But before we do that, the K-means algorithm requires a user input for k, i.e. the number of clusters we want to identify in our data.
We could put all the data points in 1 cluster, or we could have as many clusters as data points, i.e. 200. But ultimately, we want to identify meaningful customer groups which are sufficiently large in size and share a set of characteristics.
So how do we pick the optimum number of clusters?
— The elbow method.
Let’s say we have a model with ’n’ clusters. For each cluster, we can compute the within-cluster variance — how far the data points are from their cluster center — also called the within-cluster sum of squares (WCSS). The sum of WCSS over all clusters gives us the total WCSS for the model.
Our aim is an optimum model with a number of clusters small enough to identify customer groups distinctly, and at the same time a low value of total WCSS. As we increase the number of clusters, total WCSS decreases, but we also end up with more clusters, which may be impractical to deal with — clearly there’s a trade-off.
To identify that sweet spot, we observe the rate of decline in WCSS with incremental change in number of clusters. The point beyond which there is no significant drop in total WCSS is our optimum value of ‘n’.
In our analysis, we experiment with cluster scenarios where n varies from 1 to 8 and obtain the plot.
Elbow method showing optimum number of clusters
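A sketch of how this elbow analysis could be coded with scikit-learn — the two column names are assumptions based on the data dictionary above, so rename them to match your data:
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# assumed column names for annual income and spending score
X = df[['Annual Income (k$)', 'Spending Score (1-100)']]

wcss = []
for n in range(1, 9):                                   # cluster scenarios where n varies from 1 to 8
    km = KMeans(n_clusters=n, n_init=10, random_state=42)
    km.fit(X)
    wcss.append(km.inertia_)                            # inertia_ is the total WCSS of the model

plt.plot(range(1, 9), wcss, marker='o')
plt.xlabel('Number of clusters (n)')
plt.ylabel('Total WCSS')
plt.show()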
At n = 3 and n = 5, we can see an elbow-like formation (hence the name). However, WCSS still drops noticeably beyond n = 3, whereas the decline beyond n = 5 is negligible.
Since there is no radical reduction in the value of WCSS beyond n = 5, we pick 5 clusters and implement the K-means clustering algorithm.
5 Clusters identified by K — Means Algorithm
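Continuing the sketch above, the final fit itself is only a few lines — it stores each customer’s cluster label back in the data frame so we can profile the groups later:
km = KMeans(n_clusters=5, n_init=10, random_state=42)
df['Cluster'] = km.fit_predict(X)      # assign each customer to one of the 5 clusters

plt.scatter(X.iloc[:, 0], X.iloc[:, 1], c=df['Cluster'])
plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], c='red', marker='x')
plt.xlabel('Annual income')
plt.ylabel('Spending score')
plt.show()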
3. What do marketers do with these clusters?
Let’s understand it through STP.
STP — Segmentation, Targeting, Positioning — is one of the most fundamental frameworks used by marketers to identify and reach out to their target customers.
Segmentation is, basically, categorizing customers into groups that share a set of characteristics.
We want to classify our customers based on –
Demographics — age, gender, education, income, marital status
Geography — country, city, locality, climate
Behavior — offline or online purchase behavior, usage — heavy user or light, buying frequency, benefits sought, loyalty
Psychographics — personality, interests, opinions, attitude, beliefs, lifestyle, values
At the end of segmentation, we should be able to distinctly identify such clusters.
Next, targeting allows us to identify the most attractive or relevant customer groups. With a set of customer groups identified, we want to evaluate the attractiveness of each group on the basis of parameters like –
revenue potential and size of the target group — we don’t want to be spending a lot on a handful of customers who don’t buy much
accessibility — our ability to reach out to them etc.
Targeting allows us to zero-in on one or more target groups depending on what product offerings we have.
Finally, by Positioning we can differentiate our offerings from those of our competitors and create a desired brand image. Positioning would also allow us to figure out what value proposition should we communicate so that it appeals to the particular target group. It would also allow us to conceptualize what our campaign should look like or what should our creatives focus on etc.
Case in point –
We have identified 5 clusters using our model, based on demographic factors like age and income, and a behavioral factor — the spending score, a proxy or indicator of buying behavior.
Let’s take Cluster 2 and understand how it looks like —
There are 21 Females and 18 Males in Cluster 2
We can build a profile for Cluster 2 based on our analysis -
Age — 27 to 40 years — working professionals and/or parent
Spending score — A high scoring group as a whole, mean — 82.13, median — 83, these are the customers who love shopping at our mall.
Annual Income — This is a high-earning lot: range — 69k to 137k, mean of 86.5k and median of 79k. The mean is greater than the median, indicating the presence of a few very high-income outliers.
Based on these attributes we can even label Cluster 2
— ‘Premium Shoppers’
These are customers with a relatively high spending scores as compared to others. We would like to encourage their behavior by acknowledging how important they are to us and plan our promotions accordingly. We can offer deals on our high-end products for these customers, subject to their interests and buying history — which would need more data.
Likewise, we can build profiles for other clusters as well.
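A quick sketch of how those per-cluster profiles could be pulled straight from the labeled data frame (column names assumed as above):
# summary statistics per cluster -- age, income and spending score
profile = df.groupby('Cluster')[['Age', 'Annual Income (k$)', 'Spending Score (1-100)']].describe()
print(profile)

# gender split within a single cluster, e.g. Cluster 2
print(df[df['Cluster'] == 2]['Gender'].value_counts())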
Further, we can build a Buyer Persona — an image of our ideal customer.
A detailed version could be made by adding more features including their interests, hobbies, past customer reviews, RFM (Recency Frequency Monetary Value) data etc.
The better we understand our customer, better are the chances of our campaign’s success.
At the next level of analysis we can build robust models to classify our cluster members based on their probability to buy a particular product. This would help make our campaign efforts more focused on a set of high-quality prospects within the cluster and be more efficient with our budget as well.
The possibilities are immense!
Imagine we skip all this and run a one-size-fits-all campaign — I bet you would agree that’s a fine recipe for disaster.
P.S — Thanks for reading! Please feel free to leave your constructive feedback in the responses.
Photo by Artem Kniaz on Unsplash
References –
1. Dataset
2. codebasics by Dhaval Patel
3. StatQuest with Josh Starmer
4. Machine Learning A-Z™: Hands-On Python & R In Data Science
5. K-Means Python Code
Gain Access to Expert View — Subscribe to DDI Intel | https://medium.com/datadriveninvestor/k-means-the-marketers-algorithm-8b55af7d70e4 | ['Gade Venkatesh'] | 2020-10-10 14:11:05.596000+00:00 | ['Machine Learning', 'Marketing', 'Retail', 'Data Science', 'Marketing Automation'] |
String Partitioning | The goal: split a string into as many parts as possible so that each character appears in at most one part. How do we go about creating the algorithm for partitioning the string?
I think intuitively, you’ll think that you need to loop through each character in the string, and that is a good idea.
At each character, say, the first character ‘s’, we’ll think to ourselves: is this the end of the first partition? In order to determine that we’ll have to go through the rest of the string and check if there is another ‘s’. In this case there isn’t another s, so we’ll create our first partition.
Moving on to the next character, ‘t’, we’ll go through the rest of the string again, and find another ‘t’. In this case we’ll actually find two, one at position 9, and another one at position 11. Once we find the ‘t’ with max position, we’ll know that for this second partition, it spans at least until the max position, in this case 11.
Now moving on to ‘r’, it’s at positions 2 and 8, that’s great but we already knew partition 2 goes to at least position 11, so it doesn’t add any new information. Continuing this way, we go through ‘i’, which has a max position of 15, so we need to update partition 2 max position again to 15. Continuing yet again to ’n’ with max position 16, and then ‘g’, which can be found at the end of the string.
At this point we’ll have the two partitions we need, ‘s’ and ‘tringpartitioning’.
Translating it into Python:
def partition(s):
    def find_end(c):
        for idx, cc in enumerate(reversed(s)):
            if cc == c:
                return len(s) - idx - 1

    parts = []
    prev_end = 0
    curr_end = 0
    # loop through every character and find
    # the max position of that character
    for idx, c in enumerate(s):
        curr_end = max(curr_end, find_end(c))
        # if the current character is
        # at the end of the current partition
        # add the partition to the list of parts
        if idx == curr_end:
            parts.append(s[prev_end:curr_end + 1])
            prev_end = curr_end + 1
            curr_end = prev_end
    return parts


S = 'stringpartitioning'
print(partition(S))
# ... ['s', 'tringpartitioning']
Let’s pause for a bit and think, what is the complexity of this algorithm?
First, we are looping through each character, for a string of length n, that is O(n). Then within each iteration, we are going through every rest of the character to find the max position, that is also O(n) (we are not actually looping through the entire string each time, it’s more like max n-1, n-2, n-3, but it’s still of order n). overall, the complexity would be:
O(n * n) = O(n²)
Can we improve it? You bet, the maximum positions for each character can be found in one iteration and stored for later use, there’s no need to do it for every iteration, here is how:
def partition(s):
    # iterate through each character
    # and store its max position
    end_map = {}
    for idx, c in enumerate(s):
        end_map[c] = idx

    parts = []
    prev_end = 0
    curr_end = 0
    for idx, c in enumerate(s):
        # look up the max position for c: end_map[c]
        curr_end = max(curr_end, end_map[c])
        if idx == curr_end:
            parts.append(s[prev_end:curr_end + 1])
            prev_end = curr_end + 1
            curr_end = prev_end
    return parts


S = 'stringpartitioning'
print(partition(S))
# ... ['s', 'tringpartitioning']
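The same function works unchanged on strings that split into more pieces; for example:
print(partition('ababcbacadefegdehijhklij'))
# ... ['ababcbaca', 'defegde', 'hijhklij']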
In this algorithm, we loop through each character and create a dictionary of max position, assuming dictionary look up and write are constant operations (you can usually do that), then we loop through each character a second time to create the partitions, but since we have already created the dictionary of max positions, we can just look them up, saving time from finding max position of the characters at each iteration.
Now with the above enhancement, the complexity becomes:
O(2 * n) = O(n)
Amazing! But is everything better? Well, let’s see — now that you created a dictionary, it’ll take up space. How much space does it take? If every character in the string were distinct, the dictionary would have n entries, so it’s of order:
memory usage = O(n)
Nothing can be perfect, can it? But we already knew that. | https://medium.com/python-in-plain-english/string-partitioning-d0012444454 | ['Shuo Wang'] | 2020-09-13 14:21:27.119000+00:00 | ['Python', 'Leetcode', 'Programming', 'Art', 'Algorithms'] |
15 Terms You Need to Get Used to As a UX Designer | A wireframe is a simplified representation of your website or application. It consists of lines and text that can be hand-drawn or electronic. The focus of the wireframe should be structural elements that represent priority. In this stage, visual design and color are not presented. Like the picture shown below.
Low-fidelity wireframe
A mockup is a high-fidelity simulation of the product. It has a richer visual element than wireframe, including graphics, layout, and style. Mockups focus on how the users will interpret the design through its visual elements.
Compared to wireframes, prototypes are more flexible. They can be responsive and may contain images or content. Prototypes are made from paper or with digital tools like Adobe XD. All wireframes are prototypes, just low-fidelity ones without many details. But a high-fidelity prototype is not a wireframe. It takes you as close as possible to a real representation of the UI and should feel like real software to users.
Prototyping On A Budget. NATE DESCHAINEP, 2018.
A/B testing is a controlled experiment for comparing two versions of the design. The goal is to identify which one is more successful. It is a way of testing the designers' hypotheses. A/B testing allows them to confirm whether the hypotheses will work out well or not. It helps you with making the right decisions.
A/B testing. Illustration by: Philip Clark
Affordance is a feature of an object that makes actions possible. Users should understand the affordance of a feature of an object without having to be told how to use it. In other words, affordance is a hint for users to interact with a product. For instance, a button should be designed to look as if it should be pushed.
Accessibility is about users with disabilities or special needs. It focuses on whether those users can access an equal user experience. Common barriers to accessibility issues are color blindness, hearing difficulties, wheelchair-user concerns, etc.
Clickstream is a way of collecting and analyzing data to track users’ actions. It usually takes place during users’ interaction with the product. It is also a method to get unbiased quantitative data about user behavior.
Example of clickstream analysis
A diary study is a research method used to collect qualitative data about user behavior over time. It is usually designed for understanding long-term user behavior and experiences.
Timeline of activities that take place throughout a typical diary study. Nielsen Norman Group, 2020.
Card sorting is a research method where participants label notes according to certain criteria. It helps designers structure content that is easy to navigate within the information architecture.
KPI is a way of monitoring or measuring progress towards a goal. Oxford’s dictionary says it is a quantifiable measure used to evaluate the success of an organization in meeting objectives. In other words, a KPI is a way to show where the gaps are and where to put more attention. KPIs can be used at every level in an organization to keep track of progress.
A mood board is a collection of features such as fonts, images, icons, and other UI elements. They help to define the artistic direction of a project. It should be also focused on meeting user needs and problem-solving.
It is a map that tells you what parts of the webpage a user focuses on most. It uses a color scheme ranging from warm colors, like red, to cooler colors, like blue. The warmer areas indicate where users focus the most.
Eye-tracking is the measurement of eye activity on a screen. It involves measuring either where the eye is focused or the motion of the eye as an individual views a web page. You can get information on what users look at most and in what order.
Image from Don't Make Me Think by Steve Krug, 2013.
It is a diagram designed to visualize all the potential causes of a problem to discover the root cause. The head of the fish states a problem and bones along the spine represent categories of factors. When applied correctly, it ensures that you address the actual cause of the problem.
Fishbone Diagram
Technical debt, or design debt, is the cost of extra rework caused by choosing an easy solution now instead of a better approach that would take longer. Development teams, for example, might take shortcuts to deliver functioning products faster, then later rework them to meet the standard of quality. | https://uxplanet.org/15-terms-you-need-to-get-used-to-as-a-ux-designer-52b593566b7a | ['Eva V.'] | 2020-12-26 04:17:11.989000+00:00 | ['Tech', 'Product Design', 'UX Design', 'UX Research', 'Design'] |
How to get create-react-app to work with a Node.js back-end API | This is a very common question among newer React developers, and one question I had when I was starting out with React and Node.js. In this short example I will show you how to make create-react-app work with Node.js and Express Back-end.
create-react-app
Create a project using create-react-app .
npx create-react-app example-create-react-app-express
Create a /client directory under example-create-react-app-express directory and move all the React boilerplate code created by create-react-app to this new client directory.
cd example-create-react-app-express
mkdir client
The Node Express Server
Create a package.json file inside the root directory ( example-create-react-app-express ) and copy the following contents:
{
"name": "example-create-react-app-express",
"version": "1.0.0",
"scripts": {
"client": "cd client && yarn start",
"server": "nodemon server.js",
"dev": "concurrently --kill-others-on-fail \"yarn server\" \"yarn client\""
},
"dependencies": {
"body-parser": "^1.18.3",
"express": "^4.16.4"
},
"devDependencies": {
"concurrently": "^4.0.1"
}
}
Notice I am using concurrently to run the React app and server at the same time. The --kill-others-on-fail flag will kill other processes if one exits with a non-zero status code.
Install nodemon globally and the server dependencies:
npm i nodemon -g
yarn
Create a server.js file and copy the following contents:
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
const port = process.env.PORT || 5000;

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

app.get('/api/hello', (req, res) => {
  res.send({ express: 'Hello From Express' });
});

app.post('/api/world', (req, res) => {
  console.log(req.body);
  res.send(
    `I received your POST request. This is what you sent me: ${req.body.post}`,
  );
});

app.listen(port, () => console.log(`Listening on port ${port}`));
This is a simple Express server that will run on port 5000 and have two API routes: GET - /api/hello , and POST - /api/world .
At this point you can run the Express server with the following command (still inside the root directory):
node server.js
Now navigate to http://localhost:5000/api/hello , and you will get the following:
We will test the POST route once we build the React app.
The React App
Now switch over to the client directory where our React app lives.
Add the following line to the package.json file created by create-react-app .
"proxy": "http://localhost:5000/"
The key to using an Express back-end server with a project created with create-react-app is to use a proxy. This tells the webpack development server to proxy our API requests to our API server, given that our Express server is running on localhost:5000.
Now modify ./client/src/App.js to call our Express API Back-end, changes are in bold.
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  state = {
    response: '',
    post: '',
    responseToPost: '',
  };

  componentDidMount() {
    this.callApi()
      .then(res => this.setState({ response: res.express }))
      .catch(err => console.log(err));
  }

  callApi = async () => {
    const response = await fetch('/api/hello');
    const body = await response.json();
    if (response.status !== 200) throw Error(body.message);
    return body;
  };

  handleSubmit = async e => {
    e.preventDefault();
    const response = await fetch('/api/world', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ post: this.state.post }),
    });
    const body = await response.text();
    this.setState({ responseToPost: body });
  };

  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <p>
            Edit <code>src/App.js</code> and save to reload.
          </p>
          <a
            className="App-link"
            href="https://reactjs.org"
            target="_blank"
            rel="noopener noreferrer"
          >
            Learn React
          </a>
        </header>
        <p>{this.state.response}</p>
        <form onSubmit={this.handleSubmit}>
          <p>
            <strong>Post to Server:</strong>
          </p>
          <input
            type="text"
            value={this.state.post}
            onChange={e => this.setState({ post: e.target.value })}
          />
          <button type="submit">Submit</button>
        </form>
        <p>{this.state.responseToPost}</p>
      </div>
    );
  }
}

export default App;
We create the callApi method to interact with our GET Express API route, then we call this method in componentDidMount and finally set the state to the API response, which will be Hello From Express.
Notice we didn’t use a fully qualified URL http://localhost:5000/api/hello to call our API, even though our React app runs on a different port (3000). This is because of the proxy line we added to the package.json file earlier.
We have a form with a single input. When submitted, it calls handleSubmit , which in turn calls our POST Express API route, then saves the response to state and displays a message to the user: I received your POST request. This is what you sent me: [message from input].
Now open ./client/src/App.css and modify .App-header class as follows (changes in bold)
.App-header {
...
min-height: 50%;
...
padding-bottom: 10px;
}
Running the App
If you still have the server running, go ahead and stop it by pressing Ctrl+C in your terminal.
From the project root directory run the following:
yarn dev
This will launch the React app and run the server at the same time.
Now navigate to http://localhost:3000 and you will hit the React app displaying the message coming from our GET Express route. Nice 🎉!
Displaying GET route
Now, type something in the input field and submit the form, you will see the response from the POST Express route displayed right below the input field.
Calling POST route
Finally take a look at at your terminal, you will see the message we sent from the client, that is because we call console.log on the request body in the POST Express route.
Node
Production Deployment to Heroku
Open server.js and replace with the following contents:
const express = require('express');
const bodyParser = require('body-parser');
const path = require('path');

const app = express();
const port = process.env.PORT || 5000;

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// API calls
app.get('/api/hello', (req, res) => {
  res.send({ express: 'Hello From Express' });
});

app.post('/api/world', (req, res) => {
  console.log(req.body);
  res.send(
    `I received your POST request. This is what you sent me: ${req.body.post}`,
  );
});

if (process.env.NODE_ENV === 'production') {
  // Serve any static files
  app.use(express.static(path.join(__dirname, 'client/build')));

  // Handle React routing, return all requests to React app
  app.get('*', function(req, res) {
    res.sendFile(path.join(__dirname, 'client/build', 'index.html'));
  });
}

app.listen(port, () => console.log(`Listening on port ${port}`));
Open ./package.json and add the following to the scripts entry
"start": "node server.js",
"heroku-postbuild": "cd client && npm install && npm install --only=dev --no-shrinkwrap && npm run build"
Heroku will run the start script by default, and this will serve our app. Then we want to instruct Heroku to build our client app; we do so with the heroku-postbuild script.
Now, head over to Heroku and log in (or open an account if you don’t have one).
Create a new app and give it a name
Create new app on Heroku
Click on the Deploy tab and follow the deploy instructions (which I think are pretty self-explanatory — no point in replicating them here 😁)
Deploy an app to Heroku
And that is it, you can open your app by clicking on the Open app button at the top right corner within the Heroku dashboard for your app.
Visit the deployed app for this tutorial: https://cra-express.herokuapp.com/
Other Deployment Options
I write about other deployments options here:
Project Structure
This will be the final project structure.
Get the full code on the GitHub repository.
Thank you for reading and I hope you enjoyed it. Any question, suggestions let me know in the comments below!
You can follow me on Twitter, GitHub, Medium, LinkedIn or all of them.
This post was originally posted on my personal blog website. | https://medium.com/free-code-camp/how-to-make-create-react-app-work-with-a-node-backend-api-7c5c48acb1b0 | ['Esau Silva'] | 2020-07-30 01:19:04.628000+00:00 | ['Create React App', 'Expressjs', 'Nodejs', 'React', 'JavaScript'] |