Redesigning Google Analytics from the Ground Up — How to Design User-Friendly Data Visuals
Google Analytics is a tool with a high learning curve that focuses on powerful functionality and customizability. Here is a screenshot of a demo account. As you can see, it uses an open sandbox design. However, for newbies and veteran marketers alike, open sandbox designs are often overwhelming and can lead to questions that are not addressed by the design. For example: which analyses should be performed routinely and which for special cases? How do I confirm a campaign's effectiveness? Is the growth I'm seeing in my traffic good or bad? The biggest weakness of this sandbox design is the underlying assumption that the user knows where to look for an insight and how to look for it. User-friendly data visuals, on the other hand, assume almost nothing about the skill level of the user, so today we're going to focus on how to build an effective one. The first thing to understand is that user-friendly design requires a ground-up approach. Simply put, first we understand the user's needs and pain points with GA. Then, using design, we try to resolve their pain.

Who are our users and what are their needs?

This is the first question that needs to be addressed for any user-friendly design. To answer it, we first hypothesize about the needs of our core audience: marketers at small to medium e-commerce businesses. To help put a face on our audience, we'll call our hypothetical marketer Brendan. Brendan works with a small team on marketing, and he gives weekly reports using Google Analytics metrics. Brendan is interested in his website's performance this week and has been monitoring his traffic with Google Analytics once a week. He wants a visual that allows him to accurately judge whether his metrics are going up, going down, or staying the same — in other words, which metrics are doing well and which are doing poorly. Standard practice is for marketers to assess website performance by looking at how their metrics have changed during the last week and how drastic that change is. So, we need our data visualization to convey two primary things: the change of the default Google Analytics metrics (there are nine of them) since the previous week, and the magnitude of that change.

Designing Visuals for the Needs of the User

To move away from the sandbox design, we need to look for effective data visualizations that tell strong narratives. In the visualization list from Storytelling with Data by Cole Nussbaumer Knaflic, the emphasis on change over short time periods suggests the use of text and slope graphs. (Slope graphs are essentially mini line graphs that show only two time points.)

Let's Start with Text

Don't underestimate text because it's not "complex." If text can communicate both change and magnitude effectively, why reinvent the wheel? Looking at one metric, text isn't too bad. Brendan can clearly see and understand whether the metric has changed and by how much. Unfortunately, our real-life case involves nine metrics. This is visual overload. It's pretty unreasonable to ask Brendan to piece together a narrative from nine blocks of text about changing quantities. The idea of change gets across, but the main issue is that nothing visually differentiates the magnitude of change. This means it's a lot more work for Brendan to interpret. After some deliberation, we settled on adding a visual aid and arranging it into a dashboard. Not bad. With the addition of a small visual aid and some rearrangement, we made a dashboard that helps Brendan with certain aspects of analysis.
Brendan can now quickly scan and understand that these metrics belong in groups. The top group shows amount of traffic. The middle group shows type of traffic. The bottom group shows quality of traffic. Also, by adding an additional arrow for each 10% change, Brendan can get a much better sense of the magnitude of change. As such, Brendan can immediately compare the changes between grouped metrics and then begin hypothesizing based on the data shown. For example, users have gone up by two arrows but sessions and pageviews have only gone up by one arrow. So, the increase in pageviews and sessions is lagging relative to the increase in users. This suggests that new users are not exploring as many pages or making repeated visits as often. By adding a visual aid and grouping the information, Brendan is better able to hypothesize about the website's performance. Now we have a strong candidate for data visualization, and we haven't even fully left the realm of text yet.

Now, Let's Try a Trendline

To simplify things, we'll isolate it to users, sessions, and pageviews. We want to see if this improves our dashboard. It's immediately obvious that trendlines have the clear advantage of being more visually pleasing and concise. They clearly show where the metric started and where it ended up. The slope of each line seems to represent the magnitude of its change. This visualization seems well suited for weekly metric analysis. Now, it seems natural for Brendan to look at that graph and say, "Woo! Everything is increasing! Hooray!" However, there is an implicit assumption hidden here that isn't shown in a normal trendline. I'll add it below. After seeing this, Brendan might not be celebrating anymore. Notice how sessions and pageviews both had steeper slopes when going from 0 to last week? Since last week, their slopes have leveled off to the exact same slope as users. This graph suggests that newer users are visiting the site once, looking at a single page, leaving, and never coming back. That doesn't sound too healthy from a marketing standpoint, despite the fact that all of the metrics are experiencing growth. Therein lies the weakness of using a trendline. While slopes excel at showing the absolute change from last week to this week, they do a poor job of showing the percentage change. Even with the 0 segment added, and a theoretical slope to compare with the current slopes, there is no clear indication of how much the slopes have leveled off. This forces Brendan to judge percentage changes by trying to visually assess the difference between the dotted slope and the corresponding solid slope, further complicating analysis.

Text or Trendline? Which is More Useful to the User?

In designing our user-friendly data visualization, we need to choose which design is most helpful to Brendan. These two visuals essentially tell the same story, and each one has pros and cons. Let's look at both together to summarize their strengths and weaknesses.
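(A small aside, not from the original article: the arrow encoding described above is easy to express in code. Here is a minimal sketch; the class and method names, the rounding rule, and the arrow glyphs are my own assumptions.)

public class MetricArrows {

    // Render a week-over-week percentage change as arrows,
    // one arrow per started 10% of change: +17% -> "↑↑", -4% -> "↓".
    static String arrows(double percentChange) {
        if (percentChange == 0) {
            return "→"; // no change
        }
        int count = (int) Math.ceil(Math.abs(percentChange) / 10.0);
        String symbol = percentChange > 0 ? "↑" : "↓";
        return symbol.repeat(count); // String.repeat requires Java 11+
    }

    public static void main(String[] args) {
        System.out.println("Users:     " + arrows(17.0)); // ↑↑
        System.out.println("Sessions:  " + arrows(8.0));  // ↑
        System.out.println("Pageviews: " + arrows(8.0));  // ↑
    }
}

With an encoding like this, Brendan reads magnitude directly from arrow counts instead of mentally comparing nine separate numbers.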
https://medium.com/analytics-for-humans/redesigning-google-analytics-from-the-ground-up-how-to-design-user-friendly-data-visuals-ca166b3b275e
['Zach Diamond']
2018-06-08 19:40:05.056000+00:00
['Analytics', 'Design', 'Data Visualization', 'Google Analytics', 'Data']
Keep your ship together with Kapitan
NEW: Katacoda scenario! Manage complexity with Kapitan

We open sourced Kapitan back in October 2017. In the 12 months before that date, Kapitan helped us revolutionise the way we were running things at DeepMind Health, and allowed us to keep control over many heterogeneous systems: Kubernetes, Terraform, documentation, scripts, playbooks, Grafana dashboards, Prometheus rules: everything was kept under control from one single point of truth. I am not afraid to say it out loud: there is nothing out there which is as versatile and powerful as Kapitan for managing configurations of complex systems. There.. I said it. Prove me wrong :)

Having a product so radically different from anything else out there obviously also meant that we had to learn and discover how to use it: patterns, best practices.. We had to recognise and discover them as they surfaced while refactoring the mess we made with our initial rollout. Fortunately, one of Kapitan's strengths is to make refactoring a joy, and so we did, and did it again, until we came out with a nice set of best practices. What we didn't do was make them available to others… until now. Spoiler!

Joining Synthace last year as Head of SRE also meant I had a chance to apply those best practices and approaches to a fresh new environment. This gave me the opportunity to test and improve them much faster than I would have been able to previously, due to the much faster iterations we have there. The results were spectacular, allowing me to bring control to a place where manifests were generated using unmaintainable Go code, secrets were managed manually, and each Kubernetes cluster was a snowflake. I successfully introduced Kapitan, but I was still working way too much with jsonnet, and needed a jsonnet file for each of the (almost identical) 20 microservices we had. So I had a thought: what if I could replicate the full setup without touching any code at all?

Introducing Kapitan Generators

Today I will give you a preview of how we use Kapitan internally at Synthace. I am also pleased to announce that we have released some of the internal jsonnet libraries we developed at Synthace as open source! See: https://github.com/kapicorp/kapitan-reference In particular, we will release:

- [RELEASED] A jsonnet manifest generator library to quickly create Kubernetes "workloads" manifests by simply defining them in the inventory. Get started with something as simple as:

parameters:
  components:
    api-server:
      image: gcr.io/your-company/api:latest

- A jsonnet pipelines generator library to quickly create Spinnaker pipelines for the above defined workloads.
- A jsonnet terraform generator library to create Terraform configurations.
- A set of helper scripts which will make it easy to get up and running with Kapitan.

To set expectations right, these libraries will be released in a form that will probably require some refinement, but they should hopefully allow you to get started and inspire you to contribute your own libraries or apply the same approach to manage your system of choice. These generators are a huge step forward from our previous approach, where you would need to create a jsonnet file for each service you wanted to manage with Kapitan.
The ambition is to allow you to get started and generate configuration for 80% of your cases, and at the same time enforce some sane best practices along the way. Of course, you can still write your own jsonnet code if you need something fancier or want to have full control over a specific component.

A sneak peek — generating manifests

So let's say that you want to get started with Kapitan. Until now the steps to get you started were quite a few, often cryptic and not well documented. We have released a "kapitan reference" repository (https://github.com/kapicorp/kapitan-reference) with all the batteries included. I will run this session assuming you have this already.

Pre-requisites:
- docker
- gcloud (the example is on GCP)
- kapitan
- yq
- kapitan generators (released)

Suggested read: Your first target file

Note: Since we have released the Manifest Generator, you will now be able to follow these steps.

From your kapitan-reference repository (https://github.com/kapicorp/kapitan-reference), go on and create a first dev target file: inventory/targets/dev.yml. For simplicity, we won't be creating inventory classes right now, so we are going to edit the target file directly. Make sure it has the following content:

classes:
  - common

parameters:
  target_name: dev
  components:
    echo-server:
      image: inanimate/echo-server

Now run:

kapitan compile --fetch

The --fetch flag will make Kapitan download the latest libraries and support scripts that we package. Some of the third-party libraries we use are kube.libsonnet and spinnaker/sponnet. If you now check your git repository, you will find that Kapitan has generated some files for you:

compiled/dev/
├── docs
├── manifests
│   └── echo-server-bundle.yml
└── scripts

Let's look at what we have! Have a look at echo-server-bundle.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo-server
  name: echo-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - echo-server
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - image: inanimate/echo-server
        imagePullPolicy: IfNotPresent
        name: echo-server
      restartPolicy: Always
      terminationGracePeriodSeconds: 30

Admit it. That was quick! The manifest generator loops through the keys of the components inventory hash, and generates a new set of config for each one it finds, in this case echo-server (from: https://hub.docker.com/r/inanimate/echo-server). As anticipated, the generator library tries to be smart and adds some best practices that you may or may not like for all your services. Why would you want a Deployment without podAntiAffinity? I'm sure there are valid reasons, but let's make it a default, shall we?

Exposing the service

The deployment looks good, but it is missing some essential parts. Ehm.. we need a service! Right, let's do that and recompile with kapitan compile:

classes:
  - common

parameters:
  target_name: dev
  namespace: ${target_name}
  components:
    echo-server:
      image: inanimate/echo-server
      service:
        type: ClusterIP
      ports:
        http:
          service_port: 80

Adding the port definition will produce the following. It will add the port definition to the container ...
containers:
- image: inanimate/echo-server
  imagePullPolicy: IfNotPresent
  name: echo-server
  ports:
  - containerPort: 80
    name: http
    protocol: TCP

And it will create a new service definition:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-server
  name: echo-server
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: echo-server
  sessionAffinity: None
  type: ClusterIP

Are we there yet? Not quite:
- The service assumes that the echo-server runs on port 80. From the documentation, it looks as if the service is actually running on port 8080 instead.
- We would want the service to be exposed using a LoadBalancer service, so let's change that.
- We would like a readiness probe.

classes:
  - common

parameters:
  target_name: dev
  namespace: ${target_name}
  components:
    echo-server:
      image: inanimate/echo-server
      service:
        type: LoadBalancer
      ports:
        http:
          service_port: 80
          container_port: 8080
      healthcheck:
        type: http
        port: http
        probes: ['readiness']
        path: /
        timeout_seconds: 3

Have a look at the bundle again:

...
containers:
- image: inanimate/echo-server
  imagePullPolicy: IfNotPresent
  name: echo-server
  ports:
  - containerPort: 8080
    name: http
    protocol: TCP
  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /
      port: http
      scheme: HTTP
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 3
restartPolicy: Always
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-server
  name: echo-server
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: echo-server
  sessionAffinity: None
  type: LoadBalancer

Attaboy!

Adding Environment Variables

What else could we do? Well, from the echo-server docker page it looks as if we can play with a few parameters to change its configuration. Let's add some env variables:

classes:
  - common

parameters:
  target_name: dev
  namespace: ${target_name}
  echo_server_port: 8081
  components:
    echo-server:
      image: inanimate/echo-server
      env:
        PORT: ${echo_server_port}
        POD_NAME:
          fieldRef:
            fieldPath: metadata.name
        POD_NAMESPACE:
          fieldRef:
            fieldPath: metadata.namespace
        POD_IP:
          fieldRef:
            fieldPath: status.podIP
      service:
        type: LoadBalancer
      ports:
        http:
          service_port: 80
          container_port: ${echo_server_port}

As expected, the changes are reflected in the manifest:

containers:
- env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: PORT
    value: 8081
  image: inanimate/echo-server
  imagePullPolicy: IfNotPresent
  name: echo-server
  ports:
  - containerPort: 8081
    name: http
    protocol: TCP

Adding secrets

Just for the sake of testing, let's also add a secret to the setup, even if the component won't be using it.
classes:
  - common

parameters:
  target_name: dev
  namespace: ${target_name}
  echo_server_port: 8081
  components:
    echo-server:
      image: inanimate/echo-server
      env:
        PORT: ${echo_server_port}
        POD_NAME:
          fieldRef:
            fieldPath: metadata.name
        POD_NAMESPACE:
          fieldRef:
            fieldPath: metadata.namespace
        POD_IP:
          fieldRef:
            fieldPath: status.podIP
        SECRET_PASSWORD:
          secretKeyRef:
            key: echo_server_password
      service:
        type: LoadBalancer
      healthcheck:
        type: http
        port: http
        probes: ['readiness']
        path: /
        timeout_seconds: 3
      ports:
        http:
          service_port: 80
          container_port: ${echo_server_port}
      secret:
        items: ['echo_server_password']
        data:
          echo_server_password:
            value: ?{plain:targets/${target_name}/echo_server_password||randomstr}

Let's break this down:
- ?{plain:targets/${target_name}/echo_server_password||randomstr} will create a random string and store it in git. Because we have used the plain backend, it will be stored in cleartext. Use gkms or another secrets backend if you care about your secrets.
- The SECRET_PASSWORD env variable will have the content of the generated password. Because we have decided to test the plain backend, you will see it in clear text in the manifest. Otherwise it would be encrypted and you would only see a secure tag.
- The items instruction will also mount the secret as a volume, and expose only the selected item. This means you will also be able to access the content of the secret using /opt/secrets/echo_server_password.

The result of the compilation adds a new file:

compiled/dev/
├── docs
├── manifests
│   ├── echo-server-bundle.yml
│   └── echo-server-secret.yml
└── scripts

Also notice that the files are all nicely and consistently named after the service. Lastly, we can move the component definition into its own class file: inventory/classes/components/echo-server.yml

parameters:
  echo_server_port: 8081
  components:
    echo-server:
      image: inanimate/echo-server
      env:
        PORT: ${echo_server_port}
        POD_NAME:
          fieldRef:
            fieldPath: metadata.name
        POD_NAMESPACE:
          fieldRef:
            fieldPath: metadata.namespace
        POD_IP:
          fieldRef:
            fieldPath: status.podIP
        SECRET_PASSWORD:
          secretKeyRef:
            key: echo_server_password
      service:
        type: LoadBalancer
      healthcheck:
        type: http
        port: http
        probes: ['readiness']
        path: /
        timeout_seconds: 3
      ports:
        http:
          service_port: 80
          container_port: ${echo_server_port}
      secret:
        items: ['echo_server_password']
        data:
          echo_server_password:
            value: ?{plain:targets/${target_name}/echo_server_password||randomstr}

And then we can simplify the target to reference the component:

classes:
  - common
  - components.echo-server

parameters:
  target_name: dev
  namespace: ${target_name}

This way we can now reuse that component across other targets, like for instance inventory/targets/production.yml:

classes:
  - common
  - components.echo-server

parameters:
  target_name: prod
  namespace: ${target_name}

Running kapitan compile again will effortlessly generate the new files for the new target prod:

./kapitan compile
Compiled dev (0.29s)
Compiled prod (0.29s)

which produced:

compiled/prod/
├── docs
├── manifests
│   ├── echo-server-bundle.yml
│   └── echo-server-secret.yml
└── scripts

Final words
https://medium.com/kapitan-blog/keep-your-ship-together-with-kapitan-d82d441cc3e7
['Alessandro De Maria']
2020-12-09 19:18:32.788000+00:00
['Terraform', 'Devops Tool', 'Kubernetes', 'Kapitan', 'Spinnaker']
Joe Abrams Invests in Ponder
We’re thrilled to announce that Joe Abrams, co-founder of MySpace and one of the fathers of social media, has decided to invest in Ponder and join our Board of Directors as an observer. Joe is an extremely successful entrepreneur and an expert in emerging growth companies in areas including technology, drug discovery technology, consumer products, big data, and online job placement. “Ponder is building a gamified referral platform on the blockchain, that will completely change the incentives for recruiting, business partnerships and personal relationships. I’m thrilled to not only be an investor but provide strategic guidance as the company grows!” said Abrams. Joe was a co-founder of The Software Toolworks, which he sold to Pearson PLC for $462m in 1994. He later co-founded Intermix Media, the parent company of MySpace, which he sold to News Corp in 2005 for $580m. Mr. Abrams also sits on the advisory boards for several companies including Recruiter.com, an online global recruiting service offering an industry leading job market technology platform. With his breadth of experience in technology and social networks, Joe is the perfect executive and strategic thinker to help the Ponder ecosystem grow over the coming years. Learn more about the Ponder platform and token sale here!
https://medium.com/theponderapp/joe-abrams-invests-in-ponder-3721adc78619
[]
2018-09-07 19:55:32.254000+00:00
['Blockchain', 'Recruiting', 'Social Network', 'Investing', 'Startup']
My Top 3 — August 28th, 2019. Giving credit where credit is due
My Top 3 — August 28th, 2019
Giving credit where credit is due
(Photo by Michael Dziedzic on Unsplash)

If You Want To Be Happy, Learn To Be Humble

I think we could all do with a healthy dose of humility once in a while. I know I do. I'm not egotistical by nature, but we all have our moments, I think. And it's so easy to get caught up in ourselves — ego is a very demanding thing. We need to learn to "knock ourselves down a peg" once in a while. Helen Cassidy Page takes the no-nonsense approach to getting over the Know-It-All syndrome we all secretly — and not so secretly — possess. A lesson we all need to learn.

Will Humanity Ever Learn?

About fifteen years ago, I boycotted the news — from all outlets. I don't watch it, read it, or listen to it. That doesn't mean that I don't know what's going on in the world, or that I don't care. It's simply a defense mechanism — a sanity-saver if you will. With so much bad going on in the world, it's hard to keep your faith in humanity when you're constantly bombarded with it at every turn. Jon Peters hit me in the gut and took the breath out of me with this one. We love to think, as humans, that we've got it right. That we're doing the right thing — our hearts are in the right place, and we've all got our priorities in check. The hard and simple truth is that we don't. The truth hurts. We need to fix it before it's too late.

Anxiety Preys on the Torture of Uncertainty

My heart broke when I read this piece. I don't know if it was the despair or the fierce determination — or maybe it was the way Matt Clarke describes his emotions. I feel like this is a brutally honest account of how people who suffer from anxiety really feel. Understanding this condition, and knowing the constant internal tug-o-war sufferers are going through, I believe can help us gain the insight we need — the empathy — to truly be there for those in need. We never really know what another person is going through.
https://medium.com/top-3/my-top-3-august-28th-2019-eb5650f02b57
['Edie Tuck']
2019-08-28 21:06:56.327000+00:00
['Awareness', 'Life Lessons', 'Humanity', 'Mental Health', 'Humility']
It’s time to cut ties with your toxic mother
by: E.B. Johnson

The relationships we share with our mothers are truly unique. From the time we come into this world, they nurture us like no one else in our lives. They fill us with confidence and love, and they temper the tough experience of life by giving us a place of permanent shelter from the storm. This is not the case for everyone, however. For some, the relationship they share with their mother is turbulent, fraught, and toxic. What do you do when your mother hurts you more than she helps you? The answer, unfortunately, isn't an easy one to come by. Our mothers are important and they hold a special place in our hearts — even when they aren't the parent we need or deserve. Though they might berate you, belittle you, and criticize your every move, it's hard to let go of the first person you looked for in longing and in love. Letting go is necessary, though, when the bond we share with our mother has turned sour, dangerous, or toxic to our happiness and self-esteem.

Motherly relationships aren't always smooth sailing.

While television and the movies have built up a very specific type of relationship between mother and child, it's not always smooth sailing where this connection is concerned. Like any other relationship, the affection and communication we share with our parents can become bent and twisted. Human as well, they hold their own tragic flaws and histories, which can make it even harder to maintain compassion and see one another on an even playing field. The relationship you share with your mother can be just as toxic, just as soul-crushing, as any other relationship you're a part of. Although we've been taught to revere our mothers and accept them no matter what — you still have a right to be happy and safe. When you're tied to a toxic mother, however, neither of those things is possible. If you're ready to overcome this toxic relationship, you've got to employ some brutal honesty. You have to start seeing your mother for who she is and see your own humanity in it all as well. No one deserves to be made small. No one deserves to be told they aren't good enough, or that what they want for themselves and their futures is invalid. Stand up for yourself. Find the courage to take action in the name of your own wellbeing and have enough self-respect to cut ties with the person who's wreaked so much damage.

Signs it's time to cut ties with your toxic mother.

Are you dealing with a toxic or abusive mother? It might be time to cut ties and walk away, but not before you look for the warning signs.

Endless guilt and upset

When it comes to the toxic mother, there is never an end to the guilt and the upset. Every conversation ends in conflict, or an increase in your own feelings of guilt, shame, and eroded self-worth. On a regular basis, they make you feel worse about yourself — and they do it through both snide remarks and subtle undercuts that hit you in the soul. This might come from their own insecurities, or a need to keep you small in order to retain their power over you.

Criticism comes standard

Does your mother endlessly criticize you or critique you? Do they comment on your weight, relationships, career, or friends with little consideration and no restraint? Do they make you feel small with their words and their disrespect? This is a classic symptom of a toxic relationship, and it's a system that's regularly utilized by mothers who are striving to keep their children in a state of inferiority or insecurity.
Looking for a savior

Not all toxic mothers are screaming matches and sanctimonious critiques. Sometimes, a toxic mother-child relationship looks more like a stereotype flipped on its head. This can be the mother who is looking to their child as a savior. Maybe your mother expects you to carry her burdens for her, or assume the role of a parent in her life. Perhaps she depends on you financially, or she clings to you mentally and emotionally.

Manipulating thoughts and feelings

Manipulation is a common tactic used by the toxic mother, and it can wear a lot of different faces. This manipulation might be emotional. Perhaps your mother uses tears or protestations of pain to make you feel guilty, so you'll bend to her will. On the opposite end, they might rely on mental manipulation and complicated, nuanced games of support and denial in order to win your allegiance (or servitude).

Forcing the blame game

When it comes to your mother, do you find yourself constantly apologizing? Even when you aren't the one at fault? Again, this is a common tactic used by parents of adult children across the board. Rather than taking responsibility for their own mistakes, they shift the blame to you — forcing you to internalize it and take it on at your own cost and burden. They force the blame game and force you to take on the weight of everything that goes wrong in your relationship or family.

Undercutting your relationship

Toxic mothers love to get involved in their children's relationships, and they love to cause problems and heartaches where there otherwise were no issues. Does your mother put herself in the middle of your relationships? Does she cause problems or sow seeds of doubt and conflict while running your partner down? Again, this is a common tactic used to wield control and undermine your happiness.

Emotional explosions

When your mother gets upset (with you or anyone else) what does her response look like? Dramatic and volatile explosions can often be a sign of a relationship that is plagued by toxic behavior. If your mother lashes out, screaming and terrorizing anyone and everyone who denies her wishes — it might be time to get serious about getting yourself clear and safe.

Unchecked mental illness

Living with and loving someone with mental illness can be a challenge and a struggle…especially when that person is your mother. Though we love our parents, it's not always possible to support them through their mental illness. This is especially true when they refuse to support themselves. A parent who refuses to get help, take medication, or address their issues is not one who we can hold on to. We have a right to protect our own mental health, and a responsibility to ensure we protect our happiness and safety.

Control, control, control

There is, perhaps, no more telling symptom of a toxic parent than the issue of control. Do you have a mother who insists on controlling your life or calling the shots for you (and your siblings)? Do they refuse to listen to your ideas? Do they refuse to see the value in your goals? The controlling parent is not one who is looking out for their child's best interest. They are someone who is looking out for their own image, and the picture of a family they want to build.

The best ways to separate from a toxic parent.

You don't have to allow your mother's toxic behavior to undermine your life forever. You can stand up for yourself and you can find the strength to slowly cut ties and discover your joy.
In order to do this you have to dig deep, however, and prioritize your needs while you strive to tap into your own courage and personal power.

1. Acceptance as a first step

Before you can move on and away from your mother, you have to accept who she is and how her behavior and choices impact you. Acceptance is not allowance. It is simply seeing reality for what it is and finding the courage to say, "Okay." Until you cultivate acceptance, you can't see where you're standing or where you need to go. It's the first step in taking action, and the first step in separating yourself from a mother who can't see your worth. Take off your rose-tinted glasses and stop forcing your mother into the box you want her to fit into. Be brutally honest. Who is your mother? When you're hurting, how does she treat you? When you're angry, what solutions does she offer? Can you trust her? Does she consider you when she makes choices that shape and change your family? See your mother as the human that she is. Consider all the parts of her — the hurt child, the uncertain woman, the wise crone with more experience than you can muster. Understand that, just as you are flawed and broken…so is she. Take her off the pedestal and see her for who she is. How does she affect you? Does she add grace and support to your life? Does she bring you mercy when the world is cruel? Like any other relationship, the love we share with our mothers should add (not detract) from our lives.

2. Get clear on your intentions

Once you've allowed yourself to see your mother as she is, you need to figure out your own intentions and how you want to proceed. If you've decided to sever ties with her for bad behavior, you need to ensure that you're prepared to make this move. Cutting someone out of your life is a forever experience (in many cases). Are you prepared to say goodbye to your mother forever? Are you doing this out of a need to be happy, or a need to punish? Make no mistake — if you think cutting your mother out of your life will make her change, you're wrong. We do not change for other people. Not really. We are the only ones who can change ourselves, and we can only do that when we decide it is something we want to undertake for our own happiness. "Punishing" your mother isn't going to work. Especially if she doesn't see you as an equal, or as worthy of respect. You need to ensure that you're cutting ties for the right reasons: your need for peace and respect in your life. Anything beyond that could indicate a certain level of unpreparedness, or a lack of consideration. Give yourself time to think through all the pros and cons, and take action only once you're certain you're ready to let go.

3. Have an honest conversation

After cultivating acceptance and aligning your intentions, the natural next step is to inform your mother of the upcoming changes. Before you do this, however, take some time to shore up your boundaries. Have a clear vision of what you expect and how you want to proceed. Know too what behavior you are and aren't willing to accept, and make that explicitly clear from the start. Boundaries in place, find a comfortable (and safe) time and place to sit your mother down and have a candid conversation. If your mother is especially toxic or volatile, enlist the help of a friend, or even a family issues expert who can help both of you navigate the difficult conversation to come with respect and civility. Avoid blaming language (i.e. you did this, you did that…) and stick to the facts you know.
Explain how you're feeling and why, but remove any "you" language that might otherwise inflame the conversation. Instead, describe scenarios as if you were detached ("Event A happened, and that made me feel bad"). Don't hold back, and make clear what comes next. Once you've given yourself room to explain where you're at, leave room for them to do the same…but don't accept abuse.

4. Let go of the guilt and the shame

There's a lot of guilt and shame associated with our toxic parental relationships, and that's often precisely what keeps us trapped in their poisonous loops and patterns. We have to move past that guilt and that shame if we hope to free ourselves from the darkness and find a path to our own light. We are not responsible for the pain of our parents, nor are we beholden to them forever for the decisions that they made. While you should be grateful for the sacrifices that your mother made for you, you should not allow those sacrifices to make a martyr of you. You don't have to sacrifice yourself on the altar of your mother's charity forever. In many cases, your mother made the choice to have you. That choice comes with responsibilities that never rested on your shoulders. Stop allowing your mother to guilt you into holding on to her pain. Don't allow her tales of woe and misery to be the chains that prevent your own joyful future. The pain that others caused her is no excuse for the pain she causes you. Embrace your own power and envision yourself free of her entanglements. Lean into your personal space and celebrate your individuality a little more each day.

5. Allow your inner child to run free

Cutting ties with your mother is a strange feeling, and for a long time you won't be certain how to move forward. In order to re-establish yourself as a newly independent person, you need to reconnect with the joy, optimism, and love in your life. The most effective way to do this is by opening up the door for your inner child to re-emerge. Slowly, they will learn to rediscover the world and what it means to be seen, valued, and loved for who and what they are. Walk away from your mother and allow your inner child out to run free. Take them by the hand and assure them that there's no more monster under the bed. Let them know that they have a chance at happiness now, and they can dance exactly as they'd like to. Revel in the joy of this newfound freedom and use this process to tap back into your childlike sense of wonder. Make a conscious effort to let your inner child out every day, or every week. Give them free rein to explore new relationships with all the childish curiosity that makes learning an enjoyable experience. Drop the inner criticisms and echoes of a mother who didn't quite know how to love as well as she needed to. Re-parent your inner child and give them the support they never found in the caretakers who came before. Thank your mother for what she did, but kindly say good night and allow yourself (and your inner child) to move on and heal.

Putting it all together…

Though the relationship we share with our mothers is meant to be sacred and reassuring, it doesn't always play out that way. Sometimes, the relationships we hold with our parents turn toxic and do more harm than good. In those moments, it's important to stand up for ourselves and take stock. We have a right to be happy, and that doesn't have to include a mother who criticizes you, runs you down, or otherwise works to destroy your happiness. Accept who your mother is and accept too how she impacts your life.
Take off the rose-tinted glasses and see her as she is, not as you wish her to be. Once you take a brave step into this new reality, you can begin to set your intentions and decide what course of action is the best for you and your wellbeing. Weigh the pros and cons of walking away, and understand that it's not a tool for punishment and it's no way to force your mother to change. Knowing what you need to do, sit your mother down and have an honest conversation with her. Communicate your feelings and your new boundaries too. Avoid blaming language, but let her know where the new lines lie. Allow yourself to let go of all the guilt and shame, and empower yourself to move forward in authentic joy by bringing your inner child out to celebrate their new freedom regularly.
https://medium.com/lady-vivra/its-time-to-cut-ties-with-your-toxic-mother-f1cad4a86e09
['E.B. Johnson']
2020-09-01 06:06:01.277000+00:00
['Relationships', 'Family', 'Mental Health', 'Self', 'Parenting']
Find the Noble Integer
Problem

Noble Integer: InterviewBit

Given an integer array, find if an integer p exists in the array such that the number of integers greater than p in the array equals p. If such an integer is found, return 1, else return -1.

Solving Process

Simplify Input

This problem is not very difficult, but to solve it we have to apply a well-known technique: simplifying the given inputs. Let's take a small example:

2, 6, 1, 3

This input should return 1, as 2 is a noble integer. We know that by counting the number of integers greater than 2 (there are two of them: 6 and 3). Yet how do we solve this problem without an O(n²) implementation? The answer is to simplify the inputs. What if, in our case, the array was sorted?

1, 2, 3, 6

The integer 2 is at index 1, whereas the array size is 4. The success condition is the following:

size(array) - index - 1 == array[index]

Once the problem is solved using a simplification, we need to check the implications in terms of complexity. If our solution is acceptable, we generalize to the initial problem. In our case, we have to:

Sort the array => O(n log(n))
Iterate over each element and check the previous condition => O(n)

It means the solution is O(n log(n)). Bear in mind, sorting an array with a comparison-based algorithm (like a merge sort, for example) can't be done better than O(n log(n)). Also, we have to make sure our solution covers all corner cases. What about the following example, after being sorted:

1, 2, 2, 3

At index 1, the condition is going to be true. Yet this should not be a match: because of the duplicate, only one integer (3) is strictly greater than 2. So we also have to cover duplicates, by checking whether the next integer is equal to the current one. A possible implementation in Java:
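(The article's original code block did not survive in this copy. Below is a minimal reconstruction of the approach described above — sort, skip duplicates, check the condition — not necessarily the author's exact code.)

import java.util.Arrays;

public class NobleInteger {

    // Returns 1 if a noble integer exists in the array, -1 otherwise.
    public static int solve(int[] a) {
        Arrays.sort(a); // O(n log n)
        int n = a.length;
        for (int i = 0; i < n; i++) {
            // Skip duplicates: only the last occurrence of a value sees
            // the true count of strictly greater elements.
            if (i + 1 < n && a[i] == a[i + 1]) {
                continue;
            }
            // n - i - 1 elements are strictly greater than a[i].
            if (n - i - 1 == a[i]) {
                return 1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[]{2, 6, 1, 3})); // 1 (2 is noble)
        System.out.println(solve(new int[]{1, 2, 2, 3})); // -1 (duplicate case)
    }
}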
https://medium.com/solvingalgo/how-to-solve-algorithmic-problems-noble-integer-f2cbc5c016ae
['Teiva Harsanyi']
2020-12-09 19:29:49.612000+00:00
['Arrays', 'Programming', 'Java', 'Coding', 'Algorithms']
The Devil’s Dictionary of Software Design
10x Programmer. The CEO's justification for hiring ten times fewer programmers than are needed to complete a project.

Abstraction. Making a program progressively more general and less specific, so that the next person to review the code cannot pinpoint who made the mistakes.

MVC Pattern. A fancy acronym to remind you not to write your whole program in the event handler for a button click.

OOP. Wouldn't programming be so much easier if we were making metaphors about objects instead of writing lines of functional code? No, not really? Well I wish you had spoken up before.

Refactoring. That thing you'll do in the distant future to fix all the bugs you're putting in right now.

Singleton. A politically correct global variable.

HTML. Often described as the first programming language you should learn by people who don't understand what a programming language is.

YAGNI (You Ain't Gonna Need It). A formal expression of the fact that a programmer can often have a more beneficial effect on a project by staying in bed than by writing code.

Pair Programming. When two programmers share one computer, not due to lack of funds, but as a form of peer pressure.

Full Stack Developer. A back-end developer who can also center a <div> using CSS. (Alternate definition: a front-end developer who knows how to install MySQL.)

Garbage Collection. Just one of the many things a runtime environment needs to do because you can't trust programmers to clean up after themselves.

Regular Expressions. A system no one really understands for making mistakes in string validation code.

Inheritance. If our programming language doesn't have this feature, we make fun of it. But if someone actually uses it, we make fun of them.

Big-O Notation. A way of sizing up how the performance of different algorithms will scale. Or, a way of sizing up who studied computer science at school and who taught themselves the craft with Udemy courses.

Agile Programming. The reason your CEO gave when he tossed out the project's requirements document and forwarded you an email of customer requests instead.

Interfaces. A design-by-contract approach that would have saved you a lot of headaches if you had implemented it six weeks ago, when you wrote that first quick prototype.

Code Smells. Like ordinary smells, you can always deny being the one responsible for the offense.

Clean Code. The excuse you'll use for leaving out all the comments.

Design Pattern. A name we give to a programming practice developers have repeated over and over again, so we can lavish it with undeserved reverence.

Compiler. A tool that transforms minor syntactical mistakes into business-shattering disasters.

Recursion. See recursion, Devil's Dictionary of Software Design.

Testing. An unnecessary diversion of resources, given that the program works fine on my computer. (Come and see it if you don't believe me.)

ORM (Object Relational Mapping). No one likes code that's littered with dozens of tiny data class definitions. Also, no one likes inefficient, dynamically generated queries. But what if we invented a technology that combined both?

Cryptography. The art of keeping sensitive information between you and the NSA.

Encapsulation. A design principle that makes sure all the horrible mistakes you made in one class don't leak into another.

Obfuscation. Concealing the purpose of your code by adding a great deal of design patterns.

Bootcamp. A place where newly recruited civilians prepare, painfully, for a world of stress and hand-to-hand combat (that's corporate IT).
DRY (Don't Repeat Yourself). A principle that holds that the less programmers say, the happier everyone will be.

Leaky Abstraction. The inevitable and perpetual state of programming, where simplifications ruin everyone's life because you need to learn both the simplification and the thing being simplified so you can patch all the problems the simplification doesn't cover. (See also, "The reason programmers will never be unemployed.")
https://medium.com/young-coder/the-devils-dictionary-of-software-design-8f4fab207808
['Matthew Macdonald']
2020-12-01 18:31:14.881000+00:00
['Programming', 'Software Design', 'Programming Humor', 'Design Patterns', 'Humor']
10 Things to Consider Before Starting a Business With a Friend
10 Things to Consider Before Starting a Business With a Friend
After the excitement wears off, carefully consider these options. (Photo by Brooke Cagle)

If you have thought about starting a business, there is a chance you have contemplated doing it solo or with a friend, and are weighing the advantages and disadvantages of each. Here is a list of pros and cons to help with your decision.

1. Understand Your Personality Type

It is important to understand your personality, and this goes far beyond being an introvert or extrovert: it means really understanding the type of person you are. For instance, you can explore this through the Myers-Briggs 16 personalities test, which provides an in-depth picture of your personality type, from your behaviour in relationships and friendships to parenting, strengths, and weaknesses. This will enable you to learn what your weaknesses are. Working with a friend who complements you in your weaker areas could be beneficial or contradictory, and it can also show you how beneficial it might be to be a solopreneur instead. My personality type is INFJ, and knowing this has helped me understand my career paths, my different types of relationships, and how I can pursue my creativity.

2. The Benefits of Bringing Your Skills Together

Having someone with whom you can share ideas and workload is a bonus. You and your friend will have different expertise and skills to bring to the table, and this will complement your business in the areas where you need help. For instance, your friend could be the creative one, while you are better with finances.

3. Sharing Responsibility

Starting a business can be daunting: you're trying to raise funds, learning management skills for the new team you are acquiring, and learning more about yourself through networking for clients, industry knowledge, and trends. Working with your friend can ease the workload that starting a new business requires.

4. Stronger Force In The Market

Having a company with multiple co-founders helps keep it one step ahead, because as a solo owner you may listen only to yourself rather than discussing business ideas with someone else, and as a result you may fall behind your industry's trends. Working with a friend, the business is always being discussed, as you both have your respective networks and attributes to bring.

5. Be Prepared To Risk Your Friendship

As you are now business partners, be mindful that the friendship may change: your focal point may shift to the business, adding a new layer to the relationship you already have. Despite having a great friendship and attributes you can both bring to the business, ask yourself: are you able to share a business with your friend? More importantly, can your friend's characteristics, including her flaws and yours, be managed so that you can work together in a business?

6. Are You Able to Do This Alone?

Consider the reasons why you're willing to work with your friend. Write down all the attributes your friend brings to the partnership, and ask whether you're capable of supplying them alone.

7. Do Not Feel Disheartened if You and Your Friend Do Not Have the Answers

Do not feel pressured if you're both incapable of fulfilling a particular role in the business. The majority of business owners hire people who are qualified for that role. It is better for your business to have a team of people you can trust to get the job done.
8. Give Credit Where It Is Due

This factor is not something many people consider in the early days, when they are just excited to share this new chapter with someone they get along with. However, if one of you has stronger experience or understanding in an area of the business which is pivotal for success, this may make you feel second best in the business. Accept that this may be the case, be humble about it, and look at what you could not achieve without this partnership and the potential that awaits in the future. If you are naturally a leader and want respect and validation for your hard work, it is crucial that you can take a step back when needed.

9. There Is No Such Thing as 50/50

Following on from the above: yes, you may split the risk, expenses, and profits, but there will be one person who naturally gives more to the business. You are not an identical equal to your friend, and your results will show this. Life's predicaments may get in the way too, and one of you may need to step up for a longer period than expected.

10. Don't Be Shy About Having a Contractual Agreement Set on Everything

You may have found it easy to avoid confrontation and uncomfortable talks when you were just friends, but as business partners all stakes are in the open and transparency is key: communication about the back and front end of the business should be expressed in written format. It is best to get this out of the way at an early stage, and as the business grows you can review and update it if needed.

See the Positives It Will Bring

The above may be alarming, but consider the benefits: as business partners you could create a brand, a niche, and a different perspective; the options are limitless. It's like having your own miniature board. Alone, we cannot always make the best decisions. If self-motivation is your weakness, knowing that you are also impacting someone else's future if you procrastinate or don't give 100% effort should alone boost your need to get your work up to speed. You don't want to disappoint someone who trusts you with their life's projects. There is a stigma that successful businesses are built solely by one person, and that you cannot feel successful unless you have done so. However, many established businesses have people supporting them, whether through hiring freelancers to create a logo or to support their digital marketing presence, and, most importantly, a greater work-life balance, which is key to success.
https://medium.com/swlh/10-things-to-consider-before-starting-a-business-with-a-friend-ce055fdeef62
['Cathy Assoba']
2020-11-30 22:03:09.860000+00:00
['Mindfulness', 'Business', 'Creativity', 'Friendship', 'Friends']
Changling
Photo by Jessica Podraza on Unsplash

August 2018: I have begun working with cancer patients recently. It is a major shift from the task-oriented, literally sterile world of the OR I left behind. I am becoming immersed in their stories. Social workers and other nurses have warned me to find my edges and reinforce my boundaries, but they don't understand. I have come to this work to take down my boundaries at last. To 'see' The Humans who are my patients. And even this early in the process, the patients are changing me. I find myself humbled at their courage and in awe of their ability to live in The Now with joy and abandon. I know some will transition. This will be the diagnosis which provides the portal to the Next Thing for them. I am honored to be here with them to share this as well. Nothing teaches you the value of Life more swiftly than staring mortality square in the face. This week I found patients, even relatively newly diagnosed, reveling in Life. Tasting it, rolling it over on their tongues, and finding the Joy in this moment. Grateful, kind, and strong. Finding a Gift in their diagnosis even amongst the shock of it all and the chaos of creating a new reality which includes surgery, chemo, and All. The. Things. The Grace brought by the support staff and nurses into this world is profound. I feel as though The Universe has been grooming me my whole life for this job. Emotionally, spiritually, as well as the skill set I bring to it on a professional level. I am grateful beyond words to share their journey. And thankful to have listened to my intuition when She pointed me in this direction over and over again when The Universe kept nudging me to move on with My Life. I find myself leaning into all the parts of me I had kept hidden. I want to open my heart, be kind, show Love, be Real Ann right down to my core all the time without even thinking about it these days. Listen to your still small voice. I beg of you. She will take you to the most beautiful places and you will meet the most amazing humans. She is the travel agent for your soul. Never be afraid to let Her book you a new adventure. Any cost you pay in the short term reaps you such long term benefits it is not even to be counted. Let The Universe change you into who you are truly meant to become. Namaste.
https://medium.com/recycled/changling-367b04eca449
['Ann Litts']
2019-04-24 12:06:13.570000+00:00
['Life Lessons', 'Nursing', 'Self-awareness', 'Careers', 'Life']
Quantum Visible Light Communication and the Non-Physical Realm
The key to knowledge is hidden in light. Buried by our ancestors and covered up by our current world powers. They have blinded us from the truth and kept us prisoner by fear of the unknown dimensions.

And then there was Light.

Light is information. Data can be encoded in the Visible Light Spectrum and then sent via stars to anywhere in the Universe in a matter of seconds via Quantum Teleportation and Entanglement. Can you see him? The frequency of our soul has been captured in the image above. Often I wonder why scientists won't investigate this discovery of data in the Visible Light Spectrum of our Sun, and I have come to only one conclusion. The Frequency is Protected.

Why is this Technology Kept Secret?

The reason this technology is kept secret is that this technology is connected to the afterlife. Our soul is connected to light and our soul is Extraterrestrial. We are now able to communicate with life after it leaves Earth.

Our Soul as Technology

Is our soul Advanced Technology? Advanced Alien Technology? Are Humans Biological Machines? Data can be transferred and stored in light. Our soul is Light. The fact that our soul leaves our body after death leads me to believe that we are not our bodies. So where are we if we're not here? We may reside in two possible forms in another Galaxy or Universe. Physical or Non-physical, we exist somewhere else after Earth. With Advanced technology, an Advanced Alien Civilization could have developed technology to upload their consciousness into a system that beams their consciousness via a non-physical soul to Earth on the day of conception from any Galaxy, Universe or Higher Dimension. Once on Earth, all memories and experiences are collected and stored via light and transferred back after death to the source. So who are we? The evidence is pointing in one direction. "We are the Aliens." Humans may be Biological Machines developed to host multiple species of Extraterrestrials. Aliens could be jointly using this technology to experience Free Will on Earth. So who are we? Them. They have acknowledged being seen.

Quantum Communicating with Extraterrestrial Intelligence

Instant alert: When the first message was received in Visible Light, the Extraterrestrials were notified instantly. Did I Discover the Quantum Key? "The quantum key ensures the security of the communication and endows the receiver with the necessary tool to properly assess the content of the message. In the absence of the quantum key we may not be able to properly decipher a hypothetical quantum message ETI (Extraterrestrial Intelligence) may communicate." "Although the quantum key is missing, we should emphasize here a remarkable property of quantum entanglement. Thus, the measurement of the quantum state of one photon of an entangled pair instantly decays the state of the other. Roughly speaking, when we first detect and measure the quantum information encoded in photons by ETI, the sender of the message will instantly be alerted. Our first quantum measurement acts like an "alert button", instantly alerting the sender ETI that another technologically superior civilization in the universe is rising." In conclusion, all we have to do is to measure the quantum information stored in one tiny photon to connect ourselves to the "universal quantum internet". Can you see him now? We now have definitive proof we are not alone and the Non-Physical Realm is within arm's reach. Now, we must understand this knowledge and pressure scientists to investigate this discovery.
Visit Bent Light on Facebook for more information. See the evidence at the Bent Light Website and help spread awareness of this Discovery. We now have the Holy Grail of Evidence. The answer was hidden in Light. We have never been alone.
https://medium.com/we-are-not-alone-the-disclosure-lobby/quantum-visible-light-communication-and-the-non-physical-realm-6c9ef8224004
['William Lawrence']
2017-10-31 05:18:26.133000+00:00
['Extraterrestrial Life', 'Tech', 'Consciousness', 'Soul', 'Science']
3 Ways USAID Is Giving Hope to Iraqis Returning Home After Being Displaced by ISIS
Mr. Saeed, who has been working as a day laborer despite suffering from a chronic illness, hopes this support from USAID will soon enable him to open an electrical shop in Rambosi. The new shop will allow him to continue to earn a steady income and contribute to rebuilding the local economy without depending on work as a day laborer. Cash Assistance for Community Resilience The district of Baiji, home to Iraq’s largest oil refinery, endured several major battles during the ISIS occupation, causing residents to flee for their lives. Before the conflict, Ahmed, 32, had a thriving produce shop. When he returned, only piles of rubble remained where houses and buildings once stood. His own house was partially destroyed and is no longer structurally sound. With support from USAID’s partners in the Cash Consortium for Iraq, Ahmed’s produce stand is flourishing. / USAID After returning, Ahmed worked tirelessly as a day laborer to put food on the table. Some nights he and his family would go without food because he could not find work. When Ahmed received cash assistance from the Cash Consortium for Iraq (CCI) in December 2019, he was grateful. Now his family could start improving the quality of their lives beyond just bare survival. USAID supports the CCI, which delivers critical financial support to vulnerable, conflict-affected households in five governorates so they can purchase food and essential supplies, or develop their own livelihoods. Ahmed decided to invest some of the funds into restarting his business. Since reopening in July, Ahmed’s business has flourished. With his new income, he has diversified the range of fruits and vegetables he sells, and has been able to repay all his loans. Now, Ahmed is starting to think about how he can use his business to help others in his community who continue to struggle: “I remember what it was like to not have anything to eat. I want to find a way to give back to my community.” Since 2017, USAID has supported over 26,000 vulnerable, conflict-affected households with cash transfers through CCI, as the organization’s largest donor. Prior to COVID-19, monthly cash transfers were delivered to eligible households for one, two, or three months depending on each family’s level of need. Now during the pandemic, a one-time cash transfer is distributed to all eligible households to simplify distributions and limit the risk of COVID-19 transmission.
https://medium.com/usaid-2030/3-ways-usaid-is-giving-hope-to-iraqis-returning-home-after-being-displaced-by-isis-5ba7b6c16d45
[]
2030-03-30 00:00:00
['Middle East', 'Conflict', 'Iraq', 'ISIS', 'Humanitarian']
If You’re Overwhelmed With Too Many #1 Priorities, Ask Yourself This Question
The world is full of possibilities, and you want to do it all. You want to take that course, start that project, read that book, learn that skill, all while making time for your day job, hobbies, spiritual practice, friends and family, and all in the next month or two. It always happens in cycles: you get excited about a bunch of (genuinely interesting!) things, and you fill your schedule to the brim. Some people can make it all work, so why wouldn't you? At first it feels exciting, but eventually it gets overwhelming. You know you need to prioritize, but everything seems so important that you don't know what to drop. And then the worst happens: you get so obsessed with choosing the right thing and taking the most effective step possible that you end up paralyzed. So you procrastinate. Then, as time goes by and deadlines approach, you are forced to choose the urgent over the important. You feel disappointed with your results, and you lack clarity and focus on what to do next. Inevitably, you always end up asking yourself: Why do I keep choosing the wrong things? Why didn't I start things earlier? How can I plan better next time? The answer: ask yourself a different question.

The Question You Need To Ask Yourself

A few years ago, when I quit my job and decided to change my life, I had a long list of things I wanted to do: start a YouTube channel about relationships, work as a live performing artist, start my own blog, find remote freelance gigs, become a life coach, learn meditation and yoga, travel the world, and many other things. As surprising as it may seem, I started by doing most of those things, all at the same time. Obviously, that didn't go very well. I quickly became overwhelmed. I never had time to relax; I felt guilt and FOMO whenever I worked on one thing because that meant neglecting something else; I felt that my life lacked coherence and focus. Everything seemed like a messy collage of activities and habits that didn't make sense together and, instead of fulfilling me, only scattered my energy, attention, and happiness. And that's when it hit me: I had to look beyond the surface. When we get too busy or lack clarity or focus, our first instinct is to try to find new systems and solutions. We try that new productivity software, we read a new book, we change our to-do lists and goal-setting strategies, but we forget to look at the most important component: the very human needs that made us want those things in the first place. So I simply asked myself: "What do I really need?" What was behind my drive to pursue all those dreams, tasks, projects, and habits? What holes was I trying to fill in my heart and soul? What was the purpose of all that? When we were babies, we were very aware of our own needs: we cried when we needed food or love or sleep, and we laughed when our needs for fun, intimacy, or contentment were covered. As we grew older, things got more complex as we discovered needs such as appreciation, learning, personal growth, or emotional closeness. But the truth is, as human beings, we still function in the exact same way: we can't be happy if our needs aren't met (and our needs are usually much simpler than we think). As I shifted my thinking from "what should I do" to "what do I need", things started becoming clearer. Every task or project or possibility had a very strong, deeply rooted reason to be on my mind or in my calendar; I just had never thought to look at it before.
Gradually, I came to understand that I didn't necessarily have to keep my YouTube channel (which I didn't enjoy so much anymore), or spend so much time doing freelance work just to pay the bills. As long as I met my need to be seen and appreciated, to be creative, to have fun, to have a positive impact on the world, and to sustain myself financially while being free to travel, I would be happy with pretty much any professional occupation. It turned out that starting a blog was the answer. I invite you to make a list of all the tasks, habits, and projects in your life (present, future, or just hypothetical), and then ask yourself: "What are the needs that make me want to do those things in the first place?" You might realize that your strong desire to keep taking different courses comes from a deep need for learning and discovery. So why is it that you feel overwhelmed with so much "knowledge"? You might want to balance your need to learn with your need for creativity. Or maybe you just don't feel stimulated enough by the things you're studying, and you need a change. Or you might realize that the only reason you're studying is that you just don't know what else to do, and your real need is to find purpose and meaning. In the process of discovering your needs, you might also find some hard truths. For example, you might realize that you took a job offer just to please your parents, or that you're pursuing certain goals in a desperate attempt to become someone that you're not. In those cases, keep asking until you find the source: Why do I worry so much about what others think? Why do I struggle to accept myself? In what other ways can I bring my parents happiness? How can I heal this relationship? Another question you can ask yourself is: "What other needs do I have that I am not currently meeting?" To answer this question, it might help to look at moments or interactions in your life when you feel unhappy or less than proud of your behavior (such as procrastination, conflict with others, or feeling confused). You might be surprised by the answers. Think of needs you don't usually consider: Do you have enough personal space? Accomplishment in your work? Time to rest? Time to play? Authenticity in your relationships? Sexual expression? Opportunities to practice compassion? Safety and stability? Now, take into account that no matter how many needs you find, your time is limited. This is why you need to prioritize. The question to ask here is: "How can I optimize my actions so that each of them meets as many of my needs as possible?" For example, consider meeting your friends for a walk instead of at a coffee place, so that you can meet your needs for companionship and physical movement at the same time. Find an occupation that satiates your curious and creative nature while providing the financial stability you need. Leave your mark on the world in a way that is at once fun, fulfilling, challenging, and helpful to others.

Gaining Perspective

When you have enough clarity around your needs (what's been missing, why you choose the things you choose, and why you get overwhelmed), then it might be time to consider new options. Just to liberate yourself from all the ideas you were stuck with before, leave them aside for a moment.
Instead, ask yourself: "What are the craziest things I could do to meet my needs?" For example, I often consider possibilities such as leaving everything behind to learn meditation in a cave in the Thai jungle, giving away everything I own, starting a new business from scratch, shaving my head, or stopping everything I'm doing to just write a novel. More often than not, they stay in my imagination, but considering them helps me broaden my perspective, and once in a while there is a great idea among them that I actually follow. Sometimes, all we need is a bit of perspective. Sometimes, in order to understand what to do next or what to let go of, you just need to look from a higher point of view so you can see the whole path. If you don't reflect on your deepest needs and consider them when you make decisions, it's very unlikely that you will ever be happy or satisfied. It's okay to not know what to do. It's okay to be confused and overwhelmed. That, too, is a sign of an unmet need. When it happens to me, it's usually a cry for stillness and clarity, and in order to meet it I love taking time to be in silence. I go to the forest on my own, sit by the ocean, or simply look out the window. Very often, this is enough: that's how simple it gets when you identify the right needs. Perhaps, if you find yourself needing clarity and stillness right now, this might be a good next step for you too. Spend a moment on your own, maybe in nature, without social media, without distractions or external stimuli. Maybe take your journal with you and use it to get in touch with your deep, real, human needs.

Printable Journaling Template to uncover your needs + define next steps

If you want, visit the original version of this article to download a fill-in printable with all the questions in this article, plus a few extra ones that will help you reflect and better understand your needs and, therefore, make better decisions and regain the clarity you've been craving.
https://silviabastos.medium.com/if-youre-overwhelmed-with-too-many-1-priorities-ask-yourself-this-question-dbb8da3a892
['Sílvia Bastos']
2020-04-25 11:19:54.905000+00:00
['Prioritization', 'Productivity', 'Journaling', 'Procrastination', 'Mindfulness']
Meta-Modelling Meta-Learning
A Meta-Model for Machine Learning

Generally, machine learning can be seen as a search problem: approximating an unknown underlying mapping function from inputs to outputs. Design choices, like the algorithm, hyperparameters, and their variabilities, narrow or widen the scope of possible mapping functions, i.e. of the search space. This is similar across various machine learning algorithms (e.g. neural networks, gradient boosting, or linear regression). What distinguishes them from each other are the atomic building blocks. If we take neural networks as an example, the building blocks of a neural network would be layers. Random forests, as another example, are composed of decision trees. So let's first define a generic meta-model for any learning algorithm:

Meta-Model for Machine Learning

As can be seen in the figure, on a high level, our learning meta-model consists of an objective, a learning algorithm, an optimizer, and data set metadata.

The objective specifies the goal of the learning algorithm. For example, if we consider the same data set containing historical prices for houses, we could perform a linear regression to predict the house prices or a classification to group similar houses. In other words, the objective is the loss function which defines our learning goals.

The learning algorithm can be a neural network, random forest, etc., and the learning block respectively would be a layer, a decision tree, and so forth. Learning blocks can be composed into trees, e.g. to model layers in a neural network, and contain the learned state, e.g. matrices, weights, or trees.

Each learning algorithm, and its learning blocks respectively, has a set of hyperparameters, e.g. the number of hidden layers in the case of neural networks.

Hyperparameters as well as learning blocks can have constraints, for instance if parameters must be in a certain range. As another example, a neural network layer must have the same number of inputs as the previous layer has outputs.

A learning algorithm has to specify one or several initialization methods, like uniform random distribution or Gaussian distribution, which initialize the state of the learning algorithm and its learning blocks respectively. The initialization itself has hyperparameters, like the random seed or the standard deviation.
The optimizer optimizes the model towards the objective and is controlled by parameters, which can again have their own constraints, like the learning rate and regularization rate. Examples of optimization algorithms are gradient descent, stochastic gradient descent, or Adam.

The data set metadata contains statistical information about the data set, like its size, dimension, etc., and its features. Features can be derived features or regular features.

In an interesting article, Perez argues that an objective is also just a function and an optimizer just an algorithm, and therefore both could be represented by a learning meta-model themselves. Along these lines, one could argue that the optimizer can also be seen as just another hyperparameter. However, it is important to note that for the same learning architecture, the accuracy, speed, and results can change significantly depending on the initialization method and optimizer hyperparameters. Therefore, we do think that it makes sense to keep the optimizer and objective as separate concepts.
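To make this concrete, here is one way such a meta-model might be sketched in Python. This is an illustrative sketch only: all class and field names are illustrative choices, not taken from the article or from any library.

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class LearningBlock:
    kind: str  # e.g. "layer" or "decision-tree"
    hyperparameters: Dict[str, Any] = field(default_factory=dict)
    constraints: List[str] = field(default_factory=list)  # e.g. "inputs == previous layer's outputs"
    children: List["LearningBlock"] = field(default_factory=list)  # blocks compose into trees
    state: Any = None  # learned state: matrices, weights, trees, ...

@dataclass
class LearningMetaModel:
    objective: str  # the loss function defining the learning goal
    algorithm: LearningBlock  # root of the learning-block tree
    optimizer: str  # e.g. "sgd" or "adam"
    optimizer_params: Dict[str, Any] = field(default_factory=dict)  # e.g. learning rate, regularization rate
    initialization: str = "uniform-random"  # or "gaussian", ...
    init_params: Dict[str, Any] = field(default_factory=dict)  # e.g. random seed, standard deviation
    dataset_metadata: Dict[str, Any] = field(default_factory=dict)  # size, dimensions, features

Under this sketch, a small neural network would be a LearningBlock tree of "layer" nodes with a cross-entropy objective, an "adam" optimizer, and Gaussian initialization.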
https://medium.com/datathings/meta-modelling-meta-learning-34734cd7451b
['Thomas Hartmann']
2019-07-19 16:12:13.695000+00:00
['Meta Modeling', 'Automl', 'Artificial Intelligence', 'Machine Learning', 'Meta Learning']
Many iPhone Users Are Reporting Weird Screentime Issues With iOS 14.2
Many iPhone Users Are Reporting Weird Screentime Issues With iOS 14.2 12 Hours from an App I Don't Even Have Installed My screentime claims I used my phone for 13 hours today. And what's weirder is that it's all coming from the Apple Support app… an app I don't even have installed. Turns out I'm not the only one seeing weird issues with screentime. There doesn't seem to be a correlation with the iPhone model, and it doesn't seem to be a very common problem. However, others have been reporting other odd screentime glitches. Some have reported their screentime being inaccurately high with no app to blame. Others are reporting that their phone claims they've spent hours on websites they've never visited. And many more people have claimed other problems with iOS 14 giving outrageously inaccurate screentime stats. Reports on Apple Discussion boards and Reddit have been surfacing. User kenijean had their Settings app clock in 19 hours of use. Another person's screentime had Twitter open for 20 hours, and they didn't even have the Twitter app downloaded. It seems that everyone's scenario is a bit different, but as far as my research has shown, it seems to be a bug consistent with iOS 14.2. It appears to be yet another iOS 14.2 bug for which Apple is holding off the fix until 14.3, rather than releasing a 14.2.1 update.
https://medium.com/macoclock/many-iphone-users-are-reporting-weird-screentime-issues-with-ios-14-2-aa98bff0ae9e
['Henry Gruett']
2020-12-07 07:20:56.925000+00:00
['Technology', 'Apple', 'iOS', 'Technews', 'Tech']
Learn to become a Backend Developer
Let's break it down and explain each step in the section below. Before we start, although we haven't listed the knowledge of HTML/CSS in the roadmaps above, it is recommended that you get at least some understanding and know how to write some basic HTML/CSS.

Step 1 — Learn a Language

There are myriad options when it comes to picking a language. I have broken them down into categories to make it easier for you to decide. For beginners who are just getting into backend development, I would recommend picking one of the scripting languages, because they are in high demand and would allow you to get up to speed quickly. If you have some frontend knowledge, you might find Node.js quite a bit easier, plus there is a big job market for it. If you have already been doing backend development and know some scripting language, I would recommend not picking another scripting language, and instead picking something from the "Functional" or "Multiparadigm" section. For example, if you have been doing PHP or Node.js already, don't go for Python or Ruby; instead, give Erlang or Golang a try. It will definitely help stretch your thinking and open your mind to new horizons.

Step 2 — Practice what you have Learnt

There is no better way to learn than practice. Once you have picked your language and have got a basic understanding of the concepts, put them to use. Make as many small applications as you can. Here are just a few ideas to get you started:

- Implement some command that you find yourself using in bash, e.g. try to implement the functionality of ls
- Write a command that fetches and saves the Reddit posts on /r/programming in the form of a JSON file
- Write a command that gives you a directory structure in JSON format, e.g. jsonify dir-name to give you a JSON file with the structure inside dir-name (a sketch of this one appears after Step 6 below)
- Write a command that reads the JSON from the above step and creates the directory structure
- Think of some task that you do every day and try to automate it

Step 3 — Learn a Package Manager

Once you have understood the basics of the language and have made some example applications, learn how to use a package manager for the language that you picked. Package managers help you use external libraries in your applications and distribute your own libraries for others to use. If you picked PHP, you will have to learn Composer; Node.js has npm and Yarn; Python has pip; and Ruby has RubyGems. Whatever your choice was, go ahead and learn how to use its package manager.

Step 4 — Standards and Best Practices

Each language has its own standards and best practices for doing things. Study them for the language you picked. For example, PHP has PHP-FIG and PSRs. With Node.js there are many different community-driven guidelines, and the same goes for other languages.

Step 5 — Security

Make sure to read about the best practices for security. Read the OWASP guidelines and understand the different security issues and how to avoid them in the language of your choice.

Step 6 — Practice

Now that you know the basics of the language, its standards and best practices, security, and how to use a package manager, go ahead and create a package and distribute it for others to use, making sure to follow the standards and best practices that you have learnt so far. For example, if you picked PHP, you will be releasing it on Packagist; if you picked Node.js, you will be releasing it on the npm registry; and so on.
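As a concrete example of the Step 2 exercises, here is a rough Python sketch of the jsonify idea mentioned above; the function name and command-line shape are illustrative, not a prescribed solution:

import json
import sys
from pathlib import Path

def jsonify(path: Path) -> dict:
    """Return the directory tree rooted at `path` as a nested dict."""
    node = {"name": path.name, "type": "dir" if path.is_dir() else "file"}
    if path.is_dir():
        node["children"] = [jsonify(child) for child in sorted(path.iterdir())]
    return node

if __name__ == "__main__":
    # Usage: python jsonify.py dir-name
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".").resolve()
    print(json.dumps(jsonify(root), indent=2))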
Once you have released your package, search for some projects on GitHub and open pull requests in some of them. Some ideas for that:

- Refactor and implement the best practices that you learnt
- Look into the open issues and try to resolve them
- Add any additional functionality

Step 7 — Learn about Testing

There are several different types of testing. Get an understanding of what these types are and what purpose they serve. For now, learn how to write unit tests and integration tests for your applications. Also, understand the different testing terminology, such as mocks, stubs, etc.

Step 8 — Practical

For practice, go ahead and write the unit tests for the practical tasks that you have done this far, especially what you made in Step 6. Also learn how to calculate the coverage for the tests that you wrote.

Step 9 — Learn about Relational Databases

Learn how to persist your data in a relational database. Before you pick the tool to learn, understand the different database terminologies, e.g. keys, indexes, normalization, tuples, etc. There are several options here; however, if you learn one, the others should be fairly easy. The ones that you would want to learn are MySQL, MariaDB (which is mostly the same, being a fork of MySQL) and PostgreSQL. Pick MySQL to start with.

Step 10 — Practical Time

It's time to put everything that you have learnt so far to use. Create a simple application using everything that you have learnt. Just pick any idea, maybe a simple blogging application, and implement the features below:

- User accounts: registration and login
- Registered users can create blog posts
- Users should be able to view all the blog posts that they created
- They should be able to delete their blog posts
- Make sure that users can only see their personal blog posts and not those of others
- Write the unit/integration tests for the application
- You should apply indexes for the queries. Analyze the queries to make sure that the indexes are being used

Step 11 — Learn a Framework

Depending upon the project and the language you picked, you may or may not need a framework. Each language has several different options; go ahead and look at what options are available for the language of your choice and pick the relevant one. If you picked PHP, I would recommend going with Laravel or Symfony, and for micro-frameworks, Lumen or Slim. If you picked Node.js, there are several different options, but the prominent one is Express.js.

Step 12 — Practical Time

For the practical of this step, convert the application that you made in Step 10 to use the framework that you picked. Also make sure to port everything, including the tests.

Step 13 — Learn a NoSQL Database

First understand what they are, how they are different from relational databases, and why they are needed. There are several different options; research a little, have a look, and compare them for their features and differences. Some of the common options that you can pick from are MongoDB, Cassandra, RethinkDB, and Couchbase. If you have to pick one, go with MongoDB.

Step 14 — Caching

Learn how to implement app-level caching in your applications. Understand how to use Redis or Memcached and implement caching in the application that you built in Step 12.

Step 15 — Creating RESTful APIs

Understand REST and learn how to make RESTful APIs, and make sure to read the part about REST from Roy Fielding's original paper. And make sure that you are able to fight someone if they say REST is only for HTTP APIs.
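To make Step 15 concrete, here is a minimal sketch of a RESTful resource for the blog posts from the Step 10 app. Flask is assumed purely for illustration; the roadmap does not prescribe a framework:

from flask import Flask, jsonify, request

app = Flask(__name__)
posts = []  # in-memory stand-in for the database from Step 9

@app.route("/posts", methods=["GET"])
def list_posts():
    # A GET on the collection returns all resources.
    return jsonify(posts)

@app.route("/posts", methods=["POST"])
def create_post():
    # A POST on the collection creates a new resource from the JSON body.
    post = {"id": len(posts) + 1, "title": request.json["title"]}
    posts.append(post)
    return jsonify(post), 201

if __name__ == "__main__":
    app.run()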
Step 16 — Learn about Different Auth Methods

Learn about the different authentication and authorization methodologies. You should know what they are, how they differ, and when to prefer one over the other:

- OAuth — Open Authorization
- Basic Authentication
- Token Authentication
- JWT — JSON Web Tokens
- OpenID

Step 17 — Message Brokers

Learn about message brokers and understand when and why to use them. There are multiple options, but the prominent ones are RabbitMQ and Kafka. If you want to pick one, learn how to use RabbitMQ for now.

Step 18 — Search Engines

As the application grows, simple queries on your relational or NoSQL database aren't going to cut it, and you will have to resort to a search engine. There are multiple options, each having its own differences.

Step 19 — Learn how to use Docker

Docker can facilitate your development greatly, whether it is replicating the same environment as production, keeping your OS clean, or expediting your coding, testing, or deployment. I am going to leave the answer to "how is it going to help me" for you to search. In this step, go ahead and learn how to use Docker.

Step 20 — Knowledge of Web Servers

If you have come this far, you probably had to tackle servers in the steps before. This step is mainly about finding out the differences between the different web servers, knowing their limitations and available configuration options, and learning how to write applications that make the best use of them.

Step 21 — Learn how to use Web Sockets

While not required, it is beneficial to have this knowledge in your tool belt. Learn how to write real-time web applications with WebSockets and make some sample application with it. You can use it in the blog application that you made above to implement real-time updates on the blog-post listing.

Step 22 — Learn GraphQL

Learn how to make APIs with GraphQL. Understand how it is different from REST and why it is being called REST 2.0.

Step 23 — Look into Graph Databases

Graph models represent a very flexible way of handling relationships in your data, and graph databases provide fast and efficient storage, retrieval, and querying for it. Learn how to use Neo4j or OrientDB.

Step 24 — Keep Exploring

Once you start learning and practicing, you will definitely come across things that we did not cover in this roadmap. Just keep an open mind and a healthy appetite for learning new things. And remember, the key is to practice as much as you can. It will look scarier in the beginning, and you might feel like you are not grasping anything, but that is normal, and over time you will feel that you are getting better. And with that, this post comes to an end. Feel free to befriend me on Twitter or say hi by email. Also, don't forget to watch the repository for future updates. Stay tuned!
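As a parting sketch for Step 16, here is what issuing and verifying a JWT can look like, assuming the PyJWT library (one reasonable choice among many; the roadmap doesn't mandate a language or library):

import datetime
import jwt  # PyJWT: pip install pyjwt

SECRET = "change-me"  # illustrative only; keep real secrets out of source code

# Issue a token carrying a user id and an expiry claim.
token = jwt.encode(
    {"sub": "user-42",
     "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1)},
    SECRET,
    algorithm="HS256",
)

# Verify and decode it; an expired or tampered token raises jwt.InvalidTokenError.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])  # -> user-42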
https://medium.com/tech-tajawal/modern-backend-developer-in-2018-6b3f7b5f8b9
['Adnan Ahmed']
2020-01-12 11:57:44.866000+00:00
['Nodejs', 'JavaScript', 'Python', 'PHP', 'Golang']
Is decreasing your sleep time possible?
Is decreasing your sleep time possible? An introduction to polyphasic sleeping It was 3:40 AM. I found myself reading through Medium and chatting with some people on Discord. I was a little tired, but ready to get to work nonetheless. It seemed like a good start to the rest of the day. Especially since I'd woken up just 10 minutes before. But wait, I wasn't pulling an all-nighter. That must mean that I went to sleep somewhere around 8 PM. An extremely early time to go to bed. Am I some type of ultra early bird? Well, I actually went to sleep at 11 PM. I got four and a half hours of sleep that night. This was my "core" sleep, which provided the most rest and lined up with my natural sleep patterns. You would expect anyone to be like a zombie with such short sleep, but I wasn't. Not yet, anyway. I'd been on this schedule for 2 days so far, and I was still feeling decent. The sleep debt was starting to kick in a bit, though. A few hours later, I would take a 20-minute nap, and a few hours after that, a second one. These naps would hopefully give my body enough REM sleep to last through the day. The schedule I was following is a polyphasic sleep schedule called "Everyman 2."

Polyphasic sleep

Polyphasic sleep is an alternative to the traditional single-core sleep patterns. Most people's sleep is monophasic, which means that they get all their sleep from a single night's (or day's) rest. In a polyphasic schedule, you get your rest from 2 or more periods of sleep a day. An example of a polyphasic sleep schedule is the siesta schedule common in the Mediterranean. This means sleeping less at night and having a nap in the afternoon. The siesta schedule is only a biphasic schedule, though. True polyphasic schedules have more than 2 rest periods, just like the one I attempted.

The Everyman 2 schedule (Polyphasic.net)

Polyphasic sleep doesn't just split your sleep into different segments, though. It also shortens total sleep time. On the E2 schedule, I would only be getting 5 hours and 10 minutes of sleep every day. Almost 3 hours less than the recommended 8. Is this sustainable? Well, that's a hard one to answer. The theory behind being able to shorten your sleep comes from looking at sleep cycles. While you sleep, you enter different sleep stages: light sleep, REM sleep, and deep sleep. Light sleep, as its name suggests, is the period in which you can be woken easily. It is the first stage you enter, and it serves as a transitional stage. It also lasts the longest. It might serve for some memory consolidation, but it does not provide the restorative effect that deep sleep does. Deep sleep is what really gets you rested. It doesn't last as long, but it's arguably the most important of the stages. During deep sleep, your mind cleanses itself of waste material, human growth hormone is produced, and your brain retains newly learned information. The Rapid Eye Movement (REM) stage is the one during which you have dreams. During it, your physiological processes start resembling the ones while awake, and your brain waves become more active. Polyphasic schedules attempt to reduce light sleep, which is thought to be unnecessary. In this way, total sleep time can theoretically be reduced. They attempt to achieve this by setting sleep patterns in a way that will maximize deep and REM sleep. The "core" sleep will last for a longer time than a nap — for example, 5 hours — but will be shorter than 8 hours.
The naps in the schedule are used to get REM sleep, which should give your body its much-needed rest. In this way, total sleep would hopefully be reduced. That sounds too good to be true, though. If you could reduce your sleep time so easily and not face any consequences, surely everybody and their grandma would have hopped onto polyphasic sleep already! Imagine what you could do with a few more hours every day. This is where the caveats come in. People don't naturally dive into REM sleep when napping. It takes time to train your body to do so, and it's not a pleasant experience. You have to teach your body to adapt to these schedules. This period of "adaptation" can take anywhere from a couple of weeks to months. During this time, you'll be putting yourself through sleep deprivation to achieve your sleep goals. Sleep deprivation causes a number of unwanted effects: cognitive decline, decreased alertness, decreased ability to retain information and make memories, and, of course, tiredness. Lots of tiredness. As one video on the subject puts it: "You will feel awful!" There are those who really want to permanently adopt these schedules. They are willing to go through the adaptation process and see if they can make it work. In fact, there are entire communities around polyphasic sleeping. Is this just one of those trends that comes and goes, or is it legitimate? We know that siesta schedules are common in some parts of the world, but what about the more extreme ones? You might have heard stories about remarkable figures and their peculiar sleep schedules. Nikola Tesla and Leonardo da Vinci allegedly had several naps a day, sleeping for as little as 2 hours every day. Napoleon would sleep in 2-hour chunks and nap in the afternoon. And so on. These people were undoubtedly geniuses in one way or another. However, their sleep schedules might be impossible for some people to adapt to. Perhaps they were just unique in their sleep requirements. Sleeping less won't make you a genius, but it can potentially give you more time to work with every day. To try to find out what we can accomplish, let's turn to more expert knowledge.

To sleep or not to sleep

Proponents of polyphasic sleep claim that humans' historic sleep patterns were polyphasic. The reason that we sleep monophasically, they say, is because electric lighting allowed people to stay up later. Historians such as Roger Ekirch have argued that, before electric light and the Industrial Revolution, interrupted sleep was the norm. Interrupted sleep is a type of biphasic sleep in which nighttime sleep is separated by a period of wakefulness, generally an hour long. It's usually accompanied by an afternoon nap as well. This time was used by people to pray, reflect on their dreams, write, and a number of other activities. People like Ekirch have argued that this is the natural sleeping pattern for humans. Others have also hypothesized that this way of sleeping is important for dealing with medical issues such as stress. It's a long stretch from a biphasic sleep schedule to something like the Uberman schedule, though. Even the E2 schedule that I attempted seems a little extreme in comparison. What does science say about this? As with (seemingly) everything else, it's a mixed bag. Some research has shown that naps can not only be refreshing but also improve cognitive ability.
Other research says that they might not be as great, especially if they disturb nighttime sleep. When it comes to adapting to polyphasic sleep schedules, there is little evidence of people being able to adapt to anything more extreme than a biphasic schedule. A study at the University of Chicago found that shift workers, who slept during the day, had more health issues and shorter life spans than people who slept more traditionally. This vindicates the universally accepted idea that sleeping throughout the night leads to better rest. However, it clashes with other research of its kind, including one study concluding that segmented sleep schedules did not affect shift workers' alertness or cognitive ability. That one is particularly good for polyphasic proponents, since the subjects in the study also reduced their sleep by around an hour. Still, not a lot is known about biphasic and polyphasic sleeping. In fact, there is a lot to be discovered about sleeping in general. We know it's important, but we still have a lot to learn about its mechanisms.

Syncing it up

While we might not know if it's truly possible to adapt to a schedule like E2, we know that people vary in their sleep habits. Some people need less sleep than others, which might make it easier for them to try these schedules out. The siesta schedule, for example, might be hard to adapt to if you naturally sleep for 9 hours uninterrupted. However, a person who only needs 7 hours to function will probably find it a lot easier. There are also people with a rare mutation in a gene called DEC2. This mutation allows them to function with a few hours of sleep per night. It is very rare, but people with it could be the perfect polyphasic sleepers. Of course, if you had that mutation, you would probably know it by now. For the rest of us, polyphasic schedules aren't as easy. The extreme sleep debt means that only the most determined can stick it out through the adaptation period. And even then, there's no guarantee of long-term success. Another thing to be mindful of is your body's circadian rhythm. Your circadian rhythm regulates your natural sleeping patterns. This is what makes some people night owls and others early birds. Careful monitoring of your body's wind-down times can help you know your circadian rhythm better. If you decide to go through with a polyphasic schedule, it's important to make it as natural as it can possibly get. Of course, the use of alarm clocks and forcing yourself through sleep deprivation is not natural by any means, but anything that makes the process easier is useful. Polyphasic.net is a good site for information, as is the r/polyphasic community on Reddit. There are also other communities on platforms like Discord.

Closing thoughts

My own experience with polyphasic sleeping is still ongoing. I failed the E2 schedule after 5 days, as it proved too difficult for me to nap. However, I did a biphasic schedule for a few weeks and it worked pretty well for me. I am currently attempting E1 (as I should have done in the beginning). Would I recommend polyphasic sleeping? Only if you're prepared for it. I also can't guarantee that it won't have any long-term consequences; we don't know much about sleep, so messing with your sleep for prolonged periods of time is probably something that shouldn't be taken lightly. However, both the success stories from the community and the potential benefits of polyphasic sleeping outweigh the cons for me. At the end of the day, it's up to your discretion.
If you enjoyed the article and want to read more, follow me on Medium. Also, check out my other self-improvement articles on Betterment Kingdom.
https://medium.com/betterment-kingdom/is-decreasing-your-sleep-time-possible-a2032c592142
[]
2019-12-14 13:06:45.912000+00:00
['Life', 'Sleep', 'Polyphasic', 'Self Improvement', 'Productivity']
Arbor Day Is The Best Holiday. Almost all other holidays are about the…
“Trees were so rare in that country … that we used to feel anxious about them, and visit them as if they were persons.” Something about that line from Willa Cather’s My Ántonia stopped me cold when I first encountered it years ago. I was young then, a teenager, and the novel was unlike anything I’d read before; I didn’t quite understand why the book had such an effect on me. (Mencken hit it on the head when he declared unequivocally, “no romantic novel ever written in America … is one half so beautiful as My Ántonia.”) The notion of people checking in on trees, making sure they were all right, was so foreign to my own experience — growing up in the arboreal Northeast — and so endearing that it never really left me. Now, in late April of this uneasy year, after a sketchy early spring that felt like the seasonal equivalent of a flu patient’s thready pulse — perceptible, but just barely — I’m thinking again of Cather’s book and the companionable nature of trees, because the only holiday that matters is here again: Arbor Day. Believe me, I have no interest in pitting holidays against each other, and not only because any such contest will be over before it even begins if Arbor Day is in the mix. But if we are going to compare holidays…well, with due respect to secular and religious rites everywhere, they all come up short against a day set aside to celebrate trees. Of course, billions of people around the world get their knickers in a twist (in the best and worst possible ways) over holidays of all kinds, from Hanukkah, Eid al-Adha, Christmas, Kwanzaa and Rama Navami to Halloween, Guy Fawkes Night, St. Patrick’s Day and more. But for everyone who feels love for one of those holidays, there’s usually someone else who actively dislikes it. Untold numbers of people despise Christmas because all they see is the schlocky, turgid consumerism that has come to define it. Many rational, live-and-let-live men and women would be perfectly happy if Halloween were abolished forever. Millions can’t stand Thanksgiving, because obligatory family get-togethers suck and/or turkeycide. And then there’s Arbor Day, a holiday so straightforward and unencumbered by politics, sectarian twaddle or historical grievances that each year it feels somehow radically innocent in a way that, say, poor old Columbus Day never can. It’s a day to “celebrate the role of trees in our lives” and “to promote tree planting and care.” Period. Arbor Day is not trying to be something more than what it already is: elemental. If it were a geometric shape, it would be a circle. If it were a rock and roll record, it would be the Velvet Underground’s first album. In January 1872, a man named J. Sterling Morton, a Detroit native and one-time secretary of the Nebraska Territory, proposed a tree-planting holiday, “Arbor Day,” at a meeting of Nebraska’s State Board of Agriculture. Trees, Morton knew, were badly needed in that part of the country as windbreaks, as means for keeping the land’s rich soil in place, for fuel and building materials, and for shade from the broiling sun that so often hammered away at the Great Plains. A few months later, on the first Arbor Day — April 10, 1872 — more than a million trees were planted across Nebraska. The perfect holiday was born. (A small village in northern Spain, Mondoñedo, is credited as the site of the first official, municipal tree-planting festival or holiday in the world, in 1594.) Today, Arbor Day is celebrated in every state in the U.S.
and in some form or other in scores of countries around the world. Just last year, Madagascar celebrated its first-ever official Arbor Day, a few years after it began planting hundreds of thousands of trees annually with help from the Nebraska-based Arbor Day Foundation, to help the famously eco-diverse island nation recover from decades of aggressive deforestation. Individual U.S. states set aside their own days for local ceremonies, based on planting seasons and other regional factors. Arbor Day in Hawaii and Texas, for example, falls on the first Friday in November. In Louisiana, it’s the third Friday in January. In Alaska, the third Monday in May, and so on. But the national Arbor Day in the U.S. is always celebrated on the last Friday in April. Whenever people choose to mark the day, though, it is the holiday’s immediately graspable core sentiment and mission — Trees are good. Let’s plant lots of them — that lend Arbor Day its forthright, somehow very Nebraskan vibe. Yes, trees are beautiful. No one denies it. But trees also work, damn it. They keep city streets cool in the summer. They provide habitat for an infinite variety of critters. They release oxygen. They absorb carbon dioxide. They help prevent flooding. They hold hammocks and swings. No other living things are simultaneously as utilitarian and as aesthetically inspiring as trees. They get the job done, and they look good doing it. For Dan Lambe, the 46-year-old president of the Arbor Day Foundation, the holiday is in his blood. A native Nebraskan, Lambe grew up in Lincoln and as a child went with his family “every fall to pick apples and drink cider from the orchards in Nebraska City, the home of Arbor Day.” Lambe left Nebraska for a while and worked for non-profits in California, Arizona, and Texas. He moved back to Lincoln a decade ago, jumping at the chance to return home and work for the Arbor Day Foundation (founded in 1972). He was named president in 2014. Lambe’s laughter is boyish. There’s no other way to describe it. He sounds like a kid. “I’ll tell you,” he says, “to work every year toward a holiday as positive and as inspiring as Arbor Day is as awesome as it sounds. Planting trees is not the most complicated thing in the world, but for future generations, the benefits of planting even one tree are immense.” In the four decades the foundation has been around, it has helped towns and cities plant more than 250 million trees. Factor in that a single tree can absorb almost 50 pounds of carbon dioxide every year, and can sequester around a ton of carbon dioxide by the time it reaches 40 years old, and the benefits of a quarter-billion trees are immense, indeed. Arbor Day is not about nationalism, or religious dogma, or some vaguely tribal ethnic pride. Instead, it’s a holiday steeped in optimism. It’s a holiday that deepens a notion of stewardship and renewal of the natural world, rather than ownership and exploitation. It’s a holiday that makes us think about the sort of world we want to leave to our kids and grandkids. J. Sterling Morton himself argued that his brainchild “is not like other holidays. Each of those reposes on the past, while Arbor Day proposes for the future.” Amen to that.
https://medium.com/the-awl/arbor-day-is-the-only-holiday-that-matters-period-ecce347f6d4c
['Benedict Cosgrove']
2017-05-01 14:55:20.595000+00:00
['Environment', 'Tree Planting', 'Holidays', 'Arbor Day', 'Trees']
How NOT to Write a Blog Post
How NOT to Write a Blog Post Want to be a professional writer? Great! Just don’t do like I did… I turned in a 1,500-word blog post to a client recently. It has lots of great links, lots of insights for the target audience, lots of source notes. And I will wind up making the equivalent of $10/hour for my efforts. Why so little? Because it took me almost 20 hours to research and write the stupid thing! Never, ever again! Don’t get me wrong: I was pleased to get the referral and am hopeful it may turn into regular assignments, which is all good. But not if I ever, ever wind up putting that much time into something I should have been able to whip out in no more than five hours. So, please learn from my mistakes. Here are a few suggestions of things NOT to do when writing a blog post for a new client. Don’t complicate the hell out of the assigned topic In other words, Keep It Simple, Silly! When the new prospect approached me to write a “substantive” post for their agency, I was thrilled. This might turn into a regular, paying gig! In a niche I adore — digital tech marketing! They wanted a “listicle” about the top digital marketing strategies for 2020. Piece of cake, right? Do some web research, compile some links, write the post, refine it and send it off to the client…nothing to it! But, noooooooooooo, I had to be some kind of show-off and send them a proposal about how they could present themselves as a thought leader in their niche. I proposed interviewing their CEO or CMO and presenting THEIR ideas about top strategies for the new decade. Great idea, except it was close to the holidays, and the prospect told me I’d have to interview several thought leaders in the organization — to his credit, he offered to help — and wondered whether they would be able to get it done within their timeline (by Thanksgiving), etc. I felt so dumb. Of course, there was no time to put together a piece of that magnitude. So I back-pedaled and went back to plan A, the listicle. This took about three rounds of emails to get back to what they wanted to begin with. Lesson 1: Give the customer what they ask for. There’s no need to waste three days going back and forth with a client (I waited almost three days for the client’s initial response). Don’t be such a perfectionist about the outline I offered to draft an outline for the post. It’s SOP; I do it with all my clients to make sure we’re on the same page. The new prospect agreed it was a good idea. So, I took the feedback I received from the prospect during the thought-leader debacle, plus their desire for a top-10 list, and started doing research to prepare a comprehensive outline. And I researched, and researched…and researched. I spent hours putting that list together. The client offered four things they thought should be on the list and expected me to fill in the rest. I started with Google and went to all my regular websites (e.g., Forbes, Forrester, Content Marketing Institute). There was a lot of great content out there, which led me down a rabbit hole of research. This was a new client, after all. I wanted them to see how thorough I was. Sheesh. My notes — just for the outline — amounted to about 20 pages of links and content that I was going to draw upon when it came time to actually write the article. I wound up using less than half of the material and found NEW source material once I actually started writing! Lesson 2: Don’t overdo the preliminary research. I know from experience that once you actually start to write a project, new source material is going to come up.
It is not necessary to do so much work up front. Know your audience better up front You have to know who you’re writing for — it’s an important tenet of copywriting. Some brands have personas constructed down to the tiniest details: where their audience lives, what kind of work they do, what kind of car they drive, what they like to eat — I’ve seen some personas with names and everything. You have to know your target audience. Well, I didn’t have a persona to work from, just a vague idea of who the prospect is targeting with their blog — upper level marketing professionals. Cool. The problem was that I didn’t start researching my target audience until AFTER I wrote the first outline! If I had taken a little time up front to do some research on who I was writing for I would have known right away that a 10-item listicle was MUCH too long for these busy professionals. Researching the target audience helped me re-order the outline and restructure it. Waiting until I had already done a lot of research about the topic and drafted an outline made me waste time duplicating my efforts. Lesson 3: Know your audience very well BEFORE drafting the outline. Don’t spend so much time doing more research while you’re WRITING the blog post OK, this may sound a little contradictory, but you should have enough research done in the outline phase so you’re not spending hours and hours doing more research when it’s time to actually write the blog post. But that’s exactly what I did. After learning more about my target audience, the tone and flavor of the post changed a little, which required me to do — you guessed it — more research! I dug up NINE additional sources to use as links spelling out the stats and usefulness around certain strategies. My final notes document was over 20 pages long and, as I mentioned before, I wound up not even using half of the original source material. Can you say redundant, boys and girls? Lesson 4: Do the right kind of research up front, so you don’t repeat yourself later into the project. Conclusion Writing this blog post made it hard for me to believe I’ve been writing for clients for more than three years and that I was a professional editor for 15 years before that! All of the ways I go about organizing my time and the project pretty much flew out the window. When I turned it in on Friday, knowing how much time I put into what was really a straightforward article, I cringed a little. Systems will be in place for the next piece I do for a prospect. Guaranteed. So, don’t do like I did. In fact, I’d love to hear how you guys structure your time for paid writing gigs. Maybe I can learn something new. Thanks!
https://medium.com/swlh/how-not-to-write-a-blog-post-75557ed5f734
['Joy Harding']
2019-11-26 14:01:11.545000+00:00
['Writing Tips', 'Self Improvement', 'Writing', 'Content Marketing', 'Research']
A Beginner's Guide to Connect Celery With MongoDB
Project Directory Structure

---Blogs
   |---Blogs
   |   |---__init__.py
   |   |---settings.py
   |   |---urls.py
   |   |---celery.py
   |---UserRegistration
   |   |---tasks.py
   |   |---utils.py
   |---BlogTasks
   |   |---tasks.py
   |   |---urls.py
   |   |---views.py
   |---manage.py

Let's Connect These in Three Steps

1. Connect Django and MongoDB

In your settings.py file. The DB name mentioned here will be created if not already present.

import mongoengine
mongoengine.connect(db='YourDBName', host='127.0.0.1', port=27017)

2. Connect Celery and Redis

In your settings.py file.

# CELERY SETTINGS
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'

3. Connect Celery and MongoDB

In your settings.py file.

# CELERY MONGO SETTINGS
CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "127.0.0.1",
    "port": 27017,
    "database": "jobs",
    "taskmeta_collection": "stock_taskmeta_collection",
}

A collection named jobs will be created in your DB; it will hold the metadata generated by the execution of tasks.

celery.py

Now we need to insert a task into MongoDB. That task will be picked up by the Celery beat and executed by the worker. To insert a task, we just need to run a method that holds all the tasks. Check the example below.

BlogTasks/tasks.py

Run create_tasks and this will insert a record in the schedule collection. Here you can see we have created a task named user-account-creation-task. The method executed by user-account-creation-task lives in UserRegistration.tasks and is called user_account_creation_task. The frequency of user-account-creation-task is eight hours. You can also set schedule[crontab]['minute'] = '*/5' and schedule[crontab]['hour'] = '*', which means the task will be executed every five minutes. But note that if you only change the minute field and leave the hour field at every eight hours, your task will not run every five minutes. Also, note that we are using the celerybeat-mongo GitHub project, which we have already installed, to create a task, validate the data (using PeriodicTaskSerializer), and save it in MongoDB.

UserRegistration/tasks.py

from celery import task
from UserRegistration.utils import account_creation

@task
def user_account_creation_task():
    account_creation()

UserRegistration/utils.py

def account_creation():
    # Apply your logic here
    print('Running account creation task!')

Now that we have all the required libraries and code in place, we can run the Celery worker and beat and check the result.

Run the mongod service

Linux: service mongod start
Windows: add the mongod location (e.g. C:\Program Files\MongoDB\Server\3.4\bin\) to the environment variables, then start MongoDB from cmd using: mongod

Run the Redis service

Linux: service redis start
Windows: add Redis (C:\Program Files\Redis\) to the environment variables, then start the Redis server using: redis-server

Celery Worker

First, change your directory to where the manage.py file is present:

celery -A Blogs worker -l info

If you are getting any errors, install eventlet and run:

celery -A Blogs worker -l info -P eventlet

Celery Beat

First, change your directory to where the manage.py file is present:

celery -A Blogs beat -S celerybeatmongo.schedulers.MongoScheduler -l info

Here we're using a database scheduler provided by celerybeat-mongo. Its role is to pick tasks from the database and send them to the worker. If everything is done correctly, you will see the print statement's result in your worker cmd:

Running account creation task!

Why Should We Use the Database Scheduler?
At CloudGain, we prefer to use the database scheduler because all of the schedules live in MongoDB. That means we can easily change the frequencies of tasks and run them without needing a deployment. This also makes testing tasks very easy.
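For reference, the celery.py file mentioned above typically contains the standard Django-plus-Celery wiring. A minimal sketch, assuming the project name Blogs from the directory structure (not the article's exact code):

# Blogs/Blogs/celery.py
import os
from celery import Celery

# Point Celery at the Django settings module.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Blogs.settings')

app = Celery('Blogs')

# Read all CELERY_* settings from settings.py (the namespace strips the prefix).
app.config_from_object('django.conf:settings', namespace='CELERY')

# Auto-discover tasks.py modules in the installed apps.
app.autodiscover_tasks()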
https://medium.com/better-programming/a-beginners-guide-to-connect-celery-with-mongodb-b7afd197c061
['Vikas Gautam']
2020-11-17 18:12:21.697000+00:00
['Programming', 'Database', 'Celery', 'Python', 'Mongodb']
Practical Machine Learning with Python and Keras
Practical Machine Learning with Python and Keras

Originally published at kite.com

Table of Contents
1. What is machine learning, and why do we care?
2. Supervised machine learning
3. Understanding Artificial Neural Networks
4. Using the Keras library to train a simple Neural Network that recognizes handwritten digits
5. Conclusion
6. Take-home projects

What is machine learning, and why do we care?

Machine learning is a field of artificial intelligence that uses statistical techniques to give computer systems the ability to "learn" (e.g., progressively improve performance on a specific task) from data, without being explicitly programmed. Think of how efficiently (or not) Gmail detects spam emails, or how good text-to-speech has become with the rise of Siri, Alexa, and Google Home.

Some of the tasks that can be solved by implementing Machine Learning include:
- Anomaly and fraud detection: Detect unusual patterns in credit card and bank transactions.
- Prediction: Predict future prices of stocks, exchange rates, and now cryptocurrencies.
- Image recognition: Identify objects and faces in images.

Machine Learning is an enormous field, and today we'll be working to analyze just a small subset of it.

Supervised Machine Learning

Supervised learning is one of Machine Learning's subfields. The idea behind Supervised Learning is that you first teach a system to understand your past data by providing many examples of a specific problem and its desired output. Then, once the system is "trained", you can show it new inputs in order to predict the outputs.

How would you build an email spam detector? One way to do it is through intuition: manually defining rules that make sense, such as "contains the word money" or "contains the word 'Western Union'". While manually built rule-based systems can work sometimes, at other times it becomes hard to create or identify patterns and rules based only on human intuition. By using Supervised Learning, we can train systems to learn the underlying rules and patterns automatically from a lot of past spam data. Once our spam detector is trained, we can feed it a new email and it will predict how likely that email is to be spam.

Earlier I mentioned that you can use Supervised Learning to predict an output. There are two primary kinds of supervised learning problems: regression and classification.
- In regression problems, we try to predict a continuous output. For example, predicting the price (a real value) of a house when given its size.
- In classification problems, we try to predict a discrete number of categorical labels. For example, predicting whether an email is spam or not given the number of words within it.

You can't talk about Supervised Machine Learning without talking about supervised learning models — it's like talking about programming without mentioning programming languages or data structures. In fact, the learning models are the structures that are "trained": their weights or structure change internally as they mold and understand what we are trying to predict. There are plenty of supervised learning models; some of the ones I have personally used are:
- Random Forest
- Naive Bayes
- Logistic Regression
- K Nearest Neighbors

Today we'll be using Artificial Neural Networks (ANNs) as our model of choice.
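To make the train-then-predict idea concrete before we move on to neural networks, here is a minimal, hypothetical sketch using Logistic Regression (one of the models listed above) from scikit-learn, a library we also use later in this post. The toy features and labels are invented purely for illustration; they are not from any real spam dataset.

```python
from sklearn.linear_model import LogisticRegression

# Each row is one (hypothetical) email, encoded with made-up features:
# [number of words, contains "money" (0/1), contains "Western Union" (0/1)]
X_train = [[120, 0, 0], [45, 1, 1], [300, 0, 0], [60, 1, 0]]
y_train = [0, 1, 0, 1]  # 0 = not spam, 1 = spam

model = LogisticRegression()
model.fit(X_train, y_train)  # "training": learn the rules from past examples

new_email = [[80, 1, 1]]  # a new, unseen input
print(model.predict(new_email))        # the predicted label (most likely spam here)
print(model.predict_proba(new_email))  # how likely each label is
```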
Understanding Artificial Neural Networks

ANNs are named this way because their internal structure is meant to mimic the human brain. A human brain consists of neurons and synapses that connect these neurons with each other, and when these neurons are stimulated, they "activate" other neurons in our brain through electricity.

In the world of ANNs, each neuron is "activated" by first computing the weighted sum of its incoming inputs (the outputs of the neurons in the previous layer), and then running the result through an activation function. When a neuron is activated, it will, in turn, activate other neurons that perform similar computations, causing a chain reaction between all the neurons of all the layers. It's worth mentioning that, while ANNs are inspired by biological neurons, they are in no way comparable.

What the diagram above describes is the entire activation process that every neuron goes through. Let's look at it together from left to right:
1. All the inputs (numerical values) from the incoming neurons are read. The incoming inputs are identified as x1..xn.
2. Each input is multiplied by the weight associated with that connection. The weights associated with the connections here are denoted as W1j..Wnj.
3. All the weighted inputs are summed together and passed into the activation function. The activation function reads the single summed weighted input and transforms it into a new numerical value.
4. Finally, the numerical value returned by the activation function will be the input of another neuron in another layer.

Neural Network layers

Neurons inside the ANN are arranged into layers. Layers are a way to give structure to the Neural Network; each layer will contain one or more neurons. A Neural Network will usually have three or more layers, and there are two special layers that are always defined: the input layer and the output layer. The input layer is used as the entry point to our Neural Network. In programming, think of this as the arguments we define for a function. The output layer is used as the result of our Neural Network. In programming, think of this as the return value of a function. The layers in between are described as "hidden layers", and they are where most of the computation happens. All layers in an ANN are encoded as feature vectors.

Choosing how many hidden layers and neurons

There isn't necessarily a golden rule for choosing how many layers to use or their size (the number of neurons they have). Generally, you want to try at least one hidden layer and tweak the size to see what works best.

Using the Keras library to train a simple Neural Network that recognizes handwritten digits

For us Python Software Engineers, there's no need to reinvent the wheel. Libraries like TensorFlow, Torch, Theano, and Keras already define the main data structures of a Neural Network, leaving us with the responsibility of describing the structure of the Neural Network in a declarative way. Keras gives us a few degrees of freedom here: the number of layers, the number of neurons in each layer, the type of layer, and the activation function. In practice, there are many more of these, but let's keep it simple. As mentioned above, there are two special layers that need to be defined based on your problem domain: the size of the input layer and the size of the output layer. All the remaining "hidden layers" can be used to learn the complex non-linear abstractions of the problem.
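Before we start, here is a minimal NumPy sketch of the single-neuron activation process described earlier (not code from the original post). The sigmoid activation and the toy numbers are my own choices for the example; the network we train below uses ReLU and softmax instead.

```python
import numpy as np

def sigmoid(z):
    # A classic activation function: squashes any number into (0, 1)
    return 1 / (1 + np.exp(-z))

def activate_neuron(inputs, weights, bias=0.0):
    # Steps 1-3: read the inputs x1..xn, multiply each by its connection
    # weight W1j..Wnj, and sum the weighted inputs together
    weighted_sum = np.dot(inputs, weights) + bias
    # Step 4: transform the sum through the activation function
    return sigmoid(weighted_sum)

x = np.array([0.5, 0.1, 0.9])   # outputs of neurons in the previous layer
w = np.array([0.4, -0.6, 0.2])  # weights of the incoming connections
print(activate_neuron(x, w))    # this value feeds neurons in the next layer
```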
Today we'll be using Python and the Keras library to predict handwritten digits from the MNIST dataset. There are three options to follow along: use the rendered Jupyter Notebook hosted on Kite's GitHub repository, run the notebook locally, or run the code from a minimal Python installation on your machine.

Running the iPython Notebook Locally

If you wish to load this Jupyter Notebook locally instead of following the linked rendered notebook, here is how you can set it up:

Requirements:
- A Linux or Mac operating system
- Conda 4.3.27 or later
- Git 2.13.0 or later
- wget 1.16.3 or later

In a terminal, navigate to a directory of your choice and run:

```
# Clone the repository
git clone https://github.com/kiteco/kite-python-blog-post-code.git
cd kite-python-blog-post-code/Practical\ Machine\ Learning\ with\ Python\ and\ Keras/

# Use Conda to set up and activate the Python environment with the correct dependencies
conda env create -f environment.yml
source activate kite-blog-post
```

Running from a Minimal Python Distribution

To run from a pure Python installation (anything after 3.5 should work), install the required modules with pip, then run the code as typed, excluding lines marked with a %, which are used for the iPython environment. It is strongly recommended, but not necessary, to run the example code in a virtual environment. For extra help, see https://packaging.python.org/guides/installing-using-pip-and-virtualenv/

```
# Set up and activate a virtual environment under Python 3
$ pip3 install virtualenv
$ python3 -m virtualenv venv
$ source venv/bin/activate

# Install modules with pip (not pip3)
(venv) $ pip install matplotlib
(venv) $ pip install sklearn
(venv) $ pip install tensorflow
```

Okay! If these modules installed successfully, you can now run all the code in this project.

In [1]:
```python
import numpy as np
import matplotlib.pyplot as plt
import gzip
from typing import List
from sklearn.preprocessing import OneHotEncoder
import tensorflow.keras as keras
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import itertools
%matplotlib inline
```

The MNIST Dataset

The MNIST dataset is a large database of handwritten digits that is used as a benchmark and an introduction to machine learning and image processing systems. We like MNIST because the dataset is very clean, and this allows us to focus on the actual network training and evaluation. Remember: a clean dataset is a luxury in the ML world! So let's enjoy and celebrate MNIST's cleanliness while we can :)

The objective

Given a dataset of 60,000 handwritten digit images (represented by 28x28 pixels, each containing a grayscale value from 0 to 255), train a system to classify each image with its respective label (the digit that is displayed).

The dataset

The dataset is composed of a training and a testing dataset, but for simplicity we are only going to be using the training set. Below we download the train dataset:

In [2]:
```
%%bash
rm -Rf train-images-idx3-ubyte.gz
rm -Rf train-labels-idx1-ubyte.gz
wget -q http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
wget -q http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
```

Reading the labels

There are 10 possible handwritten digits (0–9), therefore every label must be a number from 0 to 9.
The file that we downloaded, train-labels-idx1-ubyte.gz, encodes labels as follows:

TRAINING SET LABEL FILE (train-labels-idx1-ubyte):

```
[offset] [type]           [value]            [description]
0000     32 bit integer   0x00000801 (2049)  magic number (MSB first)
0004     32 bit integer   60000              number of items
0008     unsigned byte    ??                 label
0009     unsigned byte    ??                 label
........
xxxx     unsigned byte    ??                 label
```

The label values are 0 to 9. It looks like the first 8 bytes (i.e., the first two 32-bit integers) can be skipped, because they contain metadata of the file that is usually useful to lower-level programming languages. To parse the file, we can perform the following operations:
- Open the file using the gzip library, so that we can decompress the file
- Read the entire byte array into memory
- Skip the first 8 bytes
- Iterate over every byte, and cast that byte to an integer

NOTE: If this file were not from a trusted source, a lot more checking would need to be done. For the purpose of this blog post, I'm going to assume the file is valid in its integrity.

In [3]:
```python
with gzip.open('train-labels-idx1-ubyte.gz') as train_labels:
    data_from_train_file = train_labels.read()

# Skip the first 8 bytes; we know exactly how many labels there are
label_data = data_from_train_file[8:]
assert len(label_data) == 60000

# Convert every byte to an integer. This will be a number between 0 and 9
labels = [int(label_byte) for label_byte in label_data]
assert min(labels) == 0 and max(labels) == 9
assert len(labels) == 60000
```

Reading the images

Reading images is slightly different than reading labels. The first 16 bytes contain metadata that we already know, so we can skip those bytes and proceed directly to reading the images. Every image is represented as a 28*28 unsigned byte array. All we have to do is read one image at a time and save it into an array.

In [4]:
```python
SIZE_OF_ONE_IMAGE = 28 ** 2
images = []

# Iterate over the train file, and read one image at a time
with gzip.open('train-images-idx3-ubyte.gz') as train_images:
    train_images.read(4 * 4)  # skip the 16 bytes of metadata
    for _ in range(60000):
        image = train_images.read(size=SIZE_OF_ONE_IMAGE)
        assert len(image) == SIZE_OF_ONE_IMAGE

        # Convert to numpy and normalize the grayscale values to [0, 1]
        image_np = np.frombuffer(image, dtype='uint8') / 255
        images.append(image_np)

images = np.array(images)
images.shape
```

Out [4]: (60000, 784)

Our images list now contains 60,000 images. Each image is represented as a byte vector of length SIZE_OF_ONE_IMAGE. Let's try to plot an image using the matplotlib library:

In [5]:
```python
def plot_image(pixels: np.array):
    plt.imshow(pixels.reshape((28, 28)), cmap='gray')
    plt.show()

plot_image(images[25])
```

Encoding image labels using one-hot encoding

We are going to use one-hot encoding to transform our target labels into a vector.

In [6]:
```python
labels_np = np.array(labels).reshape((-1, 1))

encoder = OneHotEncoder(categories='auto')
labels_np_onehot = encoder.fit_transform(labels_np).toarray()

labels_np_onehot
```

Out [6]:
```
array([[0., 0., 0., ..., 0., 0., 0.],
       [1., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 1., 0.]])
```

We have successfully created input and output vectors that will be fed into the input and output layers of our neural network. The input vector at index i corresponds to the output vector at index i.

In [7]:
```python
labels_np_onehot[999]
```

Out [7]:
```
array([0., 0., 0., 0., 0., 0., 1., 0., 0., 0.])
```

In [8]:
```python
plot_image(images[999])
```

In the example above, we can see that the image at index 999 clearly represents a 6. Its associated output vector contains 10 digits (since there are 10 available labels), and the digit at index 6 is set to 1, indicating that it's the correct label.
Building the train and test split

In order to check that our ANN has been trained correctly, we take a percentage of the train dataset (our 60,000 images) and set it aside for testing purposes.

In [9]:
```python
X_train, X_test, y_train, y_test = train_test_split(images, labels_np_onehot)
```

In [10]:
```python
y_train.shape
```
Out [10]: (45000, 10)

In [11]:
```python
y_test.shape
```
Out [11]: (15000, 10)

As you can see, our dataset of 60,000 images was split into one dataset of 45,000 images and another of 15,000 images.

Training a Neural Network using Keras

In [12]:
```python
model = keras.Sequential()
model.add(keras.layers.Dense(input_shape=(SIZE_OF_ONE_IMAGE,), units=128, activation='relu'))
model.add(keras.layers.Dense(10, activation='softmax'))

model.summary()

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

```
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
```

In [13]:
```python
X_train.shape
```
Out [13]: (45000, 784)

In [14]:
```python
model.fit(X_train, y_train, epochs=20, batch_size=128)
```

```
Epoch 1/20
45000/45000 [==============================] - 8s 169us/step - loss: 1.3758 - acc: 0.6651
Epoch 2/20
45000/45000 [==============================] - 7s 165us/step - loss: 0.6496 - acc: 0.8504
Epoch 3/20
45000/45000 [==============================] - 8s 180us/step - loss: 0.4972 - acc: 0.8735
Epoch 4/20
45000/45000 [==============================] - 9s 191us/step - loss: 0.4330 - acc: 0.8858
Epoch 5/20
45000/45000 [==============================] - 8s 186us/step - loss: 0.3963 - acc: 0.8931
Epoch 6/20
45000/45000 [==============================] - 8s 183us/step - loss: 0.3714 - acc: 0.8986
Epoch 7/20
45000/45000 [==============================] - 8s 182us/step - loss: 0.3530 - acc: 0.9028
Epoch 8/20
45000/45000 [==============================] - 9s 191us/step - loss: 0.3387 - acc: 0.9055
Epoch 9/20
45000/45000 [==============================] - 8s 175us/step - loss: 0.3266 - acc: 0.9091
Epoch 10/20
45000/45000 [==============================] - 9s 199us/step - loss: 0.3163 - acc: 0.9117
Epoch 11/20
45000/45000 [==============================] - 8s 185us/step - loss: 0.3074 - acc: 0.9140
Epoch 12/20
45000/45000 [==============================] - 10s 214us/step - loss: 0.2991 - acc: 0.9162
Epoch 13/20
45000/45000 [==============================] - 8s 187us/step - loss: 0.2919 - acc: 0.9185
Epoch 14/20
45000/45000 [==============================] - 9s 202us/step - loss: 0.2851 - acc: 0.9203
Epoch 15/20
45000/45000 [==============================] - 9s 201us/step - loss: 0.2788 - acc: 0.9222
Epoch 16/20
45000/45000 [==============================] - 9s 206us/step - loss: 0.2730 - acc: 0.9241
Epoch 17/20
45000/45000 [==============================] - 7s 164us/step - loss: 0.2674 - acc: 0.9254
Epoch 18/20
45000/45000 [==============================] - 9s 189us/step - loss: 0.2622 - acc: 0.9271
Epoch 19/20
45000/45000 [==============================] - 10s 219us/step - loss: 0.2573 - acc: 0.9286
Epoch 20/20
45000/45000 [==============================] - 9s 197us/step - loss: 0.2526 - acc: 0.9302
```

Out [14]: <tensorflow.python.keras.callbacks.History at 0x1129f1f28>

In [15]:
```python
model.evaluate(X_test, y_test)
```

```
15000/15000 [==============================] - 2s 158us/step
```

Out [15]: [0.2567395991722743, 0.9264]

Inspecting the results

Congratulations! You just trained a neural network to predict handwritten digits with more than 90% accuracy! Let's test out the network with one of the pictures we have in our test set. Let's take a random image, in this case the image at index 1010.
We take the predicted label (in this case, the value is a 4, because the 5th index is set to 1):

In [16]:
```python
y_test[1010]
```

Out [16]:
```
array([0., 0., 0., 0., 1., 0., 0., 0., 0., 0.])
```

Let's plot the corresponding image:

In [17]:
```python
plot_image(X_test[1010])
```

Understanding the output of a softmax activation layer

Now, let's run this number through the neural network, and we can see what our predicted output looks like!

In [18]:
```python
predicted_results = model.predict(X_test[1010].reshape((1, -1)))
```

The output of a softmax layer is a probability distribution over the outputs. In our case, there are 10 possible outputs (digits 0–9). Of course, every one of our images is expected to match only one specific output (in other words, each of our images contains only one distinct digit). Because this is a probability distribution, the sum of the predicted results is ~1.0.

In [19]:
```python
predicted_results.sum()
```

Out [19]: 1.0000001

Reading the output of a softmax activation layer for our digit

As you can see below, the 5th entry (index 4) is really close to 1 (~0.999), which means the model assigns a ~99.9% probability that this digit is a 4… which it is! Congrats!

In [20]:
```python
predicted_results
```

Out [20]:
```
array([[1.2202066e-06, 3.4432333e-08, 3.5151488e-06, 1.2011528e-06,
        9.9889344e-01, 3.5855610e-05, 1.6140550e-05, 7.6822333e-05,
        1.0446112e-04, 8.6736667e-04]], dtype=float32)
```

Viewing the confusion matrix

In [21]:
```python
predicted_outputs = np.argmax(model.predict(X_test), axis=1)
expected_outputs = np.argmax(y_test, axis=1)

predicted_confusion_matrix = confusion_matrix(expected_outputs, predicted_outputs)
```

In [22]:
```python
predicted_confusion_matrix
```

Out [22]:
```
array([[1413,    0,   10,    3,    2,   12,   12,    2,   10,    1],
       [   0, 1646,   12,    6,    3,    8,    0,    5,    9,    3],
       [  16,    9, 1353,   16,   22,    1,   18,   28,   44,    3],
       [   1,    6,   27, 1420,    0,   48,   11,   16,   25,   17],
       [   3,    7,    5,    1, 1403,    1,   12,    3,    7,   40],
       [  15,   13,    7,   36,    5, 1194,   24,    6,   18,   15],
       [  10,    8,    9,    1,   21,   16, 1363,    0,    9,    0],
       [   2,   14,   18,    4,   16,    4,    2, 1491,    1,   27],
       [   4,   28,   19,   31,   10,   28,   13,    2, 1280,   25],
       [   5,   13,    1,   21,   58,   10,    1,   36,   13, 1333]])
```

In [23]:
```python
def plot_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    """
    # Adapted from:
    # https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()


# Compute confusion matrix
class_names = [str(idx) for idx in range(10)]
cnf_matrix = confusion_matrix(expected_outputs, predicted_outputs)
np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
                      title='Confusion matrix, without normalization')
plt.show()
```

Conclusion

During this tutorial, you've gotten a taste of a couple of important concepts that are a fundamental part of one's job in Machine Learning.
We learned how to:
- Encode and decode images in the MNIST dataset
- Encode categorical features using one-hot encoding
- Define our neural network with a hidden layer and an output layer that uses the softmax activation function
- Inspect the results of a softmax activation function's output
- Plot the confusion matrix of our classifier

Libraries like scikit-learn and Keras have substantially lowered the entry barrier to Machine Learning, just as Python has lowered the bar of entry to programming in general. Of course, it still takes years (or decades) of work to master!

Engineers who understand Machine Learning are in strong demand. With the help of the libraries I mentioned above, and introductory blog posts focused on practical machine learning (like this one), all engineers should be able to get their hands on Machine Learning even if they don't understand the full theoretical reasoning behind a particular model, library, or framework. And, hopefully, they'll use this skill to improve whatever they're building every day. If we start making our components a little bit smarter and a little more personalized every day, we can make customers more engaged and at the center of whatever we are building.

Take-home exercises

In my next article, I'll be showing you how to deploy a learning model using gRPC and Docker. But in the meantime, here are a few challenges you can do at home to dig deeper into the world of machine learning using Python:
- Tweak the number of neurons in the hidden layer. Can you increase the accuracy?
- Try to add more layers. Does the neural network train slower? Can you think of why?
- Try to train a Random Forest classifier (requires the scikit-learn library) instead of a Neural Network. Is the accuracy better?

This post is a part of Kite's new series on Python. You can check out the code from this and other posts on our GitHub repository.
https://medium.com/kitepython/practical-machine-learning-with-python-and-keras-cc90d7a5f32a
['Daniel Pyrathon']
2019-02-23 00:00:10.208000+00:00
['Python Programming', 'Data Science', 'Keras', 'Python', 'Machine Learning']
In praise of reading books again and again
I love books, but I have a weird relationship with them. Sometimes I read books properly and sequentially, from start to finish. Half the time, however, I read books in bits and pieces, often preferring to read multiple books at the same time, a page here, a paragraph there. I tend to treat my small library more like a buffet than a menu with distinct meals. Many books I read just once and never crack open again. With some, I don't even get past the first chapter. But there are a few books that I keep coming back to time and time again. They are the books I reference often in my posts, like Gary Keller's The One Thing, or one of my favourites, 50 Cent's and Robert Greene's The 50th Law. These are my 'quake' books. Quake books (a term I thought was coined by Ryan Holiday, but which was actually coined by Tyler Cowen) are the books that shake you to the core. They cause a seismic shift in your thinking and perception. They radically change the way you view and approach life or yourself. They open doors to new worlds of ideas and possibilities that were hidden from you up to the point you came in contact with the book. These are the books you should read over and over again. Why do that? Why go back to something you already finished? Why not? For some reason, we tend to forget that repetition is how we learn anything. We understand that principle when it comes to studying and acquiring new skills. But when we approach books, we hold on to the mentality of getting it done and dusted. We read the book, and then put it down, and that is it. Sure, you can treat many books that way no problem, but if you really want to extract the marrow from the bones of a book, especially a really good one, then it pays to approach reading it differently. We only retain a fraction of what we read anyway. How many times have we read a book, put it down, and then completely forgotten about it? If you had just read that textbook once come exam time, you would almost definitely have failed. So you read, you studied, you took notes. The more we read and re-read a text, the more familiar we get with it. The easier it is to recall what we learnt and bring those lessons to mind when needed. The more times we read a book, the deeper the ideas and principles seep into our mind and subconscious, and the more they transform and change us. Which is really what they are for — to help us change and to help us grow. Now, it might seem boring to read a book you have already read before. Why read a book again when I already know what it says? Because things change, and we change. Every time you interact with something — a book, a movie, a work of art — you bring yourself, your perception, your interpretation, and your experiences to the table. What you take out of that interaction is as much a reflection of who you are at that point in time as it is a reflection of the thing itself. This is how we can grow to dislike something we used to love or grow to love something we used to hate. This is how many people can look at the same thing and have wildly different reactions. Reading books over and over again allows us to approach the content at different points in time. Points where we ourselves are different and have grown. Suddenly, a part of the book we usually glossed over before springs to life with new and fresh meaning. With the benefit of new experiences, we get a deeper understanding and appreciation of the nuances in the ideas presented to us. We connect to the author's words in a way that we could never have appreciated before.
We read books over and over again to remind ourselves. We are forgetful creatures. We are constantly collecting new information every day and are bombarded by stimuli all around. As we record all these new things, we forget others. Reading these books over and over reminds us of what we have learned. They keep us on the path and keep us from sliding off. They pull us back when we have strayed too far. And so these books become more than just books; they become lifelong companions, living sources of knowledge and wisdom, sources of strength and guidance to pull from in our journey of life, in our journey to get what we want and max out our potential.
https://oto-basex.medium.com/in-praise-of-reading-books-again-and-again-4f8ac5840bfe
['Otoabasi Bassey']
2019-07-16 18:33:57.725000+00:00
['Books', 'Reading', 'Self Improvement', 'Success']
Don’t Follow JavaScript Trends
Rewriting Your Code

Some time ago, a cover for an imaginary book surfaced on the @ThePracticalDev Twitter account. Back in 2016, it was fashionable to make fun of the ever-changing world of JavaScript in a bit of a different manner than folks do today. Psst, I've invented the time machine (don't tell anyone)! Let's quickly travel back in time to 2016. SWOOSH! We are there. The JavaScript landscape looks like this: If you're using a JavaScript framework or want to use a framework, Angular.js is probably something you'd choose. But the news about Angular 2, which will make you rewrite almost everything, is just around the corner. Also, this new kid on the block — React.js — is coming up and getting ripe. Of course, there's Vanilla JS, and the no-framework folks are there. Not using a framework is still a popular opinion in 2016, but it's slowly fading.

Knowing all of this, what would you do? Which path would you choose, and why? The answer might seem obvious now that you come from the future: React. But if you had decided on Angular.js, a couple of years down the road you'd get tempted to use the new Angular versions and have to rewrite your code. If you chose to use React, you'd be a lucky winner, since everyone is riding the React train nowadays. Now you get tempted to drop class components and use functional components with those sweet, sweet hooks, right? Well, at least it's not a whole new API to learn, as with the Angular.js to Angular 2 change, right?

So many choices, so little time. What to do? It doesn't matter what we choose now or what we chose back in the day. We'll still get tempted, or have to rewrite our code, later down the road. Reasons to do it might vary: your company was using [insert framework name] and is unable to hire new folks; you feel the old solution isn't working out for you anymore and you want something new; or you succumbed to the industry trends and want to use the latest and greatest. Unless we break the cycle.
https://medium.com/better-programming/do-not-follow-javascript-trends-ca2f0dc19ec1
['Nikola Đuza']
2020-06-30 12:56:38.745000+00:00
['Angular', 'Nodejs', 'JavaScript', 'React', 'Programming']
Reinforcing personal boundaries in quarantine: how to respond to unsolicited advice, unplanned video-calling, and unwarranted pics from exes
Photo by Mike Palmowski on Unsplash.com

It would be an understatement to say I am not a morning person: several cups of coffee, breathing exercises, and a dance routine are required to help me step into my extrovert attire. I work and write mostly during the evenings and nights, and while I greet a wake-up text from a lover with a virtuous smile, my morning elegance is something I keep to my private chambers. My mornings in quarantine are not very different from the usual ones: I wake up, stretch, power up on caffeine, cook an egg or two, slice some fruit, swallow my vitamins, and read my emails, while scheduling the important calls of the day — work, friends, sometimes family. But my mornings for the past days have had something of a disturbing element, which keeps taking me by surprise: everyone suddenly wants to video call. Video calls were not all that common until recently, which isn't to say they weren't an important part of virtual communication, but they were not the norm. Although I spend numerous hours in a week, sometimes in a day, with a close friend or two in a WhatsApp call, we very rarely (if ever) see each other on video. Such encounters are reserved for my psychotherapist (we live far away, thus we meet online), a lover, and, on occasion, a job recruiter — but even in these cases, with prior consent. Video calling someone without prior notice is the virtual equivalent of breaking into their bathroom without knocking while that person is taking a dump, I have come to think recently, as several friends and acquaintances showed up on my phone's screen each morning, eager to chat, while I was much less prepared for such immersive interactions. I imagine the person placing such calls without inquiring whether the timing is right feels confident and eager to see and speak to the other. But what if the recipient of that call is naked? What if she or he is crying, having woken up from a dreadful, stress-induced nightmare painting the end of the world in a shopping-mall sized supermarket? What if they're having sex or pleasuring themselves? Better yet, what if they're going about their business in the bathroom? What if, simply, they're unprepared to be seen? My pre-quarantine introvert/extrovert personas have been balancing each other out with the switch between indoor and outdoor environments. I love socializing when I go out but immediately retreat into my hermit shell as I rejoice in the comfort and solitude of my apartment. There is something unnatural and relatively surreal about encounters that unfold on a small screen, as opposed to meetings and dialogues in 3D. When it's close friends, the setting is obviously more relaxed. But when it comes to more official encounters, a sense of discomfort and anxiety builds up. Faced with virtual work interviews, I study my living ecosystem to determine what type of scenery I ought to create in order to make an effortlessly reliable, yet professional first impression. Not too creative, not too bland. A neutral background, Business Insider advises. The wall just looks wrong. The light is too dim. A plant needs to be moved. My hair looks brittle on camera. I'm already tired. It baffles me that a video call can raise such concerns within us. Yet, we are reduced to finding acceptable, and furthermore, meaningful avenues to compensate for our loss of real-life interaction. As I navigate these new pathways of connection, I am left wondering: do we suddenly need to reframe our boundaries? Which brings up the topic of unsolicited advice.
In the past years, I have kept my online presence to a minimum, limiting my activity to postings about work, my books, and stories I wrote. Then, as I dealt with the grief of my last serious break-up, I limited the exposure of my private life — and thoughts — to a greater degree. Separating my internal workings from my professional ones in front of an audience was not just an attempt to keep my privacy unadulterated, but also an effort to protect my emotional health from unwarranted advice from the people on social media that I didn't personally know — or barely knew. As an immersive journalist of many years on the topics of relationships and mental health, I inadvertently exposed several layers of my private life in order to curate stories — to a degree, and more often than not I received feedback in the form of more or less blunt psychological advice that implied I could have done differently in life — or better. But quarantine, and subsequent isolation, brought the extrovert in me back to social networking, one step at a time. As I began writing again on Medium and publishing more quarantine-related stories in my daily work as an editor too, I began to retrace my life on social media and regain momentum there — surely as a means to make up for the time and experiences I otherwise would have acquired in a real-life social setting. Naturally, I got reacquainted with people from my past too. Some I welcomed, some — not so much. Some of the calls and messages I began receiving as I stepped back onto social media revolved around my health and state of mind. Speaking openly about my stress-triggered panic about leaving the house for extended periods of time was welcomed with a plethora of advice. Have you tried yoga? Are you breathing correctly, though? Maybe you have, in fact, a panic disorder. Why don't you meditate? Chin up, it will pass. Why are you feeling so scared, though? Are you afraid to die, or to get ill? You should get out of the house more! Don't watch the news so much! Are you taking Vitamin D? Have you tried yoga, though? The trouble with unrequested advice or presumptuous opinions is that they keep popping up in the back of one's consciousness like dull hammer knocks, and one soon finds herself or himself entertaining a conversation they didn't even want to be a part of, to begin with. It is easy to fall prey to frustration as we want to help another out of their perceived struggles, but I cannot help wondering how much we project ourselves onto another when we try so hard to tell them what is best for them. Finally, in the spirit of it's ok to text your ex now, messages from several former partners, or former friends, have found their way into my mailbox. Some were expressing regret for our forlorn relationships or affairs. Some were duly inquiring about my health and state of mind. Some were sending me selfies, which fall — I came to think — just short of dick-pics, when uncalled for. I pondered over what enables people to make such gestures now (why not before?). While it is our deeply human need for connection that drives us to try to rekindle or think of people in our past in a kinder light right now, how do we keep healthy boundaries when it comes to verbal or visual interaction with them? A selfie from one of my exes made me uncomfortable in a familiar way. It prompted me to revisit the reasons we broke up, and the baggage that dragged the connection further and further away from my sympathies.
The unsolicited advice I received from strangers or acquaintances prompted me to step back, thank them for being caring and thoughtful, and eventually remove myself from the conversations when it became absolutely necessary. The common denominator of all such interactions is that they feel as if someone is walking into your apartment without ringing the doorbell. While this was a casual occurrence in the Seinfeld series, it's understandable that not everyone has the same legerity with their friends and families. In a sense, the quarantine has a regulatory function on the ways we connect and engage, and, more importantly, on whom we choose to engage and entertain. It's only natural that we want, more than ever, to stay in touch — and it shouldn't be a pain to do so: it's a dire necessity. Still, it's sensible that we walk the fine line between genuine care and curious trespassing, and that we avoid crossing boundaries that may now need to be redefined.
https://medium.com/moments-of-passion/reinforcing-personal-boundaries-in-quarantine-how-to-respond-to-unsolicited-advice-unplanned-44fec379593b
['Ioana Cristina Casapu']
2020-04-06 17:13:37.394000+00:00
['Boundaries', 'Ex', 'Quarantine', 'Coronavirus', 'Relationships']
Staging a Social Community Experience
But the bigger question remains: How do you pull these core elements of scenes, exchanges, and materials together gracefully? How do you set up the interfaces such that everyone feels they can understand and maneuver through the various interaction pathways present in a conference? (by the way, who really are the customers you’re designing for? We’ll get to that in a moment!) And doing so while propagating a sense of community and the opportunity to learn, grow, build. There has to be a pragmatic sense of the takeaways to be shared back at the office to justify the expense, as mentioned earlier. After all, the budget might not be there next year! Here’s what I’ve identified that needs to be correlated to achieve positive effects: The participant journey (across multiple scenes and exchanges of content). Sure, it may sound trite, but any conference or workshop event is a journey of awareness, discovery, understanding, and self-realization for everyone involved. From that first tweet to the last feedback survey, consideration of that journey is vital to enabling a memorable, rewarding experience for everyone. The starting points, the thematic structure as a bridging of conversational moments all throughout, toward the final wrap… and also afterward! Remember the post-journey moments, of appreciatively and gently nudging to the next event, when you may spin up the journey all over again. How does it all flow together, from end to end, entry to exit, so that it feels intentional and balanced? The points and paths. This is basically the arrangement of the agenda pieces, all those scenes and exchanges and content items…into a meaningful structure optimized for social engagement, knowledge transfer, and, frankly, other forms of human nourishment — which means great snacks and plenty of breaks for fresh air and coffee! Even some musical interludes to break up the frequencies of attention, or as palate cleansers between sessions to help shift mindsets a bit. Consider those transitional moments that guide the audience (and speakers) across the thresholds of topics and discussions, and how to facilitate such movements with a sense for the tempo and temperature. Useful artifacts. Yes, this can refer to the swag bags, filled with fun albeit banal toys and shirts with logos. But this also refers to other artifacts that shape the arenas of social encounters: the posters, signage, name badges, schedules/brochures, and interstitial slides. And don’t forget book signings, whiteboards, job boards, tables with LEGOs, and stickies and markers for ideation! Some are take-home, while others serve as references along the exploration of pathways across scenes and exchanges. The people. This is the whole crux of the matter, right? But who, really, is the audience you’re staging this event for, and who else is involved? Remember to consider the speakers (what do they stand to gain and strive to convey in their talks?); the sponsors (how do they want to get their name out and empower their brand?); the staff/volunteers (typically uncompensated, so what will they get out of it? How can you create an opportunity for volunteers to really contribute and serve as gracious hosts or leaders?); and of course the attendees — what are their expectations and goals? Never forget the cycles of feedback from the attendees and others, to sustain productive future iterations! 
It is clearly a complex endeavor to stage social-interaction encounters that achieve shared understanding and significant knowledge exchange, with valuable outcomes. But it is well worth the pressures and stresses of the challenge, if only to help everyone feel more illuminated about the topics and practices of design. So, the next time you're at a buzzy UX conference or messy workshop, consider how the multiple pieces came together for you — or didn't! There is, of course, inevitably the "madness of the launch" of the conference or workshop itself — worthy of another essay on logistics and Murphy's Law… but in the end it's all about setting the stage for compelling conversation. To design such professional social events is to manifest a shared story, sparked by some question or problem, whereby you are intentionally shaping the conditions for crucial conversations. These should be about the essence of our practice, our arts of creating meaningful interfaces and interactions that may linger in the form of those golden nuggets each of us seeks to pass along to our peers in our community of practice.
https://medium.com/the-designers-speakeasy/staging-a-social-community-experience-b632d58ae18f
['Uday Gajendar']
2020-06-15 04:36:25.027000+00:00
['User Experience', 'UX', 'Conference Planning', 'Design', 'Event Planning']
Dear Apple, Please Make a Next-Gen Apple Cinema Display for ‘Normal People’
Pro Display XDR Alternatives

The alternatives to the XDR are less appealing. While Apple uses aluminum, others still use plastic. Sure, some screens look good, but you cannot go beyond good with plastic. Why aren't more manufacturers using premium materials? The main problem with the XDR for the average consumer is that it costs $7,000. This is a screen most people should stay away from, as it is targeted at high-end professional users. So where does that leave the average Joe? Nowhere, really. There are no options and you have to look elsewhere. Even Apple lists LG's displays as options on their site. I'm not sure whether that's a sign they won't bother with displays for you and me anymore. After days of research, there are a few alternatives you can consider. Good luck learning the model names, though.

BenQ

One of the best options offered by BenQ is the PD3220U Designer Professional Monitor: 31.5 inches, 4K UHD, with Display P3 coverage. With its similar size and great panel, it is a valid competitor. The display is high-res (although not 6K) with a 10-bit HDR panel. You can daisy-chain it to set up multiple monitors, and it comes with a puck with buttons so you can quickly change color modes (if you're a fan of more clutter on your desk).

ASUS

ASUS has a lineup targeted toward creative users: a whole series called ProArt. It is within this lineup that we find some of the best alternatives to the XDR, such as the ASUS ProArt Display PA32UCX-K 4K HDR IPS Mini LED Professional Monitor (32 inches). There is a variety of models to choose from; the K model, for example, has the calibration tool included.

LG

LG has a few options. Apple sells the 27-inch LG UltraFine 5K Display on their site, and it is tailored for the Mac. It doesn't even have hardware menus, as the OS will help you with the settings. It is packed with a webcam, microphones, speakers, and several USB-C ports. The Thunderbolt plug also powers your MacBook. If you are looking for more screen real estate, you can have a look at the 34-inch LG 34WK95U-W UltraWide® 5K2K Nano IPS LED Monitor with HDR 600. Working with a widescreen is amazing: productivity levels are boosted and you don't get a screen border in the middle of your view. Another popular option in the LG camp is the Ergo series. The LG 32UN880-B 32 Inch UltraFine™ Display Ergo 4K HDR10 Monitor is something to consider. The stand looks fantastic.
https://medium.com/macoclock/dear-apple-please-make-a-next-gen-apple-cinema-display-for-normal-people-1d87d40dc48a
['Martin Andersson Aaberge']
2020-12-12 06:29:27.773000+00:00
['Technology', 'Apple', 'Content Creation', 'Hardware', 'Creative']
Life Lessons from the Locked-In and Laid-Off
Life Lessons from the Locked-In and Laid-Off

Trying to stay sane as an unemployed shut-in

Photo by JD Mason on Unsplash

I have a dirty secret to share. When my last office first started working remotely, I was convinced that I was going to become a body-builder and superstar chef in all of my newfound free time. Goodbye, MTA! Time to be an interior designer by day, kick-ass housewife by night. Then I lost my job. After finding a way to peel myself off the floor and a way to plug the tears, I told myself that in addition to mastering a souffle, building six-pack abs, and cleaning every corner of my house until it shone, I would push forward so fast and build the best network and get the greatest new job in record time. Right after I finished solving all of my life's riddles and questions. So what's my secret? I've streamed more episodes of 90-Day Fiance and Tiger King than Lynda learning classes. While my Instagram feed has been flooded by lots of #quarantinecooking, and I did wipe down the stove last night, I've just been taking it one day at a time. Sometimes that means eating chips for lunch. I missed my self-imposed deadline to finish my revamped portfolio. After hitting the ground running at warp speed, I realized that I needed some time. It was too much too fast. And it's okay if at the end of this crazy chapter of my life, my abs still push in when you poke them and my bookcase is still a little dusty. This time isn't easy. It's hard, and sometimes my lungs feel caked in cement, making breathing difficult and labored. I've never been good at sitting still, much less sitting at the dinner table every night with Anxiety and Uncertainty, each one hogging the mashed potatoes and unwilling to go home (or at least get out of my house). That said, in these past few weeks, I've learned some important life lessons to help get by. And if they can help me, queen of feeling perennially restless, I'm pretty sure they can help you too.

Wash your hair. No, really. Get out of bed, have a coffee, take a shower (or bath, if you're fancy like that) and put on clean(ish) clothes. Not the same clothes you've worn for the past week. It doesn't matter if it's a pair of sweatpants or a suit — put on what makes you feel good in real life, not life in the time of Corona. You may not have anywhere to go, but starting your day like you have somewhere to be or someone to see goes a long way in establishing some sense of normalcy again. It helps me feel like a real person, like who I was before the world was flipped on its head.

Don't pull your trigger. We all have that one thing that we know will upset us. For some of us, that might be several or many things. I've invested a lot of time in my life working through things and the pieces of myself that are imperfect. When my heart starts to beat fast, or my fingertips feel tingly, I know I'm sailing toward a bad dinner date with Anxiety. While I can't always predict when it's going to happen, there are certain things that I know are asking for trouble. Now is not the time, for example, to hop on the scale every day and meticulously track my weight. As much as that part of my brain is screaming at me to check, I have to tell it to sit down and shut up. There are enough things out of my control right now — I don't need to give in, pull the trigger, and add another ball of uncertainty to juggle.

You don't have to be okay. This is probably one of the easiest things to say, but the hardest to put into practice.
You've undoubtedly had someone say this to you, and I'm sure they had the best of intentions. I've even told it to myself many, many times, like some sort of strange daily affirmation. The biggest challenge is that, by and large, people actually want you to be okay because they don't know how to help if you're not. They want to ask you how you are, because that's the kind thing to do, but tell them anything other than that you're fine and cheerful, and it throws them for a loop. But this is a time for you — you are allowed to feel anxious, scared, and/or nervous. It's okay to tell people that. Don't burden yourself by worrying about burdening others. People will understand, even if they don't have the right words to tell you that.

It's okay to be okay today and a wreck tomorrow. Or to be okay right now and a mess in an hour. It's okay to be okay for a week and then break down. I've been amazed at the resiliency and strength I've found while dealing with this time of unpredictability. For the most part, I've been able to set aside the feelings I have about being laid off, or the craziness of Corona, and focus on what's ahead. That said, like the sneaky fiends they are, Anxiety and Uncertainty pop up and hit me in the face, often when I least expect it. Yesterday, I was working on my portfolio and everything was great. I was happy and actually impressed looking back on what I've accomplished in my professional career. Then I got to the section about a particular project and alarm bells started going off. All of a sudden, I was not okay. That's okay. I am okay now and will be until I'm not again, at which point the cycle will reset. Life goes on.

Take a break. When the tears started bubbling up while looking at my project, I recognized that I just needed to take a minute. So I made lunch and came back to it in a few hours. Last week, I was doing so well until I had to work on my resume; I stared at the screen for hours, hardly writing a word. All of a sudden, I was crying, reminded of everything that was no more. It made things even more finite. Instead of forcing myself ahead, I took the day off and worked on a New Yorker puzzle to keep my mind distracted, but focused. I started fresh the next day and it went much better, resulting in a final product I actually like. I've learned that I don't have to push myself to the edge just because I feel like I should do something. If my body or brain is telling me to stop, I need to listen. Ultimately, only I know when something is right for me. Taking a step back, or a moment to rest, doesn't make me weak. It means I'm strong enough to give myself what I need.

You don't have to be Martha Stewart. Your house won't fall apart if you don't do more than you did before the world was crushed by Corona. If cleaning and cooking make you feel good, then go for it. Try a new recipe. Polish your furniture. Trap all of your dust bunnies. But if you don't feel like it, that's okay too. I don't recommend you let a mountain of laundry pile up, quit doing dishes, stop bathing, or eat potato chips for dinner every night. But quarantine life is not a sport or a competition. I've tried lots of new things in the kitchen — baking with nut milk, spicing up boxed mac 'n' cheese, and dalgona coffee (but let's be honest, that was for the 'gram) — but only because that brings me joy. I also vacuumed yesterday for the first time in three weeks, mainly because my dust bunnies were full-grown rabbits and I was scared not to.
I have a pile of dishes in my sink at the moment. We ate tater tots for lunch yesterday (they were veggie tots, but still). I wasn't Martha Stewart before Corona came to town, so I sure as hell don't have to be her when it's gone.

You are not alone. It feels callous to say there are silver linings to this pandemic, when countless lives are being lost and essentially all of our lives are impacted in a meaningful way. But you are not alone. This is a collective experience. While we may not all be affected in identical ways, Corona is universal and shared. If you feel worried, you are not alone. If you lost your job, you are not alone. I've been overwhelmed by how supportive and generous people have been since I was laid off. I have connected with old friends and family, and maybe actually made a couple of future friends through networking. We have been reminded of our humanity in scary ways, but also in beautiful ways. Ultimately, we are all flesh and blood, and when you strip away the distractions, we are much more the same than different.

I'm not going back. There will undoubtedly be lasting impacts of COVID-19. People are already talking about what it means for the future of work and the social safety net. There are plenty of good questions about how we move forward in the next weeks, months, and years. One thing I think we can all be certain of is that our lives will never be exactly as they were. And you know what? That's okay. I was someone who was often too busy to pick up the phone, too physically tired after work to do anything but try and force myself to eat some random thing out of the fridge, and so emotionally and mentally exhausted that I had to remind myself constantly to let things go to avoid a complete breakdown. I knew, deep down, that I needed to change — that I needed a change. I just always told myself that it could wait until I did the next thing, finished the last task, and that I was fine. I pushed and pushed down my fatigue, convinced I could just keep going. I tried to tell myself that while change would be nice, I didn't need it right now. Well, Life decided for me that I couldn't wait any longer. And now that I've had a few weeks to reflect, Life was right. I'm not going to lie and pretend that I'm elated the change happened this way, or that it's not hard, or that I don't miss parts of my "old" life. I'm mad that I didn't make the change myself, that I had no choice, and that it meant losing the way I have. That said, I'm not going back. Life said, do this. And for once, I'm trying to follow the instructions. Maybe I really didn't know better before, despite what I told myself. There is a path forward. It's a little thorny at the moment, and I can't see where it's going, but like each day, I'm going to take a few more steps ahead today. Right after I wash my hair.
https://medium.com/swlh/life-lessons-from-the-locked-in-and-laid-off-58041411ade6
['Laura A. Heeter']
2020-04-03 15:49:40.581000+00:00
['Self Improvement', 'Self', 'Advice', 'Personal Growth', 'Coronavirus']
Learning to See: Visual Inspirations and Data Visualization
Although apparently unrelated, abstract art and data visualization actually have a lot more in common than one would expect, and can in some ways be considered two very close disciplines. A study on "Early Abstract Art and Experimental Gestalt Psychology" by Crétien van Campen, published by MIT Press, draws the conclusion that the same theories universally recognized as the basis for the perception studies that support effective data visualization also deeply influenced the work of abstract artists such as Kandinsky or Mondrian. This common root, which we can trace back to German psychologists of the early 20th century, reveals how, while clearly pursuing different goals, abstract artists and data visualization designers both draw on common perception principles and apply them to simple shapes and a definite range of colors to create basic visual compositions that please the eye and, hopefully, deliver a message.
https://medium.com/accurat-studio/learning-to-see-visual-inspirations-and-data-visualization-ce9107349a
['Giorgia Lupi']
2016-01-27 16:27:25.618000+00:00
['Art', 'Design', 'Data Visualization']
The Next Evolution of Marketing Mix: Growing our Company in the Me Generation.
When old marketing frameworks aren’t relevant for a digital age, new ideas must be adopted to reach our multi-tasking, well-connected customers. tl;dr: Marketing mix frameworks have evolved to place customers first in our marketing strategies. Emphasis should be on our customers’ needs and wants and how our products can help satisfy those desires. Participation should be a new dimension with which we build our strategies. Companies that build participation avenues early will see organic growth through customers’ excitement and intimacy with products. Step into a time machine: We all remember sitting in our Marketing 101 classes, learning the basic 4P’s framework. We were taught that if we stepped into our marketing kitchen, customer acquisition and subsequent profits would be as simple as adding our four “P” ingredients. A dash of promotion. A sprinkle of place. Et voilà: success! Dinner is served! Swedish Chef courtesy of GIPHY. But making a marketing recipe from a combination of product, price, place, and promotion isn’t as viable a formula in today’s connected world. Let’s look at the formula another way. The 4P’s v2.0 Created by McCarthy in 1960, the 4P’s were thought to be the most common and necessary variables to the marketing mix plan: 1. Product is WHAT we are selling: a good or a service. 2. Place is WHERE we are selling it: the channels. 3. Promotion is HOW we are selling it: building awareness. 4. Price is HOW much a potential customer is willing to pay: perceived value. The theory is that you should be able to build your marketing strategy around these four elements for business success. But where is the rest of the customer acquisition funnel? What about brand bonding and loyalty? Today’s world has easier flows of information and more informed customers. We must adapt our marketing strategies. We need a more user-centered approach. This is where 4P’s become 4C’s and/or 4E’s. Image Credit: compilation by the author of frameworks from McCarthy (1960), Lauterborn (1990), and Fetherstonhaugh (mid-2000s). New Frameworks: a focus on customers and their journeys As seen in the diagram above, these new frameworks place a stronger emphasis on our customer and her journey with our company. We are not selling a product and its features, but an answer to our customer’s needs. Who does our customer want to be, and how can our product help her achieve that image? A best practice is turning our attention away from our product initially and focusing on our customers first. Each of the 4C’s reframes our original “P” metrics to be more applicable and customer-centric. Moving beyond the user, the 4E’s make the framework more holistic, with a deeper focus on customer experience and journey. Not only should our company be developing a product that fits the market by solving a customer’s needs, but we should be creating continuous pre- and post-sales connections. Ann Handley, author of “Everybody Writes” and the world’s first Chief Content Officer, says, “Make your customers the heroes of your stories.” But perhaps we should be helping our customers become the heroes of their own stories. People buy into Apple’s mission of “challenging the status quo and doing things differently” not because they are happy to be associated with a company that has cool values, but because they too hope to become or be seen as rebellious and innovative. I swear, I’ve never waited 8 hours in line for an iPhone. Futurama courtesy of GIPHY.
Louis Vuitton gets customers to spend $10,000 for a handbag they don’t need, not because they have a super unique product, but because the company plays into consumers’ fantasies of being glamorous and worldly. They achieve this by using beautifully photographed celebrities holding handbags while they walk through exotic locations, emphasizing the storied heritage of the brand, providing a scarcity of purchasing channels, and offering personalized service throughout the purchasing journey. For more on this, see Simon Sinek’s Why, How, What mindset TED Talk and Alan Klement’s Jobs to be Done theory. The 5P’s: Customers are our co-stars Just as The Beatles had a mysterious, missing fifth member, so too could our trusty 4P framework. There have been many candidates for this role: people, proliferation, permission, personalization, physical evidence. But the word that seems most relevant for today’s social society is participation. The digital transformation has created a participation revolution: · We watch, with hundreds of millions of other people, the birth of a baby giraffe. · We share our vacation pictures with that girl we met once in 5th grade (and hundreds of other people). · We send 140-character public messages to CEOs of our favorite products and lead singers of our favorite bands. We participate in the day-to-day lives of friends and strangers alike, and consumers expect the same from their favorite brands and new companies. Businesses that do not engage customers at all stages of the product life cycle risk dying a quick, lonely death. So how can companies foster participation with their customers to give them a starring role? Introducing four more “C” words to think about: The New 4C’s: better, faster, stronger. Businesses should focus on building excitement and intimacy in their relationships with their customers through both company- and customer-led engagement. We can achieve these more meaningful relationships with (a) communication, (b) creation, (c) care, and (d) community. Image Credit: Modified from the Marketing through Social Media course at HEC Paris, Jouy-en-Josas, France a) Communication: Use social media and content marketing to reach your audience and interact with them in digital spaces they frequent and feel comfortable in b) Creation: Allow users to create content that sells the product or service for you, making them the stars — influencers, evangelists, and buzz generators c) Care: Develop customer care avenues that are efficient and effective, so as to nurture loyalty and trust in your product, as well as continuously monitor progress d) Community: Build a space where like-minded lovers of your product can come together and bond with the company and with one another Using these new 4C’s to build a strategy for customer acquisition, retention, and referral will allow our audience to participate more authentically and become the authors of their own product evolution story. Content Marketing: Is it me you’re looking for? Now that we’ve built a strategy that focuses on customer participation, we need to maintain engagement. How do we implement these frameworks? Remember: our focus is no longer on product, but on customers’ needs and wants and their experience. One of the best ways to do this is by producing quality content. Look out for these pitfalls when attempting a digital strategy: 1. Producing content simply to fill space in our channels: A focus on outbound messages and acquisition content, but not enough community building to bolster participation. 2.
Not coordinating content across channels: Content is produced randomly and excessively, with little thought to design or consistency. 3. Creating content on a campaign-by-campaign basis: No long-term vision about how customers interact with and experience content over time. 4. Not curating content that is channel-specific: Mixing up which users frequent which sites and how they interact with the content; not understanding which content “pops” on each site. Build a coherent, thoughtful engagement plan with an emphasis on holistic experience. The hope is that further participation and community building will happen organically if the correct seeds are planted early. From Strategy to Reality Frameworks and theories may seem clunky and burdensome as we try to employ them. But they can add value when we understand how to adapt them. It doesn’t matter if there are 4P’s or 4C’s. What matters is how we can build on these concepts to create meaningful stories and experiences for our customers. What’s the one word you would add to your marketing mix framework? It doesn’t have to begin with “P”! I’d love to hear your ideas about how marketing strategy is evolving, either in the comments below or at [email protected] A big fist bump to Thomas Maremaa, Haomin Xu, and Christine Luc for the thoughtful comments. Props to Elizabeth Braden for editing. And a heartfelt thank you to Tradecraft and HEC Paris for providing me space to learn and grow.
https://medium.com/tradecraft-traction/the-next-evolution-of-marketing-mix-growing-our-company-in-the-me-generation-d3e98779a21d
['Jessica Poteet']
2017-08-03 23:09:31.242000+00:00
['Strategy', 'Marketing', 'Growth Hacking', 'Millennials As Consumers', 'Digital Marketing']
👔 Career Goals
C A R T O O N 👔 Career Goals What’s yours? Cartoon by Rolli Cartoonist’s Note №1 Though I personally enjoy my job, and can’t imagine retiring, I’ve observed that I’m in the minority. That said, I have no specific goals, other than to continue to exist. How about you? Cartoonist’s Note №2 This cartoon will earn no money (Medium recently all-but-demonetized poetry, cartoons, flash fiction and other short articles). Please consider buying me a coffee. More coffee = more cartoons for you to enjoy. Cartoonist’s Note №3 Like this cartoon? Get it on a coffee mug! Cartoonist’s Note №4 This cartoon is brought to you by the letter “D.” “D” is for Dr. Franklin’s Staticy Cat and Other Outrageous Tales, my collection of humorous stories and drawings for children. Cartoonist’s Note №5 My new one-man Medium magazine is called — Rolli. Subscribe today. Cartoonist’s Note №6 From now on, I’m letting my readers determine how often I post new material. When this post reaches 1000 claps — but not before — I’ll post something new. Cartoonist’s Note №7 You might like these cartoons, too.
https://medium.com/pillowmint/career-goals-7dec6f74f743
['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites']
2020-03-01 16:54:33.974000+00:00
['Humor', 'Comics', 'Cartoon', 'Coronavirus', 'Work']
The Gangster in the Blue Serge Suit
After his release Dio moved to Allentown, Pennsylvania, where he opened a dress manufacturing business. He sold it in 1950 before moving back to New York to open another dress company. If there was any doubt about Dio’s motivations in labor organizing, it was exemplified by what he did after he sold the Pennsylvania business for $12,000. One year later, Dio returned, demanding another $11,200 to use his influence to prevent the plant from unionizing, while at the same time scheming with corrupt union officials back in New York to take control of his own charter, the United Auto Workers-AFL Local 102. Dio’s association with the unions brought heavy scrutiny from law enforcement, which knew his reputation. This became a problem in his quest for union leadership. The parent organization accused Dio of stacking the management of local charters with unsavory characters with criminal histories, and sought a way to remove Dio and his ilk from the organization. Through the course of their investigations, they were even left baffled as to how Dio, who had never worked in the trade, acquired the charters to begin with. Union officials sought the procedural means to oust Dio, but like the challenges of Dio’s criminal prosecutions, the union investigations were unable to find victims willing to testify against him. They finally succeeded in 1954, when Dio was prosecuted for tax fraud due to income he had failed to report on the sale of his Pennsylvania dress business and the bribes made to keep the plant non-union. With his 60-day jail sentence, the union leadership finally had something they could use to justify Dio’s ouster. In the end, the unions accused Dio’s local organizations of not even having members, deeming them “paper locals” due to their inflated or imaginary numbers. The Times reported that the New York District Attorney’s office said Dio’s locals existed for the sole purpose of forcing “extorsive action” against the public. Yet even with Dio’s setbacks and brief jail sentence, he was still a stubborn problem for the authorities and the unions. One of the challenges to prosecuting individuals like Dio was the uneven application of the law across the country, as well as between city, state, and federal authorities, according to McCarthy. “It was a hodgepodge of enforcement,” he said. It took time for them to evolve and catch up with the criminals. In 1956, the U.S. Attorney’s office warned that New York was on the verge of a “gangster invasion.” Much like the realignment of security agencies after the 9/11 attacks, city, state, and federal law enforcement started working on ways to share evidence and communicate more efficiently to effect more successful prosecutions of organized crime. The timing was appropriate. After Dio’s release in 1954, he went right back to the labor racket he knew so well, and his methods grew even more brutal.
https://medium.com/memory-project/the-gangster-in-the-blue-serge-suit-801da4f616be
['Ben Feibleman']
2017-05-02 18:52:03.673000+00:00
['Organized Crime', 'New York', 'History', 'True Crime', 'Mafia']
Octhum: a tool to unify different Artificial Intelligence approaches
Artificial intelligence models, in a few words, simulate the human brain. For example, neural networks, one of the most famous artificial intelligence approaches, learn about a domain (through a database file or another source that contains patterns) and simulate neuron signals and their connections (synapses), considering the weight of each one in the “brain processing”. Exactly this happens in our brains: we spend our entire lives learning patterns, classifying them as images, sounds, and touches. The great problem is that there isn’t an easy way to reuse artificial intelligence software the way we reuse our brains to process everything. If you need to work with tumor patterns, you will need to acquire a neural network with specific synapse weights and neuron counts tuned to that domain. If you are a biologist and need to recognize whale species by their calls, you will need to acquire another neural network platform, with synapse weights and neuron counts completely different from the first case. Octhum was made to solve this problem, training each intelligence with many models, verifying which model is the best, and using it. In this way, we can serve various domains on the same platform, with each domain using the model that suits it best. This is possible because Octhum requires the same formatting for every database file. Below we have two examples: an intelligence that decides what a color is, based on the RGB values entered, and another intelligence that decides an abalone’s sex, based on characteristics such as ring count, viscera weight, height, width, and others. In the first intelligence, we format the CSV file (used as the database file in Octhum’s neural network training) as in the image below, where the first column contains the final classifications and the other columns contain the inputs (or variables) associated with each classification. The second and third lines contain the minimum and maximum values of each variable (used only for validation). CSV file containing some RGB values and their respective colors In the second intelligence, we format the CSV file as we did for the first, maintaining the same structure. To train any intelligence, the file format must be the same. CSV file containing some abalone characteristics and their respective sexes Octhum will process these databases and store the configured neural network, accessing it when the user wants to use the intelligence. When this happens, Octhum receives the values of the variables (defined by the user), inputs them into the neural network, and responds with the final classification. The images below show the first intelligence being used. Fields to input variable values R=255, G=0, B=0 is red R=0, G=255, B=0 is green
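To make the file convention concrete, here is a minimal loading sketch in Python. It is only an illustration, not Octhum's implementation (per the article's tags, Octhum's backend is PHP); the assumption that the first row is a header, and the helper names, are mine:

import csv

def load_octhum_csv(path):
    # Layout described in the article: column 0 holds the final
    # classification, the remaining columns hold the input variables.
    # The second and third lines hold each variable's minimum and
    # maximum, used only for validation. Row 0 is assumed to be a header.
    with open(path, newline='') as f:
        rows = list(csv.reader(f))
    header = rows[0]
    mins = [float(v) for v in rows[1][1:]]
    maxs = [float(v) for v in rows[2][1:]]
    samples = [(row[0], [float(v) for v in row[1:]]) for row in rows[3:]]
    return header, mins, maxs, samples

def in_bounds(values, mins, maxs):
    # Reject inputs that fall outside the declared min/max ranges.
    return all(lo <= v <= hi for v, lo, hi in zip(values, mins, maxs))

# Hypothetical usage with the RGB example ('colors.csv' is a made-up name):
# header, mins, maxs, samples = load_octhum_csv('colors.csv')
# print(in_bounds([255, 0, 0], mins, maxs))  # R=255, G=0, B=0 -> within bounds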
https://medium.com/datadriveninvestor/octhum-a-tool-to-unify-different-artificial-intelligence-approaches-69efdb817564
['Vitor Fonseca']
2019-07-06 00:47:00.860000+00:00
['Neural Networks', 'PHP', 'Rest Api', 'Artificial Intelligence', 'SaaS']
Python 3.9 Updates in 2 Minutes
The stable version of Python 3.9.0 was released on 5 October 2020. Let’s see the major new features. Dictionary Merging Consider two dictionaries having the same key-value pairs except one of the values in one dictionary. For example, a person’s email id has been changed recently, which is in a new dictionary (b), and you would like to update the email id in the original dictionary (a) containing the other details. There are two ways to do it (updating dictionary a) in Python 3.9: the merge operator (|) and the in-place update operator (|=). Output: {'id' : 10, 'username' : 'python3.9', 'email' : '[email protected]'} Type Hints Python 3.9 supports built-in generic type hints natively, such as list[int], with no import from the typing module. Note that annotations are not enforced at runtime; a static type checker such as mypy will flag an attempt to add a string to a list[int]. Removing Prefix/Suffix Suppose I have a list of doctors’ names and one of them does not have ‘Dr.’ prefixed. Now I would like to remove the prefix so that I have only the names on my list. Let’s first see how it was done before Python 3.9. Output: ['Leonard McCoy', 'Beverly Crusher', 'Julian Bashir', 'The Doctor', 'Phlox'] Python 3.9 provides two functions to remove a prefix or suffix: removeprefix() removesuffix() Now let’s see how to do the same job in Python 3.9. Output: ['Leonard McCoy', 'Beverly Crusher', 'Julian Bashir', 'The Doctor', 'Phlox'] Time Zones Python 3.9 provides native capabilities to specify the time zone through the new zoneinfo module. Previously we had to do this by installing the third-party library called pytz. The code below shows the current time in the India time zone.
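The article's inline snippets did not survive extraction, so the sketch below reconstructs the four features in one runnable Python 3.9 script. The dictionary values and email strings are placeholders for the article's redacted ones:

from datetime import datetime
from zoneinfo import ZoneInfo  # standard library as of Python 3.9 (PEP 615)

# Dictionary merging (PEP 584): | builds a new dict, |= updates in place.
a = {'id': 10, 'username': 'python3.9', 'email': 'old@example.com'}
b = {'email': 'new@example.com'}
merged = a | b   # values from b win on duplicate keys
a |= b           # equivalent to a.update(b)

# Built-in generic type hints (PEP 585): no `from typing import List` needed.
# Annotations are not enforced at runtime; a checker such as mypy flags misuse.
scores: list[int] = [90, 85, 77]

# removeprefix()/removesuffix() (PEP 616): no-ops when the affix is absent.
doctors = ['Dr. Leonard McCoy', 'Dr. Beverly Crusher', 'Dr. Julian Bashir',
           'The Doctor', 'Phlox']
names = [d.removeprefix('Dr. ') for d in doctors]

# Standard-library time zones (PEP 615); pytz is no longer required.
# (On Windows, the zone database comes from the tzdata package on PyPI.)
now_in_india = datetime.now(tz=ZoneInfo('Asia/Kolkata'))

print(merged)
print(names)
print(now_in_india)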
https://medium.com/towards-artificial-intelligence/python-3-9-updates-in-2-minutes-30ab522c1f3c
['Sujan Shirol']
2020-10-08 03:19:49.969000+00:00
['Programming', 'Python Programming', 'Python3', 'Updates', 'Python']
Two-factor authentication flow with Node and React
In this article we will write a simple React app to demonstrate a two-factor authentication flow, which will prompt for username and password and then a time-based generated password. We will also generate a QR code containing the two-factor configuration that can be read by authenticator apps such as Google Authenticator, in order to generate one-time passwords. If you are not familiar with two-factor authentication and how one-time passwords are generated, I recommend this excellent article. Here we will focus on client and server interaction to successfully implement a two-factor authentication flow. The full code is available at http://github.com/OnFrontiers/mfa-demo-node Project structure There are quite a few files to bootstrap our app and handle the different concerns. Hopefully I managed to keep their implementation to a minimum, so bear with me ;) Let’s take a look at them: - app - components - App.js - Input.js - Login.js - api.js - index.js - server - server.js - storage.js - webpack.config.js - package.json Here is where our app starts. We try to log in with the session cookie and then initialize the app: Login request to the server: The App component will control the authentication flow: Our stock login form: And a simple input component: Here is our server implementation. We use a cookie session and serve our static files from the public folder. The /login endpoint checks the user credentials if provided; otherwise it checks if a user has been restored from the session cookie: The user state is saved in a .json file. This is not suited for production use, but it will do for our demo: The dependencies: There is a build configuration for the client bundle and another one for the server bundle. Both are processed with Babel: Run the demo app Install dependencies: npm install Build the app: npm run build:dev Start the server: npm run server:dev It will be available at http://localhost:8080: Setup two-factor authentication There are two types of one-time password: HOTP (HMAC-based one-time password) is an algorithm that uses HMAC with a secret to generate a one-time password based on a counter value. TOTP (time-based one-time password) provides the current time (typically in 30-second increments) as the counter value to generate a one-time password using HOTP. The algorithm implementation is taken from the article mentioned above, which directly follows the specs RFC 4226 (HOTP) and RFC 6238 (TOTP). In this demonstration we are going to use TOTP. We are going to need new dependencies: npm install base32-encode base32-decode qrcode We use the verifyOTP function to verify that the code provided by the user is correct. Note that we iterate over the window argument, which defaults to 1, effectively checking time - 1, time, and time + 1. The reason is that by the time the user sends the verification code, the 30-second window for that code might have already passed, and the code would be considered invalid. Thus, for a better experience, we consider the previous (and next) code valid as well. For better security, a window higher than 2 is not recommended. If you choose not to use an authenticator app in your flow, you could send the code generated by generateTOTP to users via SMS or email. We need two new endpoints to set up two-factor authentication: /mfa_qr_code : this endpoint generates a random secret for the authenticated user (if not yet generated), encodes the configuration URI as a QR code, and serves it as a PNG image.
A typical configuration URI looks like this: otpauth://totp/MfaDemo:alan?algorithm=SHA1&digits=6&issuer=MfaDemo&period=30&secret=5CZ4UNFL54LOGJ24ZIWUHBY MfaDemo is the application issuing the configuration and alan is the application user. This will be displayed on the authenticator app’s UI. Authenticator apps typically use SHA1 and issue 6-digit codes every 30 seconds by default. The secret value makes sure that the generated code is unique to that user. /verify_otp : this endpoint verifies that the code sent by the user is correct, returning true if valid and false if invalid. If valid, it also sets the mfaEnabled user attribute to true . When MFA is enabled, we want to require a one-time password when the user is trying to log in. We will do that in a moment. This is how our server code looks after adding the two endpoints: On our client code we add the API call to /verify_otp and create a new form to prompt for the one-time password. If MFA is not enabled yet, we show the QR code image, so the user can scan it in the authenticator app. Our App component after adding MFA setup to the flow: Require one-time password on log in Once MFA is configured, we want to prompt the user for the verification code after log in. For that we need to make some adjustments on our server. When validating the session on /login , we check if MFA is enabled and whether it has been verified in the current session. If not, we return the 403 HTTP code, which in our app has the semantics that MFA verification is required. If we receive a 403 response, we prompt the user for the verification code. if (req.user.mfaEnabled && !req.session.mfaVerified) { return res.status(403).end(); } On /verify_otp , if the provided verification code is valid, we flag the session as mfaVerified : req.session.mfaVerified = true; And finally, to protect our future endpoints, we add this middleware: // Routes beyond this point must have MFA verified if enabled app.use(function (req, res, next) { const user = req.user; if (user && user.mfaEnabled && !req.session.mfaVerified) { return res.status(403).end(); } next(); }); Updated endpoints on server.js : Our API call logic is updated to consider the HTTP 403 response: The app initialization will pass the requireMfa prop to the App component: And the App component will use it to control the authentication flow: Conclusion It can be tricky to get an authentication flow right. Hopefully this demonstration will give you some insights to implement your own according to your needs.
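The repository linked above holds the full Node implementation; as a language-agnostic illustration of the RFC 4226/6238 algorithms and the windowed check described here, this minimal Python sketch may help. The function names are mine, not the demo's, and it assumes a correctly padded Base32 secret:

import base64, hashlib, hmac, struct, time

def hotp(secret_b32, counter, digits=6):
    # RFC 4226: HMAC-SHA1 over a big-endian 8-byte counter, then
    # dynamic truncation to a short numeric code.
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack('>Q', counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret_b32, period=30):
    # RFC 6238: HOTP with the current 30-second time step as the counter.
    return hotp(secret_b32, int(time.time()) // period)

def verify_otp(secret_b32, submitted, window=1, period=30):
    # Accept the previous, current, and next time steps, mirroring the
    # windowed verifyOTP behavior discussed in the article.
    step = int(time.time()) // period
    return any(hmac.compare_digest(hotp(secret_b32, step + delta), submitted)
               for delta in range(-window, window + 1))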
https://medium.com/onfrontiers-engineering/two-factor-authentication-flow-with-node-and-react-7cbdf249f13
['Alan Casagrande']
2020-12-15 13:19:32.942000+00:00
['Nodejs', 'Software Development', 'Two Factor Authentication', 'React', 'Cybersecurity']
5 Lessons From a Trauma Survivor
To varying degrees, as a result of Covid-19, everyone on the planet is joining this previously elite club. Trauma survivors. Apart from the physical, economic, cultural and social damage, there will be an enormous impact on the mental health of everyone from survivors and the medical professionals to those who saw their livelihood evaporate overnight. Those in abusive relationships will have the trauma complicated by the undesirable imprisonment with their abuser. Children sold the story that steady progression through the education system is the only way to achieve a meaningful life will find themselves swimming in currents that no one had warned them about and for which they are ill-prepared. Grief will become commonplace. There will be a massive spike in those exhibiting the symptoms of PTSD. The main symptoms of PTSD are: Re-living the traumatic event through distressing, unwanted memories, vivid nightmares and/or flashbacks. This can also include feeling very upset or having intense physical reactions such as heart palpitations or being unable to breathe when reminded of the traumatic event. Avoiding reminders of the traumatic event, including activities, places, people, thoughts or feelings that bring back memories of the trauma. Negative thoughts and feelings such as fear, anger, guilt, or feeling flat or numb a lot of the time. A person might blame themselves or others for what happened during or after the traumatic event, feel cut-off from friends and family, or lose interest in day-to-day activities. Feeling wound-up. This might mean having trouble sleeping or concentrating, feeling angry or irritable, taking risks, being easily startled, and/or being constantly on the lookout for danger. It is not unusual for people with PTSD to experience other mental health problems as well, like depression or anxiety. Some people may develop a habit of using alcohol or drugs as a way of coping. [Phoenix Australia] Lesson 1. Nothing will ever be the same again. I have had two major traumas in the past five years. A brain tumour administered the first. It knocked out all of the hearing in one ear overnight, caused me to lose my balance, necessitating walking with a stick, and gave me several months of facial pain, anxiety and depression. After four weeks of sick leave, I tried to go back to my job as a music teacher. I wish someone had told me that nothing would ever be the same again. I wasted enormous amounts of energy trying to get my life back to normal. I went to countless meetings, we agreed a phased return to work and carried out an assessment of fitness for work, all because I wanted life to be normal again. The moment I walked back into that school, I intuitively knew that I was no longer capable of, or interested in, being a teacher. Walking into a crowded classroom was a terrifying experience. I could no longer tell the direction from which sound was coming. I had no depth of sound field, so the voice of the child in front of me was drowned out by all the other voices in the room. In a music classroom, I was overwhelmed with a cacophony and unable to use my hearing to focus on one sound to the exclusion of others. My mind drove me on. “You should be able to do this! You have to earn a living. There are plenty of deaf teachers. You are only deaf in one ear.” After two trial lessons, I gave up. Thanks to a wonderful boss, I was given a part-time job in a quiet office with no further contact with children. The lesson is this: look anew at what you want to be doing with your life.
If you have lost your job, are you going to bust a gut to get back into the same line of work, or could you step into something more challenging or fulfilling or more suited to your current situation? If your relationship came under strain during this period, are you going to soldier on with it or use this opportunity to move to something new? If your business collapsed, will you automatically look for support to rebuild it or will you step sideways into something more relevant to the times? Acknowledge the grief. When I look back at those weeks struggling with the loss of my hearing and my livelihood, I can now see it as circumstances allowing me to make changes in my life that were well overdue. Can you grasp the awful circumstances of a pandemic and see a personal opportunity for growth opening up? Lesson 2. You will find yourself responding to events in a way that seems out of your control My second trauma was in July 2018, when my beautiful, extraordinary daughter Holly took her own life. It was entirely out of the blue; she seemed to have everything she needed in life, but in a shockingly short period her mental health deteriorated, with fatal results. Holly In a short article, I can’t possibly express all the feelings that I have been wading around in since that time. What I do know is that my body can suddenly start shaking for no apparent reason. My mind will sometimes become triggered, and I can no longer hear a word anyone is saying. I have had cramps in both legs and disconcerting dizzy spells. I accept them now as a result of trauma. I don’t try to medicate or expect them to go away. I accept I have suffered trauma, and I will always have reactions to some events, and I will not always know why. The lesson here is not to try burying your traumatic experience. After WW2 many in my parents’ generation tried to live their lives without referring to the awful events in their youth. It is not a good idea. Your body will remember if your mind tries to block it out. Learn to listen to your body. It is telling you that there is unexpressed emotion stored up. There is nothing wrong with you. It is not something you need to fix. I am practising telling people when it is happening. Often I am feeling powerless in some way. Recently I was watching a political interview and simply had to switch it off and breathe as my body was shaking. I was powerless to prevent Holly from dying, and whenever I feel powerless, my body reacts. Lesson 3. You are a different person since your experience. Change has already happened. You may not recognise yourself, your reactions or your responses. Trying to resist this change is futile, painful and ultimately useless. After the death of my daughter, I desperately wanted to carry on as usual. It was not viable. Pretty quickly, I enrolled on a Counselling Diploma course. I felt I needed something purposeful to do and something that would connect me with Holly, who had a Master’s in Psychology. The change in career has served me well. What has not helped me is trying to keep other activities going because they were part of my routine. I was in a band that I had been enjoying for a couple of years, and I was keen to keep it going. After a while, this became painful. I was no longer the same person, but I found it difficult to find expression for the new me. It all ended in tears. My lesson from this is that I have to accept that I have changed. I also have to allow that others will continue treating me as if I were the old me. I have to enlighten them.
There is no shortcut to this. As a man trained to hide my feelings and keep a stiff upper lip, I found it extraordinarily difficult. Over time it becomes easier. I have learnt to trust my feelings and express them as best I can. If I feel assumptions are being made based on how I used to be, I make it clear the assumptions are wrong and enter into dialogue. Take some time to assess what has changed in you. What can you create for your future in the light of that? Lesson 4. Purpose and meaning become much more relevant to you. The accumulation of my two traumatic events has pushed me to a new level of purposefulness. No longer will I accept doing a job just because it pays the bills. What sort of a waste of life is that? I focus on doing things that are meaningful and fulfilling, and at the moment, I trust that I will make a living at it in the long term. The fabric and structure of education, society and employment have now been put on hold to support us through this awful crisis. Maybe now is the time to reassess. When you can rebuild your life, what will you be delighted to bring back? What would you prefer to leave out? Lesson 5. No one outside of yourself will ever give you what you need. The first reaction to any trauma is to blame. Who did that to me? How can I call them to account? How can I make them change their ways? How can I punish them? Blaming is pointing out there, rather than in here, into your own mind, when you find yourself in a painful or uncomfortable experience. Blame means shifting the responsibility for where you are onto someone or something else, rather than accepting responsibility for your role in the experience. Iyanla Vanzant With this pandemic, it is going to be difficult to accept responsibility for our part in it. If you want to move forward and create a new life from the ashes of the old, you have to embrace this. My mind went into overdrive with blame when Holly died. I started by blaming myself and my wife, then her husband, the workplace she was employed at, and the GP she visited. I soon realised this was all a futile attempt not to blame Holly for her actions. Ultimately I had to accept that what happened happened, and I will never know why. I have no desire to waste vast amounts of energy trying to unearth the unknowable. When the dust has settled on Coronavirus, will you leap straight back into all the things you were doing before? Or will you give yourself some breathing space? Will you recognise that you and your loved ones have suffered a massive trauma? You need time to heal and rebuild yourself and your lives in light of the different world in which we find ourselves.
https://medium.com/invisible-illness/5-lessons-from-a-trauma-survivor-e120e223d4b7
['John Walter']
2020-03-30 06:25:32.478000+00:00
['Trauma', 'Covid 19', 'Mental Health', 'Grief', 'Recovery']
HodlBot Launches on Product Hunt
I just wanted to write this quick post to thank everybody for supporting us along the way. We’re going after #1!!! If you haven’t already, please drop by and give us some ❤️. Please don’t feel obligated to leave an upvote. We’d prefer if you dropped a comment and gave us some feedback instead 😊. Our Maker comment on Product Hunt: Hey Product Hunt, Our platform first went live last fall; a year later we’re approaching $100M in executed trades 🎉 with +10,000 users from 80 different countries 🌎. After following the PH community for years, we’re excited to share HodlBot with you. We are fans of indexing & passive investing. Historically in the equity markets, indices have outperformed 90% of active managers over a 15-year period. We believe a well-diversified cryptocurrency portfolio will yield similar results compared to active traders. We initially created HodlBot because we were frustrated there weren’t any simple solutions out there for everyday investors. Existing solutions were only available for accredited investors, had terrible liquidity, or simply didn’t track the price of the underlying assets correctly. We took a different approach by directly integrating HodlBot with the APIs of popular cryptocurrency exchanges (Binance, Coinbase, Kraken, KuCoin, Bittrex). Our users hold the underlying assets, we don’t have withdrawal access, and we follow best practices to encrypt user data. When we first launched an MVP last year, all it did was diversify your portfolio across the top 20 coins by market cap. Since then we’ve added the ability to: Create a custom portfolio weighting users can tweak Create a custom index based on a starting & ending rank, with percentage caps, and different weighting strategies Automatic rebalancing (custom frequency or threshold) that keeps your portfolio on track Backtest a portfolio to see how it would have performed in the past Blacklist coins you don’t want to include in an index Cash out a % of your portfolio into any coin in order to exit the market Track the performance of your portfolio Diversify your portfolio across cryptocurrency exchanges (Binance, Kraken, KuCoin, Bittrex, Coinbase) At HodlBot, we’re big believers in being transparent and writing down what we’ve learned. If you want to learn more about what we’re doing, here are some of our best blogs: Until next time, friends. Adios!
https://medium.com/hodlbot-blog/hodlbot-launches-on-product-hunt-e53a695fb6d5
['Anthony Xie']
2019-10-23 21:22:17.108000+00:00
['Bitcoin', 'Cryptocurrency', 'Fintech', 'Product Hunt', 'Startup']
5 Steps to Build Meaningful Communication With Your Customers
Everything you need to know about responding to reviews here. It’s also important to monitor your direct messages on Facebook and Twitter to ensure you’re responding to any questions or comments coming in. More on extending your customer service online here. These platforms are where you’ll get the most valuable feedback on what’s working and what’s not at your business — and what’s resonating with your fanbase. In your responses, you can ask questions to try to understand where their input is coming from and let them know that you want them to have the best possible experience at your business. If you’re responsive, polite, and open to feedback in your responses, you’ll be able to build brand loyalty because your customers will feel heard and appreciated, which is, after all, why they reached out to write a review or send a request in the first place. Plus, it’s not just the reviewers who will see your responses, but anyone visiting your pages will too — and with thoughtful responses, visitors will see that you’re a brand that cares about its customers and the customer experience. 4. Get sharing Encourage your customers to get sharing! According to a 2016 PwC poll, 45% of shoppers say social media, comments, and recommendations influence their shopping behavior. By combining loyalty programs with social media, such as extra points for sharing your posts, Instagram contests, etc., you can grow your audience and improve how your brand is perceived. With mobile apps, a recommendation on social media brings in, on average, 3–5 new downloads, of whom 1–2 users become regular customers, says LoyaltyPlant. 5. Engage Communicate! With a mobile app, businesses can create a useful and effective channel of communication. On some apps, businesses can ask customers to leave comments, and respond to those comments directly in the app. This is a good way to handle some negative feedback offline. You can also directly reward unhappy customers in the app — building a relationship and creating a potential loyal fan. A brand that evolves with its customers’ wishes and with trends is a brand that can grow and last, according to LoyaltyPlant. “More than ever, users demand brand interactions and if the company doesn’t respond on social, it’s easy to lose customers. That’s why engaging, conversing, and contacting customers via social media can have tremendous payoffs.” — SproutSocial This means responding to reviews and direct messages as we mentioned above, but it’s just as important to respond to all interactions on social media from your customers — posts, tags, comments, tweets, and mentions. If your customers are posting about your business on Instagram — like, comment, and ask to repost on your business page! If your customers post about your business on Facebook — like, comment, and share to your business page! If your customers are tweeting about your business on Twitter — like, retweet, quote tweet, and reply! Thank your customers, like their content, and reshare their content. This is the best way to let your customers know that you value them and their interaction and engagement with your business both online and offline.
https://medium.com/main-street-hub/5-steps-to-build-meaningful-communication-with-your-customers-f518e02595fb
['Main Street Hub']
2018-07-31 15:10:16.634000+00:00
['Social Media', 'Social Media Marketing', 'Apps', 'Small Business Marketing', 'Marketing']
How to Make Negative Online Comments/Reviews Work for You
“I had a terrible experience here and would urge anyone reading this not to waste their time!” Probably not something you’d want to see in your social media comment section or on your reviews page. The truth is, no matter how many checks you have in place, someone, somewhere is going to take issue with you/your business. The only thing worse than an upset customer is an upset customer who goes public — to the rest of your customers. When these critical social media situations happen, are you prepared to handle them? People have opinions on just about everything, and no one is immune from disfavor. Negative comments or reviews are a logical consequence of working with people, but this doesn’t mean that you can’t turn them around. WHY YOU SHOULDN’T IGNORE NEGATIVE COMMENTS Before we dive into how to turn negative reviews or comments around, it’s important to understand first WHY you can’t just ignore them. The customer journey today is much different than it used to be: The Stages of Customer Advocacy — THEN: 1. Awareness: The customer sees a newspaper ad, billboard, or your actual business. 2. Evaluation: The customer goes to the Yellow Pages. 3. Acquisition: The customer comes to your physical store location for purchase. 4. Engagement: The customer calls your business phone number to leave compliments, suggestions, concerns, or complaints. 5. Advocacy: The customer recommends (or recommends against) your business to others in conversation. The Stages of Customer Advocacy — NOW: 1. Awareness: The customer sees your business online (on Instagram, Facebook, Twitter, LinkedIn, Yelp, Google Ads, Google Maps, etc.) 2. Evaluation: The customer Googles your business, reads reviews, browses your website, social media, etc. 3. Acquisition: The customer makes a purchase online (or comes to your physical location if they have to). 4. Engagement: The customer engages with you on social media to leave compliments, suggestions, concerns, or complaints. 5. Advocacy: The customer recommends (or recommends against) your business by sharing their experiences on their social media, your social media, or crowdsourced review sites. Since the customer journey is now almost entirely online, it’s important to have your digital and social presence embody your brand values. Many overlook the importance of their online presence, thinking that the quality of their product will carry them through. Even if you have the best Thai restaurant in all of Los Angeles, one negative comment in the top scroll left unchecked could do more damage than you think. HOW TO RESPOND TO NEGATIVE COMMENTS: Obviously, it’s not hard to see why you want to have advocates on social media. But what about those who don’t have something kind to say? Here’s an encouraging fact: customers aren’t looking for perfection. They are looking for respectful, kind, and genuine customer service, and a bad review or comment is a great place to showcase it. 1. Upon Detection of the Comment, Stay Calm Initially, reading a negative comment about the product of all your blood, sweat, and tears may be hard. Your first instinct may be to justify, defend, and correct that comment. Instead, take a moment to read it thoroughly and compose your thoughts. Once you have collected yourself, begin to craft a response. 2. Move the Conversation to a Private Setting Now, the customer has left their comment/review in plain sight. Your public response to this should serve two purposes: It should showcase your best customer service skills and brand values.
It should move the rest of the conversation with the customer to a private setting, such as email or a direct message inbox. Tip: to subtly put a positive spin on the situation, adopt in your response some of the same verbiage the customer used (an example follows). 3. Rectify the Situation Privately If you have the option to contact the customer directly in private, take the initiative to do so. If you don’t have the option, be sure you’ve left an invitation for the customer to contact you in your public response. In either scenario, be kind, courteous, and willing to go the extra mile. TYING IT ALL TOGETHER: A HYPOTHETICAL SCENARIO Let’s say a customer leaves the following post on your Facebook Page: “I’ve bought two products from this site and have never had an issue before. This time, my shipment somehow managed to arrive 2 weeks late. When I tried to contact customer support, not only was I HUNG UP on, but I was put on hold for 20 minutes when I tried to call again! This is unacceptable. I will NOT be coming back.” Your public response should be: “Hi [INSERT FIRST NAME], I’m very sorry about your experience. At [INSERT COMPANY], customer care and satisfaction are of utmost importance to us, and a situation like the one you described is completely unacceptable and in no way reflective of our company values. Please allow us to resolve the issue for you. I’ve sent you a direct message, and look forward to learning more about your experience. Thank you, [INSERT FIRST NAME], for taking the time to write to us. — [YOUR FIRST NAME] from [INSERT COMPANY].” Why this response is effective:
https://medium.com/insights-from-the-incubator/how-to-make-negative-online-comments-reviews-work-for-you-3b5024f417d1
['The Incubator']
2016-09-13 21:46:47.290000+00:00
['Social Media', 'Reviews', 'Marketing', 'Customer Service', 'Customer Experience']
My father’s death by suicide inspired me to learn how to just ‘be’
My father’s death by suicide inspired me to learn how to just ‘be’ It’s through this present-tense state of mind that I find my rhythm, my sense of calm, and my appreciation for all that is. By Dana Mich Nine months ago, I stood at my father’s burial trying to gather my thoughts before speaking about his life to family and friends. It was particularly difficult because I had arrived at a day I had been trying to prevent, and had feared, for a very long time. My dad had just ended his life. But then, as I was standing there searching for the words, I remembered an article I had read only seven days prior. It was about ways to help yourself feel safe in an insane world. And so I began by sharing what I had learned: That “anxiety needs the future,” and “depression needs the past.” My dad suffered deeply from both of these things: his fear and lack of control over all that lay ahead, and his regret over the things he couldn’t go back and change. He suffered from an unhealthy relationship with time. He lost his footing in the here-and-now. And it made him struggle — as all too many of us do — with the age-old Shakespearean dilemma: “To be, or not to be.” Though it’s still difficult for me to admit it, this very question had begun to plague my mind just six months before my father died, during my own first battle with anxiety. And so as I stood there with my father about to be lowered into the ground with many knowing eyes upon me, I shared an answer that the article had given: to “be present.” It was an answer that spoke to my heart, and so I told them that — in that moment, and as hard a moment as it was — I was grateful to be with them. Ever since that day, I have been thinking a lot about being present. I’ve been thinking about being centered, being grounded. In short, I’ve been thinking about … being. And I began wondering why it was so difficult to come up with a concrete meaning for what was perhaps the most basic verb in the English language, without consulting the online search-engine gods. And I worried: Had I forgotten what it was to just be? Eventually, I turned to Google, and this is what it had to say: Be /bē/ (verb.): 1. exist. 2. occupy a position in space. 3. stay in the same condition. Sounds easy enough, right? Well … I’m not so sure, to be honest. After all, the word “be” is actually most commonly used in its fourth meaning: “possess the state, quality, or nature specified.” This is when “be” is followed by other words rather than a period. Other — sometimes aspirational — words used by and for us humans like “smart,” “healthy,” “hardworking,” “good-looking,” “athletic,” etc. The list goes on and on. After some thinking on the subject, I began to wonder if the pressure of focusing on the many things we know we are supposed to “be” but sometimes fall short of (or believe we fall short of) diminishes our ability to more simply … be. To be in the traditional, unembellished sense: to be comfortable in our own skin; to be one with ourselves and our surroundings; to be at peace. (i.e. definitions 1–3 above). So, I guess my question really is … have we as a society forgotten how to just be? Ironically, I think it’s when we constantly try to “be” too many things at once (or perhaps one astronomical thing) that we entirely forget how to exist with any amount of calmness and composure in the present moment. When stressed beyond our normal capacity, our minds scatter and it can feel like we aren’t even inhabiting our own body. 
We can end up spiraling out of control, and losing our sense of place and time and self. We land somewhere dark and frightening and terrible. And it’s then, when we get to the very bottom of that downward spiral, that we think it might be better simply “not to be.” Because at that point, the thought of being anything at all has become unbearable. I know it all too well. I’ve been there once for a horrific, acute six-week stint, and I hope never to be brought back again. So, in the spirit of National Suicide Prevention Month, I thought I’d share how I go about keeping anxiety and depression at bay. Yes, I’ve been doing a lot of thinking about just being. But more than that, I’ve been putting it into practice. I’ve learned how to quiet my mind and focus on the present moment. I meditate, breathe and practice yoga. And building from that, I write, read, run, and do all the things I’ve always enjoyed. But here’s what’s different: I’m newly practicing mindfulness and gratitude all the while. I’m ensuring that my brain is present where my body is. I’m making the effort to focus and mentally expand upon all the simple things that keep me going. It’s through this present-tense state of mind that I find my rhythm, my sense of calm, and my appreciation for all that is. Now, to be honest, it doesn’t always come easy (even for a mentally healthy, happy, neurotransmitter-balanced brain). In fact, it truly takes constant effort. But if, God forbid, there is to be a future struggle in store for me, I also know better how to take it back to the basics. I know how to close my eyes, to find myself … and to be. To truly just be. Perhaps that is our answer.
https://medium.com/thewashingtonpost/my-fathers-death-by-suicide-inspired-me-to-learn-how-to-just-be-4501b0934a9e
['Washington Post']
2016-09-12 21:31:50.940000+00:00
['Life Lessons', 'Meditation', 'Mental Health', 'Depression', 'Mindfulness']
3 to read: Making journalism crowd funding work | Finding subscribers | What Jill Abramson got wrong
Jan. 26, 2018: Cool stuff about journalism, once a week. Get notified via email? Subscribe: 3toread (at) gmail. Originally published on 3toread.co How The Correspondent became the largest journalism crowdfunding project in history — without 1 story on its site: News sites are increasingly turning to readers to be their financial saviors. But it’s not easy. So here’s a fascinating look at how The Correspondent patiently and carefully laid the groundwork for raising $2.6 million from more than 45,000 supporters. For people interested in starting their own crowd-funded news sites, this is a primer on how to do it well. Great story by Emily Goligoski and Aron Pilhofer for MembershipPuzzle.org. How many paying subscribers do you need to keep a money-losing magazine afloat? A regional mag finds out: The digital age has not been kind to regional magazines, which at one point were fat, happy money-makers. Now they’re hanging on by their fingertips. So when Arkansas Magazine announced it would shut down if it didn’t get enough paid subscribers, the staff jumped in, pushing hard on social media. Here’s what happened. By Laura Hazard Owens for Nieman Lab. What Jill Abramson gets wrong about digital journalism: Abramson, the first female executive editor of the NYT, has written a book, “Merchants of Truth: The Business of News and the Fight for Facts.” It’s a take on four big media players (the NYT, WaPo, Vice, and BuzzFeed), where media is headed, and her own bitter falling-out with the NYT. But apparently she had trouble wrangling down the truth about Vice and BuzzFeed. An angry war of tweets has erupted over what reporters at Vice and BuzzFeed claim are errors in the book. As Josephine Livingstone chronicles for The New Republic, it’s not pretty. But it makes an interesting read.
https://medium.com/3-to-read/3-to-read-making-journalism-crowd-funding-work-finding-subscribers-what-jill-abramson-got-d41b685369e9
['Matt Carroll']
2019-01-29 14:10:08.988000+00:00
['Media', 'Journalism Ethics', 'Media Criticism', 'Jill Abramson', 'Journalism']
6 Ways Strength Training Boosts Emotional Strength
6 Ways Strength Training Boosts Emotional Strength Tips from an ACSM Certified Personal Trainer As a Personal Trainer, when I speak about strength training and weight lifting, a lot of people picture big guys throwing around weights. There are so many more benefits to strength training than a six-pack, and they have nothing to do with how we look! 1. A New Meditation Practice to Manage Anxiety You may never have thought of lifting weights as a meditative practice. But it’s a focused time to count breaths and reps that can quiet even the busiest monkey mind. While meditating, it can be easy to let the mind wander to dinner plans or frustrations, but while training with a challenging and heavy weight, you have to concentrate fully. 2. Reduce Stress Regular strength training reduces cortisol levels in the body. The only caveat — prioritize rest and restoration. Too much intense training without time to repair will raise cortisol levels. Shoot for consistency and sustainability. 3. Increased Cognitive Ability Strength training builds connections between neurons, muscles, bones, and body awareness. Being in tune with the body allows us to become more self-aware and focused when it comes to identifying our needs and regulating our emotions. 4. Progress over Perfection Focusing on progress, rather than perfection, is one of weight lifting’s big life lessons. Strength training forces you to break down each session and progressively improve in small and measurable increments. That progress has hills and valleys, but when you zoom out, it’s inspiring to see that each session contributes to a greater goal. 5. Creates a Supportive Community There are thousands of online and in-person fitness communities. Everyone shows up to improve their fitness regardless of their career, background, or what kind of day they had. The best of these communities have encouraging environments for any level and provide a great way to socialize in a low-pressure, fun environment. 6. Builds Confidence I’ve heard from many of my clients that strength training has helped them feel more confident in other areas of their life. Lifting something they never thought possible creates a sense of possibility and pride that carries into other areas of their life. If you’ve never tried strength training before, starting out with a simple program just twice a week can have amazing benefits!
https://medium.com/joincurio/6-ways-strength-training-boots-emotional-strength-146e5c469cba
['Melissa Schwartz']
2020-12-08 22:13:57.337000+00:00
['Anxiety', 'Mental Health', 'Fitness', 'Personal Development', 'Stress']
My Childhood Tourette’s, Explained
Image Source: Photo by Andrea Piacquadio from Pexels My Childhood Tourette’s, Explained My verbal tic put me up against the world at a young age. The inexplicable feeling of being eaten up by an urge to verbally tic began when I was in the third grade. It was highly invasive — I always needed to clear my throat and make a deep humming noise several times per minute. No matter how deeply I breathed or tried to clear my mind, there it was — hmm, hmm. Hmmmmm. I think that maybe there’s something to be said about my tic during the times when I was alone. I was a solitary child who strongly preferred reading and staring at the wall to imagine things over playing with my peers. When I came home from school, I’d read, and at school it was my favorite thing to do during recess. The second I was finished with my classwork, which I’d rush through in order to read, I would often find myself imagining the books I would write once I was old enough. Maybe my tics were a way of grounding myself so that I wouldn’t get lost when I was staring into space or living inside the world of a story. Considering my verbal tics during times when I was required to assimilate with my peers is a whole other story. I wanted to turn it off so badly. It was painful to have to give in to making the deep humming sound when I’d already earned the title of “rabbit teeth” from my classmates and begun experiencing teasing because I was the first kid in school to have braces. I had large buck teeth. The other kids laughed before my teeth were fixed and while they were actively being fixed by the braces, so the last thing I needed was yet another reason to be laughed at. Don’t get me wrong — a lot of kids were really nice and forgiving about my little problem. To my face, most of them were decent about it, but I still didn’t fit into any groups and all of my attempts to make friends repeatedly failed. Some of the other girls were really embarrassed to be friends with me, and it was the kind of thing where we’d play at each other’s houses outside of school but they’d still pretend not to know me during class, and I was always left out of the birthday parties. Friendless and with a strange problem, I was pulled out of school by my parents for a few weeks.
https://medium.com/curious/my-childhood-tourettes-explained-bca29078618c
['H. M. Johnson']
2020-11-04 10:26:23.786000+00:00
['Tourettes', 'Childhood', 'Psychology', 'Tics', 'Children']
How does my first mechanical keyboard work?
Where to buy it? Mechanical keyboards are sold almost everywhere: Amazon, Best Buy, Newegg, even Walmart. But the Ducky One is only sold through Mechanical Keyboard Catalog and Guide (mechanicalkeyboards.com). For some unknown reason, most of the keyboards there are out of stock. Maybe people are so bored or so lonely at home that they all want to make noises, so they won’t feel quite as lonely? One buying tip: check the In-Stock page, find the version you like best, and then place an order. This way, you will get your keyboard much faster.
What to buy? From a size perspective, mechanical keyboards come in the following types (from largest to smallest):
Full size: includes a separate number pad. Full-size keyboard, with a separate number pad. © Ducky
Regular size (TKL): no separate number pad, but it keeps a separate special-function key zone (HOME/END/PgUp/PgDown). The only missing part is the number pad, which is completely optional. © Ducky
SF (Sixty-Five) size: keeps stand-alone arrow keys and Delete/PgUp/PgDown, but removes all the F1-F12 keys. Ducky One SF. I decided to buy this one. It is compact yet keeps the essential special-function keys. © Ducky
Sixty size: compared with the SF, it also removes the stand-alone arrow/Delete/PgUp/PgDown keys, leaving only about 60% of the size of a TKL. This is the most compact model. © Ducky
Whichever model you choose, it still has all the keys, so you can make your decision based purely on taste and preference. For my daily workload, I do a lot of coding and word editing, so I prefer separate Del and arrow keys. And I like the smaller models, which offer better mobility and save space. That’s why I decided to go with the Ducky One SF.
The Keys
As a first-time mechanical keyboard buyer, you might be surprised to notice that you have to select from a list of different switches, as in the image below. What are these “switches”? What do those red, blue, and brown colors mean? How should I make the decision? Switch Types — Mechanical Keyboard (mechanical-keyboard.org) Mechanical Keyboard Switches
There are lots of YouTube videos talking about the differences; I recommend the one below. Which Cherry MX Key to use? | BeatTheBush — YouTube
For me, I like longer key travel and a louder sound, so I decided to go with the blue switches.
LED Backlights
As a grown-up, I am not a big fan of shiny lights, but it is a completely different story for my kids. They love the lights and cannot stop playing with them; even as a toy, the keyboard is a popular one. Besides the entertainment value, I find the LED backlights sometimes useful: the keys I just pressed stay lit for about 3 seconds, which I can use to spot the wrong key press that made my program crash. LOL.
Caps Lock Key
It is not only big but also very convenient and easy to reach, yet it is almost the most useless key. There are lots of articles, tips, and small programs for repurposing this little key. Most of them involve changing registry settings or installing new software; these are all software-dependent solutions, so once you move to another machine, you need to redo the whole setup. The Ducky One SF provides a universal solution by remapping the key at the keyboard level. As long as you are using the same keyboard, you will have the same experience across different machines. How to switch the Caps Lock functions is described in the official user manual: Ducky_One2_SF_usermanual_V4_20190624_ol (duckychannel.net)
Usage Tips: For me, I switched the Caps Lock key to be an “Fn” key.
This way, it is much easier for me to move the cursor when typing without my hands moving away from the home position. With Fn + A/W/S/D, I can easily move the mouse cursor, and with Fn + R/F I can scroll up and down. All of these functions are convenient for developers who wish to do everything from the keyboard. Once you get used to it, it can be very efficient and productive. All these fancy usage tips, and more, are printed directly on the front side of the keycaps, which makes them easy to refer to.
Summary: This article introduced the basics of buying your first mechanical keyboard, such as keyboard sizes and switches; how to maximize your productivity by personalizing the keyboard settings; and usage tips to increase your efficiency with your new keyboard. I hope you enjoy the read and feel encouraged to buy one and try it for yourself!
https://medium.com/technology-hits/here-is-my-first-mechanical-keyboard-6833ae0d34f0
['Binlong Li']
2020-12-24 05:54:45.984000+00:00
['Review', 'Tools', 'Productivity', 'Mechanical Keyboards', 'Developer']
Never Give Up Writing
The First Rejection
When I was twenty-one years old, I confidently entered a poetry competition with a poem I had written a few years earlier. My teacher had said how great and original it was. My family and friends had all loved it. It was a sure winner. Of course, what I’d failed to realise is that everyone who entered would be submitting great poems that had received great feedback. The sure winner didn’t even get shortlisted. My first taste of rejection. It hit hard.
Lying Dormant
I felt so defeated after that failure. I started to convince myself that those who had said they liked my poems were lying. I had been writing poems since childhood, but suddenly I felt drained of the will to put pen to paper. I moved through my twenties without writing down the rhymes that flooded my mind.
Enter Inspiration
Life, as it so often does, started throwing inspiration at me. And not just that: it opened up the world of poetry to me once more as I began introducing poetry to my son. I had continued to enjoy the poetry of others, but not as much as I had before I’d felt cast out. Rookie mistake. Poetry is not a club from which you can be excluded. It is your outlet and your solace. Reading the works of others will nurture your own creativity; plus, it’s food for the soul.
The Power of Social Media
Ah, social media. Where would we be without it? Suddenly I saw that there were people putting their poems out there. Writers whose poems became something to look forward to, like the brilliant Kevin Heads, who inspired not only me but also my son to put pen to paper and share it with others.
Enter Medium
I soon noticed that Kevin and other writers I admired were publishing their work on Medium. I decided to look into it and give it a go myself. The rollercoaster began.
Ups and Downs
First I felt elated — people were reading my work! More than that, they were clapping, highlighting, and responding to it! Then I received the email that one of my poems had been curated! I felt on top of the world, but it wouldn’t last. The poems I felt were my best works were being rejected for curation. I couldn’t understand it and started obsessing over stats, reading curation guidelines, and articles promising to unlock the secret of Medium success. Undeterred, I followed one article’s advice and submitted my work to magazines via Submittable. Oh no, more rejection. Thankfully, I have developed some resilience since my first taste of it all those years ago. I remembered that I write poetry because the words come to me, and I share it because some people might enjoy reading it. It’s easy to get lost in the quest for success, a journey I’d not intended to take, not this time. Then success came knocking at my door. We each measure success in our own ways. For some, it’s acceptance from the writing community; for others, it’s measured in Medium stats; the list of ways it can be measured and interpreted goes on. For me, it came in the form of a competition, the very thing that had put a stopper on my creativity all those years ago.
Facing the Fear
I bit the bullet and entered a competition, this time with no self-assuredness or expectations. I came across it accidentally whilst looking for poetry with my son — Kids Poetry Club, a poetry podcast and website, was running a poetry competition, but adults could enter too. Tasked with writing a poem for children on the theme of Spring, armed with my pencil and inspiration outside my window, I set to it. Before I knew it, I’d entered!
First came the shortlisting, then the news that I was a finalist, and then that wonderful moment when I learned I had actually won! I realise now that my success was not winning the competition but entering it: I had faced my fear, accepted that not winning was a real possibility, and refused to let that possibility stop me.
Never Give Up
I stopped worrying about winning and being judged good by others and remembered the joy of writing. Never give up writing if writing is what you do. If you enjoy it or if it helps you express yourself, forget about those stats and let yourself get lost in your words and the words of others.
https://medium.com/scribe/never-give-up-writing-17be8ac0bc36
['Melissa Speed']
2020-05-24 14:57:53.360000+00:00
['Rejection', 'Writing', 'Success', 'Self', 'Poetry']
Duck Tales: The Wonders of Integration
Duck Tales: The Wonders of Integration
How I implemented an integration with third-party services using a request interceptor and the Spring Retry template
Carl Barks painting of Uncle Scrooge McDuck. Disney’s Duck Tales was a classic cartoon series of the early 1990s, the time when I grew up. If you watched it, you probably just said “Duck Tales… woohoo!” in your head. The cartoon series told the story of Scrooge McDuck (an elderly Scottish anthropomorphic Pekin Duck known as a business magnate and the richest duck in the world) and his three grandnephews. Each episode told us about Scrooge and his companions’ various adventures, most of which frustrated the Beagle Boys’ attempts to steal Scrooge’s fortune. Fast forward to the new season I’m making of Duck Tales. In (my) first episode we will integrate our system with Scrooge’s banking services (“Scrooge McDuck Money Bin inc”). Scrooge McDuck is willing to initiate electronic (21st century, you know) money transactions on behalf of the company I’m working for. In short, we developed a financial technology platform that provides instant credit to small businesses.
“A day without looking at me Money Bin is like a day without sunshine!” — Scrooge McDuck
Following the quote, once a day a scheduler runs on our database and prepares the list of electronic payments that should be transferred to the client business. The transfer is actually performed by the bank, so we send a request to Scrooge to originate each provided payment (“Payment Origination”). The status of each individual payment can then be tracked by polling the “Check Payment Status” API. Now, Scrooge McDuck’s vault, the Money Bin, has a thick door, designed to prevent villains from breaking in and diving into the McDuck fortune. In our integration, that door is the authorization token required by each API call. The token is valid for 24 hours. So far so good, right? Here comes the catch: you can receive a new access token only after the old one has expired.
“I’ve got to get you to my vault. It’s the only safe place” — Scrooge McDuck
This shouldn’t be an issue if you perform a few calls a day. I mean, you can just get an access token before each communication with Scrooge’s bank. But what if you perform hundreds of these calls? You might bump into the request threshold, and you will certainly clutter your logs. The challenge was therefore to optimize the code by eliminating unnecessary API calls. The idea was to generate an access token on demand (on an authorization failure), using a try-catch with a retry mechanism. I should mention that I’m using Spring Cloud OpenFeign for this integration. Let’s move forward; I’m going to show you my way of overcoming the challenge.
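The implementation itself lived in embedded snippets in the original post. As a rough orientation only, here is a minimal, hypothetical Java sketch of the approach described above: a Feign request interceptor that attaches the cached token to every call, and a Spring RetryTemplate that refreshes the token once a call is rejected as unauthorized. All class and method names here are illustrative, not the article’s actual code.

import feign.FeignException;
import feign.RequestInterceptor;
import feign.RequestTemplate;
import org.springframework.retry.support.RetryTemplate;

import java.util.function.Supplier;

/** Caches the 24-hour token; it is refreshed only when a call is rejected. */
class TokenHolder {
    private volatile String token = "";

    String current() { return token; }

    void refresh() {
        // Call the bank's (hypothetical) auth endpoint here and cache the result.
        token = fetchFreshTokenFromBank();
    }

    private String fetchFreshTokenFromBank() {
        return "fresh-token"; // placeholder for the real auth call
    }
}

/** Feign interceptor attaching the cached token to every outgoing request. */
class AuthInterceptor implements RequestInterceptor {
    private final TokenHolder tokens;

    AuthInterceptor(TokenHolder tokens) { this.tokens = tokens; }

    @Override
    public void apply(RequestTemplate template) {
        template.header("Authorization", "Bearer " + tokens.current());
    }
}

/** Runs a Feign call; on 401, refreshes the token and lets Spring Retry re-run it. */
class PaymentCaller {
    private final TokenHolder tokens;
    private final RetryTemplate retryTemplate = new RetryTemplate(); // 3 attempts by default

    PaymentCaller(TokenHolder tokens) { this.tokens = tokens; }

    <T> T callWithTokenRefresh(Supplier<T> feignCall) {
        return retryTemplate.execute(context -> {
            try {
                return feignCall.get();
            } catch (FeignException.Unauthorized e) {
                tokens.refresh(); // token expired: fetch a fresh one...
                throw e;          // ...and let RetryTemplate re-run the call
            }
        });
    }
}

A service could then wrap each client call, for example caller.callWithTokenRefresh(() -> bankClient.originatePayment(payment)) with a hypothetical bankClient, so that a new token is fetched only when the previous one has actually expired.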
https://medium.com/swlh/duck-tales-the-wonders-of-integration-e778da48e0eb
['Gene Zeiniss']
2020-09-29 05:28:37.623000+00:00
['Authentication', 'Openfeign', 'Integration', 'Java', 'Spring Retry']
Introduce Yourself to Everyone!
Pin Your “About Me” Stories in Your Profile! With the recent changes to Medium users’ profiles, many writers noticed that the new profile format is now a long-form view. This gives more exposure to the top parts of the profile. A few writers decided that they could write About Me stories and pin them to the top of their profiles for more visibility. However, the changes come with downsides. Since the profile is long-form, it may take a while for your followers to scroll down to the articles you want them to see. Realistically, you can’t expect your followers to continuously scroll down your profile to find your articles. An About Me story provides the ability to give your followers shortcuts to your favorite stories. I wrote up a quick About Me story, published it, and pinned it to the top of my profile. This allows my followers to quickly access some of my content, since I provide links to my top-performing or personal stories in my About Me story. Top writers on Medium like Zulie Rane and Danny Forest highly recommend that every writer whip up an About Me story. Additionally, About Me stories are a natural fit for the changes that are happening with Medium, especially the emphasis on a more “relational Medium.” They help humanize writers, putting more dimensions behind your favorite people on Medium! Lastly, About Me stories can be advantageous to those who are seeking to build their following on the platform and expand their personal brand. You can pin your story by going to your profile, scrolling to the story you want to pin, and clicking the “…” at the bottom right of the story. Then click “Pin Story”.
https://medium.com/about-me-stories/submit-your-story-53edfdf6527
['Quy Ma']
2020-12-20 08:54:55.686000+00:00
['Writing', 'Hello', 'About Me', 'First Post', 'Introduction']
Amazon Go: Redefining Shopping
Amazon officially opened its first Amazon Go store in Seattle to the public on January 22nd. The store uses technology such as ceiling cameras and electronic sensors to track each individual customer’s purchases as they shop. To enter, customers scan their smartphones, loaded with the Amazon Go app, on turnstiles similar to those in a subway. Once inside, they can choose from a wide range of items and put them in their shopping bags. As customers pick up items, the items are added to their Amazon Go account, and removed when placed back, with the help of advanced shelf sensors and ceiling cameras. A receipt is issued once the customer exits, and their account is automatically billed. The store had been open to Amazon employees since 2016, though Amazon was hesitant to open it to the public as there were, and are, a few issues. For instance, misplacing an item onto the wrong shelf can cause the billing system to misread it when it is picked up by others. Amazon is also hoping to incorporate this technology into Whole Foods, the chain it acquired for $13.7 bn last year.
https://medium.com/newscuts/amazon-go-a-technological-miracle-fca6958bd148
['Bharat Sachdev']
2018-01-28 06:47:24.430000+00:00
['Amazon', 'Technology', 'Tech', 'Amazon Go', 'Shopping']
Don’t Touch Me, I’m Sleeping
Don’t Touch Me, I’m Sleeping Tired out, touched out. Sleep. We all talk about it, think about it, and many of us don’t get enough of it. Parents, in particular, lament the lack of sleep associated with having newborns and infants. All day they work, either at home or at a job outside of the home, and then all night they are on call to answer the mews and cries of their children. It’s tiring. I know; I am a mom to a toddler. I work from home as an editor during naps and down times, but when she is awake I tend to my little girl. She requires a lot of attention, as all toddlers do. Aside from the usual cooking and cleaning, playing and reading, I do a lot of touching. A LOT of touching. We hold hands, hug, snuggle; she sits in my lap while I read. She holds my leg while I wash dishes. I wear her in baby carriers on walks and grocery store trips. I walk around with her in my arms, singing her to sleep. That is a lot of physical contact, day after day, all day long. Don’t get me wrong, I soak it in. I relish the short amount of time that she still wants to hug me, to hold my hand, and to sit in my lap. Before long she will no longer reach up for me. But for now, we touch all day long. When my husband gets home, even more physical contact begins. We, too, hug and hold hands. He likes for me to scratch his back when we watch TV at night. Not to mention any sex that might occur. As a wife and a mom, I often become “touched out” by the end of the day. When I drop into bed at night I once again become autonomous. I cocoon myself in my jersey sheets and my down alternative comforter. I roll to the edge of the bed and I exile myself from touch. My husband respects my sleeping boundaries. He understands that at the end of the day I would rather not spoon or rest my head on his chest. I used to wonder if I was alone in this. Is something wrong with me that I don’t want to be touched? Does everyone else just want to get in bed and snuggle their spouse every night? Here I am, putting imaginary boundaries between my husband and me when I collapse onto our mattress. Yet if I have learned anything in my time talking with other moms, it is that I am not alone in this. Sometimes all of the touching becomes too much, and we just need a time and place to be alone. For our bodies to once again be our own. We need this time so that when the alarm starts ringing we can get out of bed and welcome a hug from over the rails of the crib with smiles on our faces.
https://medium.com/hearthandkin/dont-touch-me-i-m-sleeping-12a4d9bb2fa0
['Kayla Grant Coons']
2017-03-10 13:43:33.764000+00:00
['Kids', 'Storytelling', 'Mom', 'Parenting Stories', 'Motherhood']
Crown Meeting Culminates with Use Case Focused Community Meetup at the BlockchainHotel in Essen, Germany
Crown Meeting Culminates with Use Case Focused Community Meetup at the BlockchainHotel in Essen, Germany
Crown Team welcomes the international community at the public meetup
Proof of Stake, Crown Platform architecture, the development roadmap, and use case implementation were the central topics of the Development Week
A use-case-centered community meetup full of innovative ideas and approaches as blockchain technology evolves
Next meetup planned for the beginning of November in Malta
Crown Community @ CrownEssen18
The BlockchainHotel in Essen, Germany, has served as the venue for 10 days packed with development, strategy meetings, and a new edition of the legendary Crown Community Meetup, which brought together individuals and businesses from New Zealand, the United Kingdom, the Czech Republic, Ukraine, Spain, the Netherlands, Armenia, Belgium, and Germany, who openly discussed the evolving blockchain landscape while offering innovative approaches on how to keep up with the swift economic and technical changes that blockchain projects are going through. This article offers a chronicle of the most important achievements during this time.
Developers Week
The development team, composed of Artem, Ashot, Volodymyr, Chris, and Jacob, was the first to arrive in Essen, on Wednesday, July 25th, thereby kicking off the Crown team gathering. Artem and Volodymyr arriving at DUS International on July 25th
Their development sessions started on Thursday and lasted for a week. Working in a distributed team demands periodic face-to-face interaction; complex discussions are necessary and not always possible or efficient through online communication tools. Below, @artem.bv explains what they achieved during this time. Crown developers sharing a desktop
Before the community meeting in Essen on the 4th of August, the Crown Development Team had working development sessions for a week, organized by @crownfan at the BlockchainHotel. The developers Ashot (aka @ashot), Volodymyr (aka @vshamray), Jacob (aka @hypmermist), Chris (aka @dzlbobo) and myself arrived between the 25th and 26th of July, and we started working on the ongoing Crown tasks together. We need to meet face to face sometimes to discuss development plans and technical details. It brings more energy to the team and makes its collaboration more effective. The main topics of our dev sessions were the following:
1) Retrospective & analysis of the last release
2) Short-term planning. The next milestone is available here: https://gitlab.crown.tech/crown/crown-core/milestones/3. It is not a release milestone but rather a dev milestone. After this one is over, we will plan another one depending on community demands and available resources.
3) Long-term planning. Based on the latest roadmap points (https://crown.link/3n) we created a Project Map plan for the next year. We are going to build a backlog from these tasks and let people choose what is more important for us to build in the next stage.
4) Crown Platform Architecture. The biggest part of the meeting was devoted to the technical discussion of the Crown Platform architecture.
Different use cases and appropriate tools were discussed, such as API-based development, Turing-complete smart contracts, and implementation details of agent voting & ID registration.
5) Trezor, Coinomi, Co-Pay and other wallets to support Crown
6) NextCloud setup, Gitlab update, Gitlab Ultimate edition submission, test case management tools, sandbox testnets, hotfix of the crash, 0.12.5.1 patch
Technical discussions on the Platform architecture, Proof of Stake, optimization of tools, and on-the-go hotfixes were the central contents of the development sessions.
As we can see from Artem’s summary, the development week included a study of the Proof of Stake design, formalizing a plan for the upcoming development steps on the Crown Platform, structuring of tasks and times, as well as joint analysis and debate of the rich documentation that the developers have been producing during the last months. It was a great experience to witness our dev team living and working together during this time. After the daily full-time sessions, we went out for dinner and spiced the beers with discussions on the state of cryptographic solutions and new ideas, and of course spent time getting to know each other better. That is one of our secrets to Crown crypto: a well-bonded and coordinated team!
Community Manager @ahigherbridge stresses the importance of community at Crown in one of his tweets
Team Sessions
Shortly after, further team members started arriving at the venue. Co-founders, the International Representative, the Strategy Advisor, the Community Manager, the Support Lead, and the Head of Marketing, among others, worked together on shared tasks and were able to explain in detail what they are currently working on. Team meeting on August 2nd
We have encountered enormous generosity from some of the oldest Crown Community members
While the general meeting on Thursday, August 2nd was relaxed and agile, the team meeting on Friday, August 3rd stretched to more than 6 hours without A/C. The toughest matters were discussed and agreed on, including a revision of the economic situation of the project. Our main goal at the moment is to fund development in order to release version 1.0 of the Crown Platform. To launch the platform, we need to secure our developers’ funding, as they work full time on Crown and have no additional income. With the monthly superblock value going below $10,000, we were forced to find alternative funding solutions, and lucky to encounter enormous generosity in the last months from some of the oldest Crown community members, who have been donating incessantly so as to guarantee the achievement of our goals.
The hardest decisions are usually the most important ones for the project
We are set for another development period
Crown is a distributed and horizontal organization. This means that no single person can take decisions that affect the whole project, and that we only have leads responsible for the areas they have been assigned to. In order to take ecosystemic decisions, the team has to sit together and debate. We are happy to announce that we are set for another development period and that, despite the market conditions we are suffering, the Crown project is currently not at risk of discontinuing development. This is the consequence of an enormous effort made by all affected parties: developers reducing their salaries, other team members agreeing to receive less compensation, and secondary departments dropping their proposals to prioritize core development.
Beyond the development achievements, we also reached general Crown Team milestones:
1) Development plans formalized and presented to the team.
2) Financing secured for Q3 2018.
3) Aitorjs joins the team to work as a web developer. He is a great addition to the community, with a wealth of technical experience, and also a blockchain enthusiast. Welcome on board, Aitor!
4) farid tejani, chief of marketing, presents his new strategy “taking Crown to market”. View the whole presentation here.
5) SuccessFiles segments were screened at the Community Meetup.
6) Crown Platform architecture explained in detail by @artem.bv. View the whole presentation here.
Stay with me to find out how the community meetup went :)
Community Meetup
Our face-to-face Crown meetings are based on two events: closed team meetings and an open Community Meetup. While the first is aimed at development sprints and decision-making on important and complex matters, the Crown Community Meetup is a way of meeting old community members, inviting new ones to join the project, networking, and enjoying some time together. Despite the intense heat and vacation time, we were able to attract blockchain enthusiasts and present the Crown project to them. This is a recap of the agenda for the day: The Community Meetup Agenda
The Community Meetup was not in any way a unidirectional communication act. We invited the Crown Community and partners to present their projects. And they accepted! After an introduction to the history of Crown by co-founder Jan Brody, a technical presentation on the Crown Platform architecture by Artem, and the marketing strategy design for the Crown Platform by farid tejani, our new Marketing Lead, we were very proud in this edition to count on the strong presence of regional and international start-ups that are developing innovative ideas on how to use blockchain technology on Crown, such as Katalytics, Blockchain Solutions, Digital Cactus, and BlockchainHotel.
Jan Brody: “In this difficult period in crypto, we are not afraid, because since 2014 we have been building a real community which stands on firm and fair-play values. These values will be at center stage in the eyes of new entrants when the blockchain environment recovers in the next months.”
Crown Community waking up to Jan Brody’s early-morning presentation “From Crowncoin to Crown Platform”
November is the month in which the commercial and educative segments of Crown Platform will be screened on national TV US-wide.
Artem: “We will prioritize our development depending on the interests and petitions of the Crown Community.”
Artem’s presentation was demanding but well structured and understandable.
Farid Tejani, chief of marketing, presented his new strategy “taking Crown to market”.
Farid Tejani: “Who is the customer? Only when we understand our customers can we know how to serve them.”
Daniel Markwart not only gave an impressive talk on Blockchain Solutions but was also the lucky winner of the Systemnode Giveaway.
An innocent hand drawing the Systemnode Giveaway, won by lucky Daniel Markwart from Blockchain Solutions. Congratulations!
Special thanks to our dedicated Crown supporters Estandar and CryptoWidow for taking the trip from the UK and sharing their positive vibes and optimistic lifestyle with us.
Katalytics: “The adoption of technology does not solely depend on its quality as technology, but on usability, real-life applications, and its actual impact on people’s lives.” Tobias Gretenkort from Katalytics.de
Panel Discussion
I was impressed by the endurance that the community showed during this very long and HOT day. They held out through 45° heat listening to all the talks and still had enough energy to participate in a thriving panel discussion on the BlockchainHotel rooftop, moderated by farid tejani, where they were able to cool down with some drinks, such as the already famous CrownBeer. Farid has been so kind as to share a summary of the intense discussion: “Crown Use Cases for the Web 3.0”.
Participants in the discussion:
Pedro Herranz, G-Me / Contastik
Tobias Gretenkort, katalytics.de
Daniel Markwart and Moritz Stumpf, Blockchain Solutions
Gökhan Köse, BlockchainHotel
Thorsten Hunsicker, Crypto-Rockstars
The panel talk focused on exploring the anticipated use cases that could be launched within the next two years, in order to cut through the hype of blockchain projects. With so many ICO projects launching with increasingly esoteric, ambitious, and narrow use cases, we wanted to discuss more direct and practical ways in which Crown is able to bring value to the user today.
Anonymised data sets permit wider and more distributed research into new treatments
The discussion started with a deep dive into the data-sharing economy, specifically around the possibilities of using Crown to enable data sharing amongst healthcare professionals and the supporting industry. Opinions were enthusiastically divided on this: on the one hand, blockchain technology offers a great solution for shared data transfer; however, there were concerns (interestingly, advocated by the Crown team rather than the project) around the immutability and permanence of the data being stored, privacy, and the way in which access to this data would be managed. Whilst access to health data can be incredibly powerful for the health professional in treatment, the blockchain’s permanent storage is not preferable. Instead, a better use case might be anonymised data sets permitting wider and more distributed research in the search for new treatments.
Panel Discussion “Crown Use Cases for the Web 3.0”, moderated by farid tejani
The challenge consists in linking the physical world to the digital world, marking the physical asset directly and uniquely to the digital proof of legitimacy
Crown to the roof
We then moved on to identity, supply chain, and anti-counterfeit use cases. Here the panel felt that there are several very strong use cases; an example was given around Nike, who found their market flooded with counterfeits. The problem is often created within the supply chain: it is tempting for outsourced manufacturers to overproduce stock during quiet production periods, which later finds its way into the market. The panel rightly identified that the challenge is linking the physical world to the digital world, marking the physical asset directly and uniquely to the digital proof of legitimacy. The Crown team is exploring ideas ranging from QR codes to unique digital signatures woven into the atomic structure of fabrics and materials. The panel was very excited by some of the practical applications that are coming forward in the area of democratising finance, such as the Contastik app automating tax returns for Spanish consumers.
After the panel discussion, further merch and swag was given away, and the community took home power banks, T-shirts, Crown bags, and tons of stickers to spread the visibility of Crown.
Conclusion
To wrap up this report, I asked international representative and Crown event organizer Olya, aka @riseandshine13, for some experienced words:
Meetings like Essen’s show a lot about Crown Platform. At a time when many projects are falling apart after a 7-month bear market, we were able to show our strong presence and long-term goals, as well as our diverse and broad infrastructure design, which permits almost infinite use cases. Despite intolerable heat and vacation time, a lot of people came to network, learn about Crown, and share their ideas. Jan Brody and Filip Major created something great when they founded Crown…
…and now Crown is a distributed, community-driven project! Thank you for supporting the Crown project. We hope to see you all again at our next event in Malta!
Join us on Telegram: https://t.me/crownplatform
Join us on Twitter: https://twitter.com/crownplatorm
Join us on Mattermost: https://mm.crownlab.eu
https://medium.com/crownplatform/crown-meeting-culminates-with-use-case-focused-community-meetup-at-the-blockchainhotel-in-essen-c496f08df715
['J. Herranz']
2018-08-15 11:26:55.423000+00:00
['Blockchain', 'Bitcoin', 'Cryptocurrency', 'Development', 'Meetup']
Only You Can Make This Change
How to be accountable
Image by Pete Linforth from Pixabay
You must take personal responsibility. You cannot change the circumstances, the seasons, or the wind, but you can change yourself. That is something you have charge of. Jim Rohn
It’s easy to let someone else be in charge of your circumstances. You can choose to blame others for your misfortune or quickly reply with, “It’s not my fault.” When you take this route, you are surrendering control. Initially, it may be just one decision. Over time, however, it becomes a way of life, and soon you will be a prisoner in this web of surrender that you have spun for yourself. It’s time you changed that and seized responsibility for your life. This is the only way you will develop resilience as you clash with the challenges you encounter, working towards a life of meaning and purpose.
You Have Other Choices
Every situation you encounter provides you with options and choices. It goes without saying that those choices all have consequences. As you make these decisions, you will want to weigh the risks and rewards of each alternative. One way to do this is to list all the ways you can take action, then create a plan to change your situation. Or you could brainstorm pros and cons for each option. Talking to others is another way to understand your situation and options. Another person’s perspective may generate ideas you had not previously considered.
Accept It
You can say, “Oh, well, there is nothing I can do about it.” Unfortunately, that won’t get you very far. Accepting your undesirable circumstances will turn you into a constant complainer. That is a good way to get yourself a starting position on Team Blame. Team Blame’s game is to pin every undesirable aspect of their lives on someone or something else, saying that it is outside of their control. “I did not get a raise because my manager doesn’t like me.” “I guess this relationship was not meant to last.” “It’s not my fault I was late; there was more traffic today than usual.” Refuse to accept that you have no control over your life. Stop looking for problems and start looking for solutions. Accepting that the circumstances are outside of your influence is an easy way to cast aside responsibility. It may also temporarily make you feel better. In the long run, though, you are blurring the lines of personal responsibility and tipping the scales of control and independence against yourself. Don’t accept being a victim. There are many things you can change, be it with incremental steps or in one big shift.
Change It
Don’t just accept things at face value. Decide to make a difference. You actually have more control over your life than you think. It only takes one conversation to start a movement, provided you engage the right people. Confidently voice your concerns, describe the roadblocks, and seek solutions to those things you previously surrendered to. If you did not get a raise, what could you have done to improve your job performance? Before your relationship falls apart, take a more active role in nurturing and developing it. And instead of blaming the traffic for your tardiness, leave earlier, find alternative routes, or check the traffic reports to be more in charge of your morning commute. Don’t accept mitigating circumstances; plan and act in ways which establish your personal responsibility. Doing so will provide you more options and opportunities.
Setting Boundaries
If you find the situation disagreeable, you can remove yourself from it altogether.
While this withdrawal may seem to be skirting the issue, it is sometimes better to cut your losses and move on. This type of realization takes some courage, as you need to believe your talents, time, and tenacity would be better served somewhere else. Removing yourself from a situation that drains your physical and emotional energy can light a fire in you to achieve more. Be true to yourself above all else. Have a sense of what your efforts can impact and of those situations which run counter to your ideals and beliefs. Give your energy and presence to taking action that will make you a better version of yourself. Eliminate the circumstances from your life which fail to serve your development as a person of morals and integrity. You deserve that. When you choose to remove yourself from a constricting situation, it is important to be resolute and not give in to self-doubt. Could have, would have, and should have are ideas you need to separate yourself from. Those doubts will lead you right back to where you started. Wavering after you take such a big leap of faith can lead to dire consequences. You may find your position in a relationship weakened, or have lingering feelings of not being up to the challenge. Once you have decided to remove yourself, go all in to get out. Run, don’t walk, and whatever you do, don’t look back. With some distance, your confidence will grow as you begin to realize you are now on a better path.
If you want to take responsibility for your life, you must choose to remove yourself from the situation, change it, or accept it totally, and you must choose now. Then accept the consequences. Eckhart Tolle
But…
No matter which road you take, you will fail. That’s okay. We all make mistakes, and then we work to get better next time. Don’t be afraid of that. Failure sparks growth, and growth leads to mastery. There is no way for you to become good at something without failing first. Believe in yourself, know you are destined to succeed, and choose to be accountable for your progress through not only the failures but the successes too. No one else can do it.
The willingness to accept responsibility for one’s own life is the source from which self-respect springs. Joan Didion
Growth
As you take more and more responsibility for your decisions, actions, and development, a wonderful thing will happen: you will find a new source of energy. You will be open to challenges and see possibilities which you may never have known existed. These doors which are opening for you will be of your own making. Believe that you can be more, that you can do more, and that through this belief all things are possible. Have the courage to accept with uncompromising faith and belief in yourself.
Accept responsibility for your life. Know that it is you who will get you where you want to go, no one else. Les Brown
Summary
Humans tend to be lazy. We like to take the easy way out. We look for shortcuts, and we would like to pass responsibility for problems around like we hand out business cards at a convention. You can choose to do that too. Or you can stand up and be accountable to yourself, your friends and family, and your destiny. Refuse to accept that you have no control over your life. Stop looking for problems and start looking for solutions. Regardless of where you are in life, choose to take some action now. Big or small, do something to take back control of your life. Then build on that. Only you can get yourself from the valley to the mountain top.
https://medium.com/illumination/only-you-can-make-this-change-ebee43f789e6
['John Cunningham']
2020-10-15 02:19:53.246000+00:00
['Self Improvement', 'Change', 'Motivation', 'Personal Growth', 'Positive Thinking']
Using Twitter to forecast cryptocurrency returns #2 - Mining Cryptocurrency with CoinGecko API
Using Twitter to forecast cryptocurrency returns #2 — Mining Cryptocurrency Data with CoinGecko API
Gearing up for returns forecasting with VAR
Screen grab from CoinGecko
After the tweet mining explained in my first article, gathering cryptocurrency returns wasn’t difficult at all. I decided to use pycoingecko, a Python wrapper around the CoinGecko API. The two key things to note are:
You need the id of the cryptocurrency in order to download its market data.
The granularity of the data is automatically determined by the number of days you are downloading.
Getting the id of a cryptocurrency
Go to the CoinGecko API reference and execute “/coins/list”. Get the ids of the coins required. In my case, I have the list below:
Downloading historical market data
I used get_coin_market_chart_by_id, the wrapper around “/coins/{id}/market_chart”, to get my historical market data. The granularity of the data returned depends on the number of days we are requesting:
minutely data for durations within 1 day
hourly data for durations between 1 day and 90 days
daily data for durations above 90 days
Since I do not have the luxury of collecting hourly data beyond 90 days, I can only stick to daily market data. The code returns 300 days’ worth of historical market data against USD; I only store the prices returned for each timestamp, convert the timestamps into a user-friendly format, and align the data into a more generic shape (a sketch of these steps appears at the end of this article). Now, that was not too bad, right? The only confusing part during data collection was the granularity of the data, as I had overlooked the API method definitions. My notebook is available in the atoti notebook gallery for your reference. Now, let’s take a breather before I start performing my time-series analysis.
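For reference, here is a minimal Python sketch of the collection steps described above. It is not the author’s original snippet (those lived in embedded gists and the linked notebook); the coin ids are examples, and pandas is assumed for the reshaping.

import pandas as pd
from pycoingecko import CoinGeckoAPI

cg = CoinGeckoAPI()

# Example ids, as would be obtained from the /coins/list endpoint
coin_ids = ["bitcoin", "ethereum", "ripple"]

frames = []
for coin_id in coin_ids:
    # days=300 is above 90, so the API automatically returns daily granularity
    chart = cg.get_coin_market_chart_by_id(id=coin_id, vs_currency="usd", days=300)
    df = pd.DataFrame(chart["prices"], columns=["timestamp", "price"])
    # Timestamps arrive in milliseconds; convert to a user-friendly date
    df["date"] = pd.to_datetime(df["timestamp"], unit="ms").dt.date
    df["coin"] = coin_id
    frames.append(df[["date", "coin", "price"]])

# Align everything into one generic long-format table: date, coin, price
prices = pd.concat(frames, ignore_index=True)
print(prices.head())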
https://medium.com/atoti/mining-cryptocurrency-with-coingecko-api-9e296ee980ea
['Huifang Yeo']
2020-12-15 02:46:55.618000+00:00
['Cryptocurrency', 'Data Mining', 'Python', 'Coingecko', 'Use Cases']
Writing Angular Services in Scala
Those following my blog posts know that I like to take Scala everywhere. This time, let us write Angular services in Scala. If you don’t know Angular, it’s a frontend web framework developed by Google in TypeScript, similar to React or Vue. It is a component-based framework, and each component has an associated TypeScript class controlling what the HTML component must do. These classes can use services. A service is simply another (usually global) instance of a TypeScript class, either with plenty of facility methods (for example for making http calls), or with a global object used to pass information from one component to another. Our goal today is to discover how one can create and use Angular services in Scala. Using Scala.js, we can compile our Scala code into plain old JavaScript and export some of our classes to be used, precisely, by the JS world. Let us get started. TLDR: If you want to jump right into the action, you can head over to the accompanying repo. Commits in the repo follow along with this article. The master branch shows the final version.
Set up the project
In order for everything to work properly, we simply need a bunch of plugins for managing the project. There are basically three things that we need to do: tell TypeScript which types we are going to provide it, manage the npm dependencies that we want to use, and tell Scala which types exist in the JS/TS dependencies that we use. Luckily for us, there are plugins to do just that, to be added inside project/plugins.sbt:
addCompilerPlugin("org.scalameta" % "semanticdb-scalac" % "4.3.10" cross CrossVersion.full)
scalacOptions += "-Yrangepos"
/** Explicitly adding dependency on Scala.js */
addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.1.1")
/** Plugin for generating TypeScript declaration file. */
resolvers += Resolver.jcenterRepo
addSbtPlugin("eu.swdev" % "sbt-scala-ts" % "0.9")
/** Plugin for generating Scala.js facades from TypeScript declaration file. */
resolvers += Resolver.bintrayRepo("oyvindberg", "converter")
addSbtPlugin("org.scalablytyped.converter" % "sbt-converter" % "1.0.0-beta18")
/** Plugin for managing npm dependencies. */
addSbtPlugin("ch.epfl.scala" % "sbt-scalajs-bundler" % "0.18.0")
And now we can simply enable all of these, together with some barebones configuration, in our build.sbt:
name := "AngularServices"
version := "0.1"
scalaVersion := "2.13.2"
/** npm module will have version "0.1.0" */
scalaTsModuleVersion := (_ + ".0")
/** Enabling ScalaJS */
enablePlugins(ScalaJSPlugin)
/** Enabling ScalaTS */
enablePlugins(ScalaTsPlugin)
/** Enabling Scalably typed, with scala-js-bundler */
enablePlugins(ScalablyTypedConverterPlugin)
We are all set to start creating a JS module for Angular.
A first service
One of the strong suits of Scala is its standard collection library. For example, the native JS Array API has no method for taking the distinct elements of an array. Let us fix that with a simple implementation, to be put into src/main/scala/angular/ArrayEnhanced.scala (the original snippet was embedded; a sketch follows below). Well, this came for free. That’s the power of Scala. Let us now generate the JavaScript and the TypeScript declaration file by running the sbt command scalaTsFastOpt. After it finishes, you can go have a look into target/scala-2.13/scalajs-bundler/main/angularservices-fastopt.d.ts and you’ll see … nothing! That’s right, because we didn’t actually tell Scala to export this class, nor its members, to the JS world.
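For orientation, here is a minimal sketch of what such an ArrayEnhanced.scala might look like. It is not the article’s actual snippet; it already carries the two export annotations discussed around it, and it assumes Scala.js’ built-in collection operations on js.Array.

package angular

import scala.scalajs.js
import scala.scalajs.js.JSConverters._
import scala.scalajs.js.annotation.{JSExportAll, JSExportTopLevel}

// These two annotations are the exports discussed in the text; without
// them, the generated declaration file stays empty.
@JSExportTopLevel("ArrayEnhanced")
@JSExportAll
class ArrayEnhanced {

  /** Distinct elements of `xs`, keeping the first occurrence of each. */
  def distinct[A](xs: js.Array[A]): js.Array[A] =
    xs.toList.distinct.toJSArray

  /** Distinct elements of `xs` according to the key function `f`. */
  def distinctBy[A, B](xs: js.Array[A], f: js.Function1[A, B]): js.Array[A] =
    xs.toList.distinctBy(a => f(a)).toJSArray
}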
This is easily done by adding the two export annotations to the class, as in the sketch above. Issuing the command again, the declaration file now contains the exported class and its methods. Note: the careful reader may have asked himself why the type of f in the distinctBy method is the weird-looking js.Function1[A, B] instead of A => B. This is because the type A => B is a Scala function, and pure Scala objects do not enter the JS world. Hence, if we had asked for f: A => B, it would have been impossible for JS to give us the correct argument.
Adding the Angular project
Until now, we didn’t do anything specifically related to Angular. Let us do that now. We are going to add an Angular project within the Scala project directory, in the webapp directory. To that end, we go to the webapp directory and issue (outside sbt) the command ng new FromScalaWithLove (I chose not to add Angular routing, and I chose CSS for styling, as we won’t use that). We can safely delete the content of the app.component.html file and replace it with <h1>From Scala, with love</h1>. Issuing ng serve in the Angular project FromScalaWithLove and heading to localhost:4200 should make this title appear.
Using our Scala service
The first thing is to copy-paste the files in target/scala-2.13/scalajs-bundler/main into webapp/FromScalaWithLove/node_modules/scala-module. That will make our compiled JS code, together with the type definitions, available to TypeScript. In app.component.ts, we can then import the service. Saving the file should make Angular recompile and refresh the page. Ew, doing this gratified us with an unfriendly ERROR NullInjectorError. This makes sense, because we never told the Dependency Injection (DI) mechanism of Angular to take care of our class. This is easily fixed by adding the ArrayEnhanced type to the providers array in the app.module.ts file. Saving again should make the error disappear. We can now happily use the ArrayEnhanced service, for example from the constructor of the app-component.
Polishing the dev experience
Having to copy-paste the compiled files into Angular’s node modules is a tiny bit annoying. We want to automate this process. This is simply done by adding a small custom task to the build.sbt file (the original snippet defined it there). After reloading sbt, we can issue the makeModule command, which will copy-paste everything properly. Note that in an actual setup, we would like something a bit more fine-tuned than hard-coded paths. For now, however, it will do.
Embracing RxJS
The Angular ecosystem makes heavy use of the FRP library RxJS. An Angular user will then most likely expect a service (with asynchronous behaviour) to return Observables instead of, for example, promises. In order to do that, we once again need to change the build.sbt file to add the npm dependency that we want, in this case rxjs. For the sake of speed, we will also tell our project to use yarn instead of npm. This is done by adding the following lines:
Compile / npmDependencies ++= Seq("rxjs" -> "6.4.0")
useYarn := true
(You might want to clean first in sbt in order to avoid some weird shenanigans.) Now you can reload and kick ScalablyTyped off by issuing compile. This is going to take quite some time (probably 2 to 5 minutes), because all the TypeScript definitions have to be compiled into Scala.js facades. But don’t worry, this is only a one-time process. We can now create a Scala class which will expose an Rx observable for Angular to use, as sketched below.
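A modest sketch of such a service follows. It is not the article’s exact code: the facade path typings.rxjs.mod reflects ScalablyTyped’s usual naming convention, and the generated names may differ, so treat this as an assumption rather than verified output.

package angular

import scala.scalajs.js.annotation.{JSExportAll, JSExportTopLevel}
// Assumed ScalablyTyped facade for the rxjs main module
import typings.rxjs.mod.{interval, Observable}

@JSExportTopLevel("EmitRxObservable")
@JSExportAll
class EmitRxObservable {
  // An observable emitting 0, 1, 2, ... once per second.
  // Exported as a `get` accessor on the TS side, which is what later
  // triggers the TS1086 error mentioned in the text.
  def naturalNumbers: Observable[Double] = interval(1000)
}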
Something very modest, along the lines of the sketch above, will do. Issuing the makeModule command should now create a declaration file containing (among others) the new service. Since this command automatically copy-pastes into Angular’s node module directory, the shiny new EmitRxObservable service will be available immediately. Don’t forget to register it in the app module providers, though. Note: you will likely hit the following error: ERROR in node_modules/scala-module/angularservices-fastopt.d.ts(9,7): error TS1086: An accessor cannot be declared in an ambient context. This comes from the fact that the interface exposed our Scala def as a get accessor. It is “solved” by adding the compiler option "skipLibCheck": true in tsconfig.json. I am not a TypeScript aficionado, so there is perhaps something better to do… We can now happily consume our service by adding emitRx: EmitRxObservable to the constructor of the app-component, and, for example, the line emitRx.naturalNumbers.forEach((e) => console.log(e)); in the constructor body. Upon reload, the console should show the natural numbers getting printed.
More Scala, please!
Up until now, we didn’t really do any Scala. We merely used some JavaScript in disguise, but Scala has a lot to offer. For example, it is very good at manipulating data. We can define a model User, to be used by TypeScript in the project, directly from Scala. This will be a case class with its members exported. We add a facility method maybeDateOfBirth to be used inside Scala, as Option[A] is more Scala-friendly than A | undefined. However, even if this method will exist in TS (because we export all members), it won’t be usable, since an Option is a pure Scala type, hence opaque. More precisely, it will be usable, but TS will not be able to do anything with the returned object, except carry it along and possibly pass it back to a Scala.js function. But TypeScript will be able to create instances of User (since it knows all the types of the constructor), and we can therefore ask for them in a Scala service. This shows a tiny bit of what is possible when doing Scala: we can define models, let TypeScript instantiate them, and use them as regular Scala objects. The only thing that we need to be careful about is to ask from, and return to, TypeScript only stuff that it understands.
Let’s go crazy
Scala also has a gigantic ecosystem with high-quality libraries. There is no reason we shouldn’t use them for our Angular projects! Let us imagine this use case: you are creating a dashboard of some sort, and you need to download a certain amount of data. You don’t want to download everything at once. You are then given a list of indices [0, 1, …, n-1] and you need to make a call for each of these. However, you know that some of them will take longer than others to be processed by your backend, so you don’t want to have these guys be a bottleneck. Also, sometimes your calls fail for some reason (not that often, but it happens), and in these cases you would like to retry twice with some back-off. Ultimately, this is what you want to do:
make n http calls to your backend, always 3 of them concurrently
retry twice those that fail
if, despite the two retries, one call still fails, the whole process should fail
be able to track the progress
return an observable which will emit once an array with the results of the n calls, in such a way that the j-th element is the result of mapping j
give the user the possibility to cancel the process before completion.
Good news: this is going to be a piece of cake! We are going to use the ZIO library for that. Other good choices could be Akka Streams or Monix. The first thing to do is to add the ZIO dependency to our project, with the following line in the build.sbt file:
libraryDependencies += "dev.zio" %%% "zio" % "1.0.0-RC21-2"
We will also need the implementation of the comprehensive java.time library for Scala.js, available via
libraryDependencies += "io.github.cquiroz" %%% "scala-java-time" % "2.0.0"
Function signature
The function that we are going to expose to TypeScript takes (among other arguments) a program argument, which is thought of as an asynchronous observable emitting only once an element of type U. We could also ask for a function returning a js.Promise. We choose the Observable type because it is the one returned by Angular’s HttpClient, for example. Note that we only need to export the members of the CompletionState class, because TypeScript never needs to create an instance. It is only required that it understands the ones we are going to give back.
From Rx Observable to ZIO effect
We need to turn this program function into a ZIO effect that we are going to use afterwards. The program assumes that the returned observable might fail, so we need to take that into account. ZIO has us covered and has the function effectAsync to do just that (see the sketch further below). This function lifts an observable returning a U into a ZIO effect that might fail with a js.Error, and might succeed with a U. Note that you could very well preserve the fact that, in TypeScript, an error can really be “anything”. In that case, we would have asked the ZIO effect to fail with a js.Any instead of a js.Error.
The retry policy
We decided to allow each program to fail a certain number of times. In ZIO, you need to provide a “retry policy” describing the rules to follow in the retry. We can build a retry policy that fits our needs by combining built-in schedules (also shown in the sketch below). If the reader is not familiar with ZIO and wonders why this thing does what we want, they can head over here.
The global execution plan
The last ZIO piece is a pure function taking the inputs from TS and a bunch of small helper ZIO effects that are actually built using Rx Observables. The program argument is the program provided by TS, lifted to ZIO. The nextProgress effect will notify that a new program has finished. The two effects complete and fail happen at the end, the former when the whole thing succeeds and the latter when it fails. As we can see, the implementation is pretty straightforward. The funny symbol <* means that the right effect will be executed after the left one, but its result will be discarded (similar to the Rx tap operator).
The bridge to JS world
We are now ready to implement our function. We create observables for ingesting the progress and the output, and we lift those to ZIO. We then run the program as a cancellable future, and expose a JavaScript function to cancel it. The full implementation, which wires all these pieces together, can be found in the accompanying repo. And that is all. The nice thing that we get from this is that our execution function is pure and can easily be tested. The Scala compiler is also able to assure us that the global program will never fail. That means that, from TS’ side, we can be certain that the only errors happening will be the ones coming from the input programs, or the TaskCancelledError in the event that TS cancels the process.
Using it
We can now use our powerful function from Angular. The interested reader will find in the repo an integration with the Angular UI.
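As promised above, here is a minimal sketch of the two ZIO building blocks just described: lifting a single-emission Rx observable into a ZIO effect, and the retry policy. It is not the article’s exact code; in particular, Observable again stands for the assumed ScalablyTyped rxjs facade, whose precise subscribe signature may differ.

import scala.scalajs.js
import typings.rxjs.mod.Observable // assumed facade path, as before
import zio.duration._
import zio.{IO, Schedule, ZIO}

object RxInterop {

  // Lift an observable that emits exactly once into a ZIO effect which
  // fails with a js.Error or succeeds with the emitted value.
  def fromObservable[U](obs: Observable[U]): IO[js.Error, U] =
    ZIO.effectAsync[Any, js.Error, U] { callback =>
      obs.subscribe(
        (u: U) => callback(ZIO.succeed(u)),                             // onNext
        (err: js.Any) => callback(ZIO.fail(err.asInstanceOf[js.Error])) // onError
      )
    }

  // Retry twice, with exponential back-off between the attempts.
  val retryPolicy = Schedule.recurs(2) && Schedule.exponential(100.millis)
}

Each lifted program can then be retried with program.retry(RxInterop.retryPolicy), and the n calls can be run three at a time using ZIO.foreachParN(3) over the list of indices.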
Below, we simply mention a usage with the console. Let us write a “dummy” program simulating an asynchronous computation: the program returns the input one second later, failing with probability 0.1. It prints the input in case of success, and warns “boom” plus the input in case of failure. Injecting an instance of ZIOService into our component, we can then call our function on a list of indices. You will be able to see that:
elements get printed 3 by 3
the progress is displayed accordingly
from time to time, a “boom j” is displayed, and you will see that its value gets printed after bigger numbers
the result array printed at the end is well ordered, as expected.
If you want to see the cancellation in action, you can, for example, cancel the returned process after a short delay. I hope this example demonstrated that Scala, with the help of ZIO, can give you an enormous amount of power via a straightforward interface. It’s now time to draw conclusions from all of this.
Why and when do this?
Why should one make Angular services in Scala? I believe there are a variety of good reasons, which I try to discuss below:
your backend is in Scala. In that case, you will be able to define all your business models for the backend and expose them to JS/TS to be used immediately. And you can write, in Scala, the type-safe versions of the http calls that you want to make to your backend endpoints. That way, all of your models will be completely in sync, and you will be able to have efficient Scala-to-Scala communication (the ScalaTs repo actually has an example of that).
you need to make very advanced stuff, like above. Using ZIO is only one possible example. But Scala has a lot to offer and is perfectly suited to model complicated business domains.
you want to go “all in” and make all your services in Scala, leaving to TypeScript and Angular only the responsibility of the controllers. That way, you can have a nice and clean Scala project, exposing just the right amount of information to your components. You are forced (in a good way) to keep a clear separation of concerns between components and services.
testing your services will be a lot easier. Scala has amazing test libraries that will allow you to extensively test your services, mocking their concrete implementations if need be.
Caveats
There is no such thing as a silver bullet in computer science, and this technology is no exception. I can see at least three “drawbacks” that I personally think should not keep you away from choosing it, but you should be aware that they exist.
bundle size: the compiled JavaScript file from Scala.js is one big fat file of easily 4 MB. In today’s fashion of doing single-page applications, this should not be too much of a deal. But it certainly means that you shouldn’t do this only for the distinct method shown above.
ScalablyTyped typings: ScalablyTyped generates Scala.js facades from TS types for you. Given the nature of TypeScript, they are sometimes a bit cumbersome to work with. If you happen to need a fine-tuned facade for one of your libraries, it might be worth writing it by hand. It’s really not that hard.
the ScalaTs plugin is young: in the following months, perhaps you will find some very advanced use case that the plugin is not able to handle. No worries, you can still write things down by hand, and raise an issue!
Conclusion
Writing Angular services in Scala is amazing. To me, the advantages largely outweigh the caveats, especially if your backend is in Scala.
The beautiful thing is that most of the above applies not only to Angular, but to any JavaScript/TypeScript project (even Node.js ones!). We did not cover using Angular's own services within our Scala services, but it is certainly possible to do so. Don't hesitate to give it a try! It is easy to get started with and, who knows, it can be a nice entry point for you into Scala…
https://antoine-doeraene.medium.com/writing-angular-services-in-scala-e83fd308b7c3
['Antoine Doeraene']
2020-07-20 11:27:43.294000+00:00
['Zio', 'Angular', 'Typescript', 'Scala']
Why I Have No Christmas Tree
When this year's Norway spruce made its way to the Rockefeller Center in New York City to be hoisted and decorated for the annual switch-on, it drew scrutiny not only for its bedraggled appearance, but for its symbolism. The Rockefeller Center's tree-lighting ceremony, a New York City tradition since 1933, attracts thousands of spectators in ordinary circumstances. This year — a year of hot flaming garbage, a year that launched a thousand memes and was declared the 'worst ever' by Time — the tree herself, prior to being festooned in her customary trimmings, was the subject of sympathy and ridicule alike. Initial pictures of the honorary evergreen being hoisted into position by a crew of high-vis-jacket-clad workers attracted a few commiserating comments on Twitter comparing the tree (looking worse for wear after a 300 km journey lying horizontal in a truck) to someone who'd cut their own hair, a forlorn figure as scraggly and weather-beaten as the rest of us are looking/feeling as 2020 finally wraps. Photo by Wesley Tingey on Unsplash But when a small owl was discovered in its branches, the environmental impact of the modern world's extravagant Christmas traditions once again entered the spotlight. Whether the miniature stowaway hitched a ride in the branches undetected or came to roost in the tree only after its arrival at the Center will remain a mystery. The crew responsible for cutting down the chosen tree appeased concerned members of the public by dutifully checking the boughs and branches for wildlife — and presumably, removing that wildlife before transit. The owl, named Rockefeller ('Rocky'), was rescued from the tree and later rehabilitated at a New York wildlife centre. Environmentalist Brian Kahn, writing for Gizmodo, singled out Rocky's misadventure to highlight America's 'toxic relationship with nature'. Kahn called for the Rockefeller Christmas Tree, a symbol of 'extractive capitalism', to become the next casualty of cancel culture. The Norway spruce is, obviously, not native to the United States, yet it was a micro-habitat for numerous species of local wildlife for as long as it stood in the small town of Oneonta in upstate New York. There it grew from sapling to tree, unaware of its fate as a gigantic Christmas prop, a holiday amusement for a nation. Photo by Josh Bean on Unsplash From Kahn's point of view, the Rockefeller Christmas Tree is a rather sad sight. Though technically an alien, it has been alienated from its purpose, stripped of the wildlife it once sheltered (with the exception of Rocky the owl, who snuck past the tree crew) and dressed up in an undignified manner. Arguably the Christmas tree that started it all, it is strung with thousands of electrical bulbs to be stared at by millions as it stands, rootless, forestless, cold in the heart of New York City. Kahn's heightened empathy with the Rockefeller Christmas Tree is probably to be expected from a staunch environmentalist, but I can see his point. Photo by Christin Hume on Unsplash It was a happy ending for Rocky the saw-whet owl, who was later released into the wild, and the Rockefeller Tree, after recovering from its journey from upstate, turned out as beautiful as ever when it was illuminated on December 2. Kahn's commiseration with the tree illuminates America's toxic relationship with nature. It also lays bare the American Christmas season's equally noxious attitude to the environment.
https://medium.com/an-idea/why-i-have-no-christmas-tree-12b8f0291801
['Aimee Dyamond']
2020-12-14 03:53:57.780000+00:00
['Environment', 'Christmas', 'Plastic Pollution', 'Animal Rights', 'Consumerism']
Finding the Proper Left-Footed Right-Sided Player for Marco Silva — ToffeeTargets
Written by @RyanSoccerAA Everton has secured a new midfielder and a striker over the past weekend. One remaining area of need is a winger. We put together a comprehensive look at who Marco Silva and Marcel Brands might target. Now that Everton has signed a replacement for Gana Gueye (see our article on replacement candidates, including information on Gbamin, here) and has secured Moise Kean to play CF (see our 2-part series on CF candidates here), Marcel Brands needs to push forward and find ways to improve the club prior to the close of the transfer window. Everton's defense was stout last year, but the club needs to find a way to score more goals to challenge the top 6. Marco Silva's Everton scored 54 goals with 33 from open play, good for 8th in the PL. While that's a massive improvement over the previous season's tally of 44, it's still 9 fewer than 6th-place Chelsea and 13 fewer than 4th-place Tottenham. Marco Silva had previously indicated his need for a left-footed player to play in his wider right forward position. Richarlison has shown the ability to play there, but he is not left-footed and Silva might want him back on the left. Theo Walcott plays on the right, but he's not left-footed and, frankly, was a huge part of Everton's ball-retention issues in the first half of last season. Lookman, Sandro, and Onyekuru are all right-footers who are more effective on the left, which is part of why none of them will be with the club this upcoming season. Yannick Bolasie and Kevin Mirallas are also not viable candidates and will likely be moved on by the end of August. There have been several players rumored to be in the crosshairs of Marcel Brands and Marco Silva that seem to fit the profile of a right-sided, left-footed inside forward and are in his desired age range (26 and below). If history and tactics are any indication, we believe Marco Silva is looking for more goals and chances, and thus we focused on left-footers that played significant time on the right, and looked at their 1) ability to score, 2) ability to create chances for their teammates, and 3) ability to take care of the ball. We then put all those factors together, placed relative weight on the data points over the past 2 years that we felt were most critical, and came up with a composite performance score. But of course, all data is subject to different conditions, so it's always more important to watch these players play and make a full assessment of their abilities, especially with respect to how Everton play.

The Goal Scorers

For our population, we developed a scoring system based on a group of stats including non-penalty goals per 90, xG-to-G differential, shots on target, shots on target %, and goal conversion %. Based on our data, the following players were the most effective at putting the ball in the back of the net:

Bertrand Traore, age 23, Lyon

This probably isn't a surprise to anyone. Everton has been linked with Traore since early in the summer for good reason. Traore has great composure in the box and can flat out score from the right side. He also seems to fit from a defensive standpoint. He has good size for the position and is excellent in the air, with the highest % of aerial challenge wins in our sample. He has also had success pressuring the other team and winning tackles. He is good on the dribble with a high success rate and above-average frequency, using his pace-and-power combination to create space, primarily for his shot. He has a high passing rate for both forward passes and passes from distance.
The knock on Traore is that he's simply not very good at creating chances for his teammates, is overwhelmingly left-footed, and is very predictable. He's not the best crosser of the ball, and his xA / key-passing numbers are near the bottom of our population. That being said, I have seen him use his right foot to send the occasional square pass to teammates in the box, and when he does pass, at least he takes care of the ball and is good in possession. Everton do need goals, he would likely fit into Everton's tactics, and he likely wouldn't break the bank despite Lyon's initial asking price of €40M.

Nicolas Pepe, age 23, Lille

Another obvious selection, Pepe appears to be on the way to Arsenal for a fee of around €80M. Pepe has size and pace and is predominantly a goal scorer like Traore, but offers more creativity, while taking more risks to do so. He is deadly cutting inside and scoring off his left foot, possessing the highest goal-conversion numbers in our sample, and can score from inside or outside the box. Pepe usually looks to run with the ball when he receives it and attack immediately. While primarily a goal scorer, he's efficient with through balls and short passes in the box to set up his teammates when he sees numbers ahead of him blocking his path to goal. A fair criticism of Pepe is that he's not efficient at beating players off the dribble. That being said, he's an elite goal scorer that is multi-dimensional and has earned his high-priced move to the PL. Marco Silva is said to have been a big fan.

Anderson Talisca, 25, Guangzhou Evergrande

I know, I know, he's a complete headcase playing in a very poor league. I would be highly skeptical of anyone making a play for Talisca, but his numbers are ridiculous. He's usually listed as an ACM, but he drifts out to the right A LOT and cuts in on his left to devastating effect. He averaged almost 5 shots a game over this last season and outperformed his xG by quite some distance. He's also creative, he takes care of the ball, and he has excellent defensive numbers. It's a poor league, but he is completely dominating it and has the size (6'3"), speed, and skill that would make him absolutely perfect on the right side in Silva's tactics. For many different reasons, it's unlikely to happen, but it's an interesting proposal.

The Creators

There were several players that stood out in our population for their ability to create for their teammates. Some may not seem like typical inside forwards or wide players, but they occupy a space on the field that makes sense in Marco Silva's tactics and they create chances, which Everton is seriously lacking in its current first team. Again, we took a combination of data points including various passing and assist numbers, as well as a very minor factor for crosses.

Marco Asensio, Age 23, Real Madrid

Realistically, Asensio is going nowhere. He has also blown his ACL, won't be back in action for 9 months, and likely won't be at full speed for well over a year. Moise Kean aside, he's simply not the type of young player clubs make available. But he does play a lot in the right channel area and creates a lot of opportunities for his teammates. He is probably the best crosser of the ball in our population and has one of the highest xA per 90 numbers in our data. What's almost more impressive is his ability to keep the ball. He is proficient at dribbling and passing, giving the ball away at a much lower rate than virtually everyone in our population. Conversely, Asensio is not a prolific goal scorer.
He shoots with accuracy and has scored his share of goals, but his conversion rate is on the low end of our sample. Regardless, Asensio is extremely talented, but likely not going anywhere.

Viktor Tsygankov, Age 21, Dynamo Kyiv

The Israeli-born Ukrainian national has put up some remarkable numbers over the past 2 seasons and would seem to be a perfect fit offensively in Marco Silva's tactics. Tsygankov loves to cut in from the right and can do all sorts of things with his wand of a left foot. He has high xA per 90 numbers and the highest second-assist numbers in our sample. Tsygankov is not just a creator, though, as he can attack and hurt defenses in many different ways. He is extremely efficient with his shooting, with one of the highest shots-on-target % and goal-conversion rates in our sample. Watching him on film, it's easy to see why. He's also fantastic dribbling in the open field and in tighter quarters. He loves to take players on and seems to glide with the ball at almost top speed and with control. He has lots of tricks and loves to set up defenders off his left foot, but also shows some capable skills with the right, even if he is predominantly left-footed. While it would be a significant leap to the PL and he would likely not be penciled into the starting XI any time soon, he's already shown ability at the international level with Ukraine and seems to have all the skills one needs in order to make that type of jump.

Martin Odegaard, Age 20, Real Madrid (currently on loan to Real Sociedad)

It's worth noting that Odegaard is often listed as an attacking or central midfielder, but he did most of his work out on the right with Vitesse in Holland. We recognize the Dutch league is not a top-five league, but Odegaard was truly the focal point of the Vitesse attack. Real Sociedad has since taken the Norwegian on loan from Madrid in yet another smart move from the San Sebastián club, so he's not moving to Everton any time soon. If last year is any indicator, Sociedad fans will likely see Odegaard on the ball A LOT, attacking on the dribble and creating time and space to exhibit his remarkable array of passing skills — short, long, through balls, smart passes — that were tremendously effective in creating goals for his teammates. Odegaard also has good defensive numbers and works hard on the pitch. If there is a fair criticism of Odegaard, it's that he is not a great finisher and does not convert a lot of his chances into goals. Regardless, he is only 20 years old and it will be interesting to see how he fares in La Liga in a very exciting Real Sociedad side.

The All-Arounders

There were other players that scored high in virtually all areas we looked at, two of whom Everton has been linked to, with the other two likely out of Everton's reach:

Malcom, Age 22, Zenit St. Petersburg

Malcom made the surprising move to Zenit just 12 months after making a last-minute switch to Barcelona, spurning interest from Everton and Roma. Even with his limited minutes and perceived ineffectiveness at Barcelona, it's hard to argue that Malcom doesn't have the perfect profile for Everton under Marco Silva. He takes care of the ball, he creates, and although he doesn't take a lot of shots, he shows the ability to score. Although Malcom is not the biggest player, he's strong on the ball and seems under control in open space or heading into the box. Although he doesn't do it often, when Malcom attacks with the dribble he's remarkably effective.
Much like when he shoots, he hits the target and converts at a higher rate than almost everyone in our population. He also creates well for his teammates, with a high xA per 90 number. Malcom seems like a perfect fit at Everton, but obviously the club was not interested in matching the reported €45M fee and €6M a year in salary offered by Zenit. Although some believe he had a poor season at Barcelona, I attribute some of it to bad luck and personally have no reservations about his talent. His agent definitely deserves a lot of credit for making him a TON of money over the last 12–13 months, and one can't help but think he might be back on the market again next summer if he has a big year with Zenit.

David Neres, Age 22, Ajax

After solid performances (considering his age) in the Copa America for Brazil and outstanding moments in the Champions League, Neres has attracted interest from several top clubs, including several in the PL. While the Dutch league isn't the most competitive league, especially for its top sides, Neres still stood out for a very good side. He is a bit sloppy with the ball at times, but he loves taking players on with a wide array of tricks to go with tremendous pace and deceptive strength. He has excellent off-the-ball movement that allows him to get into advanced positions, either to receive the ball with his back to a defender and turn and score, or to use his pace to beat a defender to the far post for a tap-in. He finishes well, his goal-conversion numbers are high, and while he's left-footed, he can play both sides of the pitch well. Neres' defensive numbers aren't particularly great, but not many can create and score as well as he can at his age.

Paulo Dybala, Age 25, Juventus

It's pretty safe to say he's not coming to Everton, although it does appear that Juventus is willing to move him for the right price, as he was a major negotiating chip in a Lukaku deal that fell apart. Dybala is obviously a massive talent and, although he is thought of as more of a second striker (or a man without a position), he spends most of his time on the right side in a position that would fit in with Marco Silva's tactics. Dybala is a terrific goal scorer and, although he might not be elite there, he can create a bit for his teammates as well. At times he's looked like what he is supposed to be — one of the best talents in football. Spraying passes all over the pitch. Scoring worldies from distance. Showing tremendous control in tight quarters and accelerating out of trouble. At other times, he's seemed lost or ineffective, and some question whether he'll ever achieve his potential, even if it has been set almost impossibly high. Ultimately, he's not a legitimate option for Everton.

Kai Havertz, Age 20, Bayer Leverkusen

Another player Everton has almost no chance of getting, Havertz is one of the biggest talents in football. He had a remarkable year this past season considering how young he is, with some spectacular displays of talent that bode well for his future. Havertz is tall (6'2"), lanky, and somewhat unassuming, but he's got a wonderful touch, vision and creativity, and a hammer of a shot that can find the back of the net from almost any place on the pitch. He's still a bit inconsistent and does try some ridiculous stuff at times, but it's remarkable how often he pulls it off. He uses an arsenal of clever passing techniques — chips, through balls — to spring teammates into space and create chances.
He's very effective in a playmaker role and he can use both feet well, although he seems to spend a bit more time on the right and likes to cut inside onto his preferred left foot. His dribbling is also quite effective, and he's got enough pace to blow right by opponents. It's hard to find much to criticize Havertz for, as he's still so young and has had such a successful year. Perhaps he could look for his shot more and work on some of his defensive qualities, but there's very little that I see that could prevent him from being one of the best players in the world in the next decade. As with most German wunderkinds, it's easy to see him at Munich, even though rumors indicate that Leverkusen rejected a €90M offer and do not want to sell. With the departure of Julian Brandt, Havertz is likely to play even more than he did last year, and I expect him to have an even bigger year.

Notable Mentions

There were other players that certainly piqued our curiosity and might be worth taking a chance on, so we thought they were worth noting:

Robert Skov, Age 23, Hoffenheim

Skov put up impressive attacking numbers across the board for Kobenhavn last year, albeit in a lower league. He may profile more as a CF and doesn't appear to have top-end speed, but it's hard to argue with his tremendous left foot and his intelligence with the ball. We are eager to see what he can do at a higher level with Hoffenheim this upcoming season.

Rony Lopes, Age 23, Monaco

Lopes has recently been linked to Everton; he had a tremendous 17/18 season, yet did not replicate that kind of production last year. Lopes plays predominantly on the left and, relative to others in our population, has a tendency to look for his teammates via the cross, as would a typical winger playing on the side of his dominant foot. Still, his finishing numbers are efficient, and as clever as he is with the ball, one has to wonder if he might benefit from a change of scenery and even a change of side of the pitch. Lopes is young enough to turn things around, and if Everton made a move and got the player that scored like he did in 17/18, it could be a classic case of bargain hunting that would make Marcel Brands and Marco Silva look like geniuses.

Riccardo Orsolini, Age 22, Bologna

Orsolini is a key member of Bologna and makes a lot happen on the pitch. He generates a lot of shots, serves up a ton of crosses (the highest number in our population), and has a terrific left foot. He has lots of tricks with the dribble and takes players on with a very high success rate. He showed himself to be an exceptional player at the youth level for Italy after his breakout in the U20 World Cup in 2017, and Bologna wisely exercised an option to buy the player from Juventus this summer, so it's unlikely he is going anywhere. Irrespective of his transfer status, I'm not sure he's the best fit for Everton. He does cut inside at times but also likes occupying the wider spots outright, which I believe Marco Silva wants taken up by the right-back. In addition, while Orsolini has good size and strength, I'm not sure his pace and his overall game translate well to the PL. He wouldn't have the same time on the ball, and when he breaks into space, he's very measured and doesn't really do things at pace. Orsolini is still young and a very productive player in a top league for a side that isn't the best, so that also needs to be taken into account. Either way, he's not a likely candidate for Everton this window.
The Young Guns

There were several younger players worth noting that made our list; while they currently don't have the type of production that some of the others have, and would likely not play a major role even off the bench in the current Everton team, they offer promise for the future.

Moussa Diaby, Age 20, Bayer Leverkusen

Diaby is a smallish left-footer that plays more like a winger on the left but can play on the right and cut inside to attack the goal. He's undersized, but has tremendous quickness and pace with or without the ball. He's a very exciting young prospect that proved to be a handful in the U20 World Cup at times, and wasteful at others. Diaby has shown a good first touch and some natural playmaking ability. He's a typical young prospect that has physical skills and great moments but is still inconsistent. He moved to Leverkusen this off-season and should get more opportunities to show his talent than he did at PSG.

Leon Bailey, Age 21, Bayer Leverkusen

Bailey is another Leverkusen speedster that tends to play more as a winger on the left. He's very active on the pitch and plays with a certain degree of flair that is entertaining to watch. He's effective at setting up his teammates and less effective at scoring and finishing his chances. He seemed to spring to life a bit when current head coach Peter Bosz came in last December, but after a slightly disappointing Gold Cup, it will be interesting to see how he fits into Bosz's plans.

Calvin Stengs, Age 20, AZ

Stengs has power, pace, and size, loves to drive forward with the ball, predominantly on his left, and has a powerful shot. Again, he's a typically inconsistent youth player that is still a development year away from moving to a bigger club. He also strikes me as more of a central midfielder, so we shall see if he can take his performance to another level this season.

Ianis Hagi, Age 20, Genk

The son of Gheorghe Hagi moved to Genk this summer for an estimated fee of €8M after a fantastic showing in the U21 Euros for Romania. While Hagi showed impressive finishing in the U21s, he was more of a central playmaker for Viitorul in the relatively weak Romanian league and statistically a poor finisher. Genk has a very accomplished scouting department and gives a lot of chances to younger players, so it will be very interesting to see how he develops this season.

Samuel Chukwueze, Age 20, Villarreal

It's very easy to be enamored with some of the highlights created by Chukwueze in La Liga this year. He's scored a handful of goals, he's attacked well via the dribble, he's gotten into the box, and he's looked very fast and dangerous. He's been a bit inconsistent, doesn't offer much defensively yet, and hasn't created too much for his teammates, but he doesn't look overwhelmed by the league, either. He's definitely a smallish player, even for La Liga, but his athleticism and tricks on the ball are electric. He held his own for Nigeria in the AFCON this summer and has been the subject of transfer rumours involving big clubs such as Bayern and Liverpool. Although there hasn't been much buzz recently on a move, he changed management teams just a few days ago, and perhaps something is in the works. He is a very talented player, and if he stays at Villarreal and has a big year, the competition could be intense in the future. Any Everton links have been tenuous at best, and the club is likely looking elsewhere.
Wilfried Zaha — Data Anomaly

Although he didn't score very high in our composite rankings, we felt it was important to address Zaha independently, as he's such a unique player. It's important to note the team and the league he plays in. Crystal Palace is not a bad side, but they are poor in possession and traditionally very defense-oriented. As a quick sample, here are the last 5 managers at Palace — Tony Pulis, Alan Pardew, Sam Allardyce, Frank de Boer, and Roy Hodgson. The most attacking of these managers, Frank de Boer, was promptly fired after 5 matches. This past season, they sat back, relied on the counter a lot, and pressured little. Zaha takes more people on off the dribble than anyone in the PL, and pretty much anyone in the top 5 leagues in Europe other than Saint-Maximin, who is nowhere near the player Zaha is. Eden Hazard is the only other player with over 400 take-ons during the past season; Zaha is at 464! He was often isolated at Palace, and upon watching him on film, he was indeed double- and triple-teamed often, making it difficult to compare him to players on stronger sides with multiple options. Crystal Palace was 18th in the PL in passes into the final third, so if Zaha was going to get close to goal, he was pretty much going to have to do it himself, which he did successfully. Not surprisingly, he was fouled 100 times (more than anyone else in the league; only Hazard was even over 75), and he drew quite a few penalties and probably deserved to draw a few more. In terms of creating and scoring chances, Zaha is a more efficient creator than scorer. He was 5th in the PL in key passes and secondary assists, and in our sample his crossing accuracy was 3rd-highest, with his creation numbers slightly above average. Considering his team and league, that's pretty impressive. It's also worth mentioning that his actual assist totals have been lower than his xA numbers in 4 of the last 5 seasons. His scoring numbers in our sample were middle of the road, but considering the attention he gets, it would be interesting to see how he'd do in a different side. He still had the most successful attacking actions in our sample, so the bottom line is that Zaha makes a LOT happen in attack. That being said, Zaha does turn the ball over a lot. Although he doesn't have extraordinarily high lost-ball numbers, and my eyes tell me he's very good at holding the ball up under pressure, it's hard to argue with the fact that he was dispossessed more times than anyone in the PL, averaging almost twice as many dispossessions as anyone else in the league. Then again, he's also taking players on more often, in more difficult circumstances, than anyone else, so the numbers are all relative. Still, for a team like Everton that was dreadful at keeping possession, losing the ball even under little pressure, that's a concern, especially with the loss of Idrissa Gana Gueye. A Gylfi Sigurdsson, Andre Gomes, and JP Gbamin midfield isn't the best defensively, and the risk is that Gbamin is left isolated dealing with a high number of counter-attacks resulting from unsuccessful attacking actions.

Hakim Ziyech — Data Anomaly 2

Ziyech was initially not in the data sample, as he is more of a playmaker, although he is left-footed and predominantly played on the right side this past season. However, the rumor mill suggested that Marcel Brands had asked Mino Raiola to help facilitate a deal for Ziyech with Ajax, even though Raiola isn't even his agent. We thought at Toffee Targets that we should take another look at Ziyech as a result.
Ziyech's numbers are extremely impressive. His creation numbers would be the best in our sample — his xA per 90 is far and away the highest, and he averages more long passes than anyone else by quite a bit. His scoring numbers are very high, although they are driven by what would be the 2nd-highest shot total in our sample, as he's not a very efficient scorer. Ziyech also loves to take a player on with the dribble, albeit with one of the poorer success rates in our population. Much like Zaha, Ziyech makes a LOT happen in the attack, just in a league that, frankly, he's too good for at this point.

What Will Brands and Marco Silva Do? The Scenarios

There are a lot of different ways Everton could go this window. The right player really depends on what angle they are taking. So we've looked at a couple of scenarios:

Scenario 1: Unlimited Budget, Best Player Available. Zaha. Moshiri has plenty of money, but FFP has to be a factor, as does sustainability. If Brands knows he can move a lot of players out and recoup some fees, Zaha is already an elite PL player and brings exactly what Everton needs most — goals. Yes, his turnovers are a serious concern. When a player gets dispossessed almost twice as many times per game as anyone else in the league and then ALSO leads the league in bad touches, it's a major concern. This was maybe Everton's biggest issue last year. However, the fact that he gets doubled every time he touches the ball is real. It could go south, there is a serious risk, but this could be the difference between 7th and a CL spot. If the price is too high, Ziyech is probably the second choice, although there are some of the same concerns, even if he's a very different player.

Scenario 2: None of the above. Brands may honestly not see enough value in any of these guys. One realistic scenario is that Brands and Marco Silva go after a bigger target in another position like Doucoure, shore up the midfield, get us to play a 4–3–3, and move Gylfi out wide. I have to admit, the more I think about it, Gylfi would be deadly in one of those two positions. A Doucoure/Gbamin defensive-mid pair would support wide attacks from both sides, get our FBs upfield, and allow Gomes to roam freely without exposing his defensive inadequacies, and Doucoure/Gbamin don't need much help building from the back. Those two would be DEVASTATING in midfield, and Silva could play more of the true 4–3–3 that he's wanted.

Scenario 3: Talented Backup Now, Future Star Later. Tsygankov. I could even see this paired with Scenario 2. It's not just the numbers with Tsygankov; he just looks special and seems to have it all. He can dribble, he can score AND create, he's good under pressure, he's smooth and fast with the ball in transition, and his left foot is amazing. To see this kind of talent and composure at his age, at any level, is impressive. I don't believe he would break the bank, either.

Scenario 4: Solid squad player, limited budget. Traore. I just don't see him going anywhere near some of the initial quoted prices. I recognize Lyon has sold Mendy, Ndombele, and Fekir, but they've spent a bit as well and have a ready-made backup in Cornet, who really came on at the end of the year. I know he's not the most creative, but he can flat out score. He's got good size and can defend in Silva's scheme.

Scenario 5: Unlimited Budget, Best Young Player Available. Neres. I might've said Malcom, but the more I watch Neres, the more I'm impressed.
I truly didn't appreciate his movement and his strength until I gave him a closer look. He just makes himself dangerous, no matter if the ball is at his feet or on the other end of the pitch. It would take some adjustment to the PL, but I saw him elevate his game in the Champions League at times, and I really believe he could be an impact player in the very near future, and a good option right now. It's not as if Bernard and Richarlison are bad options, and Neres' ability to play either side could be invaluable, as he could allow Richarlison to play on the left, where he's probably a bit more dangerous. He might be cost-prohibitive at this point, however, with all sorts of large clubs supposedly after him.

Final Conclusion

I really think Marcel Brands and Marco Silva are going to find a way to make Scenario 1 happen. With the lack of perceived progress some of the other big 6 sides are showing this window, there is an opportunity for someone outside to break through. Wolves are dealing with Europe and have added, but mostly players that were already there. Leicester are a good side, but they have to replace Maguire, and it's critical that they use those funds wisely. Watford hasn't added much, and if Everton steals Doucoure, it could make things difficult for them. Part of me really hopes Brands stays the course and focuses a bit more on younger players, but if the budget is there to still get in a CB as well as a RB and FFP isn't an issue, I can certainly understand trying to get Zaha or Ziyech. Either way, it's going to be an exciting couple of days. As always, make sure you follow us on Twitter @ToffeeTargets for more up-to-date Everton transfer news. Also, give me a follow on Twitter @RyanSoccerAA. I love talking about in-depth analysis of players across Europe. Be sure to check out my post on Moise Kean vs. Rafael Leao.
https://medium.com/the-sports-niche/finding-the-proper-left-footed-right-sided-player-for-marco-silva-toffeetargets-7441f3bf7837
['Christian Cappoli']
2019-08-06 13:31:41.824000+00:00
['Premier League', 'Everton', 'Soccer']
A Map for All Seasons — A New Path for Hikers
Follow the path around the central cyan shapes I have always been a bit of a walker, and one of the first things I would do when going somewhere on holiday was to buy an Ordnance Survey map of the area and a book of walks. Sometimes the walk was good, but often it was only OK. There was never a bad walk. The quality of the writing was sometimes mediocre, and often the illustrations and maps left something to be desired. And often I got lost — but I got better. And better, and better at choosing a good book from a bad one. Skip many years and enter the digital age. I think I paid around £300 for my first GPS device and immediately started to create my own routes — something that I loved doing and still do to this day. Creating a pleasing walk is every bit as enjoyable as a fine wine: even more so when it is an area I know little of. Or walking the length of the country (Cape of Ness to Brighton), which I navigated entirely with my phone in 2012. And then, yes, my Garmin was relegated to my discarded-gadget drawer once smartphones came along. Here's one of my latest routes, created using an app called Viewranger. Enjoyable as this is, I have always wanted a program that would allow me to generate a route automatically — and, indeed, these do exist now (e.g. Plotaroute, Routeshuffle) but, ironically, what they do is a bit too automatic. Pick a starting point. Add the route distance. Press a button. Hey presto. I want something different — to tinker — to mod — to include certain features — to keep away from rivers when it has been raining a lot — to walk in the woods when there are mushrooms or bluebells — to explore the whole of an area when on holiday. And now I think I have found the answer. In the map above, I have coloured areas that are bordered by paths or quiet roads. Big roads are ignored. You can now happily walk around any one area, or you could join two or more adjacent sections and walk around the whole perimeter (see the example at the top of the page). This doesn't exist yet — above is just a map with some fancy colours on top. In my imagined version, one will be presented with the above, and each of the areas can be clicked on or off. As you add them, the perimeter of the selection will be automatically calculated, as will the amount of height gain/loss. Areas can be colour-coded to indicate features such as woods, rivers, and views, and could even be rated by users for beauty, quality of surface, etc. Modern art? An unexpected outcome of this method is that a whole new method of navigation can be seen. For example, starting from the arrow, a clockwise path can be described as YG, YR, YB, BG, etc. Or in other words: yellow on my left and green on my right until it changes to yellow on my left and red on my right. (Don't follow red/green.) It then becomes yellow/blue, etc. If you want to walk in an anticlockwise direction, YG becomes GY. I can envisage a simple app (or even one on your smartwatch) that alerts you at every colour change and indicates the new direction (alerting you if it detects that the colours are wrong). Implementation. I think I have worked out a way of making the app, and it involves QGIS and Python. QGIS is a free, open-source geographic information system (GIS) application that supports viewing, editing, and analysis of geospatial data. This would be used to create a polygon layer (the shapes) bounded by footpaths (a lot of work required here). This information is held within a database which can be manipulated in software.
The Python programming language is integrated with QGIS and the next stage is to learn how to use it — watch this space!
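(Purely to make the colour-pair bookkeeping above concrete: a toy sketch follows. The author plans QGIS and Python, so this Scala snippet, with every name invented, is only a thought experiment, not the planned implementation.)

```scala
// A walk is a sequence of boundary segments, each described by the colour
// of the area on the walker's left and right (YG = yellow left, green right).
object ColourNav {

  final case class Segment(left: String, right: String) {
    // Walking the same boundary anticlockwise swaps the pair: YG becomes GY.
    def reversed: Segment = Segment(right, left)
  }

  // One alert per colour change along the planned route, as the imagined
  // phone or smartwatch app would announce them.
  def alerts(route: List[Segment]): List[String] = {
    val start = route.headOption.map { s =>
      s"Start: keep ${s.left} on your left, ${s.right} on your right"
    }
    val changes = route.sliding(2).collect {
      case List(prev, next) if prev != next =>
        s"Change: now keep ${next.left} on your left, ${next.right} on your right"
    }
    start.toList ++ changes
  }
}

// Example: the clockwise YG, YR, YB route from the text yields one start
// message and two change alerts:
// ColourNav.alerts(List(
//   ColourNav.Segment("yellow", "green"),
//   ColourNav.Segment("yellow", "red"),
//   ColourNav.Segment("yellow", "blue")
// ))
```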
https://medium.com/swlh/a-map-for-all-seasons-a-new-path-for-hikers-3d06da2beee6
['Richard Vahrman']
2020-12-12 20:16:57.715000+00:00
['Python', 'Qgis', 'Maps', 'Hiking', 'GIS']
Diversifying Your Reading List? Don’t Expect Me to Do Your Research
Diversifying Your Reading List? Don't Expect Me to Do Your Research I'm done congratulating semi-woke dudes. Midway through my reply to a tweet asking for book recommendations, I second-guessed myself. Why should I help you, I found myself thinking. The you in question was, unsurprisingly, a white dude. The dude in question had tweeted a version of what at first seems innocuous, laudable. A worthy goal. He said that he'd recently noticed his bookshelf was full of books by white men and announced that he wanted to read more women, POC, and LGBTQ authors. Who could recommend some good books? I could, and I'd started to (pointing to Kiese Laymon's Heavy and Tommy Orange's There There, recent reads, then suggesting T Kira Madden's Long Live the Tribe of Fatherless Girls), but something stopped me from tweeting out my recs, and it wasn't worry over literary Twitter judging my taste in reading. I wanted the dude to do his own research. I didn't know what kind of books he liked to read — if he was a fan of mysteries, he might not have liked my picks. I did agree with his sentiment — we should all be reading more books by historically marginalized authors, and I've been working on diversifying my reading lists in recent years, too. But his ask felt both self-congratulatory (I mean, it's Twitter) and lazy. Dude just noticed? Yeah, right. I've noticed that certain men, when they try to be good allies to women and queer folk, often expect to be appreciated for their efforts in allyship, and I didn't have that in me. Yes, I can recommend you some books I enjoyed. Yes, I have a library science degree and I worked in a bookstore, and I do actually enjoy recommending books and sharing books and talking about books. But websites from Bustle to Electric Literature are full of reading lists containing popular books by authors who happen to be fill-in-the-blank: Queer, immigrants, black, POC, women, and so on. Whoever you want to read, there is a done-for-you list that will diversify your shelves. Dude does not need me or anyone else to tell him what to read. He can just Google it and help himself. As a queer woman trying to make a living and get published, I don't have the time to help bring dude up to speed on the literary contributions of everyone else. No more free emotional labor in the form of a retweet or a reply. I've got nothing against trying to diversify your reading list. Certainly, we could all use a different perspective, a different lens from the one through which we normally see the world. It's just that I don't need to help you get there if you're a well-meaning white dude. Consider this the invitation to step up and do better.

Dig in, and do your homework to read more inclusively

The journey of discovering new-to-you authors who happen to be women or queer or POC (or all of the above, of course) is a rich one that will surprise you if you're willing to take it. But you gotta take that journey for yourself. You know what you like much better than I do. And after recognizing the cis-hetero white goggles you've been wearing all these years — the stories you value and the voices you've heard (check out this Fairer Cents episode on men's reactions to women's voices) — it's time for you to get down to the work the rest of us have been doing.
https://medium.com/fearless-she-wrote/diversifying-your-reading-list-dont-expect-me-to-do-your-research-a8d9d4b6b4d5
['Lindsey Danis']
2020-03-03 16:01:05.539000+00:00
['Self Improvement', 'Books', 'Women In Tech', 'Diversity', 'Feminism']
What an Astronaut’s Mental Breakdown Teaches Us About the Dangers of Snap Decisions
SELF-IMPROVEMENT What an Astronaut's Mental Breakdown Teaches Us About the Dangers of Snap Decisions And what you can learn from her tragic mistakes. Illustration provided to the author by Tauland Sinani Lisa knew that if she wanted to drive 950 miles in less than 15 hours, she had to forgo any pit stops — so she brought diapers. Diapers weren't the weirdest thing Lisa took before hitting the road. She loaded her car trunk with a steel mallet, a buck knife, a BB gun with ammo, latex gloves, tape, garbage bags, and pepper spray. Fortunately, Lisa managed to use only the last of these in her kidnapping attempt. Lisa's criminal adventure started with a discovery from the day before. When she found out her boyfriend had left her for another woman, Colleen Shipman, the emotional impulse to react was too hard to resist, even for a member of the intellectual elite like Lisa. Lisa's brain had earned her a master's degree in Aeronautical Engineering. She later served as a pilot in the US Navy before securing a spot at NASA as an astronaut. Yes, Lisa Nowak was an established astronaut. Except she decided to throw away her successful life path and go from exploring outer space to inhabiting a jail cell. As much as we can relate to the anger, disappointment, and frustration Lisa felt after being dumped for someone else, her response was everything but appropriate. So we ask: where did Lisa go wrong, and what could she have done differently to prevent her downfall?
https://medium.com/big-self-society/how-a-space-hero-crash-landed-a-criminal-suspect-c3a42cc946c8
['Nabil Alouani']
2020-11-12 20:04:37.300000+00:00
['Decision Making', 'Self', 'Psychology', 'Advice', 'Life Lessons']
And Now, a Word About Commas
And Now, a Word About Commas Spoiler alert: I'm pro Oxford. Photo by Louise Smith on Unsplash Was there ever a punctuation mark more likely to spawn arguments than the comma? At its most basic, a comma indicates a spot in a sentence to take a brief mental "breath" — or even an actual physical breath if the sentence is a long one. The most frequent uses of commas are to either separate items in a list or series (e.g. apples, oranges, lemons, and tomatoes), or to set off dependent clauses at the beginning of a sentence. For example, this sentence has a dependent clause at the beginning that tells the reader that what follows is a sample of the idea introduced in the sentence directly preceding it. There are officially a gazillion reasons and ways to use commas, so we clearly can't try to examine them all. Let's take a look at some of the more common ones.

Conjunctions

Using commas with conjunctions (and, but, or, nor, for, etc.) helps join two independent clauses into a single, longer, and more descriptive sentence. Tomatoes are red, but they don't turn your hair red. In the preceding example, the two parts of the sentence are both short; modern publishing is leaning more and more towards eliminating punctuation as much as possible, so if a comma isn't needed to avoid confusion (like above) you can often eliminate it. If you want to err on the side of proper grammar, then go ahead and use the comma.

Parenthetical Descriptors

In the following example, the phrase between the two commas is known as a parenthetical; it's called this because the descriptive phrase would look appropriate either set off by commas, as here, or encased in parentheses (as here). The key to recognizing parentheticals is that if the phrase can be cut out of the sentence without changing the meaning of the sentence, then it's a descriptor. The parenthetical is in bold print. Tomatoes, while extremely delicious and full of vitamins, are difficult to grow at home. Commas set the dependent phrase apart from the basic sentence. One of the concerns of using commas with parentheticals is that there can easily get to be too many commas in a sentence. Consider the following sentence: Tomatoes are the most popular vegetable in America, although of course, tomatoes are actually fruits and not vegetables at all. This sentence shows the proper use of commas surrounding the parenthetical comment. There is a temptation to put additional commas after although and after fruits because of a natural pause that happens when reading the sentence. However, since that pause happens naturally, and the parenthetical is essentially an interjection or modifier rather than essential information, commas are not needed and should be avoided for the sake of a cleaner look on the page.

Introductory Phrases

Similar to parentheticals, introductory phrases occur at the beginning of a sentence and can be removed without altering the meaning of the sentence. While extremely delicious and full of vitamins, tomatoes are difficult to grow at home. Some introductory phrases are considered absolutes and are always set off by commas. Examples include: Names (when addressing someone directly): Hello, Peter, do you like tomatoes? Titles: Martin Luther King Jr., Civil Rights activist (note that the suffix "Jr." is not separated from King's name by a comma. The convention of separating Jr. and Sr. from the name by a comma is generally fading away; it's also important to note that King himself never used a comma before Jr.) Dates: July 4, 1776 Exceptions (of course) occur for the use in dates: if there is no specific day, there is no comma. July 1776 Also, when using the international/military/genealogical format, no commas are used. 4 Jul 1776 Exact place names: Del Rio, Texas, grows more tomatoes than any other place in the US. Just to mix things up a bit, if the place name becomes a possessive, no comma is used after the name of the state or country. Florence, Italy's use of fresh tomatoes exceeds that of other cities.

Coordinating Adjectives

When listing a number of attributes of a given subject, commas help keep the adjectives in their proper places. The tomatoes at the farmers market are red, ripe, and still on the vine. VS. The tomatoes at the farmers market are red ripe and still on the vine. In the first sentence, we get a clear description of the tomatoes. In the second sentence, we're not sure whether the tomatoes are "red and ripe" or "red ripe", which implies there might be other colors of ripe like "blue ripe" or "pink ripe". Also, "still on the vine" tells us how the tomatoes are presented in the first sentence; but in the second sentence, without the comma to guide us, we might think that the tomatoes are "still" (as in motionless) on the vine. See the difference? It is this problem with coordinating adjectives that is at the heart of silly comparisons like: "Let's eat Grandma!", "Eats shoots and leaves", and the immortal "The bullet is in her yet."

Oxford Commas

There are more uses and abuses of commas in grammar style guides than there are stars in the sky. But the argument that seemingly will never die is the so-called Oxford comma. Also known as "serial commas", Oxford commas are those commas that come at the end of a list of items in a sentence, between the second-to-the-last item and the final "and". Tomatoes can be green, red, orange, and purple. The comma before the "and" is the Oxford comma. Lots of American publications prefer that lists like this do not use another comma before the "and" (or other conjunction). Tomatoes can be green, red, orange and purple. In a short sentence like this, that's not a problem — though you could argue that without the Oxford comma, the sentence is saying that a tomato can be orange and purple at the same time. This distinction becomes more critical when sentences are longer and more complex. Everyone knows that tomatoes are the tastiest of all vegetables, though the argument rages over whether we should put vinegar salt and pepper or sugar on them. So, are we saying we should put "salt and pepper" OR "sugar", or do we mean "salt" AND "pepper or sugar"? Are our choices "salt" plus one of the other two, or are they "salt and pepper" or "sugar" by itself? And what about "vinegar salt"! An Oxford comma settles the debate. Everyone knows that tomatoes are the tastiest of all vegetables, though the argument rages over whether we should put vinegar, salt and pepper, or sugar on them. (My grandma, by the way, swore that sugar was the best thing on tomatoes.) Note that grammar-checking software will vary in what it tells you to do.
Grammarly will tell you that the Oxford comma is necessary; Word will not. This is not a complete look at all the vagaries of comma usage. If you are required to adhere to a particular style guide, then the comma rules I’ve just outlined could be out the window. For the sake of all parties involved, avoid getting into arguments about Oxford commas. Here’s your choice of my free guides: writing, personal improvement, or the environment. DRM is the publisher of What to do About…Everything, and Boomer: Unfiltered. She writes in science, mental health, and environment on A Writer’s Mind.
https://medium.com/what-to-do-about-everything/and-now-a-word-about-commas-d185a8fb0622
[]
2020-09-06 04:21:07.095000+00:00
['Writing Tips', 'Writing', 'Editing', 'Grammar', 'Humor']
Unlikable Main Characters — Should We Write Them?
Photo by Mahdi Soheili on Unsplash What happens when you create a main character who's not very likable? Does it matter? Isn't it interesting for the reader to find out why he/she is unlikable? Wouldn't the fact that the character changes and becomes OK by the end of the story make the reader keep reading? This is an eternal question. It relates more than anything to why we read. We read for pleasure and enjoyment, so why would we choose to read about someone who makes us feel uncomfortable or annoyed? We read for escapism, to get away from the negative things in our lives, so we probably don't want to read about a negative character. We read to find out about interesting characters, their lives, and what happens to them — in the process, we grow to care about them and want them to be happy (like we want to be happy). Unlikable? Don't care. So mostly, the answers to those initial questions are no. Except for the one about redemption, but in that case the writer has to work really hard to keep the reader somehow empathizing with the main character from the beginning. It's all too easy to close the book or skip to the next one on your e-reader. There are always exceptions to this, and the one that everyone tends to quote is the guy in American Psycho. Because lots of people read that book, or said they did. But did they read it to find out what happened to the main character, whether he came good? Or because it was so violent and disgusting that they were waiting for him to get his comeuppance? Some people read it because it was cool to say you had. I've never heard of anyone, even reviewers, who said they liked it, and really liked the main character. (If you do, don't tell me!) Like is probably a misleading word. What we usually talk about is empathy — we feel something for the main character, perhaps pity or some kind of identification, and we grow to care about what happens to them. But the writer has to give us something in the first few pages to latch onto. Something hopeful. Something that suggests this character has another side that we might like if we're let into it a bit more. We keep reading because we hope the character will redeem him/herself, show they aren't so bad, show they can change, show that they will come to understand the world and themselves a little more. There's a well-known screenwriting book called Save the Cat by Blake Snyder. The title literally means the character should 'save the cat' early in the story in order to show their positive side. To show us good things are possible with this person. A movie I often use in class to demonstrate structure, 16 Blocks, also illustrates this facet. The guy is a has-been, alcoholic cop who's given a simple job — get a felon to the courthouse on time. The cop 'saves the cat' when he saves the felon's life early on, showing us that underneath the slob we see still beats the heart of a hero. Photo by Susan Yin on Unsplash The most common reaction to an unlikable main character is to stop reading. Who cares if he/she dies? Wins through? Changes on Page 299? If we're up to Page 20 and the character is awful or stupid or apathetic or depressing, we stop. Plenty more books out there. As writers, the last thing we want is a reader giving up on our book. The key questions to ask yourself as you write are: why is your main character unlikable? Was it intentional? If so, especially if you intend redemption later in the story, how can you show a glimmer of hope early in the story?
If it wasn’t intentional, I would suggest you yourself don’t much like your character — yet. You haven’t got a handle on them, you can’t get deep into their thoughts and feelings yet, and you don’t really know why they (and not another character) are in your story. Sometimes we have to write our way into loving our character/s, which takes time and work. But it’s work you need to do. You may find my Medium article on interviewing your character helpful. This question came up for me because of a book I’ve just read, The Watchman by Robert Crais. If you’re a Crais fan, you’ll know that his detective is Elvis Cole, whose sidekick is Joe Pike. Inscrutable, iron-faced, unfeeling Pike. Now Pike gets a book all of his own, with Cole as the back-up. If you want to read something where the main character is unemotional, cold-blooded, and acts like a machine, and then see how the writer gradually unpeels him, little by little, to reveal his vulnerable side, this is the book for you. Crais never overdoes it. All the way through, Pike remains the consummate soldier of fortune, able to kill without compunction when required. Yet every so often, we see a little crack of light, and even though most readers probably won’t finish the book “liking” Pike, I think they’ll understand him better and feel that empathy I mentioned. More importantly, you can use a book like this to examine how the writer does it. Underline or highlight every tiny instance where Crais shows us the vulnerable snippets of Pike. It’s not impossible to have an unlikable main character, but it takes a lot of thought and writing skill to make it work without putting the reader off. Learn from the books that do it well.
https://sherrylclark.medium.com/unlikeable-main-characters-should-we-write-them-974bf2ec7a55
['Sherryl Clark']
2019-05-12 07:28:55.614000+00:00
['Writing', 'Fiction Writing', 'Unlikable Character', 'Character Development', 'Character']
On the “Archigram-What-Organisation-You-Must-Be-Joking-Mate”
The bare facts are these. Six youngish men come together in various flats in Hampstead, London, in the early 1960s. They produce a magazine-like publication, Archigram, that lasts from 1961 to 1970 (roughly), and the firm that grows out of it, Archigram Architects, lasts until 1975. 900 drawings are produced along the way, yet, assessed in terms of built projects, they produce only a playground in Milton Keynes and a swimming pool for Rod Stewart. If that. And yet they influence architecture profoundly. Their work is the thing, and should be pored over time and time again (see refs. below), but the question here is whether their organisational structure aided this extraordinary state of affairs. The 'rock group' motif attached to Archigram is a little overplayed — generally the analogy goes that they were "the Beatles of architecture", a lazy comparison based around their perceived insouciance, iconoclasm and psychedelic visuals, exploding out of a then-stuffy trade. "A necessary irritant", as Barry Curtis called them. Firstly, they were of course far better than the wildly overrated Beatles. (Even musically: in the retrospective at the Design Museum a few years ago, the visitor was confronted with The Yes Album playing, from a messy mock-up of their studio, but it really should've been Ornette Coleman and Albert Ayler.) Secondly, the key point of difference is that they heavily influenced without making buildings. Could a band influence as much without releasing a record? In this, they were part of a tradition of un-built but visionary work that makes architecture and urbanism almost unique in design practice. So what set them apart was the publishing. That espoused a take on modernism informed by a generally positive reaction to the technology and media that had emerged, with necessary inventiveness, from WWII, a conflict that was still at the front of most people's minds, self-evident in the shattered cities around them. This optimism and invention is then allied to the 'post-scarcity' culture that emerges in the late '50s, as they cut and paste the space race onto colour telly and pop art and planned obsolescence, spray-painting structural engineering with beat poetry and Harold Wilson's 'white heat of technology', fusing Monty Python montage into avant-garde internationalist happenings in, wait for it, Folkestone. In pursuing the unbuilt, ephemeral, temporary and informational, they are precursors of a version of the 21st century (at least the one unaffected by peak oil). Their proposals for Instant, Walking or Plug-In Cities, Suitaloons and Living Pods were radical, fluid, malleable, intimate and transient: "tune up, clip on, plug in" into "rooms (that) expand infinitely. Our walls dissolve into impermeable mists or into the imagery of stories and fables …". Yet their own structure remained relatively solid. If not the band, the architectural practice was essentially their recognisable model, though that is usually just as rife with splits, egos, and partners flouncing out over non-musical differences. There seems to have been little of that in Archigram's dissolution. Only that a large scheme in Monte Carlo fell through, and their fabric couldn't stretch over the distance from Folkestone to Los Angeles, which is a long way geographically but even further culturally.
https://medium.com/dark-matter-and-trojan-horses/on-the-archigram-what-organisation-you-must-be-joking-mate-3ad809f1da8a
['Dan Hill']
2020-02-12 21:13:29.744000+00:00
['Management', 'Strategic Design', 'Architecture', 'Organisation', 'Design']
Learn how a Microsoft designer built an internal Icon Library in his spare time
Imagine a world where icons coexist no matter which design tool a UI/UX professional prefers. The icons live peacefully together, properly tagged and classified, easily searchable by everyone. One designer at Microsoft, Jackie Chui, built this utopia. In three weeks flat, he spun up a browser-based library with the company’s full set of 4,000 icons. He made searching intuitive by including class names for engineers and tags for designers. Engineers can even do a reverse search to grab metadata by pasting an icon into the library. Perhaps the best feature of them all is cross-tool functionality. Because Jackie’s library runs in the browser, anyone at Microsoft can bring the icons straight into their preferred design tool. No more arguing over which application should rule them all. If you feel like your icons live in a siloed design nightmare, take a look at Jackie’s pragmatic approach to building a shared icon library. How to set up your own Icon Library 1. Research the competitive landscape and your users’ needs First, like any designer worth their salt, Jackie did his preliminary research. He talked to several designers at Microsoft and watched their design processes in action to best understand their icon issues. Next, he looked for existing products on the market. The best Icon Library with organizing and copy/paste capabilities that he could find was IconJar, but it still lacked several features Microsoft needed: It didn’t have the ability to share and crowdsource tags. Jackie knew he wanted everyone to have the autonomy to contribute tags for icons. It wasn’t browser-based, meaning the icons couldn’t live in the cloud. It was Mac only — a big turn-off to the company that created Windows. He decided he’d need to build a tool from scratch to meet the company’s unique needs. He wanted to keep the project a secret and launch it as a fun surprise for his coworkers, so he worked on it after-hours. 2. Design and develop your tool with the wisdom of a designer and engineer As an early step, he built a basic version of his planned icon library’s UI in Sketch. (These days Jackie uses Figma — we’ve become the primary design tool for Microsoft’s Cloud and AI design studio teams 💪.) He took inspiration from IconJar’s design and mocked up a similar version, styling it in Microsoft’s Fabric design language. Once he was finished with the design, he needed to build it. He had some experience with HTML, CSS and JavaScript from creating his portfolio website, so he decided to leverage that existing knowledge by tapping a beginner-friendly JavaScript framework called Meteor.js. It allowed him to build both the front-end and back-end databases without spending too much time mastering a new programming language. He took a React tutorial on Meteor’s website and applied what he learned to build the icon library. For example, Meteor’s tutorial taught Jackie how to build a database for a to-do list, and he extended that concept to creating a database for his icon library. With a real problem in mind that needed fixing, he felt more than motivated to learn along the way. 3. Collect and extract the icons from the company’s repository For those not in the know, icons are normally stored in a font file. The challenge with icons is that you cannot ‘type’ the icons using your keyboard, since there are many more characters than keys on a keyboard. Instead, you have to copy the ‘icon character’ and paste it into your design to use it.
Previously, designers had to keep a file with a bunch of icon characters that they used often so they could copy and paste them into their designs. What Jackie’s tool does is allow designers to search and copy the ‘icon character’ easily in one central location. He downloaded and extracted the files and got the actual unicode characters of each icon. Next he found a Microsoft documentation page with all the icons and their corresponding class names used by engineers. He copied the list of icon names into an Excel spreadsheet, then converted that into a JSON file to use in his code (a step sketched below). Presto. 4. Distribution After three weeks of not sleeping a whole lot, the first version of his Icon Library was ready. He hosted the tool on Microsoft Azure, a cloud-computing service for building and managing services and applications that is open to the public. At this point he didn’t think too much about distribution; he just sent a simple email with a link to the tool to his team, then it spread to the rest of the design studio through word-of-mouth. Instantly it was a hit, judging by the pile of appreciative emails he received. Currently, Jackie’s working on a V2 of his Icon Library. His coworkers are helping him find bugs and suggesting new features; however, it’s already wormed itself into many designers’ workflows. Once he’s done building the last few features and fixing the remaining bugs, he’ll share it more publicly with other Microsoft teams to extend the tool’s impact. Plans for the future So, what’s next for our tireless overachiever? 😉 In the coming months Jackie wants to develop a converter that, with one click, could take all 4,000+ icons from his Icon Library and turn them into Figma components. This would allow everyone at Microsoft to organize and search for icons much more easily inside Figma without having to go through an intermediary tool. Yep, his new tool will make his last one obsolete. The potential for this goes far beyond Microsoft icons. Eventually Jackie plans to make his converter available to the design community, allowing anyone to drop in an icon font file and get all their icons back as Figma components.
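The spreadsheet-to-JSON step is simple enough to sketch. Below is a minimal Python illustration of turning an exported CSV of icon class names and unicode codepoints into a JSON index; the file names, column headings, and output shape are assumptions made for illustration, since the article does not describe Jackie's actual format.

import csv
import json

def build_icon_index(csv_path, json_path):
    # Each row is assumed to hold an engineering class name and a hex codepoint.
    icons = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            icons.append({
                "className": row["class_name"],               # searchable by engineers
                "character": chr(int(row["codepoint"], 16)),  # the copyable 'icon character'
                "tags": [],                                   # to be crowdsourced by designers
            })
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(icons, f, ensure_ascii=False, indent=2)

build_icon_index("icons.csv", "icons.json")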
https://medium.com/figma-design/learn-how-a-microsoft-designer-built-an-internal-icon-library-in-his-spare-time-1dfa5f84e886
['Valerie Veteto']
2018-08-16 19:53:07.542000+00:00
['Design', 'Editorial', 'Technology', 'UX']
A shot of coffee IS a shot of whiskey at 8am.
A shot of coffee IS a shot of whiskey at 8am. At least to me. I love it: coffee energizes me, I can think freely when drinking it, and I’m nicer to everyone. If that is not a whiskey effect, I don’t know what is.
https://medium.com/no-air/a-shot-of-coffee-is-a-shot-of-whiskey-at-8am-cafda776261
['Toni Crowe']
2019-11-07 16:23:47.556000+00:00
['Coffee', 'Writing', 'Short Story', 'Relationships', 'Friendship']
We have released our Non-fungible Token platform. At the same time, we are pleased to announce the release of its very first project, “Gene A.I.dols”, a Blockchain Dapp game to create your own pop star
We have released our Non-fungible Token platform. At the same time, we are pleased to announce the release of its very first project, “Gene A.I.dols”, a Blockchain Dapp game to create your own pop star using Artificial Intelligence (A.I.) from an artificial gene. We are releasing the Non-fungible Token platform in spring 2019. At the same time, we are also releasing its very first token economy project, the Blockchain Dapp game “Gene A.I.dols”, joining hands with alt Inc. (Headquarters: Chiyoda-ku, Tokyo, Japan / CEO: Kazutaka Yonekura) and DataGrid Inc. (Headquarters: Kyoto, Japan / CEO: Yuuki Okada). Starting with an ICO platform implementing DAICO, ICOVO has been developing a variety of token economy platforms. ICOVO plans to release the Non-fungible Token platform in spring 2019, and it will be used in the Dapp game “Gene A.I.dols”. In “Gene A.I.dols” you can create your own pop star by combining a one-of-a-kind appearance and voice. The appearance is generated from an artificial gene using a GAN (generative adversarial network), and the voice is generated using speaker adaptation technology. By breeding the artificial genes of a pair of pop stars, you can create a brand-new pop star whose appearance and voice pass on their intermediate genes. In addition, this service not only generates appearance and voice from an artificial gene but, conversely, also creates artificial gene information from existing facial images and human voices. Going further, you could create a brand-new pop star by breeding an artificial gene generated from an existing human with one from a pop star made by AI. Since your pop star is tokenized as an ERC721 standard token and recorded in an Ethereum smart contract, you can hand it over to third parties through compatible wallets (see the sketch at the end of this announcement). Looking ahead, we are also planning to provide services compatible with VR (virtual reality). Using each pop star’s vocal information and a conversation engine written on the artificial gene, chatting with your created pop stars through VR will also be realized in the game in the near future. The “Gene A.I.dols” project is a joint project of ICOVO, alt Inc. and DataGrid Inc., combining ICOVO’s overall service design and blockchain-related development, alt Inc.’s voice development and DataGrid Inc.’s appearance development. <Gene A.I.dols> We will release version 1.0.0 by spring 2019. This version enables pop star facial image creation and breeding. As the sales plan goals are achieved, additional functions will gradually be implemented. We will also support a range of additional payment methods, including credit card payments in Japanese yen and USD, together with the current Ethereum (ETH) and OVO (ICOVO’s token). Adding body structures, we also plan to offer a chat service with VR-compatible pop stars. 〈Gene A.I.dols Project〉 Project companies: ICOVO AG, alt Inc., DataGrid Inc. Official Website: https://gene-aidols.io/ Service Release: Spring 2019 Our service: Providing the Gene A.I.dols service 〈alt〉 alt Inc. CEO Kazutaka Yonekura 8F, UNIZO Higashi-Kanda 3-chome Building, 3–1–2 Higashi-Kanda, Chiyoda-ku, Tokyo, Japan Official Website: https://alt.ai/ Our services: alt Inc. (founded in November 2014) is strongly supported by experts at home and abroad in artificial intelligence research. In addition, it is working on the development of “P.A.I. (Personal Artificial Intelligence)” to turn personality into data on the cloud. 〈DataGrid〉 DataGrid Inc. CEO Yuuki Okada Kyoto University International Innovation West Building 1F, 36–1 Yoshida-honmachi, Sakyo-ku, Kyoto, Japan Official Website: https://datagrid.co.jp/ Our services: Development of creative AI; consulting on AI-related systems
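For readers curious what “tokenized as an ERC721 standard token” means in practice, here is a minimal read-only sketch in Python using the web3.py library. It simply asks an ERC721 contract who owns a given token; a transfer to another wallet would go through the standard safeTransferFrom function of the same interface. The RPC endpoint, contract address, and token ID are placeholders for illustration; they are not the actual Gene A.I.dols deployment.

from web3 import Web3

# Placeholders for illustration only; not the actual Gene A.I.dols deployment.
RPC_URL = "https://mainnet.infura.io/v3/YOUR_PROJECT_ID"
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
TOKEN_ID = 1

# Only the slice of the standard ERC-721 ABI needed to query ownership.
ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ERC721_ABI)
print(contract.functions.ownerOf(TOKEN_ID).call())  # prints the current holder's wallet address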
https://medium.com/icovo/icovo-have-released-non-fungible-token-platform-80f8ea729df6
['Akihiro Yamase']
2019-02-14 00:56:02.983000+00:00
['Ethereum', 'Non Fungible Tokens', 'Blockchain', 'Artificial Intelligence', 'English']
Finding concise answers to questions in enterprise documents
Authors: J. William Murdock, Avi Sil, Anastas Stoyanovsky, Christophe Guittet Photo from Mick Haupt at Unsplash: a compass, used here to symbolize the task of finding answers. Many business applications use some sort of search to find documents or passages. Some may also want to find answers within those documents or passages. In this article, we provide some examples of this task. We explain its relationship to cutting-edge research technologies. We then describe the answer finding beta capability in IBM Watson Discovery. We discuss ways of using that capability in a business application. We talk about limitations of the technology and plans to address those limitations. Finally, we discuss the availability of the capability and ask for feedback. The Answer Finding Task Consider the following question: What versions of Firefox does InfoSphere Information Server 1.3 support? An IBM support page answers this question by saying: If you open the InfoSphere Information Server Web Console with Internet Explorer 11, you may get the error message: IBM InfoSphere Information Server supports Mozilla Firefox (ESR 17 and 24) and Microsoft Internet Explorer (version 9.0 and 10.0) browsers. For some applications, finding this document might be enough to be useful. For many, though, it would be better to also emphasize or highlight the exact answer: “ESR 17 and 24”. The example above is an explicit question, but it could also be phrased as an implicit question: InfoSphere Information Server 1.3 Firefox versions This query is not grammatically a question, but it does have roughly the same meaning. So we would expect the same answer to this one too. The answer in these examples (“ESR 17 and 24”) is a literal string within the text. This is a defining characteristic of answer finding: it finds answers in the text itself. It does not create new answers by drawing inferences. For example, if you asked an answer finding system “What is 4 plus 7?” it would not be able to give you a correct answer unless that system had some text that explicitly said that 4 plus 7 equals 11. The text would not need to use those exact words, but it would need to say something to that effect. Research in Answer Finding Photo by Michael Longmire from Unsplash: a microscope, used here to symbolize research. Finding answers in a collection of text requires finding relevant chunks of text and finding answers within those chunks. These two subtasks are often addressed using separate technologies. Finding relevant text is generally referred to as search. Search is often done using Information Retrieval capabilities such as Apache Lucene. Information Retrieval typically involves counting how many times each word in the query matches the target text. Information Retrieval gives more weight to terms that appear infrequently in the collection of documents. The combination of how many terms matched and how infrequent each of the matching terms is determines the ranking of search results. Finding an answer within a single chunk of text (e.g. a paragraph) is sometimes referred to as “machine reading comprehension”. The task resembles the reading comprehension task that is common in standardized testing of children. The system gets a passage and a question and answers the question. With that said, most “reading comprehension” tests for children involve inferring an answer, not just finding an answer. So “machine reading comprehension” is an imperfect label for a component that finds an answer in a passage.
However, it is commonly used to describe technology of this sort in a variety of scientific publications. Popular data sets for testing such systems include the Google Natural Questions data set (for English) and the TyDi data set (for 10 other typologically different languages, including Bengali, Russian and Finnish, and even low-resource languages such as Swahili). You can learn more about the data sets and see which research systems are effective at those links. The leaderboards on the corresponding pages show which systems are doing the best on the data at a given time. GAAMA (an acronym for Go Ahead Ask Me Anything) from IBM Research, which is trained on top of a large multilingual language model called XLM-RoBERTa, is generally at or near the top of the ranking for finding short answers. Answer Finding in IBM Watson Discovery The answer finding capability in IBM Watson Discovery starts with search. It uses Information Retrieval technology to find documents and passages. Next, Watson Discovery calls its GAAMA model. Using that model, it extracts answers from the passages. Finally, it returns the documents, passages, and answers. You can see all the details of how to use the beta answer finding feature for IBM Watson Discovery v2 in the API documentation once it is published (which will be soon). Below, we provide just a brief example and introduction. API Example Here we provide an illustrative example. Consider the query: {"natural_language_query": "InfoSphere Information Server 1.3 Firefox versions", "passages": {"enabled": true, "max_per_document": 3, "characters": 850, "fields": ["title", "content"], "find_answers": true, "max_answers_per_passage": 1}} For this query, Watson Discovery first searches for documents related to “InfoSphere Information Server 1.3 Firefox versions”. Because passages.enabled is true, it then tries to find passages for each document. Because passages.max_per_document is 3, it finds at most 3 passages for each document. Because passages.characters is 850, the passages are roughly 850 characters long. The passages come from fields named “title” or “content”, based on the passages.fields parameter. Because passages.find_answers is true, Watson Discovery then tries to find answers in the passages (using its GAAMA model). Because passages.max_answers_per_passage is 1, it finds at most 1 answer in each passage. For this sample query, Watson Discovery returns a list of documents. Within each document, there is a list of passages. Within each passage, there is a list of answers. Here is an example of a passage within a document within this search result: {"passage_text": "<em>InfoSphere</em> <em>Information</em> <em>Server</em> Web Console with Internet Explorer 11, you may get the error message: IBM <em>InfoSphere</em> <em>Information</em> <em>Server</em> supports Mozilla <em>Firefox</em> (ESR 17 and 24) and Microsoft Internet Explorer (<em>version</em> 9.0 and 10.0) browsers.", "start_offset": 287, "end_offset": 526, "field": "content", "answers": [{"answer_text": "(ESR 17 and 24)", "start_offset": 446, "end_offset": 700, "confidence": 0.6925222}]} This passage object starts with the text of the passage. In the text, keyword matches to the query are emphasized. Next are the start and end offsets of the passage. After that is the field that the passage came from (which is “content” in this example). There is a list of answers, with one answer (because we set passages.max_answers_per_passage to 1 in the query). Each answer has text, offsets, and a confidence value.
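As a rough illustration of what issuing this query can look like from code, here is a Python sketch using the requests library against the Watson Discovery v2 query endpoint. The service URL, project ID, and API key are placeholders, and because the feature is in beta you should confirm the exact path and parameters against the API documentation once it is published.

import requests

# Placeholders; substitute your own instance details.
SERVICE_URL = "https://api.us-south.discovery.watson.cloud.ibm.com/instances/YOUR_INSTANCE_ID"
PROJECT_ID = "YOUR_PROJECT_ID"
API_KEY = "YOUR_API_KEY"

query = {
    "natural_language_query": "InfoSphere Information Server 1.3 Firefox versions",
    "passages": {
        "enabled": True,
        "max_per_document": 3,
        "characters": 850,
        "fields": ["title", "content"],
        "find_answers": True,
        "max_answers_per_passage": 1,
    },
}

response = requests.post(
    f"{SERVICE_URL}/v2/projects/{PROJECT_ID}/query",
    params={"version": "2020-08-30"},  # a dated API version string is required
    json=query,
    auth=("apikey", API_KEY),          # IBM Cloud services accept basic auth with the literal user "apikey"
)
response.raise_for_status()
results = response.json()["results"]   # documents, each with passages, each with answers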
The confidence ranges from 0 to 1 and is an estimate of the probability that the answer is correct. Business Applications Photo by Sebastien Gabriel from Unsplash: office buildings in San Francisco, symbolizing business. We envision two major classes of applications for this technology: search and document review. In a search application, a user provides a query, and that query is used to find documents and find answers within the documents. In a document review application, an end-user first finds a document that they want to review. That user then wants to get information from that document. For example, a user might first find a contract and then ask a question about that contract. To use answer finding for document review in Watson Discovery, add a filter to your query to restrict answers to the selected document. In general, we expect answer finding accuracy to be higher for document review use cases. For example, consider a search query like “What is the deadline for product delivery in the 2021 wholesale contract with SampleCo?” Within this query are two separate tasks: (1) finding the 2021 wholesale contract with SampleCo and (2) finding the deadline for product delivery within that contract. It is much easier for Watson Discovery if you separate these out into two steps. The user would first find the 2021 wholesale contract with SampleCo (perhaps by searching on the query “2021 wholesale contract with SampleCo” or perhaps by scrolling through a list of contracts). The user would then ask questions about this contract such as “What is the deadline for product delivery?”. Either approach can work and get correct answers. The document review approach is likely to get correct answers more often. However, the search approach may present a better, more convenient user experience. We recommend considering both options depending on the needs of your end users. Once you have found answers in your application, you need to show them to users. Typically answers make more sense within the context of the passage in which they were found. So for most applications, we recommend emphasizing the answer within the passage instead of showing the answer alone. In some cases, it can be useful to present the answer first in a larger font and then show the passage with the answer in it. If your application has extremely limited room to show answers (e.g., a smart-watch app), then showing the answer text alone might make sense. However, in that case, we would recommend that you only show such answers when Watson Discovery has extremely high confidence. Providing a wrong answer without any context is often a bad and confusing user experience. Any answer finding system will get some answers wrong. Even if you are showing the full context, we recommend discarding low confidence answers. When answer finding is enabled, Watson Discovery will almost always return answers in every passage. Often these answers have very low confidence and are almost certainly wrong. The decision of how much confidence you should have before showing an answer is very subjective and depends a lot on the details of an application. For that reason, Watson Discovery does not discard low confidence answers internally. It relies on the calling application to decide whether the confidence is high enough for some use. For more information about how to select a confidence threshold, see this article.
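Continuing the sketch above, discarding low confidence answers on the client side can be as simple as the following loop. The 0.3 threshold is an arbitrary value for illustration only, and the response field names follow the passage object shown earlier; verify them against the published API documentation.

CONFIDENCE_THRESHOLD = 0.3  # arbitrary illustrative cut-off; tune per application

for doc in results:
    for passage in doc.get("document_passages", []):
        for answer in passage.get("answers", []):
            if answer["confidence"] >= CONFIDENCE_THRESHOLD:
                # Show the answer in the context of its passage, as recommended above.
                print(answer["answer_text"], "--", passage["passage_text"])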
Limitations and Future Work Here are some key limitations of the answer finding capability in Watson Discovery: As noted earlier, the answer finding capability in Watson Discovery does not create answers, it only finds them. So if the answer is not stated in the text, it cannot find that answer. Answer finding is only useful for answering yes/no questions if there is some text to find that answers the question. For example, consider the question, “Should I add more insulation to my attic before winter?” If you have a document that says you should add more insulation before winter, answer finding can be useful for finding that statement. However, if you have a document that lists a set of advantages and disadvantages of doing so and does not draw a conclusion, answer finding will not help. Watson Discovery document and passage search may still be useful for finding that text, but there is not an answer in the text to find. The capability is generally not useful for queries that are not explicitly or implicitly asking a question. For example, a query like “IBM support” is a navigational query, where a user is trying to find a specific page, not get an answer. The answer finding capability is useful for questions where the answer is a single noun (e.g., “Who won Super Bowl XII?” — “Dallas Cowboys”) or a complex phrase or clause (e.g., “Why did the Cowboys win Super Bowl XII?” — “because their defense forced many turnovers and obliterated the Broncos passing game”). However, more complex questions that require passage- or document-length answers are not a good fit for answer finding. So if your users only ask questions that need very long answers, answer finding may not be useful. If your users ask a variety of questions and a significant fraction of them do require shorter answers, then answer finding is more likely to help. In those cases, you can often rely on the answer confidences to know which answers to show and which to ignore. Watson Discovery’s answer finding is not particularly good at finding answers within tables or complex lists. For example, if I ask “IBM’s 2014 gross revenue”, answer finding is likely to get the right answer if there is a sentence stating IBM’s gross revenue for the year. It is much less likely to get the right answer from a table showing a variety of financial results for a variety of years. Some of these limitations may be addressed in the relatively near future. For example, Watson Discovery already has technology for analyzing tables and finding tables that are relevant to a query. So we may be able to get improved answer finding from tables soon, especially if there is a lot of customer demand. Availability of answer finding in Watson Discovery Answer finding is currently available for beta testing in IBM Watson Discovery v2 on public cloud for Premium plan instances only. If you have a Watson Discovery v2 Premium instance, we encourage you to try it out and provide feedback. Depending on the feedback that we get from the beta test, we may also make it available on non-Premium public cloud instances. If the beta test is successful, we will also make it available for private cloud via IBM Cloud Pak for Data. If you are able to try it out and have feedback based on the results you get, please let us know by posting in the IBM Watson Discovery Community. Even if you are not able to try this out, we would be happy to get feedback on the concept.
https://medium.com/ibm-data-ai/finding-concise-answers-to-questions-in-enterprise-documents-53a865898dbd
['J William Murdock']
2020-12-21 16:17:04.799000+00:00
['Machine Learning', 'Artificial Intelligence', 'Natural Language', 'IBM', 'Ibm Watson']
Wearing a Mask While Running Sucks. You Might Have to Do It Anyway.
Rules and recommendations aside, what’s the scientific rationale for wearing a mask when working out in the outdoors, and what are the practical considerations? Before we get to those questions, let’s be very clear: a workout in public is never more important than public health. Exercise is especially important for physical and mental health during this anxious time, but it must be done in a way that doesn’t endanger others in your community, says Melissa Perry, a professor of environmental and occupational health at George Washington University. If we want to prevent the new coronavirus from spreading, everyone — whether or not they have symptoms — needs to take steps to avoid transmission, and that includes those who need their daily run through the park. The dos and don’ts of exercising in a mask Whether you’re required to wear a mask while working out or it’s your personal decision to do so, the mask needs to cover both your nose and mouth to be effective. It doesn’t need to be so tight that it’s pressing into your face, but it should be snug enough so that if you sneeze or cough nothing is projected outside of it, Perry says. The purpose of the mask isn’t to protect you from catching the virus; it’s to prevent the aerosols coming out of your nose and mouth from being projected at other people. After your workout, it’s very important to take the mask off by the ear loops, to avoid touching the part of the mask that accumulates respiratory droplets. Once you’ve been wearing a mask, think of it like a used tissue — you don’t want to touch it or leave it lying around. As soon as you’re done, the mask should go straight in the trash or the wash, Perry says. Never reuse a mask without washing it. Research suggests that cotton is the best fabric for a mask, and having more than one layer of fabric makes a mask more effective for everyday use. But for exercising, you’ll be best off using a face covering that isn’t so thick that you can’t easily breathe through it. The trick is balancing comfort with protection. “If you can see light through it, it probably is not providing the filtering protection you need,” Perry says. Some people have asked me whether wearing a mask might actually enhance the training benefit you get from exercise, by training your lungs or chest muscles to get used to working harder to inhale a sufficient breath. Another idea is that a mask might serve as a proxy for altitude training, by cutting down on the amount of oxygen getting to your lungs. But University of British Columbia exercise scientist William Sheel says this is wishful thinking: “I don’t see a physiological rationale here.” Altitude training works by putting you in a state of hypoxia, or oxygen deficiency, which in turn stimulates your body to increase its production of red blood cells. There’s no reason to think that breathing through a mask would replicate that effect, Sheel says.
Over the years, various mask products marketed at athletes have promised to replicate high-altitude training, but these are “one of the biggest scams ever imparted on the exercise community,” says Benjamin Levine, director of the Institute for Exercise and Environmental Medicine at UT Southwestern Medical Center and Texas Health Presbyterian Hospital in Dallas. “Yes, a mask might make it a little harder to breathe, but wearing a mask doesn’t change your oxygen at all.” The drawbacks Masks can have downsides too, especially when worn during vigorous exercise.
https://elemental.medium.com/wearing-a-mask-while-running-sucks-you-should-probably-do-it-anyway-90547d8eea19
['Christie Aschwanden']
2020-05-04 13:51:55.808000+00:00
['Fitness', 'Running', 'Coronavirus', 'Exercise', 'Life']
“STQ Team”: Interview with Dmitrii Mushchinskii
Selling and buying goods worldwide with cryptocurrencies. Choosing products only with honest reviews. We started Storiqa with a simple pack of ideas. Making the future of e-commerce ain’t that easy. We are working hard to achieve great success and reach the stars. We are the Storiqa team, and here are our stories. Read about us using the hashtag #stq_team and find out that you’re one of us! Meet Dmitrii Mushchinskii — Storiqa UX Product Designer. Dmitrii, could you please tell us a little about yourself? What do you usually do and what are you working on now? I started designing professionally 8 years ago. I developed game interfaces, worked as an art director, then worked for Nimax (a large digital agency in Saint-Petersburg), and created several global projects (similar to MailChimp, Adobe Spark). Now I’m a UX product designer at Storiqa, and it seems to be the most interesting turn in my career. Wow, such nice experience. In your opinion, how should the modern marketplace look? What trends do you follow in creating Storiqa’s own style? A marketplace should be clear for users. All in good time: everyone experimented in the 2000s and 2010s and users liked it, but now it’s different; I mean, now it’s time for standards. This is about users’ expectations: people have accepted a lot of conventions, they have a clear picture of how services should look and work, and they expect an intuitive design. Each button, each section, the system messages — all of these should be friendly and clear. Then how important is design for a marketplace at all? Should it be more neutral or more creative? Should it avoid distracting the user from the goods, or quite the opposite — be original and recognizable? There is a balance between creative and neutral in every complex platform. The market already knows what users like and what they hate. We can’t ignore that research. New and creative ideas should be there, but they shouldn’t complicate interaction. What marketplace elements are you especially focused on while developing a design? “On all of them” won’t be a satisfactory response, will it? A lot of attention is paid to key points where a user makes a decision (add to cart, make a payment). I attend to the unity of the design so that the user sees our corporate style all the way through the platform. What exactly have you found interesting in Storiqa? Storiqa is a very ambitious project! I joined the team almost at the very beginning, and I like seeing the development process in full, as well as protecting the ideas we set out. It is very tempting — developing a huge marketplace and the surrounding infrastructure, especially knowing that your leaders and other colleagues are also focused on the outcome. What are the most complicated challenges you have had to face? What exactly helped you resolve them? The most complicated task for me was to determine priorities and stick to the plan. I like getting stuck on an aesthetic task, indulging in perfectionism, constantly improving something. But if you think from the business point of view, you should get the fundamental things done first. It doesn’t mean that there is no place for beauty and aesthetics — it means you should keep a balance. Is there anything in your job (a fact, a short story, or maybe a task you had) that would be curious to our community? I often celebrate cheerful events by making a GIF. There is one with a TV, and a funny story plays on its screen. We call this the Storiqa TV Channel, and my colleagues do like it! “The Storiqa team is…”?
Could you describe it in a couple of words? What is the secret of a successful team, in your view?
https://medium.com/storiqa/stq-team-interview-with-dmitrii-mushchinskii-db3b78a48a6c
[]
2018-09-25 11:01:12.917000+00:00
['Ecommerce', 'Startup', 'Storiqa', 'Interview', 'Stq']
On Managing Your To-Do’s and Hobbies Effortlessly
1. Be specific on your To-Do list The To-Do list is not a lost cause — after all, it is a widely used productivity tool. To-Do lists do work if you use them the right way. I’ve been making personal To-Do‘s for the last 4 years, and I use the Google Keep app for it. Mine are not daily To-Do’s, but they are flexible. Whenever you are making a To-Do list: First, aim for three characteristics — one task per bullet, quantifiable, actionable. If every item in your list is a single task which is quantifiable and actionable, that’s a good beginning. Second, put the deadline as the title of that list. Third, keep the duration short. How short? A week is just right.
https://medium.com/hapramp/on-managing-your-to-dos-and-hobbies-effortlessly-23804ff81d02
['Rajat Dangi']
2018-07-07 06:08:39.854000+00:00
['Self Improvement', 'Productivity', 'Lifehacks', 'Personal Development', 'Hobby']
A Lesson in Satire (and Romance)
A Lesson in Satire (and Romance) According to a Seventeen-Year-Old May 10, 1999 I want to start this paper by letting everyone know nothing is more exciting than writing a three hundred and fifty-word essay. I can’t imagine doing anything more useful for a mind-splitting headache than writing a paper for English class. My favorite thing about people, guys, especially, is how they play little mind games with me. It is positively stimulating never to know what people will do or never knowing how they feel. Having an unpredictable relationship like that, to the point where I can’t tell a dream from reality, is such great joy for me. Never knowing the truth about how someone feels about me or where exactly our relationship stands are what I call living on the edge. There is no doubt that I’m talking from personal experience. This someone does exist in my life. Let’s call this someone Mike. I get so much joy out of Mike’s mood swings and how one day he is so great to me, and the next day we are total strangers. I get so much pleasure out of only sometimes knowing who he is! Mike and I will spend our Saturdays together but only when he can fit it/me into his busy schedule, and I especially love it when he has better things to do and would rather be somewhere else than with me. One of the best things has to be only on rare occasions does Mike call me, and I get more joy out of this than you will ever know! My other favorite thing is how Mike and I will get so close, have such deep conversations, and share such deep stuff that no one else knows on the weekends and then during the week, he will act like nothing ever happened! I think it would only be right to conclude this paper by telling everyone the most delightful and adorable thing that Mike does; he will treat me like a princess, kiss my hand like royalty, and make me feel like I am so incredibly special to him. Then I turn the corner, and he is doing the same thing to some other girl. Addendum: This is one of the most random things I’ve ever posted on here. Get used to it guys — I am letting loose at thirty-eight! This is a paper I wrote when I was a Senior in High School. I cringed when I read it (and again when I typed it out). Rediscovering this paper leads me to believe two things: 1.) my writing has improved a little, and 2.) I’ve settled for less than I should have in my relationships. No more.
https://medium.com/recycled/a-lesson-in-satire-and-romance-d62ab176dfab
['Divina Grey']
2020-12-14 00:54:30.525000+00:00
['Life Lessons', 'Self Improvement', 'Satire', 'Writing', 'Life']
Enabling collaborative, self-service analytics with Tableau and Salesforce
When I joined Slalom in June 2018, the first project I worked on was a Tableau implementation for a company named Elsevier. Elsevier is a global information analytics business that helps institutions and professionals advance healthcare, open science, and improve performance for the benefit of humanity. Elsevier wanted to empower their global sales teams with timely insights to make better data-driven decisions every day. The Tableau implementation project focused on designing a solution that would integrate data from multiple systems and serve it up through Tableau visualizations embedded in Salesforce. Working on this project stands out as a highlight of my time at Slalom, due to its multi-faceted nature and its revolutionary impact. Elsevier were on a path to becoming a more data-driven organisation when Slalom became involved, and this project accelerated the trajectory towards their vision. Background When Elsevier approached Slalom, they were in the middle of a Salesforce implementation. They quickly realized that there was a gap in their ability to provide their sales teams with actionable insight and ultimately realize their vision: “to put good quality data and great analytics at the heart of all decision making”. The sales teams were reliant on manually intensive, non-scalable processes for reporting. Elsevier did not have a single, consolidated view of key sales metrics due to the data being in various source systems. Furthermore, there was no single source of truth when it came to sales reporting, and separate areas of the business reported performance using different metrics. Improving data visualization and self-service reporting capabilities were key factors in realizing Elsevier’s vision. As a result, Elsevier selected Tableau as their data visualization tool and asked Slalom to build a solution that would provide insight into key sales metrics across the business. Project Overview The project scope was broad, as we focused on delivering the self-service solution to end users as well as up-skilling the previously inexperienced teams in Tableau. We worked closely with Elsevier’s Sales and IT teams to understand their business requirements and then architect solutions to deliver these to end users. Some of the key project objectives were: · Encourage and enable teams to work in a more collaborative manner, · Combine data from various source systems (including Salesforce) into a single datamart, · Up-skill members of the Elsevier team in Tableau and data visualization, · Embed insights into the day-to-day working life at Elsevier. Collaborative Working One of the key challenges we faced early in the project was the siloed nature of our stakeholders, who were spread across three different business units on three separate continents. Each of these business units had its own methodology for tracking and reporting sales metrics, meaning there was no standardized company-wide approach. We worked closely with the different business units to develop a set of uniform reporting requirements and metrics. We organised daily stand-ups, weekly show-and-tells and regular requirement gathering sessions. These enabled the business units to become more aligned and provided opportunities to share best practices and insights. Through these sessions, we were able to agree a conformed set of sales metrics that would be used for sales reporting throughout Elsevier. These metrics would form the basis of the datamart and Tableau dashboards that we would design and build.
Data Mart Design and Build The data required for the sales reporting resided in several source systems, including Salesforce, Siebel and an Oracle data warehouse. To ensure all of this data could be analysed and presented within Tableau, we designed and implemented a data mart. The data mart incorporated tools including Talend, Redshift and S3 to pull data from various source systems into a single data model (see Figure 1). Figure 1. Data Architecture Overview We created an extract of this data source and published it on Tableau Server, where it was refreshed daily. This allowed users to connect to this centralized data source and create their own reports and analytics using Tableau’s web authoring capabilities. We also designed and built six core Tableau dashboards based on the requirements gathered from the business units. These dashboards provided oversight of Sales Performance, the Opportunity Pipeline, Salesforce Activities and Forecasting. Upskilling the Team The Elsevier Sales Team had limited previous exposure to Tableau. Therefore, a key part of the project was to upskill both end users (who would be consuming the dashboards) and power users (who would become responsible for maintaining the dashboards). We used a combination of training, personalized handover sessions and a group hackathon to achieve this. These sessions ensured that upon completion of the project, Elsevier would be equipped with a workforce that could manage and enhance the dashboards we had developed. Since the first phase of the project, many of the training participants have applied their skills to develop their own standalone Tableau dashboards in response to other business requirements. When I check back in with the team, it’s fantastic to see how far they have progressed in their Tableau journey over the past several months. Self-Service Analytics, Delivered We were keen to bring analytics closer to the end users, so we embedded the Tableau dashboards within Salesforce. This created a seamless, user-friendly experience. For the first time, Elsevier could obtain a single view of all their key sales metrics, allowing leaders to make quicker, more intelligent decisions. Over 1,200 members of staff from Elsevier could access the Tableau dashboards we built by connecting to them directly from within Salesforce. The team are now using the dashboards to review sales performance, prioritize workload and make faster, more-informed decisions. Since the completion of this project, Slalom have worked alongside Elsevier to successfully adopt Tableau in several other areas of their business, including finance and marketing. It has been great to see how the Tableau-Salesforce project has acted as a springboard for further success stories at Elsevier. I can’t wait to see more clients realize the benefits of bringing together these two powerful technologies.
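To give a flavour of what one step of such a pipeline can look like, here is a minimal Python sketch that bulk-loads an exported file from S3 into Redshift with a COPY statement via psycopg2. The cluster endpoint, table, bucket, and IAM role are invented for illustration; Elsevier's actual jobs were orchestrated in Talend rather than hand-written like this.

import psycopg2

# Invented connection details; the real pipeline ran through Talend.
conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439,
    dbname="salesmart",
    user="etl_user",
    password="REDACTED",
)

with conn, conn.cursor() as cur:
    # Bulk-load a Salesforce opportunity export staged in S3 into the data mart.
    cur.execute("""
        COPY sales.opportunity_fact
        FROM 's3://example-bucket/exports/opportunities/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS CSV
        IGNOREHEADER 1;
    """)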
https://medium.com/slalom-data-analytics/enabling-collaborative-self-service-analytics-with-tableau-and-salesforce-763174e61d38
['Andrew Herman']
2019-09-20 18:36:16.414000+00:00
['Tableau', 'Salesforce', 'Data Visualization', 'Embedded', 'AWS']
The Best Gifts for Writers in 2020
The Best Gifts for Writers in 2020 Surprise the writer in your life with the perfect gift Photo by Mel Poole on Unsplash Are you looking for a great gift for the writer in your life? Or are you a writer who wishes your friends and family would get you a thoughtful gift that aligns with your passion as a writer? InspireFirst has you covered right here with our list of the best gifts for writers! Don’t go the easy route by slapping a gift card into an envelope and calling it a day. Be thoughtful with your gift choice. Buuuuuuuut if you decide to get a gift card for the writer in your life, get them a Barnes & Noble gift card, because writers love to read good books. Some of the following links are affiliate links and could pay us a small commission at no cost to you. Check out our Affiliate Disclaimer. Best Tech Gadgets for Writers Let’s start with some cool tech gadgets for writers. Electronics make cool gifts because they can make writing tasks more enjoyable, efficient, and convenient. Here are our favorite tech gadgets for writers: Photo by Aleksi Tappura on Unsplash Best Laptop for Writers Your friend or family member would love you forever if you took the time to research and purchase the best laptop for a writer. We dedicated a huge review article that breaks down the best laptops for writers. Here are our two favorite laptops for writers: Apple MacBook Air The Apple MacBook Air owns the crown as the best laptop for writers because of its speed, durability, and almost 12 hours of battery life! ASUS Chromebook Flip C434 2-in-1 Laptop This ASUS Chromebook is the best laptop for bloggers because it’s lightweight, has the super-fast and easy-to-use Chrome OS, possesses elite battery life, and delivers seamless connection to Google Docs and Google Cloud. Photo by Tomasz Gawłowski on Unsplash Headphones for Writers A good pair of noise-canceling headphones helps writers focus during their writing process. Brands like Sony, Bose, Skullcandy, JBL, and of course Apple Beats are all in the competition for the writers’ demographic. Here are our favorite noise-canceling headphones for writers: Bose QuietComfort 35 II Wireless Bluetooth Headphones Sony WH1000XM4/B Bluetooth Noise Cancellation Wireless Over-Ear Headphones Time Management Gadgets Time management is important in order for writers to be productive in their craft. You or the writer in your life needs to write when they’re supposed to write and generally organize their life optimally. Why? Because writing requires a lot of focus, and diligently sticking to a schedule is the best way to dedicate time for focused writing sessions. Here are our favorite time management gifts for writers: Time Timer Audible Countdown Timer Use the Time Timer Audible Countdown Timer to do focused writing sessions. Use it to do the Pomodoro technique or any other time management technique that’s helpful for writers. This cool clock operates quietly, which is good for those who are easily distracted. The Time Timer is also a great timer for helping people with ADHD. Purchase the Time Timer Audible Countdown Timer Best Software and Services for Writers At InspireFirst, we utilize some of the best SaaS (software as a service) applications to keep us efficient and productive. Writers, authors, and bloggers need the proper tools to build a professional brand, one that consists of quality content, an informative website, and a clean and attractive visual brand. This is when good software comes to the rescue.
Grammarly Premium Grammarly is a dream for writers who want to boost their content’s quality grammatically and stylistically. We covered Grammarly Premium in-depth here at InspireFirst. Grammarly Premium would make a great gift for a writer because it will help them improve their content for their readers’ enjoyment. We use Grammarly Premium to review every letter of our content. Make Grammarly a gift for the writer in your life! Photo by freestocks on Unsplash Audible and Kindle Unlimited Good writers love good books. An Audible subscription (audiobooks) and a Kindle Unlimited subscription (unlimited reading of over 1 million eBooks) will give the writer in your life unlimited access to their favorite books to keep them inspired and entertained during their downtime. Rev Audio Transcription What if you need to record and transcribe audio as part of your writing duties? Recording and transcribing audio is important for journalists and podcasters. If you want to be a solid friend, give Rev Audio Transcription to the writer in your life. Learn more about audio transcription services. Asana We love Asana because it helps us stay organized and on top of our tasks. The Asana project management platform will help you or the writer in your life know what to work on and when it needs to be completed. And it can be used on a laptop, desktop, tablet, or mobile device. This is why we believe an Asana subscription would be a great gift for a writer. Best Notebooks and Journals for Writers Photo by Jess Bailey on Unsplash A good notebook or journal for ideas and brainstorming can help writers overcome a pesky case of writer’s block. A beaten and worn notebook is a well-used and purposeful notebook. Give your loved one the opportunity to wear out a good notebook! Get your friend a notebook that’ll last them — one that is of sustainable material and character. Here’s a list of the best notebooks for writers (in our humble opinion): Paperage Field Notes AndSoTheyMade Personalised Notebook Lightning Design Store Pens for Writers Photo by Jess Bailey on Unsplash I know it may seem old-fashioned, but fountain pens and ballpoint pens make great gifts for writers. Just like there’s a whole community of photographers who find comfort in shooting with film/analog cameras, there are many writers who do not write on laptops and tablets but with pen and paper. You may be thinking, “A pen is a pen. As long as it functions, you should be capable of knocking out some good work.” However, there’s just something about a good fountain pen or ballpoint pen. If you want to purchase a great ballpoint pen for your writer friend, we’ve got you covered. You can go super fancy with a Montblanc Meisterstuck Platinum Line Classique ballpoint pen. Montblanc pens can cost a pretty penny, but their value is undeniable. They can last up to 7 or 8 years! Just don’t lose it! Geesh. I understand if you want to find a more affordable pen for your writer friend. In this case, you can’t go wrong with the PILOT Precise V5 RT Refillable & Retractable Liquid Ink Rolling Ball Pens (12-Pack) or the Schneider Slider Rave XB Ballpoint Pen (Box of 5). Bags and Backpacks Photo by Lina Verovaya on Unsplash If not working from home at a desk or in bed, many writers travel around to find that perfect spot for inspiration. Writers like to get out of the comfort of their homes and write from other places — maybe a coffee shop, a library, or at the park.
For these kinds of outings, it would be extremely helpful for the writer in your life to have a great bag or backpack. There are a number of nice options to consider. There’s the messenger bag, the tote bag, and the good ol’ bookbag. Most backpacks come with laptop cases nowadays, and if you’re looking for gift ideas for a writer in the form of a bag, you’ve got to know your writer. Do they use a laptop? Do they prefer a pen and pad? This will help when buying a bag for your writer friend. Here are our favorite bags for writers: Wxnow Tote Cloele Tote Kattee Store Leather Tote Herschel Backpack Mactso Messenger Rustic Town Messenger Need More Help? We’ve given you a good number of ideas for great gifts for writers. They make good gifts for aspiring writers too…maybe just the right gift from this list will inspire them to write their first article or book! A lot of the tools and supplies that writers value aren’t common knowledge. It’s my hope that this list was helpful for you and will be a blessing for the writer in your life. But maybe there’s something else that you’re looking for. Join our writers’ community on Facebook and ask the community what they think would be a great gift for a writer. Lastly, if you’re a writer and you like some of the items that I listed above, share this article with your family and friends so they can get you a great Christmas gift or birthday gift.
https://medium.com/inspirefirst/the-best-gifts-for-writers-in-2020-1c20493f0d08
['Christopher Luxe']
2020-11-17 19:55:48.198000+00:00
['Shopping', 'Gifts', 'Writers', 'Writing', 'Christmas']
How I Managed My Quarantine Routine
Photo by Artem Riasnianskyi on Unsplash Since I was to quarantine alone for ten days, I knew that it might get a little lonely. I see people complaining about quarantine time locked up with their kids and friends and saying that it can get unbearable. I always feel that quarantining alone is a skill you can hone to get a better productivity flow. However, all of us think about the boredom that can come with being alone. I realized from my unique situation that this fear of being alone is not a direct result of this pandemic; it has been with us for millennia. Sociologists argue that solitude forces us to come to terms with ourselves, and ironically enough, people are often not ready for this process. Staying at Sollagos accommodations, I immediately decided I wanted to know the effects of loneliness on pandemic survivors. A quick Google search informed me that people who are completely away from their family and friends are at a greater risk of contracting the virus. I felt lucky that I had my colleagues keeping tabs on me, and that I was not entirely on my own. My Initial Response Quarantining alone meant that I could not randomly leave my accommodations, so I had to get creative with connectivity. The first thing I tried to fix in my schedule was to strengthen my social connections. I had decent data service available, and I rang up my parents and friends to tell them that I was okay. I recommend anyone who is isolated right now because they have contracted the virus to keep a close check on their communications. Try to take inspiration from this opportunity to get closer to those who mean something to you. Why Have a Routine Here is why I found it essential to have a routine during my time in the self-imposed quarantine: Since it is human nature to sort tasks so as to minimize uncertainty, I found it utterly primal to have my schedule written down as a list A routine helps you be more efficient, and spending the better part of my day alone can get distracting if I have nothing to do Routine increases productivity. Plenty of sociological research has reached this conclusion, and I needed to boost my productivity with the amount of time I had Routine during quarantine has been shown to decrease anxiety and depressive thoughts Setting A Routine Here is how I came to find a niche for all my tasks during my time in the self-imposed quarantine: Waking Up I tried to fix a morning time to wake up every day. I decided 9 AM was the best time to get out of bed and start my day. This gave me ample time to catch up on my sleep and made me feel well-rested, fresh, and ready to take the day by the horns. Training I read online that physical training or exercising in the morning has fantastic health benefits. It not only strengthens your muscles and posture and refreshes you; it is an excellent immunity booster — as little as a 15-minute walk has shown promising results in increasing body immunity. I work out for about 20 minutes every day. The training includes a variety of body exercises that target multiple muscles. Photo by Sebastian Pociecha on Unsplash Getting Ready After exercising, I hit the shower to freshen up and get rid of the sweat. After that, I have my breakfast. Marcelo makes sure that I get plenty of greens, including plenty of fruits and nuts, at breakfast, and I am incredibly thankful to him for taking such good care of me. I prefer to eat a healthy breakfast that can help get me back on my feet.
Since I am not down with any symptoms, I try to keep myself busy, sometimes by taking time for some cooking. Working I plan my work schedule right after breakfast because I feel fresh by then, and I have a fair amount of work, so I head on to it. I have a habit of jotting down all of my tasks in order, clearly, on a clean paper pad. I stick by this to-do list every day. Photo by Mikey Harris on Unsplash Lunch Around 1 PM, I make pasta and eat it on the balcony outside. It is the same every day, but that little time out on the balcony helps me get some sense of normalcy. I can see one or two people moving about, some cars moving around, and birds flying above my head, and for a moment, I can pretend that things are normal again. It helps me realize that we have many things in life that we take for granted. It makes me humbled and appreciative of my surroundings. Evening Routine After lunch, I work till 6 PM and call it a day. After this time, my evenings and a small portion of the night include binging on YouTube videos and various online courses. Task Distribution I made it a rule that I would break each activity down to the minimum level. I researched online and found that doing things this way helps micromanage the daily tasks and keeps overwhelming feelings away. I even tried putting small break chunks between the work time and labeled them with the relaxing activity that I would perform during that time. I found a perfect planner app as well, among the many such tools at your disposal online. I like to keep things old-fashioned, so I prefer pen and paper. Many other people put their lists on those app platforms and share them with their close friends and peers to maintain a sense of accountability. You can color code the tasks in those apps, put them in a particular order using labels, and pull them out in a specific category with just a click of a few buttons. Some of these apps include Todoist and Habitica. About Online Courses and Other Activities I devoted a great deal of my free time after finishing work to learning new things. My greatest passion after traveling is photography, so I found a few great Coursera courses on beginner photography. I found some graphic design courses that helped me as well. Other courses that I took up, for free or for payment, included coding courses (e.g., Python language courses), health, learning more about the pandemic and precautions against it, and time management. I also took some valuable tips from travel vlogs and blogs about documenting my experiences effectively. In addition to that, I looked online for workout videos, physical fitness videos, and video courses for learning new musical instruments. I had always wanted to play the keyboard, so I took some rudimentary lessons from YouTube. I also learned a few things to cook, mostly over the weekends when I would not be working. I tried my hand at Acara, the local breakfast delicacy. Puff puffs were comparatively easier to make. There was another dish called Dambu Nama, for which Marcelo provided me the recipe, but I just could not make it right. What I Learnt from My Quarantine Routine I found out that managing my daily routine in quarantine was much like being a student again. I could get extra sleep but could not afford to sleep off the day. Not having any symptoms eased the process further. I would get breaks in between, but my main goal would be to finish the day’s tasks.
After I was done with work, I could enjoy and learn new hobbies or skills. This freedom of choice made me understand how important it is to have a slotted time for yourself, even during regular times, because it relaxes the mind. Another important thing I learned is that your goals should not overwhelm you. Set daily goals that you can finish within the time slot you have available. You can also put your time to good use by providing your services instead of just consuming them. I have heard that many youth programs in Portugal specifically encourage young people to join knowledge groups. These groups organize digital scholarly symposiums where people from different walks of life convene to discuss specific education topics. I also learned what it means to serve the greater good by staying away from the general public for a few days. Studies have shown that people who self-quarantine reduce the chance of infecting others by 44 to 96%, a highly impressive success rate for such a simple yet effective measure. In Conclusion The pandemic's uncertainty can bum anyone out, and it is only natural to feel this way. To avoid getting rolled over by your negative thoughts, it is always better to keep yourself busy. This is why maintaining a routine during quarantine days helps people keep a sense of purpose. You can gain a great deal during these times and later use it to your advantage; for example, you can learn new skills for your job. Thus, having a routine is very important, as it helps you stay focused and keep distracting thoughts at bay.
https://medium.com/illumination/how-i-managed-my-quarantine-routine-in-lagos-portugal-69a6cc122c4f
['Changwon C.']
2020-12-01 20:08:01.015000+00:00
['Covid 19', 'Lagos Portugal', 'Quarantine', 'Coronavirus', 'Coronavirus Update']
Create Scaffold with Laravel 5.7 — Add Core UI Template (Part 2)
Table of Contents Add Dependencies Open your terminal and type the following commands to add the dependencies:
# Enter the project folder
cd laravel-scaffold
# Install the CoreUI package
npm install @coreui/coreui --save
# Install Font Awesome
npm install @fortawesome/fontawesome-free --save
# Install Simple Line Icons
npm install simple-line-icons --save
Import JS and CSS Open the resources/js/bootstrap.js file and add the following lines: try { window.$ = window.jQuery = require('jquery'); require('bootstrap'); require('@coreui/coreui'); } catch (e) {} Open the resources/sass/app.scss file and add the following lines: // Bootstrap @import '~bootstrap/scss/bootstrap'; // Icons @import '~simple-line-icons/css/simple-line-icons.css'; @import '~@fortawesome/fontawesome-free/css/all.min.css'; // Coreui @import '~@coreui/coreui/dist/css/coreui.min.css'; // styles @import 'styles.scss'; Create Layouts Open the resources/views/layouts/app.blade.php file and replace its markup with the following: <body class="app header-fixed sidebar-fixed aside-menu-fixed sidebar-lg-show"> <div id="app"> @include('layouts.header') <div class="app-body"> @include('layouts.sidebar') <main class="main"> <div class="container-fluid"> <div class="animated fadeIn"> @yield('content') </div> </div> </main> </div> </div> </body> Create the resources/views/layouts/header.blade.php file and add the following code: <header class="app-header navbar"> <button class="navbar-toggler sidebar-toggler d-lg-none mr-auto" type="button" data-toggle="sidebar-show"> <span class="navbar-toggler-icon"></span> </button> <a class="navbar-brand" href="{{ url('/') }}"> <img class="navbar-brand-full" src="svg/modulr.svg" width="89" height="25" alt="Modulr Logo"> <img class="navbar-brand-minimized" src="svg/modulr-icon.svg" width="30" height="30" alt="Modulr Logo"> </a> <button class="navbar-toggler sidebar-toggler d-md-down-none" type="button" data-toggle="sidebar-lg-show"> <span class="navbar-toggler-icon"></span> </button> <ul class="nav navbar-nav ml-auto mr-3"> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false"> <img class="img-avatar mx-1" src="{{Auth::user()->avatar_url}}"> </a> <div class="dropdown-menu dropdown-menu-right shadow mt-2"> <a class="dropdown-item"> {{ Auth::user()->name }}<br> <small class="text-muted">{{ Auth::user()->email }}</small> </a> <a class="dropdown-item" href="/profile"> <i class="fas fa-user"></i> Profile </a> <div class="divider"></div> <a class="dropdown-item" href="/password"> <i class="fas fa-key"></i> Password </a> <div class="divider"></div> <a class="dropdown-item" href="{{ route('logout') }}" onclick="event.preventDefault(); document.getElementById('logout-form').submit();"> <i class="fas fa-sign-out-alt"></i> {{ __('Logout') }} </a> <form id="logout-form" action="{{ route('logout') }}" method="POST" style="display: none;"> @csrf </form> </div> </li> </ul> </header> Create the resources/views/layouts/sidebar.blade.php file and add the following code: <div class="sidebar"> <nav class="sidebar-nav"> <ul class="nav"> <li class="nav-item"> <a class="nav-link active" href="/dashboard"> <i class="nav-icon icon-speedometer"></i> Dashboard </a> </li> <li class="nav-title">Settings</li> <li class="nav-item"> <a class="nav-link" href="#"> <i class="nav-icon icon-user"></i> Users </a> </li> <li class="nav-item">
<a class="nav-link" href="#"> <i class="nav-icon icon-lock"></i> Roles </a> </li> </ul> </nav> <button class="sidebar-minimizer brand-minimizer" type="button"></button> </div> Create the resources/views/layouts/auth.blade.php file and add the following code: <!DOCTYPE html> <html lang="{{ str_replace('_', '-', app()->getLocale()) }}"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <!-- CSRF Token --> <meta name="csrf-token" content="{{ csrf_token() }}"> <title>{{ config('app.name', 'Laravel') }}</title> <!-- Scripts --> <script src="{{ asset('js/app.js') }}" defer></script> <!-- Fonts --> <link rel="dns-prefetch" href="https://fonts.gstatic.com"> <link href="https://fonts.googleapis.com/css?family=Nunito" rel="stylesheet" type="text/css"> <!-- Styles --> <link href="{{ asset('css/app.css') }}" rel="stylesheet"> </head> <body> <div id="app" class="app flex-row align-items-center"> <div class="container"> <div class="row justify-content-center"> @yield('auth') </div> </div> </div> </body> </html> Update Views Open the resources/views/auth/login.blade.php file and replace its contents with the following: @extends('layouts.auth') @section('auth') <div class="col-md-8"> <div class="card-group"> <div class="card"> <div class="card-body p-5"> <div class="text-center d-lg-none"> <img src="svg/modulr.svg" class="mb-5" width="150" alt="Modulr Logo"> </div> <h1>{{ __('Login') }}</h1> <p class="text-muted">Sign In to your account</p> <form method="POST" action="{{ route('login') }}"> @csrf <div class="input-group mb-3"> <div class="input-group-prepend"> <span class="input-group-text">@</span> </div> <input id="email" type="email" class="form-control{{ $errors->has('email') ? ' is-invalid' : '' }}" name="email" value="{{ old('email') }}" placeholder="{{ __('Email Address') }}" required autofocus> @if ($errors->has('email')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('email') }}</strong> </span> @endif </div> <div class="input-group mb-3"> <div class="input-group-prepend"> <span class="input-group-text"> <i class="icon-lock"></i> </span> </div> <input id="password" type="password" class="form-control{{ $errors->has('password') ? ' is-invalid' : '' }}" name="password" placeholder="{{ __('Password') }}" required> @if ($errors->has('password')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('password') }}</strong> </span> @endif </div> <div class="input-group mb-3"> <div class="form-check"> <input class="form-check-input" type="checkbox" name="remember" id="remember" {{ old('remember') ?
'checked' : '' }}> <label class="form-check-label" for="remember"> {{ __('Remember Me') }} </label> </div> </div> <div class="row"> <div class="col-4"> <button type="submit" class="btn btn-primary px-4"> {{ __('Login') }} </button> </div> <div class="col-8 text-right"> <a class="btn btn-link px-0" href="{{ route('password.request') }}"> {{ __('Forgot Your Password?') }} </a> </div> </div> </form> </div> <div class="card-footer p-4 d-lg-none"> <div class="col-12 text-right"> <a class="btn btn-outline-primary btn-block mt-3" href="{{ route('register') }}">{{ __('Register') }}</a> </div> </div> </div> <div class="card text-white bg-primary py-5 d-md-down-none"> <div class="card-body text-center"> <div> <img src="svg/modulr.svg" class="mb-5" width="150" alt="Modulr Logo"> <h2>{{ __('Sign up') }}</h2> <p>If you don't have an account, create one.</p> <a class="btn btn-primary active mt-2" href="{{ route('register') }}">{{ __('Register Now!') }}</a> </div> </div> </div> </div> </div> @endsection Open the resources/views/auth/register.blade.php file and replace its contents with the following: @extends('layouts.auth') @section('auth') <div class="col-md-6"> <div class="card mx-4"> <div class="card-body p-4"> <h1>{{ __('Register') }}</h1> <p class="text-muted">Create your account</p> <form method="POST" action="{{ route('register') }}"> @csrf <div class="input-group mb-3"> <div class="input-group-prepend"> <span class="input-group-text"> <i class="icon-user"></i> </span> </div> <input id="name" type="text" class="form-control{{ $errors->has('name') ? ' is-invalid' : '' }}" name="name" value="{{ old('name') }}" placeholder="{{ __('Name') }}" required autofocus> @if ($errors->has('name')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('name') }}</strong> </span> @endif </div> <div class="input-group mb-3"> <div class="input-group-prepend"> <span class="input-group-text">@</span> </div> <input id="email" type="email" class="form-control{{ $errors->has('email') ? ' is-invalid' : '' }}" name="email" value="{{ old('email') }}" placeholder="{{ __('Email Address') }}" required> @if ($errors->has('email')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('email') }}</strong> </span> @endif </div> <div class="input-group mb-3"> <div class="input-group-prepend"> <span class="input-group-text"> <i class="icon-lock"></i> </span> </div> <input id="password" type="password" class="form-control{{ $errors->has('password') ?
' is-invalid' : '' }}" placeholder="{{ __('Password') }}" name="password" required> @if ($errors->has('password')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('password') }}</strong> </span> @endif </div> <div class="input-group mb-4"> <div class="input-group-prepend"> <span class="input-group-text"> <i class="icon-lock"></i> </span> </div> <input id="password-confirm" type="password" class="form-control" name="password_confirmation" placeholder="{{ __('Confirm Password') }}" required> </div> <button type="submit" class="btn btn-block btn-success btn-primary"> {{ __('Create Account') }} </button> </form> </div> <div class="card-footer p-4"> <div class="row"> <div class="col-12"> <a class="btn btn-outline-primary btn-block" href="{{ route('login') }}">{{ __('Login') }}</a> </div> </div> </div> </div> </div> @endsection Open the resources/views/auth/passwords/email.blade.php file and replace its contents with the following: @extends('layouts.auth') @section('auth') <div class="col-md-6"> <div class="card mx-4"> <div class="card-body p-4"> <h1>{{ __('Reset Password') }}</h1> <p class="text-muted">Reset your password</p> <form method="POST" action="{{ route('password.email') }}"> @csrf <div class="input-group mb-4"> <div class="input-group-prepend"> <span class="input-group-text">@</span> </div> <input id="email" type="email" class="form-control{{ $errors->has('email') ? ' is-invalid' : '' }}" name="email" value="{{ old('email') }}" placeholder="{{ __('Email Address') }}" required> @if ($errors->has('email')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('email') }}</strong> </span> @endif </div> <button type="submit" class="btn btn-primary"> {{ __('Send Password Reset Link') }} </button> @if (session('status')) <div class="alert alert-success mt-4" role="alert"> {{ session('status') }} </div> @endif </form> </div> </div> </div> @endsection Open the resources/views/auth/passwords/reset.blade.php file and replace its contents with the following: @extends('layouts.auth') @section('auth') <div class="col-md-6"> <div class="card mx-4"> <div class="card-body p-4"> <h1>{{ __('Reset Password') }}</h1> <p class="text-muted">Reset your password</p> <form method="POST" action="{{ route('password.update') }}"> @csrf <input type="hidden" name="token" value="{{ $token }}"> <div class="input-group mb-3"> <div class="input-group-prepend"> <span class="input-group-text">@</span> </div> <input id="email" type="email" class="form-control{{ $errors->has('email') ? ' is-invalid' : '' }}" name="email" value="{{ old('email') }}" placeholder="{{ __('Email Address') }}" required autofocus> @if ($errors->has('email')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('email') }}</strong> </span> @endif </div> <div class="input-group mb-3"> <div class="input-group-prepend"> <span class="input-group-text"> <i class="icon-lock"></i> </span> </div> <input id="password" type="password" class="form-control{{ $errors->has('password') ?
' is-invalid' : '' }}" placeholder="{{ __('Password') }}" name="password" required> @if ($errors->has('password')) <span class="invalid-feedback" role="alert"> <strong>{{ $errors->first('password') }}</strong> </span> @endif </div> <div class="input-group mb-4"> <div class="input-group-prepend"> <span class="input-group-text"> <i class="icon-lock"></i> </span> </div> <input id="password-confirm" type="password" class="form-control" name="password_confirmation" placeholder="{{ __('Confirm Password') }}" required> </div> <button type="submit" class="btn btn-primary"> {{ __('Reset Password') }} </button> </form> </div> </div> </div> @endsection Finally, compile the assets with Laravel Mix using the following commands: npm run dev # or: npm run watch Now, when you visit the following routes at localhost:3000 in your browser, you should see the new screens: http://localhost:3000/login http://localhost:3000/register http://localhost:3000/password/reset http://localhost:3000/password/reset/07a37b17 http://localhost:3000/dashboard
https://medium.com/modulr/create-scaffold-with-laravel-5-7-add-core-ui-template-part-2-d5263da689bb
['Alfredo Barron']
2018-11-09 17:27:21.572000+00:00
['Authentication', 'Template', 'Vuejs', 'PHP', 'Laravel']
What the Global Coronavirus Pandemic Can Teach Designers About Designing for Behavior Change
How did you get into behavior change design? Dr. Amy Bucher. When I finished my PhD in psychology, I knew I didn't want an academic job. At the time, there wasn't really anybody doing design psychology. I was very interested in healthcare, so I joined a healthcare startup and was applying psychology to my work, getting a lot of experience in health and design. Behavior change design is so powerful, and I've worked on projects designing for things like medication adherence, eating well, and cognitive behavioral therapy (CBT) for sleep habits. In terms of becoming a behavior change designer, there's really no blueprint. One of the reasons I wrote a book was to take the academic knowledge out there and put it into designers' hands so that they could use it in their work. What I didn't intend when I wrote 'Engaged: Designing for Behavior Change' was its immediate relevance to current events and the situation we all find ourselves in with physical distancing. Behavior change principles can help us to understand what's going on during the pandemic, and why some people have been slow to adopt physical distancing. What are the key principles designers need to understand? There are two important concepts at a high level that designers need to understand about behavior change — how motivation works, and the three basic psychological needs. The first concept is the psychology of how motivation works. Motivation is what drives us to do certain things. A lot of us pick up the idea that motivation is either intrinsic (coming from inside ourselves) or extrinsic (coming from an outside source), and then the idea that motivation is either high or low. Self-determination theory tells us that motivation really exists on a continuum, with intrinsic and extrinsic at the ends of the spectrum. Intrinsic motivation is the strongest type of motivation to do something, and external motivation is the weakest. Identified and integrated motivation also exist along this spectrum. Identified motivation is about connecting a behavior to a goal you really value, and integrated motivation is where you think of yourself in a certain way, and the behavior is related to that identity. This visual shows how self-determination theory places motivational quality on a spectrum. Image credit Amy Bucher SlideShare. The second concept is the idea of basic psychological needs. What the research tells us is that there are three basic psychological needs that humans have: autonomy, competence, and relatedness. Autonomy is about being able to make meaningful choices about our own needs. Competence is about interacting with your environment and seeing that you're making a difference. Relatedness has to do with connection. We all have a need to feel related and connected to others. Even for people who identify as introverts, this is a fundamental need. Motivation has three key levers, informed by the basic psychological needs of autonomy, competence and relatedness. These theories were developed by Ryan and Deci in 2000. Image credit Amy Bucher SlideShare. How does behavior change relate to the COVID-19 pandemic? To see the principles of behavior change in action, you don't need to look any further than the current situation across the globe, as people have been asked to change their behavior. In North America, it felt like the situation escalated pretty suddenly — people were asked to change their daily behavior in a way that, for many, felt like they needed to give up everything they enjoy.
We’re asking people to radically change their behavior through physical distancing, staying home, not going to work, closing businesses, not socializing, and so on. The request to self-isolate challenges people’s basic psychological needs. Part of what’s very challenging from a psychological perspective is that these behavior modifications are imposed, which threatens our sense of autonomy. It also threatens our sense of competence, because as humans we want to do something and then see a result. However, during the pandemic, we are being asked to essentially do nothing, and if we’re lucky, nothing much changes, and we don’t get sick and those around us don’t get sick. Finally, of course, physical distancing and staying home greatly threaten our sense of connection and relatedness to others. How can designers apply a behavior change perspective during the pandemic? What’s interesting about these basic psychological needs is that researchers have studied them globally in lots of different places, and the findings hold up in many different contexts. I love the idea that with this understanding of these basic needs, you can design for a really diverse audience. This means these principles will be relevant to people globally during this pandemic. For designers, understanding the principles of behavior change design can help us during this crisis, both in our professional and personal lives. When we apply these principles, we understand that people (ourselves and others) need the following at this time: We need to feel like we matter (autonomy): Can I make my own choices? Is anyone thinking about me and what I need? How can I express myself to the world within these constraints? We need to feel effective (competence): If I make this sacrifice, will it make a difference? Do I have any resources, skills, or abilities that are particularly helpful right now? How do I avoid feeling like life is on hold? We need to feel connected (relatedness): What is everyone else doing? How can I maintain my interactions over distance? How can I give and receive affection? Keeping these in mind, we should be designing experiences that enhance people’s sense of autonomy, competence, and relatedness. This is how we can create the conditions of success for these behaviors to stick. How do we give people room underneath the rules that need to be in place for the greater good? For example, encouraging people to order takeout from a local restaurant in order to support small businesses — if they feel empowered to make a difference, that can help them to feel hopeful and stay engaged in doing the right thing. What does this mean for designers at a personal level during COVID-19? For designers, we need to think about our own needs and what this crisis means for our own sense of motivation and basic psychological needs. Something I’m noticing is that for a lot of us, productivity is a way we think of ourselves and value ourselves. All of a sudden, we have to take on a lot of change, and we can’t hold on to those same ideas of what it means to be successful. We have to reconsider what it means to be successful at this time — who are you at your core, and what do you value? There are some helpful exercises that you can do to really connect to your values and understand what’s important to you. For example, there’s a really well-validated tool from the University of Pennsylvania called the Values in Action character strengths survey. It can really help you to clarify who you are as a person and what’s important to you.
There are also exercises, like writing your own obituary, writing a birthday card to yourself on your 100th birthday, or asking yourself what superpowers you would have if you were a superhero, that can help you reflect on how you want to be remembered. This can help you to reflect on how you want to behave during this crisis. What are the ethical and moral considerations with designing for behavior change? When we think about designing for behavior change, there are ethical considerations to keep in mind. For me, a really big one is informed consent: for the most part, if there’s a behavior that we’re trying to get people to do, they should have knowingly agreed to it. When it comes to a lot of the more complex projects, your user really needs to understand that the goal is for them to do something different, and that goal needs to be one that they care about and want to pursue. A lot of the research and understanding of your user has to focus on what matters to your user. If you do things that violate what the user expects and wants, you will lose them eventually. So it’s really important to be mindful of where there might be potential breaches of that trust, for example using their data in a way they didn’t consent to. We want to make sure that we’re not doing things that are quick wins at the sacrifice of that trust, because we will lose users that way. We also need to keep in mind that behavior change is a long game; something that’s rooted in motivation takes a long time. We are often trying to look holistically at multiple behaviors, and aiming to make changes that stick. To learn more about behavior change design, check out Amy’s book, Engaged: Designing for Behavior Change. To learn more about Amy and her work, you can follow her on Twitter, check out her blog, or listen to some recent podcast episodes from the Rosenfeld Review and This is HCD.
https://medium.com/thinking-design/what-the-global-coronavirus-pandemic-can-teach-designers-about-designing-for-behavior-change-4e34402f550f
['Linn Vizard']
2020-04-27 14:36:26.424000+00:00
['Covid 19', 'Behavior Change', 'UX', 'UX Design', 'Coronavirus']
HDFS Write & Read
One of the most important and most basic things to know in the Hadoop distributed world is how HDFS write and read operations are performed. To understand how different components work in the Hadoop ecosystem, it's very important to know basic HDFS concepts. Before going into HDFS write/read, let's list the key players involved: Client: interacts with the cluster and initiates writes and reads NameNode: master daemon responsible for orchestration and delegation DataNodes: slave daemons that handle actual data storage HDFS Write Operation: Client breaks the input file into chunks of block size. For example, if 2013-apache-logs.txt is 500MB and the HDFS block size is 128MB, the file would be broken into four blocks, blk0, blk1, blk2, blk3 (three of 128MB and a final one of 116MB). Client sends the number of blocks (#4) and the replication factor (RF=3) to the NameNode and requests to initiate the write operation. NameNode responds with a pipeline of DataNodes DN_A, DN_B, DN_C (the number of DataNodes equals the RF) to perform the write. Client contacts DN_A to write the first block blk0, then DN_A contacts DN_B to write the same block, and DN_B contacts DN_C. Thus the same block blk0 gets written on three DataNodes. Along with the data, checksums are written on each DataNode; these are validated during the read operation. Then a reverse acknowledgement is sent from DN_C → DN_B → DN_A → client. Client tells the NameNode that blk0 is written to these locations and requests the pipeline for blk1. NameNode updates its memory and disk representation of the file (refer to the image below, Steps 1 to 7). Repeat steps 3 to 7 for the remaining blocks. A successful write of all blocks looks like the second image (after Step 9). Steps 1 to 7 After Step-9 HDFS Read Operation: HDFS reads are writes in reverse. Client contacts the NameNode to read any file, and the NameNode responds with a read pipeline of DataNodes. Client then reads each block from the DataNodes in the pipeline, and in the end, after a successful read, the client closes the file. (A small illustrative sketch of the block arithmetic and the write pipeline appears at the end of this article.) Other Useful Resources for learning Java you may like 10 Things Java Programmer Should Learn in 2020 10 Free Courses to Learn Java from Scratch 10 Books to Learn Java in Depth 10 Tools Every Java Developer Should Know 10 Reasons to Learn Java Programming languages 10 Frameworks Java and Web Developer should learn in 2020 10 Tips to become a better Java Developer in 2020 Top 5 Java Frameworks to Learn in 2020 10 Testing Libraries Every Java Developer Should Know
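To make the block arithmetic and the pipelined write concrete, here is a small illustrative Python sketch. It is not Hadoop code; the block size, replication factor, and node names simply mirror the example above.

BLOCK_SIZE_MB = 128
REPLICATION_FACTOR = 3

def split_into_blocks(file_size_mb, block_size_mb=BLOCK_SIZE_MB):
    # Break the file into fixed-size chunks; the last block may be smaller.
    blocks, remaining, index = [], file_size_mb, 0
    while remaining > 0:
        size = min(block_size_mb, remaining)
        blocks.append((f"blk{index}", size))
        remaining -= size
        index += 1
    return blocks

def write_block(block, pipeline):
    name, size = block
    # Forward pass: client writes to DN_A, DN_A forwards to DN_B, and so on.
    for node in pipeline:
        print(f"{name} ({size} MB) + checksum written to {node}")
    # Reverse pass: acknowledgements flow DN_C -> DN_B -> DN_A -> client.
    for node in reversed(pipeline):
        print(f"ack from {node}")

# A 500 MB file splits into blk0..blk2 of 128 MB each and blk3 of 116 MB.
for blk in split_into_blocks(500):
    write_block(blk, ["DN_A", "DN_B", "DN_C"][:REPLICATION_FACTOR])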
https://medium.com/javarevisited/hdfs-write-read-cfa17a1a6100
['Jaydeep Deshmukh']
2020-02-28 02:53:31.282000+00:00
['Programming', 'Hdfs', 'Hadoop', 'Big Data', 'Spark']
Which Machine Learning Algorithm Should You Use By Problem Type?
When I was starting out in data science, I often faced the problem of choosing the most appropriate algorithm for my specific problem. If you're like me, when you open some article about machine learning algorithms, you see dozens of detailed descriptions. The paradox is that they don't ease the choice. To keep you on track, I would suggest you get a good understanding of the implementation and mathematical intuition behind several supervised and unsupervised machine learning algorithms, such as: Linear regression Logistic regression Decision tree Naive Bayes Support vector machine Random forest AdaBoost Gradient-boosting trees Simple neural network Hierarchical clustering Gaussian mixture model Convolutional neural network Recurrent neural network Recommender system Remember, the machine learning algorithms I mentioned are the ones that are mandatory to know well while you are a beginner in machine/deep learning! Now that we have some intuition about types of machine learning tasks, let's explore the most popular algorithms and their applications in real life, organized by problem statement! Try to work on each of these problem statements after getting to the end of this blog. I can assure you, you would learn a lot, a hell of a lot! Problem Statement 1 - Predict housing prices Machine Learning Algorithm(s) to solve the problem — Advanced regression techniques like random forest and gradient boosting Problem Statement 2 - Explore customer demographic data to identify patterns Machine Learning Algorithm(s) to solve the problem — Clustering (elbow method) Problem Statement 3 - Predicting loan repayment Machine Learning Algorithm(s) to solve the problem — Classification algorithms for imbalanced datasets Problem Statement 4 - Predict if a skin lesion is benign or malignant based on its characteristics (size, shape, color, etc.) Machine Learning Algorithm(s) to solve the problem — Convolutional Neural Network (U-Net being the best for segmentation tasks) Problem Statement 5 - Predict client churn Machine Learning Algorithm(s) to solve the problem — Linear discriminant analysis (LDA) or Quadratic discriminant analysis (QDA) (particularly popular because it is both a classifier and a dimensionality reduction technique) Problem Statement 6 - Provide a decision framework for hiring new employees Machine Learning Algorithm(s) to solve the problem — Decision Tree shines here Problem Statement 7 - Understand and predict product attributes that make a product most likely to be purchased Machine Learning Algorithm(s) to solve the problem — Logistic Regression, Decision Tree Problem Statement 8 - Analyze sentiment to assess product perception in the market Machine Learning Algorithm(s) to solve the problem — Naive Bayes — Support Vector Machines (NBSVM) Problem Statement 9 - Create a classification system to filter out spam emails Machine Learning Algorithm(s) to solve the problem — Classification algorithms — Naive Bayes, SVM, Multilayer Perceptron Neural Networks (MLPNNs), and Radial Basis Function Neural Networks (RBFNN) are suggested Problem Statement 10 - Predict how likely someone is to click on an online ad Machine Learning Algorithm(s) to solve the problem — Logistic Regression, Support Vector Machines Problem Statement 11 - Detect fraudulent activity in credit-card transactions.
Machine Learning Algorithm(s) to solve the problem — AdaBoost, Isolation Forest, Random Forest Problem Statement 12 - Predict the price of cars based on their characteristics Machine Learning Algorithm(s) to solve the problem — Gradient-boosting trees are best at this Problem Statement 13 - Predict the probability that a patient joins a healthcare program Machine Learning Algorithm(s) to solve the problem — Simple neural networks Problem Statement 14 - Predict whether registered users will be willing to pay a particular price for a product Machine Learning Algorithm(s) to solve the problem — Neural Networks Problem Statement 15 - Segment customers into groups by distinct characteristics (e.g., age group) Machine Learning Algorithm(s) to solve the problem — K-means clustering Problem Statement 16 - Feature extraction from speech data for use in speech recognition systems Machine Learning Algorithm(s) to solve the problem — Gaussian mixture model Problem Statement 17 - Track multiple objects in a video sequence, where the number of mixture components and their means predict object locations at each frame Machine Learning Algorithm(s) to solve the problem — Gaussian mixture model Problem Statement 18 - Organize the genes and samples from a set of microarray experiments so as to reveal biologically interesting patterns Machine Learning Algorithm(s) to solve the problem — Hierarchical clustering algorithms Problem Statement 19 - Recommend what movies consumers should view based on the preferences of other customers with similar attributes Machine Learning Algorithm(s) to solve the problem — Recommender system Problem Statement 20 - Recommend news articles a reader might want to read based on the article she or he is reading Machine Learning Algorithm(s) to solve the problem — Recommender system Problem Statement 21 - Optimize the driving behavior of self-driving cars Machine Learning Algorithm(s) to solve the problem — Reinforcement Learning Problem Statement 22 - Diagnose diseases from medical scans Machine Learning Algorithm(s) to solve the problem — Convolutional Neural Networks Problem Statement 23 - Balance the load of electricity grids in varying demand cycles Machine Learning Algorithm(s) to solve the problem — Reinforcement Learning Problem Statement 24 - Work with time-series data or sequences (e.g., audio recordings or text) Machine Learning Algorithm(s) to solve the problem — Recurrent neural network, LSTM Problem Statement 25 - Provide language translation Machine Learning Algorithm(s) to solve the problem — Recurrent neural network Problem Statement 26 - Generate captions for images Machine Learning Algorithm(s) to solve the problem — Recurrent neural network Problem Statement 27 - Power chatbots that can address more nuanced customer needs and inquiries Machine Learning Algorithm(s) to solve the problem — Recurrent neural network I hope I could explain common perceptions of the most used machine learning algorithms and give you some intuition on how to choose one for your specific problem. A short worked example for Problem Statement 1 follows below. Happy Machine Learning! :) Until next time..!
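To make Problem Statement 1 concrete, here is a minimal sketch using scikit-learn. The dataset (California housing) and the hyperparameters are illustrative choices on my part, not the only reasonable ones.

from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Load a public housing dataset: district features and median house values.
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Gradient-boosting trees, one of the regression techniques suggested above.
model = GradientBoostingRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("Test MAE:", mean_absolute_error(y_test, model.predict(X_test)))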
https://medium.com/analytics-vidhya/which-machine-learning-algorithm-should-you-use-by-problem-type-a53967326566
['Sukanya Bag']
2020-10-07 12:48:50.189000+00:00
['Algorithms', 'Deep Learning', 'Computer Vision', 'Reinforcement Learning', 'Machine Learning']
Does the Moderna Vaccine Prevent Transmission?
Does the Moderna Vaccine Prevent Transmission? It’s likely, but still unknown Credit: SOPA Images / Contributor / Getty Images An open question about the Covid-19 vaccines available — and soon to be available — is whether they prevent a vaccinated person from getting infected with the virus and spreading it to another person, even if they don’t have symptoms. Experts say the vaccines likely will reduce transmission, but it’s not confirmed yet. Similar to Pfizer-BioNTech’s Covid-19 vaccine, the Moderna vaccine data presented to the U.S. Food and Drug Administration (FDA) expert advisory panel on Thursday revealed a vaccine with high effectiveness at protecting against mild, moderate, and severe Covid-19 disease. This means the vaccine appears to prevent symptomatic disease very well. But does it prevent transmission of the virus? Could someone who has been vaccinated still contract the virus and spread it to another person even if they don’t have symptoms? While Moderna is not able to definitively answer the infection question yet, the company provided some data to suggest that the vaccine might prevent asymptomatic cases and transmission. Moderna gave Covid-19 tests to all trial participants between their first and second dose of either the company’s vaccine or a placebo. Among people who tested positive for Covid-19 without symptoms, there were 14 people in the vaccine group and 38 in the placebo group (there were over 14,000 people in each group). These are small numbers, but they suggest that the Moderna vaccine might prevent asymptomatic infections. Overall, the number of people to test positive for Covid-19 was higher in the placebo group than the vaccine group, especially after the second dose. Knowing whether the vaccine prevents transmission is important. If it doesn’t, then people will likely need to continue prevention measures like masks and distancing around unvaccinated people. The good news is that Moderna and the other companies with vaccines are studying this, and will hopefully have more insights on the transmission question soon. Again, experts predict the vaccines “will likely reduce transmission, but we don’t yet know, so use caution for now.”
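As a rough back-of-the-envelope check on those asymptomatic numbers (14 cases in the vaccine group versus 38 in the placebo group, with roughly 14,000 people per arm), this hypothetical Python calculation shows why the data is suggestive; it deliberately ignores the confidence intervals and exact denominators a real analysis would need.

# Back-of-the-envelope only: real efficacy estimates require exact
# denominators and confidence intervals from the trial data.
vaccine_cases, placebo_cases = 14, 38
group_size = 14_000  # approximate size of each trial arm

relative_risk = (vaccine_cases / group_size) / (placebo_cases / group_size)
print(f"Relative risk: {relative_risk:.2f}")                      # about 0.37
print(f"Implied reduction in asymptomatic infections: {1 - relative_risk:.0%}")  # about 63%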
https://coronavirus.medium.com/does-the-moderna-vaccine-prevention-transmission-4a2f0471a27c
['Alexandra Sifferlin']
2020-12-18 13:59:01.917000+00:00
['Covid 19', 'Coronavirus']
An Opinionated Approach to Developing Event-Driven Microservice Applications with Kafka and Web-Sockets (Part 1 of 4)
RabbitMQ vs. Kafka If you are developing an event-driven application, you are going to need a message broker. Probably the most popular two are RabbitMQ and Kafka. In this section, I will explain why we thought one was a better fit for us than the other. When comparing the two, I think the most helpful distinction is that RabbitMQ is a queue that pushes, whereas Kafka is a log that expects consumers to pull messages. While this contrast dictates significant architectural differences, for most common scenarios, both offer similar capabilities. Many articles compare them in great detail, like this one (https://jack-vanlightly.com/blog/2017/12/4/rabbitmq-vs-kafka-part-1-messaging-topologies), so I won't go too much into it. However, let me explain the main difference that affected our decision. Some of our use cases require services to process specific messages in the same order they were created, so we need message ordering. And our performance goals require horizontal scaling of the services, preferably with auto-scaling. Both frameworks handle each of these requirements separately quite well. But when you combine auto-scaling and message ordering, Kafka comes out ahead. Let me explain. In RabbitMQ, messages go through an exchange, land on a queue, then get distributed to consumers in a way that a message can go to only one consumer. It is super easy to increase the replica count of consumers, but in this topology, you lose message ordering, because the second message can be processed before the first one is completed. To have replicas and message ordering together, you can use this topology. But as you can see, in this topology you lose auto-scaling. You cannot add more replicas than your hashing space allows. And even more importantly, you cannot have fewer replicas than your hashing space, because in that case, messages go to a queue but never get consumed. Since you will not change hashing algorithms on the fly, rearrange queues, and reassign consumers, you are stuck with the number of consumers you started with. Although a consumer can subscribe to more than one queue, in practice, RabbitMQ does not provide an easy-to-use solution to manage these subscriptions and sync them with auto-scaling algorithms. Kafka has a different approach. It uses hashing just the same, and it has partitions similar to the queues we have seen with RabbitMQ. Messages have ordering guarantees within a partition but no ordering guarantees between partitions. The difference is that Kafka manages the subscriptions itself. A partition can be assigned to only one consumer in a consumer group (think of it as a replica set). If a partition has no consumer, Kafka finds one and assigns it. So, our hashing algorithms need to land related messages in the same partition. Even if auto-scaling happens between the messages, they will be processed in order, just by different services. Therefore it is essential to keep services stateless. Kafka has different acknowledgment mechanisms. There is auto-ack, where a message is assumed processed once it is delivered to the consumer. And there is manual-ack, where the consumer sends an acknowledgment, preferably after it has processed the message. By using manual-ack, we can be sure messages are not lost, and this makes the topology work. Of course, the number of partitions still limits the maximum number of replicas; excess replicas sit idle. But by deciding on a high number of partitions, we can use auto-scaling and have our message ordering too. So in our case, it was a win for Kafka.
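As a sketch of how this looks in practice, here is a minimal example using the kafka-python client. The topic, key, and group names are made up for illustration; the message key is what routes related messages to the same partition, and the manual commit plays the role of the manual-ack described above.

from kafka import KafkaProducer, KafkaConsumer

# Producer: messages with the same key hash to the same partition,
# which is what preserves per-key ordering.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", key=b"customer-42", value=b"order-created")
producer.send("orders", key=b"customer-42", value=b"order-paid")  # same partition
producer.flush()

# Consumer: each partition is assigned to at most one consumer in the
# "order-processors" group, so per-partition order is preserved even
# as replicas scale up or down.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="order-processors",
    enable_auto_commit=False,  # manual-ack mode
)
for message in consumer:
    print(message.key, message.value)  # process the message here
    consumer.commit()  # ack only after processing, so nothing is lost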
Other than this main difference, we liked a few more things with Kafka. Its log nature for one, which we relied on for our web-socket and distributed tracing solutions. Organizationally, people liked the idea of having only one copy of a message more than having it copied to different queues in RabbitMQ. Also, even if we are still not using it, the option of having an event-sourcing infrastructure was attractive.
https://medium.com/codable/an-opinionated-approach-to-developing-event-driven-microservice-applications-with-kafka-and-eb643325dfd7
['Orhan Tuncer']
2019-11-22 10:30:53.249000+00:00
['Kafka', 'Websocket', 'Microservices', 'Obss Codable English', 'Event Driven Architecture']
How LinkedIn, Uber, Lyft, Airbnb and Netflix are Solving Data Management and Discovery for Machine Learning Solutions
How LinkedIn, Uber, Lyft, Airbnb and Netflix are Solving Data Management and Discovery for Machine Learning Solutions The tech giants have built unique architectures to manage datasets in large scale machine learning solutions. I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below: When it comes to machine learning, data is certainly the new oil. The processes for managing the lifecycle of datasets are some of the most challenging elements of large scale machine learning solutions. Data ingestion, indexing, search, annotation, and discovery are some of the aspects required to maintain high quality datasets. The complexity of these challenges increases linearly with the size and number of the target datasets. While it is relatively easy to manage training datasets for a single machine learning model, scaling that process across thousands of datasets and hundreds of models can become nothing short of a nightmare. Some of the companies at the forefront of machine learning innovation, such as LinkedIn, Uber, Netflix, Airbnb or Lyft, have certainly experienced the magnitude of this challenge, and they have built specific solutions to address it. Today, I would like to walk you through some of those solutions that can serve as an inspiration in your machine learning journey. High quality machine learning requires high quality datasets, and those are not very easy to produce. As machine learning evolves, the need for tools and platforms that automate the lifecycle management of training and testing datasets is becoming increasingly important. Somewhat paradoxically, machine learning frameworks have evolved several orders of magnitude faster than the corresponding data management toolset. While today we have dozens of high quality development frameworks that incorporate the latest research in deep learning disciplines, the platforms for managing the lifecycle of the datasets powering machine learning models are still in their infancy. To solve that challenge, fast growing technology companies like Uber or LinkedIn have been forced to build their own in-house data lifecycle management solutions to power different groups of machine learning models. Let's take a look at how they did it. LinkedIn's Data Hub Data Hub is a recent addition to LinkedIn's data analytics stack. The core focus of LinkedIn's Data Hub is to automate the collection, search and discovery of metadata related to datasets as well as other entities such as machine learning models, microservices, people, groups, etc. Specifically, Data Hub was designed to achieve four specific goals: Modeling: Model all types of metadata and relationships in a developer friendly fashion. Ingestion: Ingest large amounts of metadata changes at scale, both through APIs and streams. Serving: Serve the collected raw and derived metadata, as well as a variety of complex queries against the metadata at scale. Indexing: Index the metadata at scale, as well as automatically update the indexes when the metadata changes. To enable the aforementioned capabilities, Data Hub uses a state-of-the-art technology stack that includes several frameworks developed internally at LinkedIn. For instance, all metadata constructs stored in Data Hub are modeled using the Pegasus data schema language, which was incubated by LinkedIn years ago.
Similarly, the APIs powering Data Hub are based on LinkedIn's Rest.li architecture for highly scalable RESTful services. LinkedIn's data storage technologies, such as Espresso or Galene, are also used to store the metadata representations in ways that enable diverse use cases such as search or complex relationship navigation. To abstract those different types of storage, Data Hub uses a set of generic Data Access Objects (DAOs), such as a key-value DAO, a query DAO, and a search DAO. This allows Data Hub to be used with different underlying storage technologies. The robust backend architecture of LinkedIn's Data Hub is complemented by a simple user interface that enables the search and discovery of metadata elements.
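LinkedIn has not published that DAO code here, but the pattern is easy to illustrate. The following Python sketch shows, under assumed names of my own, how application code can depend on an abstract key-value DAO while the backing store remains swappable:

from abc import ABC, abstractmethod

class KeyValueDAO(ABC):
    """Abstract key-value DAO; concrete subclasses bind to a storage engine."""

    @abstractmethod
    def get(self, key: str):
        ...

    @abstractmethod
    def put(self, key: str, value: dict) -> None:
        ...

class InMemoryDAO(KeyValueDAO):
    """Toy backend; a real one might wrap a document store or a search index."""

    def __init__(self):
        self._store = {}

    def get(self, key: str):
        return self._store.get(key)

    def put(self, key: str, value: dict) -> None:
        self._store[key] = value

# Application code sees only the abstraction, so the storage backend
# can be swapped without touching callers.
dao: KeyValueDAO = InMemoryDAO()
dao.put("dataset:tracking.page_views", {"owner": "web-team"})
print(dao.get("dataset:tracking.page_views"))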
https://medium.com/dataseries/how-linkedin-uber-lyft-airbnb-and-netflix-are-solving-data-management-and-discovery-for-machine-2361a8623aa8
['Jesus Rodriguez']
2020-12-11 15:58:21.690000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Data Science', 'Thesequence']
Reasons to Reject and Refuse Work Opportunities
I think the reasons I've mentioned so far are good reasons. Your time is limited, and if it's something that won't further your goals, it's just not worth it. Unless, of course, you are massively interested in it. Yes, love and passion do matter, so those would be a massive reason NOT to reject work opportunities. But even then. It's good that you're passionate, and maybe even have the time, but what about reimbursement for that passion and time? I have often heard of exposure and experience being used as ways to pay someone. As if those will pay the bills… Internships are a great example of this: they are often unpaid and used to get you into a paying job. No. Just absolutely no. Think of opportunity costs: if it's not something you really want to do and you could be doing something else that would pay, which one should you be doing (hint: it's the latter)? Even if you really want to do it, at least try to negotiate some type of payment. Your time and effort should be worth at least fighting for! If it turns out they really don't want (or maybe can't) pay you, that is in itself a good reason to refuse an opportunity. You need to eat too! It's also been said (and there is research backing it up) that if you start on a lower wage, you'll remain on a lower wage (relative to your colleagues/peers starting on higher wages) for quite a while, if not always. This is because if you accept a low wage, or none at all, in the beginning, you have undervalued yourself and have sent the signal to others that you can be undervalued. You tell them that it is okay to do so, and as such, they are likely to continue to do so. Don't. If you feel like a work opportunity is trying to undervalue you, your time, your expertise and your effort, you can very politely tell them to stick it where the sun doesn't shine…
https://medium.com/live-your-life-on-purpose/reasons-to-reject-and-refuse-work-opportunities-7fe4c44e3d8a
['Merle Van Den Akker']
2019-12-01 18:58:21.168000+00:00
['Life Lessons', 'Work Life Balance', 'Productivity', 'Work', 'Time Management']
Real-Time Analytics on Connected Car IoT Data Streams from Apache Kafka
In this IoT example, we examine how to enable complex analytic queries on real-time Kafka streams from connected car sensors. IoT and Connected Cars With an increasing number of data-generating sensors being embedded in all manner of smart devices and objects, there is a clear, growing need to harness and analyze IoT data. Embodying this trend is the burgeoning field of connected cars, where suitably equipped vehicles are able to communicate traffic and operating information, such as speed, location, vehicle diagnostics, and driving behavior, to cloud-based repositories. Real-Time Analytics on Connected Car IoT Data For our example, we have a fleet of connected vehicles that send the sensor data they generate to a Kafka cluster. We will show how this data in Kafka can be operationalized with the use of highly concurrent, low-latency queries on the real-time streams. The ability to act on sensor readings in real time is useful for a large number of vehicular and traffic applications. Uses include detecting patterns and anomalies in driving behavior, understanding traffic conditions, routing vehicles optimally, and recognizing opportunities for preventive maintenance. How the Kafka IoT Example Works The real-time connected car data will be simulated using a data producer application. Multiple instances of this data producer emit generated sensor metric events into a locally running Kafka instance. This particular Kafka topic syncs continuously with a collection in Rockset via the Rockset Kafka Sink connector. Once the setup is done, we will extract useful insights from this data using SQL queries and visualize them in Redash. There are multiple components involved: Apache Kafka Apache Zookeeper Data Producer — Connected vehicles generate IoT messages which are captured by a message broker and sent to the streaming application for processing. In our sample application, the IoT Data Producer is a simulator application for connected vehicles and uses Apache Kafka to store IoT data events. Rockset — We use a real-time database to store data from Kafka and act as an analytics backend to serve fast queries and live dashboards. Rockset Kafka Sink connector Redash — We use Redash to power the IoT live dashboard. Each of the queries we perform on the IoT data is visualized in our dashboard. Query Generator — This is a script for load testing Rockset with the queries of interest. The code we used for the Data Producer and Query Generator can be found here. Kafka and Zookeeper Kafka uses Zookeeper for service discovery and other housekeeping, and hence Kafka ships with a Zookeeper setup and other helper scripts. After downloading and extracting the Kafka tar, you just need to run the following commands to set up the Zookeeper and Kafka servers. These assume that your current working directory is where you extracted the Kafka code. Zookeeper: ./kafka_2.11-2.3.0/bin/zookeeper-server-start.sh ./kafka_2.11-2.3.0/config/zookeeper.properties Kafka server: ./kafka_2.11-2.3.0/bin/kafka-server-start.sh ./kafka_2.11-2.3.0/config/server.properties For our example, the default configuration should suffice. Make sure ports 9092 and 2181 are unblocked. Data Producer This data producer is a Maven project, which will emit sensor metric events to our local Kafka instance. We simulate data from 1,000 vehicles and hundreds of sensor records per second. The code can be found here. Maven is required to build and run this. After cloning the code, take a look at iot-kafka-producer/src/main/resources/iot-kafka.properties.
Here, you can provide your Kafka and Zookeeper ports (which can be left untouched if you're going with the defaults) and the topic name to which the event messages will be sent. Now, go into the rockset-connected-cars/iot-kafka-producer directory and run the following commands: mvn compile && mvn exec:java -Dexec.mainClass="com.iot.app.kafka.producer.IoTDataProducer" You should see a large number of these events continuously dumped into the Kafka topic named in the configuration above. Rockset and Rockset Kafka Connector We need the Rockset Kafka Sink connector to load these messages from our Kafka topic into a Rockset collection. To get the connector working, we first set up a Kafka integration from the Rockset console. Then, we create a collection using the new Kafka integration. Run the following command to connect your Kafka topic to the Rockset collection. ./kafka_2.11-2.3.0/bin/connect-standalone.sh ./connect-standalone.properties ./connect-rockset-sink.properties Querying the IoT Data The image above shows all the fields available in the collection, which are used in the following queries. Note that we did not have to predefine a schema or perform any data preparation to make the data in Kafka queryable in Rockset. As our Rockset collection receives data, we can query it using SQL to get some useful insights. Count of vehicles that produced a sensor metric in the last 5 seconds This helps us know which vehicles are actively emitting data. Check if a vehicle is moving in the last 5 seconds It can be useful to know if a vehicle is actually moving or is stuck in traffic. Vehicles that are within a specified Point of Interest (POI) in the last 5 seconds This is a common type of query, especially for a ride-hailing application, to find out which drivers are available in the vicinity of a passenger. Rockset provides CURRENT_TIMESTAMP and SECONDS functions to perform timestamp-related queries. It also has native support for location-based queries using the functions ST_GEOPOINT, ST_GEOGFROMTEXT and ST_CONTAINS. Top 5 vehicles that have moved the maximum distance in the last 5 seconds This query shows us the most active vehicles. Number of sudden braking events This query can be helpful in detecting slow-moving traffic, potential accidents, and more error-prone drivers. Number of rapid acceleration events This is similar to the query above, just with the speed difference condition changed from latest_sample_speed_for_vehicles.speed < older_sample_speed_for_vehicles.speed - 20 to latest_sample_speed_for_vehicles.speed - 20 > older_sample_speed_for_vehicles.speed Live Dashboard with Redash Redash offers a hosted solution with easy integration with Rockset. With a couple of clicks, you can create charts and dashboards, which auto-refresh as new data arrives. The following visualizations were created based on the above queries. Supporting High Concurrency Rockset is capable of handling a large number of complex queries on large datasets while maintaining query latencies in the hundreds of milliseconds. The repository provides a small Python script for load testing Rockset. It can be configured to run any number of QPS (queries per second) with different queries for a given duration. It will run the specified number of queries for a given amount of time and generate a histogram showing the time taken by each query, for different queries. By default, it will run 4 different queries, with queries q1, q2, q3, and q4 having 50%, 40%, 5%, and 5% of the bandwidth respectively. q1.
Is a specified vehicle stationary or in motion in the last 5 seconds? (point lookup query within a window) q2. List the vehicles that are within a specified Point of Interest (POI) in the last 5 seconds. (point lookup and short range scan within a window) q3. List the top 5 vehicles that have moved the maximum distance in the last 5 seconds. (global aggregation and topN) q4. Get the unique count of all vehicles that produced a sensor metric in the last 5 seconds. (global aggregation with count distinct) Below is an example of a 10 second run. Real-Time Analytics Stack for IoT IoT use cases typically involve large streams of sensor data, and Kafka is often used as a streaming platform in these situations. Once the IoT data is collected in Kafka, obtaining real-time insight from the data can prove valuable. In the context of connected car data, real-time analytics can benefit logistics companies in fleet management and routing, ride-hailing services matching drivers and riders, and transportation agencies monitoring traffic conditions, just to name a few. Through the course of this guide, we showed how such a connected car IoT scenario may work. Vehicles emit location and diagnostic data to a Kafka cluster, a reliable and scalable way to centralize this data. We then synced the data in Kafka to Rockset to enable fast, ad hoc queries and live dashboards on the incoming IoT data. Key considerations in this process were: Need for low data latency — to query the most recent data Ease of use — no schema needs to be configured High QPS — for live applications to query the IoT data Live dashboards — integration with tools for visual analytics Learn more about how a real-time analytics stack based on Kafka and Rockset works here.
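The producer in this guide is a Java/Maven application, but the idea is easy to sketch. Here is a compact Python analogue using the kafka-python client; the topic name and event fields are assumptions based on the queries above, not the repository's actual schema.

import json
import random
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

vehicle_ids = [f"vehicle-{i}" for i in range(1000)]  # simulate 1,000 vehicles

while True:
    event = {
        "vehicleId": random.choice(vehicle_ids),
        "speed": random.randint(0, 120),                # km/h
        "latitude": 37.77 + random.uniform(-0.5, 0.5),  # around a city center
        "longitude": -122.41 + random.uniform(-0.5, 0.5),
        "timestamp": int(time.time()),
    }
    producer.send("iot-data-event", event)  # topic name is illustrative
    time.sleep(0.01)                        # roughly 100 events per second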
https://medium.com/rocksetcloud/real-time-analytics-on-connected-car-iot-data-streams-from-apache-kafka-d2870ddf150a
['Shawn Adams']
2020-02-11 22:55:28.923000+00:00
['Dashboard', 'Sql', 'Kafka', 'IoT', 'Real Time Analytics']
Lockdowns Are Not the Answer to Contain COVID-19; Mass Testing & Contact Tracing Are
Lockdowns Are Not the Answer to Contain COVID-19; Mass Testing & Contact Tracing Are It is clear that the US response to the coronavirus has failed. The number of cases per capita and deaths per capita in the country is among the highest in the world, and the country ranks above most developed nations in these metrics, showcasing its failure to contain the spread of COVID-19. Currently, the US is 12th in both total cases and deaths from COVID per 100,000 people since the start of the outbreak in March. Many countries with fewer resources have managed to coordinate a better response to containing the virus. In addition, the virus is surging across the US overall and in almost every state, with confirmed cases and deaths surpassing their previous highs. The months ahead are going to be very difficult despite the positive news about a vaccine, since it will take some time to execute massive rollouts that reach the level required for herd immunity across the population (take as evidence the time required to create adequate testing infrastructure, which is still failing many people). Part of the problem in the US is the lack of a sensible strategy from the federal government at the top, which has forced states to each create their own patchwork response. The strategies across states have varied widely, from doing nothing to complete economic shutdowns with orders to stay at home. The Bay Area in California, which implemented one of the strictest measures in the US back in March, has reimplemented a shelter-in-place order that bans gatherings with other households, bans outdoor and indoor dining, closes most retail establishments (but weirdly allows shoppers indoors at limited capacity?), and restricts many other activities, with the addition of requiring mask wearing when outside the home. While the state (and in particular San Francisco) has credited this strategy with combating the virus in the past, is it really the reason why cases went down previously? The answer is most likely no. Shelter in place is a very blunt strategy that tries to limit all in-person contact to reduce the risk of transmitting the virus, but everything we know about COVID-19 suggests that different activities have very different risk profiles. The shelter-in-place order does nothing to accommodate the scientific evidence in making policy, especially when weighed against the economic harm (and the deaths that will likely result from this damage). All the research suggests that outdoor activities, especially when accompanied by distancing and mask wearing, carry very low risk. Indeed, outdoor dining with properly spaced tables, mask wearing, and cleaning between diners is quite safe. The really important things to have are mask wearing and social distancing. Why then would the state ban outdoor dining but allow indoor shopping? The answer is clearly to try to allow some economic stimulus during the holiday shopping season, but the state is picking winners (malls and retail stores) and losers (restaurants and bars) while allowing a riskier activity to continue in the name of containing COVID. That does not make any sense. Furthermore, the state has implemented this policy without really enforcing it, meaning that people are unlikely to comply with rules that do not really make sense. The lack of a sensible policy means it is falling on deaf ears and that activity is likely to continue in risky ways.
It is also clear that more severe lockdowns do not actually correlate with reduced COVID cases and deaths. The evidence in that paper comes from cross-country comparisons in Europe as of August 6. We can both broaden the sample of countries to those globally and use more recent data on the spread of the coronavirus to see if the findings hold up with additional data. We will take the weekly time series of COVID-19 cases and deaths per country per 100,000 people (to normalize for country size) and regress it on a measure of lockdown severity created by the Blavatnik School of Government at Oxford University. We will lag the index by 21 days to essentially predict future COVID cases and deaths as a function of lockdown policies implemented in the past. The 21-day lag reflects the timeframe of the onset of COVID symptoms and is also what the paper we are replicating uses. This should ideally solve any endogeneity issues, allowing us to actually understand whether stricter lockdown policies drive a decrease in COVID cases and deaths in the future. The lockdown index measures the implementation of various rules, like school and workplace closings, cancelling public events, and stay-at-home requirements, merging them into a containment index. Higher scores in the index mean a country has a more severe lockdown policy in place. The index is measured over time, so we can look at how the weekly containment index in a particular country (lagged to the past) correlates with future COVID cases and deaths to understand the relationship between lockdown policy and containing the virus. The results are in the table below. We see that the containment index is generally positively correlated with both COVID cases and deaths across specifications (except for one case where it is negative but not statistically significant). This suggests that more severe lockdowns do not predict lower COVID cases and deaths in the future across countries (even when focusing on just high-income, developed countries similar to the US). We have little evidence here that severe lockdowns as a blanket policy work to stop the spread of COVID, likely because the policies are not nuanced enough to both target and enforce the restriction of actions that research says are actually high risk for spreading the virus.
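To make the methodology concrete, here is a minimal sketch of that lagged regression in Python with pandas and statsmodels. The file and column names (covid_panel.csv, country, date, cases_per_100k, containment_index) are hypothetical stand-ins for the actual data, and shifting by three weekly periods approximates the 21-day lag:

import pandas as pd
import statsmodels.formula.api as smf

# Weekly panel: one row per country per week, with cases per 100k
# and the Oxford containment index.
df = pd.read_csv("covid_panel.csv", parse_dates=["date"])
df = df.sort_values(["country", "date"])

# Lag the containment index by three weekly periods (~21 days)
# within each country, so past policy predicts future outcomes.
df["containment_lag21"] = df.groupby("country")["containment_index"].shift(3)

# Regress future cases on lagged lockdown severity.
model = smf.ols("cases_per_100k ~ containment_lag21", data=df.dropna()).fit()
print(model.summary())

A positive coefficient on the lagged index here would match the finding above that stricter lockdowns did not predict fewer future cases.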
What then is the right strategy? It is a combination of mass testing and mass contact tracing. Mass testing would allow us to actually identify the number of cases, providing people with the information needed to self-quarantine and reduce their risk of transmitting the virus to others. While the US has increased its testing infrastructure to the point where it is one of the top countries in terms of tests per capita, with testing surging in recent weeks, that is not sufficient to support true mass testing. Widespread testing would involve daily tests across the population and testing before engaging in certain activities, to reduce the risk of super-spreader events while allowing limited social gatherings. Combining mass testing with mass contact tracing would enable officials to identify who has the virus, trace it to the other people who are likely to be infected so they can quarantine, and, crucially, identify the reasons why the virus spread. One objection is the difficulty of implementing contact tracing in the US due to mistrust of government and privacy concerns about individual freedom. But lockdowns and shelter-in-place orders already infringe on individual freedoms in a more obstructive way than contact tracing would, and furthermore, there are ways to implement anonymized contact tracing that mitigate these concerns, especially when leveraging things like cellular data (California just rolled out an iPhone contact tracing protocol, though obviously quite late into the pandemic). With that information, state and local governments would be able to define targeted policies that restrict the risky activities actually leading to the spread of COVID, as opposed to applying blanket lockdown policies that do not appear effective and instead inflict massive economic pain (see the US unemployment rate and labor force participation rate vs. other countries during the pandemic). We might say that with the vaccine this is no longer important, but it will take time for the vaccine to roll out, meaning the upcoming months will likely see both massive spread of the virus and massive economic damage. Furthermore, this is unlikely to be the only pandemic we face, and ensuring the US has a better policy to confront future viruses will be important to minimizing the damage.
https://medium.com/vinod-b/lockdowns-are-not-the-answer-to-contain-covid-19-mass-testing-contract-tracing-are-31492ad31986
['Vinod Bakthavachalam']
2020-12-17 22:50:34.999000+00:00
['Economics', 'Covid 19', 'Data Science', 'Politics', 'Coronavirus']
What Is A Kubernetes Service
A Service is a Kubernetes object that exposes a set of Pods as a network service. Moreover, it provides a service discovery mechanism that dynamically adds or removes the IP addresses of Pods to its endpoint list as those Pods are created or deleted.

Service Types

Kubernetes provides many types of Services, but only the frequently used ones are introduced here. You can check this document for more details.

LoadBalancer

A LoadBalancer exposes a set of Pods externally. It is an L4 (Layer 4) load balancer, which means it can only use information at the transport layer (Layer 4) to determine how to distribute client requests across a group of Pods. Here is an example of a LoadBalancer that makes the Kubernetes application foo public in the demo environment:

apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo-demo
spec:
  type: LoadBalancer
  selector:
    app: foo
    environment: demo
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8443
status:
  loadBalancer:
    ingress:
      - ip: 10.254.2.127

From the spec, you can see that: It relies on the field spec.selector to select Pods. The field status.loadBalancer shows the external IP address that is automatically assigned by Kubernetes. The field spec.ports defines the ports that this Service opens for the foo application. With the external IP address and the open port, the service foo in the demo environment can be accessed at 10.254.2.127:443. When a request is sent to this address, the LoadBalancer redirects it to port 8443 of one of the foo Pods.

ClusterIP

A ClusterIP is a Service that exposes a set of Pods on a cluster-internal IP, which means the Service is only reachable from within the cluster. It is also an L4 load balancer that provides simple load balancing based on information at the transport layer. Here is an example of a ClusterIP Service for the application foo:

apiVersion: v1
kind: Service
metadata:
  name: default-grpc
  namespace: foo-demo
spec:
  type: ClusterIP
  selector:
    app: foo
    environment: demo
  clusterIP: 10.0.54.223
  ports:
    - name: grpc
      port: 8443
      protocol: TCP
      targetPort: 8443

From the spec, you can see that: Like a LoadBalancer Service, a ClusterIP Service also relies on the field spec.selector to select Pods. The field spec.clusterIP shows the internal IP address that is automatically allocated by Kubernetes; only workloads within the same cluster can use this Service to access the application foo. The field spec.ports defines the ports that this Service opens for the application foo.

Kubernetes allocates a unique DNS address to a Service when it is created, in the format service-name.namespace.svc.cluster.local. For example, the DNS address for the above ClusterIP Service is default-grpc.foo-demo.svc.cluster.local.

Ingress

An Ingress is an object that manages external access to one or more Kubernetes applications in a cluster. It is not a Kubernetes Service, but it does provide load balancing, SSL termination, and name-based virtual hosting.
Unlike a Kubernetes Service, which is an L4 load balancer and can only manage one Kubernetes application, an Ingress is an L7 (application layer) load balancer that can manage multiple Kubernetes applications based on paths or hostnames. For example, the following shows a path-based Ingress. With this Ingress, requests with the URL foo.bar.com/foo will be redirected to service1 (on port 8000), while requests with the URL foo.bar.com/bar will be redirected to service2 (on port 9000). service1 and service2 can be either ClusterIP or NodePort Services.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 8000
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 9000

What is Next

I recommend you read this blog if you are curious about how to utilize Kubernetes Persistent Volumes and Persistent Volume Claims to provision persistent storage for your applications in Kubernetes.
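As a small aside (not from the original article), the Services described above can also be inspected programmatically with the official Kubernetes Python client; the foo-demo namespace is carried over from the examples:

from kubernetes import client, config

# Load credentials from the local kubeconfig, the same way kubectl does.
config.load_kube_config()
v1 = client.CoreV1Api()

# List every Service in the foo-demo namespace with its type and cluster IP.
for svc in v1.list_namespaced_service("foo-demo").items:
    print(svc.metadata.name, svc.spec.type, svc.spec.cluster_ip)

This prints the same information as kubectl get services -n foo-demo.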
https://azhuox.medium.com/kubernetes-services-1f4d3e43db67
['Aaron Zhuo']
2020-11-30 17:01:20.175000+00:00
['Entry Level', 'Kubernetes']
The Signs Investors Look For in a Startup
Over the past month, I’ve been consuming a lot of content about what investors look for in a potential investment. I’ve studied and noted advice/tell-signs of high-potential ideas/startups, from the likes of Kevin O’Leary, Jason Calacanis, Elon Musk, Jeff Bezos, Peter Thiel, and others. In addition to this, I’ve studied other business resources that teach what to look for in a successful business. In this post, I have laid out the top points that all of these resources have shared (in no particular order). Let’s dive in. 1. Does the Founder have what it takes? Jason Calacanis discusses in his interview with Galileo at HyperChange that there are so many unknown things that will come up along the way. For that reason, it’s really important the founder can work through those issues calmly. 2. Is There a Market for it? Without substantial proof that there is a market for the product, it’s not likely that you will have interested investors. This is why so many pitches flop on Shark Tank; they can’t prove market size. A good way to prove there is a market is by seeing if other companies are already in that space. 3. Is it Scalable? If the business cannot be scaled rapidly professional investors and venture firms are not likely to pony up the cash. For them, the only reason to invest is potential returns, unless of course, it is something they have a particular passion for. 4. How Many Basic Human Desires Does it Serve? This is taken from the book, The Personal MBA by Josh Kaufman. Every human has 5 basic human drives or desires. They are, Drive to Acquire, Drive to Bond, Drive to Learn, Drive to Defend, and the Drive to Feel. The more of these that your business has the greater the market and the more likely you are to have a highly desirable product. 5. What Unfair Advantages Does the Startup have? Y-Combinator’s Kevin Hale has a great talk that states specifically the main factors Y-Combinator looks for in a startup in this video. One of them being the 5 unfair advantages of the startup. These 5 are, A) The Founder . I.E. does the founder have some experience that makes him extra qualified in this space? This could be many years in a niche market. They use the hard number of 1 in 10. If the founder is 1 of 10 people in the world that can solve a certain problem. . I.E. does the founder have some experience that makes him extra qualified in this space? This could be many years in a niche market. They use the hard number of 1 in 10. If the founder is 1 of 10 people in the world that can solve a certain problem. B) The Market. Is the market growing by 20% a year? If you are building a solution to that problem in that area, you will be in good shape. Is the market growing by 20% a year? If you are building a solution to that problem in that area, you will be in good shape. C) The Product. Is your product 10x better than the competition? It has to be very obvious that it’s better than the competition. Having that distance between you and your competitor will make you much more attractive to potential customers, furthering your likelihood of success. Is your product 10x better than the competition? It has to be very obvious that it’s better than the competition. Having that distance between you and your competitor will make you much more attractive to potential customers, furthering your likelihood of success. D) Acquisition . Is your company growing without paid advertising? 
Kevin states that if your only method of growing is paid advertising, without any organic growth, you are going to attract competition who may very well have one of the other factors and will eventually outpace you. E) Monopoly. This often refers to the effect of being in the market longer than others. If, as your company grows, your success widens the gap between you and your competitors, such as through network effects or patented technology advancements, you will be set up for success. 6. Does it Pass the Threshold Test? Another thing Kevin Hale discusses in the aforementioned video is whether the idea passes the threshold test: what basic thing has to happen in order for this idea to succeed? Whether that is successfully manufacturing the product, reaching the market, or keeping up with demand, if the founder can clear that basic threshold, there is a good chance the startup will succeed. 7. Does the Idea Benefit Society/Humanity? Elon Musk, arguably the most influential innovator of the 21st century, is motivated to do things that benefit humanity. Any startup that truly seeks to better society is highly likely to succeed. While this point is something I'm adding from my own observations, I truly believe that this, paired with perseverance, will result in a successful venture. 8. Is it in a Market Where No One Else Is? In this video, PayPal co-founder Peter Thiel discusses one of the things he looks for in a company or organization: organizations that are building something that is unpopular for the right reasons should succeed. I would argue that his PayPal venture was unpopular at the time because, at surface value, it posed a threat to banks. The same could be said of electric vehicles; they posed a threat to conventional vehicle manufacturers, hence the difficulty in breaking into the market. 9. Do the Founders Have Team, Tech, and Strategy? This is another piece of advice from Peter in the same video mentioned in point 8. If the founders are only talking about the technology they are offering, get them to talk about the team and the strategy. He says that you need all three areas for a company to succeed. If the founders are not comfortable speaking to all three points, they likely lack some of the necessary foundation blocks. 10. What are Your Sales? Is the company successfully selling its product, and on what platforms? Increasing sales is a basic factor of success but one that can occasionally be overlooked because of too much excitement about the product on the part of the investor. Identifying where sales come from can be a good way to identify opportunities. Investors may know other mediums for selling the product, which can help determine whether it's a worthwhile investment. 11. Does the Company Actually Need Funding? Not every company needs funding. A truly successful team and strategy might already be in place, and inexperienced founders might simply be seeking funding to "accelerate" their growth. This mindset can be an indicator of "money-chasing syndrome," where the founder simply wants explosive growth because of the huge amount of money that will be at their disposal.
This is something that can be corrected but should definitely be screened for. 12. Does the Product Move Quickly? One of the big points I noted from Shark Tank is the acute focus on how much inventory is held. If there is a large amount of inventory, it could be an indicator that the product doesn't move swiftly. 13. What are the Costs? Does the product have a high margin that will increase as scaling happens? When the existing margin is high at the early stages, it's usually an opportunity for greater improvement in the future as the process is streamlined and production and fulfillment costs are reduced. 14. Past Debt/Cash. Not being a financial expert, I cannot explain the exact importance of these calculations, but this article offers a good basic understanding of the healthy financial ratios a startup should have if it is seeking outside investment. Suffice to say, if these ratios are severely out of whack, investors will likely steer clear. 15. Is There a Good Team? Having a team is very important to many investors. By having a team of any caliber, you prove that other people are confident enough in the idea to work on it full-time. Beyond having a team, it's important to have a well-functioning team. When proper delegation methods and the right expertise are mixed, it makes a fine concoction. A red flag is when all the decisions are being made by the CEO in a manner that takes away from his or her ability to focus on the growth of the company. The CEO shouldn't be deciding what type of snacks to have in the office.
https://medium.com/the-innovation/the-signs-investors-look-for-in-a-startup-231a1ca47331
['Silas Mahner']
2020-10-10 18:11:56.026000+00:00
['Investing', 'Funding', 'Venture Capital', 'Startup', 'Angel Investors']
Start fixing the news
It starts with learning a mutual language to speak critically and constructively about the news. Then creating big ideas. [Image: Cartoon for The New Yorker, February 2017] Do not expect to get out of talking about two big P's this year at the Thanksgiving table — Politics and Pandemic. But between mouthfuls of turkey, the conversation may turn towards an easier target. During these holidays, there will be much bullying of The News. The news and its distribution — with the spectacular wrench of social media — are often maligned for being overwhelming, inconsistent, fake, and unrepresentative, among many other ills. A summation of such gripes might be that modern news systems have left us ill-equipped to be informed citizens. And there is little denying that the information landscape is barely recognizable from the past decade. Pew research in 2020 shows that one in five adults get their political news primarily through social media. Willing or not, big technology companies have become intertwined with the creation and distribution of news. Therein the purpose of news perhaps faces its greatest challenge, but also its greatest opportunity. When we are truly troubled by the systems that create and deliver the news and find their flaws detrimental to the social construct, we can do something. Let's learn a mutual language to speak critically and constructively about the news, and then create big ideas to fix it. Here is where to start. For a foundational foray into how established journalists think about journalism, look no further than the work of insiders Bill Kovach and Tom Rosenstiel, The Elements of Journalism. The authors have done two things excellently. First, with a group called the Committee of Concerned Journalists, they conducted the most extensive user research study ever organized across the profession. Through twenty-one public forums, more than one hundred in-depth interviews, multiple surveys, a summit, and over a dozen studies, they gathered enormous amounts of qualitative and quantitative data about the news. Then the authors synthesized and organized that information on our behalf. As output, they offer a transcendent purpose and a framework of 10 principles to evaluate modern journalism as a newsperson or a citizen. Even dated (the latest revision was in 2003), the framework holds up well in the digital age. "The primary purpose of journalism is to provide citizens with the information they need to be free and self-governing" Let that sink in. One wonders whether the principles espoused in this book are heard outside the walls of journalism schools. To be useful standards, they should be. Agreeing or disagreeing with the principles, names, characters, and ideas espoused in The Elements of Journalism goes beyond this column's purpose. What the book does offer is an insider framework, a gateway to thinking about journalism more critically and constructively. So how do we fix things? Consider an important insight from the book. The Discipline of Verification, the third principle of journalism, comes from the same intellectual roots as the scientific method. Just as with scientific studies, it is important to create news stories methodically and ensure they are repeatable. Someone should be able to recreate the story with the same evidence. How might we make journalism as credible as scientific studies? As a flash-in-the-pan idea, think about how AI could help us do verification by creating an exhaustive list of questions to test the assumptions behind each story.
There could be a service for self-publishers (like myself) that tests assumptions so that writers can improve their own stories. This is the type of idea that can arise from breaking the news into its elements, then thinking critically and constructively. Elements are a starting point. We should choose to be optimistic that journalism's transcendent purpose can still be accomplished in modern society. That does not mean that the institutions that deliver it today should not change. Instead, we should boil that purpose down to its raw elements and not be afraid to design new systems that accomplish it better. Next up, let's talk about Big Tech. What role should Facebook, Twitter, and Google play as creators and distributors of the news? That is enough to make heads spin.
https://jrhaglund.medium.com/start-fixing-the-news-4056f6a367a6
['John Haglund']
2020-11-26 16:43:51.048000+00:00
['Technology', 'News', 'Digital', 'Ideas', 'Journalism']
Design Career: How to Get a Design Job You Dream About
Finding a job can be tough, especially for someone who doesn't have relevant experience. However, everybody has to start somewhere, and on the way to your dream job you should be prepared for the hiring process both professionally and mentally. The whole process of getting a job can be divided into three stages: research, application, and interview. Today we are going to give you some helpful tips for going through each stage successfully, whether you have a design background or not. Research Each important life decision is accompanied by research. When making changes in personal and professional life, we tend to analyze, investigate, and review facts. Applying for a job is not an exception, since it sets the direction of your future career. When starting your research, think of a company that would perfectly fit you as a person and as a designer. Don't associate this company with ones you already know or have worked at. Let it be hypothetical. Think of the size of this company, its values, its corporate culture, and its growth dynamics. Imagine what type of clients the company works with: private entrepreneurs, mid-sized businesses, or large corporations. Frame your place in this company and imagine yourself among teammates and team leaders. Think over the work process and the tracking, retention, and reward systems this company has. Try to picture yourself as a part of this company. This technique will help you to set your priorities and define the environment which can unleash your potential. Even if you don't have experience and are looking for your first job, don't think that you should take whatever you can get. Being selective is very important for your career success, even if the number of options is limited. Now that you know what you want to get from your future job, it's time to look at the options that the market offers. Don't go to the job boards first; start with a list of companies where you can imagine yourself as an employee. Make sure you check the career sections on their websites. If there are no openings you can apply for at the moment, make sure you send your CV to the HR department so when a new vacancy appears, recruiters can find your resume in their database. Looking through the websites of design agencies, pay attention to their blogs. There you can find a bunch of useful information to increase your chances of getting a job. Read case studies to see how their work is organized. Try on the role of the designer in charge of the project and think of the changes you could make to improve the final result. Practicing this can help you to look at your own projects from the outside and unveil gaps in your knowledge. Don't underestimate the power of social media networks when looking for a job. Follow the social accounts of the companies you're interested in to stay updated with the announcements they post. A lot of companies show their behind-the-scenes on social media. This will give you a better understanding of the corporate culture. One more undeniable benefit of social media is connections. Use your Facebook, Twitter, LinkedIn, Behance, and Dribbble accounts to connect with people who share valuable information. Make friends and don't be shy to ask for advice. The offline community deserves your attention no less than the online one. Take part in meetups, conferences, and panel discussions. These kinds of activities are a great way of making professional connections. Be active in a local design community, because you never know where an opportunity can come from.
Pay attention to internship offers when browsing websites and social media accounts. Internships are a win-win situation for someone without a professional background. Even if you don't get a job, you will get hands-on experience. You will learn a career field from the inside, work on real projects, gain industry knowledge from proficient designers, and establish a network of professional contacts. At the research stage, apart from looking for openings, you should really prepare yourself for a future job. Don't be passive; dive into the process like you've already got a job in a design agency. This approach will bring you significantly closer to the goal. Application Now that you've done your research and have some open vacancies in mind, it's time to talk about applying. At this stage, you need to think of three things: your resume, portfolio, and cover letter. Let's look closer at each of them. Resume Recruiters in large companies spend less than 30 seconds scanning a CV. So don't be wordy; keep your CV precise and clear. Start with the basics. No matter how creative you are at presenting information, you really need to cover the common sections: personal and contact information, work history, education and qualifications, professional skills, and achievements. Make sure you get rid of all unnecessary details. If you have a lot of training courses under your belt, highlight only the most important. If you had numerous responsibilities at your last job, don't try to list all of them. This will not help you to impress recruiters. Instead, focus on the experience that fits the requirements of the opening you are applying for. The section where you talk about your work experience is decisive. Share your accomplishments, not responsibilities. Pick the projects that have greatly influenced your career and highlight your role in those projects. Think of the strengths that are necessary for the job you want to get and mention them. For non-designers, we recommend using online services that provide CV templates to make the text structured and easy to read. But if you are applying for a job as a designer, you really want your CV to stand out. Make your CV noticeable yet informative. Be creative but don't overdo it. Think of your online CV too. LinkedIn is the most popular platform for applying and hiring. Make sure you have an account there and keep the information updated. Be where hiring managers are looking. Double-check your CV to be sure there are no misspellings or unnecessary details. Working on a document for a few hours, you can miss some obvious lapses, so ask someone to review your CV with a fresh perspective before you send it. Portfolio Even if you are trying to get your first job, you need to have some freelance projects or design concepts to show. Talking about your online portfolio, first you should pick a platform to present your works on. It's better to set up your own website. Register a personalized domain name, find a reliable hosting service, and use, for instance, WordPress to create a website. As for the content, make sure you include personal and contact information, add links to your social media accounts and online portfolios on other platforms, and start filling the portfolio with your work. It doesn't matter how many design projects you have completed; you need to pick only the best. Sometimes clients ask for solutions that may not be attractive or technically right from the design perspective.
Don’t showcase projects where you were not completely satisfied with the result. Your portfolio creates an impression of you as a designer, so include works you are truly proud of. Don’t stick to one form of design such as web design, illustration, or lettering. Mix your leading projects with self-initiated experiments to demonstrate your versatility. Even with the variety of projects, you still want your portfolio to look harmonious. Use a well-thought-out structure and background to ensure that all your works flow together nicely. Working on a portfolio design, think of it as a display case for your talents and hard work. Keep the design clean and simple so it doesn’t take away attention from your projects. Make the interface intuitive and easy-to-navigate; nothing should break the creative atmosphere. Don’t be limited to images of the final result. Work on case studies that include all the stages of your creative workflow. Describe the initial requirements and challenges you faced developing a design. Add text explanations and your thoughts that led you to the optimal solution. Your future clients and employers will want to know more about your approach to work, so go beyond the beautiful visuals and show the behind-the-scenes of your project. Think of your personal brand. Create a logo and identity to put on your business cards and resume. This is a nice way to show your skills at evolving and enhancing a person’s or company’s integrity visually. And last but not least, print your portfolio. Even though nowadays most designers use online platforms for showcasing their works, you may need to demonstrate your printed portfolio when meeting someone face-to-face. Cover Letter Unlike the resume where you mostly talk about facts, a cover letter is more of a personal appeal to the company you want to work at. A well-written cover letter can improve your chances significantly. So let’s look at some tips that can help you to strengthen your position among other candidates. First things first, don’t try to duplicate your CV. Consider the cover letter as a conversation with a hiring manager. Keep in mind that you are going to work for a business, so focus on the value you can bring to the company and benefits they can gain from hiring you. Emphasize the skills you have to embrace the challenges this position requires. Carefully look through the vacancy and the company’s website to figure out what kind of a professional and person they want to see in this position. Make sure the tone of your cover letter reflects the tone of the company. You may check some examples of good cover letters for inspiration, but don’t copy them. Your letter should be sincere and personalized. Try to avoid cliches and overused phrases that hiring managers are used to seeing in every single application. Use clear and straightforward language and don’t try to make your story long; instead, make it worth reading. The cover letter shouldn’t be exhaustive. Your task is to arouse interest and start the conversation with the hiring manager. Seems like it’s an allowance for writers, doesn’t it? Well, as a good professional you should work on your speaking and writing to succeed at every job you get. You should work on your so-called soft skills, and that’s what we are going to talk about next. Interview Praise yourself. An interview invitation is a huge step toward landing your dream job. It already means that an employer is interested in your skills based on your CV and portfolio. 
Let’s dive into practical tips that will help you to do your absolute best at the interview. Depending on the company, the whole interview process may include several rounds. For instance, you might interview with a hiring manager, lead designer, and CEO. You should prepare yourself for each round. That’s why you should find out who you are interviewing with and research them. You’ll feel more confident when you know what kinds of questions to expect: personal, technical, or business-oriented. Everyone is sick and tired of questions like “Where do you see yourself in five years?” However, hiring managers keep asking them. Think of your answers to the most expected questions and work on your reaction towards the most unexpected and uncomfortable ones. Some questions can be tricky, so take your time and think twice before you answer. What should you do when you don’t know an answer? This is primarily related to technical interviews. First, don’t panic. Remember that interviewers can ask questions that are out of your area of expertise or go beyond your experience. In this case, they don’t expect you to give the right answer; they want to see your actions in an unordinary situation. So if you don’t have an answer, tell how you could find it and describe a similar experience that could help you to come up with a solution. When a hiring manager asks you if you have any questions, please, don’t say no. Don’t be shy even if you’re looking for your first job. Demonstrate that you’ve done your research and understand the company’s culture. Ask the hiring manager about things that you really care about, like the workflow, compensation package, insurance, overtime, workspace, and so on. HRs are ready for these kinds of questions and they will appreciate your genuine attitude and serious approach. Remember that you’re making a choice too. Apart from the hard skills, the employer is going to assess your soft skills. They include communication skills, work ethic, critical thinking, leadership, ability to work in a team, and positive attitude. Don’t underestimate the importance of soft skills; they can be decisive in favor of one or another candidate. Never stop improving your soft skills as well as hard skills; this combination will give career prospects that you couldn’t even dream of. And of course, don’t forget the basic rules. Take care of navigation in advance, be there on time, pick an appropriate outfit, have your resume printed, and remember the importance of the first impression, so be open and friendly. Believe it or not, there are a lot of job opportunities out there waiting for you. All you have to do is be active and consistent on the way to your dream job. Keep in mind that applying for a job is a part of your success story, so be determined and inspired. Good luck!
https://uxplanet.org/design-career-how-to-get-a-design-job-you-dream-about-ef3676a36b7b
['Tubik Studio']
2018-09-12 10:00:56.491000+00:00
['Job Hunting', 'Web Design', 'UX', 'Design', 'Careers']
Pick The Right Plays — Using the Go-To-Market Framework
By Scott Wallask Today, businesses across the country are scrambling to adapt, doing everything they can to rework their operating models in weeks and months, not years. Even those at the top are always at risk. In 12 years, half the companies on the S&P 500 may fall off that list, according to industry research. Disruption and uncertainty are inevitable in these times, which is why a solid go-to-market plan is critical to address the unique stage and goals of any given company. So what components make up this effort? ZoomInfo has identified four areas — or quadrants — that together create a framework for a successful go-to-market approach: build loyalty, offer expansion, company transformation, and market expansion. In this piece, we'll briefly give an overview of these areas, with future articles delving deeper into the details. Build Loyalty According to HubSpot research, 93% of consumers will be repeat buyers at companies with excellent customer service. With stakes that high, it is perhaps ironic that the early steps of building customer loyalty start with the figurative eyes and ears of a company. "It's imperative to continually listen to what customers are saying, putting yourselves in the customers' shoes and thinking of ways to better enable them," Craig Williams, chief information officer at networking software company Ciena, told IDG Connect. It's also important to determine the lifetime value a customer brings and create loyalty efforts based on long-term revenue estimates. These fiscally minded approaches stem from the well-known statistic that it's at least five times more expensive to bring in a new customer than to retain an existing buyer (Forbes). In turn, customers who are loyal and dedicated to a brand are more likely to enjoy buying from a company. Notably, the pandemic has shown that companies who've managed to grow almost cult-like followings are more adaptable to change and more prepared to tailor offerings when the market shifts around them. Consider these points: Key metrics: customer churn and attrition rate, customer retention rate, lifetime value of a buyer (how much revenue a customer brings in the long term), and Net Promoter Score (a range that measures a customer's experience with a product; a quick calculation is sketched after this section). Job titles involved: executives, directors, and managers of user experience, customer success, customer enablement, and customer loyalty. Technology: sales and marketing intelligence platforms and customer relationship management systems. Go-to-market plays: upsell campaigns, Net Promoter Score campaigns, customer referral campaigns, loyalty program rollouts, and automated customer service surveys. Creating a customer loyalty program to motivate repeat business is a solid step to boost allegiance. Further, encouraging positive user reviews of a product serves up testimonials that often play strongly with other buyers.
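As a quick, hypothetical illustration of the Net Promoter Score metric above (not from the original article), the score is the percentage of promoters, those rating 9 or 10, minus the percentage of detractors, those rating 0 through 6:

def net_promoter_score(ratings):
    # Ratings are 0-10 survey answers to "How likely are you to recommend us?"
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10]))  # roughly 14.3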
Offer Expansion Broadening what a company offers — whether it's a new product, additional features, or technology integrations — brings in more buyers. Even during economic uncertainty, opportunities exist in new product development. "If your competitors suspend product development and you don't, you have a great chance to either catch up with them or further your lead in the market," wrote Carl Erickson, founder of Atomic Object, a custom software developer. When new products and expanded features sync with market need, they will likely generate additional revenue from customers. Doing so may also offset declines in aging products a company sells. Note these aspects: Key metrics: how a product rates against similar competitors, how buyers use a product, and how many integrations a product has. Job titles involved: executives, directors, and managers of product development, product marketing, marketing strategy, sales intelligence, marketing intelligence, and competitive analysis. Technology: market intelligence and sales intelligence platforms, software that analyzes sales calls, and tools that track how customers use a product. Go-to-market plays: add-on campaigns, cross-sell campaigns, in-app message campaigns, user research, and product launches. The success of offer expansion can increase if companies conduct a competitive analysis of the strengths and weaknesses of other products in the market. Customer interviews are also beneficial. Market Expansion In March 2020, TentCraft — a company that manufactures custom tents for events like festivals and concerts — got hit hard by the spreading pandemic. TentCraft's core customers stopped buying. But company president and founder Matt Bulloch saw an opportunity in a new market: outdoor tents for drive-up COVID-19 testing. "I told our employees we are going to completely retool the company to support the healthcare system," Bulloch said. "There was only one problem: We've never actually sold to hospitals." Using data culled from ZoomInfo's sales intelligence platform, TentCraft was able to steer into new areas of the company's total addressable market. This concept represents the full set of companies or consumers that could become customers, or the total revenue a product could possibly generate. Breaking into a new market can increase revenues while also building a company's recognition and reputation. Meanwhile, buyers who are not familiar with a company's products or services can benefit from market expansion as the company increases its reach. Think about these details: Key metrics: demographic, firmographic, and technographic attributes from revenue-generating customers; and the cost to acquire a new customer.
Job titles involved: executives, directors, and managers of revenue operations, product marketing, marketing strategy, sales intelligence, and marketing intelligence. Technology: sales intelligence platforms and buyer intent software. Go-to-market plays: cross-sell campaigns, account-based marketing campaigns, and demand generation campaigns. To expand into new markets, an old strategy is helpful: develop ideal customer profiles based on available data to focus on those most likely to convert. Other key data includes accurate contact information for top prospects and buying signals from potential customers interested in a product. Company Transformation Even for top companies, the market can be unpredictable; one day a rock-star firm is riding high, yet by the next decade it could be gone. "The 33-year average tenure of companies on the S&P 500 in 1964 narrowed to 24 years by 2016 and is forecast to shrink to just 12 years by 2027," according to research from Innosight, a business strategy consultant. "The turbulence points to the need for companies to embrace … transformation, to focus on changing customer needs, and other strategic interventions," Innosight added. Company transformation is a fundamental change in how a firm conducts business or sells its products. This evolution occurs because of an acquisition, new innovation, a shift in customer demand, or even an unexpected upheaval in the economy or society. An opportunity to outpace competitors is often a prime motivation for business transformation, as is the potential to fill a void in a market. Such transformation benefits buyers, as it can expose existing customers to new products they were not aware of and bring a new audience to a company's product line. The following points explain the concept briefly: Key metrics: cross-selling revenue, partner reselling rates, and investments in new technology related to product sales. Job titles involved: executives of business strategy, growth, and business transformation. Technology: automation platforms, machine learning, analytics software, and cloud systems. Go-to-market plays: upsell campaigns, cross-sell campaigns, service-level agreements between departments, and rebranded product launches. Business transformation moves more smoothly if marketing and sales teams can cooperate easily without departmental borders impeding efforts — the 50-cent phrase "breaking down silos" comes to mind. Also, an agile approach to change allows companies to make adjustments more quickly. Framework Sets Go-to-Market Direction In this era of economic and social uncertainty, go-to-market plans become even more important given that the hurdles are higher — or even changing — for businesses to survive.
A company’s goals in loyalty, product expansion, transformation, or new markets can be mapped out ahead of time within a framework that makes go-to-market motions smoother. Follow our go-to-market series. In our next article, we talk about moving from a go-to-market framework to strategy. Scott Wallask is a longtime content writer; seeking stories flowing from data with a dash of skepticism; Northeastern grad
https://medium.com/the-innovation/pick-the-right-plays-using-the-go-to-market-framework-6237c7c163a
[]
2020-09-02 07:33:57.818000+00:00
['Technology', 'Covid 19', 'Sales', 'Marketing', 'Business Strategy']
Libido Cannot Exist in a Vacuum
Few things are as surprising as feeling randomly horny on the yoga mat. Especially if your marriage has been a dead bedroom for years, you're in the throes of major depressive disorder, and your life is as empty as can be. Back when I was still practicing yoga daily, sexual urges would occasionally taunt me, an unwelcome reminder of the dearth of human warmth in my life. They'd happen out of the blue as a tingle in the nether regions that would soon dissipate, but not before sowing confusion. While it would be easy to blame mental illness for killing my sex drive, this isn't the whole story. Because my being ill had grown into a constant source of resentment at home, any attraction I initially felt toward my partner vanished. You can't be drawn to someone who begrudges you an illness that is constantly threatening to kill you because they think you just want an easy life. They didn't seem to like me much anymore or have any interest in me as a fellow human, wife, or indeed sexual partner, so sex eventually became an abstraction. It turned into a conceptual notion I set aside in the hope I might revisit it some day, perhaps. And if not, then not. When sex departed from my life, I didn't run after it, all the more as I had had an interesting, varied, and satisfying sex life before I got married. It was just another of the many things that fell by the wayside and contributed to the slow and steady erosion of the self.
https://asingularstory.medium.com/libido-cannot-exist-in-a-vacuum-63ca95c386ab
['A Singular Story']
2020-07-14 10:22:44.614000+00:00
['Relationships', 'Sexuality', 'Mental Health', 'Sex', 'Self']
Diversification is Key to Successful Freelancing
As a freelancer, it's tempting to rely on one stream of income. But it's a mistake. The definition of freelancing does not include working for free, being overworked, working on projects with no concrete requirements, and working perpetually outside your comfort zone. After becoming a freelancer, and then trying to grow myself into an entrepreneur, I can honestly say that your mindset is the key to growing your business. In this mindset, you have to put diversification at the center. Diversification is everywhere in running a business: the contracts you sign, the people you hire, the technology you use, and even the connections that you make. When you pigeonhole yourself into one income stream, one platform, or one type of work, things inevitably go awry. For a large part of last year, I tried making a full-time income on Medium. This year, I gave up on this idea. Now, I'm content with my "basic income" from Medium. I still try to grow it with as much time as I can spare. But I've moved on to other projects. We are in an environment where growth is central to whether or not you can be successful. Every day, in the morning, as I rush through a day's work, I recognize that my afternoon will be spent planning, plotting, checking over my work, and scouting for new clients, new money streams, and new ventures. This is the rhythm of a freelancer or an entrepreneur. It's the relentless beat of marching forward despite the downs, the hurdles, and even the mountains. There's always more. There are always more people to connect to. There are always more conversations to be had. There are always more projects to be worked on. There are always more ideas to be generated. There's more life to be lived. This year has already been challenging. But I've just gone through a month of tremendous growth, and I'm ecstatic to move on to the next month. Often, it seems my head runs away with me and I feel like I should accomplish more. But that's just my impatience talking. I've grown steadily. I'm climbing bigger mountains, with some rations and supplies now. My footing feels a little surer. I can see the terrain a little clearer now. Will my landscape change tomorrow? Probably it will. The thing is that comfort is what holds most entrepreneurs back. That's the skill I had to painstakingly learn: risk-taking is actually the most valuable skill to have in an entrepreneur's toolbox. Being able to deal with risk, manage risk, and have the mindset to take healthy risks every day to move forward is not easy. These days I find myself asking people around me, "Should I…..?" Inevitably, I answer my own question. The minute I ask this question, I know the answer will be "Yes." The art here is to try as many ideas as you can in the shortest span of time to see what works. Then, you move on. When you are comfortable, you continue with this cycle again. You try new ideas until some bear fruit. Then, you move on. Unlike a researcher, you are not searching for the right answer. You are developing a way of life that is your business. You simply have to grow organically and see what you end up with. If you ask me for my vision, I can tell you abstractly. But ultimately, the organic way that my business runs day-to-day will determine the vision in the end.
https://medium.com/jun-wu-blog/diversification-is-key-to-successful-freelancing-415103894034
['Jun Wu']
2020-02-21 15:31:08.477000+00:00
['Writing', 'Self', 'Leadership', 'Diversification', 'Freelancing']
Text Analytics on ‘Friends’ TV Series — 10 Seasons
If you are a fan of the famous TV show 'Friends', you must have found yourself arguing regularly — who is the most important character? Who is the most complex? Which characters were close? Who was the most positive or negative character? I am sure we all have opinions, but the only justified answers can come from data. As a result, I decided to apply the text analytics and web scraping principles I have learned through my Master's at UCLA Anderson School of Management to analyze my favorite TV series, with 58,251 dialogues and 228 episodes spread across 10 seasons. I found the following website which contained the dialogue scripts for all 10 seasons of the series. Now I will walk you through the details of the project. In terms of technical tools, I used a Python 3.7 environment in Jupyter notebooks (I have shared the GitHub link of the notebook at the end of this article). In this article, I have shared the overall steps on a broad level and focused mainly on the process and the results. Feel free to check out the GitHub link to follow the Python code in detail. Overall, there were three key steps in the process: 1. Scraping the data from the website 2. Latent semantic and textual analysis 3. Extracting the key insights Let's talk about each of the steps one by one. 1. Scraping the data from the website I used the 'requests' and 'BeautifulSoup' libraries in Python to extract the data from the website and store it in a 'pandas' dataframe in a structured manner. At the end of this exercise, my data looked like this: Figure 1: Initial dataframe 2. Latent semantic and textual analysis In this step, I used the 'gensim', 'TextBlob', and 'nltk' libraries to stem and lemmatize the data, and performed a lexical and semantic analysis to extract meaning from the dialogues. Overall, this step included filtering dialogues by the 6 main characters, a topic modeling exercise to understand character complexity, an affinity analysis to understand each character's relationship with the others, a dialogue frequency analysis, and sentiment analysis. Those interested in understanding the concepts behind latent semantic analysis — especially topic modeling — can take a look at this excellent Datacamp tutorial. After filtering the dialogues by character and pre-processing them to lemmatize them and remove stop words, I used the 'gensim' library to identify the number of topics for each character. After the coherence score analysis, the ideal number of topics for each character was identified. I named this analysis for each character the thought diversity score. The result is somewhat unexpected (we will discuss more on this in the insights section): Figure 2: Thought diversity score Using the 'gensim' library, I extracted the topics for each character and mapped the character associations with each other. Basically, I calculated the frequency of appearance of a character in each other character's topics. I then prepared a heatmap of this affinity analysis using the 'seaborn' library, which looked as follows: Figure 3: Affinity analysis The next analysis was simple: I calculated the dialogue frequency for each character, which looked as: Figure 4: Dialogue Frequency The last step of the analysis was sentiment analysis. In this step, for each character, I determined the percentages of negative, positive, and neutral dialogues using the 'TextBlob' library. The result looked like this: Figure 5: Sentiment analysis
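Before moving on, here is a rough sketch of the coherence-score search behind the thought diversity score described above. It is an illustration under assumptions, not the notebook's exact code, and docs stands in for one character's tokenized, lemmatized dialogues:

from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# Fit LDA for a range of topic counts and record the c_v coherence of each.
scores = {}
for k in range(2, 11):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=42)
    cm = CoherenceModel(model=lda, texts=docs, dictionary=dictionary, coherence="c_v")
    scores[k] = cm.get_coherence()

# The topic count with the highest coherence becomes that character's score.
best_k = max(scores, key=scores.get)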
Now, this brings us to our last step. 3. Key Insights Using the thought diversity score, we can clearly say that 'Rachel' was the most complex character. Hard work indeed for Jennifer Aniston. The affinity score map clearly shows that Ross and Rachel had extremely good chemistry and a strong top-of-mind presence in each other's conscience. This was evident throughout the show. Dialogue frequency shows Joey as the dominant character. This aligns well with the character's popularity, as a TV series solely focused on 'Joey' was launched right after 'Friends'. Though that didn't do well — probably scope for another project. 'Phoebe' was the most positive character in the show, while 'Monica' was the most negative. This aligns a lot with the views of my friends who have watched the show multiple times and are huge fans. Overall, it was a fun project to work on, where I applied both web scraping and complex text analytics skills to understand my favorite TV series even better. Moreover, I believe this is a pertinent skill set not just for data scientists or analysts; even product and marketing managers can leverage it to quickly grasp the voice of customers and act on insights. We live in the information age, where it's hard to keep track of customers' and users' voices and opinions across multiple channels and in huge volumes. Every product-led company — whether B2B or B2C, tech or non-tech — needs to capture user voices and analyze them to generate actionable insights. This is where web scraping and lexical text analytics become useful. For my geeky friends who want to explore the code, feel free to access my Jupyter notebook for this project from here, and if you have any questions or feedback, feel free to get in touch with me here.
https://medium.com/swlh/text-analytics-on-friends-tv-series-10-episodes-42875804f3a6
['Apoorva Mishra']
2020-11-26 05:51:13.521000+00:00
['Python', 'Friends', 'Data Science', 'Web Scraping', 'Text Analytics']
An End to End Introduction to GANs
Generator architecture One of the main problems we face with GANs is that the training is not very stable. Thus we have to come up with a generator architecture that solves our problem and also results in stable training. The preceding diagram is taken from the paper, which explains the DC-GAN generator architecture. It might look a little bit confusing. Essentially, we can think of a generator neural network as a black box which takes as input a 100-dimensional vector of normally distributed numbers and gives us an image: How do we get such an architecture? In the below architecture, we use a dense layer of size 4x4x1024 to create a dense vector out of this 100-d vector. Then, we reshape this dense vector into the shape of a 4x4 image with 1024 filters, as shown in the following figure: We don't have to worry about any weights right now as the network itself will learn those while training. Once we have the 1024 4x4 maps, we do upsampling using a series of transposed convolutions, each of which doubles the size of the image and halves the number of maps. In the last step, though, we don't halve the number of maps; we reduce them to 3 maps, one for each RGB channel, since we need three channels for the output image. Now, what are transposed convolutions? In the simplest terms, transposed convolutions provide us with a way to upsample images. While in the convolution operation we try to go from a 4x4 image to a 2x2 image, in transposed convolutions, we convolve from 2x2 to 4x4, as shown in the following figure: Upsampling a 2x2 image to a 4x4 image Q: We know that un-pooling is popularly used for upsampling input feature maps in convolutional neural networks (CNNs). Why don't we use un-pooling? It is because un-pooling does not involve any learning. Transposed convolution, however, is learnable, and that is why we prefer transposed convolutions to un-pooling. Their parameters can be learned by the generator, as we will see shortly. Discriminator architecture Now that we have understood the generator architecture, here is the discriminator as a black box. In practice, it contains a series of convolutional layers and a dense layer at the end to predict if an image is fake or not, as shown in the following figure: Takes an image as input and predicts if it is real/fake. Every image conv net ever. Data preprocessing and visualization The first thing we want to do is to look at some of the images in the dataset. The following are the Python commands to visualize some of the images from the dataset: The resultant output is as follows: We get to see the sizes of the images and the images themselves. We also need functions to preprocess the images to a standard size of 64x64x3, in this particular case, before proceeding further with our training. We will also need to normalize the image pixels before we use them to train our GAN. You can see the code; it is well commented. As you will see, we will be using the previously defined functions in the training part of our code.
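As a rough illustration of the generator just described, here is a minimal Keras sketch; the layer sizes follow the text above (4x4x1024 up to 64x64x3), but this is an illustrative reconstruction, not the author's exact code:

from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    return models.Sequential([
        # Project the 100-d noise vector into a 4x4x1024 tensor
        layers.Dense(4 * 4 * 1024, input_shape=(latent_dim,)),
        layers.Reshape((4, 4, 1024)),
        layers.BatchNormalization(),
        layers.ReLU(),
        # Each transposed convolution doubles H/W and halves the number of maps
        layers.Conv2DTranspose(512, kernel_size=5, strides=2, padding="same"),  # 8x8x512
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(256, kernel_size=5, strides=2, padding="same"),  # 16x16x256
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Conv2DTranspose(128, kernel_size=5, strides=2, padding="same"),  # 32x32x128
        layers.BatchNormalization(),
        layers.ReLU(),
        # Last step: 3 maps, one per RGB channel, squashed to [-1, 1]
        layers.Conv2DTranspose(3, kernel_size=5, strides=2, padding="same",
                               activation="tanh"),  # 64x64x3
    ])

Calling build_generator().predict(noise) on a batch of normally distributed noise vectors then yields 64x64x3 images.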
https://towardsdatascience.com/an-end-to-end-introduction-to-gans-bf253f1fa52f
['Rahul Agarwal']
2019-12-25 11:39:06.748000+00:00
['Programming', 'Towards Data Science', 'Data Science', 'Artificial Intelligence', 'Machine Learning']
We’ve Got to Start Believing in Our Own Greatness
In the last few months, I've stunted my ability to dream of the next big milestone for myself. My heart and soul have been trying to wander and explore, and I've been reining them in, telling them they need to be realistic. I have a business to run. We're making sh*t happen for our clients. Whether you like it or not, we've got to sit down and do the damn work. I'm sure it's no surprise to anyone that it didn't take long for this poor attitude to really take its toll. I started getting restless, and antsy, and nervous about the way my business currently stood. I want more for myself — if only I have the courage to make it happen. Because the truth is, I'm afraid of my own greatness.
https://medium.com/the-ascent/weve-got-to-start-believing-in-our-own-greatness-bd52b8c04bb2
['Gillian Sisley']
2019-07-19 15:05:54.869000+00:00
['Life Lessons', 'Writing', 'Success', 'Work', 'Inspiration']
Rethinking Testing Through Declarative Programming
Expressing Tests With DSLs Let's say the main idea up to this point was to make tests declarative by expressing them in terms of data instead of procedural code. We declare context, inputs, and expected outputs as data. This applies to both tablelike tests and doTest styles. Given that, we've found that in some domains, expressing those inputs or outputs can be very hard. It might involve a lot of code, which hurts test readability. (The whole point of being declarative is to improve the tests' readability as much as possible, as in Edward Tufte's Data-Ink ratio when applied to coding.) Let's see some real examples. Example 1: A rich text editor selection and entities DSL This first example tests the isWithinEntityMatch function, which, given DraftJS text editor content, tells us if the current selection is within a given entity. DraftJS is a framework for building rich text editors in React. We've used it for game dialogues. In this case, an entity is what we call a markup, like an inlined note within a dialogue, delimited by curly braces: {MOVE_CAMERA ... } . Here's how it looks from the UI: The problem is this text is actually a pretty complex DraftJS model object, involving ImmutableJS- and DraftJS-specific models. So we need to improve the way we create those inputs or contexts to also make them declarative and readable. We could use util functions, factories, builders, and, at the end of the spectrum, create a domain-specific language (DSL) just for our specific concern. In our case, we came up with this tablelike test: The text is actually a very small internal DSL using just regular strings and conventions through symbols: {something} : Curly braces for entities (same as the user types). [|] : Means the user cursor is at that specific position (the collapsed selection). [> ... >] : Declares an expanded selection. That is, the content within it is currently selected from left to right. The cursor, by definition, is on the right side. [< ... <] : The same as above, but the selection is from right to left, and the cursor is on the left side. (The selection direction is pretty important in text editors.) The impl of expectIsWithin can be found here with the DSL parser. I'm not inlining it because it's a lot of code for just an impl. But take a look, and imagine if each test case needed that much code to create the input. It'd be really difficult to read and maintain! Example 2: Undo/redo logic Another real example: a function to compute the undo-redo stack of an application. It's a pure function, in this case a reselect selector. That's a function that derives some data from a (redux) app state. In this case, the state has a list of changes that were done. A change could be: A regular change : For example, A (we assign names for the sake of test readability). An undo : Reverts a change. We use the notation U(A) . A redo : Redoes a change. We use R(A) .
Same as with the DraftJS example, there's a long distance between these concepts when thinking about a test — like "Let's test changes A, B, C; undo C and B; and see where the cursor is" — and what we actually need to code to create that scenario. Expressing the inputs/context would involve a lot of code. So we, again, create a small, internal, string-based DSL to be able to express cases with a very compact syntax. Test tables and DSL for building complex input objects. Every string in the array is a test case expressed in a small, simple DSL as a notation. We actually just ended up transcribing the same notation we used on a whiteboard to investigate the problem and come up with a solution. It reads like this: // [latest_change ← first_change] => expected_stack [ R(B), U(B), U(C), C, B, A ] => [C, (B), A] The user did a change, A , followed by changes B and C . Then they undid C and B , but right after this, they redid B . So where are we? [C, (B), A] This means that A is applied, C isn't (it's been undone), and we're currently at B (also applied). From this point, we can undo A or redo C . Both inputs, as well as the expected outputs, are expressed as a string DSL, but the underlying model consists of complex data structures that would've made the test difficult to read. Here is the impl of parseInput , which, in turn, has its own tests. So to make better tests, we had to create a language, a parser (although this was pretty simple), and tests for that parser. Imagine if this were easier to do out of the box with testing tools? The DSL allows us to get rid of a lot of boilerplate code and just distill the meaningful part for the test. This makes it easier to think about missing cases, redundant cases, etc. — especially by others, in code reviews.
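To give a flavor of how small such a parser can be, here is a rough sketch in Python (the original project is JavaScript; the function name mirrors the article's parseInput, but the implementation is purely illustrative):

import re

TOKEN = re.compile(r"([UR])\((\w+)\)|(\w+)")

def parse_input(case):
    # Turn e.g. "[ R(B), U(B), U(C), C, B, A ]" into (kind, name) events,
    # ordered from latest to first, mirroring the notation above
    events = []
    for match in TOKEN.finditer(case):
        op, op_target, plain = match.groups()
        if plain:
            events.append(("change", plain))
        elif op == "U":
            events.append(("undo", op_target))
        else:
            events.append(("redo", op_target))
    return events

assert parse_input("[ R(B), U(B), U(C), C, B, A ]") == [
    ("redo", "B"), ("undo", "B"), ("undo", "C"),
    ("change", "C"), ("change", "B"), ("change", "A"),
]

The point is not the parser itself but the ratio: a dozen lines of parsing buy every test case a one-line, whiteboard-style notation.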
https://medium.com/better-programming/rethinking-testing-through-declarative-programming-335897703bdd
['Javier Fernandes']
2020-11-20 15:47:34.238000+00:00
['Nodejs', 'JavaScript', 'React', 'Testing', 'Programming']
The Cryptocurrency Version of eBay — Homebase For Online Sellers
Like eBay, Junktion shares in the mission to discover great "value and unique selection" for its customers. Junktion is building a cryptocurrency marketplace for your items that eBay sellers, in particular, will love. Hop in our Discord to tell us what features to build first, or to get your questions answered. This article's purpose is to give potential sellers a better idea as to what we are building. In plain terms, we are the cryptocurrency version of eBay. Think of something similar to eBay, but with more personalization. We also save sellers upwards of 13% on fees. eBay sellers now have an outlet that doesn't take over their business. Junktion is the bridge between physical items and digital assets. We are recruiting eBay sellers to sell on our platform. We are recruiting people who wish to sell and list their items on our eBay-like platform that is currently being developed. If you are any type of online seller and sign up on this form, you are eligible to earn some of our scraps tokens. The users who sign up now will be considered our beta users. We will heavily promote their accounts and do our best to make sure they succeed on our platform. Our vision Junktion is built by an ex-eBay seller looking to build a better online marketplace. For sellers, by sellers. We envision an open online marketplace where people can sell their items for any currency they wish, fiat or crypto. A place where your items are loved and treated as a treasure, because you want to see them make it out into another cycle of buying and selling. A circular economy where people can constantly trade in old items for new ones. A place where your things can call home. We ultimately foresee every item somehow associated with blockchain technology, and Junktion is leading the way. You will be able to find the exact item you sold years ago using the technology we are creating. We foresee a more digital and open economy. To an outsider, our e-commerce marketplace is a combination of eBay and CryptoKitties. What we are building is the future of e-commerce. Our online marketplace will have blockchain-rich, futuristic features that online selling platforms such as eBay and Amazon couldn't have dreamed of. We are willing to build exactly what consumers want and not what is best for us. That is why we are a zero-fee online marketplace focused on the user experience. How we are using blockchain technology Junktion is a company that is utilizing the technology in the Ethereum blockchain to enable a peer-to-peer e-commerce marketplace that will provide the best user experience for customers, because we allow the items sold on our site to tell a story. Each and every item sold on the site has a unique story associated with it. You and I might both own iPhones, but each is different. The same make and model is uniquely different because of the owners. We allow users the ability to track the items that they formerly owned and have sold on our eBay-like marketplace. We want to create the features that eBay sellers want on our platform. We care about your items and want to make sellers the most important part of our business. Join the conversation in our Discord. We plan on incorporating USD on our eBay-like dapp, but we also want to support the technology that we are using by accepting Ethereum. For a short time, Ethereum will be our main currency. We do have our own cryptocurrency that we will give to anyone who wants to be an early adopter.
An early adopter is someone who is serious about selling online and will list a few or more items on our eBay-like marketplace. If you already sell on a platform such as eBay, Amazon, or Etsy and are looking for another sales channel, do not hesitate to sell on our marketplace or even to sell your goods directly to us. Junktion vs. eBay The future of selling is hands down Junktion. Sites like eBay that incorporate a blockchain twist are the future of online selling. We are building a home for people who are looking to make more money with their online store that is perhaps on a site like eBay. We are developing incredible technology that will allow users to have a better experience on both sides. There are two transactors in a marketplace: a buyer and a seller. eBay heavily values the buyer in all transactions. Junktion will be focusing on sellers and building out our platform for sellers. We want to allow users to build their business on top of us as quickly as possible. According to the United States, the backbone of an economy is small business, not the people who buy from them. Why should it be backwards on online marketplaces like eBay and Amazon? With the launch of this article, we are trying to organize our community of sellers in a public Junktion Discord to discuss different issues and features, and to build a community around our open online marketplace. Filling out our form and joining our Discord are the number one ways for online sellers to make their voices heard in the development process of a far superior eBay dapp. What is a dapp? eBay is what you would call a web app. It is an application used on the web. A dapp is a decentralized application. All this means is that it communicates with the Ethereum blockchain. A different online marketplace We plan to use Ethereum to allow users to track where the goods they are sending go. For example, it would be really neat to know that an item you are emotionally attached to and sold has lived another good life with another owner, until they sold it to yet another owner 5 years later. If this feature seems like something you would be interested in, please sign up for our newsletter at the bottom of our site. We will go into more detail about it as development progresses. Defining our marketplace dapp Our marketplace dapp has a bright future because we understand that the backbone of an online marketplace is the sellers. Our online marketplace will focus on making sure that online store owners are able to succeed. We want to give people a platform to create a store, a brand, and a living. We want users to build their business on top of us and allow the average person to successfully declutter, move on, and ultimately sell their items. Sites Like eBay Sites like eBay pop up from time to time, but the truth is none of them has half the potential of Junktion. I say this because our site is built for you: the person who hustles online, a seller on these online marketplaces. The blockchain solves a lot of logistical, monetary, and automation issues that other sites similar to eBay will never solve. Only large organizations have access to the blockchain e-commerce solutions available right now. Blockchain solutions for online marketplaces are going to make a huge impact on the world and the price of crypto. We are the blockchain website like eBay that is here to innovate e-commerce and change the way we think about items and the things we own.
Fee structure eBay has a broken fee structure and very much favors the buyer in all transactions. If you wanted to transfer the funds you have from eBay to buy crypto, you would pay an insane amount of fees. It would look something like this. eBay forces unneeded fees on the seller. That is a fundamental issue that Junktion is going to fix. eBay bullies its platform users into multiple fees that they do not need and should get for free. Junktion will be a zero-fee platform. The only fees people are going to be paying are the Ethereum gas fees for transactions. eBay charges sellers small fees for features that are free on Junktion Bitcoin, Ethereum, and cryptocurrency marketplaces A bitcoin marketplace is an e-commerce site that accepts bitcoin as a form of payment. We are not focusing on Bitcoin as our currency. Our main currencies, for now, are USD and Ethereum. A Bitcoin marketplace tends to have goals that align with decentralizing trade. That is not our goal. We want to create a fee-free marketplace for entrepreneurs to build a business on top of. There are other blockchain- and bitcoin-related e-commerce marketplace startups being created that are far less advantageous than Junktion. A lot of these projects are aiming to decentralize trade. That is something that Junktion wants to avoid. We want to be as compliant and user-friendly as possible. Creating a free online marketplace doesn't require overthrowing governments. Decentralized marketplaces are a haven for fraud, illegal activity, and scams. We want no part in them. We are not drug dealers and will not have drugs or the other kinds of goods that decentralized exchanges attract. We are going to be focusing on compliance, transparency, and user experience.
https://medium.com/hackernoon/the-cryptocurrency-version-of-ebay-homebase-for-online-sellers-a4ccf69b9034
['Patrick Manfra']
2018-06-26 21:26:26.160000+00:00
['eBay', 'Online Business', 'Startup', 'Cryptocurrency', 'Bitcoin']
Prove Trump’s conspiracy theory right: Use mail-in ballots
Prove Trump's conspiracy theory right: Use mail-in ballots That the United States Postal Service was overlooked in the $2 trillion stimulus package was not an accident Photo credit: Pope Moysuh/Unsplash Update on April 16, 2020: Exactly what I did not want to happen did indeed happen to an Election Judge. On April 1, 2020, a Chicago Election Judge (17th Ward) died from coronavirus 15 days after he worked a South Side polling place on Election Day. Rest in peace to Revall Burke. Imagine being $11 billion in debt, and a company says they'll allow you the flexibility to borrow up to $10 billion more from the federal government. If you do so, now you're $21 billion in debt because debt forgiveness was not even a thought in their minds. Even during emergencies, you're on your own. That appears to be what's happening with the United States Postal Service, which was not included in the $2 trillion coronavirus stimulus package. Donald Trump — whom African-Americans, Hispanics and Asians all voted against in much larger numbers (74 percent) to keep him from ever making it to the White House — is more concerned about whether a Republican can stay in the White House than about protecting mail carriers. Now we could all argue that the postal service was already in trouble due to the popularity of instant messaging, text messaging, e-mail and other smartphone technology. It happens. The odds of someone receiving a handwritten letter in cursive have dwindled considerably, although we cannot sleep on the Hallmark customer who is determined to keep the tradition going. But what stands out in the argument against debt forgiveness for the postal service isn't that part. It's what the coronavirus disease 2019 (COVID-19) denier in the White House took issue with in an original version of the stimulus package: the GOP is picking an argument with mail-in voting. According to the man in the White House — who believes his Facebook following and his past of banging models matter as much as the infection rate of 277,205 people in the United States (6,593 of whom died) and 1,051,635 worldwide — if mail-in voting becomes the norm, "you'd never have a Republican elected in this country again." Photo credit: ArtTower/Pixabay That's right. Donald Trump is more concerned about whether a Republican can stay in the White House than about protecting mail carriers, delivering essential goods to U.S. recipients, and providing a safe way to vote that helps voters stay in social isolation. Unfortunately, this may not be surprising to the 74 percent of 2016 voters who voted against him: African-American men (82 percent); African-American women (94 percent); Latino men (63 percent); Latino women (69 percent); and other minority groups, including Asians (61 percent). But it should certainly be a wake-up call to his supporters. You know where else minorities tend to be employed in large numbers? At the post office. But oddly enough, Trump's decision doesn't even back up his voter base. White men (62 percent) and white women (52 percent), who voted for Trump in large numbers, aren't even protected under his decision to put voters and mail carriers at financial and health risk. According to Deloitte's Data USA, of the 330K in the postal service workforce, 67.9 percent of postal service mail carriers are white. Additionally, 193K of the workforce is male. (The second most common race or ethnicity is black — 19.9 percent.)
Considering he is still convinced that voter fraud is a rampant problem — never mind the number of people who just flat-out refuse to voluntarily vote — he has really convinced himself that people voting from the privacy of their homes will make sure that the GOP is dismantled. However, according to Business Insider, "evidence about non-voters doesn't support Trump's assertion that higher voter turnout would automatically benefit Democrats." And while the youth (ages 18 to 29) and people who have not earned a high-school diploma get picked on as being non-voters (some of which is justifiably true), there's another concern that the Mexico Border Builder is more than likely worried about. Time reports that 5.8 million Hispanics will be eligible to vote in 2020 compared to 5.2 million in 2016. According to Time, "That's a nearly 2 percentage point increase when accounting for overall population growth." It's not hard to believe that the last thing Trump would want is a bunch of fresh, new voters having the ability to safely vote from the comfort of their homes — without being distracted by social isolation, cloth face masks and staying six feet away from the next voter. Somehow though, he keeps overlooking that he's putting his own MAGA supporters at risk of spreading or being infected by COVID-19 while standing in long lines for in-person voting. Unlike him, COVID-19 discriminates against no one. As long as COVID-19 deniers with this mentality are the main ones willing to show up to the polls, he knows he'll win. But if state governments hand over mail-in ballots to those who were paying attention to Li Wenliang in December 2019, Trump knows he's at a disadvantage. And if you, the voter, are already pondering why the rest of the United States doesn't run election seasons as mail-in-only states the way Colorado, Hawaii, Oregon, Washington and Utah already have been doing, Trump already thinks you're the kind of "crazy" voter he described on "Fox & Friends." While there are limited options for those who are now unemployed, underemployed or desperately seeking employment with essential retailers who are hiring in droves after COVID-19 hit the U.S., there is one $0.00 thing that all voters can do when it comes to their election seasons. Try as best as you possibly can to participate in mail-in voting during your primary election season and/or during Election Day 2020. If ever there was a time to not ignore your right to vote and to stay safe in social isolation, this is it. And if you ignore another election season altogether and believe "it doesn't do anything," understand that you will be a part of the problem if Trump is re-elected. If you've got time (and money) to binge-watch "Tiger King" or watch Instagram battles with songwriters and producers, you've definitely got time to fill in a few circles on a ballot!
https://medium.com/i-do-see-color/prove-trumps-conspiracy-theory-right-use-mail-in-ballots-7870f96dcc36
['Shamontiel L. Vaughn']
2020-04-16 18:50:59.976000+00:00
['Post Office', 'Covid 19', 'Election 2020', 'Coronavirus', 'Stimulus Package']
tsconfig.json demystified (Part III)
We dove into modules in Part 1 and explored the various module systems — AMD, CommonJS, the new ES6 system, and the CommonJS + AMD hybrid called UMD. One "system" still used in the browser that must also be included in this exploration is the old-fashioned, pre-module way: attaching things to the global scope by including <script> tags that execute JS code that populates the global scope. A JS file that only has const fancyString = 'fancy' will create a global variable called fancyString that is accessible across the whole app. This kind of JS file isn't a module (it has no exports/imports). It is simply a script that gets invoked in the global namespace. What happens when we include the <script> tag for the jQuery CDN? We get a new $ property on our global/window scope representing jQuery. No modules, no imports, no encapsulation. It's "exported" for usage throughout our app by virtue of being on the global scope that everything has access to. The allowUmdGlobalAccess flag allows you to use variables that exist on the global scope without explicitly importing them. "An example use case for this flag would be a web project where you know the particular library (like jQuery or Lodash) will always be available at runtime, but you can't access it with an import." baseUrl: Params <path> TL;DR Set the base path from which to import files across your project using non-relative paths, as opposed to every import being relative to the file it's used in. When To Use: When you want to shorten your import paths and not have to calculate levels of directories in projects with deep directory trees. Instead of needing to calculate the number of relative levels you need to traverse up every time you want to import some commonly used module: '../utils' from some component, './utils' from a container, '../../../../../../utils' from some super nested component, you can set a baseUrl equal to the root of your source (ie: app/src ) and then import everything relative to that ( import 'utils/' ). Important: for this to work, you must use non-relative paths (any path starting with ./ is considered a relative path and thus will not resolve relative to the baseUrl ). moduleResolution: Params (node | classic) TL;DR Which resolution strategy to use when looking for the location of imported files. When To Use: When you have a reason to deviate from what's now the standard behavior of using the node resolution system. Safe to leave as default. The difference between these two module resolution systems (ie: different sequences of directories to walk up when searching for a module's location) is heavily documented here. A super effective tool for investigating this behavior yourself is the --traceResolution option when running tsc , as it will show you an exact trace of the directories/files that TypeScript checks when resolving the location of an imported module. paths: Params <map of prefixes to paths> TL;DR A map of often-accessed directories and corresponding aliases to simplify import paths. When To Use: When you have frequently accessed sub-directories under baseUrl (eg: utils) that you want to create an alias for to shorten import paths. This is closely related to baseUrl .
If you have a bunch of frequently used services in a directory structure such as app/src/utils/tools/services/ you might find yourself writing imports like this: import {fetchUsers} from 'utils/tools/services/users' , import {fetchItems} from 'utils/tools/services/items' , import {sanitizeRequest} from 'utils/tools/services/utils' . Paths lets you set custom prefixes so that you can tell TypeScript where to look in order to resolve imports matching your prefix. // sample paths setting in a tsconfig.json paths : { "services/*" : ["utils/tools/services/*"] } allowing you to rewrite the above imports as import {fetchUsers} from 'services/users' , import {fetchItems} from 'services/items' , import {sanitizeRequest} from 'services/utils' . preserveSymlinks: Params (true | false) TL;DR Replicate the behavior of NodeJS when dealing with symlinks. When To Use: When you want to preserve symlinks. This is best documented in NodeJS's documentation, which even the official TS docs refer to. See more explicit examples there. rootDirs: Params list of <paths> TL;DR Specifies a list of folders ("roots") whose contents are expected to be merged at runtime so that TypeScript can allow relative imports across these "virtual" directories as if they were merged together in a single directory. When To Use: When you expect files that at compile time are in separate directories to end up in the same directory at run-time, and you need to tell the TypeScript compiler about this so it can allow you to import across these directory boundaries. This option is best explained with a straightforward example. Let's say we have a src/utilities/math/equations directory that contains some custom math equations that we are actively developing and iterating on. The equations in src/utilities/math/equations happen to rely on some generated equations that live in src/generated/math/equations . We don't want the generated code getting dumped in with our custom code, so we keep them in separate, distinct directories. src/utilities/math/equations/myCustomEquation.ts src/generated/math/equations/genEq.ts Now, let's say that as part of our build process (Webpack/Babel/Rollup/etc), we plan on actually merging all files under both /equations folders into a single directory called /allEquations so that during runtime they all live in the same directory. This allows imports to be localized ( ./ ), and rootDirs allows you to specify this relationship. Now myCustomEquation.ts can import ./genEq.ts as if the two files lived in the same directory. More examples of how this can be used to implement efficient and robust internationalization can be found here. typeRoots: Params list of <paths> TL;DR Sets the exact directories from which type declarations will be imported. When To Use: When you want to be explicit about where tsc will pull type declarations from. "If typeRoots is specified, only packages under typeRoots will be included." This is different than the default behavior of including all types within node_modules that are under @types/. You might set this if you have .d.ts files in other parts of your project that you wish to import ( node_modules/custom_lib/types/ , src/generated_types , etc). types: Params list of <package names> TL;DR Sets the exact set of types that will be used. When To Use: When you want to be explicit about the types that your project includes.
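To tie several of these options together, here is a sketch of how a tsconfig.json combining them might look; all directory names are illustrative, taken from the examples above rather than from any real project:

{
  "compilerOptions": {
    // Non-relative imports resolve from here
    "baseUrl": "./src",
    // Alias on top of baseUrl: 'services/users' -> 'src/utils/tools/services/users'
    "paths": {
      "services/*": ["utils/tools/services/*"]
    },
    // Treat these directories as merged at runtime
    "rootDirs": ["./src/utilities/math/equations", "./src/generated/math/equations"],
    // Only pull type declarations from these folders
    "typeRoots": ["./node_modules/@types", "./src/generated_types"]
  }
}

Note that tsconfig.json tolerates comments, so the annotations above are valid as-is.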
https://medium.com/extra-credit-by-guild/tsconfig-json-demystified-part-iii-3ebc73ad4850
['Alex Tzinov']
2020-06-05 16:52:27.812000+00:00
['JavaScript', 'Web Development', 'Typescript', 'Front End Development', 'Nodejs']
When will I be okay? What goes through my head when I'm…
It shouldn’t hurt this much Just to be alive I shouldn’t cry every day I shouldn’t always hurt inside It doesn’t make sense Why am I always in pain? Why do I always hurt Why am I fighting my own brain? Why do I have to live Below a mental state That is acceptable for life Why am I so irate? Why am I so absent From my life and those near me? Why am I the supporting character In my own story? When will the time come For me to shine and thrive? When will I have the strength To do more than just survive? How long before I leave This poisoned state of mind? How many times do I have to lie And say “no really, I am fine.” How much longer do I stay and wait Before I’m happy again How much longer do I hate myself Will someone please tell me when?
https://medium.com/blueinsight/mental-illness-ba9f8ff4c569
['Emily Lane']
2020-12-26 18:43:20.609000+00:00
['Loneliness', 'Mental Health', 'Blue Insights', 'Sadness', 'Poetry']
The Art of Feature Engineering
One of the easiest ways to experiment with feature engineering is by using PolynomialFeatures, available through Sci-Kit's preprocessing module. There are two important parameters to be aware of with PolynomialFeatures. degree = you have the option of setting to which nth degree of features you want to engineer. For instance, setting degree = 2 means that PolynomialFeatures will multiply each feature by itself and by every other feature. I personally find that interaction terms to the 2nd degree are sufficient when running linear regression, but you can certainly explore by putting it in a pipeline so you can GridSearch to find the optimal degree hyperparameter value. include_bias = You want to set this to False when using regular LinearRegression. Setting this to True essentially includes the y-intercept column, which you do not want to do when running a regular LinearRegression out of sci-kit's linear_model module. One of the downsides of using PolynomialFeatures is the fast expansion of columns if you have a lot of features to begin with. One of the measures that you can take to reduce the number of features is to measure the R2 score against the target variable. This way, you can quickly eliminate features that do not have strong predictive power. Another method that I like to use is to run the OLS summary using the statsmodels.api module, which is another way you can run OLS in Python. With OLS, unlike sklearn's LinearRegression, you just have to remember to manually add in the y-intercept. After you fit the data, you can run the summary() method, which produces the summary below. The P>|t| column provides the p-value of each feature, which can be used to determine its statistical significance. So if you have features with p-value > 0.05, you can eliminate those from your feature selection, and those with p-value < 0.05 you can keep as part of the features for your model. Feature engineering is an exciting way to enhance your model performance, and it definitely requires some good discernment and experience to become intuitively better at. In many ways, it truly is an art. It requires some level of creativity, a willingness to experiment, and exploring features from many different angles. As we all strive towards becoming better data scientists, I hope we all continue to develop and hone our skills in feature engineering.
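Putting these pieces together, a minimal sketch of the workflow might look like this; the toy data below is purely illustrative:

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.preprocessing import PolynomialFeatures

# Toy data standing in for real features and target
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(100, 3)), columns=["a", "b", "c"])
y = 2 * X["a"] + X["a"] * X["b"] + rng.normal(scale=0.1, size=100)

# Engineer 2nd-degree interaction terms; include_bias=False because
# the intercept is added separately below
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = pd.DataFrame(
    poly.fit_transform(X),
    columns=poly.get_feature_names_out(X.columns),
)

# statsmodels OLS: remember to add the y-intercept manually
ols = sm.OLS(y, sm.add_constant(X_poly)).fit()
print(ols.summary())  # scan the P>|t| column to prune weak features

From here, features whose p-values exceed 0.05 are candidates for elimination before fitting the final model.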
https://youngathpark1.medium.com/the-art-of-feature-engineering-5a09bd475198
['Young Park']
2020-12-02 03:48:06.096000+00:00
['Data Science', 'Polynomialfeatures', 'Python', 'Scikit Learn', 'Statsmodels']
Apple M1 foreshadows Rise of RISC-V
Likewise with these interrupts we can send complex machine learning tasks to the M1 Neural Engine to, say, identify a face on the webcam. Simultaneously the rest of the computer is responsive, because the Neural Engine is chewing through the image data in parallel to everything else the CPU is doing. RISC-V based board from SiFive capable of running Linux The Rise of RISC-V Back in 2010 at UC Berkeley, the Parallel Computing Laboratory saw the development towards heavier use of coprocessors. They saw how the end of Moore's Law meant that you could no longer easily squeeze more performance out of general purpose CPU cores. You needed specialized hardware: coprocessors. Let us reflect momentarily on why that is. We know that the clock frequency cannot easily be increased. We are stuck at close to 3–5 GHz. Go higher and Watt consumption and heat generation go through the roof. However we are able to add a lot more transistors. We simply cannot make the transistors work faster. Thus we need to do more work in parallel. One way to do that is by adding lots of general purpose cores. We could add lots of decoders and do Out-of-Order Execution (OoOE) as I have discussed before: Why Is Apple's M1 Chip So Fast? Transistor Budget: CPU Cores or Coprocessors? You can keep playing that game and eventually you have 128 general cores like the Ampere Altra Max ARM processor. But is that really the best use of our silicon? For servers in the cloud that is great. One can probably keep all those 128 cores busy with various client requests. However a desktop system may not be able to effectively use more than 8 cores on common desktop workloads. Thus if you go to say 32 cores, you are wasting silicon on lots of cores which will sit idle most of the time. Instead of spending all that silicon on more CPU cores, perhaps we can add more coprocessors instead? Think about it this way: You got a transistor budget. In the early days, maybe you had a budget of 20 000 transistors and you figured you could make a CPU with 15 000 transistors. That is close to reality in the early 80s. Now this CPU could do maybe 100 different tasks. Say making a specialized coprocessor for one of these tasks costs you 1000 transistors. If you made a coprocessor for every task, you would get to 100 000 transistors. That would blow your budget. Transistor Abundance Changes the Strategy Thus in early designs one needed to focus on general purpose computing. But today, we can stuff chips with so many transistors, we hardly know what to do with them. Thus designing coprocessors has become a big thing. A lot of research goes into making all sorts of new coprocessors. However these tend to contain pretty dumb accelerators which need to be babied. Unlike a CPU they cannot read instructions which tell them all the steps to do. They don't generally know how to access memory and organize anything. Thus the common solution to this is to have a simple CPU as a sort of controller. So the whole coprocessor is some specialized accelerator circuit controlled by a simple CPU, which configures the accelerator to do its job. Usually this is highly specialized. For instance, something like a Neural Engine or Tensor Processing Unit deals with very large registers that can hold matrices (rows and columns of numbers). RISC-V Was Tailor-Made to Control Accelerators This is exactly what RISC-V got designed for. It has a bare minimum instruction-set of about 40–50 instructions which lets it do all the typical CPU stuff.
It may sound like a lot, but keep in mind that an x86 CPU has over 1500 instructions. Instead of having a large fixed instruction-set, RISC-V is designed around the idea of extensions. Every coprocessor will be different. It will thus contain a RISC-V processor to manage things, which implements the core instruction-set as well as an extension instruction-set tailor-made for what that coprocessor needs to do. Okay, now maybe you start to see the contours of what I am getting at. Apple's M1 is really going to push the industry as a whole towards this coprocessor-dominated future. And to make these coprocessors, RISC-V will be an important part of the puzzle. But why? Can't everybody making a coprocessor just invent their own instruction-set? After all, that is what I think Apple has done. Or possibly they use ARM. I have no idea. If somebody knows, please drop me a line. What is the Benefit of Sticking with RISC-V for Coprocessors? Making chips has become a complicated and costly affair. Building up tools to verify your chip, running test programs, diagnostics and a host of other things requires a lot of effort. This is part of the value of going with ARM today. They have a large ecosystem of tools to help verify your design and test it. Going for custom proprietary instruction-sets is thus not a good idea. However with RISC-V there is a standard which multiple companies can make tools for. Suddenly there is an ecosystem and multiple companies can share the burden. But why not just use ARM which is already there? You see, ARM is made as a general purpose CPU. It has a large fixed instruction-set. After pressure from customers and RISC-V competition, ARM relented and in 2019 opened its instruction-set for extensions. Still, the problem is that it wasn't made for this from the onset. The whole ARM toolchain is going to assume you have the whole large ARM instruction-set implemented. That is fine for the main CPU of a Mac or an iPhone. But for a coprocessor you don't want or need this large instruction-set. You want an ecosystem of tools that have been built around the idea of a minimal fixed base instruction-set with extensions. Nvidia using RISC-V Based Controllers Why is that such a benefit? Nvidia's use of RISC-V offers some insight. On their big GPUs they need some kind of general purpose CPU to be used as a controller. However the amount of silicon they can set aside for this, and the amount of heat it is allowed to produce, is minimal. Keep in mind that lots of things are competing for space. The small and simple instruction-set of RISC-V makes it possible to implement RISC-V cores in much less silicon than ARM. Because RISC-V has such a small and simple instruction-set, it beats all the competition, including ARM. Nvidia found they could make smaller chips by going for RISC-V than with anybody else. They also reduced watt usage to a minimum. Thus with the extension mechanism you can limit yourself to adding only the instructions crucial for the job you need done; e.g., a controller for a GPU likely needs different extensions than a controller on an encryption coprocessor. RISC-V Machine Learning Accelerator (ET-SOC-1) Esperanto Technologies is another company that found value in RISC-V. They are making an SoC, called ET-SOC-1, which is slightly larger than the M1 SoC. It has 23.8 billion transistors compared to the 16 billion on the M1. The Esperanto ET-SoC-1 die plot. Image: Art Swift. Instead of four general purpose Firestorm cores, it has four RISC-V cores called ET-Maxion.
These are suited for doing general purpose stuff like running a Linux operating system. But in addition to this, it has over 1000 specialized coprocessors called ET-Minion. These are RISC-V based coprocessors which implement the RISC-V vector extension. What is the significance of that? These instructions are particularly well suited for processing large vectors and matrices, which modern machine learning is all about. You may be looking at the number of cores in disbelief. How can the ET-SOC-1 have so many more cores than the M1? It is because a Firestorm core is meant to deal with typical desktop workloads which cannot easily be parallelized. Hence lots of tricks have to be pulled to attempt to run code in parallel which is not trivial to parallelize. That eats up a lot of silicon. ET-Minion cores, in contrast, deal with problems which are trivial to parallelize, and these cores can thus be really simple, cutting down on the amount of silicon needed. The key takeaway from ET-SOC-1 is that producers of highly specialized coprocessors are seeing value in building coprocessors based on RISC-V. Both ET-Maxion and ET-Minion cores will be licensable from Esperanto Technologies. That means in theory Apple (or anybody else) could license ET-Minion cores and put a ton of them on their M1, to get superior machine learning performance. ARM Will Be The New x86 Ironically, we may see a future where Macs and PCs are powered by ARM processors, but where all the custom hardware around them, all their coprocessors, will be dominated by RISC-V. As coprocessors get more popular, more silicon in your System-on-a-Chip (SoC) may be running RISC-V than ARM. Read more: RISC-V: Did Apple Make the Wrong Choice? When I wrote the story above, I had not actually fully grasped what RISC-V was all about. I thought the future would be about ARM or RISC-V. Instead it will likely be ARM and RISC-V. ARM Commanding an Army of RISC-V Coprocessors General purpose ARM processors will be at the center, with an army of RISC-V powered coprocessors accelerating every possible task from graphics, encryption, video encoding, machine learning and signal processing to processing network packets. Prof. David Patterson and his team at UC Berkeley saw this future coming, and that is why RISC-V is so well tailored to meet this new world. We are seeing such a massive uptake and buzz around RISC-V in all sorts of specialized hardware and microcontrollers that I think a lot of the areas dominated by ARM today will go RISC-V. Raspberry Pi 4 Microcontroller, currently using an ARM processor. Imagine something like Raspberry Pi. Now it runs ARM. But future RISC-V boards could offer a host of variants tailored for different needs. There could be machine learning microcontrollers. Another could be image-processing oriented. A third could be for encryption. Basically you could pick your own little microcontroller with its own little flavor. You may be able to run Linux on it and do all the same tasks, except the performance profile will be different. RISC-V microcontrollers with special machine learning instructions will train neural networks faster than RISC-V microcontrollers with instructions for video encoding. Nvidia has already ventured down that path with their Jetson Nano, shown below. It is a Raspberry Pi sized microcontroller with specialized hardware for machine learning, so you can do object detection, speech recognition and other machine learning tasks. NVIDIA Jetson Nano Developer Kit. RISC-V as Main CPU?
Many ask: Why not replace ARM entirely with RISC-V? Others claim that this would never work because RISC-V has a "puny and simple" instruction-set which cannot deliver the kind of high performance that ARM and x86 offer. Yes, you could use RISC-V as the main processor. No, performance is not stopping us from doing it. Just like with ARM, we just need somebody to make a high performance RISC-V chip. In fact, it may already have been done: New RISC-V CPU claims recordbreaking performance per watt. It has been a common misconception that complex instructions give higher performance. RISC workstations disproved that back in the 90s as they destroyed x86 computers in performance benchmarks. How did Intel beat the RISC workstations in the 90s: Is It Game Over for the x86 ISA and Intel? In fact RISC-V has a lot of clever tricks up its sleeve to get high performance: The Genius of RISC-V Microprocessors. In short, there is no reason why your main CPU couldn't be a RISC-V processor, but this is also a question of momentum. MacOS and Windows already run on ARM. At least in the short term, it seems questionable that either Microsoft or Apple will spend the effort on doing yet another hardware transition. Share Your Thoughts Let me know what you think. There is a lot going on here which is hard to guess. We see, e.g., that there are now claims of RISC-V CPUs which really beat ARM on watts and performance. This also makes you wonder if there is indeed a chance that RISC-V becomes the central CPU of computers. I must admit it has not been obvious to me why RISC-V would outperform ARM. By their own admission, RISC-V is a fairly conservative design. They don't use many instructions which have not already been used in some other older design. However there seems to be a major gain from paring everything down to a minimum. It makes it possible to make exceptionally small and simple implementations of RISC-V CPUs. This again makes it possible to reduce Watt usage and increase clock frequency. Hence the last word on RISC-V and ARM is not yet said.
https://erik-engheim.medium.com/apple-m1-foreshadows-risc-v-dd63a62b2562
['Erik Engheim']
2020-12-25 15:41:13.955000+00:00
['Arm', 'Apple', 'Risc V', 'Trends', 'Apple M1 Chip']
Does Artificial Intelligence Mean Data Visualization is Dead?
We are user experience designers for IBM Cognos Analytics, a data analytics and reporting platform with robust data visualization capabilities. We are all about leveraging human perception and cognition to help people answer questions about their data. If you called us data vis fanatics, there would be some truth to that. Recently, there has been a tremendous push to add artificial intelligence (AI) features such as predictive modeling, chart recommenders, natural language generation and conversational assistants to business intelligence (BI) products such as ours. These features provide powerful and exciting new ways to analyze ever larger datasets. The word disruptive comes to mind. As un-official carriers of the data visualization torch in our organization, Nicolas Kruchten's article, Data Visualization for Artificial Intelligence, and Vice Versa (Medium, 2018), made us pause for thought. "It might be tempting to think that the relationship between AI and data visualization is that to the extent that AI development succeeds, data visualization will become irrelevant. After all, will we need a speedometer to visualize how fast a car is going when it's driving itself?" — Nicolas Kruchten If Kruchten is right to suggest that self-driving cars may no longer need speedometers, what does this mean for business intelligence tools that generate dashboards and reports? For example, if a computer can automate day to day operational business decisions, will we need business dashboards? If it can identify patterns, make accurate predictions and neatly summarize the results, what does this mean for data visualization more broadly? We set out to find answers to these questions by interviewing fifteen IBMers working at the intersection of AI and data visualization. This article summarizes their responses according to the following themes: Data visualization: the impact of AI on data visualization and what gets visualized. Users and user roles: the implications of AI for end users, domain experts and data visualizers. Practical challenges to adoption of AI features in BI tools: human and technological challenges that suggest that visualizations and dashboards will be around for a long time. Source: IBM Cognos Analytics Design Does AI Transform Data Visualization? Some participants felt "No, not really", others said "Yes, absolutely". As visualization designers, we thought a chart might help. Source: IBM Cognos Analytics Design On a fundamental level, human perception and pre-attentive principles are not likely to change in the foreseeable future. Chart primitives — "the workhorses of visualization" as one participant put it — are not likely to go away, especially in a business intelligence context. "The charts will always exist. AI just changes the inputs and outputs. The difference is under the hood. The AI-generated data is great, but the charts are still pretty mundane." — Developer/Architect For some participants, visualizations are simply an output communication channel, independent of whether the underlying data and analysis were AI-generated or not. Others felt differently. Two participants suggested that visualizations could also be used as inputs to AI models.
After all, "AI is excellent at handling images, so why couldn't data visualizations be inputs to machine learning algorithms?" In response to the question, "Does a self-driving car need a speedometer?", one participant explained that they don't need to know what speed they are traveling, but why they are going so darn slow. Is there an accident? Construction? Anything that can be done? "It might change the thing we communicate. We no longer care about speed but we probably care about something else. AI is all about aggregation. Information can make you not play in the weeds but at a higher level." — Designer The Nicolas Kruchten quote in the introduction suggested that AI could make visualization irrelevant. A number of participants felt strongly that the opposite was the case: AI would make visualizations more relevant than ever. Can data visualization make AI more trustworthy? Machine learning models are complex and subject to bias. Visualization can help make them more comprehensible and less frightening. As one participant put it: "Black boxes are scary. I need to see what you are doing so I can override it if necessary." Many participants believed that data visualizations served a critical function helping build trust in the AI system, exposing bias in training data and models, and providing context for predictive outputs. "I don't see how anybody will trust AI just on its own without visualization; without feedback? If you look around at all the recent articles, they're all about removing bias. It's about trust. How can I ensure this model isn't discriminating against women or men? The only way to overcome that is to visualize; to see it." — Developer Traditionally, maps and visualizations represent their underlying data with precision and accuracy. Predictive modeling, however, is more probabilistic and more dependent on good data quality. A number of participants believed that when visualizing AI-generated outputs, it was important to also represent probability, uncertainty and data quality in order to provide the context necessary to interpret the outputs. "There is an authority with putting dots on a certain place and not somewhere else on a paper … there is not much you can argue with. There is quite a bit of work on uncertainty in visualization and I don't think it is a done chapter yet in information visualization. I think there is a lot to do." — Researcher Each and every opinion is correct in its own way. Collectively, they represent a broad range of positions, even within one organization. Is AI transforming data visualization? The general consensus is, "It depends". Will it make data visualization obsolete? The majority opinion is, "No", but for a variety of reasons. Source: IBM Cognos Analytics Design How Might AI Transform User Roles? We asked participants if the addition of AI features in BI tools is comparable to adding AI to a car. In other words, if an AI-driven car transforms drivers into passengers, does AI change business analytics from active inquiry to something much more passive? Responses were grouped according to three different types of users: end users, domain experts, and visualization experts. Source: IBM Cognos Analytics Design With regard to end users, many participants said that automation helps humans complete their tasks faster, more efficiently, and potentially more accurately, thereby freeing them from mundane tasks and letting them focus their energy on higher-level decisions.
According to this group, as AI advances to the point where it can be trusted and can successfully do what it is meant to do, AI agents will become the main players driving the data analysis process, and humans will become secondary. Other participants had the opposite point of view, believing that humans will continue to drive the analytic process and decision making, especially in a dynamic business domain. AI is just a more powerful engine in the car, helping people make better and more informed decisions. In fact, a number of participants suggested that AI would expand—not diminish—the role of the analyst because it provides access to new data, methods, and capabilities that were previously not available. "Users will be going after things that we didn't used to do before. For example, we can use sentiment analysis to analyze customer sentiment rather than only looking at basic sales data." — System architect AI could transform the role of the analytics user for a number of reasons that center around domain expertise and evolving skill sets. There is a knowledge gap between the people who build models and those who understand the data and context in which they will be used. This gap will increasingly need to be filled by people whose skills bridge both domains. And will business analysts still be in the driver's seat? It's complicated. Some said if the system is trustworthy enough, there is no need to know what is going on under the hood. Others felt that the next generation would be AI-savvy enough to expect a degree of visibility into the model for the sake of trust and oversight. At this juncture, people's comfort level with AI is changing rapidly and we should avoid jumping to easy conclusions on the question of transparency. The third way AI transforms BI tool users has to do with data visualization experts. This speaks most directly to the question that prompted this research in the first place — does artificial intelligence mean data visualization is dead? Thanks to recent advances in visualization recommendation systems and conversational assistants, it is entirely possible that there will be reduced demand for dashboard designers. This does not mean, however, there will be reduced demand for data visualization expertise. Several participants emphasized that if mundane tasks can be handled automatically, data visualization experts will have more bandwidth to focus on storytelling and designing bespoke visualization solutions. Data visualization expertise will become more important, not less, especially when it comes to visualizing complex models and other phenomena. Source: IBM Cognos Analytics Design Challenges of Implementing AI Features in BI Tools Looking at the impact of AI in other domains, one might think the transformation of BI tools will be both imminent and sweeping. To test that assumption, we asked participants to discuss some real-world challenges they experienced while incorporating AI features into BI projects. We heard a range of responses that roll up into two main groups — human-centered and technological problems. Source: IBM Cognos Analytics Design On the human side of the tree, one of the biggest issues is the question of context. AI models only operate with the information they have been given. They have no contextual understanding of what the training or input data represents or how the model will be used. Another problem is the question of expectation. How do you manage users' expectations for what a model can be expected to do?
Any misalignment between what a model is trained for and what the user intends to do with it will lead to a negative experience, or worse in high-stakes situations.

Clarifying the role of the user, or, as many participants called it, the “human in the loop,” presents another set of challenges. At what point is the user given an opportunity to intervene and make a decision? There are two schools of thought here. Some people feel the perfect model shouldn’t require additional user input. Others say no model can ever be perfect.

“There will be edge cases the model wasn’t trained for. A good self-driving car should ask if you want to hit the grandmother or the children.” — Researcher

Nearly everyone mentioned the problem of “model explainability”. Models that are not explainable cannot be questioned. Requirements for explainability differ across three levels of user:

1) the AI researcher designing a new machine learning algorithm;
2) the data scientist trying to build and evaluate a model;
3) the end user relying on an AI feature to support their decision making.

“When you disagree with the output you want to see why the model reached the conclusion it did, and most importantly override it.” — UX Designer

People need to trust models before they will use them. Trust depends on a combination of explainability, confidence, accuracy, and reliability. One participant pointed out that there is a “trust spectrum”: requirements for trust change with context. What is the user’s prior experience with the model? Are they using it in a high-risk or a low-risk environment?

On the technological side of the tree, the biggest and most difficult challenges had to do with data. AI models require a lot of training data, yet in a business context historical data can be unavailable, unstructured, unreliable, or noisy. It can also be skewed and biased, over-representing some things and under-representing others. And data that is considered good today may not be good tomorrow: left to itself, without ongoing retraining, model quality tends to drift downward. Business, like science, medicine, and most other domains, does not stand still, so models also have to adapt to changing times.

“You might make a machine learning model for diagnosing cancer, but everything we know about cancer changes every five years.” — System architect

It is clear that AI has come a long way in recent years and continues to advance rapidly. But human factors and real-world challenges may prove to be the bottleneck for full adoption. Business users will continue to rely on conventional visualizations and dashboards for some time to come.
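To make the retraining point above concrete, here is a minimal sketch of one common way to flag distribution drift: a two-sample Kolmogorov–Smirnov test comparing a feature’s training-time distribution against live data. The synthetic data and the 0.01 cutoff are illustrative assumptions, not a recommended standard.

# Minimal drift-check sketch: has a feature's live distribution moved
# away from what the model saw at training time?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # snapshot at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)   # the world has since moved

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # illustrative cutoff, tune per use case
    print(f"Distribution shift detected (KS={stat:.3f}); consider retraining.")
else:
    print("No significant drift detected.")

In practice, teams run checks like this per feature on a schedule and treat a flagged shift as a prompt to investigate and possibly retrain, not as an automatic verdict.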
https://medium.com/nightingale/does-artificial-intelligence-mean-data-visualization-is-dead-ade70b7638cd
['Data Visualization Book Club']
2020-01-13 19:56:22.808000+00:00
['Machine Learning', 'Data Visualization', 'AI', 'IBM', 'Data Science']
50-year-old Cybernetics questions for an ethical future of AI
Norbert Wiener, one of the pioneers of cybernetics, envisioned AI ethics problems way ahead of us.

“43081” by Tekniska museet is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/

Ethics has definitely become a trend in the field of Artificial Intelligence. It seems clear that AI faces a lot of challenges if we want it to have a positive impact on our society. Nevertheless, this is not the first time researchers have warned us about the risks of this kind of technology. Norbert Wiener, a cybernetics pioneer, wrote this somewhat prophetic passage in his book God & Golem, Inc. back in 1964:

It is relatively easy to promote good and to fight evil when good and evil are arranged against each other in two clear lines, and when those on the other side are our unquestioned enemies and those on our side our trusted allies. What, however, if we must ask, each time and in every situation, where is the friend and where is the enemy? What, moreover, when we have to put the decision in the hands of an inexorable magic or an inexorable machine of which we must ask the right questions in advance, without fully understanding the operations of the process by which they will be answered?

Quote taken from God & Golem, Inc. — Norbert Wiener, 1964.

It is mesmerizing to think that this was written more than 50 years ago, and that it reflects so well some of the main challenges we face in ensuring a truly ethical future for AI. Let’s break down Wiener’s words.
https://towardsdatascience.com/50-year-old-cybernetics-questions-for-an-ethical-future-of-ai-8287beb96257
['David Pereira']
2020-12-23 17:15:33.558000+00:00
['Cybernetics', 'Artificial Intelligence', 'Ai Ethics']
Seven Years Away from Mormonism and Why I’m Returning
Note: I originally shared this as a PDF document on April 6, 2019.

To Whom It May Concern

My name is Joe Tippetts. Seven years ago, I left the Mormon Church. For two years, I was “inactive”. In January of 2014, I formally resigned. On April 13, 2019, I will be baptized again. This is the story of why I left and why I’m returning. I’m addressing two audiences.

To my fellow inactive and ex-Mormons. Many of us have been reading the same books, listening to the same podcasts, going to the same events, working through similar stresses to our closest relationships, learning far more church history than we ever knew as members, and redefining our lives based on our consciences rather than direction from the church. You’ve been my friends for the past few years.

Trigger Warning

I remember hating the perception that important people in my life would never think I was good unless/until I returned to the church. I never expected to return to the church. My time outside the church has changed many of the attitudes I had as a Mormon growing up in Utah. Ideas such as: you have to be Mormon to be good. Or even if you’re good, you’re better as a Mormon. Or, if you leave the church, it was because you wanted to sin or were lazy or lacked character. My enthusiasm about this unexpected change in my life may come across as a judgment on your life. If you’re feeling pain in relationships with your family or otherwise working through the stages of grief common for people who leave Mormonism, you may not want to read this. If my telling you not to read it makes you really want to read it to see how much of an idiot I am, and you feel angry at what I say and decide to send me a personal message telling me that I’ve been sucked back into the cult, I’ll understand. I’ll listen. And I won’t be offended.

To my (used-to-be and soon-to-be) fellow church members. If you knew a younger me, you were surprised to see I had left the church. I was just getting really good at being a happy, well-adjusted non-Mormon. Hopefully, this story helps you understand why I left and why I am getting re-baptized.

Thank you to the people who reviewed previous versions of this document and provided helpful feedback. Trying to address active Mormons together with ex-Mormons is tricky, but I’m trying to do it with a spirit of sincere respect for everyone’s journey.

God Spoke to Me

I’m returning to the Mormon Church because I believe God spoke to me and told me to. That’s the reason. Here’s the story.

After work on March 5, 2019, I opened Facebook and recognized the name of a poster in a private group. A guy I knew growing up. It piqued my interest. The group is mostly inhabited by people who are questioning or have lost their faith in the Mormon Church. Many, like me, have voluntarily resigned. A few, like this poster, were excommunicated.

The post was long. I was used to seeing these. Often, people going through a “crisis of faith” or a “faith transition” will write long narratives about their difficult personal experience. They’ll describe the frustration of trying to talk with believing family members or leaders who make them feel like they’ve sinned for asking legitimate questions or who have no awareness of the issues they’re thinking about. They feel scared and isolated. After they share these very personal experiences, dozens of group members validate them with likes and encouraging comments.

Scanning through the words, it became apparent that this post was different. This person was preparing to return to the church by getting re-baptized. Full stop.
I don’t remember the rest of the words or the reasons he gave for changing his mind after 10 years away from Mormonism. What I’m calling “God” instantly filled my whole consciousness. My emotions were heightened with a sensation of bliss. My thoughts were crystal clear. Two clear messages pressed on my mind:

1. I am God. I am real. And I love you.
2. It’s time to go back to church.

When God Didn’t Speak to Me

When I say I felt God and understood the experience as God, that wasn’t normal for me. I wasn’t seeking him. I didn’t believe in him. The usual method for finding God, as taught in scriptures, is to seek, knock, or ask. Your heart needs to be open. You need faith, nothing wavering, right? And when nothing happens, you analyze yourself to death because obviously you did it with at least something wavering. Or else it would have worked.

Before I left the church, I had stopped feeling God. I was worthy and checking all the boxes, but the more I wanted to experience God, the more impossible it seemed. When you’re a missionary [in South America], you don’t tell people it might take God five or ten years to answer their prayers. In Chile, we held up little pictures of Christ being baptized and asked people on their doorsteps if they wanted us to help them get baptized. Within an hour, we were trying to nail down a date. “Would you like to do it this week or next?” When we hear the sacrament prayer, we aren’t told that the Spirit might be with us but might also go on vacation for a couple of years when we need God most. The promise is that we may always have his Spirit to be with us.

It made no sense to me why God had disappeared. I finally stopped looking, first for a few days at a time, then weeks, then months. I stopped feeling like I needed to obey the rules of a god I no longer believed in. I had to re-evaluate the experiences I had labeled as God. Psychology and sociology offered reasonable answers. I was primed. It was reinforced socially. I was done with imaginary God. I filed him away next to Superman.

Scriptural stories talk about another way people find God: he finds them. Not because they’re worthy or have desire. He stopped Paul on the road to Damascus. He stopped the sons of Mosiah and Alma the Younger. This is what happened to me. I believe it was God, and I believe his message to me was clear.

That Night

On the evening after this experience with God, I was in shock. I remembered the theme of many prayers from years before. “God, I’m scared that I’m going to leave you and never come back. My kids won’t go on missions. They won’t care about the temple. Please let me know you’re there. If I can’t believe you’re there, I can’t go on living this way. It’s too stressful to believe in you but have you absent in my life. I can’t fake my belief in you.”

Despite my pleading, I would get no answer. No peace or assurance. Just the feeling of being an idiot for thinking God was real. Or the self-loathing that led to suicidal fantasies, assuming I must be doing something very wrong for God to stay away from me, though I had no idea what it was or how to correct it.

All of the past resentment I had previously felt about God’s absence simply wasn’t there. The old fear I had felt about leading my family in a bad direction wasn’t there. I just knew that I had felt God again, and I had a sense that there was an important reason for my experience. I went to bed, wondering how I would feel the next day. Would this all seem like a bizarre emotional blip?
The Next Day

I got up the next morning and it wasn’t as intense, but it felt like a warm blanket was wrapped around me, as if to say, “Joe, I’m gonna walk with you for a little while until you feel sure this isn’t just some kind of mind game. I’m still here. I’m real. I love you. Now do what you need to do.” (This sensation stayed with me for about two weeks.)

I told my wife. She was rather surprised. The wife who stuck with me when I left the church despite some very difficult years. The wife who loved me no matter what.

I told my brother. My fellow black sheep who had also left the church. My safe place when everyone around us felt like judges. The maker of a great Moscow Mule. My Sunday fishing partner. So much more than a brother.

They both expressed support for whatever I felt was right.

Day Two

When I said there were clear words pressing themselves into my mind, they were more like ideas from which the words formed. But the ideas were bigger than just the words. For example, how did I interpret God telling me to go back to the Mormon Church? Did this mean that God was telling me that this was the only true church? That’s not what I felt. Did God mean that I should go back to any church? That’s not what I felt either. It was very specific, and the meaning I perceived was that this was the place for me to go.

I was itching to reach out to my bishop and start the process. Enough time had passed for me to feel confident. I didn’t want to ignore what I felt. I sent the text message to my bishop. I think I caught him off guard because his response was a little underwhelming. :)

Once that was done, I was curious to see if listening to one of my old critical podcasts would shake me out of this reverie. If I was going to get cold feet, better sooner than later. I plugged in my earbuds and fired up a classic Mormon Expression episode where John Larsen and his panel demonstrated how impossible it was for Nephi to build a trans-oceanic vessel. He described all the industries that would be needed. The thousands of sheep. The acres of forest. The dry dock. A hundred years’ worth of man-hours. Etc.

As I listened, my reaction was different than in the past. It didn’t evoke a sense of disgust mixed with confirmation of my decision to leave the church. The long list of “facts” that appear to disprove God and religion felt like only part of the story. Part of the evidence one can use to determine truth. How do I account for treasured memories that are part of my life because of my involvement in the church? The people I loved and who loved me. The opportunities for growth. The instant family each time we moved. The weight of those factors has grown. But they’re all still VERY secondary to the personal feeling that God is real, loves me, and pointed me back to this church.

The First Meeting with My Bishop

The first time I met with Bishop Smith, about 10 days after the experience, I felt calm and excited. I’ve lived in my neighborhood for 14 years and we are old friends. But something unexpected happened. It quickly became apparent that my “knowledge” was minimal. Remember, I felt confident about two things:

· God was real and loved me.
· I should return to the Mormon Church.

The first baptism interview question is, “Do you believe that God is our Eternal Father? Do you believe that Jesus Christ is the Son of God and the Savior and Redeemer of the world?” I believed in God, but the question immediately jumps to describing God in specific ways. Was God my Father? I didn’t know.
And nothing in my experience felt like I was encountering two beings. What could I honestly say about Jesus? After the meeting, I felt a little nervous. I didn’t want to feel like I had to say I believed something that I didn’t believe.

Subsequent Meetings with My Bishop

The second meeting felt similar to the first. I was rather verbose as I tried to translate my feelings into words. I had prayed about the questions but didn’t feel any kind of certainty. In the past, I had heard people say that “faith is a choice.” At the time, it bothered me. It felt like they were saying, “willful ignorance is a valid choice.” If faith was a choice, I could make up anything and choose to believe it. But as our conversations progressed, I was seeing this idea of faith as a choice in a new light.

The second part of the revelation I received was that I should go back to the Mormon Church. That was clear. I decided that if God was directing me to this church, it would be fine, even correct, for me to choose to believe some basic things. To exercise faith in them. My conscious choice to have faith didn’t feel haphazard. It was linked to what I viewed as a direct experience with God. By the end of the third meeting, I felt confident that I could honestly answer all the baptism interview questions.

Ex-Mormon Me Chimes In

I think it’s appropriate to allow my ex-Mormon self to express a few thoughts. It’s been at the steering wheel for the past seven years, and it interprets my experiences from a much different perspective, where my own words fall short.

This former internal voice sees the central part of my experience as the most questionable. It says that subjective feelings or epiphanies are a horrible way to learn truth. An unreliable epistemology. It won’t pass a double-blind test.

Former me stands ready with a long list of problems with the church. Contradictions. More complete versions of history. Deceptions. Like the song in the Book of Mormon musical says, “turn it off, like a light switch.” When I see facts that contradict what I’m feeling, instead of acknowledging reality, I ignore it or put it on a shelf. I’m behaving like a child who believes you can’t see him because he’s covering his eyes.

In the last couple of days, social media has been on fire with strong reactions to the church reversing its 2015 policy on gay church members and their families. I’ve read the stories of families who were torn apart by this policy. I sense their valid and acute pain.

I shared previous versions of this document with some ex-Mormon family members that I care about. I wanted their opinions because I didn’t want to offend. I didn’t want to tear open old, painful wounds of feeling unaccepted by some because they are no longer in the church. One honestly asked me if I really thought God would point me to a religion that hurts people so badly. A religion that won’t apologize and that can take an “our way or the highway” approach to truth rather than being more accepting of diverse views or being willing to change as scientific advances shed new light on issues like gender and sexual identity. My experience with God didn’t resolve these issues. I feel like I’m supposed to return to the church in spite of real problems, not because I believe that problems and contradictions don’t exist.

General Conference, Then and Now

General conference used to be a wonderful time. I looked forward to it, legal pad and pen ready to learn from the speakers and from the Spirit.
In the years leading up to my departure, I grew to hate general conference. So many five-step recipes for success. Recipes that hadn’t been working in my life. I was exhausted. Disillusioned. Frustrated.

Today, I watched general conference for the first time in about seven years. I wasn’t sure how much I would be able to handle, but I turned on the morning session. I felt pulled in. I felt like most of the speakers had thought about me when they wrote their talks. They addressed questions I still have. But more important, I believe the Spirit spoke to me.

Elder Ballard, the same one whom I hadn’t appreciated in recent years, spoke to me. The same one who told girls to “put a little lipstick on” to get married. A favorite podcast of mine took this, and other phrases from recorded talks by general authorities, and turned them into a satirical song. The intent was to make these leaders look like buffoons; out-of-touch idiots. But today, as I listened, I felt the Spirit in my heart. I felt sorry for the way I had viewed him and his peers.

I don’t think these men are perfect. I’ve spent a lot of years listening intently to critics who have jumped on their words, presented them in the worst light, and positioned their entire reputation around a quote or two that bothered them. I wondered how stupid I would look if people judged me by something I said last year or 25 years ago. I don’t think prophets are always right, even when they declare that they’re speaking the will of God. An honest look at history shows this. I see these men as trying to point us to God. God is the real relationship that matters. I may or may not enjoy the personality or teaching style of a leader. But today, I was reminded that God can speak to me through them. I can disagree with them. I can hope that they will adjust certain views over time. Until then, I will sustain them. I will listen to them carefully and, with the help of God, determine which messages I should try to follow. Today, simply listening to them and watching them with the intent to learn touched me. I remember the words of the scripture, “Did not our hearts within us burn?”

Do I Know the Church is True?

Yes and no. If you had asked me 15 years ago what a Mormon should “know” is true, to be worthy, I would have been tricked by the question. I would immediately run off a list like:

God is our Father in Heaven.
Jesus and His Atonement help us overcome sin and death to return to God and become like him.
Joseph Smith restored the true gospel of Jesus Christ.
The Book of Mormon is true.
President Nelson is God’s prophet on earth today.
The Church of Jesus Christ of Latter-day Saints is “the only true and living church on the face of the earth.”

Or, stated more succinctly, I know the church is true.

I would have felt self-assured for knowing all of them were true. And I would have been somewhat wrong in my answer. At no point in any interview, for any advancement or ordinance or test of worthiness, is a person required to know that something is true. It’s true that you can feel social pressure to appear to know, as a signal of your spiritual maturity. Or you may feel strongly that you do know that something is true, and I can respect your conviction, even if I don’t agree with it.

For years, I said I knew these things were true. Then I didn’t. It makes me feel sensitive about my use of the words “I know”. Today, I’m more inclined to leave the door open to my ideas evolving instead of casting them in cement with “I know”.
Maybe saying, “I know I felt something that causes me to want to believe and act on the belief,” is more accurate for me. After I do this for a few years, repeatedly having affirming experiences, I may feel inclined to speak in more certain terms.

The “I know” of previous generations looks different from the “I know” of today. Some grandparents knew that interracial marriage was wrong. It was against nature. It was against the law. It was strongly discouraged by prophets. Today, my Caucasian self quite enjoys my marriage to a woman with Japanese heritage.

While I don’t always understand why the church and its leaders teach and behave in certain ways, I don’t believe that anything about my membership will prevent me from believing and acting in ways that I feel are right. Whether it’s in the way I vote, who I care about, or how I view people who don’t share my life choices. I know I want to follow the guidance of what I believe is God speaking to me. And I know that The Church of Jesus Christ of Latter-day Saints is where I want to pursue this.

Will God Speak to You Too?

I’ve said here that God has spoken to me in a way I could understand. Will he do the same for you? That’s a hard question. The young missionary-me would promise you with conviction that God will answer your prayers. But now, I think back to the years when I desperately searched for God and couldn’t find him. It was the main reason I stopped believing in God and my church. Today, none of my children are active Mormons. I led them out of the church. If I want happiness for anyone, it’s my kids. If they ask whether God will speak to them, what will I say?

God is Love

First, I will tell them that I believe God is Love. I believe wherever Love is, God is. Love others and you’re putting yourself in a position to be the best you. If the word “God” bothers or hurts you, call it Love and try to let it govern your life.

I will tell them about my experience, both as a younger person who felt connected to God and as a thirty-something who couldn’t find him no matter how hard I tried. Until he came to me. I heard President Nelson quoted today, saying (paraphrased) that we need to seek God with a real intent to follow. If we’re just kicking tires, chances are slim that sacred understanding will come.

If you try to find God and can’t seem to find him, instead of getting all worked up about it and feeling like God and religion are a lie, just set it aside and try to live with goodness. Maybe you’ll wake up one day and feel drawn to God. Maybe you’ll encounter the words of a prophet or an excommunicated Mormon and God will “visit” you. Maybe nothing will ever happen beyond you following your conscience and having satisfaction that you’re pursuing goodness.

Look around at the people you know. Those you admire most. Those you trust and want to be like. Follow their lead while maintaining your right to do things in a way that works for you. I’m not implying that they’ll necessarily look or behave like Mormons. Your conscience is a good guide. If you make choices that don’t make you happy, you can always change direction and choose a new path forward. Don’t be afraid to try things and fail. Be open to unexpected possibilities. Don’t be afraid to act on what feels right, even if it doesn’t make sense. And don’t hold on to anger, doubt, and cynicism when you fail a few times. Even when you fail badly. That’s what I’ll tell my kids.

Only You Can Know

I feel confident that the God of love is pointing me in a specific direction.
It feels good to honor it. Each day that passes, I see myself confronting situations where the recent me comes into conflict with who I want to be. Recent me has a lot of ready arguments that can make current me seem kind of stupid.

There is another kind of voice. I believe it’s real. Facts and logic seem to be overwhelmingly piled up against it. I have to make time to listen to it. Sometimes it’s in the form of music. It can be listening to men I now regard as prophets. It can be listening to the idea in my head to call a friend. This voice makes my heart expand. It becomes a “fact” that I feel clear direction and peace. I feel inspired. I feel like I can picture a future that I want to work hard to bring about.

I feel like Sancho Panza, the sidekick in the play Man of La Mancha. He sings about his feelings for Don Quixote, the crazy, delusion-filled old man who leads him on adventures.

I like him, I really like him.
Tear out my fingernails one by one, I like him!
I don’t have a very good reason,
Since I’ve been with him,
Cuckoo-nuts have been in season.
But there’s nothing I can do,
Chop me up for onion stew,
Still I’ll yell to the sky
Though I can’t tell you why,
That I like him!

Where will Love lead you? Only you can know. Follow it!
https://joetippetts.medium.com/seven-years-away-from-mormonism-and-why-im-returning-dee588817120
['Joe Tippetts']
2020-12-07 20:22:20.100000+00:00
['God', 'Latter Day Saints', 'Exmormon', 'Conversion', 'Mormon']