Exploring Digital Nomadism (Working Abroad #5) — The Global MBA

Working Remotely For Maximum Gain
The digital nomad. The term became popular in the 2010s with the proliferation of the internet, compact computers and the overall digitalization of the developed and (to some degree) the developing world. In a nutshell, a digital nomad is an individual who works remotely through a personal computer with an internet connection. Normally such individuals live in developing countries, where the cost of living is much lower, and work (remotely) for a developed country, where wages are much higher. Think of it as taking advantage of location arbitrage: you are taking your western wage and spending it in a developing part of the world.
Of course there are numerous ways you can do this, and covering the topic fully would make for a very long article. So for the sake of brevity, let's go over a few ways you can make digital nomadism a reality, for you to research further:
Different Types of Digital Nomads
Teaching English: Yes, we have already gone over how you can teach English in an East Asian country in a classroom setting. But you can also do something similar by teaching students remotely through Skype or Zoom.
Day Trading: I remember meeting an Indian fellow in Ho Chi Minh City, Vietnam, who traded in the Indian market remotely. Not for the faint of heart, and there is a capital requirement, but it is possible.
Computer Programming: During my time WWOOFing in Mexico, one of the other WWOOFers was working for a Dutch company in this field.
Blogging/YouTube: A tight market flooded with more people than you can count, but definitely a possibility if you put your mind to it.
Sell a Course: If you have a marketable skill, odds are someone out there will be willing to pay you to learn what you know. Popular platforms such as Udemy and Teachable can help you spread the word about what you know.

https://medium.com/@kingstonlim/exploring-digital-nomadism-working-abroad-5-the-global-mba-61ac760e4c4e | ['Kingston S. Lim'] | 2021-09-08 12:01:38.656000+00:00 | ['Digital Nomad', 'Work Travel', 'Travel']
Building connections

Getting ‘in’ with a local business.
Much later on in the placement, a representative from McNair (a local business) came to visit the museum. This was a clothes shop that made bespoke shirts, designed for harsh environments.
The representative was incredibly interested in the cloth the museum was producing, and wanted to create a collaboration. The museum would produce the cloth for the trim of the shirts.
As I was now one of the main weavers in the museum, I would be producing a lot of this cloth. This meant consistently keeping to high standards, while still maintaining my other duties at the museum.
The whole idea was nerve-wracking, especially as the representatives would understandably want to know about how the cloth was made, and who would be making the cloth.
This meant that I had to tame the butterflies zooming around in my stomach and have a discussion with the representatives about the weaving process, and about my role in the museum.
The cloth that I had previously woven.
Understandably, the representative wanted to look at what I could do. So I had to display the cloth that I had previously made.
Thankfully, they were impressed!
This was it now. The order of cloth was made official, and I had to get my head down and weave. Thankfully, the pattern was rather easy to produce on the loom.
One, two, three, four. Change colour.
One, two, three, four. Change colour.
Much easier than some of my other patterns, this meant that I was able to share my time between the weaving and the visitors, which kept my stress levels in check!
It also felt quite inspiring to hear what the general public thought about the joint project, as many were curious about why we were weaving, and it was interesting to hear their thoughts.
The number of visitors to the museum also increased, as many of McNair’s clients visited the museum. These clients wanted to see the making process of the shirts they were going to buy. This increase in visitors really shows the importance of making new connections.
The finished product.
The collaboration between the museum and McNair was called ‘really slow fashion’ due to the length of time needed to produce cloth on a handloom (but I promise that I went as fast as I could!).
The finished shirts are now in the shop, and many have already been sold. It is a little overwhelming to think that people are wearing something that I have made.
This has been a fantastic experience that has enabled me to address one of my weakest points in terms of career readiness.
The dreaded networking!
This was one of my weakest aspects, due to the constant fear of not coming across well to other people. This fear led to a major lack of confidence in myself. Going through this experience has proven that my fears were unfounded.
Through developing this skill, more doors will be open to me. These doors will lead to more and more potential opportunities.

https://medium.com/@lozielouallinson/building-connections-21448e4ce522 | ['Lauren Allinson'] | 2020-05-05 09:07:26.960000+00:00 | ['Local Business', 'Museum', 'Placement', 'Swp05', 'Weaving']
Lightning in a Bottle

Getting it Running Locally
Local Nodes
Getting the Bitcoin and Lightning nodes up and running wasn’t too tricky and mostly involved getting all the right dependencies installed and downloading the Bitcoin blockchain. Since we were using the testnet, it only took about 4 hours to download. Aside from that, it was also important to make sure we had the correct flags and configurations for the nodes.
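As an illustration only (the exact options depend on your node versions and your Lightning implementation), a minimal testnet bitcoin.conf might look something like the following; the ZMQ endpoints are an assumption, required by some Lightning implementations such as eclair:

```text
# bitcoin.conf — minimal testnet sketch (illustrative, not our exact config)
testnet=1
server=1
txindex=1
zmqpubrawblock=tcp://127.0.0.1:29000
zmqpubrawtx=tcp://127.0.0.1:29001
```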
Android
Getting the Lightning Network running on the Android phone took significantly more effort. First off, we had to take a look at ACINQ’s eclair wallet to understand how to use their Scala implementation of the Lightning Network. At first glance it seemed quite complicated, but piece by piece we broke it down and isolated the components we needed for basic boilerplate code.
Additional Setup
The last step necessary to facilitate communication between my local nodes and the outside world was to expose them through an SSH server for port forwarding. We used serveo for this, but ngrok works just as well too.
Initial Challenges
Testing was the most tedious aspect of getting the nodes up and running and making a payment channel. Since we were using the testnet, we still had to wait for 6 confirmations from miners, which could take up to 30 minutes. Whenever we would create a funding transaction for off-chain transactions, or close and settle a payment channel, we were waiting upwards of half an hour.
😞
Another inconvenience we encountered was that we couldn’t fund the payment channel after it was opened. Essentially, the funds you opened the payment channel with are all the funds you have to spend before the state has to be broadcast on the main chain. Contrary to the payment channels we implemented on the Ethereum network, the payment channels on the Lightning Network do not have addresses, or at least exposed addresses that can have funds sent to them.
General unpredictability of the network was a salient theme throughout our research of the Lightning Network. A number of times we would try to open a payment channel with a random node we’ve connected to and we would be greeted with the following error:
The error is telling us that we don’t have enough satoshis in our wallet to fund the opening transaction.¹ We tried to fund the channel with 20000 satoshis; if the fee is 485281 satoshis, as it says in the error, then that’s a total of 505281 satoshis. In our wallet we have 41173093 satoshis, more than enough to fund the transaction.
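The mismatch is easy to confirm with a quick check using the figures from the error message:

```python
channel_amount = 20_000       # satoshis we tried to open the channel with
fee = 485_281                 # funding fee reported by the error
wallet_balance = 41_173_093   # satoshis actually in our wallet

total_required = channel_amount + fee
print(total_required)                     # 505281
print(wallet_balance >= total_required)   # True — the error shouldn't occur
```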
Lastly, when you make off-chain transactions on the Lightning Network you still pay a fee if your transaction has to travel through more than one node! The creators of the Lightning Network estimate that the fee should be 1 satoshi per node or smaller. The fee is nominal, but nonetheless you still have to pay the nodes you pass through, unlike Ethereum payment channels, where there is no fee at all.

https://medium.com/stk-token/lightning-in-a-bottle-e5c3c90b8959 | ['Aleksa Jovanovic'] | 2019-02-08 15:24:12.664000+00:00 | ['Bitcoin', 'Payment Channels', 'Lightning Network', 'Lightning', 'Research']
Atmospheric and Glint Correction of Sentinel-2 Imagery for Marine and Coastal Machine Learning

Hedley et al. sun glint correction is performed on the DOS and Sen2Cor images and requires pixels over deep water to calibrate the AC. To perform ACOLITE AC with glint correction, the user setting glint_correction is set to True, glint_force_band is set to 1600 (short-wave infrared), and I experimented with the glint_mask_rhos_threshold (default 0.05). Observations of the glint corrected images and histograms above are as follows:
The DOS and Sen2Cor with Hedley glint correction removes some but not all sun glint as shown by the turquoise class in the deep-water locations.
The ACOLITE DSF with glint removal does a good job of removing glint over water, although some pixels appear so dark they are classed as the land class. There also appears better defined near-shore class clusters as shown by the turquoise and green class.
The ACOLITE DSF glint correction with a threshold of 0.05 (default) identified some cirrus cloud (shown in the north-west of the image that was missed from masking) and some bright shallow water areas as glint, as shown by the dark brown class.
The ACOLITE DSF glint correction with a threshold of 0.07 removed more glint in the deeper water and did not identify as much cirrus cloud nor shallow water as glint compared to threshold of 0.05.
The ACOLITE DSF glint correction with threshold 0.07 appears to outperform the other glint corrections. The ACOLITE DSF glint correction when set at a lower threshold (0.05) mistakenly identified some cirrus cloud for glint which had been missed from masking, which is useful information to improve the masking algorithm.
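For readers who want to experiment, the Hedley et al. correction mentioned above boils down to a per-band linear regression against the NIR signal over a deep-water sample. A minimal sketch (illustrative, not the exact code used in this study; function and variable names are my own):

```python
import numpy as np

def hedley_deglint(band, nir, deep_mask):
    """Hedley et al. (2005) sun glint correction for one visible band.

    band, nir : 2-D reflectance arrays; deep_mask : boolean array marking
    optically deep water pixels used to calibrate the regression.
    """
    # Slope of the NIR-vs-visible regression over the deep-water sample
    b = np.polyfit(nir[deep_mask], band[deep_mask], 1)[0]
    # Ambient (glint-free) NIR level, taken as the deep-water minimum
    nir_min = nir[deep_mask].min()
    # Remove the glint contribution predicted from the NIR signal
    return band - b * (nir - nir_min)
```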
By assessing the impact of AC and glint correction algorithms on the structure of the image data, this study can inform algorithmic decision-making for marine and coastal habitat mapping. Machine learning is data driven, and the final model will only be as good or bad as the data that goes into it. Improvements to this work include comparison with other highly regarded AC algorithms (e.g. C2RCC, Polymer) and statistical analysis. A robust statistical analysis of how effective the AC is would require ground-truth Rrs values, collected using a hand-held or shipborne spectrometer, against which to compare and evaluate the AC and glint corrected pixels.
For any future ACOLITE users out there, I found a lot of useful information about ACOLITE in the forum: https://odnature.naturalsciences.be/remsem/acolite-forum/ and User Manual https://odnature.naturalsciences.be/downloads/remsem/acolite/acolite_manual_20190326.0.pdf
And a big thank you to Scottkaczor for contributing to this work!
References
[1] Warren, M., Simis, S., Martinez-Vicente, V., et al. (2019). Assessment of atmospheric correction algorithms for the Sentinel-2A MultiSpectral Imager over coastal and inland waters. Remote Sens. Environ. 225, 267–289.
[2] Mobley, C. (2020). The Atmospheric Correction Problem. https://oceanopticsbook.info/view/remote-sensing/the-atmospheric-correction-problem
[3] ESA. (2019). Sen2cor Configuration and User Manual. http://step.esa.int/thirdparties/sen2cor/2.8.0/docs/S2-PDGS-MPC-L2A-SUM-V2.8.pdf
[4] Chavez, P. S. (1988). An Improved Dark-Object Subtraction Technique for Atmospheric Scattering Correction of Multispectral Data. Remote Sensing of Environment, vol. 24, pp. 450–479.
[5] Royal Belgian Institute of Natural Sciences. (2020). OD Nature Remote Sensing and Ecosystem Modelling team. https://odnature.naturalsciences.be/remsem/software-and-data/acolite
[6] Hedley, J. D., Harborne, A. R., Mumby, P. J. (2005). Simple and robust removal of sun glint for mapping shallow-water benthos. Int. J. Remote Sens., 26 (10), pp. 2107–2112.
[7] Vanhellemont, Q. (2019). Adaptation of the dark spectrum fitting atmospheric correction for aquatic applications of the Landsat and Sentinel-2 archives. Remote Sensing of Environment 225, 175–192.

https://medium.com/uk-hydrographic-office/atmospheric-and-glint-correction-of-sentinel-2-imagery-for-marine-and-coastal-machine-learning-ec0ea8734e23 | ['Rachel Keay'] | 2020-11-25 12:43:34.313000+00:00 | ['Image Processing', 'Data Science', 'Sentinel 2', 'Marine', 'Machine Learning']
Azure DevOps multi-stage pipeline for Terraform

Azure DevOps is a hosted service which helps you create CI/CD pipelines. You can deploy from your Azure DevOps source code repository, or you can bring an existing YAML pipeline from an external DevOps service such as GitHub or Bitbucket. Today we are going to discuss how to create a multi-stage CI/CD pipeline for your Terraform code in Azure DevOps.
As an outcome, we expect a plan (build) stage for each environment and an apply (deployment) stage. At the plan stage a reviewer can review the changes which are going to be deployed, and at the apply stage there will be an approval gate in place where an approver can approve the changes before they are deployed to the respective environments.
In this article, I am going to share all the steps you need to know to create the multi-stage pipeline in a few minutes.
Prerequisite:
Basics of Azure DevOps projects; if you are new, you can watch my video tutorials on YouTube here.
Basics of Yaml.
Terraform commands.
Some basic Azure CLI commands.
Some basic Linux commands (such as copy, ‘cp’).
Code Setup:
Before I start creating the pipeline I would like to show you the Terraform code structure which I have for this configuration.
Code structure
In the ‘pipeline’ folder, we are going to define our YAML pipeline file.
Inside ‘src’ we have the ‘code’ and ‘module’ folders. Inside ‘code’ we have all our variables, maps, and providers defined, and inside ‘module’ I have Terraform modules for my Azure resources.
My Terraform code will be executed from the ‘code’ folder, which will call the respective modules.
The source code structure is totally up to you; defining the pipeline does not depend on the structure. This is the structure I prefer for my personal use; if you like, you can follow the same.
I have also created a few empty folders, one for each of my environments.
These folders are the locations where my pipeline will perform the plan stage (build).
Creating Pipeline:
Section 1: In the ‘pipeline’ folder we are creating the azure-pipeline.yaml file.
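A sketch of such a trigger block (the branch name is an assumption):

```yaml
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - src/code/input
```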
Here, at the top, we have defined the pipeline trigger. In the trigger I am specifying when my pipeline should auto-trigger; the code above says to trigger this pipeline when there is an update to the ‘src/code/input’ folder.
I have also created a yaml file to store the variables which we are going to use in our pipeline; the variables.yaml file has the following variables list:
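A sketch with placeholder values (the variable names match those consumed by the terraform plan command shown later in the article):

```yaml
variables:
  client_id: '<service-principal-client-id>'
  client_secret: '<service-principal-client-secret>'
  tenant_id: '<azure-ad-tenant-id>'
  subscription_id: '<azure-subscription-id>'
```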
If you want, you can store the variables in an Azure variable group as secrets; for this demo I am keeping these variables in the yaml file only ;).
[NOTE]: Here, to run my Terraform commands, I am using the ‘ubuntu-latest’ agent pool, hence I need to use Linux commands only. If you are using any other platform, then you need to replace them with the appropriate commands for your operating system type.
Section 2: In this step, we need to add a task to install Terraform on your build agent, but only if you are using a Microsoft-hosted agent, because as of now MS-hosted agents don’t come with Terraform installed on them; hence we need a task to install Terraform.
Here we are installing Terraform version ‘0.13.2’.
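A sketch of such an installer step; I am assuming the commonly used TerraformInstaller task from the Azure DevOps marketplace Terraform extension:

```yaml
steps:
  - task: TerraformInstaller@0
    inputs:
      terraformVersion: '0.13.2'
```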
[NOTE]: If your project uses a self-hosted agent and the agent already has Terraform installed, then you can ignore this step.
Section 3: The Terraform plan step on the Dev environment. The objective of this step is to create the Terraform plan, which can be reviewed by a reviewer; the plan will show what is going to be added/edited/deleted in your subscription.
just like the below screenshot
Sample plan file screenshot
Here is the sample code for the same:
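A sketch of such a plan stage (the backend configuration for terraform init is omitted, and folder paths and display names are illustrative):

```yaml
stages:
  - stage: Plan_Dev
    dependsOn: Setup
    jobs:
      - job: Plan_Dev
        steps:
          # Task 1: copy the code into the environment folder and create the plan
          - script: |
              cp -a src/code/. src/dev/
              cd src/dev
              terraform init
              terraform plan -var-file="input/dev.tfvars.json" -out="out.plan" \
                -var="client_id=$(client_id)" -var="client_secret=$(client_secret)" \
                -var="tenant_id=$(tenant_id)" -var="subscription_id=$(subscription_id)"
            displayName: 'Terraform plan (dev)'
          # Task 2: copy the 'dev' folder into the staging directory
          - task: CopyFiles@2
            inputs:
              SourceFolder: 'src/dev'
              TargetFolder: '$(Build.ArtifactStagingDirectory)'
          # Task 3: publish the artifacts, named after the environment
          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: '$(Build.ArtifactStagingDirectory)'
              ArtifactName: 'dev_artifacts'
```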
here we have created a new YAML pipeline stage (Plan_Dev), this stage depends on the previous stage (Setup).
Next, we are defining the jobs collection, in which we have created a build job (Plan_Dev); this job has multiple steps, and each step can have one or more tasks.
overall structure will look like this:
Yaml pipeline structure
In this step, we have 3 different tasks:
→ Task 1: Script - this task performs the following activities:
→ Copy the content of the ‘code’ folder to the ‘dev’ folder and move the execution pointer to the ‘dev’ folder
→ Run the ‘terraform init’ and ‘terraform plan’ commands
→ Task 2: Copy the content of the ‘dev’ folder into the staging directory, to create the artifacts.
→ Task 3: Publish the artifacts to Azure DevOps; we are naming the artifact after the environment (e.g.: dev_artifacts).
Section 4: Apply stage. Here we are going to define the steps to run the ‘terraform apply’ command, which will create the infrastructure in the Azure subscription.
The apply stage is a bit different from the plan stage; it uses the artifacts which were generated by the plan stage, hence we are using the ‘deployment’ block.
YAML schema — Azure Pipelines | Microsoft Docs
In this step we have a couple of tasks. Before we discuss those tasks, have a look at the ‘- deployment’ tag; here we are defining the strategy to run the deployment using the artifacts of the build stage.
We are also using the ‘environment’ tag; using environments we can define a conditional deployment, wherein a reviewer can review the stage and then make the decision to deploy.
Script — this task performs the following activities:
→ Copy the content of the ‘code’ folder to the ‘dev’ folder and move the execution pointer to the ‘dev’ folder
→ Run the ‘terraform init’ command.
→ Run the ‘terraform plan’ command.
→ Run the ‘terraform apply’ command.
[NOTE]: here, in addition to the build stage steps, we are running the ‘terraform apply’ command to apply the changes.
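A sketch of the corresponding apply stage; it assumes the deployment job’s automatic artifact download into $(Pipeline.Workspace), and the environment name is illustrative:

```yaml
- stage: Apply_Dev
  dependsOn: Plan_Dev
  jobs:
    - deployment: Apply_Dev
      # The environment carries the approval gate configured in Azure DevOps
      environment: 'dev'
      strategy:
        runOnce:
          deploy:
            steps:
              - script: |
                  cd $(Pipeline.Workspace)/dev_artifacts
                  terraform init
                  terraform apply "out.plan"
                displayName: 'Terraform apply (dev)'
```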
[Important]: Our ‘terraform plan’ command uses input variables that are defined in our source code as a JSON file; that’s why we refer to its path in our command.
terraform plan -var-file="input/dev.tfvars.json" -out="out.plan" -var="client_id=$(client_id)" -var="client_secret=$(client_secret)" -var="tenant_id=$(tenant_id)" -var="subscription_id=$(subscription_id)"
This is it! Using the above-mentioned steps, you should be able to deploy infrastructure to the dev environment in your Azure subscription.
Question 1: What about deploying changes to multiple environments?
Well, the answer is simple…
Repeat the same step for the rest of your environments :) happy days.
Question 2: Why do we need a separate build stage for each environment?
Answer: As we have multiple environments, and for each environment we maintain the Terraform state in its own environment-specific blob storage file (stored in an Azure storage account), it is possible that different environments have different changes to be deployed since the last deployment. Hence, for each specific environment we want its own build, so that the reviewer can review the plan of that specific environment. Deploying a single plan from one environment to all the environments would not be a correct implementation, because the plan for one environment might destroy something in another environment.
The full YAML pipeline can be found here:

https://medium.com/@rakesh-suryawanshi/azure-devops-multi-stage-pipeline-for-terraform-493d502a67b6 | ['Rakesh Suryawanshi'] | 2021-03-16 19:48:29.497000+00:00 | ['Cicd Pipeline', 'Azure Devops Pipeline', 'Terraforming', 'Terraform']
Deploying FastAPI application in Google App Engine in Standard Environment

FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.6+ based on standard Python type hints. To learn more about FastAPI, you can visit the FastAPI docs by clicking here.
FastAPI also provides a Swagger UI by default at {base_url}/docs for testing APIs.
Installation
pip install fastapi
You will also need an ASGI server for production, such as Uvicorn or Hypercorn. We will be using Uvicorn for this article.
Uvicorn is a lightning-fast ASGI server implementation, using uvloop and httptools.
Until recently Python has lacked a minimal low-level server/application interface for asyncio frameworks. The ASGI specification fills this gap, and means we’re now able to start building a common set of tooling usable across all asyncio frameworks.
To install uvicorn:
pip install uvicorn
This will install uvicorn with minimal (pure Python) dependencies.
pip install uvicorn[standard]
This will install uvicorn with “Cython-based” dependencies (where possible) and other “optional extras”.
In this context, “Cython-based” means the following:
the event loop uvloop will be installed and used if possible.
the HTTP protocol will be handled by httptools if possible.
I prefer using uvicorn[standard], as it installs the Cython-based dependencies, which will prevent errors related to uvloop and httptools when running in production.
Lastly, you should also install Gunicorn, as it is probably the simplest way to run and manage Uvicorn in a production setting. Uvicorn includes a Gunicorn worker class, which means you can get set up with very little configuration. You do not need to install Gunicorn when running locally.
To install Gunicorn:
pip install gunicorn
Freezing Requirements File
After installing all the required dependencies inside the virtualenv, do not forget to freeze the requirements file before deploying, as App Engine installs dependencies from the requirements.txt file.
To freeze the requirements file:
pip freeze > requirements.txt
Configuring app.yaml file
Your Python version should be 3.6 or above for FastAPI to work. Here is the configuration for my project:
runtime: python37
entrypoint: gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app
instance_class: F2
You can use any Python version above 3.6 and any instance_class as per your need. The following will start Gunicorn with four worker processes:
gunicorn -w 4 -k uvicorn.workers.UvicornWorker
main is my main.py file and app is the instance of my FastAPI application. App Engine handles the port number, but you can define your desired port number.
Confirming before deployment
After you have completed all the above steps, confirm the following one last time:
1. Your virtualenv is activated and all the requirements are installed in the environment.
2. Make sure you have installed only the necessary dependencies and included a .gcloudignore file to ignore unnecessary files and folders during deployment.
3. Freeze the requirements.txt file before deploying so that you don’t miss adding newly installed dependencies to the requirements file.
4. Your app.yaml file is properly configured.
5. Your service account JSON files have the necessary access.
Deploying in App Engine
If you have not installed the Google Cloud SDK, then you must install and configure the SDK. You can follow this link to properly configure your Google Cloud SDK.
After installing the sdk, you need to initialize the sdk. To initialize Cloud SDK:
Run gcloud init from the terminal.
After initializing, make sure you select your correct project id. To select the project from Google Cloud, you have to run gcloud config set project [project_id]
Finally, to deploy your FastAPI application to the selected project:
gcloud app deploy
You will get the URL to view the application in your terminal.

https://medium.com/analytics-vidhya/deploying-fastapi-application-in-google-app-engine-in-standard-environment-dc061d3277a | ['Pujan Thapa'] | 2020-11-13 03:42:24.323000+00:00 | ['Python', 'Fastapi', 'App Engine', 'Google Cloud Platform', 'Deployment']
More Than Hygge: The Wintering We All Need Right Now
On surrendering and healing this season
Photo by jurien huggins on Unsplash
It was the winter of 1991. My aquamarine bib snow pants were strapped onto my shoulders and bunched over my boots, my coat barely zipped up over my sweater. I was Ralphie’s brother from A Christmas Story. The snow was piled up higher than my head, and in the backyard, I kept getting stuck as I tried to cross the boreal terrain. My nose was cold and I was too warm under my layered winter gear, but it was a delight to partake in this, Minnesota’s famous Halloween Blizzard. Everybody has a story about it.
Ask any ’80s kid and they’ll talk about it like it was the most magnificent event in which they’ve ever partaken. Ask someone from earlier generations and they’ll reminisce about having had to shovel their way out of their homes and unbury their cars, like three feet of snow is only a nuisance, not an enchanting occurrence.
I yearn for that kind of childlike merrymaking, where the deeper the snow the bigger the celebration, and the lower the temperatures the taller the tales. But like Santa Claus and Rudolph, these delights melted away along with my youth. Now, when I see snow falling I think about the condition of the roads. When the temperatures are below zero, my mind goes straight to my drafty windows and how I’ll be able to keep my children from frostbite.
Not this year.
This year I am embracing the Scandinavian heritage my home state was founded on. When I see snow, I’m going to marvel at how it sets the streets aglitter. And when the temperatures plummet, I’m going to think about bundling up under a blanket by the fire. I’m not going to simply endure winter like I do year after year; I’m going to use it as an excuse to rest, reflect, and rejuvenate. This winter I am going to heal.
You may have heard of the Danish word hygge. It’s not a word so much as a concept or way of being. Not quite translatable into English, it is essentially a coziness that evokes a feeling of contentment or well-being. Hygge is a big leather chair, a weighted blanket, and a good book. It’s drinking hot cocoa by a crackling fire and cuddling with a pet or loved one. A way of living as second nature as bicycling in Denmark, hygge has only recently hit the U.S. — and to much fanfare.
Hygge is a delicious idea. It’s enough to get me through until spring. But I recently came across another concept that has a slightly stronger pull for me: wintering.
British author Katherine May released a book last year called Wintering: The Power of Rest and Retreat in Difficult Times, and it is a glorious read. Wintering, according to May, is not just a time of year. Everyone has their own personal winters, or seasons of difficulty in which we must nurture ourselves and our souls to come out better than we were upon entering them. Sometimes winters are in the summer. Other times, like this year, they begin in March and last for an unforeseeable number of months. Winters like those May speaks of are a time to welcome our hardships (they’re coming for us regardless, but embracing the cold makes them hurt a little bit less) and give ourselves the time and space we need to get to the other side.
“Wintering brings about some of the most profound and insightful moments of our human experience, and wisdom resides in those who have wintered,” May says. A metaphor and a way to embrace the season, wintering is everything we need to do right now. We as a collective whole need to hunker down and heal. There is global hunger for it.
Many of us are still trying to be as productive as possible (I am so guilty of this). But maybe instead of being productive we should focus on doing what we need to survive.
I slept until 8:45 this morning — later than I’ve slept in recent memory. I have two children; one is only three months old. When I was up at 5:00 with the younger one, my initial reaction was to get up and grab my computer. To get some writing done. To produce. But after a feeding session, I handed over the baby, crawled back into bed, and woke up hours later. I wintered, and I feel incredible for it.
In Wintering, May talks about the magical transformation trees in northern climates undergo: “The changes that take place in winter are a kind of alchemy, an enchantment performed by ordinary creatures to survive.”
Is it magic? No. It’s nature, and it’s in you and it’s in me.
It’s time to retreat, to follow our urges to go to bed a little earlier and wake up a little later.
It’s time to indulge, not extensively, not unhealthily, but in a way that warms the heart and fuels the soul.
Winter is a time to read the books and take the naps we haven’t made time for because we’ve been attending to everything that needs doing. Because winter isn’t about doing, it’s about being — whatever being is to you. And for me, being is reading a book in the quiet of the morning with a coffee in hand and the whole day ahead of me.
Yes, get outside and snowshoe or ski if the snow attracts you. But if the cold makes you recoil a little bit, embrace your desires and give in to the urge to be cozy and relax. Some proper wintering with a little hygge woven into it makes me excited about this winter, and the mere thought of spring makes me nostalgic for crackling fires and wool sweaters. Before the snow melts and the trees blossom, I am going to drink that coffee with heavy whipping cream, make those hearty stews, and find my healing. I am going to retreat. And come springtime, when it’s warm enough to feel the sun on my skin, I’ll be open enough to receive all of the renewal the season has ready for me.
I received a holiday card this year. “Kindness is like snow,” it reads. “It beautifies everything it covers.” This quote by Kahlil Gibran — accompanied by a man in a crown holding a robin while a goose and a fox have tea in the foreground — is the kind of specific beauty you can only find this time of year. I intend to bask in it and let the cold, the snow, and the darkness magnify the beauty all around me.
Let’s winter together.

https://medium.com/curious/more-than-hygge-the-wintering-we-all-need-right-now-ee0a638f6c8a | ['Kolina Cicero'] | 2020-12-22 07:37:21.848000+00:00 | ['Health', 'Mental Health', 'Self', 'Hygge', 'Winter']
Where do I belong
Do I belong anywhere?
HomeTown or WorkTown or Hyderabad
Which is the "my place"?
Where I feel I am home
Or it is just a fake feeling
To have ever
I have a doubt and I m doubt
People are old and new
I am confused
Existence is yes or no
Am I alive or I am unwanted?
What is the question and where are the answers
Belong is a hold which I wanna feel
It is corona or it is me
Did I fail or I am failure?
Do I belong ?
Should I stop or move?
Find a new belongings
What is it?
Do I have family,friend or boyfriend ?
Where is the love?
What are mine?
I feel I m dark
Dark are sad and Dark likes nowhere
Question is where do I belong?
Search is painful and forever.

https://medium.com/@tuni24609/where-do-i-belong-ae77b354f673 | ['Naam Me Kya Rakha Hai'] | 2020-12-17 11:00:51.275000+00:00 | ['Existence', 'People', 'Life', 'Belonging']
“Palestinians is Winning the Battle of Legitimacy”

I need to share an intriguing point about something I read. It’s about how international law is one of the most discussed terms with regards to the Israeli control of Palestine. Whether addressing the Israeli wars and the blockade of Gaza, the construction of illegal settlements in the West Bank, or the spread of politically-sanctioned racial segregation in Israel and the Occupied Territories, it is almost always present.
In any case, it seldom, if ever, translates into something concrete, regardless of the significance and value of the word. For instance, without international law protecting Palestinian civilians against Israeli denials of basic freedoms, the Israeli blockade of Gaza has continued unabated for over 13 years. On top of that, there is the fact that 1,000 illegal settlement units in the West Bank were approved a month ago by the Israeli government. Maybe Palestinians are truly winning this time.

https://medium.com/@blackwilliam879/palestinians-is-winning-the-battle-of-legitimacy-23da281be542 | ['William Black'] | 2020-10-07 13:52:23.002000+00:00 | ['Palestine', 'Gaza', 'Isreal', 'Wine']
Rate Limiting a .NET Core Application using the Sliding Window Algorithm — Part 2, Validation with Unit Tests

In the first article of this series, we implemented a rate limiter in .NET Core using the sliding window algorithm, with the intention that it will be used by an application to self-limit its requests to an API. Before we can put our implementation into practice, we first need to validate that it functions as intended.
In this article, we will discuss how to validate our rate limiter with unit tests. We will first setup the test class and several helper methods that can be reused throughout our unit tests. We will then turn our attention towards the unit tests, examining each based on the scenario that they assess; this is done similarly to how we implemented the rate limiter in the previous article. The unit tests will be written using xUnit and Moq.
Setting up the Test Class
Setup of our test class, SlidingWindowTests, is relatively simple, requiring only a single mock implementation and three helper methods. The setup for the SlidingWindowTests class is displayed below,
As you can see from the GitHub gist, we need only mock a single implementation: the ITimestamp interface. We mock ITimestamp so that we can precisely control what time values are returned from calls to ITimestamp.GetTimestamp() made by the SlidingWindow class (as was alluded to in the previous article).
We have added several helper methods to our test class to aid with code reuse,
GetElapsedTimeInTicks: as the name suggests, converts milliseconds to time in ticks
GetMinimumElapsedTimeInTicks: calculates how much time must elapse for a request to conform, given the rate limit and the current request count*
SaturateWindow: saturates the SlidingWindow object with requestLimit requests, mocking the time returned by the ITimestamp implementation with a value for the time elapsed that starts with timeElapsedTicks and increments by incrementTicks for every iteration. The method assumes that each request conforms up to the limit and asserts this using the test framework
A rudimentary unit test is also included in the code excerpt above; its responsibility is to verify that the SlidingWindow class is performing validation on its constructor inputs.
*The equation for GetMinimumElapsedTimeInTicks is derived from the equation for calculating the approximate request count by assuming the previously tracked window reached the rate limit and then solving for the elapsed time.
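For readers who want the footnote's algebra spelled out, the derivation can be sketched as follows. This is a reconstruction from the description above, using W for the window length in ticks, L for the request limit, c for the current window's request count, and t for the elapsed time into the current window; whether the boundary is strict or not depends on the exact conformity check implemented in Part 1.

```latex
% Sliding-window approximation, assuming the previous window hit the limit L:
\mathrm{count}(t) \approx L\left(1 - \frac{t}{W}\right) + c

% The next request conforms once the approximate count falls below the limit:
L\left(1 - \frac{t}{W}\right) + c < L
\quad\Longrightarrow\quad
L - \frac{L\,t}{W} + c < L
\quad\Longrightarrow\quad
t > \frac{c\,W}{L}

% Hence the minimum elapsed time computed by GetMinimumElapsedTimeInTicks
% is on the order of cW/L ticks.
```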
Unit Tests by Scenario
Our unit tests can be divided into three scenarios — each assessing how the implementation responds under certain conditions:
When an incoming request is received, but there is no previous window
When an incoming request is received, and there is a previous window
When the time between the incoming request and the start of the former, current window meets or exceeds two or more window lengths
Scenario #1: A previous window does not exist when checking the conformity of an incoming request
As the title states, we need to assess how the rate limiter responds when it is in a state where there is not a previous window. This can occur in two situations,
The current window being tracked is the first seen by the rate limiter
There has been a significant amount of time between the start of the former, current window and the incoming request (seen in the last scenario)
We will address the former situation in this sub-section and save the latter for the third. When this situation occurs, we expect the rate limiter to allow all requests up to the request limit (i.e. if the request limit is 50, then 50 requests will conform/be allowed). The unit test for this scenario is displayed below,
The unit test above is rather simple,
We create a SlidingWindow instance, passing in arbitrary values for the rate limit and our mock ITimestamp implementation
We saturate the window with as many requests as the value of requestLimit via a call to SaturateWindow() — which also acts to verify that each request up to the limit is considered to be conforming by the rate limiter

Notice that we provide 0 for the input value of incrementTicks. We do this because the elapsed time would have no impact: a previous window cannot be used to calculate the weighted request count, and the request limit for the current window has not been met.
Scenario #2: A previous window exists when checking the conformity of an incoming request
The second scenario is somewhat more involved than the previous. In this scenario, the rate limiter has a previous window to reference so we expect it to impact checks to the conformity of incoming requests. We can assess our rate limiter’s response in this scenario through the following unit test,
The unit test above is certainly a larger block of code, but is no more complex than the first,
We create an instance of SlidingWindow and saturate the initial window, assigning timeElapsedTicks to the return of SaturateWindow()
At this point, the value of timeElapsedTicks places us at the start of the next window, so we calculate the minimum time that needs to elapse before the next request will conform
We set up the next call to ITimestamp.GetTimestamp() to return a timestamp that is only half of the minimum elapsed time calculated in the previous step, and then assert that the next request does not conform
We do the same thing as the previous step, but set up ITimestamp.GetTimestamp() to return a timestamp that now equals the minimum elapsed time, and assert that the next request does conform
Lastly, we verify that the next request does not conform, since we have not advanced the time returned
You may have noticed that we have set up this unit test as a theory. This is not done to test equivalence classes per se, but to give some validation to our calculation of the minimum elapsed time with varying values for the rate limit.
Scenario #3: The timestamp of the incoming request exceeds the current window by two window lengths
The third and final scenario verifies that an incoming request is unencumbered by the former, current window when there has been a significant amount of time between the start of the window and the incoming request (two window lengths to be exact). The corresponding unit test is rather simple, constructed similarly to the unit test of the first scenario:
To validate the rate limiter for this scenario, we take the following steps in our unit test above,
Create a SlidingWindow instance and saturate the initial window, assigning timeElapsedTicks to the return value of SaturateWindow()
Increment the value of timeElapsedTicks by two window lengths
Saturate the window again, using the value of timeElapsedTicks as the starting value returned by ITimestamp.GetTimestamp, and verify that the new, current window can be saturated with as many requests as the value of requestLimit
Running the Tests
From the image below, you can see that our unit tests run successfully — or, at least on my local machine they do 😜.
How can we save time after an interactive Exploratory Data Analysis?

How to generate code within code?
Python’s answer: f-strings - simple and powerful
“F-strings provide a way to embed expressions inside string literals, using a minimal syntax. An f-string is an expression evaluated at run time, not a constant value.” (PEP 498)
In Python, preface a string literal with an f and put your variables into curly braces {}; when you execute the code, those placeholders are replaced by the variables' values. See the simple example below (f-strings and their evaluated values are bolded):
platform = 'Towards Data Science'
author = 'Robert Dzudzar'
number_of_articles = 1

example = f"{author} wrote {number_of_articles} article(s) for {platform}."

print(example)
# Returns: Robert Dzudzar wrote 1 article(s) for Towards Data Science.
How do I generate code in ‘Distribution Analyser’?
Within the app, the user plays with widgets: they select a distribution, then they use sliders or input boxes to tweak distribution parameters (each distribution has its own set of parameters). Values from Streamlit widgets are passed to a function that makes the figure — so why can’t we pass code too?
Depending on the code and parameters, sometimes variables need to be tweaked to have the desired format in the f-string:
In Distribution Analyser, to provide a user with generated code I need variable names or their values, or sometimes both, which I obtain with simple string manipulation. Once I get the desired format, I just pass it to the f-string (see Figure below). The created f-string is then passed to a Streamlit button, which will print it out when pressed.
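To make the mechanism concrete, here is a minimal, self-contained sketch of the idea. The variable names and the generated snippet below are illustrative assumptions rather than the app's actual code: plain variables stand in for Streamlit widget values, and the f-string bakes them into a ready-to-run snippet.

```python
# Stand-ins for values a user would pick with Streamlit widgets
# (illustrative names, not Distribution Analyser's real variables)
distribution = "norm"   # e.g. from a selectbox
loc, scale = 2.0, 0.5   # e.g. from sliders

# The f-string is evaluated at run time, so the user's current
# selections end up embedded in the generated code.
generated_code = f"""\
from scipy import stats

dist = stats.{distribution}(loc={loc}, scale={scale})
sample = dist.rvs(size=1000)
"""

print(generated_code)
```

Hooked up to a button, printing (or rendering via st.code) generated_code gives the user a copy-pasteable record of exactly the parameters they explored.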
The figure below shows a side by side comparison of f-strings in ‘Explore distributions’ and generated code (f-strings and their values are highlighted). In the same fashion, this is done to obtain the code from the best-fit distribution function. You can find the entire web app source code on GitHub, and feel free to play with and explore Distribution Analyser on Streamlit Sharing.
In conclusion
Give more to users to save their time and effort.
For web app developers, it will take only a few more functions added to their app (at least if they are using Python) to produce simple, reproducible and ready-to-use code that they can give to users as part of the analysis results. Having such an option saves users hours or even days of taking notes and digging through the source code.
The Tumultuous Decade: Arab Public Opinion and the Upheavals of 2010–2019 | JAMES ZOGBY
The Tumultuous Decade
Arab Public Opinion and the Upheavals of 2010–2019
STEUBEN PRESS 2020
September 4, 2020
James M. Dorsey
James Zogby’s The Tumultuous Decade: Arab Public Opinion and the Upheavals of 2010–2019 (Steuben Press, 2020) takes the reader on a decade-long tour of the Middle East as the region reverberates from popular revolts that toppled long-standing dictators, civil and proxy wars that sparked some of the world’s worst humanitarian crises, foreign interventions and seemingly intractable power struggles.
It does so through the eyes of ordinary Arabs, Iranians, and Turks rather than the region’s political elites. Zogby’s ability to tease out a sense of public opinion in a part of the world in which freedom of expression and freedom of the media are rare quantities constitutes an important contribution to the literature and understanding of a region that often seems too complex and intricate to easily wrap one’s head around.
In a world of autocracy, repression and conflict, polls often offer ordinary citizens a rare opportunity to express an opinion. Zogby demonstrates that autocratic and authoritarian leaders frequently ignore public opinion but track it closely and at times are swayed by what the public thinks and wants. Years of polling also demonstrates that failure to understand public sentiment and/or take it into account produces misinformed and misguided policies not only by rulers in the region but also governments like that of the United States.
Zogby’s discussion of Iraq since the 2003 US invasion that toppled Saddam Hussein illustrates the point. So does his analysis of polling of attitudes over several years in countries that overthrew their leaders during the 2011 popular Arab revolts as well as of perceptions of Iran and Palestinians incapable of wresting themselves from Israeli occupation. Zogby’s book offers a different look at the Middle East, one that offers fresh insights on the basis of citizens’ aspirations rather than what authoritarian and often corrupt elites would like the world to believe.
James Zogby is director of Zogby Research Services, a firm that has conducted groundbreaking surveys across the Middle East, and the founder and president of the Washington, DC-based Arab American Institute.
Click here to listen to the podcast
James M. Dorsey is a senior fellow at Nanyang Technological University S. Rajaratnam School of International Studies and the National University of Singapore's Middle East Institute. He is the author of the syndicated column, blog and podcast, The Turbulent World of Middle East Soccer
How to Secure a Spring Boot Application with TLS

Creating a Spring Boot Application
In this section, we will create a Spring boot application and expose the following endpoints:
GET v1/books/: List all books
POST v1/books/: Create a new book
GET v1/books/{book_id}: Get a book resource
DELETE v1/books/{book_id}: Remove a book
Step 1: Creating a Spring Boot Project
Browse to your favorite IDE and create a Spring Boot project with the web, h2, data-jpa and Lombok dependencies. Following is the pom.xml file:
pom.xml
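(The embedded gist may not render in this format. As a rough sketch, assuming the project inherits from spring-boot-starter-parent so that versions are managed for you, the dependencies section would contain entries like these; treat it as illustrative rather than the article's exact file.)

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>
```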
Step 2: Configuring h2 database
In this application, we will use the h2 in-memory database as our backing database. Add the following configuration in the application.properties file to configure h2 database:
application.properties
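(Again, the gist embed may be missing here. A typical minimal H2 configuration looks like the following; the database name bookdb and the console setting are illustrative assumptions, not necessarily the article's exact values.)

```properties
# In-memory H2 database (illustrative values)
spring.datasource.url=jdbc:h2:mem:bookdb
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
# Optional: browse the data at /h2-console while developing
spring.h2.console.enabled=true
```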
Step 3: Creating the domain object
In this application, we will manage book information. Users of this application/API can perform CRUD operations. We have created the following book entity:
Book entity
Step 4: Creating the REST endpoints
To facilitate the REST endpoints, we have created the following REST controller. It serves the aforementioned endpoints:
BookController.java
We also have created the following repository:
BookRepository
Step 5: Insert data at application startup
To help with the testing, let's insert a few book details into our database. To do so, create a new SQL file named data.sql in the src/main/resources directory. Spring Boot automatically executes this file at startup.
data.sql
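(If the gist is not visible, a seed file of this shape is what the step describes. The column names assume a Book entity with id, name and author fields; the article's actual schema and rows may differ.)

```sql
-- Illustrative seed data executed by Spring Boot at startup
INSERT INTO book (id, name, author) VALUES (1, 'Clean Code', 'Robert C. Martin');
INSERT INTO book (id, name, author) VALUES (2, 'Effective Java', 'Joshua Bloch');
INSERT INTO book (id, name, author) VALUES (3, 'Refactoring', 'Martin Fowler');
```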
Step 6: Testing the API
Let us now start the application and test a few endpoints to ensure the application is working as expected. We are using HTTPie to access the endpoints, and we are receiving data from the server as shown below:
Is All Writing Valuable Writing? | I say yes. Do it all.
deviantart.com
There are so many styles of writers who write on various subjects and in various themes.
There is:
The deep thinking essayist.
The lighthearted one-minute-wit writer.
The haiku poet.
The short story fiction writer.
The limerick poet.
The horror writer.
The serious poet.
The How-To writer.
The Dr. Suessy rhyming styled poet.
The self-help inspirational motivational writer.
The listicle writer.
The personal essayist.
The comic writer.
The satirist.
The feverish journal writer.
To me, all of this is valuable writing. It all counts. I’ve written all of this myself.
I find it fun to experiment. Shouldn't writing be fun and enjoyable? Write it all if you want. Hey, why not try things on for size? There are no writing police (that I’m aware of!) waiting to handcuff you or take your writing permission slip away.
Well, maybe you’ll feel judged or even ostracized by your peers for stepping out of your comfort zone. I'm willing to risk that. I’m willing to risk going rogue.
I don't write for anyone but me and my inner future old lady, anyway. | https://medium.com/writing-heals/is-all-writing-valuable-writing-16e65ee3abb9 | ['Michelle Monet'] | 2019-10-17 13:47:14.203000+00:00 | ['Inspiration', 'Writing', 'Motivation', 'Creativity', 'Books'] |
Face Time | Who knew in 2017?
Photo by Charles Deluvio on Unsplash
August 2017:
The best invention EVER.
I live 279 miles away from my grandchildren. And yet, I am able to see them on a nearly daily basis.
I was awakened one night at 2:30 AM by a text from my daughter just wondering if I happened to be up. The youngest had gotten up in the middle of the night crying for ‘Nana’ and was now marching around the house iPad in hand demanding my virtual presence.
One of the truly stellar “Nana moments” in my career as a grandmother.
I will be on Face Time with my daughter and one of the kids when suddenly, I will hear my name shouted as another sister hears my voice and realizes I am ‘there’. Soon I will see that little person skidding into view, dashing around the corner as fast as her legs will carry her, eager to be in on the conversation.
The older kids have mastered Face Time on the go and can hold their mom’s phone and manage the app while in the car. I get the low down on the day’s activities or have a song sung to me all while en route to the next big adventure. And remarkably, they share it. Score one for Mom.
I have watched dance performances, yoga exercises, been read to, kissed, and hugged — all through the gift of Face Time.
Thank you, Apple. You have no idea how much you improve the quality of my life.
Namaste.
Addendum: This was written in 2017. Back then we had no idea how much investment we would need to make in our virtual relationships. I am reposting this to remind all of us of the gift technology truly is. I am grateful every single day — all the way back to 2017 — for the opportunity to share in my grandchildren’s day to day lives. Even from nearly 300 miles away. For that alone, my iPhone is worth every. single. penny.
The Corona Map: Visualizing the Pandemic

The Novel Coronavirus, the pandemic of the century, has brought hysteria all over the world. Stock markets witnessed their worst day since the 2008 crisis, and oil prices saw their biggest fall in 29 years.
First of all, let us understand what this new coronavirus, or COVID-19, is. Corona isn't a single virus; it is a family of viruses that causes diseases like the flu. The newest discovery from that family has been named the Novel Coronavirus, or COVID-19.
So how is this new coronavirus different from the old coronaviruses that don't seem to hurt much? Well, the main issue is that it is new, and we don't yet have much research and/or medicine available to treat it. Another issue with this virus is that it is highly contagious.
Around 97% of its victims survive but people with weak immune systems can’t fight it and we don’t have the medicine to kill this virus yet.
Scientists and researchers are working on it and soon we will have vaccinations and treatment available for this. Governments and private institutions worldwide are working to mitigate the disaster and contain the virus.
How can GIS help?
“GIS provides excellent means for visualizing and analyzing epidemiological data, revealing trends, dependencies, and inter-relationships.”
To respond to the disasters we need data about them and in this age of data analytics and the tools we have available now, there is a lot that can be done. In this regard, the Johns Hopkins CSSE did great work. They have been collecting data and posting it daily on their repository. They also created a portal that shows current information.
The above portal does a great job of providing the latest information but lacks the historical information on the map which is present in the datasets that they have provided. They do provide historic data in their charts but not on the map.
What are we doing?
I believe that visualizing historical data related to this pandemic in space would add value for researchers and whoever wants to study more about COVID-19. So, my friend and I decided to build a small tool: The Corona Map.
This is a free and minimalistic tool that we have developed for those interested in studying how the disease has spread and what is the status of it around the globe. It could be beneficial for travelers, doctors, researchers, NGOs, governments and anyone who wants to study and/or find out the current status of the impact of this disease.
We have only worked on it for a week so far and we plan to add more layers of data to it in the future and will keep updating the current data every day. Right now, it shows the global stats about each day on the map, and the stats for each country for each day since the first case was reported there.
Calendar at the bottom right allows you to check the status of Corona on any date
Stats of Coronavirus in China until March 09, 2020
We are looking at more sources of data and will add them to make it more useful for researchers. If you want to do your bit and want to share data or ideas that can really help researchers and organizations fighting against the disease, please reach out to us so that we can make The Corona Map more helpful for them.
Suffering, how will it get better

Life is hard; well, pretty sure we all know that. I think everyone suffers, and the days go on and on and stay the same. Every day is the same. Why? I ask myself why every day. Why am I still here, alive? Why is earth like this? Why is everything bad? Why do I want to hurt myself? Why do I hate my body? Why isn't there anyone to talk to? It's funny that you think there are answers to these questions.
I don't know if I can get out of this deep hole I dug for myself; you can't blame everything on yourself. There are many people I can blame for my sadness and suffering, but it's mostly my fault. The fact that I know it's myself is horrible. I'm the problem. I have goals for myself. Goals lead to happiness? There will be struggle the whole way through; will I be able to get through the struggle?
For being young, I wonder what my future will be like; that's why I am alive. My future. I can't wait till I'm fully grown and happy. Will I ever be happy? Everyone says things will get better, but will they ever? I don't believe them. I guess waiting is what life is. What is my purpose? The day I look back on this and see how sad and young I was will be my purpose.
Oh, to the house I will have, the house I will build. The house I built in my head is a coffin, but in reality I want to spend the rest of my life with my soulmate, and if not, I'm fine alone. My perfect home, another reason for waiting, another reason for still being alive, maybe.
Maybe.
You can't just write your 12 reasons to live with there being 13 reasons not to live; your reasons to live will never win. Maybe I'll have the perfect life, maybe I will end up not making it. But I'm waiting. That's everything, just waiting. Wait to see my final day alive, wait to see what earth will become. Wait to see me grow, and my soulmate. Wait for success, maybe. I'm taking the chance of that maybe and waiting to see what happens. That's why I'm alive, for myself.
“This Might Be My Last Thanksgiving.”

“Give me the turkey tail!” my father would yell from the head of the table, every Thanksgiving of my childhood. I would leap to slice off the fatty, juicy, tender stub at the tail end of our Thanksgiving bird. “That butt is mine!” he would say, sending my mother into fits of giggles. Our guests — usually just family but sometimes including runaway teens, homeless drunks and disgraced politicians — would scurry to fill their plates from the sideboard, where platters of traditional soul food were stacked 3-deep. My dad didn’t care about the candied yams, collard greens or macaroni and cheese; he only wanted turkey tails, and my mother’s secret-recipe stuffing.
“Tell the story,” my aunt would settle herself with a full plate and a good listening position. Gram would cease fussing in the kitchen; the uncles didn’t say much, but they leaned in. Aunt June and Aunt B nudged around the liquor cabinet until they ferreted out the Jack Daniel’s. We kids — crowded around a card table and hiding the dog under our tablecloth — would quiet down for the first time all day.
“Well,” he’d begin. “We were broke, as usual.”
“Yes,” Gram would chime in. “But we were together.”
“Good thing it was so many of us,” Aunt June would nod. “Snuggling together kept us warm.”
“The church had a list for charity baskets, but we didn’t get one,” my dad said. “Mama didn’t want to be charity.”
Gram snorted.
“Couldn’t have cooked it anyway,” Aunt June said airily. “They’d turned the gas off for nonpayment.” Aunt B nodded, and poured her another shot.
“What about the wood stove, Daddy?” I asked, in exactly the right place.
“Oh, you couldn’t fit a turkey in there,” Daddy said.
“How about half of a turkey?” I asked. He beamed at me for knowing my lines.
He shook his head. “No, baby girl, we didn’t even have the little tail of a turkey to cook.” He looked around the room, making eye contact with each one of us. His children, cousins, in-laws, sisters, uncles, aunts. His motley assortment of guests, who might be angels in disguise. His wife and his mother. All eating heartily — this day and every day — because of him.
“It was very cold, and we were very hungry,” he’d say. His deep voice rumbled over me, like always, and I closed my eyes to listen. I could see their stark little apartment in my mind, could hear baby June whining and Gram singing Frank Sinatra tunes and Uncle Joe asking to go outside and play stickball in the street.
I could smell the turkeys cooking in the other apartments, could hear the other children laughing and playing … children who would eat their fill on Thanksgiving, while the boy who would be my father, went hungry. Again.
I could see him and Uncle Joe, 11 and 12, two skinny Black boys in thin coats, running down to the kosher deli; watch as Mr. Cohen shooed them to the back of the glass counter; observe them picking through the meat scraps that were headed for the trash. Chicken feet. Brisket fat. Turkey tails.
“We made deliveries for Mr. Cohen all the time,” Daddy said. “He knew us. Me and Joe. He didn’t pay us for doing it, but the nice white ladies always gave us good tips, so it was a good job to have.”
Mrs. Cohen, wiping her hands on her apron, wrapping up the scraps. Scooping up knishes and rye bread and pickles when Mr. Cohen’s back was turned. Daddy stuffing these gifts inside his too-small jacket. Joe, pretending to trip and fall, slipping apples into his pockets.
“Mama had never seen a turkey tail by itself before, without the turkey attached,” Daddy laughed, and Gram dabbed her eyes. “She found it very peculiar.”
Aunt June cackled. “So she cooked them like she cooked everything. She put a pot of water on top of the wood stove and — ”
“She BOILED them!” all the kids shouted in unison.
“Yes!” Daddy grinned. “The feet and tails and scraps all together, with salt and pepper and one wilted little onion. And when it was cooked — “
“God, do you remember how good it smelled?” Aunt June asked, hugging Gram. “Oh it filled the whole apartment!”
“Yes, it was divine,” the aunts agreed.
“ — when it was cooked,” Daddy went on. “We all climbed into Mama’s bed.”
“It was finally warm in the house, from the wood burner,” Gram nodded. “And Joe had found some extra wood — ”
“Yes, FOUND!” Uncle Joe had died in Korea, long before I was born. “Joe was always FINDING stuff we needed!”
Gram’s eyes twinkled. “Joe was a good boy,” she said, dropping a kiss on his 4-year-old namesake, my little brother Joey.
My aunts rushed to fill in the story, as they did every year, their words tumbling over each other like puppies in a basket.
“And we turned on the radio to listen to the Andrews Sisters and Glenn Miller and Louie Armstrong — ”
“And Mama taught us the words to the old songs — ”
“ — and we spread out the food on the bed and ate with our hands — ”
“ — and Joe cut his finger slicing up the apples — ”
“ — and June was scared of the chicken feet — ”
“ — and Eddie ate all the pickles — ”
“ — we never sang so loud — ”
“ — we never laughed so hard — ”
Daddy would groan loudly while they talked, licking his fingers. “Best damn Thanksgiving we ever had,” he’d say happily, looking right at us kids. “Now, Mama, bring out the apple pie!”
My mother would hold back her tears until the pie came out, and then she would lean against Daddy and weep. He would pat her hair gently and smile. “Why are you crying?” he would ask her every year. “It was a million years ago. And look around — is anyone going hungry today?”
That cold Thanksgiving in 1945 was the last holiday his family spent together. By 1946, they would be homeless, separated, adrift. He would lose his brother Eddie in a car accident, and see his mother’s health deteriorate. He would spend a summer living under a viaduct in the park, and would protect his sister at a great personal cost. Later, he would survive high school with two pairs of jeans, eyeglasses he got from welfare, and a night job mopping floors.
He’d serve in the Air Force, and the GI bill would cover his college and law school tuition. He would marry that cute girl from high school — the one who was crying because she couldn’t open her locker — and their children would never know a moment of hunger or insecurity. Every year, he’d tell them about Thanksgiving 1945.
It’s been decades since my mother and Gram were here to laugh at Daddy’s story. I’m the one who makes the stuffing now, the one who giggles when he yells for the tail of the turkey. It is my daughter who retrieves the turkey bits for him, my nephews who give him his story prompts, my son who brings home purple-haired guests who might be angels. It is Aunt June, now in her 80’s, who sings Frank Sinatra at the top of her lungs and bakes the apple pie.
Nationwide Law Firm Faruqi & Faruqi, LLP Highlights the Key Differences Between Employees and Independent Contractors

While there are still some significant areas of concern, such as high youth unemployment and the precariousness and volatility of the emerging gig economy, the good news is that the U.S. unemployment rate has fallen to a 50-year low. In some industries, such as cyber security, life sciences, health care, and trucking, the availability of jobs significantly outstrips the supply of labor.
However, the bad news is that many workers across all sectors, fields, and levels of responsibility from entry-level to management are being misclassified as independent contractors instead of employees.
“It is important to note that some employers who misclassify workers are not doing so to achieve some kind of devious benefit or ill-gotten gain”, commented a representative from Faruqi & Faruqi, LLP, a national practice that focuses on complex civil litigation in the areas of wage & hour litigation, securities, merger and transactional, shareholder derivative, antitrust, and consumer class action. That said, many employers are deliberately flouting employment laws in order to avoid their full financial and administrative obligations. Fortunately, the courts have repeatedly demonstrated zero tolerance for these transgressions, and ignorance of the law is not a defense.
According to Faruqi & Faruqi, LLP, generally there are three categories of inquiry that determine whether a worker is an independent contractor or employee: financial control, behavioral control, and the type of relationship. Each area is further discussed below.
Financial Control
A common and incorrect belief held by some employers is that financial control of a worker begins and ends with how they are paid. For example, they assume that if a worker is paid through an invoice then they are automatically classified as an independent contractor, and if they are paid through a payroll process (e.g. direct deposit in their bank account every two weeks) then they are automatically classified as an employee.
Granted, this is one important detail of financial control that determines the classification, but it is by no means the full story. There are several other pieces of information that factor into the financial control equation, such as whether a worker may work for other businesses or clients at the same time (though not necessarily for competitors if a valid non-compete restrictive covenant is in place), and whether a worker is taking a financial risk and may incur a profit or loss.
A representative from Faruqi & Faruqi, LLP states that “independent contractors are autonomous business entities, which means they must be allowed to exercise certain functions and options that are unavailable to employees. For example, an employee may be banned from taking a second job or ‘moonlighting’ to augment their income. However, in most cases no such impediment — explicitly or implicitly — can be applied to independent contractors.”
Behavioral Control
Essentially, workers who are trained, managed, told when and how to work, and provided tools or equipment (even if these items are used off-site or in a home office) are likely classified as employees. Alternatively, workers who exercise a significant measure of control over their work process and who use their own tools and equipment are likely classified as independent contractors.
According to Faruqi & Faruqi, LLP, “understandably, there is some grey area with respect to behavioral control. For example, there are situations when it is appropriate for employers to provide independent contractors with some measure of training, such as familiarizing them with a new internal process or protocol.” In the same sense, there are situations when it is appropriate for employers to ask employees to use their own tools and equipment, such as in organizations that implement a bring-your-own-device policy. However, what matters is whether the employer exerts a significant degree of authority over how, where, and when a worker carries out their tasks. The greater the level of control, the more likely it is that a worker should be classified as an employee.
Type of Relationship
If a worker is entitled to benefits including (but not limited to) workers’ compensation, or if they perform activities that are central to the organization’s core business purpose, then they are likely classified as an employee. If a worker is not entitled to benefits, or if they are not performing core business activities, then they are likely classified as an independent contractor.
Faruqi & Faruqi states that a critically important fact that employers must understand and heed is that “it does not matter what a contract says, or what a worker may agree to explicitly, implicitly or tacitly. What matters is the actual, realistic factors of the relationship.” In other words, employers cannot defend a misclassification by saying that a worker signed a contract or agreed to a certain arrangement. The law is not circumstantial, and the onus is on employers to classify correctly. | https://medium.com/@faruqifaruqillp/nationwide-law-firm-faruqi-faruqi-llp-highlights-the-key-differences-between-employees-and-d817abc964e5 | ['Faruqi'] | 2020-03-03 14:29:10.674000+00:00 | ['Employees', 'Law', 'Lawyers', 'Business', 'Business Development']
Why Does Offshore or Nearshore Cost More than Advertised | Last week, I talked about why outsourcing your software development to an American-based company is a much better option than hiring offshore or nearshore software development companies. There are a lot of downsides to offshoring/nearshoring, and the only upside seems to be the lower hourly rate of hiring overseas developers.
Doing so is understandable, of course. You can hire three offshore developers for $20 per hour or you can hire one person in the U.S. for $60 per hour. Three developers are better than one, and you can build software faster for the same price. Right?
Unfortunately not. The $20 developers usually don’t have the same skills, knowledge, or experience as the $60 developer. More often than not, you end up with software that is riddled with problems such as being unreliable, hard to maintain, hard to extend, or failing to scale well. Many times the only thing that the $20 per hour team does faster is to fail to meet the expectations of the stakeholders; the $60 developer can avoid all of those problems from the start.
Years ago, I worked for another consulting company and nearly 80% of our work was fixing failed offshore engagements. It really reinforced my appreciation of the sentiment “garbage in, garbage out.”
Cheap software bears an enormous amount of intangible costs.
To begin with, you’ll have frustrated users. Cheap software is unreliable and crashes on a regular basis. If you’re lucky, it’s only your internal users who are frustrated. But, frustrated employees mean unhappy employees, and unhappy employees mean higher turnover and attrition. That, of course, equates to higher costs of recruiting, hiring, onboarding, and lost productivity. And if you’re unlucky, it’s your paying customers who are unhappy. Unhappy customers mean lost customers, and lost customers mean lost revenue.
Bad software can also lead to bad data. Imagine if you had systems that measured your factory’s output, rejects, inventory, machine runtime, and machine maintenance, but the data collected was wrong or allowed for corruption over time. Very important metrics such as production output or costs could be inaccurate, and you end up making bad decisions. Have you overcounted your revenue or undercounted your scrap and inventory? It happens, and sometimes it’s hard to detect until you are faced with a serious problem.
Then there’s the repeated time and money spent asking the low-priced developers to fix problems they created in the first place. Consider going to a proper steakhouse, such as Bones in Atlanta. You pay a premium for your favorite cut of steak, but you expect it to be done right every time. Contrast that with going to a cheaper restaurant and ordering a perfect medium-rare 12oz filet. You may get lucky and they may nail it. But chances are they won’t. Now imagine if you had to pay every time you sent it back to be recooked. By the time you get your steak delivered the way you want it, you could have just gone to Bones and gotten it done right the first time. And both you and your guests (the system users and project stakeholders (yes, pun intended)) would be quite a bit happier. This is how it can be with cheap software developers. You’re paying for time spent writing code, not the finished product, which means you’re paying for their ineptness and resulting mistakes.
Your time as a manager is very valuable. If you have to spend a lot of time going back and forth with the developers, communicating the problems, assessing the fixes, communicating more problems, squashing bugs, communicating once again, then that costs you a lot of money. And that is time and money you could have spent doing something else instead of being spent on the software developers that were supposed to be such a bargain in the first place.
Last, but certainly not least, are the ongoing and ever-present costs associated with poorly written code being difficult to maintain or extend. When it’s time to fix a bug or add a new feature, it could very well take twice as long or even ten times as long as it should take because the code is written so badly. This phenomenon is very real, happens often, and costs the business very real money.
So with all of that said, when you’re choosing an outsourced software development provider, consider more than just the hourly rate of the team or the project. It’s almost always going to be worth it to pay more upfront, knowing that you’re getting what you’re paying for. It’s worth it to get what you want and get it right the first time. Good software developers — those who earn what they are worth — have the knowledge and experience to build software systems that you will be happy with, from both financial and operational perspectives.
In the end, locally outsourced software developers are the better bargain. Costs are lower, mistakes are fewer, reliability and accuracy are higher, and customers are happier. And that all means more money on your bottom line.
To learn more about outsourcing your software development in the Atlanta area, visit the Elegant Software Solutions website or call us at (855) 449–4649.
Photo credit: Mr.TinDC (Flickr, Creative Commons 2.0) | https://medium.com/elegant-software-solutions/why-does-offshore-or-nearshore-cost-more-than-advertised-eeff73479550 | ['Tom Hundley'] | 2020-06-26 21:49:37.536000+00:00 | ['Outsourcing', 'Offshore Development', 'Management', 'Software Development'] |
Candlesticks — A dataviz marvel. No, this post is not about technical… | Image by Isaac Smith on Unsplash
No, this post is not about technical analysis of a stock. I am writing this to bring out the beauty of data visualization or dataviz as it’s fondly referred to in the data community.
Let’s look at an example.
As you know, stocks are traded on the exchange whenever buyers and sellers agree on a price. This could happen very frequently i.e. multiple trades in minutes or seconds. For the purpose of this example, let’s say there are tens or hundreds of trades of a stock happening at different prices in a minute.
You are an analyst interested in tracking the price of this stock over a period of time. What are your options ?
1. Line chart
The simplest one. You can simply plot the price on the y-axis over days on the x-axis as shown below.
Line chart of Apple stock (Source : www.seekingalpha.com)
While this is a good start, it only shows the closing price of the stock on any given day.
The above line chart will not convey any information about intraday movements of the stock. How can we represent this additional information? That’s where candlesticks are helpful.
2. Candlesticks
The chart below is a candlestick version for the same time period.
Candlestick chart of Apple stock (Source : www.seekingalpha.com)
Looks clunky, doesn’t it? Let’s zoom in a bit to the last two weeks only (ignore the bottom bars, those are volumes not prices).
Candlestick chart of Apple stock (Zoomed in)
Each daily candle conveys the following information about the stock price.
1.) Open/Close price — Horizontal bases of the candle
2.) High — Top wick of the candle
3.) Low — Bottom wick of the candle
4.) Color — an additional dimension indicating whether the stock closed higher than its open price (Green) or lower (Red). This means that for Green candles the lower horizontal base of the candle is the Open price and the upper horizontal base is the Close price (vice versa for Red).
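If you want to see how raw trades collapse into those four values, here is an illustrative sketch in plain Python (the trade data is made up):

```python
def to_candle(trades):
    """Collapse a time-ordered list of (timestamp, price) trades from one
    interval (say, a trading day) into a single candlestick."""
    if not trades:
        raise ValueError("need at least one trade to form a candle")
    prices = [price for _, price in trades]
    return {
        "open": prices[0],    # first traded price of the interval
        "high": max(prices),  # top wick
        "low": min(prices),   # bottom wick
        "close": prices[-1],  # last traded price of the interval
        # Color encodes direction: Green if close >= open, else Red.
        "color": "green" if prices[-1] >= prices[0] else "red",
    }

# Made-up intraday trades: (minute, price)
day = [(1, 101.0), (2, 104.5), (3, 99.2), (4, 103.0)]
print(to_candle(day))
# {'open': 101.0, 'high': 104.5, 'low': 99.2, 'close': 103.0, 'color': 'green'}
```

The same function works for hourly or minute candles; only the grouping of trades changes.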
While the above example shows candlesticks by day, they can be plotted by hours or even minutes.
Candlesticks patterns are used by traders to understand demand, supply and price discovery of a stock. Traders take positions based on these patterns (which is beyond the scope of this discussion).
The takeaway is that there are elegant ways of representing high-velocity data by layering it onto existing visualization options like line charts. Candlesticks are just one example. | https://medium.com/@daryani-harish/candlesticks-a-dataviz-marvel-ca7c4f289562 | ['Harish Daryani'] | 2021-07-05 18:13:50.364000+00:00 | ['Data Visualization', 'Candlesticks', 'Dataviz', 'Technical Analysis']
Thank you for bringing this perspective of the Great War to us. I had no clear understanding of the Austrian annexation; I never knew Austria elected a fascist government. I wonder how Austrians, then and now, feel about their President’s decision not to resist German occupation? It would most likely have been a futile effort, but as a point of national pride, there’s a case to be made for the other side as well.
I recently submitted an “extended version” essay on fascism which may interest you or your readers:
Keep on Writing! | https://medium.com/@patrickinca1/thank-you-for-bringing-this-perspective-of-the-great-war-to-us-a1c669cae496 | ['Patrick O Hearn'] | 2020-12-14 06:15:00.703000+00:00 | ['Austria', 'Fascism', 'Hitler', 'World War II', 'Mussolini'] |
A Crash Course in Lightning App Development | Before we get to the nitty gritty, a little background:
About a month ago, I left a steady job as a data architect to study and work on the rapidly growing Bitcoin ecosystem. If you’re here, maybe you’ve made (or are considering making) a similar leap.
Quite simply, I think Bitcoin is the most fascinating technical innovation I have ever seen. And yes, that includes the internet. The more I learn about Bitcoin, the more enamored I get with it. This technology has the potential to transform and disrupt so much about how we transact today — for the better — and I want to be a part of that change.
That is why I chose to build on Bitcoin. My hope is this post will help educate developers who are not familiar with the Lightning ecosystem and its possibilities, and encourage more creative engineering in the space.
Honestly, there is so much about the Bitcoin ecosystem that I’d love to write about eventually (the concept of energy-backed money being at the top of that list), but we’ll save those discussions for another day. Right now, it’s time to start building.
Literally the worst font I could find
The Architecture of Lightning
The basic design of the Lightning Network is a bunch of nodes connected to one another via channels. There are set amounts of bitcoin committed to those channels, which we record as a transaction on the main Bitcoin blockchain. Then, using special, off-chain contracts, we’re able to move those satoshis around between nodes without having to commit a new transaction to the main chain.
We do this by keeping track of the balances in channels as transactions take place. For example, if you have 100 sats committed to a channel with Alice’s node, you have about 100 sats (minus fees) you can spend with either Alice directly or anyone else you are able to connect with through Alice. These payments are near-instantaneous because all we have to do is adjust the balance of the channel with every transaction. This happens with the help of encrypted messages that travel across the network between sender and receiver.
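To make that bookkeeping concrete, here is a toy sketch in Python. It is purely illustrative: it models only the balance arithmetic (none of the real commitment-transaction or HTLC machinery), and the peer names are made up.

```python
class Channel:
    """Toy model of a Lightning channel: a fixed total capacity whose
    spendable balance shifts between the two peers with each payment."""

    def __init__(self, peer_a, peer_b, a_sats, b_sats=0):
        self.balances = {peer_a: a_sats, peer_b: b_sats}

    def pay(self, sender, receiver, amount_sats):
        # A payment only rebalances the channel; the total never changes.
        if self.balances.get(sender, 0) < amount_sats:
            raise ValueError("insufficient spendable balance on sender's side")
        self.balances[sender] -= amount_sats
        self.balances[receiver] += amount_sats

# Alice commits 100 sats toward Bob; initially only she can spend.
ch = Channel("alice", "bob", 100)
ch.pay("alice", "bob", 60)  # alice: 40, bob: 60
ch.pay("bob", "alice", 10)  # once Bob has a balance, he can pay back
print(ch.balances)  # {'alice': 50, 'bob': 50}
```

Notice that Bob can only pay Alice after some balance has moved to his side, which is exactly the liquidity question discussed below.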
Lightning applications operate against this so-called second layer of Bitcoin, the Lightning Network. This enables a developer to build an application that can operate using bitcoin as a payment method but with rapid settlement and lower fees.
Now, this is a vast oversimplification of the Lightning protocol, but I think it’s enough to start. If you want to understand in more detail how LN works, here are a few good sources. But be mindful that the rabbit hole goes deep, and if you’re not careful you will be reading about the ChaCha stream cipher family at 1 am with a code editor sitting empty alongside your browser. Not that that’s necessarily a bad thing, but we do want to build something eventually. And also sleep.
An even better way to learn about LN and how nodes work, in my opinion, is to set up a node yourself and figure out how to route payments through it. This takes a little more time and money but it can be fun as well as educational.¹
The Implementations
Lightning is an open protocol, which ultimately means there is a standard way to communicate and interpret messages over the Lightning network. In the same way that no one owns HTTPS or TCP/IP, no one owns the LN protocol and anyone can participate over LN as long as they follow the standard. Not only that, the standard continues to evolve as developers try to harden the protocol and add new features.
There are four companies at the heart of developing the protocol today: Lightning Labs, Blockstream, ACINQ, and Square Crypto. Each has their own implementation of Lightning:
Lightning Labs — lnd (Go)
Blockstream — c-lighting (C)
ACINQ — eclair (Scala)
Square Crypto — Rust Lightning (Rust)
All of these implementations are able to communicate with one another over LN. The differences lie in their individual API’s. This guide will focus solely on Lightning Labs’ lnd because that is what I started with and know best. It was a bit accidental in terms of how I ended up using lnd, but to their credit, the team’s documentation is pretty good and their Slack channel for developers has been great for getting help and support.
So be aware the rest of this guide will be lnd-specific, but the general concepts should apply to any Lightning implementation.
Your Development Environment
Two words: Use Polar.
Ok, a few more words. The biggest hurdle for anyone doing anything on Lightning right now, whether you’re a developer or a user, is getting your backend set up. Now I’m not talking about just a wallet, for which there are plenty of fast and easy custodial solutions. I mean a true backend — a node running Bitcoin and Lightning.
I hope the big mouse is OK with this.
The reason for this is your application needs to have certain permissions on a node in order to do important things like create invoices and watch for payments via API/gRPC calls. While this is possible if someone gives you access to their node, it would be good to have your own.
In order for a node to be functional for the purposes of Lightning development, it has to have 1) a running Bitcoin process with an up-to-date copy of the blockchain and 2) a running Lightning process with some open channels to send and receive payments through.
Over an average, residential internet connection, the blockchain sync alone can take several days or more. There is a lite, abridged version of Bitcoin you can run called Neutrino that can work, but I’ve also heard it can cause problems when trying to develop on Lightning. I’ve never worked with it personally. Either way, for someone just learning these tools I figure it’s best to remove variables that could cause unpredictable side effects, so just stick with the full blockchain if you can afford to.
Once we have the Bitcoin layer sorted, then there’s the Lightning layer. As I mentioned, we need channels in order to transact over LN. While it isn’t too difficult to open channels on Lightning, it not only requires some planning and coordination, but also requires bitcoin. You could, of course, commit some Bitcoin to open some channels and start playing around with transactions. But there’s no reason to take on such financial risk.²
Forget all that for now. Polar is an amazing tool that allows you to run a simulated version of all these processes in Docker containers on your laptop. It’s even got a slick UI that visualizes the network for you.
Look, Ma! No waiting for sync! (Polar screencap)
What’s great about Polar is that all you really need to do to switch your application to a testnet or mainnet backend when you’re ready is update a few configuration settings. I’ve been able to flip between local, testnet, and mainnet backends easily by just commenting out/in a few lines of code in my application (I will get to that .env file eventually. We’re not here to judge now, remember?).
Polar also supports lnd, c-lighting, and eclair nodes, which means you have the flexibility of trying other implementations if you like.
So to reiterate: Just use Polar. It’s a fast and easy way to get started, and who knows? Maybe you won’t even like developing on Lightning. Might be good to find that out before spending a lot of time and money on a system you’ll never use. On the other hand, if you find yourself listening to Stephan Livera podcasts nightly as you drift off to dream about preimages, you can set up a more serious dev environment.
Setting Up Polar
We can create our simulated Lightning environment in Polar quite easily by using the “Create Network” action. Name your network whatever you want (“test” is pretty clever, I think) and add 2 lnd nodes and 1 Bitcoin Core node. That’s really the minimum you need to start, and you can always add more later. Once you’ve created the network, start it up. The first time you run through this may take some time as you’ll have to download all the Docker images. Oh yeah, by the way, you’ll need Docker on your machine. Like I said before, we’re flying by the seat of our pants here.
After all that you should have a screen in Polar that resembles this:
In the beginning…
As you can see we’ve got Alice and Bob floating in space with their lnd nodes and both are hooked up to a Bitcoin Core backend process. Also note we’re at a block height of 1. Our very own Genesis block! Unlike a live blockchain, Polar only mines blocks when we need to commit transactions as we develop and test.
The next step is to create a channel between Alice and Bob so we can make Lightning payments. If we click on Alice’s node, the control panel on the right changes to show the different ways we can interact with this node. Under the “Actions” tab we’ll find all the tools we need to fund Alice with some simulated bitcoin and subsequently open a channel between her and Bob. So let’s deposit some funds to Alice and once that’s done, open a channel with Bob.
Click “Deposit” and proceed with putting 1M sats in Alice’s wallet.
If everything went according to plan, you should see the block height change as well as Alice’s balance. If not, seek help. It’s not your fault.³
Now we can open a channel between Alice and Bob. Since Alice has all the funds right now, we can simply open an “Outgoing” channel with Bob. This basically means Alice is committing a certain amount of bitcoin to spend with Bob through this channel, and they can use this channel for multiple transactions as long as there is a balance remaining. Note, however, that immediately after the channel opens, only Alice has the ability to pay Bob because the spendable balance is all on her side. That can change, of course, after Alice pays Bob.
This management of money in channels in order to maintain the ability for people to transact is what we mean when we talk about liquidity in Lightning. Practically speaking, users can only spend what is committed in all the channels across the network. Taking that a step further, a single payment can settle only if there is enough money in all the channels between the payer and the receiver (money going in the right direction, additionally). Otherwise that payment will fail. The management of and workarounds for the constraint of finite liquidity in LN is a topic in and of itself.
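To see the liquidity constraint in miniature, here is an illustrative Python sketch (node names and balances are hypothetical): a payment can settle only if every directed hop along the path has enough spendable balance.

```python
def can_route(path, amount_sats, spendable):
    """path: node names in order, e.g. ["alice", "carol", "bob"].
    spendable[(a, b)]: sats that a can currently push toward b.
    The payment settles only if every directed hop has enough liquidity."""
    return all(
        spendable.get((a, b), 0) >= amount_sats
        for a, b in zip(path, path[1:])
    )

# Hypothetical two-hop route: alice -> carol -> bob
spendable = {
    ("alice", "carol"): 80,
    ("carol", "bob"): 50,
    ("bob", "carol"): 20,
}
print(can_route(["alice", "carol", "bob"], 40, spendable))  # True
print(can_route(["alice", "carol", "bob"], 60, spendable))  # False: carol -> bob has only 50
```

Direction matters: even though Carol and Bob share a channel, Carol can push at most 50 sats toward Bob, regardless of what sits on Bob's side.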
So opening channels in Polar… this is, admittedly, the place where I run into issues sometimes. I suspect it’s related to a state discrepancy between the UI and the backend, but I’m not sure. Stopping and restarting the nodes (or the entire network) sometimes helps. Quitting Polar entirely and restarting can do the trick, too. Also, in times like these, it’s amazing what a 10-minute walk outdoors can do for the mind and spirit.
Regardless of the reason, I’ve found the best way to run these node operations is via the CLI that Polar provides. While the UI might not always reflect the changes, we can be certain the database (so to speak⁴) is accurate by querying it directly. Polar makes this easy, so let’s do that for now.
Making sure we have Alice’s lnd node selected, let’s go to the “Actions” tab and click on “Launch” under “Terminal”. This should open a command prompt like so:
ASCII art circa 2020
Now we can use lnd’s lncli tool to open channels, create invoices, and pay them. First, let’s try running the following to get a lay of the land:
lncli --help
To open an outbound 100,000 sat channel between Alice and Bob, we can use the following:
lncli openchannel --node_key <bob's public node key> --local_amt 100000
You should see a response with a “funding_txid”. This corresponds to the transaction Alice and Bob are broadcasting to our simulated Bitcoin blockchain to commit the funds to our channel.
Try running the following to see our new channel:
lncli listchannels
If the list the command returns is empty, try mining a few blocks in the Bitcoin node (it’s under the “Actions” tab). This should help confirm the transaction and open the channel.
We can now see via the listchannels response that we have a channel with Bob and it should have a local balance of close to 100,000 sats (minus transaction fees). This local balance is what Alice can now use to pay Bob over Lightning.
The Payment Flow
As of today, the simplest (and I believe most common) payment flow over Lightning is via invoices. An invoice is essentially a set of payment instructions with two key components: how much and to whom. There are other parameters and variations on this theme that allow for some interesting possibilities (like hodl invoices and the BOLT12 proposal) but we will focus on this most basic of patterns here.
Continuing with our Polar setup, we start by creating an invoice. But this time, we’re going to use Bob’s node, so launch a Terminal for Bob and try running the following:
lncli addinvoice --amt 100
The command above creates an invoice for 100 sats (I actually ran into a connection error at this point while running through the steps. Try stopping and restarting Bob’s node if you have a similar issue). We can see via the response a bunch of info about this invoice:
{
"r_hash": "7d91cafaba85b6086924142dfd890f350eb53b17b80e2993d0a2ce5ccc7252f1",
"payment_request": "lnbcrt1u1ps3lu04pp50kgu4746skmqs6fyzsklmzg0x58t2wchhq8zny7s5t89enrj2tcsdqqcqzpgsp55rtlzlf5rt0z5zg34nc2rlcm9mw6nd77x45r85z6zp07qumphr7q9qyyssqzrvxdlsluaeu7esscvv8skcmaly4794j7pg9ytapmn50uukezf4xpqma9758s39wpn4pwk475dztezg4tff8xpylksl4mww57q8hj7cq7s7222",
"add_index": "1",
"payment_addr": "a0d7f17d341ade2a0911acf0a1ff1b2edda9b7de356833d05a105fe07361b8fc"
}
We’re only going to focus on the “payment_request” portion for now, because that piece of data contains everything we need for Alice to pay Bob — namely, the payment amount and where the money is going.⁵
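As an aside, you can read the network and amount straight off the front of that string: it is a bech32-encoded BOLT11 invoice, so the human-readable prefix "lnbcrt" means regtest and "1u" means 1 micro-bitcoin, i.e. the 100 sats we asked for. Here is a rough sketch of a prefix parser in Python (not a full BOLT11 decoder; it ignores the data part and, for simplicity, the "p" pico multiplier):

```python
import re

# BOLT11 network prefixes (checked longest-first so "lnbcrt" wins over "lnbc").
NETWORKS = {"lnbcrt": "regtest", "lntb": "testnet", "lnbc": "mainnet"}
# Multiplier -> exponent such that msat = amount * 10**exp (1 BTC = 10**11 msat).
MSAT_EXP = {"m": 8, "u": 5, "n": 2}

def parse_invoice_prefix(payment_request):
    """Return (network, amount_sats); amount_sats is None for
    amount-less invoices. Only inspects the human-readable part."""
    # The bech32 data charset contains no '1', so the last '1' separates
    # the human-readable part from the data part.
    hrp = payment_request.rsplit("1", 1)[0]
    for prefix in sorted(NETWORKS, key=len, reverse=True):
        if hrp.startswith(prefix):
            rest = hrp[len(prefix):]
            if not rest:
                return NETWORKS[prefix], None
            match = re.fullmatch(r"(\d+)([mun])", rest)
            if match:
                amount, mult = int(match.group(1)), match.group(2)
                return NETWORKS[prefix], amount * 10 ** MSAT_EXP[mult] // 1000
            break
    raise ValueError("unrecognized payment request prefix")

# Bob's invoice from above (truncated; only the prefix matters to this sketch).
print(parse_invoice_prefix("lnbcrt1u1ps3lu04pp5"))  # ('regtest', 100)
```

For real applications you would let lnd do the decoding (lncli decodepayreq, or the corresponding API call), but it is useful to know the encoding is readable.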
If we flip back to Alice’s node terminal, we can take the payment request and pass it in as an argument to the following command:
lncli sendpayment --pay_req <payment_request>
And the result is:
Success.
Choose Your Own Adventure
I used to make diagrams like this for work. I still do, but I used to, too.
At this point, you should have the basic knowledge and tools to start building something. The diagram above outlines a simple example application that uses the same API calls we used to create and pay an invoice in Polar. It’s really all you need for a bare bones app. Of course, this is just one example, and there are lots of other concerns we can pile on as we develop — these are only starter blocks — but you can figure out what you need and how to make it along the way.
Some additional tips and references:
There are a lot of libraries out there meant to help developers write less boilerplate code and cut right to the action. For me, learning how to use these libraries created more frustration than efficiency. That’s more on me than the libraries. Abstractions are great if you have a basic understanding of what is being abstracted (now that’s a t-shirt!), and I did not have that understanding when I started. Personally, I found the API docs for lnd the easiest to follow. Once I used this guide to configure a gRPC client for Javascript, I was off to the races.
To see an example of a more fleshed out application, check out this excellent tutorial on Lightning Labs’ Builders Guide. I would recommend this tutorial if you’re familiar with the tools they use in it: express, mobx, and React. If you’re not familiar with these tools, you may not get as much out of the tutorial but you will still learn something. One thing I really like about it is it showcases a few interesting features that Lightning (and cryptographic proofs in general) can enable in an application.
Finally, please send me any feedback or questions about this post that might help improve its clarity and/or accuracy. To a point, of course (see first sentence).
Thanks for reading.
Let’s go. | https://medium.com/@rheedio/a-crash-course-in-lightning-app-development-5be5b8d2d558 | ['Michael Rhee'] | 2021-09-16 13:38:07.080000+00:00 | ['Development', 'Lnd', 'Bitcoin', 'Lightning Network'] |
Merchants and Madness | ‘Genius in a Bottle’ Prompted Writing Challenge
Merchants and Madness
GiaB writing prompt #5
Photo by Tupungato on Shutterstock
Browsing through the stalls,
a haggling cacophony
overwhelms my peace.
Shopping is our chance to wonder and dream. To imagine ourselves in different clothes, or picture various new elements in our homes. It’s where we as an imaginative species co-mingle with others and take part in group dreaming about possibility.
The Challenge
We invite writers to produce a piece of poetry, fiction or non-fiction on the theme of Shopping.
The Guidelines
We cannot publish pieces or reward writers who self publish or publish at another publication, so those pieces have to remain external to the challenge. Only pieces submitted to Genius in a Bottle will be considered for the challenge. If inviting other writers from outside the publication, please ensure they are aware of this.
Poetry is to be limited to 30 lines but can be in any style.
Fiction and non-fiction submissions are to be capped at 750 words.
Please refer to the prompt in the subtitle. Feel free to copy this as a template: GiaB prompt # [insert prompt number and theme here].
When submitting, please ensure that one of the 5 tags is GiaBprompt.
Please ensure a second tag is Poetry, fiction, or non-fiction, as appropriate.
We would like to become exposed to writers and pieces that you have enjoyed in all of Medium. Please tag up to ten writers whose pieces you have enjoyed recently, or who you feel may enjoy participating in this challenge.
For further information pertaining to the challenge, please refer to the rules and guidelines.
And away we go. We look forward to enjoying the paradise of the written word with you all.
Victor Sarkin and Chirag | https://medium.com/genius-in-a-bottle/merchants-and-madness-b44e8c6da69 | ['Victor Sarkin'] | 2020-10-14 15:02:56.684000+00:00 | ['Archived Prompts', 'Poetry', 'Haiku', 'Giabprompt', 'Writing Challenge'] |
The Cooperative Data Commons. We’re very excited here at Ledgerback… | About LedgerbackØDCRC
Established in 2018, the Ledgerback Digital Commons Research Cooperative (LedgerbackØDCRC) is a nonprofit cooperative association and distributed p2p network for unifying the study of the internet and society and fostering collaboration between stakeholders to advance towards a global technological commonwealth.
Our research approach is an inter/cross-disciplinary approach, with the goal to eventually employ an anti/antedisciplinary approach as we continue to grow.
Global Technological Commonwealth
A global technological commonwealth (as summarized here) is a sociotechnical imaginary (i.e., a vision of the future) that “consists of post-capitalist society where communities of mutual interest cooperate in the construction of institutions of regenerative economic relations” [1]. The technological design principles include:
“incorporating planetary boundaries,
modelling on natural biological ecosystems,
enabling the redefinition of value,
enabling radically democratic coordination and governance, and
allowing for the growth of a cooperative commons as the desirable future” [1].
For more information on the global technological commonwealth (and to get some background), we recommend reading Dr. Sarah Manski’s article, Distributed Ledger Technologies, Value Accounting, and the Self Sovereign Identity.
Areas of Interest
Our areas of interest include, without limitation:
Web 3 technologies (blockchain, pubs, secure scuttlebutt, fediverse, smart contracts, etc.) Collaborative economy (platform ecosystems, business models, platform capitalism, platform cooperativism, ownership economy, p2p/commons, digital labor, social contracts, etc.) Future of work (open value accounting, peer production, self-management practices, digital organizations, etc.) Digital Infrastructure (internet service providers, hardware, mesh networks, machine-to-machine economy, Internet-of-Things, etc.) Data science and ethical AI (AI/ML, human-in-the-loop AI, data analytics, algorithmic policy, algorithmic governance, etc.) Information privacy and security (data stewardship, cybersecurity, privacy-by-design, zero knowledge proof, cryptography, etc.) Knowledge Commons (notetaking tools, knowledge repositories, decision-making models, decision analysis, collective intelligence, swarm intelligence, EdTech, etc.) Metascience (open science, citizen science, science funding, bibliometrics, publishing, etc.) Personal Data or Digital Identity economy (data stewardship, data monetization, self-sovereign digital identity, decentralized identifiers, digital identity, decentralized identity, data privacy, data cooperatives, data trusts, etc.) Open Finance (e.g., alternative currencies, timebanking, community currencies, decentralized finance, prize-linked savings accounts) Complex systems (game theory, mechanism design, dynamic systems, simulation, etc.) Cryptoeconomics (bonding curves, cryptoprimitives, complex systems, peer prediction, schelling points, tokenization, etc.) Sustainability (circular economy, renewable energy, community-owned utilities, etc.) Science and Technology (how science and technology interact with society positively and negatively, and how the relationship between them can be changed for the social good)
Problem Statements
Some example problem statements we are investigating are described in the following articles:
Currently, LedgerbackØDCRC is run by volunteers (and we thank them for all their effort!).
Membership Benefits
Our primary stakeholders or intended beneficiaries of our membership are investigators (scholars, researchers, academics, activists, makers, technologists, etc.), practitioners, citizens, and our staff (the people who make LedgerbackØDCRC run!) .
The benefits we provide or plan to provide to our members includes:
online portal (email included) cloud infrastructure and interactive computing infrastructure combining resources mapping, data analytics, knowledge tools grantwriting support fundraising support sharing experiences publishing support research assistance networking offline and online informing members of opportunities providing resources
We do not have a membership fee (no need to pay $2,500.00 to join our community) but we do have annual fees ($50.00/year or provide 40 hours of time to cooperative-directed activities) to keep the cooperative operational.
Join us via the form below or send an email to [email protected].
Describing LedgerbackØDCRC
The LedgerbackØDCRC is best understood as multi-purpose cooperative (we don’t fall neatly into a category 😖) that can better be described by its functions (or really a mix of a foundation, ecosystem and a research institute):
Research Institute: We produce original research (basic, applied, empirical) and analyses on the internet and society, formulate models, tools, and designs and practices, grow a body of knowledge on the internet and society with an emphasis on how to transition towards a global technological commonwealth, develop prototypes, open source software and proof-of-concepts, and run citizen science projects. Data Cooperative: We produce and analyze datasets, trends, and other areas of interest by collecting publicly available data or curating data from our members or participants in our projects, and offer our analyses and datasets to the general public and interested parties. Foundation: we support efforts to advance towards a global technological commonwealth, hosting events and workshops, hosting distributed communities, and acting as a host for the greater Ledgerback ecosystem. Observatory: We monitor progress among the many sociotechnical ecosystems Academy: We produce open source educational materials and help others find and take courses on the internet and society, and develop the skills needed to cause transformational change towards a global technological commonwealth. Distributed community: We work together with people all across the world online to build a knowledge commons and provide resources to those who need them.
Supporting the LedgerbackØDCRC
You can support us in many different ways including: | https://medium.com/ledgerback-blog/the-democratized-data-commons-93f825b576bb | [] | 2020-12-25 23:42:06.163000+00:00 | ['Platform', 'Decentralization', 'Technology', 'Data', 'Blockchain'] |
The mind | ‘’Everything has its wonders, even darkness and silence, and I learn, whatever state I may be in, therein to be content’’
-Hellen keller
Chapter 1–3 states of mind~
Just like the states of matter our mind occur in different states of thoughts. These thought occur through different activities happening in and around us. These different states of mind or thoughts we be in
affects our lives. You must be wondering ‘this seems to be a big deal?’ Indeed it is. That’s why the most important part is to control the numerous thoughts and chemicals bursting in our head.
Well most of you may have taken some decisions in your life which you’ll regret forever. But that’s OK it happens to everyone at some point of time when you’re not able to control you’re thought’s.
So, the 5 Whys is a basic root cause analysis technique used in the Analyze phase of the Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control). To solve a problem, we need to identify the
root cause and then eliminating it.
Most of you didn’t get that right? Or maybe you did. So Here’s the solution, First of all you should know what you’re looking for I mean to say ‘’what’s the state of your mind right now?” Are you happy?
Are you sad?…..Just think of all those emoji’s that you use all the time! They best describe your state of mind.
After you’re done with that then you need to question yourself ‘’Is my current state of mind gonna help me get out of the situation I’m held in?’’ Once you have the answer then do ‘’two things’’
First of all give yourself some time to think and once you’re stable like Carbon you’ll be able to control yourself but as you know there is always some exceptions! Let’s see an example.
You’re in someone’s funeral ‘’its okay to cry’’ right? But imagine the same situation in front of your Boss? Do you think it will help? Yeah, maybe upto some extent but its more sort of awkward.
Through all these what am I trying to convey is that whatever that happens with you is what you did and what you did is what you thought and what you thought you did it as per you’re state of mind.
You should be aware of yourself ‘’the true you’’ before doing something based just on the state of your mind cause you’re mind always thinks the worst case scenario to be in play which always is not true at
all.
One most popular question that arises among the homo sapiens ‘’should we listen to our heart or our mind?’’ First of all you’re heart is just a blood pumping organ in you’re body. In order for your
brain to think, you need nerve cells that can detect information about the outside world and can transmit that information to other nerve cells.
It’s the transmission of information, the cells talking to each other, that’s the fundamental physical basis for how thinking works.
Well that’s a bit scientific side of ‘’How it works’’ But I respect your emotions looking towards the ‘’love’’ side of you. It won’t change anything at all you’ll still think through you’re brain.
But Things such as anxiety, depression, stress, etc have become a common problem among our generation. It all can’t be cured only through the ‘’DMAIC’’ technique.
So we’re again at a dead end:( But don’t worry that’s why I’m here. We’ll discuss it as we further understand our mind.
Lastly I’ll end with “Your strongest muscle and worst enemy is your mind. Train it well’’
Chapter 2-The vast field of thoughts~
We are discussing the mind right? Then you should see this ‘’You might have only a few gigabytes of storage space, similar to the space in an iPod or a USB flash drive.
Yet neurons combine so that each one helps with many memories at a time, exponentially increasing the brain’s memory storage capacity to something closer to around 2.5 petabytes (or a million gigabytes).’’
That’s shocking right? If I start typing from this point of time then it would take me 100 years to conclude everything here in this book it would exceed all the books available on the planet. You know why?
Because all these books have the thoughts of all the authors and poets ever born. And that’s what we’re discussing right now-’Thoughts”
So how do we conclude everything in just one lesson? Well its easy we here are only going to discuss the problems that occur while you’re processing things and how to cure them.
Firstly as we studied in the first chapter some methods to do that but we ended up at a dead end. So what now? Let’s see through an example.
(Second person perspective) I woke up in the morning took a shower and had a healthy breakfast as it was Sunday I decided to take a nap in the sun, So I went out in my garden as I was relaxing
on the comfy couch I heard a gun shot! I woke in surprise and adrenaline rushing throughout my body I started imagining the worst case scenario. My heart started palpitating and I was sure it was a gun
shot!
The above scenario is the same thing that I am(Narrator) trying to explain. It is generally observed that people tend to imagine the worst case scenario and this happens when they are not able to process
all of it in their mind and it causes stress and anxiety.
So now we are finished with the discussing the problems and how it occurs. But wait How to cure them or eliminate them completely Using the “DMAIC”? No as I told you some things can’t be cured with
a practical mind set. Just by considering that ‘’Our mind is like a machine’’ we can’t reach a solution. UGH another dead end:(
Chapter 3 — End of process
What are some types of mental disorders?
Anxiety disorders, including panic disorder, obsessive-compulsive disorder, and phobias.
Depression, bipolar disorder, and other mood disorders.
Eating disorders.
Personality disorders.
Post-traumatic stress disorder.
Psychotic disorders, including schizophrenia.
Well you should know I’m no doctor not even a psychiatrist then you must we wondering how this guy with no degree can help us?
Well I can’t help you I can only guide you just the way I did. Well that’s gives you a clear picture why I’m telling you all this. So People have many ways to deal with these situations. Some of them just
require some love, patience, peace of mind, etc. But that’s in the case when you have someone to help you the case in which ‘’You have support’’ but for those who are all by themselves. What can you guys do?
Well you have to confide in yourself, Be confident, trust yourself. You need to make your mind think that you’re more powerful than it is. Well you must say this all seems very easy in words but when
it comes to practical I’ll back off. I won’t because I have been through the same and I’ve SURVIVED.A teenage person deals with different problems as compared to an adult but both are equally competent.
Your problems may not be the same as others but that doesn’t mean that their problem is bigger than yours. Secondly, the most important thing that the mind hates when it’s not able to process is when
more and more things are added to process. For example opening 20 tabs on your browser will cause more lag and trouble than using only one tab for your work.
That’s why when dealing with certain issues the most important thing is to think only about your present situation and its gravity. Adding more things will affect the efficiency of the process.
For example a person dealing with financial stress may be in a better condition than a person suffering from both financial issues and PTSD.
One another efficient way of dealing with these issues is to divert your mind. Your mind also requires rest to prevent it from OVER-HEATING! That could be more dangerous and increase the density of the
matter. Yet another way of dealing with such sort of situations is to do what ‘’you love to do’’ for example-I personally preferred to listen to songs and play my Ibanez electric guitar. Well this is something
that is very specific to me but to you guys it may be different. So how would it help to treat your issues? Answering this question seems to be like answering whether the cat died or not in Schrödinger’s cat.
Yet I assure you that these methods are really helpful as it’s all practical based knowledge.
Some people tend to be depressed because they don’t have anyone to confide in. Well you see we have reached a point for which you need to look at the upcoming chapter:)
Chapter-4 You’re not alone
Well hey there! Do you know? The major levels of organization in the body, from the simplest to the most complex are: atoms, molecules, organelles, cells, tissues, organs, organ systems, and the human organism.
Indirectly I am telling you that you’re never alone your own body is a lot of atoms combined together and these atoms make you up just like in the movie Spiderman! Remember the sand man? Well that guy
is made up of sand grains aha! Well jokes apart let’s get to our lesson.
Remember the first quote that I gave at the very beginning? Exactly! Darkness, coldness, emptiness, silence all these are signs of being alone and depressed. Well now the important factor that can save you
from this is none other than the state of your mind well I’m not talking about the emoji’s this time I am talking about the positive and negative state of mind.
Once your weakness is your strength and you are positive about it then only you are able to succeed.
Now but how exactly in the most layman terms we’re able to do that? Well it isn’t quite difficult all you need to do is to utilize this current time. Wait how is this relevant to our case scenario?
Well it is because most people in this state of mind tend to think of the perfect future and past they want to have or they used to have. Well this isn’t going to help at all it will make it worse for your mind.
All you need to do right now is to think about the present not the golden days of your life. You know why? Because they are yet to come! How am I so sure ?Well because every person before reaching that golden
days of his life has to see the harsh truth of life and if you’re in that stage of life then you’re time is absolutely yet to come.
So all this is theoretical talks but we aren’t here for that right? We are here for its practical use in our life.
I told you in the beginning ‘’utilize your time’’. Well but how? Yeah, that’s the question “how”. You all might must have heard of ‘’Anne Frank’’. Yeah one of the most famous holocausts victims. And her own personal
diary she called “kitty”. An extract from her diary says that “Let me put it more clearly, since no one will believe that a thirteen-year-old girl is completely alone in this world. And I’m not. I have loving parents and a
sixteen-year-old sister, and there are about thirty people I can call friends. I have a family, loving aunts and a good home. No, on the surface I seem to have everything, except my one true friend. All I think about when
i’m with friends is having a good time. I can’t bring myself to talk about anything but ordinary everyday things. We don’t seem to be able to get closer, and that’s the problem. Maybe it’s my fault that we don’t confide
in each other. In any case, that’s just how things are, and unfortunately they’re not liable to change. This is why I have started the diary”
Well now you all must know what I am trying to convey. No not to be anne frank but to learn a lesson from her. Even after no one to confide in she found her diary to be her best friend which is a good way to not feel
alone and depressed. And one more important thing that we realize after reading that extract is that no matter how alone you are instead of wasting your time on seeing beautiful dreams ‘’work on them’’. Anne Frank
still is one of the most remembered person after so many years.
Well now I’ll end the chapter with a quote “Loneliness adds beauty to life. It puts a special burn on sunsets and makes night air smell better’’
Henry rollins~
Chapter-5 Cessation
For all those people who’ve been with me till here their head must be bursting with a lot of questions or maybe some who might have understood me completely. Yeah I get that this might be a little confusing
or maybe not. But I need to conclude the lesson and for that I need your help stay with me I hope all you’re doubts must be cleared after this.
So what was the main motive of this article? To attract people towards mental health, to make people aware about their own mind, to tell them how much is the storage of their brain? No it wasn’t any of this in specific.
The article had one and only one motive and that was to make people understand their capabilities. But what is that supposed to do with this particular article? Yes it does. All these chapters deal with the mind which
is the specific organ that allows us to think and what we think, how we think, how much. All these allows us to broaden our mind take decisions with extra precautions and “how to fix the brain lag”. That’s
quite funny isn’t it? Oh I am talking about the “lag” thing.
Well some of you still might have doubts well when you look at each chapter in specific they have different ideologies but in all together as a complete they are the same.
And of course the ending…. “The brain’s memory storage capacity to something closer to around 2.5 petabytes that means a lot of memory, find your own quote”
-Rudra Jaydeokar | https://medium.com/@jaydeokarwritings/the-mind-a48f3132f364 | ['Rudra Jaydeokar'] | 2020-12-22 17:28:34.230000+00:00 | ['Life', 'Quotes', 'Self Improvement', 'Mind', 'Reality'] |
6 Things Young Adults Shouldn’t Purchase | 6 Things Young Adults Shouldn’t Purchase
1. Car
Don’t buy a car unless you absolutely have to. RIP those who just bought a car right before the pandemic lockdowns. Cars could cost upwards of 10k+ a year for young adults. Car insurance is a big factor to consider especially if you live in the city. As a young adult in my early 20s if I were to drive I would be looking at 6k-8k a year for car insurance in the city. Plus, why would you need to drive a car if you live in the city? There should be a great transit system available. Car repairs can also break your wallet. They said to budget $100 per month or $1200 per year for car maintenance. That’s expensive for us who have just started our careers and aren’t in great financial shape. Another reason to not buy a car is that before the age of 25, your brain hasn’t matured yet. The frontal lobe is responsible for impulsivity control and isn’t fully formed until age 25. Having an immature brain behind a wheel can be dangerous which is why insurance is higher for us. Think about all those people who aren’t driving in the city. In my city, it’s 33% who don’t own a car! Join the group, don’t own a car while you’re still young as cars are liabilities. They will only make you poorer!
2. House
I know some of you are thinking about becoming a house owner. However, it still requires a lot of consideration even if you could afford it. Think about where you want to live five years down the road. The general rule is this, if you aren’t going to live in the same place five years down the road, don’t bother buying. You might think that the property value might go up so you’ll be raking in if you do decide to sell before the 5-year mark. Think again. There’ll be a closing cost when you buy and sell the property. Every year, there’ll be property taxes, mortgage, maintenance costs, utility bills, etc. For the first five years, you would be paying a lot more of the interest rather than the actual mortgage. You might end up losing money if you sell too early. If you’re thinking about maybe renting it out while you get another mortgage for another house? You may be able to afford less mortgage for your second property. Think about the location, sometimes it’s hard to predict but when something major happens it could drop the value of the house drastically. Something major could be, a murder nearby, a shooting, closed-down stores leading to lots of unemployment, etc. We can’t predict many of these major events but it’s something to take into mind when purchasing a home. Young adults tend to have more unpredictable career trajectories so don’t settle down too early as of yet.
3. Expensive furniture
Trust me, furniture gets broken during the moving process. Before you own a house, don’t purchase expensive furniture. Even if you do own it, there really is no need to purchase high-end furniture when you could be investing the money.
4. Expensive or frequent meals at restaurants
Dining out frequently can be expensive in the long run. Instead, learn to cook healthy meals for yourself. It makes you seem like a more attractive romantic partner and you get to save money.
5. Expensive clothes
There’s no need for expensive clothes for most of us young adults. Think about money in terms of the long game. Don’t buy it because it makes you appear wealthier. Use the money to actually make yourself become wealthier.
6. Latest iPhone and other digital gadgets
Look around at those people who are brandishing their newest phones and gadgets, most of them aren’t rich but they bought it as a status symbol. Just how many people are willing to forego their dinner for the month to be able to afford their new toys we don’t know but there are certainly people doing that because they care a lot more about their image than they should be. Don’t fall into the social media trap. It’s easy to fake being rich on social media. It’s smarter to use the money for other purposes rather than following a craze. | https://medium.com/@haappier/6-things-young-adults-shouldnt-purchase-9e7150801706 | [] | 2020-12-26 10:38:02.585000+00:00 | ['University', 'College', 'Self', 'Students', 'Youngadult'] |
On Happiness | Sometimes I feel like I cannot omit myself at all
Like there are clots in my bloodstream
Like my pores are blocked with thick sludge.
Sometimes my brain feels like it’s been dropped in sand
like an iPhone
And my neurotransmitters just won’t conduct electricity.
Sometimes my thoughts are so unclean I need a scalding shower
Like my lungs are seeping tar
Like the seams of my veins will only reseal if melted
Sometimes my nails don’t feel strong enough to shovel the decay of indecision
from my skin.
Sometimes I feel the things I never got to say
Like my vocal cords had rusted through
Like my amylase had turned toxic and burnt holes through my face
Sometimes unspoken words bounce around my skull
like windows screensavers
And my saliva leaks in thick, claggy streams from my pried jaws.
There are times I am unhappy.
***
I have always thought of very broad, blanket desires when tossing coins into wishing wells. “I wish to be truly happy, one day… I wish to be shown true happiness… I wish for those I love to find happiness.” I always hope for the same things. My earliest memory of a wishing well is from outside the National Gallery of Victoria, as a child. I was very careful that day to keep my wish a secret. I tucked it neatly in my mind; under my mousy brown braids and my small red ribbon. That day, I wore my favourite little red swing dress over a snug white skivvy. Winter, 2005. My family had taken me to the Dutch Masters exhibition. I remember running my hand through the water wall; liquid dancing at my fingertips. I flipped a gold coin into the fountain, scrunched my eyes shut, and wished.
***
Just like Rembrandt van Rijn, Johannes Vermeer and Pieter Brueghel, you are an artist. Your life is art. Your existence is the carefully crafted and refined expression of your experiences, values and beliefs. I think it is important to dismiss the notion that happiness alone is the key to life. As I age, my grasp on what it means to feel alive is evolving; I am letting go of the idea that happiness is the epitome of human existence. We would not have art if we did not open up about our struggles and understandings. Humanity’s greatest romances are not emptily smooth and happy. Shakespeare’s greatest works were horror, romance and tragedy. We must romanticise our lives: our coffees, our commute to work, each sunset, to avoid a dull mundane. The thought that the fullness of our lives is based solely on how happy we are leaves us with dangerous and unattainable expectations. It’s okay to not be happy. I do not mean we should romanticise mental unwellness, but that we should dismiss the notion that happiness is always critical. Obsessively striving for happiness can leave us feeling as though we are doing something wrong or falling short. It seems we have programmed ourselves into believing that emotions other than happiness are personal failures. However, eudemonia does not come from a dilute, lukewarm “happiness”; it comes from a flourishing life, compiled of moral virtue, practical wisdom, passionate experience, great understanding and the hunger to understand. We have a whole spectrum of emotion to experience. To strive only for happiness is linear and limiting. Would you want to look at thousands of years’ worth of the same soft lemon-coloured painting? Or would you prefer to consume the intense creations of Vincent Van Gogh and David Foster Wallace? For the same reasons, we break away from lullabies to the work of Beethoven. Non-conformity and unhappiness are quintessential in creating a non-linear masterpiece from our human experience.
“Calm seas may bring you peace, but storms are where you’ll find your power”- Spirit Daughter.
2. If someone were asked to describe me, and the first word that sprang to mind was “nice”, I would be disappointed. Similarly, if the only word that sprang to mind to describe the most important days of my life was “happy”, I would be disappointed. Perhaps, in these cases, one would benefit from building a more accurate vocabulary. When I think about the liveliest moments of my time on earth, I think of passion, pride, infatuation, devastation, romance and pain. Looking back on your richest experiences, do you think of earth-shuddering concerts, intense romances and your breakthroughs and accomplishments; does your mouth sting with the memory of homemade ouzo and dancing naked on top of coffee tables? Is your brain branded with a certain human’s irises, and do your ears bleed with the sleepy voices of your loved ones? Or do you hold most dear the quiet, comfortable nights you spent in bed, watching Netflix and eating the popcorn you ordered from Hoyts on Uber Eats? “Emotion, in my view, is the “awareness of self-in-circumstance,” a condition that helps persons realize their possibilities in situations. Emotions are “constructions” or “productions” assembled and maintained by both physical and symbolic patterns. They reflect different kinds of levels of awareness and, indeed, are built upon the more basic forms of recognizing-and-responding that other creatures have. The different words we use to describe our emotions — and there are hundreds of these — express those levels and subtleties of assessment. Happiness and sadness are not our most basic forms of awareness. Much more fundamental are the feelings that attend acts of “noticing,” perceiving some discrepancy or change in an environment. We live between latitudes of boredom and anxiety and experience feelings like interest and surprise. There are also the feelings that come from “evaluating,” where we apply personally held standards to what we’ve noticed. By those standards — cognitive, moral, aesthetic, and practical — we judge occurrences to be “good” and “bad.” At times, we are content with what is going on; at other times we are dissatisfied, even disgusted. However, these feelings of propriety — or of their opposite, “trouble-sensing” — are not equivalent to happiness.” (Hendrix, T. (2016). The Happiness Cult).
Oscar Wilde once said “To live is the rarest thing in the world. Most people exist, that is all.”
3. Happiness can be synthesised. Synthesised happiness is created when we don’t get what we actually wanted; synthesised happiness is what Ringo Starr’s blindsided replacee created when he said he was happier having never toured with the Beatles after they blew up. Similarly, happiness can be manufactured by ripping bongs from a gatorbeuge all day, with a bag of Doritos, in a Frankston South backyard. This addiction to an emotional façade of cheap, artificial and somewhat empty comfortability and happiness can evoke addiction to stimulus (or over-stimulus). The modern-day emphasis on the supposedly critical nature of happiness encourages false serotonin highs through often antisocial behaviour, including drug, alcohol, sex, gambling and social media abuse. However, we rationally value natural happiness over synthesised happiness, because happiness alone is not always the measure of a good life. Instead of undergoing a practical journey towards a purposeful and rich life, the over-stimulating spectrum of choice and freedom available to us often leads us to build our sense of identity, belonging and livelihood on weak foundations of quick, short-term happiness. If we do not see immediate results from the actions we take in attempts to achieve purpose, we often give up. It is difficult to build a sense of self-importance in our communities if we spend significant portions of our lives in “non-places”. “Non-places” is a concept developed by the French anthropologist Marc Augé to describe spaces of “transience, where humans remain anonymous and that do not hold enough significance to be regarded as places”: clubs, workplaces with high turnover (some hospitality jobs, call centres, strip clubs, etc.), gambling facilities and the social media realm. This behaviour of seeking happiness and validation from platforms and places which are not based in communal reality is counterproductive.
It is critical to persevere through difficult emotions and situations in order to progress on one’s journey towards building self-esteem and a sense of worthiness.
4. “Ruthlessness is a surprising bedfellow to scientific success.” (Ross, P. (2013). Isaac Newton: Was He a Jerk Due to Asperger’s?). Let’s talk leadership. Think Gordon Ramsay. Actually, maybe don’t think Gordon Ramsay. Think Winston Churchill: he suffered debilitating depression, yet was one of the greatest leaders and statesmen of all time. Josef Stalin: although a complete megalomaniac, a very powerful leader. There is a trend of anti-happiness in strong leadership and achievement. When you are happy, you are comfortable, so there is no thirst for further accomplishments. To achieve, you need to sense that something is wrong, lacking or unachieved; you need to sense that something needs changing. So, fuck contentment. You cannot feel full without first feeling hungry. When you are discontent, you are uncomfortable: the contrary of happiness, the fuel for hunting. Those who make great changes take the largest steps from comfort to pursue the constant and hopefully unattainable quest towards contentment.
5. "Happiness is a prison… …Happiness is the most insidious prison of all… You're afraid because you can feel freedom closing in upon you. You're afraid because freedom is terrifying." — Alan Moore, V for Vendetta. The admittance of unhappiness is the acknowledgement of your call to action: to believe in and respect yourself. Oftentimes, being bound in an unhappy situation lets us feel a lack of accountability for our own happiness or lack thereof; we delegate the responsibility for our feelings to the situation we feel trapped by. Becoming aware of your wants and needs can be difficult, as opposed to naively shying away from the elements of your life which are not serving you. Discomfort can therefore be revolutionary. It is imperative on our journey of fulfillment to value and generate original thoughts; to break away from cultural, religious or societal expectations which do not support our best interests. This deranged focus on positive thinking often pushes us away from taking responsibility for and control of our lives, the environments we belong to and the choices we make. In this way, happiness can act as a prison, chaining us within the familiarity and one-minded direction of what we find mind-numbingly comfortable.
6. Freedom is generally more valued on an individual level than short-term happiness. I'm sure we have all seen a zombie apocalypse film (or show, like Westworld) in which all authoritarian structure is abolished and humans run rogue, acting however they primitively desire; murdering, raping and stealing. While I have spent many classes daydreaming about where I would hide in a zombie apocalypse (generally an iron-barred portable in Mount Erin's F block), I hope we never end up in a situation like that. Because, if everyone was able to live out all their dirty, antisocial fantasies in their pursuit of happiness and (the hope of) satisfaction, our society would obviously not function. In civilisation we help each other to function. While I also daydreamed about the joy I would feel in setting fire to my high school, it unfortunately is deemed antisocial in our society. I would like to say my senses of ethics and morality are why that school is still standing, but I feel it is more that I did not want to become a convicted and detained arsonist… There are concepts we generally value more than happiness on an individual level. Yet, as a culture, we compulsively speak of happiness and positivity as though they are the epitome of human experience. We all measure the good of our lives subjectively, because there are many ways of living a worthy life. Hendrix also explores this in his blog post. Hendrix explains that modern-day society's default understanding of happiness is often based around "short term and frequently self-centered fascinations", as opposed to previous generations who grew up more often basing their happiness around their ability to "maintain themselves, their families and their communities." Hendrix speaks of happiness as a social construct which is highly malleable in society. We also (generally) make willing sacrifices of said personal happiness for the betterment and functionality of society.
***
I have always thought of very broad, blanket desires when throwing coins into wishing wells. Always wishing to be happy. And as always, I was very careful on that winter day in 2005, to keep my wish a secret. I tucked it neatly in my mind; under my mousy brown braids and my small red ribbon. I didn’t want to jinx it. But now I want to jinx that wish. For if happiness is all I ever wanted, all these years of growth and craft and evolution would have been a waste and I could have simply continued kicking back with those nacho supreme Doritos in Frankston South (they were good).
***
Sometimes I feel like I cannot omit myself, but
sometimes I feel like I can.
Like there are seeds in my bloodstream
Like my pores are bursting with new growth
Sometimes my brain is an overflowing greenhouse
And my neurotransmitters are an irrigation system
crafted to mechanical perfection
Sometimes my thoughts are so pure they blossom from my eye cavities
so fast I cannot catch all of the petals in my palms
Like my lungs are home to roses
and my heart bleeds creativity, when it brushes their thorns
Sometimes when I sit in the sun I feel buttery
like I’m oozing sunflower oil from my skin
Sometimes I package the things I never got to say
in little blueberries
Like my sister and I could throw them at strangers
from a balcony, during sunset
There are times I am happy.
__________________________________________
WORDS: TAMARA CLARK | https://medium.com/@tamaraclark/on-happiness-ca97a0275b66 | ['Tamara Clark'] | 2019-03-08 01:26:53.241000+00:00 | ['Mental Health', 'Psychology', 'Happiness', 'Thoughts', 'Positivity'] |
How I Saved Myself From a Potential Remote-Job SCAM
I got this personalized note from a profile on LinkedIn on September 5th i.e., two weeks ago. And this incident taught me a big lesson this Teachers’ day.
Since I am a content writer, constantly looking for opportunities from all over the world, I got excited to receive such a note from a person who claimed in his LinkedIn profile to be from the US. But my happiness didn't last long, as suspicion and doubts soon hovered over my mind (and I couldn't be more thankful for that).
Now, let me tell you the whole story.
So, I got this note from a LinkedIn profile around 1:06 PM IST. I am telling you the exact timing because it also helped me develop doubts about them. (More on this later)
This person’s profile’s key identity included-
1. Name- Anintish Yakov. And I have since found out that his profile is no longer available on LinkedIn.
Profile of Anintish Yakov on LinkedIn
2. Profession- Recruiter at a big and reputed company for 7 years. (I am not mentioning the name of the company for obvious reasons)
3. Profile activity- Nothing. No likes, no comments on others' posts, no updated posts. Absolutely blank history.
Anyway, I replied with a thank you and showed interest. He then asked me for my Outlook mail ID and patiently waited for me to join a Microsoft Teams group, the link to which he sent via email. I joined the group.
There were 3 people in the group, but none of them was the person who had contacted me on LinkedIn. One person (Ulises Monroe) called himself the CEO, the second (Thatcher Fallon) styled himself the CTO of the company, and the third (Ian Brown) was referred to as the Product Manager.
The “CEO” asked me a few generic questions in the name of the interview in the chat itself and asked me to create a Microsoft Azure account.
When I asked if they were considering hiring me, they said yes. To be honest, I was on cloud nine.
But they seemed to be in some kind of hurry. I had asked them about their company website and job description before joining any group. They did send me the link to the genuine website of a real company called executiveessentials.org.
Till now, everything seemed mostly normal.
The loopholes in that conversation that raised doubts in my mind-
1. The company website (THE BIGGEST LOOPHOLE)- The company website was absolutely real. I even checked its LinkedIn page. But because of my concerns, I went to their "Meet Our Team" page and saw "Michelle Tillis Lederman" mentioned as the CEO, and NOT any Ulises Monroe.
2. English errors- After those prepared interview questions, when I started cross-questioning them about their company and everything, I found so many errors in their grammar, and that seemed quite unprofessional.
3. Impatience- They looked desperate for me to join that group in Azure they were talking about. They kept telling me to join that group even before telling me the job description, salary, and information about their company.
4. An offer that seemed "too good to be true"- When I enquired about the payment and working hours, they asked me for my hourly rate. I said $15 per hour. Now, $15 * 40 hours * 4 weeks becomes $2400. They said they would pay me $2500 for the first month, and after that the salary would be reconsidered. Think about it: instead of negotiating, they offered me an extra $100.
5. The "interview"- I couldn't believe that an interview could be this easy. No questions on skills, no face-to-face video chat, no cross-questioning on my replies, no questions about my work experience and industries.
6. The time difference between the US and India- I checked the time difference on Google and found out that the conversation took place at an odd time, not during office hours in the US.
7. Why me?!- Usually, companies say that they will get back to the candidate after interviewing other candidates. But they confirmed my job just after those 5 questions. That seemed odd.
WHAT DID I DO NEXT
1. Long live Google- I Googled remote-job scams and how to identify them, and found exactly the signs I had noticed during this incident.
2. Confrontation- I asked them about the CEO confusion. They didn't reply for hours.
3. Enquiring at the source- I then texted Yakov, who had approached me on LinkedIn, and raised my concerns about the issue. He too didn't reply. And mind you, before I joined that group on Microsoft Teams, he was so quick to reply every time I texted.
4. I insisted on hopping on a video chat. But they kept ignoring the request and eventually declined, saying that they were too busy to have a video call right then.
After I informed Yakov, I got a reply in that group (still no reply from Yakov directly)
The group scene- The proclaimed CEO in that group said that he was the CEO of one department of that company, while Michelle Tillis was the main CEO.
AND HERE I CONFIRMED MY DOUBT
I was not feeling good about all this. Thus I sent a message directly from the company website asking-
“Does your company have two CEOs?” I also explained everything in brief and asked to reply soon.
Thankfully, Michelle Tillis replied herself and even appreciated my move.
The real CEO of Executive Essentials replied to my inquiry
Another insight from Michelle Tillis
I sighed with big relief and learned one of the most memorable lessons of my life.
But it didn’t end here
After a couple of days, one more person from LinkedIn approached me with the exact same personalized note.
This time, instead of accepting the request, I started cross-questioning right away. You can judge their intentions (or stupidity) from his reply.
After this, he never replied to me again. Yes, I am going to report this profile now.
If you ever come across such an incident, kindly stay alert and do as much research as possible before accepting the offer or giving out any details. | https://medium.com/@kumarijaishree27/how-i-saved-myself-from-a-potential-remote-job-scam-96aaeab20ae0 | ['Jai Shree'] | 2021-09-16 11:59:04.334000+00:00 | ['Remote Working', 'Jobs', 'Scam', 'Fraud Prevention', 'Fraud']
How Probiotic Foods Can Ease Your Pain | There has been so much chatter recently about probiotic foods, so we want to sort out what's wrong, what's right, which foods contain probiotics, what they are, and what you need to ease your pain.
Probiotic Foods
Common Questions Before You Get The Probiotic Foods
Yeah! There is a lot of talk about probiotic foods, and I always wonder: is this a marketing deal? Do we really need this? And then you're also saying prebiotic food is something we need to know about.
Right, so this is how probiotic foods can ease the pain in our lives. Our digestive system has trillions of microbes that work in harmony. These microbes, also known as the gut microbiota, play many important roles in our body. They help us with the digestion of the food we eat and the absorption of nutrients from the food.
They also boost our immune system and protect us from countless allergies. Unfortunately, these good bacteria in our gut are often destroyed when we fall sick, when we consume too many antibiotics or too much alcohol, and even when we are stressed.
But the good news is that we can replenish these good bacteria in our body by consuming foods that are rich sources of them. These foods are also known as probiotics.
Probiotics are naturally found in certain foods, and they are also sold in the market as probiotic supplements. But unless you have been advised by a doctor, you don't need to go for the supplements; you can instead have the foods that I'm going to tell you about today for your daily dose of probiotics.
Difference Between Prebiotic and Probiotic Foods
Let's try to understand the difference between prebiotic and probiotic foods.
Before that, let's recall the definition: as I mentioned above, probiotics are healthy bacteria already living in our body. In fact, there are already some 300 trillion bacteria in our body; some of them are good and some are bad. Probiotics are the ones that fight the bad bacteria in our body and improve our health by killing them, so that we stay healthy and diseases stay away from us.
What Food Has Probiotics
To increase these good bacteria, there are products that use kefir, which has more than 2.5 lakh good bacteria. Kefir goes inside our body and starts forming colonies, and these colonies together fight off the bad bacteria.
Good and bad bacteria live in the intestine until they get out of it; when they come out, it is called leaky gut syndrome (we will talk about it later). To increase these good bacteria in our body, there are many probiotic fruits, foods, and supplements available in the market.
Prebiotic
To understand prebiotics, I'll give you an example. Suppose I buy a very beautiful plant from the market. I place it in a good pot, give it good water, and also shield it from harsh sunlight. It also needs some kind of nutrition and fertilizer to grow fast and to be protected from insects.
And after all of this, the plant will grow and become a healthy plant. The same goes for prebiotics: a prebiotic is similar to the fertilizer, in that it feeds the probiotics.
To put it in simple language, it is fiber: whatever food is rich in fiber is a natural prebiotic food. When good bacteria feed on these prebiotic foods, they grow much faster, which is beneficial for our health; the good bacteria kill the bad bacteria and their numbers increase, so we call them superheroes who kill the bad villains. | https://medium.com/@freeyopawan/how-probiotic-foods-can-ease-your-pain-3c6149a06c79 | [] | 2020-12-23 09:35:33.212000+00:00 | ['Weightloss Tips', 'Probiotics', 'Probiotic Supplements', 'Health Foods', 'Healthy Lifestyle']
My biggest strength is being a girl | Photo by Autumn Goodman on Unsplash
Today I want to tell you about a situation that I recently faced.
As a currently unemployed college dropout, I've been job hunting until I can get my life back on track. Last month, I went to one of the most disturbing interviews of my life.
I walked into the room where I was going to be interviewed, and there's a middle-aged man sitting there waiting for me. So far, nothing suspicious. As I sit in my chair, the man gets ready to start his questions and I mentally prepare myself for the answers.
He asked me to talk a little bit about myself, so I did. As I finished my little monologue, he asked the following question.
‘’That’s great. But my point is are you single? Do you have kids? Will you be getting married this year? That’s what I really want to know.’’
For a moment, time stopped while I was processing what he had asked me. But then I pulled myself together and politely answered no to all those questions. The interview went on and I put this in the back of my mind. It's no big deal, I think; he probably asks this of everyone.
‘’Final question, if you are selected for this position you will have to sign a term of agreement saying that you won’t have children in the next three years. Would that be a problem?’’
Too embarrassed to answer anything else, I said no. What else would I answer? I'm 21, I want to enroll in college next year and my love life sucks. Getting married or having children is not on my list for the next three years. But what if it was?
I left the building and grabbed my phone to google something along the lines of "is it normal to be asked about having children in job interviews". I was shocked by what I found:
Endless blog posts and magazine articles with advice about mentioning children if you are a woman, or about what to answer when you are asked that in an interview. I even found an article about illegal questions to ask in interviews.
That’s when it hit me: he asked me that because I am a woman.
I started doing research then. About the state of SDG 5 — Gender equality — and data on inequalities in the workforce.
Did you know that only 1 in 4 parliamentary seats are occupied by women? Honestly, I’m not investing in my education to be denied a place in the parliament because I am a woman.
I don’t want to be denied my dream job, or any job, just because I am a woman. Things only get worse if, besides being a woman, you are part of a minority because of your skin color, your religion or your culture.
According to the 2014 corporate diversity study, out of 69 senior positions in 100 companies, only 22.9% are held by women. And of those 22.9%, only 18.3% represent non-white women.
Want to know why this is even sadder?
According to a 2014 report, the more diverse a team is in terms of race and gender, the more likely it is for that company to achieve above-average financial performance.
So why aren't we talking about this? I think it is so ingrained in our minds, as girls, that it is normal to be asked about children and for people to assume we would leave everything behind to build a family and take care of those kids, that we don't even see this as a problem.
That's why we need to speak up about these issues. That's why we need to unite and speak as one loud voice that screams: "BEING A WOMAN IS MY BIGGEST STRENGTH".
And you can start speaking up by filling out this survey. Right now, AIESEC is conducting a survey called the Youth Speak Survey, where we are trying to collect the youth's voice on different subjects, one of them being gender inequalities. Go and speak your truth. | https://medium.com/youth-for-global-goals/my-biggest-strength-is-being-a-girl-c02e821124e | ['Renata Félix'] | 2019-11-16 16:54:52.263000+00:00 | ['Gender Equality', 'Aiesec', 'Sdgs', 'Women']
Must Visit Cities in The Netherlands | With canals galore, lovely gabled houses, world-class historic centres and more (and that is only the tip of the iceberg), Dutch cities are certainly home to some fine sights. Wandering around their cobbled lanes or taking a canal boat along the waterways is magical, and there is a welcoming and agreeable air about the country in general. Laidback, yet with a sense of fun, exploring all that the best cities in the Netherlands have to offer will give you enduring memories.
1. Amsterdam
Meandering along the cobbled roads that line the famous canals, it is easy to see why alluring Amsterdam is one of the most popular visitor destinations in Europe. Delightful gabled buildings and charming old bridges are everywhere you look. There are various world-class museums on offer, such as the Rijksmuseum, the Van Gogh Museum and the Anne Frank House.
2. The Hague
With the seat of government and the royal family residing within its limits, it seems slightly strange that The Hague isn't the capital of the nation. Fittingly, this large city has a stately air about it. Impressive mansions and canal houses line leafy avenues, while embassies and government buildings surround its fine parks. Because of international bodies such as the UN and the EU, the city is multicultural.
3. Utrecht
One of the oldest cities in the nation, Utrecht has winding canals that curve their way around its brilliant medieval centre, with the arrestingly lovely Domkerk cathedral towering above it. Although the sprawling suburbs don't make the best impression as you enter the city, their tangled web of roads is soon forgotten once you find your way into this lively spot with its fun atmosphere.
4. Delft
A well-known day-trip destination, it is easy to see what makes Delft such an appealing choice. With its stunning medieval centre and beautiful canals crossed by brick bridges and lined with trees, the city is charming and tranquil. Its most famous son, the painter Johannes Vermeer, is only one of many who have praised it excitedly over the ages. Celebrated for the distinctive blue and white tiles and ceramics that are produced here, the city's Delftware factories are popular among travellers.
5. Rotterdam
The second biggest city in the Netherlands, Rotterdam is home to one of the biggest and busiest ports on earth, with various canals and waterways criss-crossing the city. Having sustained extensive damage during the Second World War, the city is now characterised by cutting-edge and innovative architecture, although there is still an underlying grittiness to the place.
If you are interested in knowing more about the Netherlands, refer to the Netherlands Tour Guide. | https://medium.com/@adequatetravel/must-visit-cities-in-the-netherlands-eaefea68cce2 | ['Adequate Travel'] | 2019-07-16 13:10:05.441000+00:00 | ['Travel Tips', 'Cities', 'Travel', 'Netherlands']
10 things I learned in the first month of motherhood | 10 things I learned in the first month of motherhood
Pandemic pregnancy and new mamahood have been a lot. I've loved and learned so much in this first month and wanted to share what I've learned in these first wild weeks. Some of it is comical venting, some of it gems dropped by our doula, mom friends, aunties, and other loved ones, and some of it things I've figured out on the fly because, well, that's motherhood. Also, ignore my untamed hot mess curls. We haven't done a good detangle since baby girl was born, but I do love this photo.
Grace — You must give yourself grace. Balls will drop, things will fall through the cracks, texts and calls won’t get returned as quickly as you’re used to, thank you cards won’t go out immediately after receiving gifts. It’s okay. Baby comes first, and the things on your to do list will eventually get done, or maybe they won’t and it’ll be okay.
Patience and Persistence — Once the sleepy newborn days pass around day 10, you'll have to actually put baby to sleep. It is not always easy. Whether it's 2am or 2pm, you will find yourself swaddling and re-swaddling (because they will bust out of your newbie parent swaddle), rocking and rocking and walking and rocking some more. They'll fake you out and you'll think they're asleep; four tries later, they finally are… and then you will finally get to sleep.
Have a routine — The first days and weeks will be somewhat of a blur. A small routine will help you feel an ounce of normalcy. For me, making our bed, especially since we're cosleeping, has helped me feel like I hit the reset button every day. Then I do the same face routine I've always done: tone then moisturize, brush teeth, put in contact lenses. The cold toner hitting my face wakes me up and helps jump-start my day.
Mom Friends — Lean on your circle of mom friends. I’ve learned literally no question is TMI in this motherhood journey and my mom friends have played a significant role since the day I found I was expecting. If you’re lucky like me, some mom friends will be in the thick of newborn life with you. I have a few friends who are also up in the middle of the night nursing and those 3,4,5am texts to cheer each other on or keep each other up are so clutch.
Shart — Google it if you're not familiar. Your baby will shart on you, so be prepared. The first time it happened, I was exhausted and groggy. Shart all over my pajamas and our 1500 thread count sheets. The next time, she narrowly missed my face as I was closely checking to make sure she was all the way clean during a diaper change.
Nipple Butter — It is your friend, get you some. Don’t wait until you bring baby home to use it, start lathering up your nips before baby arrives to get them ready. I didn’t use it until we were home from the hospital and by then it was too late, I was cracked and bloody and in pain. Nothing like recovering from an emergency c-section while nursing a newborn with raw nipples.
Hands Free Pumping Bra — Buy one, but just one. It will be your best friend, then you can do what I did and turn all those hot girl summer tanks you're never going to wear again into a hands free pump top by cutting a tiny hole for your flanges to poke through. Boom, saved you about $75, you're welcome.
Snack Bin — Wherever you are feeding your baby most often, especially if you’re nursing, keep a snack bin. You have to hydrate and nourish yourself every chance you get. Our doula filled a small bin with nuts, chocolates, and granola bars on the nightstand next to my glider. Let me tell you, this makes that first early morning feed feel doable. You’ll be ravenous and dying of thirst — nursing can feel taxing when you’re thirsty and hungry.
Treat Yourself — A little more than a year ago I met with a life coach to talk about my 2020 goals. One gem she shared was to treat myself when I have to complete a hard task. At the time it was op-eds and business proposals. Now it’s pumping. Pumping is a lot of f*&^ing work. Washing and assembling pump parts is a full time job. My pump treat is small, but a treat — sheet masks. Friends gifted me these along with things for our daughter. If I have to look and feel like a milk factory, might as well luxuriate my face while I do it, right?
Sleep When Baby Sleeps — This is the advice literally everyone has given. It’s not all the way realistic but make it work for you. During some of baby’s naps you’ll want/need to shower, eat, restock the diaper caddy, do laundry, schedule the million and one doctor appointments you and baby will have, pump… But pick a nap or two in windows you need it most. For me, if baby had a rough night, I nap with her after the first morning feed. If she had a good day, I’ll nap with her after dinner to prep for the night shift.
Parenthood is magical, it’s exhausting, exhilarating and a blessing beyond measure. I couldn’t imagine these moments in my wildest dreams or even my most fervent prayers. I’m thankful for all of it and hope someone finds this helpful or perhaps just feels seen. From one new parent to another. | https://medium.com/@heather-cabral/10-things-i-learned-in-the-first-month-of-motherhood-deab4f90144 | ['Heather Cabral'] | 2021-02-11 16:17:39.137000+00:00 | ['Newborn', 'Postpartum', 'Motherhood', 'Breastfeeding', 'Pandemic Pregnancy'] |
Trans-Continental Christmas | Trans-Continental Christmas
If I pay extra, will USPS deliver a hug?
What does it cost to ship love from Atlanta to Seattle? You don’t want to know.
On our daughter’s second Christmas morning, she opened a toy house handmade by her grandmother. Constructed of stiff cardboard covered with fabric, it was lovingly decorated with tiny furniture and a photo of our little girl on the wall.
Her next gift, from us, was a Fisher Price farm. Barn doors opened to reveal a plastic sheep, cow and chicken. The tiny farmer and his wife were perfect for the house Grammy had made.
Our toddler turned her sweet face from one treasure to the other, eyes wide with wonder, and finally announced, “I have a house and a farm. I have everything.”
That was some 30 Christmases ago. Traditions have been born and matured. Each child gets a new ornament, something to symbolize the year’s events. There’s a tiny drum set, miniature ballet slippers, a blown glass Mario, karate belt, pewter dragon, two skylines — Atlanta and Seattle — and dozens more meaningful trinkets.
Everyone gets new pajamas for Christmas Eve. Wouldn’t want to greet Santa in last year’s pjs, now would we?
I suspected late last summer that the flight from Seattle to Atlanta might get a thumbs-down from the youngsters. Nothing was said because neither child wanted to disappoint mother. So, in late October, I asked.
They gave the answer one would expect from two bright and careful children. A text from our 29-year-old son said it all: “I’d feel awful for the rest of my life if my coming back for the holidays led to one or both of you contracting Covid.”
So there it was. Obviously, I can mail this year’s ornaments and pajamas to the west coast, but what about the rest of our traditions?
For the past several Christmas Eves, we have each made an appetizer and enjoyed four delicious creations for dinner. This year it will be a fancy drink and Zoom happy hour, sure to be made fun by our almost-son-in-law, who loves to create in the kitchen. But it won’t be the same.
We’re a big game-playing family and Hearts has always been a favorite. In December, it is “awards season”, with trophies and speeches and very silly fanfare. Instead, we’ll play online and bestow the honors remotely. We’ll have fun but, of course, it won’t be the same.
Then there is Moe the penguin. He was originally a package decoration I tried to throw away. The kids rescued Moe and we went on from there, with me pretending to victimize this character and them exalting him.
Moe has a tiny cage and coffin as well as a star and top hat. Our little friend is missing a foot and most of his feathers. He could be tortured and revered online, of course, but maybe Moe gets a well-earned break this year because — you guessed it — it just isn’t the same.
I usually buy each child a board game but, hearing about the popularity of jigsaw puzzles this year, decided to go with that. Besides, it’s too sad to have a new game and nobody to play with. I even got them a puzzle mat, a nifty little gadget that allows you to roll up your project when the only table in your 700-square-foot place needs to be used for cooking, eating and remote work.
Stockings? I bought a couple cheapies, filled them with little toys and necessities (we’re big believers in zinc for colds and who doesn’t need a nose flute?) I sewed the top closed and shipped those off as well.
But what will we do without kids home at Christmas for the first time in 30 years? I’ve bought us a couple new two-player games. We’ll take long hikes and continue to work on our ping pong game. Maybe we will learn to play something together on the piano.
Holiday plans for two are in the works and no, it won’t be at all the same. Still, looking at the current state of affairs, I can’t help being reminded of that Christmas morning so long ago.
I have my family and my health. I have everything. | https://medium.com/crows-feet/trans-continental-christmas-952b1cb04949 | ['Cindy Shore Smith'] | 2020-12-07 19:18:50.466000+00:00 | ['Relationships', 'Family', 'Love', 'Christmas', 'Life'] |
A really friendly guide to use of Wavelet Theory in Machine Learning (Part 1) | The first part of this series of blog posts will cover the basics of Fourier transform and Wavelets.
The Wavelet Transform is a very powerful time-series analysis tool that isn't very popular among the Data Science community. We are already familiar with other signal processing tools like the Fourier Transform, which are very intuitive albeit a bit mathematical. These transformations have changed our perspective from focusing on the signal itself to appreciating its underlying building blocks.
The reader is expected to be familiar with the fundamentals of the Fourier Transform. What the Fourier transform essentially does is transform a signal from its time domain to the frequency domain. The Fourier transform works well only when the signal in context is stationary, i.e. the frequencies present in the signal are not time-dependent. This is limiting, as most of the signals we encounter in real life are time-dependent, like stock data, sensor data, etc. Wavelets, on the other hand, are very good with these dynamic signals, which are localized in both space and time.
Here’s a short description of how the Fourier transform works: we take the dot product of the signal with a series of sine waves of different frequencies. If the observed peak is high, it means there is a large overlap between the two signals, and so the selected frequency is present in the signal.
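To make the dot-product picture concrete, here is a minimal NumPy sketch. The two-tone test signal, the sample rate, and the frequency grid are all assumptions chosen for illustration: we correlate the signal against a test sinusoid at each candidate frequency, and the frequencies actually present stand out as peaks.

```python
import numpy as np

fs = 100                      # assumed sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
# Toy signal containing 7 Hz and 23 Hz components
sig = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 23 * t)

# Dot product of the signal with a test sinusoid at each candidate frequency:
# a large magnitude means that frequency overlaps with (is present in) the signal
freqs = np.arange(1, 50)
corr = [abs(np.dot(sig, np.exp(-2j * np.pi * f * t))) for f in freqs]
peaks = freqs[np.argsort(corr)[-2:]]
print(sorted(peaks.tolist()))  # → [7, 23]
```

This is exactly what the full Fourier transform does in one shot, with the complex exponentials playing the role of the test sine waves.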
We can see that regardless of how the signal is localized in time, the peaks obtained from the Fourier transform only tell us which frequencies are present. The transform cannot tell us where in time these peak frequencies occur, which is why the two signals look the same to it. The short-time Fourier transform (STFT) attempts to solve this problem by breaking the signal into shorter segments of equal length and computing the Fourier transform of each segment.
25 ms window
125 ms window
375 ms window
1000 ms window
However, the problem with this approach is that the STFT has a fixed resolution (it runs into theoretical limits in accordance with the uncertainty principle). The smaller we make the window, the more precisely we can identify the times at which frequencies are present, but the precise frequencies become harder to identify. Upon increasing the window size, we can identify the precise frequencies, but the time between frequency changes becomes blurry.
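The window trade-off can be seen in a few lines of NumPy. This is a simplified sketch (the two-tone test signal, the sample rate, and the non-overlapping rectangular windows are all assumptions; real STFTs use overlapping, tapered windows): with a long window, each half of the signal resolves to its exact frequency, while a short window only offers coarse 40 Hz-wide frequency bins.

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
# 100 Hz for the first half-second, 300 Hz for the second
sig = np.where(t < 0.5, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 300 * t))

def stft_peaks(signal, win):
    # Chop the signal into non-overlapping windows, FFT each one,
    # and report the dominant frequency per window
    frames = signal[: len(signal) // win * win].reshape(-1, win)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return np.fft.rfftfreq(win, 1 / fs)[spectra.argmax(axis=1)]

print(stft_peaks(sig, 250))                      # 4 windows with 4 Hz bins: [100. 100. 300. 300.]
print(np.diff(np.fft.rfftfreq(25, 1 / fs))[0])   # a 25-sample window only gives 40.0 Hz bins
```

Shrinking the window to 25 samples gives 40 of these frames (good time localization), but its frequency bins are 40 Hz apart, so 100 Hz and 300 Hz can no longer be pinned down precisely.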
This is where Wavelets come in. Wavelets are a better way of analyzing these dynamic signals because they have a relatively higher resolution in both time and frequency domain. Wavelet Transform tells us about the frequencies present as well as the time in which these frequencies were observed. This is done by working with different scales. First we work on the signal with larger windows and try and understand the larger features and then we move on to identifying the smaller features using a smaller window. The wavelet transform, for small values of frequency, has a high resolution in the frequency domain but less resolution in the time domain. On the other hand, it has large resolution in time domain for large frequencies but less in frequency domain. It is very important to understand this trade-off because this is the only reason wavelets are preferred over short-term Fourier transform.
So what are these wavelets? How do they look? Wavelets are ‘mini waves’ that have a short ‘burst’ and die away quickly, unlike a sin() or cos() wave that goes on forever.
The fact that these wavelets are localized in time gives us an advantage over infinite waves, as it gives us better resolution in the time domain. Instead of trying to model our signal with an infinite wave, as in the Fourier transform, we model it here with a finite wave that is slid across the time domain. This process of sliding is also called convolution. After we have done this with the mother wavelet, we can scale it to accommodate lower and higher frequencies. This is essentially done by stretching or squishing our wave. There are hundreds of different wavelets, which is why there are hundreds of different transforms across many different domains. However, each transform has scale on its x-axis, which can easily be converted to frequency by using the PyWavelets function scale2frequency.
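The slide-and-scale idea can be sketched by hand with NumPy, without any wavelet library. Everything below is an assumption made for illustration (the Morlet-style mother wavelet with w0 = 6, the toy two-tone signal, and the two chosen scales): a small scale squishes the wavelet to match high frequencies, while a large scale stretches it to match low ones.

```python
import numpy as np

def morlet(tau, scale, w0=6.0):
    # Morlet-style mother wavelet: a cosine under a Gaussian envelope,
    # stretched (large scale, low frequency) or squished (small scale, high frequency)
    x = tau / scale
    return np.cos(w0 * x) * np.exp(-x ** 2 / 2) / np.sqrt(scale)

fs = 100
t = np.linspace(0, 2, 2 * fs, endpoint=False)
# 5 Hz in the first second, 20 Hz in the second
sig = np.where(t < 1, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))

tau = np.arange(-0.5, 0.5, 1 / fs)   # time support of the sliding wavelet
for scale in (0.05, 0.2):            # center frequencies near 19 Hz and 4.8 Hz
    row = np.abs(np.convolve(sig, morlet(tau, scale), mode="same"))
    half = "second" if row[100:].sum() > row[:100].sum() else "first"
    print(f"scale {scale}: response concentrated in the {half} half")
```

The small scale responds only where the 20 Hz burst lives and the large scale only where the 5 Hz one does, which is exactly the joint time-frequency localization described above; PyWavelets' pywt.cwt automates this over many scales at once.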
Below are the different families of wavelets with each of them having a different shape, smoothness and compactness.
We can design a new wavelet provided it satisfies the following two conditions: it has finite energy in time and frequency, and it has zero mean. It should also be noted that there are two types of wavelet transforms, discrete and continuous, depending upon the scale and translation factors.
I hope this served as a healthy introduction to the concept of wavelets. I’ve tried to sum up as many of the fundamentals as I could without diving deep into the mathematics. I encourage the reader to go through various other resources to learn more about wavelets. The next part of this blog post will focus on visualization and classification of signals using continuous wavelets and convolutional neural networks.
Creepy Crawlers | Writing without limits is freedom of expression. From sonnets to short stories, this is your platform to show who you truly are through putting pen to to paper.
Follow | https://medium.com/wordsmith-library/creepy-crawlers-25db383e47d1 | ['Sanjit Sengupta'] | 2020-12-28 15:02:37.767000+00:00 | ['Humor', 'Haiku', 'Insects', 'Poetry', 'Wordsmith Library'] |
5 years in: a renewed commitment to diverse founders and those building solutions for diverse audiences. | This October marks five years since humble ventures first started. As 2020 wraps up, I am thankful and grateful.
Like so many others, COVID and this pandemic have affected the health and safety of our family, friends, and our network on a global basis.
We are thankful to be ambulatory and currently surviving and we have learned a lot about a lot.
The lesson of 2020 has been to be like Water. And in 2021, our goal is to continue to Flow.
Some humble ventures stats since April 2020 — December 2020:
200+ Zooms
75 Medium posts
7 newsletters sent
70% subscriber increase
20 virtual pop-ups
38 design thinking workshops
120 pitches of our creds deck
33 investments
$150K in grant funding to 50+ Black-owned businesses
200+ students taught
Facilitated 1 innovation challenge
We’ve made some changes to our website to communicate our focus and direction. Take a look and let us know what you think.
A few highlights of our new changes (TL;DR):
Our Approach
We continue to use lean startup, design thinking, and community power in our business model.
We spend time either consulting or providing programming to early-stage companies, enterprise organizations, or fellow investors.
Our Fund
We invest in businesses off our balance sheet and have launched our t-shirt fund to increase the capital invested in our dealflow. Buy your humblethread$
Our Team
We continue to highlight the various partners such as Ogilvy Consulting, National Black Chamber of Commerce, Sparks Consulting Group, and Republic.
We appreciate problem solvers in search of fellow collaborators. Take a look at our updated website, and reach out if you think we might be able to help. | https://medium.com/humble-ventures/5-years-in-a-renewed-commitment-to-diverse-founders-and-those-building-solutions-for-diverse-badbf9298565 | ['Ajit Verghese'] | 2020-12-24 16:14:49.413000+00:00 | ['Venture Capital', 'Future Of Work', 'Entrepreneurship', 'Open Innovation', 'Investing'] |
How a 77-Year-Old Theory Can Help You Write Wonderful Headlines | How a 77-Year-Old Theory Can Help You Write Wonderful Headlines
Look to Maslow’s hierarchy of needs to understand and uplift your readers during COVID — and beyond
Illustration by Cynthia Marinakos.
He had a dream.
A vision.
A vision of a peace table where people would sit around discussing the important things in life — human nature, hatred, war, peace, and brotherhood.
Just after Pearl Harbor, this man was driving home in his car when he stopped to let through a “poor, pathetic parade.” Tears ran down his face as he watched. He was overwhelmed with sadness that “we didn’t understand — not Hitler, nor the Germans, nor Stalin, nor the communists. We didn’t understand any of them.”
And this man believed this lack of understanding was holding back progress.
“It was at that moment that I realized that the rest of my life must be devoted to discovering a psychology for the peace table. That moment changed my whole life.”
This was the drive behind Abraham Maslow’s famous hierarchy of needs, as told by Hoffman in The right to be human: A biography of Abraham Maslow (1999, 2nd edition).
Maslow’s hierarchy of needs was first introduced in his 1943 paper “A Theory of Human Motivation” in Psychological Review. It has been used widely in many different fields, particularly in management, to understand human motivation and happiness.
What made Maslow’s studies different from other psychologists during his time was his focus on happiness and health — rather than human weakness. He believed when basic needs are met, the deeper desires of creativity and fulfilling our potential can be addressed.
How is this helpful when writing content — and headlines? Firstly, we can better capture our reader’s attention with content that addresses their deep needs. Secondly, when we believe our reader is capable of achieving their best self, we help our readers believe that too.
Today we’ll look at Maslow’s hierarchy of needs and learn how we can bond more deeply with our readers by fulfilling one or more of these needs in our content and headline writing. | https://medium.com/better-marketing/how-a-77-year-old-theory-can-help-you-write-wonderful-headlines-c5c818600132 | ['Cynthia Marinakos'] | 2020-05-05 16:19:44.780000+00:00 | ['Headline Hacks', 'Writing', 'Psychology', 'Creativity', 'Productivity'] |
Tips For Using TikTok To Market Your Brand | Photo by Solen Feyissa on Unsplash
TikTok is a great app for businesses to promote themselves. It’s not just about posting pictures and videos, but also creating content that has an eye-catching caption.
There are many features on the app that you can use for marketing your business — including polls, hashtags, and mentions.
You don’t need to have a business account to post on TikTok. This means you can post as many videos as you want on the app for free!
Here are some tips to help you market your brand on TikTok.
How to get started with TikTok for your business
TikTok is a great app for businesses looking to post videos and photos. You can even create polls, hashtags, and mentions on the app.
The best part is that you don’t need a business account to post on TikTok! That means you can post as many videos as you want on the app for free.
Here are some tips for marketing your brand on TikTok:
# Build a following
It’s important to remember that this isn’t just about posting videos. It’s also about building your following. Your following will be what keeps people coming back to see what you’re up to next. To build a following, try sharing content that speaks to your brand and make sure it has good captions.
# Know how long each video should be
Ideally, your video shouldn’t be longer than one minute — otherwise viewers will lose interest quickly. Remember: shorter is always better! But don’t worry if you can’t keep it short; sometimes it’s worth posting a longer video if there is something really special in it (e.g., new products).
# Be interesting
The most important thing is to be interesting — whether that means trying different types of content or experimenting with new formats and trends.
Get My eBook “TikTok Marketing” Full of Essential Details for FREE…
How to post content that is eye-catching and creative
TikTok is all about creativity. Your content needs to be creative in order to attract more views. This means you need to post content that is eye-catching, witty, and memorable.
Don’t just post any text or picture on the app — find something interesting that will stick with the viewers.
Here are some great examples of eye-catching TikTok posts:
-Video of a bakery making donuts
-Picture of a dog wearing sunglasses
-Videos of people playing Fortnite
How to gather followers and increase engagement
If you want to grow your following on TikTok, there are a few ways to do it.
First, make sure your content is good. The more followers you have, the more engagement you’ll get. If you post high-quality content that people enjoy, they’ll follow you.
TikTok has different types of followers — ones who watch your profile and ones who watch your stories. So if you want them both to follow you, make sure to post videos that are both public and private.
Second, use hashtags! This will increase visibility for your account and give you access to other users who might not be able to find your account otherwise.
Third, interact with others on the app! When other users mention or tag your account in their posts, comment back! It’s a great way to start conversations and connect with new people.
What not to do on TikTok
There’s a lot you need to know before you start your TikTok marketing journey. You can’t just post and think that you’re going to get all the right eyeballs on your account. That’s not how this works and it doesn’t work like that with any other platform, either.
TikTok is a place where people will search for content related to what they want. This means you have to make sure the hashtags and captions are relevant, as well as the general theme of your posts.
Make sure you don’t do these things on TikTok:
- Posting content that isn’t related to what your brand is about or what services or products it offers
- Posting anything too often
- Posting at inappropriate times
- Posting anything that might be inappropriate for younger audiences
Conclusion
With more than 150 million monthly active users, TikTok has become a pivotal social platform for brands.
But to make the most of the app, you’ll need to know how to use it properly.
TikTok is a vibrant social media platform that is a fun and easy way to connect with your followers and potential customers. To get started, follow these tips for using TikTok for your business.
First, find a niche that your audience will engage with and start posting content that is eye-catching and creative.
Second, gather followers and increase engagement by using hashtags and location tags.
Third, know when to post — some days are busier than others so plan accordingly.
And lastly, don’t do anything on TikTok that you wouldn’t want your grandma to see.
Rising of the Snakes- Moumita Alam | Source: Google
the sweats of the foreheads run down
cross the knuckles to meet with dusts
thousands hearts praying in unaffiliated hymns
of one religion of proletariats
to shake up the mound of the clay of sweat
into millions venomous snakes.
-
the snakes will retrace the roads
the bare footed have travelled
and
will poison the minerals of gold cups
will kill their square meals
will gulp down the fats,
the accumulation of the labourers’
blood.
-
the iron built mansions of the
sadagars have to have holes
the snakes will rise to melt
their cornucopias into bones
of the impoverished
and the mansions become
the skeletons of the tuberculosis,
silica lies in tons there.
-
the snakes will plough their lawns
harvest the ripen skins
of these bourgeoisie
to make the first shoes for the walking people
to walk on their destiny
the bottle gourds will brim
with the smooth hands, fingers, tongues
that write the death knell of
the voiceless.
-
the snakes will pierce into the ears
to eat the brains of the worms
the brains that let these fuckers
destroy the paddies with weevils
and let the sowers buy
their own fruits.
-
the snakes will enter their penises
will bite their testicles
incapacitate those machines
that bulldoze million
manoramas;sisters,mothers
of those walking foreheads.
-
come close walking hearts
sing the hymn of the
working souls with devotion of those
rough,cracked palms
to let rise these million snakes.
-
the right time never comes.
-
ABOUT THE POET
Moumita Alam is a non-conformist waiting to see a new dawn when the marginalized will rise and shatter the palaces of the fascists. She also teaches English in a school.
VC Vitamin C 1000mg Injections — 10 Ampoules in India | If you’re big skincare freak and haven’t added Vitamin C to your routine, then you are depriving your skin with so many benefits. Vitamin C one of the super antioxidants to maintain and achieve beautiful healthy skin. Vitamin C ingredient and Products which includes Vitamin C; like Vitamin C injections are must you look for your upcoming shopping. Yet not finished, here we go for more info. Vitamin C doesn’t only protect our skin from free radical damage, it also performs a significant role in anti-aging, acne, wound healing, skin whitening, and even hyperpigmentation. The best way to improve skin health through VC Vitamin C 1000mg injections in the daily intake but, not just anything that’s labeled as vitamin C, instead well-formulated products like our Vitamin C 1000mg injection. Because the main ingredient of injection is Vitamin C.
Think of our skin as being like a plant. We feed plants from the inside with organic fertilizers and also shield them on the outside, with the desired sun exposure and daily watering, to prevent damage and drying out. In the case of our skin, we need to go a little further by adding antioxidant-rich products to shield it from UV rays, radiation, pollution, smoke, and other external factors.
So that’s the reason why we need to take vitamin C (VC Vitamin-C 1000mg Injection) to achieve plump and glowing skin.
Benefits Of Vitamin C
Vitamin C is a naturally occurring, water-soluble vitamin, also termed ascorbic acid. It is mainly found in fruits, vegetables, and certain herbs. A lot of research has been done over the years, which has led to the discovery of several different roles of vitamin C in our body that directly and indirectly affect how skin functions, looks, and feels. A few are listed here:
Shielding Cell DNA- stress, smoke, and an unhealthy diet all lead to cell damage, which can be counteracted by vitamin C.
Skin Wound Healing- refers to inner wounds as well as cuts and bruises.
Decreasing Oxidative Damage- helps prevent aging and sunspots.
Strengthening Blood Vessels- helps reduce dark circles and post-inflammatory erythema.
Building and Supporting Collagen- required for skin and joints.
Enhancing Efficiency of Sunscreen- extra protection from suntan, sunburn, and rough, thick skin.
Absorption of Iron- indirectly assists with dark circles, bruising, and skin whitening; also reduces acne and pimples.
Metabolism of Proteins- proteins, with collagen being the most important for structure and function; vitamin C plays an essential role in their synthesis, making it especially important for our defense system.
Sadly, vitamin C is not produced by the human body. It’s true: our body does not possess the particular enzyme activity required for vitamin C synthesis, which leaves us with only one choice, daily consumption of vitamin C (now available in the form of an injection).
Precautions of the Vitamin C Injection:
If an individual has certain medical conditions, then they should stop injecting it.
Pregnant women should avoid the product until after delivery, and thereafter expert advice is needed before injecting vitamin C.
Breastfeeding mothers should not take any whitening or anti-aging injections.
People should avoid these injections if they are allergic to vitamins.
People with cardiovascular conditions should also avoid these injections.
To order Now call on
Payal: +91 9736163030
Aliya: +91 99882 27622
Originally published at https://www.tamazglobal.com. | https://medium.com/@tamazglobal/vc-vitamin-c-1000mg-injections-10-ampoules-in-india-e2ac125b7e68 | ['Tamaz Global Trading Company'] | 2020-04-23 12:02:19.907000+00:00 | ['Vitamin C', 'Skin Care Products', 'Skincare', 'Vitamins And Supplements', 'Skin Treatment'] |
Some Unexpected Ways Medical Software Service Provider India Can Make Your Business Life Better, by Maveric Solution, Nov 11, 2021
We are involved in developing and designing medical software as a Medical Software Service Provider in India. Healthcare consulting organizations help various patients connect with the medical software available. Medical software is important to the healthcare domain since it lets healthcare service providers manage and monitor the healthcare company’s data as well as the patients’ data. Many hospitals use different types of software that help to manage all data, including patient data and medical documents. We also work in the fields of medical, engineering, and education software services in India.
Steinbichler Optical Blue Light 3D Scanner unveils the new high-speed sensor COMET 6 8M. The new variant of the high-end sensor series excites with a stunningly short measuring time which even in maximum resolution mode is less than one second. Equipped with a broad range of fields-of-view and unique, extremely bright projection optics, this innovative fringe projection system sets new standards due to its sensational speed. Thus, maximum efficiency is guaranteed for data acquisition featuring highest data quality and precision.
The tried-and-tested single-camera technology in the COMET 6 8M allows the measurement field size to be quickly adapted to the measuring task at hand. The new handling system makes sensor positioning with respect to the measurement object extremely easy, highly convenient, and fatigue-free. Thanks to its extremely easy and comfortable sensor handling and its project-oriented software, the Steinbichler Optical Blue Light 3D Scanner guarantees fast measurement processes for a multitude of applications, e.g. in quality control, design, and reverse engineering.
Sigma ROHR2 & SINETZ Software is the leading European software for pipe stress analysis, a standard tool for pipe static and structural framework analysis. ROHR2 compares existing and allowable stresses. The results are documented in lists and graphical presentations. The range of applications is completed by internal pressure analysis and flange and nozzle calculation modules.
Integrated planning systems with specialized components have a central role in project planning. One such "specialist" is SINETZ, the program for the calculation of pressure drop and heat loss in branched and intermeshed piping networks with circular, rectangular, as well as arbitrary cross sections defined by the hydraulic diameter.
The tasks of SINETZ are:
· Dimensioning of cross sections and insulation in the project period
· Dimensioning of pumps
· Verification of dimensions for network expansions
· Usability analysis of existing piping networks
· Simulation of different operation states or abnormal occurrences in intermeshed piping networks
SINETZ Software solves these tasks by calculating the pressure and temperature loss in branched and intermeshed piping networks with circular, rectangular, as well as arbitrary cross sections defined by the hydraulic diameter. SINETZ calculates the direction of flow, rate of flow, and temperature loss for individual pipe sections, as well as the temperature and pressure of individual nodes and the resulting flow distribution for an intermeshed system of any complexity. Calculations for both compressible and incompressible media are possible.
We at Maveric Solution believe in offering excellent Engineering, Medical, and Education software services. All offered services are completed using top-grade resources according to the needs of our clients. These services are well known among our clients for their excellent output, low price, high flexibility, and high client satisfaction. If you are interested in my software services, you can visit my website for more details about my services and software. Click here-https://www.mavericsolution.com/
Freelancer’s Log #1: You Can Cringe at my Content Mill Thoughts | It’s easy to say “I want to make a living writing.” It’s harder to actually figure it out, at least for me. So this is my attempt to hold myself accountable. And also log what I try, so I can look back years from now and shake my head at my naivety!
The two avenues I’m exploring are copywriting and articles/blogs. The annoying thing about being curious though, is I have a lot of things I know a little about. But not enough to make me confident I can write on them. Because of this, I’ve been focusing more on studying copywriting the past couple weeks and creating specs.
Pitching Wins
Between scouting potential clients on social media and LinkedIn, I ended up doing 20 pitches this past week. For each one I aimed to do my research so I could personalize it, whether through a cover letter or a cold pitch email. The good news is, I think I’m mentally toughening up so pitching seems less scary. The downside is, I got no responses. But I read a post from someone else in my copywriting class who said he did 80 pitches and got 3 nibbles. So instead of feeling discouraged, I’m looking at this as good practice for perfecting my pitches!
The Content Mill Got Back to Me
I know if you have been in freelance writing for a bit you’re about to cringe at this next part, but hear me out first. I had applied for a content mill a couple of weeks ago (as I floundered around, trying to get my bearings in this big, writing world….). They finally got back to me and I passed their screening process. I got the chance to write up to three articles to show I can do the work, and I did.
I think the research I’ve had to do thus far and some of the editing feedback has been helpful, so I’m grateful for that. I admit though, it’s grueling. I’ve been trying to make it “either-or” in my mind to keep me motivated: either you can continue plodding away at this and burn yourself out, or you can do it on the side and focus on better prospects.
Peace of Mind
The biggest reason I’m keeping the content mill on the side for now, besides the bit of polishing it’s giving me, is the peace of mind. I’m going into a major life change, and it’s scary. I experienced a massive sense of relief when I got the acceptance email. It may be tough, underpaid work, but I’m not worried about where money for food is coming from. In a sense, it was also validation. It probably sounds silly, but it would have been depressing to not have my writing skills considered good enough for even the content mill.
My to-do list is overwhelming, but technically I have till at least next August to complete it. So in the meantime, I will tackle it little by little and hopefully improve as I go! | https://medium.com/freelancers-log/freelancers-log-1-you-can-cringe-at-my-content-mill-thoughts-2e6dc564f95b | ['M Anastasia Kinderman'] | 2020-12-22 00:48:45.908000+00:00 | ['Writing', 'Freelancing', 'Content Mills', 'Pitching', 'Freelance'] |
Folexin | Folexin Where To Buy?
When it comes to hair thinning, no one is exempt. Thankfully, although baldness is common, it is also treatable. You must stop the hair thinning from progressing and begin growing new hair as quickly as you can. While you are at it, you really need to focus on the most likely cause of your hair thinning and get the right treatment to address that cause. Once you observe that your hair loss is beyond normal, you should check in with your doctor. As hair thinning and its treatment are two things that are closely related, you will want to think about addressing the matter properly too.
If you’re searching for one of the better products, take a look at Rogaine. At times you will come across certain products that promise an all-round treatment for any sort of hair loss. Many people find that a lot of the merchandise on the marketplace is overly harsh. Some of the best hair thinning products are not among the most used. Deciding on the ideal hair loss products can be confusing at times due to the high number of brands available on the market. When it comes to identifying the best hair loss products, you’ll soon discover that the industry for these products is huge and the number of choices available is large.
Folexin Coupons
For adult men, you can choose to look for products that contain wheat and silk proteins. There are a lot of products out there that address many distinct causes of hair thinning. Take note that you will find plenty of merchandise on the market that is COMPLETELY USELESS. The most effective hair loss products for men are likely to include BOTH a topical and an oral component of the therapy. Read More About Folexin
You can also look for hair loss products that contain plant-based fatty acids, which may inhibit the enzymes whose reaction with testosterone can lead to hair loss. If you are hunting for hair loss products that can help keep your locks, it’s crucial that you keep a few essential things in mind. What’s the best hair loss product for one person might not satisfy everybody, so it is vital that you do your research. If you’re trying to buy the best hair loss products, the marketplace can be very confusing. Should you find the best hair loss product for your circumstances, it will not only make you feel better but also look better.
Folexin Does It Work?
Not all ingredients will be in each individual product. There are numerous products for sale, and it seems like all their advertisements claim they’re the only product you need to have beautiful hair again. While there are many goods on the marketplace, you want the best one for you. Although there are a number of hair loss products for women on the market, I cannot bet my money on most of them, as they do not keep their promises.
Whichever treatment you decide on, you ought to be aware that when you quit using the product, the problem will return. Natural products are made from nutrients that your hair needs in order to grow and stay healthier. You want to buy hair loss products that contain ingredients such as zinc, pro-vitamin B5, and saw palmetto. When you find the right hair loss products for you, make sure to keep using them so as not to lose any new hair growth you’ve achieved. Finding the best hair loss products for women can be as easy as ensuring you get the best proven ingredients in the right combination. If you’ve started looking, then you have realised that finding the best hair loss products for women can be difficult and confusing.
Folexin Before And After
To genuinely discover what is causing your hair to fall out, many urge you to get bloodwork done to ensure there aren’t any underlying causes. When you start to lose your hair, you may undoubtedly begin a frantic search for something that will stop the process. There are different ways you can save your hair without spending a great deal of time and money. By using these reports, you’ll be able to discover the best solution for your hair thinning. Sadly, it’s not quite as simple as it sounds to regrow hair naturally. If you have noticed the signs of more than normal hair loss, then you may be a candidate for hair loss solutions that you would be able to acquire over the counter or as prescription medication.
A Messy Guide to Dealing With the Messy Truth | A Messy Guide to Dealing With the Messy Truth
Reflections from the Data Viz Society’s discussion on how to deal with complex truths and stakeholder conflict
On a dedicated channel, #dvs-topics-in-data-viz, in the Data Visualization Society Slack, our members discuss questions and issues pertinent to the field of data visualization. Discussion topics rotate every two weeks, and while subjects vary, each one challenges our members to think deeply and holistically about questions that affect the field of data visualization. At the end of each discussion, the moderator recaps some of the insights and observations in a post on Nightingale. You can find all of the other discussions here.
Inspired by the growth of the Data Visualization Society, I decided to try out my first visualization competition. I am a massive environmentalist, especially on issues like climate change, so I cheerfully set out to compete in the National Geographic Plastic Pollution competition. The competition had the explicit goal of convincing individuals to reduce their plastic consumption for the health of the planet, but the more I looked, the more I ran into a problem.
It turns out plastics are freaking amazing (kind of, not really, but still good)! They are lightweight, durable, opaque or transparent, and come in many different varieties. But they’re also made out of fossil fuels, and if mismanaged they end up in the oceans where they break into pieces, infiltrating and infecting every level of our ecosphere.
Everything was more complex than I expected, each truth made up of multiple parts, trade-offs, and nuances. My goal was to reduce CO2 emissions, but emissions drop only 3 percent with total global recycling. Meanwhile, 99 percent of emissions happen before a product reaches a consumer. And in many cases, plastic is actually great at reducing emissions. All these facts were in direct conflict with the goal of the competition.
A screenshot from an interactive visualization, made by The Ocean Cleanup Project — with added personal notes. It shows the 1000 river nodes that send 80% of mismanaged plastic into the ocean.
Even when you explain this, or remind people of the sheer magnitude of the problem, or even show where and how plastic is mismanaged, you get the opposite reaction to what you want. You get helplessness. Aimlessness. A lack of responsibility.
Distraught, I decided to give up. I had good intentions, National Geographic had good intentions, we both wanted to help the world. It was as if my “client” and I had different perspectives, and this made a mess of the data and fluid truth. I did not know how to balance the intended outcome with my research, or how to make the situation any better.
So, I hosted a discussion for the Data Visualization Society on dealing with complex truths and connected stakeholder conflict instead.
Libbie Weimer uses a legal lens to deconstruct complex truths into three distinct categories. Something that seemed simple often can be shown to be complicated through “uncertainty … contradictory or lack of evidence … [or] contextualisation of the evidence.” She writes that the notion that “99 percent of CO2 emissions happen upstream of the consumer … complicates Natgeo’s narrative about plastic pollution” for the general population, by contextualising the evidence away from their direct choices. She explains that you must be comfortable reacting and working with each category to be legally sound and achieve a bulletproof result. In data visualization, taking the same approach should lead to enhanced durability.
One could argue in return, “why are we focusing on emissions?” or “people can choose more sustainable products,” contextualizing the evidence again. Despite thinking you have an escape hatch, just the conflict of ideas can lead to more problems like contradiction and uncertainty. Those messages might not even reach your audience. Instead, when working with organizations, Weimer first tries to emphasize a shared value and image of credibility. By being brave with the truth and purposefully elaborating on the nuanced situation, you must now trust the audience, and then the messy truth flows freely from a place of respect. In turn, you can achieve effective, credible communication, by being secure on all fronts. Her experience tells her that it is hard to get everyone on board doing this, but “the essential component is the relationship” with the client. When you trust each other, it is easier to trust others. By leveraging any shared values, she concludes that anyone can achieve pluralistic and multi-faceted visualizations.
Credibility is not the only value critical to dealing with messy truths. Ben Olmos shared a collection of his tried and tested values to rationalize decision-making around complex truths. The ones that stuck out to me personally were:
Trust: I value and want to maintain your trust
Accountability: Don’t attach your name to anything that sucks
Integrity: Be open, honest, and real
In explaining his values, Ben emphasized the idea of his own personal brand as well as the company's, noting that if you "compromise your integrity and reputation … you may find it to be far more difficult to earn it back." Instead, he says that you should not embed yourself or your client into the data; just allow the truth to naturally reveal itself. He writes:
“As data analysts, scientists, and or visualization experts, our job is to tell a story with data. I was hired to help, educate, inform, and ultimately make those who hired me look great. I don’t manufacture the data, the organization or the environment does. All I can do is ensure we are properly collecting it, ethically cleaning it, and accurately illustrating it, so that it is as solid as it can be.”
In the same scope, Bill Shander sees himself as a “data fiduciary” rather than someone who can deliver an outcome from what he gets. Like the “ghost-whisperer” of your information, he takes a relative perspective of the “truth” and tries to build a direct communication link with stakeholders, minimizing any surprise shocks.
Even when given a desired outcome, Elijah Meeks recalls one incident where there was an implied critique of his work, “that he could not ‘pull it off.’” It was seen as his “professional failing” for not being able to produce insights that aligned with the client’s idea of the truth. Others also mentioned that they have interacted with people who disregarded key insights or asked them to spin the data to make the truth more palatable. Meeks explains that the conflict between stakeholders over the truth and the accompanying complex approach, makes the truth singular. He writes:
“Instead of building data products to support impact and decision making, the lowest common denominator is invariably: ‘Can we agree from an engineering perspective that the data is good’ and then the metrics used for decision making are simple counts of that. Then it’s just [a] Move Number.”
When arguing about nuances and complexity, the messy approach falls out of favour. The only thing that can circumvent the war of ideas is the data and its inherent credibility. The situation is often complex, so something like the number of plastic bags wasted, in its tangibility and purity, becomes the key metric that defines success (the Move Number). Everyone reacts to that number now, and that shapes outcomes, insights, and discourse. Even if cotton bags and paper bags are worse in different ways, all that is counted is the fact that they contributed to not wasting a plastic bag.
On this subject, I commented that it was “as if the race for optimization and dashboarding stands in the way of slow exploratory thinking, and that the struggle to find the answer is not worth its weight if we assume every other answer will come from the same place.”
Bridget Cogley was on this same boat, saying: “People want to achieve X, so they hyperfocus into Y metric at the expense of so many other data points.” Stating more generally that focusing on specific singular aspects while missing the complex view is part of our human nature, she concludes that we have to define success differently and that ethics-based collaboration is key. Her success comes from walking people through the epiphany, the “aha” moment, and exploring possibilities from there together. You can put the seed of an idea in their head and watch it grow within them, with their own complex rationalisation. During their subsequent dive into the messy truth, you can share the conflict and come up with solutions together. | https://medium.com/nightingale/a-messy-guide-for-dealing-with-the-messy-truth-557d56bcfe2b | ['Oren Bahari'] | 2020-07-06 20:26:50.129000+00:00 | ['Topicsindv', 'Decision Making', 'Problem Solving', 'Data Science', 'Data Visualization'] |
Ethereum 2.0: Launch of the Staking on Coinbase soon | Ethereum 2.0: Understanding ETH 2.0
Ethereum staking: Definition
Coinbase and Ethereum 2.0: Staking on the platform soon?
The first phase of Ethereum 2.0 launched on Tuesday, December 1, 2020. In an article on its site, Coinbase indicates that it plans to support the major upgrade by launching a staking service for Ethereum soon; the platform will fully support ETH 2.0 with staking and trading functionality.
Ethereum 2.0: Understanding ETH 2.0
The long-awaited first of December date for Ethereum fans has finally arrived.
Ethereum 2.0 has launched, accompanied by a rise in the ether price, which now exceeds 600 dollars. The market value of this second-largest blockchain is close to 70 billion dollars. But how is this update actually unfolding?
Since its launch in 2015, Ethereum has become the most widely used decentralized network in the crypto-sphere. It was then necessary to rethink and redefine the architecture of the network in order to provide solutions to the problems the network is facing: one of the main problems of the Ethereum block chain today is scalability.
The popularity of the network has led to a large number of transactions being executed on Ethereum's public network, and these transactions have begun to test the network's scalability limits. Block generation is limited to roughly 7–15 transactions per second. This transaction congestion results in long waiting times for Ethereum users.
Ethereum 2.0 is the update of the Ethereum blockchan. Eth2.0 refers to a set of interconnected updates that will make Ethereum more scalable, secure and sustainable.
These updates are realized by several teams of the Ethereum ecosystem.
The upgrade aims to improve:
Network efficiency: Most proof of stake networks have a small set of validators, which leads to a more centralized system and reduces network security. Ethereum 2.0 requires a minimum of 16,384 validators, which makes it much more decentralized and therefore more secure.
The scalability of the Ethereum network: the network will be able to process more transactions; with Ethereum 1.0, the network could only handle about 30 transactions per second, which caused delays and congestion on the network.
Ethereum 2.0 promises up to 100,000 transactions per second.
ETH 2.0 will be launched in three phases:
- The first phase 0: known as the Beacon Chain, it was just launched yesterday, December 1st and involves the introduction of Proof of Stake alongside the pre-existing Proof of Work mechanism. The latter (POS) is designed to minimize the disruption of normal chain activities during these early stages of transition.
- Phase 1: Shard chains will be introduced in Phase 1, scheduled for 2021, to accelerate the network’s path to greater scalability. The network is expected to be launched with 64 shards (allowing 64 times the throughput of Ethereum 1.0) although at launch they will not support accounts or smart contracts.
- Phase 2: is planned somewhere in 2022 and will unlock the other features of the Ethereum Shard chain.
It will also add Ether accounts and allow transfers and withdrawals, implement cross transfers and contract calls. This second phase will create runtime environments for scalable applications that are built on Ethereum 2.0.
The PoW mechanism will be retired in Phase 2, with the Ethereum 1.0 data being absorbed into the ETH 2.0 public network.
Ethereum 2.0 : A test of Spadina in September 2020
In September 2020, the developers of Ethereum 2.0 had launched a new test network.
This new test followed the Spadina test, which had encountered problems during its launch, forcing the team to run at least one more "dress rehearsal" before the mainnet launch.
Spadina is a short-term testnet designed to test the genesis, or creation of the first block, on Ethereum 2.0. It differs from the larger testnet Medalla, which is a general sandbox representing the operational version of the network.
The problems with the Spadina testnet were related to low participation, associated with “confusion” and “invalid repositories”.
Testnets to test different parts of the Eth 2.0 launch began to be deployed in 2019.
Medalla is the first Eth 2.0 Phase 0 Testnet supported by the Ethereum Foundation.
This testnet was preceded by a series of other tests such as Goerli and Schlesi.
Danny Ryan, researcher at the Ethereum Foundation said in an article entitled “eth2 quick update no. 16” on September 14, 2020:
“The main objective is to give us all another chance to go through one of the most difficult and risky parts of the process — deposition and genesis — before we start the main network.
If all goes well, this should give us greater peace of mind before we embark on the most difficult and interesting part. “
From Proof of Work (PoW) to Proof of Stake (PoS)
The new version of Ethereum will notably allow to make the network more scalable and to move from proof of work to proof of stake.
Proof of work is a block validation method that relies on a machine's computing power, while proof of stake is a block validation method that relies on proving ownership of a certain amount of cryptocurrency.
In Proof of Work, miners run nodes and expend computing power to solve complex mathematical problems in a contest to mine the next block.
Proof of Stake replaces the two main components of PoW (miners and electricity) with validators and staked ETH on Ethereum 2.0. In general, validators replace miners as the individuals who maintain the agreed state of the network, and they are rewarded when randomly selected to propose the next block.
Proof of stake is thus the most important change in Ethereum 2.0, as it reforms the crypto-economic incentive structure for blockchain validation. It will change the way the network operates in many ways. The transaction validation mechanism will shift from mining to staking, making the network more secure. Validators will have to stake 32 ETH to run a node. Validators will be randomly selected, and rewards will be awarded for each successfully verified block added to the network.
The main advantage of PoS is that it is much more energy efficient than PoW, as it decouples energy-intensive computing from the consensus algorithm. PoS will reduce the energy consumption of the network, as validators do not need a huge amount of computing power to participate.
Ethereum staking: Definition
We previously talked about depositing 32 ETH in order to activate the validation software for ETH 2.0; this is called staking.
The validator is responsible for storing data, processing transactions and adding new blocks to the blockchain.
Theoretically, anyone can participate in the staking of any blockchain using a proof-of-stake consensus.
For Ethereum, it is a question of reorganizing the whole Ethereum platform, effectively launching a new, more scalable version.
Note that the major difference between Bitcoin and Ethereum networks is the way it validates transactions.
The Bitcoin network uses proof of work (PoW), while Ethereum moves from proof of work to proof of stake (PoS). Proof of stake means that you show your commitment to the network by locking up an ETH deposit to participate as a network validator.
This will happen when Ethereum upgrades its network to version 2.0, an enhancement that will be completed in 2021 according to previous phases.
A validator is thus a participant of the Ethereum network that manages the nodes that propose and validate the blocks of the tag chain. To be able to “propose and validate”, validators must involve the ETH they own.
However, the network can sanction validators who try to propose false blocks or act in a malicious way. Also, validators will have to be online consistently, under penalty of minor sanctions.
The rate of return on staked ETH is expected to be approximately 4–10%. A penalty mechanism called "slashing" will be applied to any validator acting maliciously towards the network, by taking a portion of the validator's stake.
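As a rough illustration (assuming the 4–10% range above, which is an estimate rather than a guaranteed rate), the expected yearly reward on a single 32 ETH validator can be sketched in a few lines of Python:

```python
# Rough annual staking reward for one validator, using the
# illustrative 4-10% APR range quoted above.
STAKE_ETH = 32.0

def annual_reward(apr: float, stake: float = STAKE_ETH) -> float:
    """Return the expected yearly reward in ETH for a given APR."""
    return stake * apr

low = annual_reward(0.04)   # ~1.28 ETH at 4%
high = annual_reward(0.10)  # ~3.2 ETH at 10%
print(f"Expected yearly reward: {low:.2f} - {high:.2f} ETH")
```

This ignores compounding, slashing penalties, and downtime, all of which would change the realized return.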
Staking: Benefits and Risks
It is important to know that during a Staking, one has:
The possibility of generating income by holding the cryptocurrency;
Rewards are given for actions that help the network to reach a consensus. You will be rewarded for consolidating transactions into a new block or checking the work of other validators, as this is what allows the chain to operate securely.
The requirement to have 32 ETHs to become a full validator or a few ETHs to join a placement group. You will also have to manage an “Eth1” or mainnet client. The launch pad will guide you through the processes and hardware requirements. You can also use a management API.
However, by staking, users lock their assets in crypto for a defined period of time. Thus, while you can be rewarded for work that benefits the network, you can lose ETH for malicious actions, being offline and not participating.
Coinbase and Ethereum 2.0: Staking on the platform soon?
Holders of Ether (ETH) will soon have the possibility to stake their tokens on Coinbase. The platform has presented plans to support ETH2 through staking and trading. Coinbase customers will be able to convert the ETH in their Coinbase accounts to ETH2 and earn staking rewards.
While the ETH2 tokens wagered remain locked in the tag chain, Coinbase will also allow trading between ETH2, ETH and all other supported currencies, providing liquidity to our clients.
Source: coinbase blog
On its blog, via an article titled “Ethereum 2.0: Staking rewards coming soon to Coinbase”, the company said:
“While ETH2 tokens that are wagered remain locked on the tag chain, Coinbase will also allow trading between ETH2, ETH and all other supported currencies, which will provide liquidity to our customers”.
They added:
“We will launch the above functionality to customers in eligible jurisdictions beginning in early 2021. We will provide more details as we approach the launch of each functionality. »
Ethereum holders who choose to put their coins on deposit will not be able to withdraw or transfer their participation until the end of Phase 1 — a process that could take years. Several companies, including Darma Capital, are planning to offer intermediary bids that would allow users to continue to access their capital.
Coinbase is therefore taking the lead on Ethereum staking, as its users will be able to easily do ETH 2.0 staking if they have ETH tokens on the crypto exchange, which today has more than 30 million users worldwide.
The Magna Numeris team is delighted with this update of Ethereum. And you, what do you think about it?
Written by Laetisia Harson, Project Manager at Magna Numeris
https://twitter.com/CartamOfficial | https://medium.com/@magnanumeris/ethereum-2-0-launch-of-the-staking-on-coinbase-soon-4d708c95e87c | ['Magna Numeris'] | 2020-12-02 09:29:04.140000+00:00 | ['Coinbase', 'Ethereum', 'Staking', 'Ethereum Blockchain'] |
Prosperity Requires Money | by Randy Gage
Posted By: Randy Gage / December 22, 2020
There is a qualitative difference between money and prosperity. But please don’t make the mistake of many people, believing that money is not part of the prosperity equation. Money is an integral part of prosperity and any belief otherwise will greatly diminish the level of prosperity you will manifest in your life.
True prosperity is multifaceted. An accurate definition of prosperity would have to include optimal health, harmonious relationships, spiritual sustenance, and most definitely…money and material things.
People sometimes argue with me that Mother Teresa was a living example of someone who was prosperous without money and material things. I find that hard to believe, since I made an ongoing monthly donation to her charity. That doesn’t mean everyone everywhere has to have money. There certainly are ascetics who eschew all material comforts who profess to be prosperous and they may be. But I believe that for more than 99 percent of us, the ascetic lifestyle would not feel prosperous to us. (If you’re wondering if you fit in the less than one percent group, I’ll posit that since you’re getting broadband and reading this on a device right now…you’re not.)
There are lots of people with loads of cash who have sacrificed their health for it, and they are certainly not prosperous. Likewise, those with high net worths, but stuck in negative, dysfunctional relationships are not prosperous either. You also need a harmonious mindset, not necessarily religious, but experience a day-to-day spiritual connection that grounds you. Then we must add money freedom to complete a truly prosperous life. If you are healthy, have a great marriage, are spiritually grounded, but struggle to pay your credit cards each month — you are certainly not prosperous either.
Get the money thing out of the way…
It’s not that money and material things make you happy. They don’t. What money and material things can provide is a way to express yourself that enhances your level of happiness. When we affirm, “get the money thing out of the way,” this elegantly yet simply explains the dynamic that matters: You don’t have to reach a certain net worth, have more money than your neighbors, or make the FORBES list to be prosperous. That’s just social signaling bullshit that prevents you from feeling prosperous. True prosperity just means you have enough wealth so that you don’t have worry about survival and necessities, or are forced to make every decision based on the cost. Money freedom means you’ve gotten the money thing out of the way and it’s an important element of a prosperous life.
Peace,
- RG
| https://medium.com/prosperity-success/prosperity-requires-money-4dc12f2ed965 | ['Randy Gage'] | 2020-12-22 14:56:10.078000+00:00 | ['Money', 'Wealth', 'Prosperity', 'Happiness', 'Life Lessons'] |
Changing Nuances on the ‘Prime Minister’ | It has been interesting to observe the changing political language on the BBI report particularly on the roles of President and Prime Minister but first, some context.
In 2004 Bomas constitutional conference, the role of the Prime Minister vis-a-vis that of the President was one of the deal-breakers of that constitutional search.
Towards the last day of the constitutional conference, over 415 notices of motion were filed by those who sought to make amendments on the draft report.
George Omari Nyamweya (Delegate No. 615), representing the Democratic Party (President Mwai Kibaki's party), filed several amendments whose import was to water down the roles assigned to the Prime Minister.
The political consensus had been that the executive authority of the country to rest on the President, the Prime Minister and the Cabinet. The DP delegation had strenuously opposed this arrangement, and wanted the totality of executive authority to rest squarely on the president.
A splinter group within the government of Mwai Kibaki, then made almost entirely of members of the LDP Party then led by Raila Odinga, preferred a situation where the President was the head of state while the Prime Minister was the head of government.
While the DP/NAK group saw this arrangement as creating what became known as “two centers of power”, the LDP group argued that the arrangement was one more chip off the monster called “imperial presidency”.
You must not forget that the 2002 election and the LDP's support for Kibaki had been premised on Kibaki appointing Raila Odinga as Prime Minister, with the powers to run government. This "MoU" became a sticking issue in the early days of the NARC government because when Kibaki constituted his first cabinet, he short-changed the LDP.
The demands by the opposition, and their description of the role of the prime minister, gave birth to what was now widely, and, derisively, referred to as “executive prime minister”.
Kibaki lost that referendum. The opposition, buoyed by the victory against Kibaki, formed a party called ODM and spoilt for a fight. The election ended in chaos.
The formation of the Grand Coalition government revived the structure of government that had seen the talks collapse at Bomas. The Prime Minister returned.
There have been two more elections since 2007, and the post of Prime Minister, which has been contentious since 2003, now looks like an eventuality.
In the BBI, while the President retains the dual headship of state and government, the Prime Minister now has fairly enhanced powers.
The BBI Prime Minister, though still lower in stature as was contained in the aspirations of the LDP delegation at Bomas, is still way above what the DP/NAK hardliners offered as their “irreducible minimum”.
One more irony is that karma is a bitch. The current descendants of DP/NAK may just be the first to taste the “empty” powers of the Prime Minister, while their former LDP antagonists ascend to the presidency, in an arrangement endorsed by both of them! | https://medium.com/@dikembe/changing-nuances-on-the-prime-minister-a26c4864fd6b | [] | 2020-12-07 14:03:03.796000+00:00 | ['President', 'Bbi', 'Nak', 'Prime Minister'] |
Virtual Clusters for Kubernetes — Benefits and Use Cases | Virtual Clusters for Kubernetes — Benefits and Use Cases
Virtual Kubernetes clusters could be the next driver for Kubernetes adoption
Photo by Kelvin Ang on Unsplash.
Virtual Kubernetes clusters (vClusters) have the potential to bring Kubernetes adoption to the next level. They run in a physical Kubernetes cluster and can be used in the same way as normal clusters, but they are still just a virtual construct. For a detailed definition and description of virtual Kubernetes clusters, take a look at this article.
Similarly to virtual machines that revolutionized the use of physical servers, virtual Kubernetes clusters have some benefits compared to physical clusters that make them particularly useful for some scenarios.
In this article, I will describe the benefits of virtual Kubernetes clusters and provide some use cases in which vClusters are more advantageous than other solutions such as many individual clusters or namespace-based multi-tenancy. | https://medium.com/better-programming/virtual-clusters-for-kubernetes-benefits-use-cases-a4eee1c5c5a5 | ['Daniel Thiry'] | 2020-09-01 17:17:12.375000+00:00 | ['DevOps', 'Programming', 'Kubernetes Cluster', 'Kubernetes', 'Containers'] |
PySpark process Multi char Delimiter Dataset | The objective of this article is to process multiple delimited files using Apache spark with Python Programming language. This is a real-time scenario where an application can share multiple delimited file,s and the Dev Team has to process the same. We will learn how we can handle the challenge.
The input Data set is as below:
Name@@#Age <--Header
vivek, chaudhary@@#30 <--row1
john, morgan@@#28 <--row2
Approach1: Let’s try to read the file using read.csv() and see the output:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('multiple_delimiter').getOrCreate()
test_df = spark.read.csv('D:\python_coding\pyspark_tutorial\multiple_delimiter.csv')
test_df.show()
Output
#Note: Output is not the desired one and so the processing will not yield the desired results
Approach2: Next, read the file using read.csv() with option() parameter and pass the delimiter as an argument having the value ‘@@#’ and see the output:
test_df = spark.read.option('delimiter', '@@#').csv('D:\python_coding\pyspark_tutorial\multiple_delimiter.csv')
test_df.show(truncate=0)
error
#Note: Spark throws an error when we try to pass a delimiter of more than one character.
Approach3: Next way is to use read.text() method of spark.
mult_df = spark.read.text('D:\python_coding\pyspark_tutorial\multiple_delimiter.csv')
mult_df.show(truncate=0)
spark.read.text
#Note: spark.read.text returns a DataFrame.
Each line in a text file represents a record in the DataFrame with just one column, "value". To convert it into multiple columns, we will use the map transformation and the split method to transform and split the column values.
#first() returns the first record of the dataset
header = mult_df.first()[0]
print(header)
Output:

Name@@#Age

#split the string on the basis of the delimiter
#define the schema of the DataFrame to be created
schema = header.split('@@#')
print(schema)

Output:

['Name', 'Age']
The next step is to split the row and create separate columns:
#filter operation removes the header
#map operation splits each record as per the delimiter
#.rdd converts the DF to an RDD and toDF converts the RDD back to a DF
mult_df.filter(mult_df['value'] != header).rdd.map(lambda x: x[0].split('@@#')).toDF(schema).show()
Final Output
Hurray!! We are able to split the data on the basis of the multi-character delimiter '@@#'.
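The core transformation — split the header once to get a schema, then split every remaining row on the same delimiter — can be sanity-checked without Spark, using the sample data from above in plain Python:

```python
# Plain-Python sanity check of the same multi-delimiter split logic
# used in the Spark map() step (no Spark required).
lines = [
    "Name@@#Age",
    "vivek, chaudhary@@#30",
    "john, morgan@@#28",
]

DELIMITER = '@@#'

header = lines[0]
schema = header.split(DELIMITER)  # ['Name', 'Age']

# Equivalent of the filter(...) + map(lambda x: x[0].split('@@#')) steps
rows = [line.split(DELIMITER) for line in lines if line != header]

print(schema)
for row in rows:
    print(dict(zip(schema, row)))
```

The list comprehension plays the role of the filter() and map() transformations in the Spark version.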
Summary:
· Read a multi-character-delimited dataset using the spark.read.text() method
· use of map(), filter() transformations
Thanks to all for reading my blog, and If you like my content and explanation, please follow me on medium and share your feedback, which will always help all of us to enhance our knowledge. | https://pub.towardsai.net/pyspark-process-multiple-delimited-data-ef99fa05c6f7 | ['Vivek Chaudhary'] | 2020-09-17 06:08:52.798000+00:00 | ['Python Programming', 'Big Data', 'Programming', 'Apache Spark', 'Python3'] |
[Java-201a] The Hidden Dangers of Scanner | Don’t Close Scanner!
When we use Scanner and leave it open, our IDE may complain that we have a resource leak. Even if the IDE does not complain, usually it’s a good idea to close something if we don’t need it anymore. The intuition, then, is to simply close it… Right? Consider the following code snippet:
public class Main {
public static void getInput1() {
Scanner scanner = new Scanner(System.in);
scanner.nextLine(); // Ask for user input
scanner.close();
}
public static void getInput2() {
Scanner scanner = new Scanner(System.in);
scanner.nextLine(); // Ask for user input
scanner.close();
}
public static void main(String[] args) {
getInput1();
getInput2();
}
}
At a glance, there is nothing wrong with the code. We create a new Scanner, close it, create another one, and close that one. However, if we try to run it, this is what we get:
ss
Exception in thread "main" java.util.NoSuchElementException: No line found
at java.util.Scanner.nextLine(Scanner.java:1540)
at com.usc.csci201x.Main.getInput2(Main.java:17)
at com.usc.csci201x.Main.main(Main.java:23) Process finished with exit code 1
I typed ss.
What's going on here, then? Observe the constructor of Scanner when we instantiated it: we gave it System.in. That's an okay thing to do. However, it has an implication that may not be obvious: when we do scanner.close(), we are closing System.in as well.
What is System.in ?
System.in is the standard input stream opened up by the JVM. Normally it stays open throughout the lifetime of the program so we can capture user input. However, if we close it prematurely, we will not be able to capture user input anymore. Hence, when we try to read input with System.in again, it tells us No line found.
The Solution
Just don’t close Scanner.
If you must close Scanner, you can wrap System.in inside of a FilterInputStream and override the close() method:
Scanner scanner = new Scanner(new FilterInputStream(System.in) {
@Override
public void close() throws IOException {
//don't close System.in!
}
});
And now it is safe to call scanner.close() as it will only close FilterInputStream instead of System.in .
If your application is single-threaded, consider making Scanner static. Maybe like so: | https://medium.com/swlh/java-201a-the-hidden-dangers-of-scanner-7c8d651a1943 | ['Jack Boyuan Xu'] | 2020-01-25 07:17:51.034000+00:00 | ['USC', 'Viterbi', 'Programming', 'Java'] |
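A minimal sketch of that static-Scanner pattern (a hypothetical single-class program — the key point is one Scanner shared for the program's lifetime and never closed, so System.in stays open):

```java
import java.util.Scanner;

public class Main {
    // One Scanner shared by the whole program; close() is never called on it,
    // so System.in remains usable until the JVM exits.
    private static final Scanner SCANNER = new Scanner(System.in);

    public static String getInput1() {
        return SCANNER.nextLine();
    }

    public static String getInput2() {
        return SCANNER.nextLine();
    }

    public static void main(String[] args) {
        // Both methods read through the same Scanner, so no stream
        // is ever closed between reads.
        System.out.println("scanner class: " + SCANNER.getClass().getSimpleName());
    }
}
```

Any method in the class can now read via the shared SCANNER field, and because close() is never called, System.in remains available for later reads.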
5 Companies using blockchain to change the property industry | The global property market was worth $228 trillion in 2017 according to Savills, a real estate firm. Property is more valuable than all the world’s stocks, shares and securitised debt combined.
But while property is king, it has its issues. In many industrialised economies, the young and working class are priced out of the property market, middlemen are becoming increasingly dominant and wealth is more concentrated in the hands of the few.
Can blockchain help solve some of those problems? These five startups certainly think so.
DECENTRALISING THE PROPERTY MARKET WITH OPENBRIX
The property world is dominated by middlemen. Companies like Zoopla and Rightmove don’t own any property, but they control nearly all the interactions between buyers and sellers.
They charge property agents access to that information, which ultimately is passed on to the customer. OpenBrix wants to change that.
OpenBrix is a decentralised network that connects all key players in the property letting and sales market on a single platform. The network is designed to give everyone access to all the data, and to incentivise users to maintain the network via a token system. So if an estate agent provides data that helps the network, they're rewarded, as are ordinary users.
It’s a more balanced system that helps prevent middlemen from dominating and dictating terms to buyers and sellers.
REDISTRIBUTING PROPERTY WEALTH WITH HIP
In most industrialised countries, home ownership is in decline. The chances of a young adult on a middle income owning a home in the UK have more than halved in the past two decades according to the Guardian newspaper.
Lenders control access to both property and money via the mortgage system, which encumbers buyers with long-term debt. That's where companies like HiP want to come in.
HiP aims to be the first to make real estate wealth more accessible at scale by turning equity and debt underlying a property into tradable, interactive assets.
The UK-based company wants to turn the equity in a home into a currency that owners can release and use without having to sell their home or increase the size of their debt. Investors can then buy smaller chunks in property, allowing more people to benefit from partial home ownership.
ACCELERATE THE BUYING PROCESS WITH CLICKTOPURCHASE
Buying a house is a long, drawn out process. The multiple layers of bureaucracy involved: property portal, estate agent, solicitor, land registry, surveyors — can leave buyers and sellers in limbo. That’s where companies like Clicktopurchase come in.
Clicktopurchase is an online execution platform for buying and selling property.
It streamlines the process of verifying genuine buyers and sellers, creating a framework for the buying process to take place in real time, as opposed to the weeks and sometimes months it takes in the industry currently.
The company also has the ability to provide an option for real time auctioning with a live auctioneer interacting with auction bidders.
REDEFINING THE RENTAL INDUSTRY WITH ATLANT
Short term rentals have risen sharply thanks to the likes of Airbnb. However, the dream of peer-to-peer renting has gone slightly askew.
Service fees from these middlemen are putting up prices. On average, Airbnb charge a service fee of up to 12% from the guest and 3% from the host, largely to compensate its 3500+ employees who process transactions in a centralized fashion.
That’s where companies like ATLANT come in.
ATLANT addresses both the short- and long-term real estate rental markets by lowering fees, using decentralized conflict resolution and making this market truly P2P, eliminating various middlemen, and also ensuring that reviews and listings are honest, as they are stored on an immutable blockchain.
ATLANT’s decentralized system allows people to earn tokens to mediate helping to keep costs down and reviews honest.
CREATING A DECENTRALISED PROPERTY PLATFORM WITH IMBREX
Current property platforms are maintained and controlled by their owners — which puts the buyers and sellers in a position where their data isn’t their own. Because of that leverage, those companies effectively control the marketplace.
But what would happen if those buyers and sellers became the owners of the data and maintainers of the network? That’s where IMBREX comes in.
IMBREX is building a global, shared, real-estate database, property identification layer and data exchange on Ethereum. The France-based company allows properties to be listed and searched for free. Users receive rewards for contributing market data to the platform, or posting detailed information about a specific listed property.
Want to know more?
This is just the tip of the crypto iceberg. Aysha has a whole host of easy-to-understand articles that make learning about cryptocurrencies, blockchain and DLT technology really bloody simple.
Click here to learn more. | https://medium.com/aysha-io/5-companies-using-blockchain-to-change-the-property-industry-379d2bc1c444 | ['Matt Hussey'] | 2018-07-19 09:14:21.850000+00:00 | ['Bitcoin', 'Property', 'Blockchain', 'Real Estate', 'Rental'] |
What you admire is your success. It’s not all $$$, popularity, luxury… | If it’s beautiful to you, it’s your path to success
For my cat Sookie, beauty is a crystal Christmas ornament
I’m currently a nomadic bootstrapper. I’ve had one failed and one successful business. Before striking out on my own, I worked as a an amateur classical pianist, then a reformed theologian, a home automation engineer, a copywriter, an editor, an instructional designer, a product manager. I’ve been successful at every one of these jobs — even when I failed. Why? Because I focused on building something beautiful. But when the beauty faded, I moved on.
Beauty is that which delights, inspires awe, and connects you to a greater truth beyond yourself. It is one of the rare things in life that is valuable in and of itself. For the beholder, it is without compare. It remains beautiful long after we are around to admire it.
Yet beauty is subjective, relative, and fleeting. What I find beautiful today may change in two months. Still, the truth remains: what I find beautiful leads me to uncover and further understand my philosophy of success.
Success is that which drives me, motivates me, pushes me to be better. It is not necessarily more money, more space, more followers, more muscle mass, more users. Success is what keeps me going when failure appears.
So what do beauty and success have to do with one another?
I have found that if we look at what we find beautiful, we’ll also understand why we admire, envy, and go after a particular kind of success.
I love reading. A well-articulated and novel idea encapsulated in the written word is breathtaking to me. Ariel and Will Durant’s The Lessons of History is an unrivaled example of this.
I love classical piano music. Rachmaninoff’s Piano Concerto №2 transcends the corporeal and lifts me to a universal plane of beauty. I disappear and only the music remains.
What does this tell me of success? It shows that what I admire is also what I aspire to create. What I find beautiful is where I find success.
So perhaps, instead of searching for success, I should focus on understanding what I find beautiful and why. Then drop everything I find ugly and work voraciously towards making something beautiful. My own beauty.
So what is your beauty?
A truth discovered?
A theory proven?
A book beloved?
A film revered?
A beauty desired?
A mind envied?
A net worth unequaled?
A legacy written?
A record unbeaten?
A trophy won?
A criminal convicted?
A life self-sustaining?
A fanbase expanding?
A monetization model scaling?
A remote job unbound?
An online business profiting?
A novel anticipating?
A painting studied?
A gadget addicting?
A TV show enrapturing?
A concerto enthralling?
A condo admired?
A surgery perfected?
A theory discovered?
A life saved?
A virus eradicated?
A job-creating small business?
An industry disrupted?
A crop of peaches devoured?
A family bonded?
A physique distracting?
A song chart climbing?
A student driven?
A luxury car collected?
An app sold?
An ROI exponential?
A meal photographed?
A life nomadic?
A policy controversial?
A following religious? | https://medium.com/the-free-range-life/what-you-admire-is-what-you-aspire-to-create-474ae9f7eda6 | ['Cory Decker'] | 2020-12-14 20:18:10.294000+00:00 | ['Success', 'Stoicism', 'Beauty', 'Philosophy', 'Lifestyle'] |
The Euler-Mascheroni Constant | Remember the harmonic series?
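That is, the series

```latex
\sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots
```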
More often than not, it serves as one’s first encounter with a series wherein individual terms diminish successively yet the series diverges to infinity. The below quote nicely sums the up notoriousness of the harmonic series:
Today I said to my calculus students, “I know, you’re looking at this series and you don’t see what I’m warning you about. You look and it and you think, ‘I trust this series. I would take candy from this series. I would get in a car with this series.’ But I’m going to warn you, this series is out to get you. Always remember: The harmonic series diverges. Never forget it.” — Source
Indeed. Never forget. The harmonic series diverges. Flirting continuously with convergence all the way, yet slowly but surely making its way to infinity. Increasing ever so slowly. In your grasp, yet out of it. Coldly tantalising its suitors.
By now, the series’ divergence has been confirmed via myriad different proofs, though none more beautiful than the very first one, which stands out for its simplicity. However, in this article, we would be concerned with the question
Is there any function which provides a decent approximation to the partial sums of the harmonic series?
Here, by partial sums, we mean the sum of the first n terms of the series, i.e., 1 + 1/2 + 1/3 + ⋯ + 1/n.
It turns out there is one, the natural logarithm function. As n gets larger, the difference between the partial sums and ln(n) approaches a finite limit. This limit is known as the Euler-Mascheroni constant, γ (gamma).
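In symbols, writing Hₙ for the n-th partial sum:

```latex
H_n = \sum_{k=1}^{n} \frac{1}{k}, \qquad
\gamma = \lim_{n \to \infty} \left( H_n - \ln n \right) \approx 0.57722
```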
First appearing in 1734, the constant derives its name from two mathematicians — the ubiquitous Leonhard Euler, and Lorenzo Mascheroni, an Italian mathematician. The notation γ was adopted most probably because of the constant’s connection with the gamma function (extension of the factorial function). Despite being around for almost 300 years, gamma’s rationality is an open question. Also, it is unknown whether gamma is algebraic or transcendental.
How does the harmonic series relate to the logarithm function? That, precisely, is the content of this article. The process followed here relies on geometrical intuition, and is an archetype of a well established test for series convergence, the integral test.
For better understandability, the article has been divided into four sections | https://medium.com/cantors-paradise/the-euler-mascheroni-constant-4bd34203aa01 | ['Ujjwal Singh'] | 2020-12-14 14:45:04.695000+00:00 | ['Science', 'Math', 'Mathematics', 'History'] |
Apply any metrics in PyTorch | In this article, we will understand how we can apply any metrics from scikit learn library in our pytorch code .
So, in our training loop, when we do model(input_data_from_dataloader), it gives us regression outputs for a regression model, or classification confidence probabilities for our n classification classes. Now, if we are training our model on a GPU or TPU, we need to detach both our predictions and real values to the CPU with the help of this code and change their type to NumPy arrays.
real_targets = real_targets.detach().cpu().numpy()
predicted_targets = predicted_targets.detach().cpu().numpy()
After this, we need to understand what our metric wants to be fed. If it asks for a single prediction and its real target at a time, then we write the following code:
total_sum = 0

for i, j in zip(real_targets, predicted_targets):
    total_sum += sklearn.metrics.OUR_REQUIRED_METRIC(i, j)

# Keep on taking the total sum value from each batch of the dataloader
final_sum = final_sum + total_sum

# taking the average of all the sums we obtained
REQUIRED_ACCURACY_PERCENTAGE = final_sum / number_of_data_in_dataframe
And if our metric asks for the array of all predictions as well as the array of all targets at once, then we need to do our metric evaluation like this:
all_targets = 0
all_predictions = 0

# for each batch of training and validation do this
if batch_index_number > 0:
    all_targets = np.concatenate((all_targets, real_targets), axis=0)
    all_predictions = np.concatenate((all_predictions, predicted_targets), axis=0)
else:
    all_targets = real_targets
    all_predictions = predicted_targets

# finally calculating the value of the evaluation metric like this
REQUIRED_ACCURACY_PERCENTAGE = sklearn.metrics.OUR_REQUIRED_METRIC(all_targets, all_predictions)
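Putting the second pattern together end to end, here is a small self-contained sketch. The metric (accuracy_score), the toy batch arrays, and the variable names are illustrative assumptions; in a real loop the arrays would come from .detach().cpu().numpy() on your tensors.

```python
import numpy as np
from sklearn.metrics import accuracy_score  # stand-in for OUR_REQUIRED_METRIC

# Pretend these pairs are what .detach().cpu().numpy() produced for two
# batches of a dataloader: (real_targets, predicted_targets).
batches = [
    (np.array([1, 0, 1]), np.array([1, 0, 0])),
    (np.array([0, 1]), np.array([0, 1])),
]

all_targets = None
all_predictions = None

for batch_index_number, (real_targets, predicted_targets) in enumerate(batches):
    if batch_index_number > 0:
        all_targets = np.concatenate((all_targets, real_targets), axis=0)
        all_predictions = np.concatenate((all_predictions, predicted_targets), axis=0)
    else:
        all_targets = real_targets
        all_predictions = predicted_targets

accuracy = accuracy_score(all_targets, all_predictions)
print(accuracy)  # 4 of the 5 labels match, so 0.8
```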
Read the comments present before every line of the code written here so that it can be understood better. If you still have any questions with regards to this article, don’t forget to ask them in the comments down below, and until then, happy learning. | https://medium.com/analytics-vidhya/apply-any-metrics-in-pytorch-16e281e06699 | ['Soumo Chatterjee'] | 2020-12-28 16:35:49.266000+00:00 | ['Pytorchevaluationmetric', 'Metrics', 'Learn Applying Evaluation', 'Pytorchmetric', 'Apply Metric Pytorch'] |
Facing Three Fundamental Coronavirus Fears | 1. Since immunity to the novel coronavirus may not last long, doesn’t the virus need to be eradicated in order for the pandemic to end?
Humans generally become immune to a pathogen after immunization or recovery from an infection. Depending on the particular disease or vaccine as well as host characteristics such as the age and health of the individual, immunity can last a lifetime or may be short-lived. Moreover, immunity is not an ‘either/or’ process. Our immunity to different pathogens doesn’t just suddenly switch off. Instead, it wanes over time.
It is true that a person’s immunity to the novel coronavirus, SARS-CoV-2, probably only lasts a few months to a few years. It is also true that outbreaks of two other coronaviruses earlier this century, SARS and MERS, were indeed contained, although not technically eradicated. However, the other four known human coronaviruses, HCoV-229E, -NL63, -OC43, and -HKU1, are considered endemic. These viruses, which have been around longer than the other three, are continuously circulating through the population and typically cause no more symptoms than the common cold.
Interestingly, there is historical evidence that the four endemic coronaviruses were likely the cause of pandemics, or at least epidemics, in the past. Of course, this would have been long enough ago that people didn’t know what a coronavirus was, but humankind managed to recover anyway. These older coronaviruses now permeate among the population. People often become exposed at a young age, and, as with SARS-CoV-2, the vast majority of children have no symptoms or a minor respiratory infection. Even though immunity to these viruses diminishes over time, because they are endemic and continuously circulate through the population, every few years our immune systems are again exposed and receive a refresher course on how to kill the virus.
With the possible exception of geographically isolated locations like Iceland and New Zealand, SARS-CoV-2 is on a path to becoming endemic like its older endemic coronavirus siblings. In fact, since March, for most regions containment has no longer been a strategy for managing SARS-CoV-2 as it was during the previous SARS and MERS outbreaks. SARS and MERS have a higher fatality rate than Covid-19, and it is for that very reason they were able to be contained. Since patients infected with SARS and MERS were more often symptomatic, generally developed more severe symptoms, and had a shorter period of time between exposure and the onset of symptoms, cases were recognized sooner, contacts were more easily traced, and further spread was prevented through quarantining.
“The coronavirus is spreading too rapidly, and too broadly for the U.S. to bring it under control with testing and contact tracing.” — Dr. Anne Schuchat, CDC Deputy Director, 6/29/20
While contact tracing and quarantining strategies remain important during the Covid-19 pandemic, they are aimed at protecting vulnerable groups and reducing the overall transmission of the virus rather than ultimately containing or eradicating the virus. At this point eliminating SARS-CoV-2 worldwide is essentially impossible. Fortunately, however, as was the case with the four endemic coronaviruses, eradication is not a requirement for the pandemic to end. | https://medium.com/microbial-instincts/facing-three-fundamental-fears-about-the-coronavirus-261ba270f402 | ['Bo Stapler'] | 2020-08-11 19:37:46.296000+00:00 | ['Health', 'Science', 'Wellness', 'Coronavirus', 'Covid 19'] |
How to improve the performance of BigQuery queries by optimizing the schema of your tables | How to improve the performance of BigQuery queries by optimizing the schema of your tables
In this article, I take a real table and change its schema in a lossless way so as to improve the performance of queries on that table.
Optimize how your data are stored to achieve better query performance. Photo by Annie Spratt on Unsplash
Queries to optimize
To illustrate that the table schema is improved, we have to measure the performance on realistic queries on real-world datasets. I will use observations of inhalable Particulate Matter (PM10) from the United States Environmental Protection Agency (EPA). The EPA PM10 hourly dataset is available as part of the BigQuery public dataset program.
This is originally a flattened table — basically, there is a row for every hourly observation. The dataset is relatively small (1.4 GB, 40m rows), so the queries should fit very well within the free monthly quota (1 TB). Because it is a small table, though, the improvements won’t be as dramatic as they would have been on larger tables.
Let’s say that we want to find how many instruments per county we have PM10 observations in 2017. The query is:
SELECT
pm10.county_name,
COUNT(DISTINCT pm10.site_num) AS num_instruments
FROM
`bigquery-public-data`.epa_historical_air_quality.pm10_hourly_summary as pm10
WHERE
EXTRACT(YEAR from pm10.date_local) = 2017 AND
pm10.state_name = 'Ohio'
GROUP BY pm10.county_name
This query took 2.4 sec and processed 1.3 GB.
For the second query, let’s say that we want to find the maximum PM10 reading in the City of Columbus, Ohio year-by-year. The city polygons are in another public dataset, and so we will join them:
SELECT
MIN(EXTRACT(YEAR from pm10.date_local)) AS year
, MAX(pm10.sample_measurement) AS PM10
FROM
`bigquery-public-data`.epa_historical_air_quality.pm10_hourly_summary as pm10
CROSS JOIN
`bigquery-public-data`.utility_us.us_cities_area as city
WHERE
pm10.state_name = 'Ohio' AND
city.name = 'Columbus, OH' AND
ST_Within( ST_GeogPoint(pm10.longitude, pm10.latitude),
city.city_geom )
GROUP BY EXTRACT(YEAR from pm10.date_local)
ORDER BY year ASC
This took about 4 minutes, processed 1.4 GB, and yielded PM10 readings in Columbus over the years.
These are the two queries I’ll use to demonstrate the optimizations.
Tip #1: Use Nested fields
The EPA hourly data is in a table each of whose rows is an hourly observation. This means that there’s now a lot of repeated data about stations, etc. Let’s combine all the observations on a single day from the same sensor into an array (see the ARRAY_AGG below) and write this to a new table (make a new dataset named advdata first):
CREATE OR REPLACE TABLE advdata.epa AS SELECT
state_code
, county_code
, site_num
, parameter_code
, poc
, MIN(latitude) as latitude
, MIN(longitude) as longitude
, MIN(datum) as datum
, MIN(parameter_name) as parameter_name
, date_local
, ARRAY_AGG(STRUCT(time_local, date_gmt, sample_measurement, uncertainty, qualifier, date_of_last_change) ORDER BY time_local ASC) AS obs
, STRUCT(MIN(units_of_measure) as units_of_measure
, MIN(mdl) as mdl
, MIN(method_type) as method_type
, MIN(method_code) as method_code
, MIN(method_name) as method_name) AS method
, MIN(state_name) as state_name
, MIN(county_name) as county_name
FROM `bigquery-public-data.epa_historical_air_quality.pm10_hourly_summary`
GROUP BY state_code, county_code, site_num, parameter_code, poc, date_local
The new table has fewer rows (1.7 million) but it is still about 1.41 GB because we have not lost any data! The difference is that we are storing the observed values as arrays within a row. So, the number of rows has been cut down to 1/24 of the original number.
Querying for instruments by county is now:
SELECT
pm10.county_name,
COUNT(DISTINCT pm10.site_num) AS num_instruments
FROM
advdata.epa as pm10
WHERE
EXTRACT(YEAR from pm10.date_local) = 2017 AND
pm10.state_name = 'Ohio'
GROUP BY pm10.county_name
The query now takes 0.7 sec (3x faster) and processes only 56 MB (24x cheaper). Why is it less expensive? Because there are 24x fewer rows (remember that we aggregated hourly measurements into a single row), and so a table scan has to process 24x less data. The query is faster because it needs to process fewer rows.
But what if you really need to process hourly data? Since the transformation to use arrays is not lossy, we can still query for the maximum hourly PM10 observations over the years in Columbus. That query now requires an UNNEST in the FROM clause but is otherwise identical:
SELECT
MIN(EXTRACT(YEAR from pm10.date_local)) AS year
, MAX(pm10obs.sample_measurement) AS PM10
FROM
advdata.epa as pm10,
UNNEST(obs) as pm10obs
CROSS JOIN
`bigquery-public-data`.utility_us.us_cities_area as city
WHERE
city.name = 'Columbus, OH' AND
ST_Within( ST_GeogPoint(pm10.longitude, pm10.latitude),
city.city_geom )
GROUP BY EXTRACT(YEAR from pm10.date_local)
ORDER BY year ASC
This query still takes 4 minutes, but it processes only 537 MB. In other words, storing the data as nested fields (arrays) has made the query 3x less expensive! This is curious. Why does the data read go down? Because there are rows (those outside Columbus) that we don’t need to read the array data for. But the computations (max, extract year, ST_Within) are the bulk of the overhead for this query and the number of rows those are carried out on is the same, so the query speed doesn’t change.
Tip #2: Geography Types
Can we improve the computation through storing the data better? Yes!
Instead of constructing the geographic point from the longitude and latitude each time, it is better to store the latitude and longitude as a geographic type. The reason is that creating a geographic point with ST_GeogPoint() is actually a somewhat expensive operation that involves finding the S2 cell that holds the point (it’s even more expensive if you are trying to create more complex shapes like polygons):
CREATE OR REPLACE TABLE advdata.epageo AS SELECT
* except(latitude, longitude)
, ST_GeogPoint(longitude, latitude) AS location
FROM advdata.epa
The first query is the same because we don’t use latitude and longitude in the query. The second query can now avoid creating a ST_GeogPoint:
SELECT
MIN(EXTRACT(YEAR from pm10.date_local)) AS year
, MAX(pm10obs.sample_measurement) AS PM10
FROM
advdata.epageo as pm10,
UNNEST(obs) as pm10obs
CROSS JOIN
`bigquery-public-data`.utility_us.us_cities_area as city
WHERE
city.name = 'Columbus, OH' AND
ST_Within( pm10.location, city.city_geom )
GROUP BY EXTRACT(YEAR from pm10.date_local)
ORDER BY year ASC
It takes 3.5 minutes and processes 576 MB, i.e. 6% more data (the geography type for a point uses more than what two floats would take) for a 12.5% speedup in query performance.
Tip #3: Clustering
Notice that we use the time quite extensively. What if we ask BigQuery to store its tables in such a way that all equal values of that field are held in adjacent rows? This way, if our query ever filters on some aspect of the time, BigQuery doesn’t have to do a full table scan. Instead, it can read just part of the table.
Because you can only create 2000 partitions at a time, I decided to partition by a dummy date field (this is okay because by clustering, we are forcing queries to use partitions):
CREATE OR REPLACE TABLE advdata.epaclustered
PARTITION BY dummy_month
CLUSTER BY state_name, date_local
AS
SELECT
*,
CAST(
CONCAT(CAST(EXTRACT(YEAR from date_local) AS STRING), "-",
CAST(EXTRACT(MONTH from date_local) AS STRING), "-01") AS DATE) AS dummy_month
FROM advdata.epageo
Now, let’s take the first query:
SELECT
pm10.county_name,
COUNT(DISTINCT pm10.site_num) AS num_instruments
FROM
advdata.epaclustered as pm10
WHERE
pm10.state_name = 'Ohio'
GROUP BY pm10.county_name
0.8 seconds, 43MB! No difference.
How about the second?
SELECT
MIN(EXTRACT(YEAR from pm10.date_local)) AS year
, MAX(pm10obs.sample_measurement) AS PM10
FROM
advdata.epaclustered as pm10,
UNNEST(obs) as pm10obs
CROSS JOIN
`bigquery-public-data`.utility_us.us_cities_area as city
WHERE
pm10.state_name = 'Ohio' AND
city.name = 'Columbus, OH' AND
ST_Within( pm10.location, city.city_geom )
GROUP BY EXTRACT(YEAR from pm10.date_local)
ORDER BY year ASC
The query now takes just 20 seconds and processes 576.4 MB, a 10x speedup. This is because we clustered the table by state and date, and the query filters by state, so BigQuery can locate the matching rows without scanning through everything.
Enjoy! | https://medium.com/google-cloud/how-to-improve-the-performance-of-bigquery-queries-by-optimizing-the-schema-of-your-tables-e4c36077fa2d | ['Lak Lakshmanan'] | 2019-09-21 16:03:31.976000+00:00 | ['Sql', 'Bigquery', 'GIS'] |
Building Cross-Platform Apps With SwiftUI | Coding Time
Our application will be a very simple article listing app. We’re going to make a really simple application. Maybe I can develop this application further in the articles I’ll write about universal apps in the future.
Final app
Creating a new Xcode project
First of all, let’s create a new Xcode project. We need to create this project as a multiplatform app.
Of course, you can give the project any name you want. After that, the only thing left is to start making your project.
My advice to you while developing a project is to divide your files into folders. While doing this project, you can use folders in the same way I use them.
Building the data model
There will be only one data model in our project. As our application is an article-display application, as you can understand, an article has a data model. This model has four properties: id , title , description , type .
struct Article: Identifiable {
// MARK: - Properties
var id = UUID()
let title: String
let description: String
let type: String
}
Of course, we need objects created with the Article model. We’ll create this data by writing it directly.
let techArticles = [
Article(title: "AirPods Max: This is Expensive", description: "Let's take a closer look at AirPods Max, Apple's last product in 2020.", type: "Tech"),
Article(title: "Mac Apps are Back!", description: "With first Apple Silicon -M1- Macs, many applications that we use on iPhones and iPads come to the Mac App Store! Here is why.", type: "Tech"),
Article(title: "How to Create an Onboarding Screens in Your App", description: "Onboarding screens are very important for new users to fully understand the application and have a smooth user experience.", type: "Tech")
] let scienceArticles = [
Article(title: "Are Apple Products Becoming More Cheaper?", description: "In recent years, when I looked at the price of Apple's newly introduced products, I saw a slight decrease in the price of new products.", type: "Science"),
Article(title: "Limit Properties", description: "Limits can also be evaluated using the properties of limits.", type: "Science"),
Article(title: "Direct Substitution", description: "We can find any limit of a definite function with direct substitution. Let’s find out how we can do this!", type: "Science")
] let designArticles = [
Article(title: "Euler Number", description: "What is the number of e that we usually encounter in calculators? What does it do? Let’s find out what this number is!", type: "Design"),
Article(title: "Introduction of Limits", description: "Now that we have defined the limit, let’s try to better understand the limit by giving an example…", type: "Design"),
Article(title: "Find Limits Using Graphs", description: "Graphs are a great tool for understanding the approaching values. Let’s see how this happens!", type: "Design"),
Article(title: "Find Limits Using Tables", description: "A noteworthy method to understand limits. How, you ask?", type: "Design")
]
Now that our data model and the data sets are ready, let’s create the interface elements to display this data.
Building views
First, let’s create an ArticleView object that shows the properties of a single Article object.
struct ArticleView: View {
// MARK: - Properties
let article: Article
// MARK: - UI Elements
var body: some View {
VStack(alignment: .leading, spacing: 5) {
Text(article.title)
.font(.title)
Text(article.description)
.font(.headline)
Spacer()
}
.padding()
}
}
Now to show these ArticleView elements together, create a new View object. It’s called ArticlesListView .
struct ArticlesListView: View {
// MARK: - Properties
let articles: [Article]
// MARK: - UI Elements
var body: some View {
Text("Hello, TurkishKit!")
}
}
We won’t create the List object directly in the body because when the application is opened on a Mac device, it’s better to set the size accordingly. As you can see below, we adjusted the size of the Mac app with the frame method.
struct ArticlesListView: View {
// MARK: - Properties
let articles: [Article]
// MARK: - UI Elements
var body: some View {
#if os(macOS)
return view
    .frame(minWidth: 400, minHeight: 600)
#else
return view
#endif
}
@ViewBuilder
private var view: some View {
List(articles) { article in
NavigationLink(destination: ArticleView(article: article)) {
ArticleView(article: article)
}
}
.navigationTitle("\(articles[0].type)")
}
}
Now it’s time for the important part: SideBar and TabBar. TabBar and SideBar elements are extremely important for the structure of our application. Usually we divide our applications with these elements. In this app, the TabBar element will be used for the iPhone version, and the SideBar element will be used for the iPad and Mac devices.
Let’s start with the TabBar element. The only thing we do in this element is arrange our main screens. We use the tag method to position the screens truly.
struct TabBar: View {
// MARK: - UI Elements
var body: some View {
TabView {
ArticlesListView(articles: techArticles)
.tabItem {
Image(systemName: "newspaper.fill")
Text("Tech")
}
.tag(0)
ArticlesListView(articles: scienceArticles)
.tabItem {
Image(systemName: "paperclip")
Text("Science")
}
.tag(1)
ArticlesListView(articles: designArticles)
.tabItem {
Image(systemName: "rectangle.and.paperclip")
Text("Design")
}
.tag(2)
}
.navigationTitle("Articles")
}
}
Let’s move onto the SideBar element. We have a basic View element in the SideBar element, similar to the TabBar element: List . We’ll use the NavigationLink object to orient in the List object.
struct SideBar: View {
// MARK: - UI Elements
@ViewBuilder
var body: some View {
List {
NavigationLink(
destination: ArticlesListView(articles: techArticles),
label: {
Label("Tech", systemImage: "newspaper.fill")
}
)
.tag(NavigationItem.tech)
NavigationLink(
destination: ArticlesListView(articles: scienceArticles),
label: {
Label("Science", systemImage: "paperclip")
}
)
.tag(NavigationItem.science)
NavigationLink(
destination: ArticlesListView(articles: designArticles),
label: {
Label("Design", systemImage: "rectangle.and.paperclip")
}
)
.tag(NavigationItem.design)
}
.navigationTitle("Articles")
.listStyle(SidebarListStyle())
}
}
Develop the code as follows to ensure the correct positioning of the NavigationLink elements.
enum NavigationItem {
case tech
case science
case design
} struct SideBar: View {
// MARK: - Properties
@State var selection: Set<NavigationItem> = [.tech]
// MARK: - UI Elements
@ViewBuilder
var body: some View {
List(selection: $selection) {
NavigationLink(
destination: ArticlesListView(articles: techArticles),
label: {
Label("Tech", systemImage: "newspaper.fill")
}
)
.tag(NavigationItem.tech)
NavigationLink(
destination: ArticlesListView(articles: scienceArticles),
label: {
Label("Science", systemImage: "paperclip")
}
)
.tag(NavigationItem.science)
NavigationLink(
destination: ArticlesListView(articles: designArticles),
label: {
Label("Design", systemImage: "rectangle.and.paperclip")
}
)
.tag(NavigationItem.design)
}
.navigationTitle("Articles")
.listStyle(SidebarListStyle())
}
}
Important: You have to locate the files of the SideBar and TabBar elements as done below — otherwise, problems will occur.
If your files can’t communicate with each other, you may need to edit your Target Memberships. Our project is very small, so you can have the two existing targets open in each file.
Finally, it’s time to edit the ContentView. Here we need to run different elements depending on which device the app is running on. For this, we’ll use tools like #if, which we also used earlier, and then we’ll use the horizontalSizeClass property. Thanks to horizontalSizeClass, we can detect whether the device running the application is an iPad or an iPhone.
struct ContentView: View {
// MARK: - Properties
#if os(iOS)
@Environment(\.horizontalSizeClass) var horizontalSizeClass: UserInterfaceSizeClass?
#endif
// MARK: - UI Elements
@ViewBuilder
var body: some View {
NavigationView {
#if os(iOS)
if horizontalSizeClass == .compact {
TabBar()
} else {
SideBar()
}
#else
SideBar()
ArticlesListView(articles: techArticles)
#endif
}
}
}
Thus, our application is officially completed! | https://betterprogramming.pub/building-cross-platform-apps-with-swiftui-3fea88cdb0ae | ['Can Balkaya'] | 2021-03-01 20:15:54.296000+00:00 | ['Swift', 'Mobile', 'Swiftui', 'iOS', 'Programming'] |
Kogi State and the NIPC Q3 Investment Report: The True Story | Kogi State Governor — Yahaya Bello (Middle), Vice President Yemi Osinbajo (Right) and Lagos State Governor (Babajide Sanwo-Olu). Photo credit: The Cable News.
A few days ago, we woke up to the news that the Nigerian Investment Promotion Council (NIPC) has ranked Kogi State as the number one investment destination in Nigeria. The state holds the largest chunk (25% — about US$1 billion) of the US$3.9 billion investment announcements in Nigeria between July and September 2020.
The news was indeed music to my ears. As someone who has been clamoring for strategies to attract private sector investment into the state, I celebrated the news all day long.
I even started bragging and bantering with some of my friends who always mocked me whenever unsavory news and dramatic stunts popped out of my state.
From Dino Melaye’s tree-climbing episodes, to Yahaya Bello’s scary boxing stunts and to the most concerning: news of non-payment or percentage payment of workers’ salaries — Kogi is never short of ‘holy shit’ kind of news.
The Doubting Thomas has Some Strong Reasons
But my friends would not let me enjoy this cheering news in peace. They dismissed it: initially as fake, and later as planted by Governor Yahaya Bello’s media aides to boost his image ahead of his declaration for — don’t bother me — you already know the office.
Well, as would happen to a kid whose end of the session academic performance is beyond expectation, I started to doubt the story myself. So, being a researcher, I decided to do the right thing: research the story.
Before I tell you what I found, let me give you some context and the probable reasons why people don’t believe the good news coming out of Kogi State.
First, with all the drama that seems to characterize the state, people don’t see Kogi as serious enough to have the kind of business environment capable of attracting a billion-dollar investment. And they are not wrong for having that view.
In the World Bank’s 2018 Ease of Doing Business, Kogi is in the bottom 10 overall; 28th (out of 37) in Starting a Business, 22nd in Dealing with Construction Permits, 18th in Enforcing Contracts and 14th in Registering a Property.
The report submits that Kogi is one of the states that did not implement any serious reforms to improve business environment. For sure, those are not the numbers and narrative that any serious investors would want to deal with.
The state has historically been in the bottom of the list when it comes to investment attraction in Nigeria.
For some time now, I have been tracking NIPC investment announcements, covering at least the last 15 quarters. Since 2016, when the current administration in the state came on board, not once, not in any quarter, was the state among the top 5 destinations. For the most part, Kogi is not even on the list.
Now, add to the mix the fact that unlike other states with active Investment Promotion Agencies, like Kaduna, Ekiti, Nasarawa and Lagos, whose Governors are visible in the investment promotion space, Kogi does not have any organized and active investment strategy delivery structure, nor is its governor known to be active in investment promotion circles, aside, of course, from the annual jamboree called the Kogi Investment Summit.
So, you would excuse my friends who did not expect the state to do as well as was announced by NIPC.
My Findings — The True Story
The state’s recent performance in FDI, my study revealed, is traceable to one key manufacturing/steel project: the US$1 billion Agbaja Cast Steel Project being undertaken by Kogi Iron — a company listed on the Australian Securities Exchange (ASX). The project is said to be funded by Torridon Investment — a European fund management company.
The project is located on the Agbaja Plateau, near Lokoja — some 200km from Nigeria’s Capital City, Abuja.
Estimated to have a probable Ore Reserve of 205 million metric tonnes at a grade of 45.7% iron (Fe), the project is designed to exploit the Plateau’s extensive, flat-lying channel iron deposits.
The project has been a long time coming. As stated in the company’s website, the decision to explore the potential of the project was made in 2016. By 2017 through 2018, a Feasibility Study and Pilot Plant Test were successfully completed and the results confirmed both the technical feasibility and marketing viability of the project.
When completed, the project has huge potential: beyond putting the state’s unique mineral endowments to productive use and boosting her IGR, thousands of jobs could come out of its value chains.
At this point, a round of applause is in order for my State.
Thus, the NIPC ranking of the state is not a hoax. While it might seem like a fluke for the State to be where it is at the end of the quarter, it is one fluke that must be celebrated by all lovers of development.
Regardless of your regard for the officials of the state, this is one big win that must be cherished and projected in the best light possible.
Meanwhile, I have a word for the state government and it’s not pleasant.
Since the release of the report, I have seen some state officials celebrate, in some cases wildly, and I want to say — carry on. Even if this is really a fluke, it’s your luck, enjoy it. I only want to add that, it is high time the state is seen in a different light. Enough of the drama.
Kogi cannot have the kind of potentials and natural advantages — 58% of cashew trees in West Africa, purest deposit of iron ore (Itakpe) in Nigeria, highly educated young population, naturally-well irrigated land for all-year-round farming etc. — and be where it is in terms of IGR, unemployment and poverty.
In the world of investment attraction, image matters. There is a silver lining somewhere in people’s disbelief in the state’s Q3 investment performance.
If we are really smart, and we love the thrill of being ranked first in investment attraction enough to celebrate the way we have been, then we should see this moment as a wake-up call. A call to be honest with ourselves and do the damn right thing.
Remember, champions are defined not by a one-off victory but by how consistently they stay at the top of their game.
The state should seize the momentum by putting up a deliberate structure that not only consolidates the current achievement (providing superior aftercare and seeking expansion), but also effectively goes after every investment opportunity, just like Kaduna, Ekiti and Nasarawa do. | https://medium.com/@adams-a/kogi-state-and-the-nipc-q3-investment-report-the-true-story-4897a99056c4 | ['Dr Adams Adeiza'] | 2020-12-28 07:38:05.884000+00:00 | ['Development', 'Kogi State', 'Nigeria', 'Investment'] |
Akıl — Kalp Çıkmazı. Koca bir çıkmazdayım. Aklımda kalbim… | | https://medium.com/t%C3%BCrkiye/ak%C4%B1l-kalp-%C3%A7%C4%B1kmaz%C4%B1-1a64f80f2274 | [] | 2020-12-21 16:58:29.172000+00:00 | ['Türkçe', 'Kalp', 'Akıl'] |
Get started on Ethereum with KyberSwap | New to crypto trading? Don’t fret, KyberSwap is the right place to begin!
KyberSwap.com is the fastest and easiest way to buy and sell tokens in a decentralized manner. Token exchange happens fully on-chain, which means KyberSwap never holds your funds, and trades are always transparent on the blockchain. You can trade with peace of mind as you’re always in control.
To own your first crypto, you first need to create your own wallet and address to store it in. KyberSwap is based on the Ethereum blockchain, so you need to create a new Ethereum wallet.
1. Create your new Ethereum Wallet
a) Create a new wallet with Torus
Visit Kyberswap.com! If you already have an Ethereum wallet, you can connect using Metamask, Ledger, Trezor, Coinbase Link, Wallet Connect, or other common options. If not, simply click on the ‘Torus’ button.
You will be prompted to choose your preferred social media login method. Currently 5 options are supported — Google mail, Facebook, Reddit, Twitch and Discord. Choose the one you prefer. Each social media login type is tied to a different Ethereum address (gmail is different from facebook, reddit etc.).
That’s it! It’s really that simple. Your Ethereum wallet will be successfully created within seconds and you can use it on KyberSwap or any application that supports Torus.
View your wallet address with the wallet tab on the right
b) Create a new wallet with Metamask
First install/use the Metamask browser extension, and create your wallet on Metamask by following their instructions below:
c) Create a new wallet on the KyberSwap Mobile App
Download our KyberSwap Android mobile app or iOS app (TestFlight), and create a new wallet there in a few seconds!
Your Ethereum wallet will have a public address that looks like e.g. 0x54DEFF07401E922Ff57cCeC5693daC0aB5CCDDc2
This wallet public address is where your friends can send you crypto! It’s like your home address where you receive mail :)
Check your wallet address balance anytime on the Ethereum blockchain explorer Etherscan.io
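A small aside for the technically curious (illustrative Python, not something you need in order to follow this guide): an Ethereum address always has the same shape, the prefix 0x followed by 40 hexadecimal characters, so you can sanity-check the format before sending anything to it.

```python
import re

# Shape check only: "0x" plus exactly 40 hex characters.
# It cannot tell you whether the address is really yours, so always
# copy-paste addresses instead of typing them by hand.
ADDRESS_RE = re.compile(r"0x[0-9a-fA-F]{40}")

def looks_like_eth_address(addr: str) -> bool:
    return ADDRESS_RE.fullmatch(addr) is not None

print(looks_like_eth_address("0x54DEFF07401E922Ff57cCeC5693daC0aB5CCDDc2"))  # True
print(looks_like_eth_address("0x1234"))                                      # False
```

Wallets like Metamask and Torus do far stricter validation (including the EIP-55 checksum encoded in the letter casing), but a shape check like this catches obvious copy-paste accidents.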
With your new wallet and address, you can now connect to KyberSwap.com to send, receive, and store Ethereum tokens!
2. Buy Ether with Fiat
Your new wallet is empty, so it’s time to buy some Ether! Ether, or commonly known as ETH, is the native digital currency of Ethereum. ETH is required for all kinds of activity on the blockchain network, as a transaction fee in ETH (gas) is paid to the network when you perform any action.
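To make the idea of gas concrete: the fee for a transaction is roughly the gas it consumes multiplied by the gas price you offer, quoted in gwei (one gwei is a billionth of an ETH). A back-of-the-envelope sketch with illustrative numbers (real gas prices fluctuate constantly):

```python
GWEI_PER_ETH = 10**9  # 1 ETH = 1,000,000,000 gwei

def tx_fee_eth(gas_used: int, gas_price_gwei: float) -> float:
    """Approximate fee in ETH: gas consumed times the offered gas price."""
    return gas_used * gas_price_gwei / GWEI_PER_ETH

# A plain ETH transfer consumes 21,000 gas; token swaps consume much more.
print(tx_fee_eth(21_000, 50))  # 0.00105 ETH at a 50 gwei gas price
```

This is why you need a little ETH in your wallet before you can do anything else on the network.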
On KyberSwap.com, click ‘BUY ETH’ on the menu bar and a simple drop-down list with MoonPay and Wyre fiat-to-crypto options will appear.
With our Torus collaboration, you can use either MoonPay or Wyre directly on the KyberSwap.com site to easily purchase ETH with fiat currency (Visa/Mastercard credit or debit cards, or apple pay). This is a very simple process and you can enjoy multiple fiat to crypto on-ramp options. For more options such as Simplex and Ramp Network, you may head to app.tor.us.
*Note: The fiat on-ramp is only available on the web platform (not mobile app) at the moment. Torus, Moonpay, and Wyre are 3rd party service providers. Crypto is highly volatile. Information contained in this post is not intended as financial advice. Minimum/maximum purchases are subject to change. Please check with Tor.us for questions related to the fiat to crypto services.
Alternatively, there are also in-person or peer-2-peer options such as LocalEthereum where you can find offers from other people.
3. Swap ETH for KNC or other ERC20 tokens!
Once you have bought ETH and receive it in your wallet, connect your wallet and use KyberSwap to easily trade more than 70 other tokens including stablecoins DAI, USDC, TUSD, USDT, digital gold DGX, as well as MKR, LINK, KNC, SNX, and WBTC (Wrapped Bitcoin)!
Simple Swaps: On KyberSwap, you can make fast, simple, secure token swaps in a few clicks. Kyber never holds your funds, you’re always in control.
Limit Orders: KyberSwap allows you to set non-custodial limit orders. You don’t have to monitor volatile crypto markets 24x7 as you can place orders to buy/sell tokens at your desired rate — no time wasted depositing and withdrawing tokens to an exchange account!
No Trading Limits: There are now no trading limits, and no KYC required. Enjoy NO restriction to your trading amount. Liquidity is subject to Reserve capacity.
Price Alerts: You can set Price Alerts to get notified whenever the token price hits your target alert level.
Price Trend Notifications: Powerful notifications that let traders know when their token price is moving.
Portfolio Dashboard: Track your performance, your transaction history, and portfolio token distribution over time.
Troubleshoot transaction issues easily: Simply paste your transaction hash on our Kyber debugger tool.
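The “Limit Orders” feature above boils down to one trigger rule: a buy order should fill once the market rate drops to (or below) your target rate, and a sell order once it rises to (or above) it. A rough sketch of that idea in Python (just the concept, not KyberSwap’s actual matching logic):

```python
def should_fill(side: str, target_rate: float, market_rate: float) -> bool:
    """Decide whether a resting limit order can be filled at the current rate."""
    if side == "buy":
        return market_rate <= target_rate   # buy when the market is at/below target
    if side == "sell":
        return market_rate >= target_rate   # sell when the market is at/above target
    raise ValueError(f"unknown side: {side}")

print(should_fill("sell", target_rate=2.5, market_rate=2.6))  # True
print(should_fill("buy", target_rate=2.5, market_rate=2.6))   # False
```

The practical upshot is the one stated above: you set the rule once and stop watching the market 24x7.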
Need help? Talk to our admins using our new Live Chat function!
We now have live chat support! Just click the chat button at the bottom right of the KyberSwap.com web screen. In our mobile app, it is on the Setting tab.
A very warm welcome to Ethereum! KyberSwap.com is glad to have helped you start your crypto journey.
❗️REMINDER: KyberSwap.com will NEVER ask your to provide your Ethereum wallet’s private key. DO NOT give your private key details or send funds to anyone, even if the person claims to be from the KyberSwap team.
Explore Ethereum
Now that you can buy ETH and swap it for tokens, start exploring Ethereum! Check out the various crypto wallets, blockchain games, DAO (decentralized autonomous organization) platforms, and DeFi (decentralized finance) applications out there.
…and many more!
Want a fast, simple, secure way to exchange tokens? Just KyberSwap.
Contact Shane on Telegram or visit the KyberSwap Official Telegram if you have questions.
Learn more
👉🏻Why KyberSwap?
👉🏻What is Ethereum?
👉🏻KyberSwap Official Telegram
👉🏻Tips for trading on KyberSwap
👉🏻Limit Order 2.0 Overview
👉🏻Set limit orders like a pro!
👉🏻Follow KyberSwap Twitter Announcements
👉🏻Download KyberSwap iOS App
👉🏻Download KyberSwap Android App
👉🏻Transaction Debugger Tool | https://medium.com/kyberswap/getting-started-on-ethereum-with-kyberswap-60b3951bd74 | [] | 2020-09-18 03:51:13.576000+00:00 | ['Featured', 'Basics', 'Ethereum', 'Blockchain', 'English'] |
How to pick the best R&D Tax Consultant | When it comes to getting the most out of your R&D tax credit, hiring an R&D advisor is usually the best strategy.
In the UK, there are hundreds of companies offering R&D tax credit services. They range from one man/woman outfits to companies employing hundreds of tax relief specialists. If you count the big accountancy firms, that number increases by another order of magnitude. Overall, thousands of financial specialists can help you make sense of the research and development tax credit. The question is: how can you choose who is best prepared to help you?
In this short guide, we’re highlighting the most important things to consider when deciding to work with an R&D tax consultant.
Types of R&D Tax Consultants
In the world of R&D tax relief, there are broadly three types of companies that cover most of the market in London and the UK and offer services that range from forget-about-it to just-a-once-over.
Specialists
For the specialist, R&D is their bread and butter. They usually have a complete, hands-on service and try to take the whole process off your hands. The service typically includes an interview with your tech lead, writing the technical narrative, and creating the financial calculations. Comprehensive enquiry support is usually part of the package as well, so if HMRC wants to question any part of the claim, the specialist is there to handle it.
Accountants
Accountants are all-rounders, and you are probably very familiar with their services. Most accountants can and usually do offer R&D tax advice as part of their offering, sometimes bundled with other services, sometimes for an additional fixed or % charge. The main difference between an accountant and a specialist is that the service an accountant can provide is usually limited (though not in all cases). Accountants will typically create the financial calculations for corporation tax and review a technical narrative that your team has put together. In most cases, they will offer only limited enquiry support, with most of the email and phone communication with the inspectors still left to be handled by your employees.
Simplified Service Specialists
Given that many a business has acquired some form of experience with R&D tax claims, a new generation of service providers has sprung up to help the companies that just need a second opinion and some technical support. Most of these providers provide platforms that the client needs to input information into, and then most computations are generated automatically.
What to look out for when choosing the best R&D tax consultant
R&D spending
How much you’ve spent is an important question, as the more substantial the claim, the more carefully an HMRC inspector may look at it. If your credit is still on the small side, you can probably get away with filing a simplified claim or letting a non-specialist accountant create the claim. As your spending increases and the technology gets complex, a Research & Development advisor becomes critical. If your R&D spending is up to £30,000 — £40,000, which could translate into a credit of up to £10,000 — £12,000, it’s very probable that HMRC will not bat an eye, and you’ll be fine with a simplified filing. Your accountant can do the filing, or you can decide to self-file. A platform solution could be an easy way to solve this problem, as well.
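For the curious, figures like these come from the SME-scheme arithmetic in force at the time of writing: qualifying spend is enhanced by an extra 130%, and a loss-making company can surrender the enhanced amount for cash at 14.5%, which works out to roughly a third of the spend. A sketch of that calculation (rates change, and this is an illustration, not tax advice):

```python
# SME-scheme rates at the time of writing -- check current HMRC guidance.
ENHANCEMENT_RATE = 1.30  # extra deduction on top of the qualifying spend
SURRENDER_RATE = 0.145   # cash-credit rate for surrendered losses

def max_cash_credit(qualifying_spend: float) -> float:
    """Upper bound on the payable credit for a loss-making SME."""
    enhanced_amount = qualifying_spend * (1 + ENHANCEMENT_RATE)  # 230% of spend
    return enhanced_amount * SURRENDER_RATE                      # ~33.35% of spend

print(round(max_cash_credit(30_000)))  # 10005 -- in line with the ~£10,000 figure above
```

Profit-making companies claim through a deduction instead, so the cash benefit is lower, which is another reason the exact numbers are best left to an advisor.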
Your time
To file R&D tax credits, there are a few laborious steps that take a bit of time from you and your team. The filing preparation also often happens in moments when your tech team’s energies are better spent coding or designing, rather than writing pages of legalese about the eligibility of the technology they’ve created.
The tradeoff is not hard to guess — if you want to create at least part of the R&D claim, and have no problem being more hands-on in the process, then a lower service option, like letting your accountant check it, may be a great idea. Nobody knows your product like you, so writing the tech narrative yourself is often a good idea.
The difference a specialist makes in writing the claim is not necessarily a better understanding of the technology, but a better understanding of how to present it to an inspector, while ticking all the boxes for eligibility.
Financing your claim
If you are looking to finance your upcoming claim through Advance Funding, having a full-service provider is paramount. Most lenders will work with the consultant to understand the claim and get an accurate estimate of its future value. Often, without a specialist, it can be hard for a lender to extend a term sheet as there is a lot of additional uncertainty. Having enquiry support at your side in case the taxman needs more clarification is also crucial if you are expecting the claim to repay a loan. This helps speed things up, maximises your chances of claiming the full amount and puts the financing company at ease. Chat to us at Fundsquire if this is interesting to you, we can help.
Dealing with an enquiry
An enquiry is a process where an inspector will ask questions as to the eligibility of certain technology projects or certain expenditures included in your business’ claim. The probability of an enquiry is low if your spending is low, but as you start investing more in eligible technology, this probability increases. As you get in the high hundreds of thousands or millions in claim value, it becomes very likely that an inspector may want to take a second look at your claim. This makes sense, this is taxpayer money after all. Overall, the higher your spending amount, the higher the risk.
The cost
Photo by Matthew Lancaster on Unsplash
Last but certainly not least, an essential factor in assessing the best provider is how much of your claim value you are willing to part with.
A full-service R&D tax consultant can charge from 12–25% of the final claim value, depending on the size, complexity, and timeline of the claim submission. There is a bit of flexibility in the fees, depending on contract length, and in some cases fixed-fee deals are on the table as well.
An accountant may file the submission as part of a package, charge by the hour or charge a fixed price of a few thousand pounds to review and submit the technical narrative created by your team. Additional enquiry support may be on a per-hour basis.
Simplified service platforms can cost between 5%-10% or, alternatively, charge a fixed cost of a few thousand pounds.
The Wrap Up
Who is the best provider for you? It depends a lot on your situation: early-stage or more mature, spending a trickle or a tonne, having time on your hands or pouring every second into your primary business.
If you’d like a bit more information on choosing the perfect research and development advisor, we’re here to guide you. We have a panel of partners that range from full-service consultants to platforms and we can at least point you in the right direction. | https://medium.com/fundsquire/how-to-pick-the-best-r-d-tax-consultant-1545a9798211 | ['Alex Kepka'] | 2020-11-03 10:56:02.646000+00:00 | ['Funding Round', 'Startup Lessons', 'Funding', 'Startup', 'Venture Capital'] |
Thoughts on being a non-binary survivor of Childhood Sexual Abuse | Being a male-ish (not a man) survivor of child sexual abuse is hard. Obviously being a survivor is hard full stop, but I want to talk specifically about the first part. It can feel very lonely to be non-binary, like I am an alien. I live my life just expecting to be misgendered, since I get coded as both male and female by people but no one ever guesses “neither.” I’ve never liked being called a man or a woman. The not wanting to be called a woman thing was, expectedly, always easier for me to articulate since I was AMAB. Why do they think I’m a girl??? Since no guy wants that, it was easier for me to be honest with my feelings that it upset me. What was less easy for me to come to terms with was while my expression and identity is definitely male-leaning (demimale is the most common term), it never fit. It was assuming things about me that were not true and that frankly people had no right to assume. You look at me and think you know what is in my head and what is in my pants? Unless you have an out of the box imagination, chances are you are wrong on both counts.
The abuse is a very complicated part of my coming to terms with myself, since it requires conversations about how I responded to it. Because of our rape culture, being penetrated or assaulted is automatically connected with being emasculated. A lot of male-focused therapy is about “reclaiming manhood” and “you are still a man” and other shit that I can’t even judge if it’s valid, it’s just not valid for me. And not because I’m somehow less than a man is, I’m just different. I understand that people who identify as male have the right to be validated in that, but to me it feels like Fry when in he is in the robot asylum and he is declared sane only when he stops believing he is human.
One reason it is complicated is because one of my abusers did refer to me in feminizing terms. A bigger boy I was left with for long hours day after day while my mother tried to “fix” his mother. My mother would give herself these people as projects, and I would be expected to be her assistant, subsuming myself to be friends and confidant and pre-teen therapist to either the “patient” or in this case the patient’s son. It was my job to make him a happier kid (just as it was with my brother) and any problems I might have were just a sign of how selfish I was. So in the shed or in the den, while his mother was either out with my mom or supposedly watching us but nowhere to be found, he would say “let’s play disney prince and princess” and I was always the princess. Given what it means to play the girl’s role in our culture, I don’t think I need to be more explicit.
I have struggled for a long time with “is that why I identify as gender-queer? Am I just still living out a program he put in my head?” But no, I don’t think so. Frankly, that aspect of it was only important and distressing because it was control and domination. I used to play that I was a girl long before that — though only in very brief moments due to my strict upbringing. I remember being in kindergarten or first grade and making a story with blocks where all the girls went one road and all the boys went on the other and the block that represented me snuck over to the girl’s road. As soon as I did that, I thought about God watching me and how mad he must be and I had a panic attack — the first one I remember having. I thought god was killing me.
But anyway, even though that was after the first person to regularly sexually violate me (that was an older boy who began touching me and humiliating me before I was potty trained and continued it until I moved at age 6), it was still before there had been any gender talk in my abuse and I really don’t think my gender confusion or questioning came (or only came) from my abuse. I mean no doubt it gave me issues, I just don’t think it ever gave me gender issues since at that age I didn’t know genitals had anything to do with sex assignment (I actually didn’t know that until second grade, that was when my little brother told me that he knew mom didn’t have a penis because when no one else was around he took showers with her. No, I don’t know anymore about that situation). But regardless, gender identity is definitely an area I’ve struggled with, and something I’ve found most cis survivors can’t help me with.
As long as we are on the subject, I’m also sick of the bigotry of people, survivors and not, assuming that every child who presents as gender queer has been abused. That’s so invalidating. Abuse can cause confusion, but my abuse and my gender identity are not the same thing and assuming only abused people are gender queer is saying you know both survivors and non-survivors alike better than they know themselves.
So there’s a lack of safe spaces and support for people like me. I don’t want my manhood reconfirmed. It’s especially hard to talk about my female abusers with men, I find that even fellow survivors just want to treat me as a prop, as an exhibit about why their view of women is justified. I mean, if anyone has a right to hate women it’s men who were raped by them on a regular basis growing up. They are victims of rape culture who are barely able to function due to what predators did to them. They are not the ones to pick a fight with. At the same time, it does nothing to help me to be in a group with them and listen to their delusions (and I’m using delusion as a medical term. The abuse has made them paranoid and unattached to reality, and they deserve compassion even as we also recognize how dangerous their delusions are). But since society (mostly) codes me as a man, and that includes my abusers, my experiences tend to isolate me from women survivors who mostly and understandably wouldn’t want a male-looking person in the conversation anyway.
This doesn’t even get into the nutty and disgusting things I’ve heard over the years about male and male-ish victims. Things like “the reason why male survivors have it easier is because male survivors always get justice from the courts” (um what?) and “the rape of a male is an aberration to their otherwise privileged life and therefore not as traumatic to them as the rape of a woman, whose oppression is a continual reminder that they could be attacked again.” For that last one, for all I know that may be true in larger society. I don’t know. But personally I don’t know any AMAB survivors who got raped and the next day woke up and went “well, that was a shitty night but at least it probably won’t happen again!” Especially not survivors like me who were first attacked in early childhood. My entire life has been spent in fear, and each subsequent experience only reconfirmed that I wasn’t safe. I didn’t potty train for myself, I potty trained because I thought that would stop the assaults and the humiliation. That’s how far back this goes. That’s not to say there aren’t specific and compounding issues for women that I’ll never have to face, but the existence of those issues doesn’t mean my trauma doesn’t count as much on the suffering-o-matic scale.
So I can end up feeling pretty isolated and invalidated. And I know the obvious answer is to find other non-binary survivors, but the great and frustrating thing about us is that we are all so different and unique that just because we don’t identify with male or female doesn’t actually mean we identify with each other. And especially being a male-leaning NB, the more male I come off the less people are comfortable with me in that kind of space. So I still feel like it is not quite safe.
I don’t know what I want though. I want a place where I can feel safe to share, but not have people assume things about me. A place where people confirm that my trauma is just as valid as theirs from the get-go, not only once I prove to them through graphic detail that yes I too was a victim. A place where no one is side-eyeing me whispering “do they belong here?” or on the other hand assuming I’m the same as them because we look alike. A place where women predators are actually discussed, but not a place that villainizes women. A place where if I did choose to tell my story, the parts of it that connect to me being biologically male-ish wouldn’t make everyone else shy away OR the parts that are about me not being a cis man wouldn’t make everyone else confused. I don’t feel like there’s any place that actually would want me to tell my whole story, and even though I doubt I ever will, knowing that what is a safe space for others will never be completely safe for me keeps me disconnected and alone.
I mean, there are definitely people who love me who I can share these things with. But it is different when they aren’t fellow survivors or NB. But sometimes I think what I went through and who I ended up becoming is so different from anyone else that the group of people I’m looking for just doesn’t exist. But when I see other places that try hard to provide safe spaces, it can feel like musical chairs and I’m the only one left standing. | https://medium.com/@demonboy/thoughts-on-being-a-non-binary-survivor-of-childhood-sexual-abuse-4738dde14596 | ['Mx. Ferdinand Lylith'] | 2020-11-24 14:28:20.494000+00:00 | ['Sexual Abuse Survivors', 'Nonbinary'] |
New Wives’ Tales for 2020 | New Wives’ Tales for 2020
Updated superstitions and justified paranoia for the new year
Photo by Thought Catalog on Unsplash
1. Going outside with wet hair will make you catch a cold that you won’t be able to treat until your insurance premium goes down.
2. If you swallow gum it will stay in your stomach for the length of time it takes you to find an in-network gastroenterologist: 7 years.
3. It’s bad luck to open an umbrella inside the waiting room of your gastroenterologist’s office.
4. You should wait an hour after eating before opening the bill from your gastroenterologist visit.
5. The white spots on your fingernails are due to your lying about completing Whole30.
6. Eating the crust of bread will make your gluten intolerance flare up.
7. If you make a silly face in the wind your boss will still accuse you of having resting bitch face.
8. Cracking your knuckles gives you arthritis, which is a pre-existing condition you are mandated to disclose to your insurer.
9. “Swimming on a full stomach is the reason you are experiencing bad cramps,” is how your male doctor will misdiagnose you.
10. Swallowing a seed will make a watermelon grow in your stomach and your employer deny your request for maternity leave.
11. Nosebleeds are a sign of sexual arousal that a licensed therapist could help you work through sometime in 2024 when they are accepting new patients.
12. Eating ice cream before bed leads to nightmares about how your body is breaking down and you can’t afford basic medical care, but at least there’s one, free thing left that can always put you to sleep.
13. Masturbation will make you go blind if you don’t turn your screen brightness down. | https://alikelley.medium.com/new-wives-tales-for-2020-7f9a0a81cdff | ['Ali Kelley'] | 2020-01-01 18:00:02.517000+00:00 | ['Humor', 'Lists', 'New Year', 'Healthcare', 'Satire'] |
Ensemble Semi-Supervised Approach for Unsupervised Record Linkage Problem | In this article I'm going to explain and try to implement the approach proposed in the paper A Novel Ensemble Learning Approach to Unsupervised Record Linkage by Anna Jurek, Jun Hong, Yuan Chi, and Weiru Liu, which uses semi-supervised learning to match databases with automatic seed selection.
Record linkage is a process of identifying records that refer to the same real-world entity. Many existing approaches to record linkage apply supervised machine learning techniques to generate a classification model that classifies a pair of records as either match or non-match.
Record linkage is necessary when joining data sets based on entities that may or may not share a common identifier: for example, joining hospital data sets for medical applications, or mapping hotel data from different suppliers for an online travel agency (OTA).
The main requirement of such an approach is a labeled training dataset. In many real-world applications no labeled dataset is available, so manual labeling is required to create a sufficiently sized training dataset for a supervised machine learning algorithm.
To avoid the manual labeling process, we can deal with the problem either with unsupervised approaches like clustering, or with a semi-supervised approach that needs only a few labeled examples. Here we are going to talk about semi-supervised, or self-learning, approaches.
The proposed approach trains with a self-learning algorithm: it takes a small labeled seed at the start, trains on it, predicts on the remaining unlabeled data, selects the most confident predictions, joins them to the labeled seed, and repeats the training process until no unlabeled data is left.
the difference here is the automatic seed selection, to avoid the time needed to select the right seed for matching and not matching algorithm.
The whole process is done in 6 steps:

1. Calculate different distance metrics for each given feature.
2. Make combinations of the metrics across all the features and store them as feature schemas.
3. Select the most diverse schemas as features for our classifiers, so that each classifier decides from a different perspective than the other classifiers (ensemble learning).
4. For each feature schema, select the seeds for the matching and non-matching classes.
5. Build as many classifiers as there are selected feature schemas and start the self-learning process.
6. As the last step, calculate the difference between each classifier and the others, so that if a classifier differs too much from the rest we remove it from the final prediction.
Let's talk about each one in detail and see how to implement it with Python.
1- Process the data and create features:
This step depends on the data you have. For example, with (name, email, phone number) records, this step produces a similarity distance for each field, such as Levenshtein distance, Jaro-Winkler, or fuzzy matching for strings, and Hamming distance for the phone number.
As I said, it depends on the data, so the code will differ, but here is a sample of the result you should have at this point, along with some helpful Python libraries.
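As a minimal, hedged sketch (the record values and column names are illustrative assumptions, not from the original article), a pairwise feature frame can be built with pandas and the standard library's difflib:

```python
import pandas as pd
from difflib import SequenceMatcher

def seq_ratio(a, b):
    # normalized string similarity in [0, 1] from the stdlib
    return SequenceMatcher(None, a, b).ratio()

# two candidate record pairs (illustrative values)
record_pairs = [('jon smith', 'john smith'), ('ann lee', 'bob kay')]

pairs = pd.DataFrame({
    'name_seq_ratio': [seq_ratio(a, b) for a, b in record_pairs],
    'name_first_char': [float(a[0] == b[0]) for a, b in record_pairs],
})
print(pairs.round(2))
```

In a real pipeline there would be one column per (field, metric) combination, e.g. name_levenshtein, name_jaro, phone_hamming, each normalized so that 1 means a perfect match.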
2- Generate the feature schemas
The target of the ensemble is for each classifier to look at different features than the others, so in this step we want to generate a pool of schemas, where each schema uses a different similarity metric for each feature, e.g. (name_jaro, email_levenshtein, phone_hamming).
To do that, we are going to use the cosine measure to compare each metric column against the others for each feature (note that the code below computes the cosine distance, i.e. 1 − cosine similarity).
Let's stick with the (name, email, phone) example. The algorithm is an iterative process: we loop through each feature and get the pair of metrics with the lowest similarity, i.e. the largest cosine distance, using this function:
from itertools import combinations
from scipy import spatial

def get_lowest(df, cols):
    # find the pair of metric columns with the largest cosine
    # distance, i.e. the lowest similarity
    low = 0
    pair1, pair2 = None, None
    for k in combinations(cols, 2):
        cos = spatial.distance.cosine(df[k[0]].values, df[k[1]].values)
        if cos > low:
            low = cos
            pair1, pair2 = k[0], k[1]
    return pair1, pair2
I’m using the itertools library to generate all the pairs.
Now that we have the lowest pair, we start comparing all the other metrics for the same feature against the selected ones, and we keep a metric only if its cosine distance to every already-selected metric is greater than a threshold (i.e. it is sufficiently different from those already chosen).
from collections import defaultdict
from scipy import spatial

def generate_pool(df, fields, p):
    # fields: the list of field names, e.g. ['name', 'email', 'phone']
    features = defaultdict(list)
    for feature in fields:
        vf = [col for col in df.columns if feature in col]  # metric columns for this field
        pair1, pair2 = get_lowest(df, vf)  # the most diverse pair
        V = [pair1, pair2]
        features[feature].extend(V)
        vf.remove(pair1)
        vf.remove(pair2)
        while len(vf) > 0:
            f = vf.pop()
            # keep f only if it is sufficiently far from every selected metric
            if all(spatial.distance.cosine(df[f].values, df[item].values) > p for item in V):
                V.append(f)
                features[feature].append(f)
    return features
Now let's create a list combination of the selected similarities
features=dict(features)
a=list(features.values())
import itertools
features_schema=list(itertools.product(*a))
Now that we have our schemas, let's start the initial training seed selection process.
3- Automatic Seed selection:
We need our algorithm to find a set of matching rows and non-matching rows automatically. To do so, we will first normalize all our similarity measures so that
{1: means perfect match, 0: means perfect mismatch}
Then we will calculate the Manhattan distance between each similarity vector and the all-ones vector (for the matching seed) or the all-zeros vector (for the non-matching seed).
As an example, say we have a row with values <100,80,92>. We first normalize it to <1,0.8,0.92>, then subtract it from the all-ones vector <1,1,1>, giving <0,0.2,0.08>. Summing over axis 1 gives 0.28; the smaller this value, the better the row serves as a matching seed.
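As a quick NumPy check of the arithmetic above (the division by 100 assumes the raw scores sit on a 0–100 scale):

```python
import numpy as np

# normalize <100, 80, 92> to [0, 1], then take the Manhattan
# distance to the all-ones (perfect match) vector
v = np.array([[100, 80, 92]]) / 100.0
dist_to_match = np.abs(v - 1).sum(axis=1)
print(dist_to_match)  # ≈ [0.28]; smaller means a better match seed
```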
For now, this will help us get a good initial seed. But what if one feature, for example the name, helps us get more good seeds than the others? For the email or address it might be hard to find a similarity metric that ever reaches a perfect score.
To solve this we will use a field weighting algorithm, which gives more weight to a field if it provides more good seeds during seed selection.
The algorithm works like this: we first give every field an equal weight of 1/(number of fields), then we calculate the weighted Manhattan distances for the match and non-match seeds and update our weights based on specific equations I will talk about shortly. If the difference between the new weights and the old weights is smaller than a threshold, we stop.
An example of the non-match seed selection process with 2-dimensional similarity vectors. The process is performed in 3 iterations. In the first iteration (blue) the weights of the fields are (1/2, 1/2). In the second iteration (red) the weights were updated to (2/3, 1/3). In the last iteration (green) the weights were set to (3/4, 1/4).
So how do we update the weights?
The formula is explained in more detail in the paper, but the idea is that, for each field, we sum all the distances between the matching seeds and the all-ones vector and between the non-matching seeds and the all-zeros vector. If the result is 0 for exactly one field, that field has the most power in selecting the right seeds, so it gets a weight of 1 and every other field gets 0. If more than one field has a result of 0, those fields share equal weights and the rest get 0. Otherwise, if no field has a result of 0, we assign the weights according to the formula from the paper.
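Reconstructed from the weight-calculation code that follows (a sketch; the notation is assumed), the update in the general case where no field has a total distance of zero is:

```latex
d_j = \sum_{x \in X_M} |x_j - 1| \;+\; \sum_{x \in X_U} |x_j - 0|,
\qquad
w_j = \frac{1/d_j}{\sum_{k=1}^{m} 1/d_k}
```

where X_M and X_U are the match and non-match seeds, x_j is the normalized similarity of field j, and m is the number of fields.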
Here is the function that calculates the weights:
import numpy as np

def calculate_weights(df, features, Xm, Xu, w):
    ls = []   # indices of fields whose total distance is 0
    djs = []  # total distance per field
    for index, f in enumerate(features):
        match = (df.loc[list(Xm), f] - 1).abs().sum()     # match seeds vs the all-ones vector
        notmatch = (df.loc[list(Xu), f] - 0).abs().sum()  # non-match seeds vs the all-zeros vector
        dj = match + notmatch
        djs.append(dj)
        if dj == 0:
            ls.append(index)
    if len(ls) > 0:
        # fields with zero total distance share all the weight
        w = np.zeros(w.shape)
        w[ls] = 1 / len(ls)
    else:
        # otherwise, weight each field by the inverse of its total distance
        s = sum(1 / d for d in djs)
        djs = [round(1 / (dj * s), 5) for dj in djs]
        w = np.asarray(djs).reshape(w.shape)
    return w
and this is the function that used to get the seeds
def automatic_seed_selection(df,Mm,Mu,e,w):
Xm = set()
Xu = set()
tm,tu = 0,0
while(len(Xm)<Mm):
t = np.dot(abs(df[~df.index.isin(Xm)].values-1),w)
Xm.update(df[~df.index.isin(Xm)][t<=tm].head(Mm-len(Xm)).index) ### fill the seed until we reach Mm without repeating
tm+ = 0.05
while(len(Xu)<Mu):
t = np.dot(abs(df[~df.index.isin(Xu)].values-0),w)
inde = set(df[(~df.index.isin(Xu))][t<=tu].head(Mu-len(Xu)).index) ### make sure that no matching point is selected for not matching point
Xu.update(inde-Xm)
tu+ = 0.05
wnew = calculate_weights(df,df.columns,Xm,Xu,w)
while( np.array(abs(wnew-w)>e).any()):
Xm = set()
Xu = set()
tm,tu = 0,0
w = wnew
while(len(Xm)<Mm):
t = np.dot(abs(df[~df.index.isin(Xm)].values-1),w)
Xm.update(df[~df.index.isin(Xm)][t<=tm].head(Mm-len(Xm)).index)
tm+ = 0.05
while(len(Xu)<Mu):
t = np.dot(abs(df[~df.index.isin(Xu)].values-0),w)
inde = set(df[(~df.index.isin(Xu))][t<=tu].head(Mu-len(Xu)).index)
Xu.update(inde-Xm)
tu+ = 0.05
wnew = calculate_weights(df,df.columns,Xm,Xu,w)
return Xm,Xu
Mm is the number of rows we want in the matching seed, and Mu is the same for the non-matching seed. The function starts with a tolerance of 0, which means accepting only perfect matches or mismatches, and increases it by 0.05 each iteration until Mm and Mu are filled.
I'm using Python sets to make sure that no example ends up in both the matching and non-matching seeds at the same time.
Gathering all this together, we get the seed indices for each feature schema:
Xm = []
Xu = []
for schema in features_schema:
    print(schema)
    z = len(schema)
    w = np.full((z, 1), 1 / z)  # equal initial weights
    x1, x2 = automatic_seed_selection(pairs[list(schema)], 200, 8000, 0.5, w)
    Xm.append(x1)
    Xu.append(x2)
4- Selecting highly diverse sets of seeds
For each schema we now have a small number of labelled examples (seeds). Consequently, any binary classifier can be trained using the selected seeds to classify the remaining unlabelled similarity vectors. With ensemble learning it is a common practice to select a collection of Binary Classifiers that has the highest diversity to form an ensemble.
Example: consider two simplified ensembles, each consisting of three BCs. The output of each classifier is denoted as 1 for being correct and 0 for being wrong. The final prediction of the ensemble is determined as the mode of the three individual predictions. In Ensemble I, each of the BCs makes mistakes on different examples; as a consequence the combined classification is better than any individual BC. In Ensemble II, every BC misclassifies the same example, so combining them makes no overall improvement.
Therefore we are going to calculate the Q statistic, a pairwise diversity measure defined from a 2 × 2 table representing the relationship between the predictions of two classifiers.
It is calculated as Q1,2 = (N11·N00 − N01·N10) / (N11·N00 + N01·N10), which matches the computation in the code below.
The value of Q1,2 is between −1 and 1. The classifiers with high values of N00 and N11 (classifiers that make the same predictions) will have a positive value. On the other hand, the classifiers with high values of N10 and N01 (classifiers that classify the same examples differently) will have a negative value.
here is the code for it
from itertools import combinations
import pandas as pd

# combine all the selected matching seeds from each feature schema
ALL = set().union(*Xm)

def calculate_Q(set0, set1):
    S00 = set0.intersection(set1)    # labeled match by both classifiers
    S11 = ALL - (set0.union(set1))   # labeled match by neither
    S01 = set0 - set1                # labeled match only by the first
    S10 = set1 - set0                # labeled match only by the second
    Q = ((len(S00) * len(S11)) - (len(S01) * len(S10))) / \
        ((len(S00) * len(S11)) + (len(S01) * len(S10)))
    return Q

Qs = []
for f in combinations(range(len(Xm)), 2):
    Qs.append((calculate_Q(Xm[f[0]], Xm[f[1]]), f[0], f[1]))

pd.DataFrame(Qs).sort_values(by=0)
5- Training
Now we finally reach the training step. As we said before, in semi-supervised self-learning we train on the seed, predict on all the data, select the most confident predictions, add them to the seed, and iterate.
Thank god some awesome people have already made that for us; here is the repo for the self-learning library.
and this is the code for this step
from frameworks.SelfLearning import SelfLearningModel  # from the repo linked above
from sklearn.linear_model import LogisticRegression

models = [SelfLearningModel(LogisticRegression(tol=1e-3))
          for _ in range(len(features_schema))]

# one list of feature columns per schema
X_features = [list(schema) for schema in features_schema]

for i, model in enumerate(models):
    X = pairs[X_features[i]].values
    name = 'match_' + str(i)
    pairs[name] = -1                      # -1 marks unlabeled pairs
    pairs.loc[list(Xm[i]), name] = 1      # match seeds
    pairs.loc[list(Xu[i]), name] = 0      # non-match seeds
    model.fit(X, pairs[name].values)
    unlabeled = pairs[name] == -1
    pairs.loc[unlabeled, name] = model.predict(pairs.loc[unlabeled, X_features[i]].values)
6- Selecting the final ensemble using the contribution ratios of BCs
Since the proposed method is fully unsupervised, we are not able to evaluate how good each classification model is. Therefore, there is a risk of including classifiers with very poor accuracy (i.e., below 0.5), which are not valid in general, in the ensemble. To address this issue we use a statistic that takes into account the contribution ratio of each individual BC to the final output of the ensemble. Each BC classifies each record pair as match or non-match, and the mode of all the BCs' predictions is taken as the prediction of the ensemble.
The contribution ratio of a BC is the fraction of record pairs on which its prediction agrees with the ensemble's mode prediction.
And here is the code, which compares each classifier's predictions against the mode prediction, computed on everything except the initial seed:
def calculate_CR(predicted, cols):
    # predicted: DataFrame with one 0/1 prediction column per classifier,
    # plus a 'mode' column holding the ensemble's majority vote
    CRS = []
    for col in cols:
        agree = [int(i == j) for i, j in zip(predicted[col], predicted['mode'])]
        CRS.append(sum(agree) / len(agree))
    return CRS
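calculate_CR relies on a predicted DataFrame that holds each classifier's 0/1 predictions plus a 'mode' column with the ensemble's majority vote. Here is a hypothetical sketch of building that mode column (the prediction values are made up for illustration):

```python
import pandas as pd

# three classifiers' 0/1 predictions on five record pairs (illustrative)
predicted = pd.DataFrame({
    'match_0': [1, 0, 1, 0, 1],
    'match_1': [1, 0, 0, 0, 1],
    'match_2': [1, 1, 1, 0, 0],
})
# row-wise mode (majority vote) = the ensemble's final prediction
predicted['mode'] = predicted[['match_0', 'match_1', 'match_2']].mode(axis=1)[0].astype(int)
print(predicted['mode'].tolist())  # [1, 0, 1, 0, 1]
```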
Conclusion
The problem already has other solutions, such as genetic algorithms, but this approach is very helpful when you are not familiar with the genetic approach and you don't have the time or the resources for labeling the data. The paper already provides its results, and I have tested the approach on different datasets with good results.
This is my first article, I hope you like it, and please don't hesitate to comment with any advice you have for improvements.
Thanks | https://medium.com/@mogady/ensemble-semi-supervised-approach-for-unsupervised-record-linkage-problem-8ca05b873ba2 | ['Ahmed Magdy'] | 2021-04-08 16:34:41.227000+00:00 | ['Unsupervised Learning', 'Machine Learning', 'Semi Supervised Learning', 'Record Linkage'] |
To Be Successful With Your Money, Treat Your Finances Like a Business | You spend a lot of time at work, doing a great job for your boss, your company, and your clients. You use your time effectively, and you work hard to keep your numbers within the parameters.
Shouldn’t you be as diligent with your personal life? I can answer that: Yes, you should! Treat your personal finances like a business and you can be more successful with your money.
Here are some steps to consider when you start to run your finances like your own personal business.
Create More Than One Income Stream
Businesses have more than one client, which is more than one way of getting income. You should also have more than one income stream, in case one goes dry.
You work, you get paid. This is one income stream. What about other ways you can get side money?
Income streams are fed in two ways: either with your time or with some of your money (capital).
If you want to create another income stream with your time, consider a side gig or a second job: create products or services using technology, art, engineering, or science. Look at what interests you and see if you could make some extra money doing something you love.
If you want to invest your money into something that creates another income stream, then you can look at investing in a variety of entities that can earn you money. It will depend on how involved you want to be with your time; buying rental homes may take more time investment than buying long-term stocks or bonds.
Pay Yourself First
You are the employee of your business…er…household. If you want to feel secure and well-compensated, you must be paid a good salary. You (and your family) are the only reason why you do this, because you want to be taken well care of, and you should get compensated for the hard work you do. The best way to do this is to pay yourself first.
Paying yourself first is savings in any form, giving you a foundation on which to build. Saving money for later will give you security, and make it easier to make needed purchases when they come up. It's also the way to plan for important events, holidays, vacations, a new(er) car, or even to buy a home.
Putting away 10% is ideal, but you can work up to that if you're too nervous about paying yourself that much money. This "pay yourself first" money goes towards savings: emergency fund, fuck-off fund, and retirement. With a solid foundation of cash for future and unexpected needs, you place yourself in a much better position for when a financially dire situation hits.
Get Lean with Overhead Costs
The goal is to decrease your debt and keep your monthly expenses as low as possible. As a business, you don’t want to spend more than you’re taking in, and high overhead costs can financially ruin you.
Your overhead costs are what you need to live — mortgage or rent, utilities, food, transportation, etc. Break down your costs for living; how can you decrease this total amount?
Start by going over each bill and find if there is a better, cheaper, or more appropriate way to spend money for maintaining your household.
Recently, I went through all of my services and decided we should try to decrease our utility bills. We found that our household didn’t use cable services, so we discontinued it. After calling our local cell phone company, I updated our plan to something that was less expensive and more appropriate for us. This saved me over $200.00 per month just on two bills.
Look at also decreasing any unnecessary fees or interest, either by paying off loans, or looking at a loan with less interest. And pay bills on time to avoid fees.
Be properly insured
There are 3 different types of ways to lose large sums of money and you must protect yourself from that risk.
“3 Types of Risk in Insurance are Financial and Non-Financial Risks, Pure and Speculative Risks, and Fundamental and Particular Risks. Financial risks can be measured in monetary terms. Pure risks are a loss only or at best a break-even situation. Fundamental risks are the risks mostly emanating from nature.” ~iedunote.com
You will need to insure any asset, and anything you still owe debt on. At the basic level, you will need medical insurance, home or renters insurance, and auto insurance. If you are a freelancer or a self-employed professional, you know that you may need business and professional insurance. It's better to have these upfront. If you don't, you're putting yourself at risk of financial collapse if anything adverse happens.
Streamline Your Daily Operations
You don’t want to spend a lot of your time doing the tasks of bill paying. Your time is money. Instead, work on having the money covered in your budget, and autopay all of your regular monthly bills. Load your accounts with dollars to cover those bills and let the banks do the work for you.
Grow Your Capital
Make sure your money is working for you. Once you are on a path of decreasing your debt, you will also want to tackle increasing your assets. Take an interest in responsibly investing your money into assets that will help you earn even more money. Learn how your money can be used to grow your capital, and get interested in building your wealth.
Plan for The Future
There will be a day you plan to “retire” whatever that will mean as you get closer to that age. Bankroll yourself so that your ‘business’ can pay for you once you decrease or no longer bring in streams of income. What does your golden parachute look like? Make sure you know what you want and plan your future around it, whether it’s next year or twenty years from now. | https://michellejaqua.medium.com/to-be-successful-with-your-money-treat-your-finances-like-a-business-63c60affbfc2 | ['Michelle Jaqua'] | 2020-12-11 04:01:56.588000+00:00 | ['Personal Finance', 'Money', 'Budget', 'Money Mindset', 'Saving'] |
CIT (Center Of InsurTech, Thailand) ศูนย์กลางของนวัตกรรมด้านเทคโนโลยีประกันภัยยุคใหม่ บริการด้วยใจพัฒนาธุรกิจประกันภัยเพื่อประชาชน | Learn more. Medium is an open platform where 170 million readers come to find insightful and dynamic thinking. Here, expert and undiscovered voices alike dive into the heart of any topic and bring new ideas to the surface. Learn more
Make Medium yours. Follow the writers, publications, and topics that matter to you, and you’ll see them on your homepage and in your inbox. Explore | https://medium.com/center-of-insurtech-thailand/cit-center-of-insurtech-thailand-%E0%B8%A8%E0%B8%B9%E0%B8%99%E0%B8%A2%E0%B9%8C%E0%B8%81%E0%B8%A5%E0%B8%B2%E0%B8%87%E0%B8%82%E0%B8%AD%E0%B8%87%E0%B8%99%E0%B8%A7%E0%B8%B1%E0%B8%95%E0%B8%81%E0%B8%A3%E0%B8%A3%E0%B8%A1%E0%B8%94%E0%B9%89%E0%B8%B2%E0%B8%99%E0%B9%80%E0%B8%97%E0%B8%84%E0%B9%82%E0%B8%99%E0%B9%82%E0%B8%A5%E0%B8%A2%E0%B8%B5%E0%B8%9B%E0%B8%A3%E0%B8%B0%E0%B8%81%E0%B8%B1%E0%B8%99%E0%B8%A0%E0%B8%B1%E0%B8%A2%E0%B8%A2%E0%B8%B8%E0%B8%84%E0%B9%83%E0%B8%AB%E0%B8%A1%E0%B9%88-cf62af93f1fc | ['Siwawut Wongyara'] | 2020-12-24 11:29:27.563000+00:00 | ['Thailand', 'Technology', 'Startup', 'Insurance', 'Insurtech'] |
Bloodline — Family Tree Creator. Why do you need to care? | You and I aren't interested in shit so let’s dive into the main topic directly ✊.
Why do we need to create a family tree?
Fun: While creating a family tree you discover more about yourself and your history. Set Netflix aside for some time and spend it with your elders talking about your ancestors; I am sure you will hear more interesting stories than many web series 😅.
Importance: It’s a record of your lineage, showing the members of your family throughout recent, and even distant, history. Many people know Akbar, Alexander The Great, etc., but don’t know about our ancestors lived at that point in time and there is no way we get that information back as it is lost forever. Because of the Technology now we will able to store info and pass it on to further generations.
Within six months of release, Bloodline now runs on a variety of devices and manages around 50,000 users' data from 150+ countries around the globe. | https://medium.com/@bloodline/bloodline-family-tree-creator-why-do-you-need-to-care-e9489f9e43d7 | [] | 2021-01-25 19:07:59.457000+00:00 | ['Android', 'Website', 'Application', 'iOS', 'Family Tree Creator']
20 Terminal Commands That You Must Know | ----------------Manipulation With Files and Folders-----------------
1. Encrypting Files
I know Windows is not exactly famous for the security it offers, but still, there are some methods that can give you a guarded feel. Encrypting files is one of them. Many Windows users use third-party apps to encrypt their data, but Windows also offers a built-in encryption system for securing files.
Open your terminal (Win+R, type CMD and press Enter), point it at the folder containing the files you want to secure, and then simply use the command below.
Cipher /E
Now no one can access your files without the password. If you want to decrypt the files, you can use Cipher /D .
2. File Compare
We all store our important data in files, and over time, as the data changes and gets updated, it becomes very tough to find the differences between the previous and the latest version of a file. You can also relate this to two versions of a coding project: we usually create multiple versions of a project file, and in the end we forget what changes we have made.
Using the file compare command of the terminal we can find the difference between the two files by just a simple line of command.
fc /a File1.txt File2.txt ##Simple compare
fc /b File1.txt File2.txt ##Binary compare (Best For Images)
3. Hiding Folders
You might be thinking that this is the one you already know, but wait: the method you are thinking of is not good enough. We all know the easy way of hiding folders using right-click and then checking the "Hidden" checkbox in Properties. If you know that one, you also know the folders can still be revealed by checking the "Hidden Files" checkbox under View in the top bar. Anyone using your computer can do that and easily access your hidden files. A much better and safer way is to use the terminal.
In the terminal, point the location to the parent of your desired folder and then type the command below.
Attrib +h +s +r FOLDER_NAME ## Attrib +h +s +r studymaterial
Now your folder is completely hidden, and you can't even see it by checking the "Hidden Files" checkbox in the top bar. To unhide the folder, you can use the command
Attrib -h -s -r FOLDER_NAME ## Attrib -h -s -r studymaterial
4. Showing File Structure
I found this one useful because, when you are working in a team on a big project, the file structure is often the most important thing. One mistake in the file structure and all your efforts are wasted. So that you don't make a mistake like this, CMD comes with a built-in command, tree, that prints the folder structure (run tree /F to also list the files inside each folder). | https://medium.com/pythoneers/20-terminal-commands-that-you-must-know-f24ebb54c638 | ['Abhay Parashar'] | 2020-12-23 14:35:17.737000+00:00 | ['Tech', 'Technology', 'Productivity', 'Windows 10', 'Education']
Rickie Goes to Bat for Diabetes Ticket Package On Sale Now | Rickie Goes to Bat for Diabetes Ticket Package On Sale Now
Today, we launched a very special ticket package called Rickie Goes to Bat for Diabetes, supported by RedPrairie. For just $149, fans can purchase a ticket to the Gehl Club at Miller Park for the Friday, May 28 game vs. the New York Mets at 7:10p.
In addition, for each ticket purchased, fans will receive an autographed Rickie Weeks bat and a portion of the proceeds will benefit the American Diabetes Association.
Besides being a great offer for a great cause, this is a unique opportunity to purchase individual tickets to the Gehl Club, one of the most popular all-inclusive areas at Miller Park, which is normally only available for group purchase.
Fans taking advantage of the offer will be able to enjoy the full buffet in the Gehl Club, including two complimentary glasses of wine or beer. Fans will receive their Weeks autographed bat on the night of the game.
Tickets for this special offer are limited and can be purchased by calling the Brewers ticket office at (414) 902-GAME (4263) or by visiting brewers.com/giveback.
Proceeds from this event will flow to Brewers Community Foundation, the official charity of the Milwaukee Brewers Baseball Club, and all donations will be specifically earmarked for American Diabetes Association. Funds generated by Brewers fans benefit American Diabetes Association of Wisconsin.
I hope you all will come out to Miller Park to join Rickie Weeks and RedPrairie in supporting such a wonderful organization and cause!
-CAIT
[email protected] | https://medium.com/@cmoyer/rickie-goes-to-bat-for-diabetes-ticket-package-on-sale-now-14981220cada | ['Caitlin Moyer'] | 2016-11-08 19:14:48.003000+00:00 | ['Dailies', 'Diabetes'] |
Real Love Endures | Real Love Endures
Photo by Pablo Merchán Montes on Unsplash
A story about a
Boy who
Captured me
Dashing
Exquisite, I
Fell
Gullibly
Hopeful that
I had
Just
Kindled true
Love
My eyes glazed
Never fully
Open
Potential was all that I saw until the day he
Quit me
Retreated, regressed to old ways
Shattered me.
Time heals slowly
Until one day you rise and
Value his exit for real love does not
Waver; it holds on, it shows up
Xenial to life’s challenges
Youthful and
Zealous no matter what. | https://psiloveyou.xyz/real-love-endures-32bc0d8a6984 | ['Galit Birk'] | 2020-12-13 13:03:25.183000+00:00 | ['Relationships', 'Poetry Sunday', 'Relationships Love Dating', 'Love', 'Lost Love'] |
Struggling With Your Goals? Take a Break. | Photo by Vladislav Muslakov on Unsplash
This morning I woke up at 5 a.m., just like I do every morning. I poured myself a cup of coffee, did my inspirational reading, and then opened my journal to write my daily pages. It’s a routine I developed years ago, and has helped me to work out issues I’m struggling with, lay out my goals for the day, and keep me focused on the things I hope to accomplish in the long run. I end each journaling session with a checklist for the day, which usually includes tasks related to health, writing, work, and family obligations.
But today, that blank page stared me in the face. I felt the “shoulds” rising up in me. I should write about all the things bothering me right now so I can discover the solutions. I should address the reason I’m not sticking to my health goals, and what I can do differently today. I should figure out why I’m feeling blocked while editing my novel. I should add a bunch of things to my to-do list today so that I can end my work week on a productive note.
I should.
But I didn’t want to.
And so I didn’t.
I closed my journal and made a decision for the day. I was not doing any of it. Gym? No. Sticking to food rules? Nope. Editing my novel? Not today. Instead, I decided that today was a day for no rules, no obligations, and definitely NO SHOULDS.
Every January, there’s a lot of pressure to be productive and gung-ho about taking the right steps to reach goals. Add a new decade (save the debate…it’s the ‘20’s, all right?), and the pressure increases. This is the year/decade when we’ll lose the weight, write that book, reach our dreams, accomplish more, BE more… It’s when we change our lives and finally dedicate our energy to those things that only felt like wishes last year. This year will be different. This year will be when our wishes turn into reality.
And then, life steps in, and much of it comes in the form of bad habits you’ve developed. Remember last month when you partied it up over the holidays? When you accepted the darker mornings and slept in instead of waking up early to get a head start on your day? When you indulged in the desserts pouring into your office or delivered to you via cookie exchanges? Remember the missed workouts and the extra five pounds of winter weight you gained? Remember whatever it is that you slacked on last month, and are trying desperately to get on track this month…and just can’t?
I’m not trying to make you feel guilty. I’m SHOWING you your guilt. We all have these grandiose plans for how we’re going to radically change our lives and make a better version of ourselves, especially after a month of indulgence, and when we slip up, we feel like failures. And then what happens?
We fall completely off the wagon.
We start the day with good intentions, and then end it feeling like we’ll never get it right, like we’re not even worthy of success because we can’t even stick to the plan.
We don’t just eat a bite of cake, we eat the whole thing. We don’t skip one workout, we just stop going to the gym altogether. The manuscript is pushed aside. The emails pile up. The goals feel further and further away.
Stop.
This whole changing your life thing? It’s not simple stuff. It’s actually quite hard. There are going to be moments you’ll slip up. You’re going to take steps backward along with all your forward steps. Your timeline might need to be adjusted, and your actions may be the reason why.
And you might get really, really tired of trying to accomplish it all and ending the day accomplishing nothing.
So stop. Stop trying to be better. Don’t go to the gym. Let go of your lofty goal. Sleep in. Enjoy your favorite food. Binge watch your favorite TV show. Keep your computer closed.
But just for today.
Let go of the guilt and recognize you’re a human being trying to do hard things.
Today, give yourself permission to rest and not be so hard on yourself. Tomorrow you can go back to your goals and all the steps you need to take to reach them. Tomorrow, after you’ve rested, you’ll find the energy to reassess all the things you need to do to reach the finish line, and whether you need to adjust your timeline. Tomorrow you can go back to working toward the dream.
But today? Take a break. | https://medium.com/swlh/struggling-with-your-goals-take-a-break-da9e3590102c | ['Crissi Langwell'] | 2020-01-21 16:44:50.530000+00:00 | ['Self Improvement', 'Life Lessons', 'Health', 'Success', 'Goals'] |
Stonez Talk Vol. 1 | The MetaStonezDAO
Treasury
The treasury of the DAO already holds over 15 ETH (roughly $60,000 USD at current exchange rates). 10% of the Public Genesis Mint proceeds went directly to the DAO Treasury, and the treasury is expected to grow beyond 107 ETH by the time the Origins collection fully mints. The first proposal upon the full mint of Origins will be that Lead Devs @shaunt___ and @karlsinger donate 20 ETH to the DAO Treasury, on top of the 10% of total mint proceeds, 20% of secondary royalty sales, and 50% of Elemental Stonez Auctions (to be held in Q1 2022).
This significant treasury will allow the holders of MetaStonez to have a direct say in how they want this community to have an impact in the Metaverse.
Voting:
A topic that we see constantly questioned in our channels, so we would like to make a final clarification: Genesis and Origins have equal voting power in the DAO.
DAO Utility for Genesis and Origins is the same
You will need to own a MetaStonez NFT to vote. Voting on proposals is held entirely on-chain, with each proposal needing to reach a specific quorum, after which the majority vote wins on a Pass/Fail basis.
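As a rough illustration only (not the actual on-chain implementation), the pass/fail quorum logic could be sketched like this in Python; the quorum fraction and token supply below are hypothetical numbers:

```python
def tally(votes_for: int, votes_against: int, eligible: int, quorum: float = 0.2) -> str:
    """Pass/fail tally: the proposal needs turnout >= quorum, then simple majority wins."""
    turnout = (votes_for + votes_against) / eligible
    if turnout < quorum:
        return "fail (quorum not reached)"
    return "pass" if votes_for > votes_against else "fail"

# Hypothetical numbers: one vote per NFT, 6,666 eligible tokens
print(tally(votes_for=900, votes_against=400, eligible=6666))   # turnout ~19.5%, below quorum
print(tally(votes_for=1200, votes_against=400, eligible=6666))  # turnout ~24%, majority for
```

Under this sketch a proposal with high turnout but no majority still fails, matching the Pass/Fail description above.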
Structure of the DAO and how the system operates
MetaStonez DAO Process Overview
Stage 1: Proposal to be discussed before creating it
MetaStonez will launch a forum where “soft proposals” will be deployed. In this stage, the community will gauge how they feel about the proposal and vote (pass/fail) on whether it advances to the next stage.
If the said proposal is passed, it will be deployed to the official proposal forum of the DAO to be voted on.
Examples of Proposals/Actions/Treasury decisions
Holistic development of Project MetaStonez (gamification, land, tokenomics, partnerships proposals)
Promotion of MetaStonez contributors to Core Team members in development, art, and marketing capacities
Issuance of grants and scholarships to MetaStonez contributors
Purchase & Fractionalization of NFT projects
Acquisition or Distribution of ERC-20/721/1155 tokens on any chain with multi-functional purposes (acquiring, staking, yielding, airdrops)
Purchasing of real-life assets (art, naming rights, real estate, etc.)
Donations to charities or causes voted on by the DAO
Delegation, Election & Assignment of ‘officials’ to carry out proposals for IRL or other off-chain investments
Stage 2: Voting Period Delay
During the voting period delay, the DAO will deploy the first of a couple of fail-safe mechanisms. This will allow holders to reassess the proposal and change their voting decisions, if need be.
Stage 3: Execution
If/when the proposal passes through the voting period, it will move to the execution phase, where another fail-safe mechanism will be deployed:
Stage 4: Execution delay
This will act as a final fail-safe mechanism for holders to review their vote and voice either their confidence or their reservations about the proposal. Based on the outcome of these discussions, it will be passed or failed in a final decision.
Stage 5: Final Governance
In the early onset period of the DAO, MetaStonez will have a fail-safe multi-signature wallet to protect against malicious activity. In the case that we and the community feel there is malicious activity transpiring, the multi-sig will have veto power against the proposal. As the DAO and community mature and diversify to a balanced ownership, we will update the contract to revoke veto power, at which point the DAO will be 100% self-governed.
Lead Backend Dev
MetaStonez Future Utility/Value Cycle
As of right now, Genesis MetaStonez users have benefits in the following ways:
Exclusive Mint Pass for subsequent collections
Enhanced drop rate (over Origins) for Airdrops, Whitelists, and Raffles
Enhanced benefits in future tokenomics yield development (to be approved through DAO proposals)
The team is working on value-creation for membership and participation which is still being refined and scoped. We want to make sure everything is done the right way and commitments are firm when we publicize them, so we can prove sustainable long-term value to all holders through all of our future plans.
Eligible Genesis holders have already received Airdrops and Raffles totalling over 2 ETH in just two days.
Rest assured, bringing value and quality back to our holders is something we have committed ourselves to fully achieving. If and when any such commitments have been solidified, we will maintain the utmost transparency and fill our community in on all of the details in a timely manner.
Thank you all for such a swift launch and we look forward to the Origins Presale Mint on Saturday, January 1st, 2022 at 6 P.M. UTC to be followed by the Origins Public Mint on Sunday, January 2nd, 2022 at 6 P.M. UTC. | https://medium.com/@projectmetastonez/stonez-talk-vol-1-4b5febf7df99 | [] | 2021-12-30 03:25:48.710000+00:00 | ['Utility Tokens', 'Web3', 'Dao', 'Metastonez', 'Nft'] |
Talaash (2012) — Review | Talaash (2012) — Review
Release Date: November 30th, 2012
Genre: Drama, Thriller, Crime
Rating: 7.5
Playtime: 2hrs 19mins
Cast: Aamir Khan, Rani Mukherjee and Kareena Kapoor
Director: Reema Kagti.
Plot: Aamir Khan always gives us a new surprise whenever we watch his films: every time a new subject, a unique way of presentation, a unique style of execution. Talaash is the biggest surprise Aamir has given us yet. It’s a murder mystery. We’ve watched various murder mysteries in films and TV serials, but Talaash is beyond our imagination. It holds your breath from the first frame till the last.
It starts with a road accident near Worli Seaface, where all of a sudden a car comes down an empty road, hits the footpath and jumps into the sea. The person inside the car is a famous actor, Armaan Kapoor, who dies by drowning. There the investigation of this accident starts. For the past 2–3 years, similar accidents have often happened at the same spot, but due to insufficient evidence these cases were moved to the A-Files. Now the investigation of these cases is handed over to Inspector Surjan Shekhawat (Aamir Khan). In parallel, Aamir’s personal life is already disturbed, and this accident adds to his troubled marriage with his wife Roshni (Rani Mukherjee). The accident reminds him of the similar accidental death of his son Karan, and the memories of that painful loss always haunt him.
In the process of investigation, he comes across a prostitute named Rosy (Kareena Kapoor) who helps him at every step and also gives him emotional support. So this murder mystery runs parallel to Aamir’s personal life. How he solves the mystery and his personal life, how the ‘Talaash’ ends: the answer lies in the two and a half hours of the movie. This movie leaves you speechless. Every time you come up with a new suspect, but within a fraction of a second you realize you are wrong.
Your brain stops working while searching for answers, but the mystery keeps going, adding more and more to your confusion. Reema Kagti and Zoya Akhtar’s compact script compels you to stay glued to your seat. Kagti’s direction is very good, especially with Aamir. The most prominent directorial punch in the film is the moment when Aamir imagines the possibilities of what he could have done to save his son.
Conclusion: Though this story is fiction, its impact is such that you start believing it’s real. Kareena looks beautiful on screen, and her performance is very natural. Though Rani doesn’t have much of a role, she has given her best. Ram Sampath’s music keeps revolving in your mind; it is very situational. The upcoming star Nawazuddin Siddiqui, who played “Langda Taimur”, also leaves a strong impression on everyone’s mind. The others have performed well too.
And last but not least, Mr. Aamir Khan: he always sticks to his title, “The Perfectionist”. Whichever role he plays, he goes into it in such a way that you can’t distinguish between Aamir and his character, just like the character Surjan Shekhawat. His performance is enriched by the dialogues written by Farhan Akhtar and Anurag Kashyap. So do watch this movie to unfold the unthinkable, unimaginable murder mystery.
Policing Authorities in Africa | I implore you to listen to N.W.A. If you take the time to listen to the second track on their critically acclaimed debut album, “Straight Outta Compton”, the first 30 seconds offer a foreboding, aghast with something all too familiar with African-Americans, against a backdrop of drums and trumpets, which are somehow harrowing, yet triumphant.
Ice Cube is called to the stand as a witness, swearing to tell the truth, the whole truth and nothing but the truth, to which he blithely obliges. The next line aptly depicts the sentiments that have inundated black people of every ilk, where they are the minority;
“Fuck the police coming straight from the underground, a young n***a got bad cuz I’m brown”
Iconic.
Iconic because what this line captures transcends police brutality and goes beyond all strands of relational society, wherein a social contract exists between an authority and a people. This denotes an implicit agreement between members of society and the state for social benefits, not least of which is STATE PROTECTION. However, the very substratum of this song keys in on Ice Cube’s vulnerability to the police authorities because he’s a black man. Thus, the caveat to this social contract is that it can be breached by living in a certain pigmentation.
Solutions often suggested to rectify this phenomenon include augmenting representation in law enforcement, which would likely reduce the criminal bias placed on minorities. Doing this would abrogate policies that target minorities, which would lead to fewer black people in prison, right?
To agree is to trivialise the depths of police brutality, because in as much as it is a problem delineated by race, it is equally a problem that is exacerbated by corruption and poor governance.
What if I riddled you a story about police brutality and the incessant abuse of law enforcement in a place where black people are meant to be neighbours and not oppress one another?
The story of policing in Africa (in several countries, though not all) is a narrative of flawed leadership and toxic political regimes that have turned a blind eye to illicit practices of government and quasi-governmental officials, allowing corruption to run rampant. Corruption is a given in the remit of politics and, in exceptional circumstances, can be a necessary evil. For example, when power is in the hands of an authoritarian government that keeps bureaucrats under firm control, the state is able to act like a smart monopolist: its employees charge prices that are high but not too high, and are able to deliver what they promise.
The problem is that the debilitating impact that political corruption imposes on those living in the global North are vastly different from those living in the global South, largely by consequence of economic development, suppression of free speech among many others. Nevertheless, the different geographies do not insulate the human rights crisis that people suffer at the hands of police.
#ENDSARS is a decentralised social movement in Nigeria remonstrating for the complete disbandment of a gangster police unit of Nigerian law enforcement, the Special Anti-Robbery Squad, known for unlawful killings, torture and extortion. Yet the legacy of #ENDSARS is not unique in the history of police brutality in Africa. The world mobilizing in support of #ENDSARS isn’t solely a win for Nigeria, but for the continent, because protests rarely occur or, in most cases, are rarely respected by authorities in Africa and those abroad. Such is the example of the #AnglophoneCrisis protests in Cameroon, where over 300 people have been killed at the hands of security forces. In East Africa, violence has been a hallmark of the Kenyan police since its genesis.
In August, a 15-year-old boy with Down syndrome, Nathaniel Julius, was shot in the head and chest for not answering police questions. This must be noted — the South African Police Service kills three times as many people per capita as the US police do. For such a contemptuous relationship to exist, one that isn’t motivated by race, the reason is clear: poor governance and corruption.
A fish rots from the head down, which is why you are exercising your right when you blame country conditions on political leadership. Unfortunately, there is still a very powerful political class of cronies for whom establishing a context where rule of law, checks and balances and efficient systems of bureaucracy exist is the exception and not the rule. The high growth rates African Economic Ministries boast at Davos every year mean nothing if they aren’t transmuted into channels of upward mobility and wider safety nets for the poor — Africa’s most vulnerable demographic to police brutality.
There’s a collective emergence in our universal black voice that is cutting across state lines, tribes, gender and continents. 2020 will close as the year when Africa’s youth activism arrested any fallacious moral portrayal of their undeserving leaders. Every act in pursuit of justice, no matter how unsuccessful, will find its justification in history. The momentum must continue.
#ENDANGLOPHONECRISIS #ENDSARS #CONGOISBLEEDING #AMINEXT #ZIMBABWEANLIVESMATTER #SHUTITALLDOWN #BLACKLIVESMATTER #RAPENATIONALEMERGENCY #ENDCHILDTRAFFICKINGCOTEDIVOIRE #ENDCHILDTRAFFICKINGGHANA #STOPWARINETHIOPIA #ENDPOLICEBRUTALITYINUGANDA
Granny’s Old House | So long ago of this house I remember
Down in the Bottom along side the river
Swinging Bridge just off of her front lawn
Another big swing on Granny’s porch swung
It was in this home in 1959 I was taught time
Three of our generations lived here in this house
This home where my oldest brother was born
Didn’t have a bath but it did have an inside toilet
Taking hoe baths before we knew what they were
The House stands, yet all the people, have passed on | https://medium.com/@oldandgrumpy56/grannys-old-house-2609382b966e | ['T.R. Savage'] | 2020-12-23 19:59:01.293000+00:00 | ['History', 'Memories Of Childhood', 'Grannys Old House', 'Poems On Medium'] |
Internship Week 18 | Photo by Gábor Szűts on Unsplash
Hey there! Thanks for reading. In 18 weeks, I’ve come a long way from where I started at RoleModel Software, and I guess Ken thinks so too.
I got an offer to stay on full-time with the company. I’m excited to keep learning and getting stuff done and I’m looking forward to where God leads me in the days ahead.
In the midst of having a feeling of accomplishment and excitement, there are always unknowns. I wonder what they will look like, and I wonder how I will handle them. As a chronic overthinker, I battle daily with things that have not yet materialized.
I ask myself, “What is the point?” There is none. But, it sometimes doesn’t stop me.
But then I remember. I can trust my unknown future to a known God. Easier said than done. But, what a comfort to an anxious soul. | https://medium.com/@theoluciano/internship-week-18-d9c86f4c4a22 | ['Theo Luciano'] | 2020-12-13 03:46:53.226000+00:00 | ['Life', 'Product Development', 'Praxis', 'Learning Out Loud', 'Software Development'] |
Learn Mercurial revision-control basics in 10 minutes | If you are using git as a version control tool, you can learn the basics of Mercurial as easy as pie. I was wondering if their motto(Work Easier Work Faster) is true or not and I determined to do basic version control actions using mercurial just by using --help and it worked like a charm!
Installation
Doesn't need much explanation: sudo apt install mercurial (the package is named mercurial; the command it installs is hg ).
If you are using other operating systems this link may help you.
If everything goes well you can see the version using hg --version
Config mercurial
After the installation, we need to add a user (just like Git) using this command:
hg config --edit
And in the config file, just set your username under the [ui] section, and you can start using Mercurial:
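For example, a minimal config might look like this (the name and email are placeholders):

```ini
[ui]
username = Jane Doe <jane@example.com>
```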
Initialize a repository
Use this command: hg init .
Now you are on the default branch which is similar to master in git.
If you want to be sure, use this command: hg branch , and it will show you: default
Create a file and commit that file
You can create your file inside the folder or with using this command in Linux or Mac OS: touch myFirstFile.js .
Now, this file is untracked. We can use the Add command to make mercurial track this file: hg add ./myFirstFile.js .
Create another file but don’t add that file. For example touch secondFile.js
similar to git status we can use hg status here and the result will be like this:
Result of hg status
Our first file, which was added, shows with a capital A (for Added), and the second file (which we created but didn't add) shows with a question mark.
To commit that file we must use: hg commit --message "my first commit message" . Now if you run hg status again, you can see that the file is gone from the output, because it has been committed.
Question: How can I see my previous commits?
Answer: Exactly like what you do on Git
hg log will give you a brief history of your commits.
hg log result
changeset is something similar to a Git commit hash.
Create a branch and merge that to default
First check that you are on the default branch: run hg branch and the result should be default .
Create a branch: hg branch branchName .
In Git you need the -b option to create and check out a branch in one step ( git checkout -b branchName ), but here you don't need an option: after creating a branch, your current branch is the one you just created. Now the result of hg branch should be your branch name.
Switch branches:
Simply by using this command: hg update branchName .
Let’s create a file in this branch and commit that file.
touch thirdFile.js
hg add thirdFile.js
hg commit thirdFile.js
Now we want to merge our branch into the default branch. First, we need to switch back: hg update default . Second, we need to merge our branch: hg merge -r myBranchName . Why -r ? -r means revision.
After the merge Mercurial shows a message:
Mercurial merge message
My branch name is dev . I had a file in my branch named thirdFile.js , and after the merge command it has a capital M at the beginning when I use hg status to see my working directory files (M stands for modified).
And the message says clearly that I have to commit this merge:
hg commit --message "merge branch"
Now the default branch is updated and it contains our thirdFile.js .
Clone a remote repository
Mercurial has a test repository that you can easily clone using hg clone with the repository's URL.
Push and pull work mostly the same as in Git, and I'm going to write a separate article about them.
And there you have it! Now you will hopefully feel more confident about using Mercurial! :) | https://medium.com/javascript-in-plain-english/learn-mercurial-revision-control-basics-in-10-minutes-if-you-know-git-4466bd9d1fa9 | ['Poorshad Shaddel'] | 2020-09-25 07:46:29.342000+00:00 | ['Git', 'Mercurial', 'Programming', 'Software Development', 'Version Control'] |
Creating an Ethereum Token to Enable a Decentralized Rent-to-Own Network | Introduction
With real estate prices soaring to record heights, it has never been more difficult to qualify for a mortgage. Even those with steady, well-paying jobs can struggle to prove their creditworthiness and save up enough cash to start building equity in a home. Rent-to-Own, as the name implies, is an alternative route to homeownership in which renters have the option to purchase the property when their lease expires. While the terms of the agreement can vary, what this essentially means is that a certain portion of each rent check goes toward an eventual down payment. Ideally, this arrangement is a win-win: landlords benefit from extra-motivated tenants, and dependable renters are rewarded for their financial responsibility. However, in practice, Rent-to-Own is far from perfect. Renters who decide to move have to forfeit their accrued payments and start over. Predatory landlords change the terms at the last minute, and both parties must rely on each other to keep honest and accurate records.
Fortunately, innovations in Decentralized Finance (DeFi) offer a potential solution to these problems. Using smart contracts, we can program the business logic of rent-to-own agreements as independently-verifiable transactions in an immutable ledger. We can even create a cryptocurrency-based incentive system that rewards both renters and landlords. These are the principles behind the RTO token, the non-transferrable asset class that drives a potential network of Rent-to-Own participants. Let’s start with a conceptual overview of the (preliminary) economics before diving into technical details.
Motivation
For the RTO system to work, it must benefit all stakeholders. First, landlords who list a home through any rent-to-own scheme expect to make a certain amount of cash during the rental period before the sale. Typically, this represents some portion of the property’s appraisal value. Once this threshold is met, they want to sell to a motivated, creditworthy buyer. But how do you go about finding such a buyer? Credit scores do not tell the whole story, and references can be easily forged. Because of the difficulty in establishing trust through these imperfect metrics, landlords often have no choice but to allow each tenant to prove themselves through firsthand experience (i.e. years of on-time payments). For this reason, most rent-to-own agreements are non-transferrable, meaning that tenants have to return to square one if they decide to move.
Thus, a ubiquitous network of rent-to-own homes and participants would be advantageous. Property management companies would be one way to achieve this, but such centralized options remove individuals’ autonomy in exchange for standardization and shared infrastructure, like secure payment platforms and databases. RTO offers a decentralized alternative. We start by leveraging the secure, stable, and immutable financial transactions that are already baked into the Ethereum ecosystem, wrapping our business logic in a smart contract to establish trust among distributed network participants.
The RTO Smart Contract
Our smart contract is simple, but powerful. To list a rent-to-own home on the RTO network, landlords must first be verified by the contract administrator. Once approved, they can add their rental property, which is defined by the renter’s ETH address, an earnings threshold (representing a USD amount that they would like to recover before selling the home), and an earnings percentage (the fraction of each rental payment that will contribute toward the earnings threshold). New homes and renters are initialized with an empty RTO balance, and homes will not be sold until both buyer and seller have an RTO balance that exceeds the earnings threshold. This condition ensures that landlords are guaranteed a certain amount of profits and renters have proven themselves creditworthy.
Each time rent is paid (in ETH) through this smart contract, both the renter and home acquire non-transferrable, dollar-normalized RTO. The exact amount of RTO earned is determined by the following equation:
amountToMint = rentAmountEth * earningsPercent * usdPriceEth
So let’s say a home has a monthly rent of $2000 (market value of about 1 ETH at the time of writing), an earnings percentage of 20%, and an earnings threshold of $20,000. With each payment, 1 ETH * 20% * ($2000/ETH) = 400 RTO are added to the balances of both renter and home. At this rate, the renter will have the opportunity to purchase the house after about 4 years of steady payments. If they decide to move, both tenant and home retain their RTO balance. It’s important to note that valid rent payments are the only way to acquire RTO — tokens are non-transferrable, non-burnable, and cannot be swapped on secondary markets, as opposed to standard ERC20 coins. Like a credit score, there are no shortcuts to establishing good financial standing.
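The arithmetic above can be checked off-chain. A minimal Python sketch of the same formula (this mirrors the math only, not the contract code):

```python
import math

def rto_minted(rent_eth: float, earnings_pct: float, usd_per_eth: float) -> float:
    """RTO minted per rent payment: rentAmountEth * earningsPercent * usdPriceEth."""
    return rent_eth * earnings_pct * usd_per_eth

def payments_to_threshold(threshold_usd: float, per_payment: float) -> int:
    """Number of on-time payments needed before the earnings threshold is met."""
    return math.ceil(threshold_usd / per_payment)

# The worked example from the text: 1 ETH rent, 20% earnings, $2000/ETH, $20,000 threshold
per_payment = rto_minted(rent_eth=1.0, earnings_pct=0.20, usd_per_eth=2000.0)
print(per_payment)                                 # 400.0 RTO per month
print(payments_to_threshold(20000, per_payment))   # 50 payments, i.e. about 4 years
```

Running the numbers confirms the text: 50 monthly payments of 400 RTO each before the home can change hands.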
By making the critical information publicly available, the RTO network creates a competitive marketplace where, in addition to amenities and square footage, participants can compare and negotiate the terms of the earnings agreement. Even within the contract’s relatively simple boundaries, dozens of unique, mutually-beneficial scenarios can occur naturally, such as:
Renter and home both start with 0 RTO and renter does not move: This represents the traditional rent-to-own structure. Both entities reach the earnings threshold at the same time, at which point the renter can buy the house.
Renter RTO > Earnings Threshold & Home RTO < Earnings Threshold: Renter is ready but landlord is not. By the time both conditions are met, the renter may have enough leverage to negotiate for a more favorable mortgage (e.g. 20 instead of 30-year term).
Home RTO > Earnings Threshold & Renter RTO < Earnings Threshold: The opposite of the last scenario. Here, the renter still has to prove themselves, but they may wish to accelerate their earnings by negotiating for a higher earnings percentage.
Code and Tutorial
Skip this section if you’d like to get right to the demo of the decentralized application.
Deploying the Smart Contract
The business logic discussed above is codified in the following smart contract, written in Solidity.
If you’re familiar with object-oriented programming languages, most of the code should be pretty straightforward. I would like to highlight a couple of key functions:
getThePrice() : RTO should be pegged to the dollar to account for the volatility in the price of ETH. To normalize ETH payments, we need a way to access the current USD/ETH market price. Unfortunately, we can’t just call the CoinGecko API within Solidity, but we can access a “Price Feed”, which cleverly stores the price on-chain via a Chainlink aggregator interface. More on that here.
: RTO should be pegged to the dollar to account for the volatility in the price of ETH. To normalize ETH payments, we need a way to access the current USD/ETH market price. Unfortunately, we can’t just call the CoinGecko API within Solidity, but we can access a “Price Feed”, which cleverly stores the price on-chain via a Chainlink aggregator interface. More on that here. payRent(address _to): This is the heart of the smart contract, so it deserves special consideration. First of all, we make sure that the payment is made from the home’s renter to its verified landlord. Next, as a payable function, we also facilitate the transfer of ETH (L119), the amount of which is stored in the transaction data. Finally, we calculate amountToMint (see the equation above) and update the balances of home and renter accordingly.
I deployed the smart contract to the Ropsten Testnet using the Remix IDE and my Metamask wallet. Here are more detailed instructions if you’re interested. Before we move on to the application, we (as the contract deployer) need to add a verified landlord and sign the transaction using the same account that deployed the contract (representing a trusted administrator). We can do that right in Remix.
Image by author.
We can also view our contract and all associated transactions on Etherscan.
Application Infrastructure and Metamask Integration
The application provides methods for us to use the smart contract functions through a payment portal-esque interface. I’m using Python’s FastAPI to create the API endpoints, the Web3.py library to prepare (but not sign, more on that in a second) the Ethereum transactions, and a basic PostgreSQL database to store some off-chain data, like URL’s to images of the homes. Rather than asking the user to hard-code their private keys into the application itself (a SERIOUS security risk), we sign transactions using the popular Metamask browser extension. For example, we generate the transaction data for the payRent function like this:
# from transactions.py

def payRent(landlord, amount_eth):
    txn_dict = to_contract.functions.payRent(w3.toChecksumAddress(landlord)).buildTransaction({
        'chainId': 3,
        'value': w3.toWei(float(amount_eth), 'ether'),
        'gas': 2000000,
        'gasPrice': w3.toWei('2', 'gwei')
    })
    return txn_dict
We can then send this data to Metamask with a few lines of JavaScript:
// from confirm_transaction.html

sendTxnButton.addEventListener('click', () => {
  sendTxnButton.disabled = true;
  sendTxnButton.innerHTML = "Connecting to Metamask...";
  ethereum
    .request({
      method: 'eth_sendTransaction',
      params: [
        {
          from: ethereum.selectedAddress,
          to: "{{contract_address}}",
          value: "{{value}}",
          gasPrice: '0x4A817C800',
          gas: '0x1E8480',
          data: "{{txn_data}}",
          chainId: '0x3'
        },
      ],
    })
    .then((txHash) => logTransaction(txHash))
    .catch((error) => console.error(error));
});
The full project code, with instructions of how to run the demo application, can be accessed here.
Demo
Now for a quick demo that demonstrates functionality from the perspectives of both landlord and renter. Before the tenant and home can start earning RTO, our landlord will have to add their property to the smart contract. They can do that by navigating to the “View Listings” tab and filling out a quick form.
Role: Landlord
Listing a new rental on the RTO network. Image by author.
Some of these values, like the street address and description, will be stored in our off-chain database. Others, like the down payment (aka earnings threshold) and earnings percentage, will be stored in the smart contract. This page simply prepares the function details, but they will then be asked to explicitly sign the transaction using their Metamask wallet. | https://towardsdatascience.com/creating-an-ethereum-token-to-enable-a-decentralized-rent-to-own-network-cc3786cf1142 | ['Evan Diewald'] | 2021-07-24 11:55:49.347000+00:00 | ['Metamask', 'Defi', 'Smart Contracts', 'Ethereum', 'Rent To Own'] |
Kendall County, Illinois | I’m usually out at this time of year photographing snow laden fields lined with drifting ditches. But the rural spots near my home have the air of Spring — even the light casts itself brightly on barns and farmhouses. The only hints of Winter are the deep black shadows and silhouettes of trees.
We’ve had only one light dusting of snow here in the Midwest.
Tree Line Between Fields, ©V.Plut
Tree with Outbuildings, ©V.Plut
I always think of Mr. Thomas Rowland when I revisit this area with my camera in tow — buried alone beneath a white military marker in the local cemetery. He came from Nottinghamshire, England in the late 1850’s with his wife and three daughters, and drove to Chicago in an oxcart, where his wife took sick and died. He made his way down to Kendall County, Illinois where the earth is black and rich with nutrients, a place where a farmer might make a living.
CS371g Fall 2020 Final Entry: Siddhesh Krishnan | That’s me!
The following are long-term takeaways from this class:
test first, test during, test after, test, test, test
when designing algorithms, demand the weakest iterators (e.g. bidirectional vs. random access)
when designing containers, provide the strongest iterators (e.g. random access vs bidirectional)
build adapters on top of containers, iterators, and functions
always look for reuse and symmetry in your code
collaboration is essential to the quality of your code and to your well-being in producing it
refactor, refactor, refactor
make your code beautiful
How well do you think the course conveyed those takeaways?
I believe that this course did an amazing job in conveying these key takeaways! Coming into this course, I had no experience with C++ and the various nuances which C++ carries with it. However, with this course, I was able to learn more about containers and functions and also solidify my understanding of iterators and the different types of iterators. Based on this, I believe that this course has done an outstanding job in conveying those takeaways.
Were there any other particular takeaways for you?
Some other key takeaways which I obtained from this class include the purpose of Unit Tests and Acceptance Tests, the use of Continuous Integration and Version Control, and the concept of UML diagrams. I believe that these takeaways are as crucial as the ones mentioned above when it comes to good style and good programming.
How did you feel about two-stage quizzes and tests?
I liked the two-stage quizzes and tests! It really gave me a good opportunity to first try to tackle the problem on my own and then work on it with a small group. It was always nice to already come in with an idea and explain it to my group and then get their perspective.
How did you feel about cold calling?
While cold-calling could be intimidating at times, I really did like the cold calling as it made me more accountable to keep up with class material and stay on task during class (especially now with the online format) so that I would be able to answer the questions if Professor Downing did call on me.
How did you feel about office hours?
While I never had the opportunity to attend office hours, I believe that they would be really useful in solidifying concepts that anyone is confused about. They would also come in handy for project help.
How did you feel about lab sessions?
Lab sessions were extremely helpful when it came to setting up the appropriate files for a project or just getting the foundations set for the projects such as having Docker, GitLab and the other tools necessary. With this, I believe that the lab sessions were really helpful in this class.
Give me your suggestions for improving the course.
Overall, I believe that Professor Downing did an outstanding job in instructing this course! I was able to gain a better understanding of various Object-Oriented Programming styles and learn more about C++. I really enjoyed the structure of this course and would not change the format of the course.
Measuring Content Engagement in Google Analytics — Part 3: Social Site Exits | Difficulty: 2/5
Requirements: Google Analytics, Google Tag Manager
Site exits aren’t typically an ideal situation. However, there are times where they should be measured as a small win. If, for example, you have a strong desire to send users from your site over to social media networks for further coverage, this is a winning exit, not a loss.
Other examples might be when you have 5 partner sites with varying domains — while it may be considered an exit in Google Analytics, you should mark this departure as one which is leading to another part of your business.
In this article I’m just focusing on Social Exits. This is where a user leaves your site by clicking on a link which is specifically pointing at a social media page owned by your company.
The setup for this one is actually really simple. It relies on a couple of things:
A GA Event Tag
A Link Click trigger
A Regex Lookup Table
New Variable: Regex Lookup Table
It’s a straightforward regex lookup table; we use regex so we can enable a partial match, just in case of the occasional stray trailing slash etc.
Make sure you untick Full Matches Only and Enable Capture Groups and Replace Functionality. A rule of thumb with the last one is to only enable it if you intend to use it. This can create some odd outputs if you are unsure how capture groups work.
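To make the lookup behaviour concrete, here is a rough Python sketch of how such a table resolves a click URL (the patterns and profile paths below are placeholders; substitute your own social profile URLs):

```python
import re

# Hypothetical pattern -> output rows, mirroring a GTM regex lookup table.
# Partial matching (i.e. 'Full Matches Only' unticked) tolerates stray
# trailing slashes, query strings, etc.
SOCIAL_LOOKUP = [
    (re.compile(r"facebook\.com/yourpage"), "Facebook"),
    (re.compile(r"twitter\.com/yourhandle"), "Twitter"),
    (re.compile(r"linkedin\.com/company/yourco"), "LinkedIn"),
]

def lookup_social_network(click_url, default=None):
    """Return the output for the first matching row, like GTM's lookup variable."""
    for pattern, output in SOCIAL_LOOKUP:
        if pattern.search(click_url):  # search() gives the partial match we want
            return output
    return default

print(lookup_social_network("https://www.facebook.com/yourpage/"))  # Facebook
print(lookup_social_network("https://example.com/about"))           # None
```

If a click URL matches none of the rows, the variable resolves to its default value, which is what lets the tag fire only for genuine social exits.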
New Trigger: Link Click
From a user’s perspective, they won’t notice the delay. You could make it shorter but 2 seconds is the default set by GTM and provides a good safety net.
New Tag: Google Analytics Event
Click “+New Custom Metric”
Name your new custom metric “Social Exits” — always go with pluralised names for Metrics because when you report on them, it makes far more sense. We’re not looking at a single action; we are going to be summarising many.
Once saved, you’ll be able to find out the Index number to use within the GTM Tag. For us, it’s 6:
With this index number memorised, head back to your tag and add it as a Custom Metric:
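For reference, the event hit that ultimately reaches Google Analytics carries the custom metric keyed by that index. Roughly, using Measurement Protocol parameter names (the category/action/label values here are assumptions for illustration):

```python
# Sketch of the GA event hit for a social exit, using Measurement Protocol
# parameter names; 'cm6' is the custom metric at index 6, incremented by 1.
hit = {
    "t": "event",            # hit type
    "ec": "Social Exit",     # event category (illustrative naming)
    "ea": "Click",           # event action (illustrative)
    "el": "Facebook",        # event label, e.g. from the lookup table
    "cm6": 1,                # 'Social Exits' custom metric at index 6
}
print(hit["cm6"])  # 1
```

This is why the index number matters: change it in the GA admin and the tag must be updated to match, or the metric silently stops collecting.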
With everything set, try previewing your container and testing some social links on your site. So I can actually see the event fire in Preview Mode, I will test by using CTRL+Click which opens the URL in a new tab.
If I open up the Event, I can see all the information passed to GA:
This is working exactly as intended — the next part is to publish your work and let the data collect. Depending on how much volume your site receives, you might be able to start seeing the effect within a day or two. If you jump into Google Data Studio, you could make a table with the Dimension “Page” and the metric “Social Exits” to gauge which pages trigger users to exit to social media, if any. | https://medium.com/@jakekimpton/measuring-content-engagement-in-google-analytics-part-3-social-site-exits-c5e5e7f04a73 | ['Jake Kimpton'] | 2020-03-04 10:36:01.259000+00:00 | ['Google Tag Manager', 'Marketing', 'Measurement', 'Engagement', 'Google Analytics'] |
A weeknote, starting Monday 4 Mar 2019 | One of the most interesting things in my week was the Cross-government Service Design meetup held in Sheffield so that’s the main focus of this note.
Poster showing the title of the meetup I went to this week. It was the seventh cross-government service design meetup and the theme was designing for context
It was the first time I’ve been able to meet up with fellow service designers from other departments and it was a good opportunity to hear about some of the great work they’re doing under the theme of designing for context. Thanks to Martin Jordan and team for organising it.
It was also an opportunity to put real life faces to some of the people I only know so far from twitter.
One of the things I picked up early in the session is that context is dynamic and can change how the same person interacts with the same service in the same channel. An exercise run by Ben Carpenter highlighted some universal barriers people face:
Time
Money
Access
Interface and interaction skills
Self confidence
Awareness
Comprehension skills
Emotional state
Trust
Evidence
Enthusiasm
It was great that he was willing to share his emerging thinking on this and explore it with a willing group. I’m looking forward to see where this work goes next.
We also had two talks, one about designing for prisoners and prison service staff, the other about a discovery project relating to the replacement/redevelopment of a key law enforcement system. Big respect to those who do a similar job to me but in much more challenging situations.
Gemma Barnes, Marc Kendrick and Eddie Shannon shared their challenges and some of the impactful design they’re developing in prisons. They’re doing some important work which will have a real impact for the people they’re designing for.
Photo of a screen showing the title slide of a presentation called Designing for prisons
The second talk also featured some great illustrations by Chris Taylor which I enjoyed as someone who spent last year following through on a resolution to devote more time to drawing.
Screen showing a slide from a presentation with an illustration of a man holding a letter from the police
I enjoyed hearing from Amy Everett about how the research had been done. This included how spending extended periods of time with research participants helped get past the point where they are on their best behaviour and showing what they think should be done (and what they think you want to see) and to a point where they revert to more usual behaviour which begins to reveal where the challenges and opportunities for improvement actually lie.
Other stuff this week
Preparing to run a lightning decision jam next Monday with our wider team as a way to inject some momentum into our collective response to our employee engagement survey results
Got some more stakeholder buy-in to using service blueprints to help us understand our services in a more joined-up way, and identifying ways to use them as engagement tools with a wider pool of senior stakeholders
Did some thinking about content for a presentation I’ve committed to next month at a breakfast tech meetup in Nottingham
Looking forward to next week | https://medium.com/@pjmoran/a-weeknote-starting-monday-4-mar-2019-c3e50d87a5dc | ['Paul Moran'] | 2019-03-09 22:36:24.643000+00:00 | ['Weeknotes', 'Government', 'Design', 'Service Design'] |
Townsend Shine Down | Townsend Shine Down
Thick sweater in a 73 fahrenheit room
No one is around
One is inflamed
Feels like an ordinary afternoon
Music in AirPods
Call in Bedroom
Good vibrations are setting the frame
One Hundred Seventy Eight, countdown
How much longer till I am gone
Take a polish with me
My forehead got flooded with emotions
Take a polish with me
Bright daylights dim down
Take a polish with me
Leave my toes crystal clean in town
Take a polish with me
Fill the blank pages with tea leaves
Fill the cracks with euphoria
Tell me you’ll write my name on it
when the Townsend Shines Down | https://medium.com/@kaearea/townsend-shine-down-17eaefe28f1a | ['Monty Burns Kate Crubs'] | 2020-12-04 01:07:30.206000+00:00 | ['San Francisco', 'Mood'] |
Lyrics RHCP Outside | Am F C
How long how long will I slide
G Am F C
Separate my side I don’t
G Am F
I don’t believe it’s bad
C G
Slit my throat it’s all I ever
Am Em
I heard your voice through a photogragh
Am Em
It thought it up it brought up the past
Am Em
Once you know you can never go back
G Am
I’ve got to take it on the otherside
Am Em
Centuries are what it meant to me
Am Em
A cemetery where I marry the sea
Am Em
Stranger things could never change my mind
G Am
I’ve got to take it on the otherside
G Am
Take it on the otherside
G Am
Take it on take it on
[Chorus]
Am Em
Pour my life into a paper cup
Am Em
The ashtray’s full and I’m spilling my guts
Am Em
She wants to know am I still a slut
G Am
I’ve got to take it on the otherside
Am Em
Scarlet starlet and she is in my bed
Am Em
A candidate for my soul mate bled
Am Em
Push the trigger and pull the thread
G Am
I’ve got to take it on the otherside
G Am
Take it on the otherside
G Am Em C x 2
Take it on take it on
[Chorus]
Em
Turn me on take me for a hard ride
C
Burn me out leave me on the otherside
Em
I fell and tell it that it’s not my friend
C
I tear it down I tear it down
Am F C G
And it’s born again
[Chorus]
Am F C G Am F
How long I don’t I don’t believe it’s bad
C G Am
Slit my throat it’s all I ever | https://medium.com/@bangkillasem/lyrics-rhcp-outside-910ee55e7b9b | ['Indonesian Recipes Food'] | 2020-11-16 04:02:29.897000+00:00 | ['Startup', 'Songs', 'Lyrics'] |
We Need To Abandon This Piece of Relationship Advice | We Need To Abandon This Piece of Relationship Advice
We’ve all heard it before — but is it serving us?
Photo by George Gvasalia on Unsplash
I think back to all of the advice I’ve been given over the years about love and I can’t help but feel deceived. Wisdom is passed down to us in many forms and when it comes to love, it finds its way to us in bad clichés: love means never having to say you’re sorry, love will set you free, love is blind. I learned to accept that life is not fair and that it could be hard, but for most of my life I fervently refused to accept that love would be anything but idyllic when I finally found the right person.
It frustrates me to think of how for many years I believed that when I met the right person that it would be easy. I can’t count the number of times someone gave me this advice or some variation of it. When you meet the one it will just feel right or you will be surprised how easy it all is. Looking back I can see that part of the reason I never questioned this advice — and rather clung to it with determination — is because I desperately wanted it to be true. | https://medium.com/wholistique/its-time-we-abandon-this-piece-of-relationship-advice-ff7f843226c7 | ['Casey A.'] | 2020-12-24 16:30:06.355000+00:00 | ['Relationship Advice', 'Relationships', 'Relationships Love Dating', 'Love', 'Personal Growth'] |
TV Picks: 22–28 March 2021 | If you appreciate my UK TV Picks, and they make your viewing a little easier to manage, please consider buying me a coffee, or making a PayPal donation. (It’s also my birthday week, so…)
— MONDAY 22nd —
★ FOOTBALL’S DARKEST SECRETS — BBC1, 9PM.
New documentary series about the child abuse that happened in youth football during the 1970s to the 1990s. Continues tomorrow. (1/3)
Also watch:
24 Hours in Police Custody — Channel 4, 9pm. Series 11 of the observational documentary. (1/4)
Hoarders — Channel 5, 9pm. Series 2 of the observational documentary show about hoarding and how people with this compulsion can be helped. (1/6)
— TUESDAY 23rd —
★ KATE GARRAWAY: FINDING DEREK — ITV, 9PM.
Documentary about presenter Kate Garraway’s personal experience with COVID-19 after her husband Derek was hospitalised with the virus a year ago.
Also watch:
The Detectives: Fighting Organised Crime — BBC2, 9pm. New observational documentary following Manchester detectives. (1/5)
Strangers Making Babies — Channel 4, 9.15pm. Documentary about the rise of platonic partnerships as 70,000 people are looking for people to have children with, without love or marriage playing a role. (1/4)
— WEDNESDAY 24th —
★ THIS IS MY HOUSE — BBC1, 9PM.
New gameshow where four people claim to own the same house, meaning a panel of celebrity judges (Bill Bailey, Emily Atack, Judi Love, Jamali Maddix) have to deduce who is telling the truth. Hosted by Stacey Dooley. (1/6)
Also watch:
First Dates Hotel — Channel 4, 9pm. Series 6 of the First Dates spin-off. (1/6)
Predator: Catching the Black Cab Rapist — Channel 5, 9pm. Documentary about John Worboys, the cab driver who drugged and raped over 100 women between 2000–08.
Framed by the Killer — Sky Crime, 9pm. New crime documentary focusing on cases where the real killer framed an innocent person. (1/3)
— THURSDAY 25th —
★ BLACK POWER: A BRITISH STORY OF RESISTANCE — BBC2, 9PM.
Documentary about the experience of black British people during the 1960s and 1970s. Featuring Altheia Jones-LeCointe, Darcus Howe & Roy Sawh.
Also watch:
Sort Your Life Out — BBC1, 8pm. New series helping people declutter their homes. Hosted by Stacey Solomon.
COVID-19: Through the Storm — Sky Documentaries, 9pm. Observational documentary about COVID-19 from inside NHS hospital wards.
— MY 42nd BIRTHDAY! —
★ INVINCIBLE — AMAZON PRIME VIDEO.
New US adult animation about a teenager (Steven Yeun) whose father is a famous superhero (J.K Simmons), and starts to develop his own abilities while realising his dad’s legacy may not be all that honourable. Co-voiced by Gillian Jacobs, Andrew Rannells, Zazie Beetz, Jason Mantzoukas, Sandra Oh, Seth Rogen, Mark Hamill, Walton Goggins & Zachary Quinto.
Also watch:
Churchill — Channel 5, 9pm. New documentary series looking at the life of the iconic British Prime Minister, Winston Churchill, who was born into the aristocracy and led the country to victory during World War II. (1/6)
— SATURDAY 27th —
★ BEAT THE CHASERS: CELEBRITY SPECIAL — ITV, 8.30PM.
Spin-off to The Chase with celebrity contestants taking on Mark Labbett, Anne Hegerty, Shaun Wallace, Jenny Ryan & Darragh Ennis. Hosted by Bradley Walsh.
— SUNDAY 28th —
Nothing. Have you clipped your toenails this week? | https://dansmediadigest.co.uk/tv-picks-22-28-march-2021-9a8ff02d7af1 | ['Dan Owen'] | 2021-03-22 17:02:49.470000+00:00 | ['Tv Picks', 'TV', 'Listings', 'Television', 'UK'] |
Healthy Habits Entrepreneurs Should Adopt | Entrepreneurs, ultimately, are responsible for their success. Therefore, maintaining a healthy lifestyle is a vital part of ensuring you’re successful. Below, we’ve outlined some of the best healthy habits you should make a part of your daily routine.
Wake Up Early
An early start to the day has many benefits. More than anything, it allows you to take personal time before you go to work or attend to other commitments. Research has shown that individuals who take 30 minutes to an hour for themselves before work are happier and less likely to experience work-based burnout.
Eat Breakfast or Have a Smoothie
Another essential habit you should develop as an up-and-coming entrepreneur is a regular breakfast routine. You’ve likely heard the phrase “breakfast is the most important meal of the day” a time or two in your life. Breakfast helps kick-start the brain, allowing you to be more productive and have more energy throughout the day.
Take Time to Meditate
While meditation can be difficult, taking even 15 minutes a day to meditate allows you to clear your mind, slow down and be present in the moment. It gives you the ability to reflect on your personal goals, remind yourself of where you are, and create a mindset that will allow you to get to where you want to be.
Exercise Daily
Exercise is also essential for many reasons. Regular exercise helps release energy from the body, helping you sleep better at night. It also helps reduce the adverse side effects of stress, anxiety, and depression — which thousands of individuals across the world battle. Plus, it has dozens of health benefits. Ultimately, it keeps your blood pumping and helps relieve the tension built up throughout the day.
Maintain a Regular Sleep Schedule
Entrepreneurs often suffer from exhaustion. Maintaining a regular sleep schedule, however, helps negate this exhaustion. It allows you to be more productive and, ultimately, to perform better daily. Realistically, it’s the crucial step to developing any routine.
These are just a few of the healthy habits successful entrepreneurs adopt. Ultimately, habits breed success, and adopting these practices can improve entrepreneurs’ overall health.
Why Wait for the New Year? Start Now. | Why Wait for the New Year? Start Now.
Photo by Avi Richards on Unsplash
New year New Goals! I am going to start my side project in 2021. I am going to start exercising daily in 2021. I am going to smoke less in 2021.
But why? Why should we wait for the new year before starting?
This realisation struck me as I was watching the latest vlog by Ryan Serhant (one of America’s top real estate brokers): You’re WASTING Your Time. In his introduction, he said: “Why wait? It makes no f**king sense.”
He then goes on and talks about the importance of seizing the present and putting in the work consistently for your dreams.
Many of us have our personal projects that we will like to start doing. It can be starting a side hustle like writing or doing youtube, executing a new fitness plan or even starting to save consistently.
However, many times we like to procrastinate and tell ourselves to start our projects later, perhaps on a new day or even in a new year. New Year’s resolutions are common because they are a promise, a personal commitment to do something that will improve your life in the coming year. They invoke a feeling of renewed hope.
Yet, it makes no difference. There is no point waiting for a new year to start a change when you can just start it now. In fact, it is better to start now as you can achieve your goals faster. If the resolution means so much to you, just do it today.
Photo by Bruno Nascimento on Unsplash
So yes, start today. Even if it is just a couple of days before 2021, it does not hurt to start now. By starting a couple of days earlier, you are a few steps closer to your goals. Acting now also creates a positive feedback loop that generates further emotional reactions and inspiration, which in turn motivates your future actions.
Taking action today and starting before the new year creates a strong mindset that is essential to sticking to your commitment in the long run.
This is because you are telling yourself that this project is a non-negotiable one and you can and will commit to it, regardless of the occasion.
Only 23% of people who set a New Year’s resolution actually follow through with them, according to a survey by FranklinCovey. Thus, if you treat each day as a gift and seize the present moment instead of waiting for the magical first day of the year, you realize that you have a greater chance of following through with your goals.
Let’s start today for a better tomorrow.
Thank you for reading to the end. I really appreciate it. If you are interested to join me in this self-development journey, consider following me on Medium, Youtube or LinkedIn. Have a great year ahead! | https://medium.com/change-becomes-you/why-wait-for-the-new-year-start-now-8b8c278db823 | ['Rui Yi Gan 颜睿易'] | 2021-01-16 02:05:41.711000+00:00 | ['Growth', '2021', 'Dreams', 'Resolutions', 'Work'] |
How Video Stores are Surviving during COVID-19 | The former location of Eddie Brandt’s Saturday Matinee in North Hollywood, LA.
When the COVID-19 outbreak mandated a nationwide shutdown this past Spring, small businesses were among the worst hit. Sadly, that included most of the last remaining video stores across the country. Despite the collapse of Blockbuster Video and Hollywood Video a decade ago, some independently run establishments have still managed to hang on, even with the ever-increasing popularity of streaming, thanks to dedicated customers and the deep catalogue of films available only on physical media. When the shutdown began in mid-March, many people believed, or at least hoped, that it would only last for a few months. Tragically, it quickly became apparent that wouldn’t be the case. As people and businesses struggled to adapt to stay at home orders, some video stores could not survive the crisis.
The first major blow came on April 7th when Vulcan Video in Austin, Texas announced it was closing permanently. This renowned store, which first opened in 1985, has been an iconic destination for film lovers. The store was popular enough to get a segment on Jimmy Kimmel’s late night show in 2015, where Kimmel and Texas native Matthew McConaughey made a hilariously cheesy commercial for the store. The skit is available on YouTube, and now serves as a memorial for a lost haven of cinema. The Austin Film Society has also done a series of interviews with former Vulcan employees on their YouTube channel.
Next came Odd Obsession in Chicago, which on April 28th announced they were closing their current location. A Chicago institution for 16 years, it was cherished not only for its 25,000+ film collection but also its treasure trove of bizarro Ghanaian film posters. They have put their vast collection in storage with the hopes of reopening, but no one knows when or if that will be possible. Even before the shutdown, they had attempted to fundraise online, but fell short of their $25,000 goal. Ignatiy Vishnevetsky, film critic and former Odd Obsession employee, penned a lovely tribute to the store for The AV Club.
The empty shelves of Eddie Brandt’s, no longer filled with VHS.
And then came Eddie Brandt’s Saturday Matinee in North Hollywood, California. First opened in 1969 as a movie memorabilia store, Eddie Brandt’s counted celebrities like Quentin Tarantino among their customers, and still rented out VHS tapes, including many out of print titles that never made it to DVD. On May 12th, the family announced they were moving out of the current location, and going into storage until they’re able to reopen elsewhere. I drove over the next day to see the store one last time, but by then the shelves had already been emptied, leaving only the dusty wooden shelves that once held rare and unique artifacts. By my account, it seems that no Los Angeles news outlet even reported this closure. At least local press reported Vulcan and Odd Obsession’s final days. The iconic Eddie Brandt’s mural on the building has already been painted over.
One of the saddest aspects of these closures is that, due to the pandemic, fans were not even able to say goodbye in person to their beloved video stores. Every person had to remember their last visit before the shutdown, and realize that that had been their final visit, without even knowing it. This cruel fate is all too familiar to independent video stores. All three of these stores had moved locations over the years, struggling to deal with rising rent and the loss of business with the advent of streaming. A worldwide pandemic delivered the final blow, and both the loyal patrons and the respective cities lost a piece of culture history and communal space.
Now, other video stores are fighting to avoid this fate. Film Noir cinema, an independent movie theater and video store in Greenpoint, Brooklyn, has started a GoFundMe campaign to stay alive. Film Noir appears to be the only video store that still survives in New York City. To think that NYC, the biggest city in the country and home to so many incredible cinemas and film festivals, could end up having zero video rental stores is shocking and upsetting. And in another blow to Austin, I Luv Video, the only major video store left in the city after Vulcan closed, was also forced to shutter this year, but owner Conrad Bejarano told the Austin American-Statesman he hopes his collection of 120,000+ films will live on in a permanent collection or with a new owner (you can still buy a t-shirt from the store’s website).
On a positive note, other video stores were able to adapt to the situation thanks to curbside pickup. Here in Southern California, Broadway Video in Long Beach started providing curbside rentals immediately, and Videotheque in Pasadena and Cinefile Video in West L.A. followed in May once the city of Los Angeles allowed it. As for the famous Last Blockbuster in Bend, Oregon, it’s open with safety measures in place, and was even featured in one of Jimmy Kimmel’s late night monologues. Let’s hope they fare better than Vulcan Video, the last recipient of a Jimmy Kimmel endorsement. Meanwhile, Family Video, the last remaining video store chain in North America, had to permanently close 200 of their locations to stay afloat, but they haven’t given up yet. They’ve started the #SaveTheVideoStore campaign on social media, and you can support them by buying merchandise on their website. And Vidiots, the iconic Santa Monica video store that was supposed to relaunch in Eagle Rock this year before being delayed by the pandemic, is also selling merch online as they plan for their 2021 opening. With the support of celebrity patrons like Aubrey Plaza & Alex Winter and generations of loyal customers, the reopening of Vidiots next year could be the shining beacon of hope for the future of video stores.
The planned look for the new Vidiots & Eagle Rock Theater.
As this horrible year draws to a close, video stores tentatively remain open, requiring face masks and customer limits, always in danger of a new shutdown. The COVID vaccine has started to roll out, but COVID cases continue to rise across the country, so this pandemic is far from over. The best thing to do is continue supporting your local video stores, now more than ever. Check out that recent blockbuster you missed, or dig into a classic genre or beloved auteur from the past. Streaming has had a big boost this year with everyone inside, but even if you subscribe to every single site, you won’t get the depth of films available at your neighborhood video store. Support them today, so we’ll still have them when this is finally over. And stay safe! | https://medium.com/@conorholt/how-video-stores-are-surviving-during-covid-19-dead440a77d3 | ['Conor Holt'] | 2020-12-22 07:14:44.387000+00:00 | ['Los Angeles', 'Blockbuster Video', 'Vhs', 'Video Stores'] |
🌐 G20 Won’t Regulate 🗞️ News Roundup — Cryptos for the Rest of Us | 🌐 G20 Won’t Regulate
What’s the Story?
The Financial Stability Board (FSB) which coordinates global financial regulation between the Group of 20 (G20) nations stated that cryptocurrencies do not currently pose a systemic risk to the global financial markets.
The FSB’s initial assessment is that crypto-assets do not pose risks to global financial stability at this time
FSB Chair Mark Carney said in a letter to G20 central bankers and finance ministers who will meet in Buenos Aires on Monday and Tuesday.
Why Isn’t it a Risk?
The main takeaway is that the total value of crypto assets, even at its peak, was less than 1% of the global financial system. In short, because they are such a small portion of the global market, any failure would be very well contained.
Why Should I Care?
The FSB is a powerful force when it comes to creating new regulatory frameworks. It has been one of the primary forces coordinating the G20 regulatory response to the global financial crisis of 2008.
Carney, who is also the head of the Bank of England, resisted calls within the G20 to start regulation around cryptocurrencies.
The decision seems in line with his view that the FSB will be more focused on reviewing rules, rather than pushing out new standards.
As its work to fix the fault lines that caused the financial crisis draws to a close, the FSB is increasingly pivoting away from design of new policy initiatives towards dynamic implementation and rigorous evaluation of the effects of the agreed G20 reforms
In particular, this shows that while cryptocurrencies may have been in a bubble earlier this year, they are far from being a mainstream part of the financial markets — for now. In short, the FSB have bigger fish to fry.
The immediate response from the market was a rally which saw Ethereum jump $60 (~10%) after the news broke. Whether this will signal a reversal of fortunes for the crypto markets is yet to be seen. | https://medium.com/cryptosfortherestofus/g20-wont-regulate-%EF%B8%8Fnews-roundup-cryptos-for-the-rest-of-us-30ce30e0a73c | ['Shan Han'] | 2018-03-19 08:02:03.629000+00:00 | ['Hacking', 'Blockchain', 'Regulation', 'Bitcoin', 'Coinbase'] |
Dissecting Dutch Death Statistics with Python, Pandas and Plotly in a Jupyter Notebook | The CBS (the Dutch Centraal Bureau Statistiek) keeps track of many things in The Netherlands. And shares many of its data sets as open data, typically in the form of JSON, CSV or XML files. One of the data sets is published is the one on the number of births and deaths per day. I have taken this data set, ingested and wrangled the data into a Jupyter Notebook and performed some visualization and analysis. This article describes some of my actions and my findings, including attempts to employ Matrix Profile to find repeating patterns or motifs.
TL;DR: Friday is the day of the week with the most deaths; Sunday is a slow day for the grim reaper in our country. December through March is death season, and August and September are the quiet period.
The Jupyter Notebook and data sources discussed in this article can be found in this GitHub Repo: https://github.com/lucasjellema/data-analytic-explorations/tree/master/dutch-birth-and-death-data
Preparation: Ingest and Pre-wrangle the data
Our raw data stems from https://opendata.cbs.nl/statline/#/CBS/nl/dataset/70703ned/table?ts=1566653756419. I have downloaded a JSON file with the deaths per day data. Now I am going to explore this file in my Notebook and wrangle it into a Pandas Data Frame that allows visualization and further analysis.
import json
import pandas as pd

ss = pd.read_json("dutch-births-and-deaths-since-1995.json")
ss.head(5)
Data frame ss now contains the contents of the JSON file. The data is not yet organized all that well: it consists of individual JSON records that each represent one day — or one month or one year.
I will create additional columns in the data frame — with for each record the date it describes and the actual death count on that date:
import datetime

def parse_full_date(row):
    dateString = row["Perioden"]
    if ('MM' in dateString) or ('JJ' in dateString) or ('X' in dateString):
        return None
    else:
        date = datetime.datetime.strptime(dateString, "%Y%m%d")
        return date

def parse_death_count(row):
    deathCount = int(row["MannenEnVrouwen_4"])
    return deathCount

ss["date"] = ss['value'].apply(parse_full_date)
ss["deathCount"] = ss['value'].apply(parse_death_count)
ss.head(14)
Column date is derived by processing each JSON record and parsing the Perioden property that contains the date (or month or year). The date value is a true Python DateTime instead of a string that looks like a date. The deathCount is taken from the property MannenEnVrouwen_4 in the JSON record.
After this step, the data frame has columns date and deathCount that allows us to start the analysis. We do not need the original JSON content any longer, nor do we care for the records that indicate the entire month or year.
# create data frame called data that contains only the data per day data = ss[ss['date'].notnull()][['date','deathCount']]
data.set_index(data["date"],inplace=True)
data.head(4)
Analysis of Daily Death Count
In this Notebook, I make use of Plotly [Express] for creating charts and visualizations:
Let’s look at the evolution of the number of deaths over the years (1995–2017) to see the longer-term trends.
# initialize libraries
import plotly.graph_objs as go
import plotly.express as px
from chart_studio.plotly import plot, iplot
from plotly.subplots import make_subplots
# sample data by year; calculate average daily deathcount
d = data.resample('Y').mean()['deathCount'].to_frame(name='deathCount')
d["date"] = d.index
# average daily death count per year (and/or total number of deaths per year)
fig = px.line(d, x="date", y="deathCount", render_mode='svg',labels={'grade_smooth':'gradient'} , title="Average Daily Death Count per Year")
fig.update_layout(yaxis_range=[350,430])
fig.show()
This results in an interactive, zoomable chart with mouse popup — that shows the average number of daily deaths for each year in the period 1995–2017:
The fluctuation is remarkable — 2002 was a far more deadly year than 2008 — and the trend is ominous with the last year (2017) the most deadly of them all.
The conclusion from the plot above: there is a substantial fluctuation between years and there seems to be an upward trend (probably correlated with the growth of the total population — some 60–70 years prior to the years shown here).
The death count data is a time-series: timestamped records. That means that some powerful operations are at my fingertips, such as resampling the data with different time granularities. In this case, calculate the yearly sum of deaths and plot those numbers in a bar chart. It will not contain really new information, but it presents the data in a different way:
# sample data by year; calculate the total death count per year
d= data.copy().resample('Y').sum()['deathCount'].to_frame(name='deathCount')
d["date"]= d.index
fig = px.bar(d , x="date", y="deathCount" ,title="Total Number of Deaths per Year" , range_y=[125000,155000] , barmode="group" )
fig.show()
And the resulting chart:
The next scatter plot shows the number of deaths per day for a randomly chosen period; the fluctuation from day to day is of course quite substantial. The seasonal fluctuation still shows up.
# arbitrarily show 2013
# ensure axis range from 0-550
fig = px.scatter(data, x="date", y="deathCount", render_mode='svg',labels={'grade_smooth':'gradient'}, title="Death Count per Day")
fig.update_layout(xaxis_range=[datetime.datetime(2013, 1, 1), datetime.datetime(2013, 12, 31)],yaxis_range=[0,550])
fig.show()
The next chart shows the daily average number of deaths calculated for each month:
# create a new data frame with the daily average death count calculated for each month; this shows us how the daily average changes month by month
d= data.copy().resample('M').mean()['deathCount'].to_frame(name='deathCount')
d["date"]= d.index
fig = px.scatter(d, x="date", y="deathCount", render_mode='svg',labels={'grade_smooth':'gradient'} , title="Average Daily Death Count (per month)")
fig.update_layout(xaxis_range=[datetime.datetime(2005, 1, 1), datetime.datetime(2017, 12, 31)],yaxis_range=[0,550])
fig.show()
And the chart:
Day of the Week
One question that I am wondering about: is the number of deaths equally distributed over the days of the week? A quick, high-level exploration makes clear that there is a substantial difference between the days of the week:
cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
# create a new data frame with the death counts grouped by day of the week
# reindex is used to order the week days in a logical order (learned from https://stackoverflow.com/questions/47741400/pandas-dataframe-group-and-sort-by-weekday)
df_weekday = data.copy().groupby(data['date'].dt.weekday_name).mean().reindex(cats)
df_weekday
In over 20 years of data, the difference between Friday and Sunday is almost 30 — or close to 8%. That is a lot — and has to be meaningful.
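That "close to 8%" follows directly from the article's own numbers; here is a quick back-of-the-envelope check. The 375 baseline is my assumption, roughly the overall daily mean implied by the stated gap and percentage (the exact per-day values sit in df_weekday).

```python
gap = 30             # stated difference between Friday and Sunday, in deaths per day
average_daily = 375  # assumed overall daily mean, not taken from the article's table
print(round(gap / average_daily * 100, 1))  # → 8.0
```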
A quick bar chart is easily created:
df_weekday['weekday'] = df_weekday.index
# draw barchart
fig = px.bar(df_weekday , x="weekday", y="deathCount" , range_y=[350,400] , barmode="group" )
fig.update_layout( title=go.layout.Title( text="Bar Chart with Number of Deaths per Weekday" ))
fig.show()
And confirms our finding visually:
To make sure we are looking at a consistent picture — we will normalize the data. What I will do in order to achieve this is to calculate an index value for each date — by dividing the death count on that date by the average death count in the seven day period that this date is in the middle of. Dates with a high death count will have an index value of 1.05 or even higher and ‘slow’ days will have an index under 1, perhaps even under 0.95. Regardless of the seasonal and multi-year trends in death count, this allows us to compare, aggregate and track the performance of each day of the week.
The code for this may look a little bit intimidating at first:
d = data.copy()
d.loc[:,'7dayavg'] = d.loc[:,'deathCount'].rolling(window=7,center=True).mean()
d.loc[:,'relativeWeeklyDayCount'] = d.loc[:,'deathCount']/d.loc[:,'7dayavg']

cats = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
df_weekday = d.copy().groupby(d['date'].dt.weekday_name).mean().reindex(cats)
df_weekday['weekday'] = df_weekday.index
# draw barchart
fig = px.bar(df_weekday , x="weekday", y="relativeWeeklyDayCount" , range_y=[0.95,1.03] , barmode="group" )
fig.update_layout( title=go.layout.Title( text="Bar Chart with Relative Number of Deaths per Weekday" ))
fig.show()
The 2nd and 3rd lines are where the daily death count index is calculated: first the rolling average over 7 days, and subsequently the division of the daily death count by that rolling average.
The resulting bar chart confirms our finding regarding days of the week:
Over the full period of our data set — more than 20 years worth of data, there is close to 0.08 difference between Friday and Sunday.
I want to inspect next how the index value for each week day has evolved through the 20 years. Was Sunday always the day of the week with the smallest number of deaths? Has Friday consistently been the day with the highest number of deaths?
I have calculated the mean index per weekday over periods of one quarter; for each quarter, I have taken the average index value for each day of the week. And these average index values were subsequently plotted in a line chart.
d = data.copy()
# determine the average daily deathcount over a period of 30 days
d.loc[:,'30dayavg'] = d.loc[:,'deathCount'].rolling(window=30,center=True).mean()
# calculate for each day its ratio vs the rolling average for the period it is in - a value close to 1 - between 0.9 and 1.1
d.loc[:,'relative30DayCount'] = d.loc[:,'deathCount']/d.loc[:,'30dayavg']
# assign to each record the name of the day of the week
d.loc[:,'weekday'] = d['date'].dt.weekday_name
# resample per quarter, (grouping by) for each weekday
grouper = d.groupby([pd.Grouper(freq='1Q'), 'weekday'])
# create a new data frame with for each Quarter the average daily death index for each day of the week (again, between 0.9 and 1.1)
d2 = grouper['relative30DayCount'].mean().to_frame(name = 'mean').reset_index()
d2.head(10)
Let’s now show the line chart:
fig = px.line(d2, x="date", y="mean", color="weekday" ,render_mode='svg',labels={'grade_smooth':'gradient'} , title="Average Daily Death Count Index per Quarter")
fig.show()
This chart shows how Sunday has been the day of the week with the lowest death count for almost every quarter in our data set. It would seem that Friday is the day with the highest number of deaths for most quarters. We see some peaks on Thursday.
The second quarter of 2002 (as well as Q3 2009) shows an especially deep trough for Sunday and a substantial peak for Friday. Q1 2013 shows Friday at its worst.
Note: I am not sure yet what strange phenomenon causes the wild peak for all weekdays in Q1 1996. Something is quite off with the data, it would seem.
The PlotLy line chart has built-in functionality for zooming and filtering selected series from the chart, allowing this clear picture of just Friday, Wednesday and Sunday. The gap between Sunday and Friday is quite consistent. There seems to be a small trend upwards for Friday (in other words: the Friday effect becomes more pronounced) and downwards for Sunday.
The Deadly Season — Month of Year
Another question I have: is the number of deaths equally distributed over the months of the year? Spoiler alert: no, it is not. The dead of winter is quite literally that.
The quick inspection: data is grouped by month of the year and the average is calculated for each month (for all days that fall in the same month of the year, regardless of the year)
# create a new data frame with the death counts grouped by month
df_month = data.copy().groupby(data['date'].dt.month).mean()
import calendar
df_month['month'] = df_month.index
df_month['month_name'] = df_month['month'].apply(lambda monthindex:calendar.month_name[monthindex])
# draw barchart
fig = px.bar(df_month , x="month_name", y="deathCount" , range_y=[320,430] , barmode="group" )
fig.update_layout( title=go.layout.Title( text="Bar Chart with Number of Average Daily Death Count per Month" ))
fig.show()
The bar chart reveals how the death count varies through the months of the year. The range is quite wide — from an average of 352 in August to a yearly high in January of 427. The difference between these months is 75 or more than 20%. It will be clear when the undertakers and funeral homes can best plan their vacation.
The powerful resample option can be used to very quickly create a data set with mean death count per month for all months in our data set.
m = data.copy().resample('M').mean()
m['monthstart'] = m.index
# draw barchart
fig = px.bar(m, x="monthstart", y="deathCount"
             #, range_y=[0,4000]
             , barmode="group")
fig.update_layout( title=go.layout.Title( text="Bar Chart with Daily Average Death Count per Month" ))
fig.show()
The resulting chart shows that the average daily death count ranges with the months from a daily average of 510 in chilly January 2017 to only 330 in carefree August of 2009. A huge spread!
Resources
The Jupyter Notebook and data sources discussed in this article can be found in this GitHub Repo: https://github.com/lucasjellema/data-analytic-explorations/tree/master/dutch-birth-and-death-data
Our raw data stems from https://opendata.cbs.nl/statline/#/CBS/nl/dataset/70703ned/table?ts=1566653756419. | https://medium.com/oracledevs/dissecting-dutch-death-statistics-with-python-pandas-and-plotly-in-a-jupyter-notebook-68b089ac75a3 | ['Lucas Jellema'] | 2019-10-13 21:57:30.933000+00:00 | ['Plotly', 'Jupyter Notebook', 'Death Statistics', 'Data Science', 'Open Data'] |
Dear Julie: Agents | Dear Julie: Agents
Do I need to have an agent to publish successfully?
Photo by Curtis Potvin on Unsplash
Dear Julie, So few people get taken on by literary agents these days; is there any point in trying to find one? Michele
Dear Michele,
Every author and every career is so different, it’s hard to say if you, personally, should try to find an agent. You will never sell a book to a mainstream imprint of one of the large mainstream publishing houses without an agent; they just don’t take unsolicited submissions. But not everyone wants to publish with Penguin or Hachette.
Some authors have very successful careers without agents—particularly those who are self-published, or who publish with independent and/or mainly digital publishing houses, or who write for publishers with boilerplate (non-negotiable) contracts. It used to be that writers had slim to no chance of getting a novel published without the help of an agent; that’s not the case any more, as there are many more opportunities for authors who want to go it alone.
But if you ask me—and you have— I think that it’s a really good idea to try to find an agent. I, personally, know that I would not have the career I’ve had without my agent. I would have got published, but there’s more to a writing career than getting published. A writing career is strategic and focused. And that’s where an agent is uniquely placed to help you.
In fact, if I did not have the agent that I’ve got, I might not have a career at all. My agent is fierce and fearless on my behalf. She never pulls punches, and she always goes into battle to get what is best for me. She has literally saved my career, and once or twice she has saved my sanity. A publisher works for themselves; an agent works for you.
A good agent does more than just get you a publishing deal, or negotiate contracts. She’ll discuss your work with you, give you advice, introduce you to the right people, talk you through the steps you need to take to reach your goals.
Here are some of the other things a good agent will do:
—An agent will give you business and market advice.
—An agent will work with you to devise an overall strategy for your career, beyond that first book.
—An agent will serve as your first reader, and often help you edit your work.
—An agent will often get you a better deal with publishers, and will help sell your foreign, film and TV rights.
—An agent will, if necessary, act as the ‘bad guy’ with your publisher so that you can be the happy, sunny, easy-to-work-with author that your editor will love.
—An agent will negotiate contracts, and help you when something goes wrong with your publisher.
—An agent will tell you when you’re doing something stupid.
—An agent will be the biggest cheerleader and advocate for your work.
—An agent may be able to help you self-publish more efficiently.
—Your editor works for a publisher, but your agent works for you. A good agent will always put your best interests first. (And if they don’t—they’re not a good agent.)
A lot of writers think that agents are these scary, unapproachable people. But in fact, nearly all of the agents I have met are very nice. What you have to remember is that agents need writers. They love books, just like we do. Selling good books is how they make their living. And they are always, ALWAYS, looking for a good book.
That good book might be yours.
So…is it worthwhile looking for an agent, even though it might be hard to get one to take you on? I think so. Others might disagree. Maybe right now isn’t the right time for you; maybe this book you’re submitting isn’t yet the right book. Ask yourself: What’s your dream career? What are you working towards? Do you want an agent? Do you believe an agent could help you? And if the answer is yes, then keep on trying, and keep on writing.
Love
Julie x | https://medium.com/novel-gazing/dear-julie-agents-65686d692198 | ['Julie Cohen'] | 2020-04-20 16:31:57.037000+00:00 | ['Publishing', 'Writing', 'Writing Advice', 'Agents', 'Writing Tips'] |
I watched the movie Girl, Interrupted today. | I watched the movie Girl, Interrupted today. The last time I saw it was in 2000. I thought about how much my life has changed in 20 years.
I also thought about how many changes the actors went through and I wondered if they could’ve imagined the lives ahead of them
Theatrical release poster
Angelina Jolie was an unknown actress in 1999. Her future (ex)husband, Brad Pitt, was dating Jennifer Aniston. But she had to marry/divorce Billy Bob Thornton first.
Winona Ryder, who produced the film, was a massive star, the “it” girl of my generation. But too few know of her talent today.
Elisabeth Moss was a minor actor but is now a giant, award-winning star.
Brittany Murphy died ten years later — almost to the day of the release — in odd circumstances.
You never know how life will turn out for you or the people around you. | https://medium.com/everything-shortform/i-watched-the-movie-girl-interrupted-today-9491ab52e31 | ['Kevin Ervin Kelley'] | 2020-12-18 15:51:20.678000+00:00 | ['Life', 'Hollywood', 'Self', 'Personal Growth', 'Movies'] |
Health Questions Part 2: For What Purpose | Day 2!
Welcome back! I hope you've learned something about yourself by honestly answering the questions from Part 1.
If you haven’t done so yet, read and answer the three questions before answering these questions. Otherwise, it won’t make much sense and it won’t be as effective.
Now that you’ve established what it is that you want, it’s time to look at your big WHY. Why do you want that six-pack? Place your hand on your heart and find your honest reason.
What’s this all about?
A gentle reminder.
This is a series of 4 health discussions taking you on a journey like no other. You’ll discover more about yourself and grow your clarity around your own personal health. All in the comfort of your home.
Before we go ahead and ask the questions below I want you to record your answers. I’m assuming most of you would be happy to write down your thoughts and answers as we’re all writers here on Medium. However, a simple web-cam or phone recording will do, too. The reason I suggest video recording as well is that it’ll force you to speak your mind and answer these questions as if you were having a conversation with someone. Perhaps a coach.
So open your Google Docs or Web-cam and answer these questions:
Why do you want this over anything else?
For what purpose do you want to have/achieve this?
What is having/achieving this going to give you?
Don’t overthink these or filter your answers. And for the love of health don’t Google these either. It’s your interpretation and your journey.
Don’t forget to have fun! | https://medium.com/in-fitness-and-in-health/health-questions-part-2-for-what-purpose-e91efbe8775e | ['Michaela Grek'] | 2020-12-03 15:14:58.781000+00:00 | ['Self Improvement', 'Life', 'Health', 'Goals', 'Coaching'] |
What I Wish People Knew About Reporting Suicidal Friends on Facebook | What I Wish People Knew About Reporting Suicidal Friends on Facebook
With no one to turn to, I turned to Facebook — and ended up with a cop on my doorstep
Photo: Jack Halford/EyeEm/Getty Images
In the winter of 2013, I found myself spending a month on a leaky air mattress. I was staying at the home of my ex-fiancé’s Facebook friend, in Iowa. She’d generously welcomed me after my ex kicked me out of our shared Tennessee apartment.
I was three months pregnant and battling suicidal ideation every day. When my fiancé told me to go back to Minnesota and began spending all of his time trolling online for dates, my prenatal depression kicked into high gear. I was pregnant, recently dumped, filled with guilt, and terrified of being a bad mother. I was afraid my depression would prevent me from bonding with my child, and I was in desperate need of help. No matter how much people told me to move on, I couldn’t understand how to actually do it.
In those days, I still had a Facebook account, which constantly reminded me of the breakup. Everything online did, but Facebook was particularly good at it. Plus my Facebook posts were pretty damn depressing. I like to think I was careful about what I posted. I knew I shouldn’t tell people how much I wanted to die. I knew I shouldn’t share how often I went for walks in the middle of the night with a knife in my pocket.
One day I posted a status I don’t remember posting: “Today I’m thinking a lot about taking a walk and disappearing for good.”
I was alone when there was a loud knock at the door. Startled, I opened the door to see a police officer.
“Are you Shannon Ashley?” he asked.
“Yes,” I answered, the blood draining from my face. I didn’t understand what he wanted.
“One of your friends was concerned about some things you posted on Facebook,” he said. “Can I come in and talk?”
The officer sat down and asked me some questions about what was going on and how I was feeling. As I realized what was happening, I felt my face burn. Someone had reported my post to Facebook, which advised them to contact my local authorities.
I knew if I answered too honestly, I would have to go to the hospital. For a lot of folks battling suicidal ideation, going to the hospital is an unknown that seems even scarier than our darkest thoughts. We will do everything we can to avoid it.
So I was careful to tell the police officer just enough to get him to leave me alone. I’m not sure why we don’t talk more about this flaw in the system: So many of our protocols surrounding depression or suicide checks assume the person who needs help will tell the truth.
But a lot of us won’t.
The officer didn’t stay long, and my main concern was making sure he left before anyone else returned home. It was bad enough to feel so miserable; the last thing I wanted was to explain myself to somebody else.
I still don’t know who reported my post and called the police. I do know Facebook didn’t deem the post “against community guidelines” because it’s still visible six years later on my now-unused account.
“If someone you know is in danger, please contact local emergency services for help immediately,” Facebook advises on its help page. “After you’ve called emergency services, connect with your friend or call someone who can. Showing that you care matters. Make sure they know that you’re there for them, and that they aren’t alone.”
I’m glad to see Facebook recommends that the concerned user reach out to their friend, but I have mixed feelings about the entire policy. In my case, the person who reported my post and called the cops never revealed themselves or reached out to offer personal support.
People don’t know how to react to a pregnant woman who isn’t glowing with joy or delightfully sharing photos of baby showers and nursery makeovers. And they definitely don’t know how to deal with one in the deep throes of prenatal depression. Some friends did send baby gifts, but they were hard to look at — more reminders of what I didn’t have, and the massive responsibility that was about to come screaming into my unprepared arms. | https://humanparts.medium.com/i-made-a-facebook-post-that-had-the-police-knocking-on-my-door-b5e3d11baf20 | ['Shannon Ashley'] | 2020-01-15 17:30:33.282000+00:00 | ['Life', 'Facebook', 'Depression', 'Social Media', 'Mental Health'] |
The Tools You Need to Help You Write Your Story | HOW TO WRITE
The Tools You Need to Help You Write Your Story
Photo by Markus Winkler on Unsplash
There’s a delicate balance between what a story is about and how it is told. If it’s a boring story, nothing will save it, including perfect grammar, syntax, and spelling. If it’s exciting but poorly composed, no one will read it.
Between these two extremes is probably where your storytelling ability lies. There aren’t many tools that will make you less boring, but many will help you with composition.
Of course, you don’t need tools, but they save time, which lets you either produce more or lead a happier life. Think of Noah and what he could have done with power tools. Would he have forgotten the unicorns if he’d had a cordless drill? Probably not.
I’ve been using Grammarly Pro for a few years and recently added ProWritingAid as well, and between the two, they pick up most of my minor errors. I also use Hemingway Editor from time to time, but my rough drafts (humble brag) do well enough by Ernest that I use him less and less. I sometimes use the text reading feature on Hemingway if I’m looking for a home run, but that’s only occasionally.
CoSchedule helps me with my headlines by scoring them. I read that anything over 73 is acceptable, and so that’s what I shoot for. It’s harder than you think. I then use TitleCaseConverter to render them into proper title case.
You might never stop being boring, but your copy should always be clean. With so many free tools out there, you don’t have an excuse for bad copy.
I won’t provide links because the attached article put together by Maryam Merchant has them all and more, and if anyone deserves the read here, it’s her, not me. | https://medium.com/the-story-smith/the-tools-you-need-to-help-you-write-your-story-6418e7eb26b7 | ['Teresa J Conway'] | 2021-01-03 14:46:50.741000+00:00 | ['Writing', 'Editing', 'Writing Tools', 'Writing Tips', 'Proofreading'] |
Easy Visualization Techniques to Help You Find Your True Purpose in Life | When it comes to finding your true life’s purpose, the key to achieving what you want may lie in your ability to utilize visualization to achieve it. Knowing how to visualize the outcome that you desire will help you become a stronger, happier, and more effective person and help you find your true life’s purpose. Here are three easy visualization techniques to help you reach your goals.
Treasure Map Technique
This visualization technique uses both a physical component as well as the obvious mental one. The first thing that you need to do is to think of something that you would like to achieve. Begin by drawing a physical representation of all the factors that are involved. If you want to earn the top score on an exam, draw a picture of yourself, a building to represent the school and a book to represent study. You want to try and make the drawing as detailed as possible. The drawing itself isn’t the critical factor, but rather it is what you are picturing as you draw.
Receptive Visualization
This technique is like watching a movie in your mind, except you get to control the scenes. While this is a more passive approach than the previous one, it can still be just as effective. Lie down, close your eyes, and try to picture the scene you want to visualize vividly. When you get a clear image of the scene in your head, add people and the noises to the movie. Slowly build the image until you have the entire picture of the scene in your head and can feel yourself being involved in the action.
Altered Memory Visualization
This technique focuses on changing memories from your past to have a more favorable outcome. This is especially helpful for resolving memories that involved resentment and anger. Replay the scene in your mind, but replace the angry responses with more controlled and calm ones. After a while, your brain will only remember the scene playing out as you re-created it, and the uncomfortable memories of the event will begin to fade away.
Using any of these visualization techniques will help you achieve your goals and find your true life’s purpose. Along with these techniques, you will need to put in the work to actually reach your goals, but after some time and hard work, you’ll achieve your goals and discover what you’re really meant to do.
Do you need help uncovering your vision? Developing your action plan? Need motivation to achieve your goals? Contact me and let’s get started! | https://medium.com/@barbaraascott/easy-visualization-techniques-to-help-you-find-your-true-purpose-in-life-46d23540065f | ['Barbara A. Scott'] | 2020-12-27 17:57:37.088000+00:00 | ['Life Purpose', 'Goals', 'Visualization'] |
The Less You Have is Actually Enough | Photo by Josh Appel on Unsplash
I remember being stumped during my first year in college — I was majoring in Multimedia. I got a crappy laptop while one of my friends had a gaming one, fetching a huge price tag at that and boasting far bigger capabilities than mine. You guessed right. I got jealous. I always thought to myself, “If I had that laptop, I could probably get more work done than he does.” But is that really the case?
Nope. That’s the short answer, actually. The long answer starts with the logical fallacy that somehow, if we had better equipment, we could get more work done. But even then, are you sure it would not make us lazier? Realistically speaking, if you are not able to have whatever it is that you want, then by just daydreaming and not getting any work done, you are NOT an inch closer to realizing it.
So instead of being grouchy all the time, we need to acknowledge it: we cannot have all the things in the world, and trust me — some do not really spark joy at all, Marie Kondo style. Have you ever saved money just to buy something that ended up being ‘meh’? That could happen again, maybe sooner than you think. Maybe it will be the very next item on your wish list. Let’s start by looking at what we already have; the less that we have is actually enough.
This is not, by any means, meant to discourage you from getting what you need; it is just a call for clear, rational thinking before you splurge all those dolla-dolla bills on impressing your friends or trying to keep up with the social elites. As for that expensive laptop my friend has, maybe he had saved a year for it or something. Maybe he really needs it that much; maybe his workload is actually that much heavier compared to mine. The more we think about the positive side and let rational thinking take over, the more we will cherish others for what they have rather than running a rat race of possessions.
So, always be wise in your spending and never be jealous of such things again,
Because the less you have might actually be better. | https://medium.com/@alexxmulya31/the-less-you-have-is-actually-enough-6e66c89d5803 | ['Alessandro Mulya'] | 2020-11-05 07:42:26.937000+00:00 | ['Saving Money', 'Money Mindset', 'Money Management', 'Saving', 'Economy'] |
Machine Learning: Lessons Learned from the Enterprise | Photo by IBM
There’s a huge difference between the purely academic exercise of training Machine Learning (ML) models versus building end-to-end Data Science solutions to real enterprise problems. This article summarizes the lessons learned after two years of our team engaging with dozens of enterprise clients from different industries including manufacturing, financial services, retail, entertainment, and healthcare, among others. What are the most common ML problems faced by the enterprise? What is beyond training an ML model? How to address data preparation? How to scale to large datasets? Why is feature engineering so crucial? How to go from a model to a fully capable system in production? Do I need a Data Science platform if every single data science tool is available in the open source? These are some of the questions that will be addressed, exposing some challenges, pitfalls, and best practices through specific industry examples.
0. ML is not only training models
I’ve realized this is a pervasive misconception. When I interview aspiring data scientists, I usually ask:
“Say you’re given a dataset with certain characteristics with the goal of predicting a certain variable, what would you do?”
To my dismay, their answer is often something along these lines:
“I’ll split the dataset into training/testing, run Logistic Regression, Random Forest, SVM, Deep Learning, XGBoost,… (and a few more unheard-of algorithms), then compute precision, recall, F1-score,… (and a few more unheard-of metrics), to finally select the best model”.
But then, I ask them:
“Have you even taken a look at the data? What if you have missing values? What if you have wrong values/bad data? How do you map your categorical variables? How do you do feature engineering?”
In this article, I go over the seven steps required to be successful creating an end-to-end machine learning system, including data collection, data curation, data exploration, feature extraction, model training, evaluation, and deployment.
1. Gimme some data!
As data scientists, data are evidently our main resource. But sometimes, even getting the data can be challenging and it could take weeks or even months for the data science team to obtain the right data assets. Some of the challenges are:
Access: most enterprise data are very sensitive, especially when dealing with government, healthcare, and financial industries. Non-disclosure agreements (NDAs) are standard procedure when it comes to sharing data assets.
Data dispersion: it’s not uncommon to see cases where data are scattered across different units within the organization, requiring approvals from not one but different parties.
Expertise: having access to the data is often not sufficient as there may be so many sources that only a subject matter expert (SME) would know how to navigate the data lake and provide the data science team with the right data assets. SMEs may also become a bottleneck for a data science project as they’re usually swamped with core enterprise operations.
Privacy: obfuscation and anonymization have become research areas on their own and are imperative when dealing with sensitive data.
Labels: having the ground truths or labels available is usually helpful as it allows to apply a wide range of supervised learning algorithms. Yet, in some cases, labeling the data may be too expensive or labels might be unavailable due to legal restrictions. Unsupervised methods such as clustering are useful in these situations.
Data generators: an alternative when data or labels are not available is to simulate them. When implementing data generators, it is useful to have some information on the data schema, the probability distributions for numeric variables, and the category distributions for nominal ones. If the data are unstructured, Tumblr is a great source for labeled images while Twitter may be a great source for free text. Kaggle also offers a variety of datasets and solutions on a number of domains and industries.
2. Big data is often not so big
This is a controversial one, especially after all the hype made by big data vendors in the past decade, emphasizing the need for scalability and performance. Nonetheless, we need to make a distinction between raw data (i.e., all the pieces that may or may not be relevant for the problem at hand) and a feature set (i.e., the input matrix to the ML algorithms). The process of going from the raw data to a feature set is called data preparation and it usually involves:
1. Discarding invalid/incomplete/dirty data which, in our experience, could be up to half of the records.
2. Aggregating one or more datasets, including operations such as joins and group aggregators.
3. Feature selection/extraction, e.g., removing features that may be irrelevant such as unique ID’s and applying other dimensionality reduction techniques such as Principal Component Analysis (PCA).
4. Using sparse data representation or feature hashers to reduce the memory footprint of datasets with many zero values.
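As a minimal sketch of the first two steps, here is what discarding dirty records, aggregating, and joining might look like with pandas. The dataset and column names are made up purely for illustration:

```python
import pandas as pd

# Hypothetical raw transaction records; None and negative amounts are "dirty".
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "amount": [120.0, None, 35.5, 80.0, -10.0],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "state": ["FL", "CA", "AZ"],
})

# 1. Discard invalid/incomplete records.
clean = transactions.dropna(subset=["amount"])
clean = clean[clean["amount"] > 0]

# 2. Aggregate per customer, then join with a second dataset.
per_customer = clean.groupby("customer_id", as_index=False)["amount"].sum()
features = per_customer.merge(customers, on="customer_id", how="left")
print(features)
```

Note how customer 3 drops out entirely once its only (invalid) record is discarded — exactly the kind of shrinkage that makes the final feature set much smaller than the raw data.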
After all the data preparation steps have been completed, it’s not hard to realize that the final feature set — which will be the input of the Machine Learning model — will be much smaller; and, it is not uncommon to see cases where in-memory frameworks such as R or scikit-learn are sufficient to train models. In cases where even the feature set is huge, big data tools such as Apache Spark come in handy, yet they may have a limited set of algorithms available.
3. You dirty data!
Data are often dirty
Yes, I better tell you something you didn’t know already, but I can’t emphasize this enough. Data are dirty. In most of our engagements, clients are proud and excited to talk about their data lakes, how beautiful their data lakes are, and how many insights they can’t wait to get out of them. So, as data scientists, this becomes our mental picture:
Nonetheless, when they actually share their data, it actually looks more like this:
This is where scalable frameworks such as Apache Spark are crucial as all of the data curation transformations will need to be performed on the entire raw data. A few typical curation tasks are:
Outlier detection: a negative age, a floating point zipcode, or a credit score of zero are just a few examples of invalid data. Not correcting these values may introduce high bias when training the model.
a negative age, a floating point zipcode, or a credit score of zero are just a few examples of invalid data. Not correcting this values may introduce high bias when training the model. Missing/incorrect value imputation: the obvious way to address incorrect/missing values is to simply discard them. An alternative is imputation, i.e., replacing missing/incorrect values by the mean, median, or the mode of the corresponding attribute. Another option is interpolation, i.e., building a model to predict the attribute with missing values. Finally, domain knowledge may also be used for imputation. Say we’re dealing with patient data and there’s an attribute indicating whether a patient has had cancer. If such information is missing, one could look into the appointments dataset and find out whether the patient has had any appointments with an oncologist.
the obvious way to address incorrect/missing values is to simply discard them. An alternative is imputation, i.e., replacing missing/incorrect values by the mean, median, or the mode of the corresponding attribute. Another option is interpolation, i.e., building a model to predict the attribute with missing values. Finally, domain knowledge may also be used for imputation. Say we’re dealing with patient data and there’s an attribute indicating whether a patient has had cancer. If such information is missing, one could look into the appointments dataset and find out whether the patient has had any appointments with an oncologist. Dummy-coding and feature hashing: these are useful to turn categorical data into numeric, especially for coefficient-based algorithms. Say there’s an attribute state which indicates states of the USA (e.g., FL, CA, AZ). Mapping FL to 1, CA to 2 ,and AZ to 3 introduces a sense order and magnitude, meaning AZ would be greater than FL and CA would be twice as big as FL. One-hot encoding — also called dummy-coding — addresses this issue by mapping a categorical column into multiple binary columns, one for each category value.
these are useful to turn categorical data into numeric, especially for coefficient-based algorithms. Say there’s an attribute state which indicates states of the USA (e.g., FL, CA, AZ). Mapping FL to 1, CA to 2 ,and AZ to 3 introduces a sense order and magnitude, meaning AZ would be greater than FL and CA would be twice as big as FL. One-hot encoding — also called dummy-coding — addresses this issue by mapping a categorical column into multiple binary columns, one for each category value. Scaling: coefficient-based algorithms experience bias when features are in different scales. Say age is given in years within [0, 100], whereas salary is given in dollars within [0, 100,000]. The optimization algorithm may assign more weight to salary, just because it has a higher absolute magnitude. Consequently, normalization is usually advisable and common methods include z-scoring or standardization (when the data are normal) and min-max feature scaling.
coefficient-based algorithms experience bias when features are in different scales. Say age is given in years within [0, 100], whereas salary is given in dollars within [0, 100,000]. The optimization algorithm may assign more weight to salary, just because it has a higher absolute magnitude. Consequently, normalization is usually advisable and common methods include z-scoring or standardization (when the data are normal) and min-max feature scaling. Binning: mapping a real-valued column into different categories can be useful, for example, to turn a regression problem into a classification one. Say you’re interested in predicting arrival delay of flights in minutes. An alternative would be to predict whether the flight is going to be early, on time, or late, defining ranges for each category.
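The four curation tasks above — imputation, dummy-coding, scaling, and binning — can be sketched in a few lines of pandas. The toy values below are assumptions chosen only to make each step visible:

```python
import numpy as np
import pandas as pd

# Toy dataset; the values are made up for illustration.
df = pd.DataFrame({
    "age": [25, 40, None, 33],
    "state": ["FL", "CA", "AZ", "FL"],
    "delay_minutes": [-5, 0, 12, 47],
})

# Imputation: replace the missing age with the median.
df["age"] = df["age"].fillna(df["age"].median())

# Dummy-coding: one binary column per category value.
df = pd.get_dummies(df, columns=["state"])

# Scaling: min-max scale age into [0, 1].
age = df["age"]
df["age_scaled"] = (age - age.min()) / (age.max() - age.min())

# Binning: turn a numeric delay into early/on time/late categories.
df["delay_class"] = pd.cut(df["delay_minutes"],
                           bins=[-np.inf, -1, 14, np.inf],
                           labels=["early", "on time", "late"])
print(df[["age_scaled", "delay_class"]])
```

In a real engagement the same transformations would typically run on the full raw data with a scalable framework such as Apache Spark, but the logic is the same.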
4. It’s all about Feature Engineering
In a nutshell, features are the characteristics from which the ML algorithm will learn. As it is expected, noisy or irrelevant features can affect the quality of the model so it is critical to have good features. A few strategies for feature engineering are:
Define what you want to predict. What would each instance represent? A customer? A transaction? A patient? A ticket? Make sure each row of the feature set corresponds to one instance.
Avoid unique ID’s. Not only are they irrelevant in most cases, but they can lead to serious overfitting, especially when applying algorithms such as XGBoost.
Use domain knowledge to derive new features which help measure success/failure. The number of hospital visits may be an indicator of patient risk; the total amount of foreign transactions in the past month may be an indicator of fraud; the ratio of the requested loan amount to the annual income may be an indicator of credit risk.
Use Natural Language Processing techniques to derive features from unstructured free text. Some examples are LDA, TF-IDF, word2vec, and doc2vec.
Use dimensionality reduction if there are a very large number of features, e.g., PCA and t-distributed Stochastic Neighbor Embedding (t-SNE).
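To illustrate the NLP point, here is a minimal sketch of deriving numeric features from free text with TF-IDF in scikit-learn. The three "clinical notes" are invented for the example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical free-text notes; each document becomes one row of features.
notes = [
    "patient reports chest pain and shortness of breath",
    "routine follow up, no complaints",
    "chest pain resolved after treatment",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(notes)  # sparse matrix: one row per document

print(X.shape)  # (3 documents, vocabulary-size columns)
```

The resulting sparse matrix can be fed directly into most classifiers, or reduced further with PCA/t-SNE as mentioned above.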
5. Anomaly detection is everywhere
Photo by feature.fm.
If I were to pick one single most common ML use case in the enterprise, that would be anomaly detection. Whether we’re referring to fraud detection, manufacturing testing, customer churn, patient risk, customer delinquency, system crash prediction, etc., the question is always: can we find the needle in the haystack? This leads to our next topic, which relates to unbalanced datasets.
A few common algorithms for anomaly detection are:
1. Auto-encoders
2. One-class classification algorithms such as one-class SVM
3. Confidence intervals
4. Clustering
5. Classification using over-sampling and under-sampling
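As one possible sketch (using Isolation Forest, a popular tree-based alternative to the algorithms listed above), here is how a couple of planted anomalies can be flagged on synthetic data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # the haystack
outliers = np.array([[8.0, 8.0], [-9.0, 7.5]])           # the needles
X = np.vstack([normal, outliers])

# contamination is the expected fraction of anomalies — a hyper-parameter
# you would estimate from domain knowledge in a real engagement.
model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)  # 1 = inlier, -1 = anomaly

print((labels == -1).sum(), "points flagged as anomalies")
```

On real enterprise data the hard part is rarely the algorithm call; it is defining what "anomalous" means and validating the flags with subject matter experts.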
6. Data are often unbalanced
Unbalanced data
Say you have a dataset with labeled credit card transactions. 0.1% of those transactions turn out to be fraudulent whereas 99.9% of them are good normal transactions. If we create a model that says that there’s never fraud, guess what? The model will give a correct answer in 99.9% of the cases so its accuracy will be 99.9%! This common accuracy fallacy can be avoided by considering different metrics such as precision and recall. These are defined in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):
TP = total number of instances correctly predicted as positive
TN = total number of instances correctly predicted as negative
FP = total number of instances incorrectly predicted as positive
FN = total number of instances incorrectly predicted as negative
In a typical anomaly detection scenario, we’re after minimizing false negatives — e.g., ignoring a fraudulent transaction, not recognizing a defective chip, or diagnosing a sick patient as healthy — while not incurring a great amount of false positives.
Precision = TP/(TP+FP)
Recall = TP/(TP+FN)
Note precision penalizes FP while recall penalizes FN. A model that never predicts fraud will have zero recall and undefined precision. Conversely, a model that always predicts fraud will have 100% recall but a very low precision — due to a high number of false positives.
I strongly discourage the use of receiver operating characteristic (ROC) curves in anomaly detection. This is because the false positive rate (FPR) — which ROC curves rely on — is heavily biased by the number of negative instances in the dataset (i.e., FP+TN), leading to a potentially small FPR even when there’s a huge number of FP.
FPR = FP/(FP + TN)
Instead, the false discovery rate (FDR) is useful to have a better understanding of the impact of FP’s in an anomaly detection model:
FDR = 1 - Precision = FP/(TP+FP)
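These definitions are straightforward to compute directly. Continuing the credit card example, suppose a model catches 90 of 100 frauds while raising 30 false alarms among 99,900 good transactions:

```python
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def false_discovery_rate(tp, fp):
    return fp / (tp + fp)

tp, fn, fp, tn = 90, 10, 30, 99_870

print(f"precision = {precision(tp, fp):.3f}")             # 0.750
print(f"recall    = {recall(tp, fn):.3f}")                # 0.900
print(f"FDR       = {false_discovery_rate(tp, fp):.3f}")  # 0.250

# Accuracy alone looks stellar and completely hides the 10 missed frauds:
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy  = {accuracy:.4f}")                      # 0.9996
```

This is exactly the accuracy fallacy in action: a 99.96% accurate model that still lets one in ten frauds through.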
7. Don’t predict. Just tell me why!
I have come across several projects where the goal is not to create a model to make predictions in real time but rather to explain a hypothesis or analyze which factors explain a certain behavior — this is to be taken with a grain of salt, given that most machine learning algorithms are based upon correlation, not causation. Some examples are:
Which factors make a patient fall into high risk?
Which drug has the highest impact on blood test results?
Which insurance plan parameter values maximize profit?
Which characteristics of a customer make him more prone to delinquency?
What’s the profile of a churner?
One way to approach these questions is by calculating feature importance, which is given by algorithms such as Random Forests, Decision Trees, and XGBoost. Furthermore, algorithms such as LIME or SHAP are helpful to explain models and predictions, even if they come from Neural Networks or other “black-box” models.
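A minimal sketch of the feature-importance approach, on synthetic data where we know by construction that only the first three of ten features carry signal:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# shuffle=False keeps the 3 informative features in columns 0-2.
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importances sum to 1; higher means the feature drove more splits.
ranking = sorted(enumerate(model.feature_importances_),
                 key=lambda kv: kv[1], reverse=True)
for idx, score in ranking[:3]:
    print(f"feature_{idx}: {score:.3f}")
```

With real data you would replace the column indices with meaningful names (e.g., number of hospital visits, loan-to-income ratio) and, for black-box models, reach for LIME or SHAP instead.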
8. Tune your hyper-parameters
Machine Learning algorithms have both parameters and hyper-parameters. They differ in that the former are directly estimated by the algorithm — e.g., the coefficients of a regression or the weights of the neural network — whereas the latter are not and need to be set by the user — e.g., the number of trees in a random forest, the regularization method in a neural network, or the kernel function of a support vector machine (SVM) classifier.
Setting the right hyper-parameter values for your ML model can make a huge difference. For instance, a linear kernel for a SVM won’t be able to classify data that are not linearly separable. A tree-based classifier may overfit if the maximum depth or the number of splits are set too high, or it may underfit if the maximum number of features is set too low.
Finding the optimal values for hyper-parameters is a very complex optimization problem. A few tips are:
1. Understand the priorities for hyper-parameters. In a random forest, the number of trees and the max depth may be the most relevant ones, whereas for deep learning, the learning rate and the number of layers might be prioritized.
2. Use a search strategy: grid search or random search. The latter is preferred.
3. Use cross validation: set a separate testing set, split the remaining data into k folds and iterate k times using each fold for validation (i.e., to tune hyper-parameters) and the remaining for training. Finally, compute average quality metrics over all folds.
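The three tips above combine naturally in scikit-learn's RandomizedSearchCV, which samples random hyper-parameter combinations and scores each with cross validation. The parameter grid below is an illustrative assumption, not a recommendation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, random_state=0)

# Prioritized hyper-parameters for a random forest (tip 1).
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, 10, None],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=5,        # random search: sample 5 combinations (tip 2)
    cv=3,            # 3-fold cross validation (tip 3)
    random_state=0,
)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))
```

In practice you would still hold out a separate test set and evaluate the tuned model on it once, to get an unbiased quality estimate.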
9. Deep Learning: a panacea?
During the past few years, deep learning has been an immense focus of research and industry development. Frameworks such as TensorFlow, Keras, and Caffe now enable rapid implementation of complex neural networks through a high level API. Applications are countless, including computer vision, chatbots, self-driving cars, machine translation, and even games — beating both the top Go human player and the top chess computer in the world!
One of the main premises behind deep learning is its ability to continue learning as the amount of data increases, which is especially useful in the era of big data (see figure below). This, combined with recent developments in hardware (i.e., GPUs) allows the execution of large deep learning jobs which used to be prohibitive due to resource limitations.
Picture by machinelearningmastery.com
So… does this mean that DL is always the way to go for any Machine Learning problem? Not really. Here’s why:
Simplicity
The results of a neural network model are very dependent on the architecture and the hyper-parameters of the network. In most cases, you’ll need some expertise on network architectures to correctly tune the model. There’s also a significant trial-and-error component in this regard.
Interpretability
As we saw earlier, a number of use-cases require not only predicting but explaining the reason behind a prediction: why was a loan denied? Or why was an insurance policy price increased? While tree-based and coefficient-based algorithms directly allow for explainability, this is not the case with neural networks. In this article some techniques are presented to interpret deep learning models.
Quality
In our experience, for most structured datasets, the quality of neural network models is not necessarily better than that of Random Forests and XGBoost. Where DL excels is actually when there’s unstructured data involved, i.e., images, text, or audio. The bottom line: don’t use a shotgun to kill a fly. ML algorithms such as Random Forest and XGBoost are sufficient for most structured supervised problems, being also simpler to tune, run, and explain. Let DL speak for itself in unstructured data problems or for reinforcement learning.
10. Don’t let the data leak
While working on a project to predict arrival delay of flights, I realized my model suddenly reached 99% accuracy when I used all the features available in the dataset. I sadly realized I was using the departure delay as a predictor for the arrival delay. This is a typical example of data leakage, which occurs when any of the features used to create the model will be unavailable or unknown at prediction time. Watch out, folks!
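A simple defensive habit against this kind of leak is to build the feature matrix from an explicit whitelist of columns known at prediction time. The column names below are assumptions based on the flight example:

```python
import pandas as pd

# Hypothetical flight records; arrival_delay is the target.
flights = pd.DataFrame({
    "distance": [500, 1200, 300],
    "scheduled_hour": [8, 14, 21],
    "departure_delay": [10, 45, 0],   # unknown before the flight departs!
    "arrival_delay": [12, 50, -3],
})

# Whitelist only what we would actually know when making the prediction.
known_at_prediction_time = ["distance", "scheduled_hour"]
X = flights[known_at_prediction_time]
y = flights["arrival_delay"]

print(X.columns.tolist())
```

The suspiciously perfect 99% accuracy disappears once the leaky column is excluded — which is precisely the point.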
Photo by the Kini group.
11. Open source gives me everything. Why do I need a platform?
It has never been easier to build a machine learning model. A few lines of R or Python code will suffice for such an endeavor and there’s plenty of resources and tutorials online to even train a complex neural network. Now, for data preparation, Apache Spark can be really useful, even scaling to large datasets. Finally, tools like docker and plumber ease the deployment of machine learning models through HTTP requests. So it looks like one could build an end-to-end ML system purely using the open source stack. Right?
Well, this may be true for building proofs of concept. A graduate student working on his dissertation would certainly be covered under the umbrella of the open source. For the enterprise, nevertheless, the story is a bit different.
Don’t get me wrong. I’m a big fan of open source myself and there are many fantastic tools, but at the same time, there are also quite a few gaps. These are some of the reasons why enterprises choose Data Science platforms:
a. Open source integration: up and running in minutes, support for multiple environments, and transparent version updates.
b. Collaboration: easily sharing datasets, data connections, code, models, environments, and deployments.
c. Governance and security: not only over data but over all analytics assets.
d. Model management, deployment, and retraining.
e. Assisted Data Curation: visual tools to address the most painful task in data science.
f. Model bias: detect and correct a model that’s biased by gender or age.
g. GPUs: immediate provisioning and configuration for an optimal performance of deep learning frameworks, e.g., TensorFlow.
h. Codeless modeling: for statisticians, subject matter experts, and even executives who don’t code but want to build models visually.
In a quest for a Data Science Platform? Consider trying Watson Studio for free! | https://towardsdatascience.com/machine-learning-lessons-learned-from-the-enterprise-d8588f3d3060 | ['Oscar D. Lara Yejas'] | 2019-06-12 22:26:32.054000+00:00 | ['Deep Learning', 'Machine Learning', 'Data Science', 'AI', 'Enterprise'] |