Deploying with Libraries on Remix-IDE

Libraries are very similar to contracts, but they do not store state. Using one is a gas-saving strategy that comes in handy when you have a contract that will be deployed many times but some of the code only needs to be deployed once. By splitting those functions off from the contract into a library, you deploy less code in the contract, so the gas costs are lower.
Once a library is deployed, anyone can use it as long as they have the library's source code. So, if you can use some functions from someone else's audited library (like a SafeMath library), you'll be writing safer contracts and writing them more efficiently.
How do you connect the contract to the library?
With Remix IDE and the other dev tools, the library's address does not go in the contract's source code. The address is injected (or linked) into the contract's bytecode at deployment.
Specifically with Remix, a metadata.json file is generated when the contract is compiled. This metadata file contains (among other things) a placeholder where the user will put the library's address.
Let’s go into this more…
An example of deploying with a library
Here is a Solidity file that contains a contract and a library. The contract sampleContract contains a function that calls a function in the library aLib:
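The Solidity file itself appears as an image in the original post and is missing here. A minimal sketch of what such a file could look like (the names sampleContract and aLib come from the article; everything else is illustrative):

```solidity
pragma solidity ^0.6.0;

// A library with a single public function. Libraries cannot store state.
// Because the function is public (not internal), calls to it compile to a
// DELEGATECALL to the library's address, which must be linked at deployment.
library aLib {
    function double(uint256 x) public pure returns (uint256) {
        return x * 2;
    }
}

// A contract that calls the library function.
contract sampleContract {
    function compute(uint256 x) public returns (uint256) {
        return aLib.double(x);
    }
}
```

Note that if the library function were declared internal, its code would be inlined into the contract at compile time and no linking (and no gas saving) would occur.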
In this example, the library and the contract get compiled at the same time. If they had been in separate files, the library would need to be imported (using the import keyword) into the file containing the contract.
Here is the link to the Solidity docs about Libraries: https://solidity.readthedocs.io/en/latest/contracts.html?highlight=library#libraries
The Deployment Issue
The library and the calling contract get deployed separately and so they will have separate addresses.
In order to use a library, the calling contract must have the library’s address.
But as stated above, the library's address is not directly specified in the Solidity code. The calling contract's compiled bytecode contains a placeholder where the library's address will go.
At deployment of the calling contract, Remix looks in the contract's metadata for the library's address and updates the placeholder with that address.
So before deploying a contract that calls a library, you need to generate the contract's metadata (a.k.a. its build artifact) and then input the library's address into the metadata file.
A contract’s metadata is generated at compile time.
Generating Metadata
By default, Remix doesn't generate a file with the contract's metadata. To generate this file, we need to go to the Settings module.
Go there by clicking on the settings icon in the icon panel.
Check the first option, Generate contract metadata.
Go to the Solidity Compiler and compile the file.
Go to the File Explorer, find the artifacts folder, and open it up.
Open the sampleContract.json file. The deploy object in that file should look like this:
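The screenshot of that deploy object is missing from this version of the article. A plausible sketch, based on the descriptions in the article (the exact shape, network keys, and placeholder format may differ across Remix versions):

```json
"deploy": {
    "VM:-": {
        "linkReferences": { "aLib": "<address>" },
        "autoDeployLib": true
    },
    "main:1": {
        "linkReferences": { "aLib": "<address>" },
        "autoDeployLib": true
    },
    "ropsten:3": {
        "linkReferences": { "aLib": "<address>" },
        "autoDeployLib": true
    }
}
```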
After we get the address of the library, we’ll replace the <address> tag of the network we are deploying to with the actual address of the library.
But as you can see, the default behavior of Remix is to automatically deploy the library and link it automatically in the calling contract.
autoDeployLib: true
The default behavior works well for testing out some code. But when the library has already been deployed, you’ll need to make these edits. And that is the situation we are simulating in this article.
But note, when using a library that has already been deployed, you still need to import it in the contract’s source code before you compile it.
Deploy the library & grab its address
In the Deploy & Run module, deploy the library by selecting it in the contract select box. Then retrieve its address by clicking on the copy icon to the right of the deployed instance.
Edit the calling contract’s metadata file
Go to the calling contract's metadata file and replace the <address> tag with the library's address for whichever network you are deploying to.
In this example we are deploying to the JavaScript VM; in the metadata file above, it is called VM.
We also need to update the autoDeployLib setting so that Remix doesn't automatically deploy the library when it deploys the contract. So set autoDeployLib to false.
Deploy the library-calling contract
Switch to the Deploy & Run transaction module.
Select the JavaScript VM Environment and select the sampleContract in the list of compiled contracts.
Click on Deploy.
And you have deployed a contract linked to a library!
Under the Hood
Let’s see where the library’s address gets placed in the contract’s bytecode.
Switch to the Solidity Compiler module
Select the sampleContract contract in the list of compiled contracts.
Click on Bytecode; this copies the following to the clipboard:
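The clipboard content is not reproduced in this version of the article. Based on the description below, it is the compiler's bytecode output, which looks roughly like this (the bytecode is truncated and illustrative; the placeholder value is the one quoted later in the article):

```json
{
    "linkReferences": {
        "browser/sample.sol": {
            "aLib": [ { "length": 20, "start": 229 } ]
        }
    },
    "object": "6080604052...__$83229fb62534ab89035722de277194ff6d$__...0033",
    "opcodes": "...",
    "sourceMap": "..."
}
```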
Essentially, this is what the compiler is returning.
For our purposes we are interested in the linkReferences element and the object element.
linkReferences (the first element) describes the libraries used by the contract. object (the next element) is the compiled contract bytecode. This is what gets deployed and saved onto the blockchain. In this example, the value __$83229fb62534ab89035722de277194ff6d$__ inside the bytecode is just a placeholder for the library address. In your case, the placeholder will be between __$ and $__. It is a bit buried, so search for __$ (underscore underscore dollar sign).
The metadata JSON from Remix IDE tells Remix to replace the placeholder with the given address.
To review, the library's address gets injected at deployment, so the replacement of the placeholder with the actual address happens at this stage. If we had just used the library as a normal contract and imported it at compile time, we would be deploying its code with the contract's code, so there would be no gas savings. By digging into the metadata.json file and updating its settings, we can use a library to decrease the size of the contract and bask in the joy of saving some gas.

Source: https://medium.com/remix-ide/deploying-with-libraries-on-remix-ide-24f5f7423b60 by Rob Stupay, June 15, 2020. Tags: Solidity, Ethereum, Remix IDE, Libraries, Design Patterns.
The New Great Age: The Age of AI and Automation
(Chapter 1) Artificial Intelligence And Its Significance
Mankind, throughout history, has gone through massive changes. Dozens, if not hundreds, of advances have claimed to have revolutionized our lives. Our historical timeline can be broken into four great ages, four moments that altered the course of human development, culture, and livelihood. The first of these was the discovery of fire and the creation of language, which then led to the great age of agriculture and cities. Once we had harnessed fire, learned how to communicate, built cities, and started farming, humans began looking for ways to encapsulate their knowledge and share it with the rest of the world. This led to the birth of writing and the wheel, the third great age. According to ourworldindata.org, only 12% of the population could read and write in 1820; today the share has reversed, with only 14% of the world population remaining illiterate as of 2016. Over the last 65 years, the global literacy rate increased by 4% every 5 years, from 42% in 1960 to 86% in 2015 (Literacy, by Max Roser and Esteban Ortiz-Ospina, first published in 2013; last revision September 20, 2018).
With more and more people able to read and write, and more generally to get an education, the world once again started developing, leading us to our fourth age, the one contemporary to us: the age of robots and Artificial Intelligence.
Creator: Andy | Credit: Getty Images/iStockphoto
What Is Artificial Intelligence (AI):
Artificial Intelligence is a term that researchers and developers, to this day, have not been able to properly wrap their heads around. I will try my best to explain what AI means and why it is significant in today's society. Throughout this series of chapters, I will keep my explanations easy to comprehend.
The term Artificial Intelligence was coined back in 1955 by math professor John McCarthy. The professor later regretted calling it AI and suggested the term computational intelligence, and even today many dislike the term Artificial Intelligence because it seems alarming. Although I agree with the unease the term creates, I also stand by the fact that, if properly perceived and utilized, automation will be more of a companion than a competitor.
Homage to John McCarthy, the Father of Artificial Intelligence (AI), by Andy Peart, published on October 29th 2020.
A widely accepted definition of artificial intelligence is that it is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. AI uses machine learning to mimic human intelligence. To be able to mimic humans, the computer has to learn how to respond to certain actions, so it uses algorithms and historical data to create something called a propensity model. A propensity model is a statistical approach, and a set of techniques, that attempts to estimate the likelihood of subjects performing certain types of behaviour. Most of us can relate to this: when we shop online and search for an item, or even just browse the internet, we see advertisements related to the product. This is no coincidence; the AI uses the customer's data, machine learning, and other computational techniques to predict the customer's wants. Later in the series, we will discuss how AI is not just robots with human-like characteristics, and cover the three types of AI.
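To make the idea concrete, here is a toy sketch of a frequency-based propensity score in Python. The data, categories, and numbers are entirely made up for illustration; real systems use far richer features and statistical models such as logistic regression:

```python
from collections import defaultdict

def propensity_scores(history):
    """Estimate P(purchase | viewed category) from historical data.

    `history` is a list of (category, purchased) pairs, e.g. a browsing
    log across many customers. The score for a category is the fraction
    of views of that category that ended in a purchase.
    """
    views = defaultdict(int)
    buys = defaultdict(int)
    for category, purchased in history:
        views[category] += 1
        if purchased:
            buys[category] += 1
    return {c: buys[c] / views[c] for c in views}

# Toy browsing log: viewers of "shoes" bought 3 times out of 4 views.
log = [
    ("shoes", True), ("shoes", True), ("shoes", False), ("shoes", True),
    ("books", False), ("books", True),
]
scores = propensity_scores(log)
print(scores["shoes"])  # 0.75 -> show this shopper more shoe ads
print(scores["books"])  # 0.5
```

An advertising system would then rank ad categories by these scores for each customer segment.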
As artificial intelligence develops, a growing number of public sectors and companies are looking for ways to utilize automation, and why not? The benefits of AI are numerous, let’s look at some today.
Collaborative Intelligence:
Robots and AI will help people perform their tasks better, not just take their jobs. The combination of man and machine will be unstoppable. In fact, collaboration between humans and artificial intelligence is already happening. The Harvard Business Review completed research involving 1,500 businesses, concluding that companies benefit the most when humans and machines work together (Collaborative Intelligence: Humans and AI Are Joining Forces, by H. James Wilson and Paul R. Daugherty, from the July–August 2018 issue). BMW discovered that when its robot and human teams worked together, they were approximately 85% more productive than when the robots worked on one side of the factory and the employees worked on their old automated assembly line on the other. James Wilson, an expert in the field of Artificial Intelligence, describes it this way: "Together, they really started to see those big productivity improvements that just weren't possible through the old way of thinking about automation." (James Wilson, co-author of Human + Machine: Reimagining Work in the Age of AI, published on 26 June 2018). Collaboration between humans and AI is a topic that will be discussed further in later chapters.
Collaborative Intelligence: Humans and AI Are Joining Forces, by H. James Wilson and Paul R. Daugherty From the July–August 2018 Issue
Who Needs Breaks?
An average human worker works around 40 hours per week. Humans need breaks to function and produce results, some more than others (yes, Elon, I am talking about you). That being said, no human can work without rest; they would simply die otherwise. Robots, on the other hand, don't get tired like humans. Automation can work on repetitive tasks 24/7/365 without taking breaks, getting bored, or making errors. A good example is using robots to wrap chocolates, a task that is boring, repetitive, and needs no creativity. Humans would have to take breaks, while robots would do the task non-stop and with essentially zero error.
Lightning Fast Decisions:
Humans, before making a decision, analyse a situation both emotionally and practically, while AI uses machine learning to make decisions. In simpler words, automation uses previously fed data to make decisions. For example, in the movie I, Robot, Will Smith's character is drowning alongside a little girl, and a robot decides to save him over the girl because his chances of survival are higher. Although this example is from a movie and might seem far-fetched, automation can make similar decisions. A study by the MIT Technology Review on whether a self-driving car should kill a baby or a grandma found that the answer depends on which region you live in: in cultures that revere the elderly, the car would aim for the child, while in the West, which prizes youth, the car would hit the grandmother. Either way, the car would come to a decision far faster than a human, as ample data has been fed to it. That being said, the final decision can be ambiguous. (Should a self-driving car kill the baby or the grandma? Depends on where you're from. By Karen Hao, published on October 24th, 2018.)
Illustration by Paul Blow for POLITICO
Summary:
The novel age of AI has plenty to offer. Similar to previous ages, AI will benefit humanity in numerous ways. Above, I have listed just three, but there are many more. However, like any invention, AI has drawbacks too, something we will talk about in chapter 2.

Source: https://medium.com/swlh/the-new-great-age-the-age-of-ai-and-automation-288dc6ace526 by Agam Johal, December 22, 2020. Tags: Technology, AI, Machine Learning, Automation.
When We Conquer Ageing, We Must Transform How Society Works

As age-reversal sciences approach major breakthroughs, we need to start planning for how to live in a world of immortals. By Kesh Anand, December 23, 2020 (5 min read).
“Nothing is certain, except for death and taxes” goes the famous quip.
Although — technically, there are quite a few countries (including several in the Middle East) where taxes aren’t really a thing.
Death, however, has been the great leveller from time immemorial. It ravages all — from the mightiest kings to the lowliest peasants.
That might not be the case going forward, however. An increasing number of scientists envision the human lifespan becoming indefinite; with some going so far as to say key breakthroughs for this are just a couple of decades away.
Now it is all very well for you dear reader, to live for 100s of years. But what if everyone does? Is society even equipped to handle a population composed entirely of supercentenarians?
Here is a look at how things might change from Social, Political, Economic, and Environmental perspectives:
Social Changes:
If you could live for 500 years, would you still aim to have all your kids before you're 40 (~30 seems to be the norm for becoming a parent in OECD countries)? Maybe you'd space them out significantly. How would familial relationships change if there were a 50-year gap between siblings?
Would marriage still be till death do us part? It's all very well putting up with your husband or wife for a few decades, but can you really deal with them for 300 years? Or for eternity? Perhaps we will start to have marriage contracts that expire every century.
Would people still take part in perceived risky activities like riding motorcycles or being a volunteer firefighter?
Political Changes:
How would politicians change the way they acted if they knew they’d be around to deal with the consequences in 200 years? Action on climate change is an obvious one that might be different.
A lot of political change occurs due to the clearing out of older generations (and their world views), being replaced by younger ones. Can you imagine the laws we’d have in place today if the majority of the population had been born in 1750 and were still around to vote and pass laws?
The same goes for the future: if laws in 2250 were made by people born in the 1970s, I'd imagine that things would be a lot less progressive than they otherwise would have been.
What about lifetime appointments like those to the Supreme Court in the US, the House of Lords in the UK, or even tenure in universities. Would these continue as is, or will they have finite term limits?
Economic Changes:
Would you keep the same job and career for all that time? Or will you have multiple career transitions? Perhaps you could spend “one life span” as a doctor, another as a pilot and so on.
Perhaps you spend the first couple of hundred years working hard to establish a solid nest egg to live off, and then spend the next few pursuing low earning but more personally rewarding “jobs”.
Retirement would largely be a thing of the past. This would mean the entire aged care industry, pension system, funeral parlours, life insurance and other industries and services focused on the elderly would largely be rendered moot.
With the lack of older people dying and leaving behind inheritances; younger generations will find it harder to get a foothold into the economy. This might mean they need to move further and further out of the city due to affordability, increasing urban sprawl on the one hand — but increasing the needs of infrastructure on the other.
Environmental:
We’re going to run out of room. Sort of.
Human populations are expected to top out at around 11 billion people. As all countries develop and start to adopt resource-intensive diets like those of Australia, North America and Europe — we will find that there is not enough arable land to support these populations.
As such, we will need a way to output more calories per acre than we do today, or we will need to look at moving to other planets and adopting farming practices there.

Source: https://medium.com/predict/we-are-on-the-verge-of-immortality-but-first-we-must-transform-society-299ca4e42103 by Kesh Anand, December 24, 2020. Tags: Future, Society, Life, Science, Aging.
Build Your Docker Images Automatically When You Push on GitHub

Build Rules
To let Docker Hub know how and when to automatically build your images, you can specify build rules. You may have multiple rules that apply in parallel, effectively assigning multiple tags to your images with just one git push:
Build rules for tag-based and master branch automated builds
On the above figure we set up two different rules — let’s see what they do.
Tag rule
A “tag rule” allows Docker Hub to start building an image upon discovery of a new tag in your git repository. This should probably be your preferred way to build images for the official releases of your image.
As git tags can be arbitrary and contain anything a developer might choose, Docker Hub allows you to define a Regular Expression, in order to identify which part of a git tag should become part of the tag for the Docker image. In the example above we chose /^[0-9.]+$/ as the regular expression with which the image tag is extracted. Practically speaking, this tells Docker Hub that our git tags are eligible for automated builds, should be in the form of numbers with dots, and that should be the only text of the tag. The image tag that’s extracted and assigned to it is denoted by the {sourceref} attribute entered under Docker Tag.
Branch rule
A “branch rule” allows Docker Hub to start building an image upon activity on a specific branch. The typical use case here is to have such a rule to build a latest image based on the daily activity of your git repository. The branch to be monitored is entered under Source and the tag to be assigned to the Docker images built from it under Docker Tag.
Build your first image
As we now have everything in place, we only need to tag the current state of our git repository and push the tag back to GitHub; Docker Hub will do its magic and build an image for us:
Pushing a tag to trigger automatic build
Indeed, a couple of minutes later Docker Hub has automatically built and tagged the 1.0.0 image:
The first automated build of release 1.0.0
To also see our latest image being created automatically we need to push a new change in the master branch:
Pushing a change to master branch
Again, after a few minutes:
latest image built from master branch
Docker Hub provides you two additional screen on which you can monitor your builds:
Builds screen | https://betterprogramming.pub/build-your-docker-images-automatically-when-you-push-on-github-18e80ece76af | ['Nassos Michas'] | 2019-10-01 04:25:47.972000+00:00 | ['Containers', 'Docker', 'Github', 'Continuous Delivery', 'Continuous Integration'] |
Preparation for Bitcoin Cash stress test day begins

A group that runs the website 'BCH stress test day' has announced a stress test of the platform, as reported on news.bitcoin.com. As part of this test, the group is planning to process millions of minimum-fee transactions, all at once, in one single day.
Since the number of transactions to be processed is insanely huge, the lead developers working on the project want everyone in the community to participate and help pull this off.
Furthermore, spendbch.io, along with Bitbox, an open-source project, came up with a tool for this stress test. The tool allows anyone to 'spam' the network with transactions.
Node.js app for stress testing by Spendbch.io
The BCH blockchain has a block capacity of 32MB, many times larger than that of many blockchain-based ledgers available today, including the prime cryptocurrency, Bitcoin. This large block size is exactly what makes higher throughput possible for BCH. A higher block size means more transactions fit within every block, which directly means more transactions can be processed every time a block is verified and added to the blockchain/public ledger.
The BCH chain had an 8MB block size until this May, and miners had been producing blocks of between 2MB and 8MB. Notably, ViaBTC was able to mine an 8MB block that contained 37,000 transactions. That successfully mined 8MB block is evidence that a 32MB block size would help process many more transactions.
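A rough back-of-the-envelope check of those numbers (my own arithmetic, not from the article): the 37,000-transaction block implies an average transaction size of roughly 227 bytes, so a full 32MB block could hold about four times as many transactions:

```python
BLOCK_8MB = 8 * 1024 * 1024   # bytes in the historical 8MB block
TXS_IN_8MB_BLOCK = 37_000     # transactions ViaBTC fit into it

avg_tx_size = BLOCK_8MB / TXS_IN_8MB_BLOCK
print(round(avg_tx_size))     # ~227 bytes per transaction

BLOCK_32MB = 32 * 1024 * 1024
est_txs = BLOCK_32MB / avg_tx_size
print(round(est_txs))         # ~148,000 transactions per 32MB block
```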
Now that the platform has a 32MB block size, the developers want to stress test it. The test will have the network process a huge number of transactions, all at once, within a 24-hour window.
Therefore, Spendbch.io came up with an app for stress testing the network. The tool is a “starting point” for the 1 BCH bounty program that will be offered to someone who can develop an advanced version of the app.
“BCH-stress test is a concept app that can be used as a starting point to claiming the stress test bounty,” said the spendbch.io developers
Furthermore, there has been a spike in the number of BCH transactions. Bitbox creator Gabriel Cardona, along with many other BCH enthusiasts, has been sharing screenshots of the transaction counts. Johoe's Mempool and Fork.lol have been collecting data that show a steep jump in the number of BCH transactions.
“Big increase in bitcoin cash transaction count since yesterday. Here’s the script from Spendbch.io if you’d like to play along — Let’s stress test BCH to prove to the world we can scale,” Cardona tweeted
The stress test is scheduled to happen on September 1st, around 12 PM UTC. Furthermore, the open-source tool developed by spendbch.io is available here in the GitHub repository.
What do you think of this stress test? Are you going to participate in the test? Let us know your thoughts in the comment section below.

Source: https://medium.com/bitfolio-org/bitcoin-cash-stress-test-day-begins-with-a-flyer-9c549842b864, July 5, 2018. Tags: Cryptocurrency, Crypto, Blockchain, Bitcoin, Bitcoincash.
Data Engineering — How to Build an ETL Pipeline Using SSIS
Discussing how to build an ETL pipeline for database migration using SQL Server Integration Services (SSIS) in Visual Studio 2019.
Image Credit to Goh Rhy Yan
SQL Server Integration Services (SSIS) provides a convenient and unified way to read data from different sources (extract), perform aggregations and transformations (transform), and then integrate the data (load) for data-warehousing and analytics purposes. When you need to process large amounts of data (GBs or TBs), SSIS becomes the ideal approach for such workloads.
One example usage is migrating a database to another database with a different schema on a different server. There are many other ways to do it, such as:
Console Application
PowerShell
SQL Server command line tool — SqlPackage.exe
SQL Server Management Studio (SSMS)— Generate Scripts with data
The downside of the above approaches is that they can be error prone, less user friendly, or unable to handle large amounts of data. For example, Generate Scripts in SSMS will not work when the database is larger than a few gigabytes.
In this article, I will discuss how this can be done using Visual Studio 2019. You can also just clone the GitHub project and use it as your SSIS starter project. Here is the GitHub link.
Prerequisites
Visual Studio 2019 already installed. Note that Visual Studio 2017 handles SSIS slightly differently, and this article may not work exactly for Visual Studio 2017.
SQL Server already installed
Preparation Part 1 — Install SQL Server Data Tools in Visual Studio
Microsoft has documentation on the installation process as well, but all you need is to launch Visual Studio Installer and install “Data storage and processing” toolsets in the Other Toolsets section.
Visual Studio Installer (Screenshot by Author)
Check Data storage and processing (Screenshot by Author)
Preparation Part 2 — Install the SSIS Visual Studio Extension
Download the extension from the Visual Studio Marketplace and follow the intuitive instructions to install it.

Source: https://medium.com/swlh/data-engineering-how-to-build-an-etl-pipeline-using-ssis-in-visual-studio-2019-caa85e6b9c94 by Shawn Shi, November 13, 2020. Tags: Data Engineering, Data Science, Visual Studio, Sql Server, Etl.
Has the type of health care system or type of government mattered during the coronavirus pandemic?

By Kent R. Kroeger (April 9, 2020)
Key Takeaways:

- There is no systematic evidence that the overall quality of a country's health care system has had an impact on the spread (morbidity rate) or lethality (mortality rate) of the coronavirus.
- Instead, a country's per capita wealth and exposure to the international economy (particularly international tourism) significantly increase the spread of the virus within a country. This may be partly a function of wealthier populations being more likely to have their coronavirus-related illnesses diagnosed and treated, but it is also likely that international travel is spreading the virus worldwide.
- As for the mortality rate, the story is more complicated: the single biggest driver, so far, is simply the time since the country's first coronavirus-related death. Once the virus has found a vulnerable host, the final outcome may be difficult to change (at least for now).
- As for the charge by the US intelligence community that China has under-reported the coronavirus' severity in their country, the model reported here suggests China, given its size and characteristics, should have so far experienced 10 times the coronavirus cases it has reported and a case fatality rate twice its current estimate. If China is under-reporting, as charged by the US, it may have between 33,600 and 70,000 coronavirus-related deaths, not the 3,339 it is currently claiming.
- To the contrary, it is also plausible that China's aggressive suppression and mitigation efforts have successfully limited the spread and lethality of the coronavirus. The model reported here cannot determine which conclusion about China is true, or whether both contain truth.
_________________________________________________________________
It’s OK to feel some tentative optimism about the coronavirus pandemic. It does appear, finally, that the virus and its associated illness — COVID19 — is peaking in many of the countries hardest hit by the virus (see Figure 1).
Figure 1: New daily COVID-19 cases in Italy, South Korea, Iran and Spain
Data Source: World Health Organization (as of 7 APR 2020)
Almost a month and a half after the coronavirus reached its peak in new daily cases in South Korea (around 900 cases a day), the virus peaked in Italy around March 22nd, and in Spain and Iran around April 1st.
If President Donald Trump’s advisers were correct in Monday’s White House daily coronavirus update, the U.S. may also witness its peak in new daily cases within the week.
This weekend, New York, the current locus of the US outbreak, saw a significant decline in the number of new infections and deaths.
“In the days ahead, America will endure the peak of this pandemic,” Trump said Monday.
In fact, from April 6th to 7th, the aggregate US data showed its first day-to-day drop in the number of new COVID-19 cases since late March (see Figure 2).
Figure 2: Cumulative and new daily COVID-19 cases in the U.S.
In many of US states hardest hit by the coronavirus — such as New York, Washington, and California — the number of new cases each day have leveled off or declined in the past week.
These are genuine reasons for optimism. While Trump’s hope for an economic return to near-normal by Easter was overly optimistic, the possibility it could happen in early May is not.
Europe and the U.S. were caught flat-footed by the coronavirus, but it is looking increasingly like they will escape with far fewer cases and deaths than originally anticipated by many epidemiological models.
[Of course, additional waves of this virus may still occur and we may never see a true return to normal until a coronavirus vaccine is made widely available — and by widely available I mean free to everyone.]
________________________________________________________________
In this moment of cautious cheer, my questions increasingly focus on how the world measured (and mismeasured) this pandemic and what national-level factors may have suppressed and, conversely, aided the spread of the coronavirus?
Everyone has theories. Some are convinced autocratic countries (i.e., China, Iran, Venezuela, Russia) have hidden the true impact of the coronavirus on their countries. Others have declared the coronavirus proves the importance of universal health care in containing such viruses. Still others have conjectured that the number of COVID-19-related deaths has been over-reported by anti-Trump forces, most likely to make Trump look bad. Conversely, the national media has unofficially declared (without conclusive evidence, as usual) that the US government has been under-counting COVID-19 deaths (presumably to make the Trump administration look more effective in its coronavirus response than is justified).
It is speculation at this point. It will be many months — probably years — before we know what actually happened during the 2019–20 Coronavirus Pandemic. The coronavirus pandemic is still on-going, after all, and the reality is: counting the number of people with any disease or virus is genuinely hard and prone to human error.
But we can start to address some of the controversies, if only tentatively.
If we assume that the majority of countries have exercised a fairly high level of due diligence in measuring the presence of the coronavirus within their jurisdiction, we may be able to identify those countries who have been much less than honest.
Moreover, after controlling for suspected dishonest coronavirus measurement, we may also see hints at the impact of national health care systems and containment policies on the spread and lethality of the coronavirus.
________________________________________________________________
Let us start our inquiry with this premise — there are two fundamental measures of the coronavirus: (1) the number of confirmed coronavirus cases relative to the total population (morbidity rate), and (2) the number of coronavirus-related deaths as a percent of those confirmed to have the virus (mortality rate).
For simplicity’s sake, what I am calling the mortality rate is actually the case fatality rate. In reality, the coronavirus’ mortality rate is much lower than the case fatality rate as its calculation will include undiagnosed cases experiencing only minor or no symptoms.
If universal health care were ever to show its value, now is the time. The logic is simple: Countries where citizens do not need to worry about the cost of a doctor visit, the probability these citizens get tested and treated early for the coronavirus is significantly higher.
Also, countries with universal health care may also be more likely to institute broad-based coronavirus testing, thereby identifying asymptomatic super-spreaders of the virus. Subsequently, when diagnosed with the virus, these citizens will be isolated sooner from the healthy population. Furthermore, early diagnoses of the coronavirus may also improve the chances infected individuals survive the virus.
Can we see this in the data?
________________________________________________________________
Figure 3 (below) is produced directly from World Health Organization (WHO) data. The chart shows the morbidity rate of COVID-19 (i.e., frequency of COVID-19 cases per 100K people) compared to its mortality rate (i.e., deaths per confirmed case).
I’ve segmented the chart in Figure 3 into four quadrants, each defined by countries’ morbidity and mortality rates. Countries with high morbidity and mortality rates are in the upper right-hand quadrant of Figure 3 (e.g., Italy, France, Spain, Netherlands, UK and Iran.); while countries with low morbidity and mortality rates are in the lower left-hand quadrant (e.g., Russia, Japan, Pakistan, Nigeria, and India).
Figure 3: COVID-19 Cases per 100K persons versus Number of Deaths per Confirmed Case.
What does Figure 3 tell us? In truth, not much.
Ideally, a country would want to be in the lower left-hand quadrant (Low/Low) of Figure 3, right? But a simple inspection of the quadrant reveals it is occupied mainly by countries in eastern Europe, Africa, South America and southern Asia (Russia, Ukraine, Pakistan, India, Nigeria, among others) — few of which find themselves ranked by the WHO among the countries with the best health care systems. One reason for their favorable performance so far may be that the coronavirus hasn’t significantly spread to those countries yet — after all, many are in the southern hemisphere.
Here are two fair questions to ask: Are these countries performing relatively well with the coronavirus due to favorable circumstances (fewer people traveling to and from coronavirus sources like China; climatic context; stronger containment policies — an area where authoritarian governments may have an advantage; and/or better health care systems)?
Or, are some of these countries simply not deploying the resources and expertise necessary to measure the impact of the coronavirus? Do they even have the capacity to do so?
________________________________________________________________
Figure 3 begs more questions than it answers, but it still may hint at some tentative conclusions. For example, experience tells me countries clustered around the intersection of the average country-level morbidity (34 cases per 100K people) and mortality rates (3.4%) are in the accuracy ballpark. If I am feeling generous, that list includes the US and China, along with countries like South Korea, Poland and Turkey.
The countries that raise my eyebrows are the major outliers from the center cluster: Italy, Spain, UK, France, Bangladesh, Nigeria, Indonesia and India.
The variation in the coronavirus mortality rate ranges from 12 percent in Italy to near zero percent for New Zealand (a country with 1,239 confirmed cases and only one death). What could possibly explain this difference in the coronavirus mortality rate between two advanced economies? Could it be their health care systems? WHO ranks Italy’s health care system 2nd in the world, while New Zealand’s is only 41st. Russia has a reported coronavirus mortality rate of 0.8 percent and has the 130th best health care system in the world, according to the WHO.
More in line with expectations, Germany, a country given significant positive coverage for its coronavirus response — plaudits comparable to perhaps only South Korea’s — has a reported 2.1 percent mortality rate on a base of 113,296 confirmed cases.
Why such discrepancies in reported mortality rates?
Dietrich Rothenbacher, director of the Institute of Epidemiology and Medical Biometry at the University of Ulm in Germany, credits Germany’s broad-based, systematic testing as being the reason his country’s mortality figures are hard to compare to other countries.
“Currently we have a huge bias in the numbers coming from different countries — therefore the data are not directly comparable,” Dr. Rothenbacher recently told the BBC. “What we need to really have valid and comparable numbers would be a defined and systematic way to choose a representative sampling frame.”
This is where statistics — my profession — becomes critical. As Dr. Rothenbacher asserts, Germany would not have understood the extent of the coronavirus crisis without testing both symptomatic and asymptomatic cases, just as South Korea and, sadly, only a few other countries have done.
Systematic random sampling needed to be a component of every nation’s coronavirus testing program.
It wasn’t.
In New Jersey, where I live, the office of the state’s Health Commissioner told me I couldn’t get tested for the coronavirus without meeting one of the following qualifications (…it felt like a job application):
Already being hospitalized and showing symptoms of COVID-19.
A health care worker showing symptoms and having who been exposed to others known to have the virus
having who been exposed to others known to have the virus Anyone known to be part of a cluster outbreak (one example being a recent Princeton, NJ dinner party where multiple attendees were diagnosed with the coronavirus)
And vulnerable populations (e.g., nursing home residents).
Someone like me, a 55-year-old male with no underlying health problems but showing mild flu symptoms — low-grade fever, persistent cough, and chest congestion — cannot get tested in New Jersey.
The New Jersey testing protocol is common across the U.S. given the relative scarcity of testing kits.
________________________________________________________________
Anytime the anecdotal evidence is contradictory or unclear, I turn to data modeling — even if crude — to test some of the initial hypotheses surrounding a controversy.
The challenge with the coronavirus is the availability and data quality of the key causal factors we’d like to test in a coronavirus model for morbidity and mortality rates. In the following linear models, I tested these independent variables:
Out of necessity, I limited the data analysis to countries with reliable data on all key independent measures and with populations over 3 million people, leaving the analysis with 76 countries.
[Note: The linear models, however, were not weighted by country population size. For example, China weighted the same as Serbia in the following models.]
The estimated linear models for morbidity and mortality rates are reported in the Appendix below.
Figures 4 and 5 show the model predictions for each country versus the actual morbidity and mortality rates. In the morbidity model graphic (Figure 4), I only show a selection of key countries in order to simplify the data presentation.
Figure 4: Predicted versus Actual COVID-19 Cases per 100K Persons for Selected Countries (as of 4 APR 2020).
Figure 5: Predicted versus Actual COVID-19 Deaths per Confirmed Cases (as of 4 APR 2020).
On the issue of autocratic countries (who are also U.S. adversaries), there is circumstantial evidence that Venezuela, China and Russia have fewer COVID-19 cases than we would expect given their key characteristics, even while their deviance as a group is not statistically significant.
For example, China may have 10 times the coronavirus cases they have officially reported and a mortality rate twice their current estimate. If true, China may have between 33,600 to 70,000 deaths related to the coronavirus, not the 3,339 they are currently claiming.
Likewise, Russia may have 19,500 coronavirus cases, not the 10,031 they have reported to the WHO and Venezuela may have 1,625 cases, not 167 cases.
Even if, according to the model, the reported numbers for China, Venezuela and Russia are low, we can’t rule out the possibility they are low because these countries have done a superior job containing the virus.
Perhaps the most puzzling (and saddest) case is Iran. Our model suggests Iran has experienced far more COVID-19 cases than we would expect given its characteristics. The most recent WHO numbers for Iran are 66,220 confirmed cases and 4,110 deaths.
Has Iran done an especially poor job of containing the virus or are they measuring more comprehensively than other countries? Unfortunately, my model can’t settle that point.
Final thoughts
I anticipated when I started looking at the coronavirus in 76 countries that the quality of their health care system s— starting with affordable, universal health care — would show up as a significant factor in distinguishing between countries that successfully took on the coronavirus pandemic (e.g., South Korea, Germany, Singapore, and Japan) and those less successful (e.g., Italy, Spain, France, UK and Iran).
While the number of hospital beds per 1,000 people does correlate significantly with lower mortality rates (see Appendix, Figure A.2), the overall quality of a country’s health care system did not. In fact, countries with the best compensated medical professionals actually have higher coronavirus mortality rates.
The coronavirus has hit Europe (and China) the hardest. In Italy, the high percentage of elderly helps explain its high volume of cases, but that can’t be the only explanation. And isn’t just that advanced economies have put more effort into measuring the occurrence of the virus in their communities that explains this fact. The coronavirus has found disproportionately more friendly hosts in these societies. We may have to accept that the coronavirus is one of the evolving risks associated with high disposable incomes and deeps global connections through trade and tourism.
I know this: I will never go on a cruise ship ever again.
Theories on why some countries handled the pandemic better than others are also plentiful. The most compelling analysis may have occurred while the pandemic was just starting.
Writing in early March, Chandran Nair, founder and CEO of the Global Institute for Tomorrow, may have come up with the best explanation still. “Strict and centralized enforcement of lockdowns, quarantines, and closures are the most effective way to contain the virus,” wrote Nair. “What’s emerged from the coronavirus crisis is the fact that some states are equipped to handle this type of action, and some are not — and it has little to do with development status.”
Or, more cynically, could we conclude that one of the costs of emphasizing individual freedom is that when collective action is necessary — including a strong, central state response — Europeans and Americans answer the call by hoarding toilet paper and Jim Beam?
I’m not quite there yet. For one, I don’t believe Nair fully appreciates how the modern state and elites consolidate their power during these uncertain times, and how this can leave even more people vulnerable economically and physically to the next pandemic — and there will be another one. Second, for every example of state power getting this done quickly and efficiently, there are dozens more where greed, incompetence, and arrogance lead the state to do more damage than good. Before we give the modern state more power, let us think this through some more first.
Here is what our governments and scientific community should be doing...
If this global pandemic ends relatively soon — as it appears it might — our governments and health researchers must immediately resolve themselves to understand how many people really did get infected by the coronavirus and how many actually died from its consequences.
Currently, we have a global mish-mash of epidemiological data of unknown quality or generalizability. Only probability-based sample studies can give us the real numbers and it is only with those numbers that we can really sit down and decide: What worked and what was a total waste of time and resources?
K.R.K.
Data used in this article are available by request to: [email protected]
APPENDIX: The Linear Models
Figure A.1: Linear Model for Confirmed COVID-19 Case per 100K Persons (Morbidity)
Figure A.2: Linear Model for COVID-19 Deaths per Confirmed Cases (Mortality) | https://kentkroeger.medium.com/has-the-type-of-health-care-system-or-type-of-government-mattered-during-the-coronavirus-pandemic-e063dd64377 | ['Kent Kroeger'] | 2020-04-09 21:46:28.906000+00:00 | ['Statistical Analysis', 'Coronavirus Update', 'Coronavirus', 'Epidemiology', 'Covid 19'] |
How VOTing classifiers work! | How VOTing classifiers work!
Classification is an important machine learning technique that is often used to predict categorical labels. It is a very practical approach for making binary predictions or predicting discrete values. The classifier, another name for classification model, might have the intention of predicting whether someone is eligible for a job or it could be used to classify the images of multiple objects in a store.
Classification, like other machine learning techniques, use data sets. A dataset is a combination of multiple values from different variables. After obtaining an optimal dataset, it is split into two: the training and testing set. The training set often has a larger proportion of the dataset. It is likely to take up about 70% to 90% of the dataset.
The training set is inserted into the machine learning algorithm to create a predictive model with an added step called cross-validation. Cross-validation is a great way to ensure that the built model does not overfit the training set and it also optimizes the versatility of the model. Then, the model can be used to predict the labels in the testing set. The predicted labels are further compared to the actual testing set labels via metrics such as confusion matrix, precision score, recall score, F1-score, roc auc score.
Once the construction of the classification model is over, a data point’s values can be inserted into the algorithm and the algorithm makes a decision by attributing a specific label to this data point based on the variables’ inputs.
Now imagine if different classification methods were asked to make decisions based on the data instances’ inputs. There are bound to be different answers. This is where voting classifiers come into play.
What is a Voting Classifier?
A voting classifier is a classification method that employs multiple classifiers to make predictions. It is very applicable in situations when a data scientist or machine learning engineer is confused about which classification method to use. Therefore, using the predictions from multiple classifiers, the voting classifier makes predictions based on the most frequent one.
A real life scenario could see a data scientist being confused about whether to use a random forest classifier or a logistic regressor to predict the type of flower based on their dimensions.
Using the prompt above, a step-by-step guide has been created below on how to use python via Jupyter Notebooks to build voting classifiers.
Starting with the code below, one can import the classifiers using scikit-learn.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier
Using python via Jupyter Notebooks, the scikit-learn’s ensemble feature is accessed and a voting classifier is imported. There are three other classifiers in the code above: a random forest classifier, a logistic regressor and a KNearest Neighbor classifier. These three will be attributed to objects as seen below:
log_clf = LogisticRegression()
rnd_clf = RandomForestClassifier()
knn_clf = KNeighborsClassifier()
Afterwards, an object is created for the voting classifier. The voting classifier has two basic hyperparameters: estimators and voting. The estimators hyperparameter creates a list for the objects of the three classifiers above while assigning names to them. The voting hyperparameter is set to either hard or soft.
If set to hard, the voting classifier will make judgments based on the predictions that appear the most. Otherwise, if set to soft, it will use a weighted approach to make its decision. I’d recommend setting it to soft when using an even number of classifiers because of its weighted approach and setting it to hard when using an odd number of classifiers because of its “majority carry the vote” approach.
vot_clf = VotingClassifier(estimators = [('lr', log_clf), ('rnd', rnd_clf), ('knn', knn_clf)], voting = 'hard')
The voting classifier like any other machine learning algorithm is used to fit the independent variables of the training dataset with the dependent variables
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split iris = load_iris()
x, y = iris['data'], iris['target']
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state = 42, train_size = 0.85) vot_clf.fit(x_train, y_train)
After fitting, it can be used to make predictions and the accuracy of its predictions is measured.
pred = vot_clf.predict(x_test)
accuracy_score(y_test, pred)
Here is an image that shows how a voting classifier was used as a predictive model for a dataset and compared to other classifiers. The code was initially obtained from Aurelion Geron’s book, Hands-On Machine Learning with Scikit-Learn and TensorFlow Concepts, Tools, and Techniques to Build Intelligent Systems, and I ran them on Jupyter Notebooks. | https://towardsdatascience.com/how-voting-classifiers-work-f1c8e41d30ff | ['Mubarak Ganiyu'] | 2020-11-06 03:01:20.030000+00:00 | ['Python', 'Data', 'Machine Learning', 'Classification', 'Data Science'] |
A Search For The Beautiful | I showed up to a manicure appointment earlier this week drenched in sweat and desperate for water. It seems that walking anywhere in Bali for more than five minutes results in the kind of clammy flush that makes you look unwell. The nail technician, concerned, walked out with cool towels, water, and a little piece of paper with a Ralph Waldo Emerson quote on it. Precious, I thought. But, it was more than precious, it was ironic timing.
Though we travel the world over to find the beautiful, we must carry it with us or we find it not.
This week I got what I’m going to refer to as ‘the feeling’. My first taste of this feeling happened about a month into my six-week sabbatical from work — nearly 10 months ago. The feeling? It’s one of complete contentment. Different from the bursting feeling of happiness, it’s a quiet stillness that overshadows any worry, or any doubt. It makes me feel light and joyful. It makes me feel undoubtedly connected to myself and what I’m doing. As someone who has a difficult time finding contentment, it is quite possibly the most pleasant emotion I’ve been touched by.
After settling into this newfound feeling on my sabbatical, I felt determined to bring it home with me. To keep the light airy feeling as I re-entered my real life. And I did for maybe a month, and then as most pleasant emotions do, it slipped between my fingers faster than I could cup my hands in a way that would allow me to hold onto it. My days were once again filled with things that didn’t feel like conscious choices, but more as burdens that kept my life afloat. The setup of my sabbatical allowed me a quick taste of the beautiful, but it wasn’t yet mine to take home with me. In order to figure it out, I needed to deconstruct how I landed where I did.
Most of my 20s were spent on a path I never thought to question. It was a decade filled with college parties, finding a husband, bridal showers, bachelorette parties, a house, a dog, and a corporate job. Each milestone felt like posing for a forced Instagram photo boasting an impressive view with cropped out imperfections. Despite my discontent, I trusted it. It was so beaten down by people who seemed to have it all, I hardly noticed the offshoots. The less-obvious paths. Instead, I kept walking, hoping I at least looked good while doing it.
I obsessed over making my life look like the lives of those I admired — of those who took good pictures in front of good views. I wanted to be a witty mom with cute kids whose friends thought I was a ‘cool mom’. I wanted a job that would make others think I was powerful. I wanted to have a house on Lake Washington others would envy. I wanted so many ideas of things, I stayed on course, forgetting to ask myself if any of the stops on the path excited my soul. I’m not saying the people living waterfront on Lake Washington aren’t happy — I’m sure many of them are — but I’m certain the ones who are happy are happy because they thoughtfully selected paths that filled their soul up. They didn’t chase the house, the house was a result of chasing things that mattered to them.
To build the idea of the life I thought I wanted, I chose my degree and optimized my life around money. I positioned marriage as an arbitrary finish-line that needed to be crossed — I even put pressure on its happening. My days were filled with things I thought I was supposed to do. Things the world told me I should do. Not things I loved.
Shortly before my 25th birthday, I was swallowed whole by a deafening cloud of anxiety. It infected my entire being, leaving me as a shell of who I was before. I felt unable to think. Unable to be happy. Unable to leave the path I had so intently walked down. This led to a distancing of my body and mind — a case of depersonalization that made it feel like I was no longer attached to the body that somehow kept moving. I forced myself to continue working, to socialize harder, to drink more, to be on a plane as often as I possibly could. Anything to distract me from the discomfort of not being whole.
I lived this way for close to 4 years. The alarms in my body were sounding, but I was too afraid to listen, because I knew that if I did, it would destroy everything I had worked so hard to build. To find contentment would surely lead me down a path that wasn’t my marriage. That wasn’t my job. That wasn’t my life. In absolute fear of what others would think, I put bumpers up in my lane. Letting myself fall into the gutter and past the pins I thought I was supposed to hit felt like too much for my ego to bear. I wanted a nice house, not to soul search.
I now call this the default mode. It looks different for every person, but it’s the mode where we detach ourselves from our true desires. It’s the one where we keep mindlessly walking. It’s the mode where we give too many shits about what those around us would think if we did anything different. If we’re lucky, our bodies will revolt. They’ll let us know when what we’re doing isn’t right. When I finally started to listen, I realized how much time was spent in this place. How my life had become an accumulation of choices and beliefs that were hardly my own. This awareness emerged slowly, but with a permanence that required action.
To figure out where I was going, I felt a burning need to go alone. Dragging someone else up and down different offshoots searching for the right path wasn’t fair. You can’t walk happily together if one is certain of their path and the other needs to wander. As it turns out, a relationship will never bring ‘the feeling’ if you can’t find it on your own. That old adage feels far too simple for the depth of truth it holds.
The impetus of me leaving my husband, my home, and my job wasn’t to find ‘the feeling’, it was to find and start doing what I truly wanted to be doing. But, in being so wrapped up in what I wanted to be doing, that glorious, wonderful feeling came back. While I know this feeling will always be a fleeting one, its presence in these opportune moments feels like enough to let me know I’m on the right path. This new one feels scary. It doesn’t feel safe or certain. Yet, I feel the most at peace I’ve ever felt. I can’t yet boast that I’ve found ‘the beautiful’, but I’m definitely a step closer than I was before. | https://medium.com/@katiethegreen/a-search-for-the-beautiful-a3ba3533e4fd | ['Katie Green'] | 2019-11-21 12:37:46.288000+00:00 | ['Travel', 'Life Lessons', 'Divorce', 'Awakening', 'Breakups'] |
Machine Learning: The Great Stagnation | Machine Learning: The Great Stagnation
This blog post generated a lot of discussion on Hacker News — many people have reached out to me giving more examples of the stagnation and more examples of projects avoiding it. Maybe I’ll add to this article or maybe I’ll write a new one, let’s see what happens. In the meantime if you can’t wait for me to stop staring at the ceiling and write something new, I’m pretty sure you’ll enjoy my e-book robotoverlordmanual.com
Machine Learning Researchers
Academics think of themselves as trailblazers, explorers — seekers of the truth.
Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work, then it moves from the realm of research to engineering. Unfortunately, this also means that most research careers will invariably be failures, at least if failure is measured via "objective" metrics like citations.
The construction of Academia was predicated on providing a downside hedge, or safety net, for researchers, where they can pursue ambitious ideas whose likelihood of success is secondary to the boldness of the vision.
Academics sacrifice material opportunity costs in exchange for intellectual freedom. Society admires risk takers, for it is only via their heroic self-sacrifice that society moves forward.
Unfortunately, most of the admiration and prestige we have towards academics is from a bygone time. Economists were the first to figure out how to maintain the prestige of academia while taking on no monetary or intellectual risk. They'd show up on CNBC finance and talk about "corrections" or "irrational fear/exuberance". Regardless of how correct their predictions were, their media personalities grew with the feedback loops from the YouTube recommendation algorithm.
It’s hard to point the blame towards any individual researcher; after all, while risk is good for the collective, it’s almost necessarily bad for the individual. However, this risk-free approach is growing in popularity and has specifically permeated my field, Machine Learning. A FAANG salary with an academic appointment is the best job available in the world today.
With State Of The Art (SOTA) chasing, we’ve rewarded and lauded incremental researchers as innovators and increased their budgets so they can do even more incremental research, parallelized over as many employees or graduate students as report to them.
Machine Learning Researchers can now engage in risk-free, high-income, high-prestige work They are today’s Medieval Catholic priests
Raphael, Disputation of the Holy Sacrament (Wikimedia, CC BY 4.0)
Machine Learning Students
Machine Learning PhD students are the new Investment Banking analysts, both seek optionality in their career choices but differ in superficial ways like preferring Meditation over Parties and Marijuana & Adderall over Alcohol and Cocaine.
A Machine Learning PhD is now just an extended interview for FAANG
The entire data science interview process at larger labs has become a mix of trivia and prestige. Checking out a portfolio takes way too long, but checking that you graduated from Stanford or coauthored a paper with Google Brain — now that’s a good filter!
We’ve gamified and standardized the process so much that it’s starting to resemble case studies at consulting interviews.
“Recruiter: Which activation functions do you know?”
“Me: Infinitely many x ^n * sigmoid(x) ∀n”
“Recruiter: Ok.. Tell me about your biggest career failure”
“Me: Having to answer your questions”
Matrix Multiplication is all you need
I often get asked by young students new to Machine Learning what math they need to know for Deep Learning, and my answer is: matrix multiplication and derivatives of square functions. All these neuron analogies do more harm than good in explaining how Machine Learning actually works.
LSTMs are a bunch of matrix multiplications, Transformers are a whole bunch of matrix multiplications, and CNNs use convolutions, which are a generalization of matrix multiplication.
Deep Neural Networks are a composition of matrix multiplications with the occasional non-linearity in between
Mark Saroufim: “Matrix Multiplication is All you Need”
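To make the point concrete, here is a minimal sketch in pure Python (the helper names are mine): a two-layer network's forward pass really is just matrix multiplication with a pointwise non-linearity in between.

```python
def matmul(A, B):
    """Multiply an (n x m) matrix by an (m x p) matrix, both as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def relu(M):
    """The occasional non-linearity between the matrix multiplications."""
    return [[max(0.0, x) for x in row] for row in M]

def forward(x, W1, W2):
    # layer 1: matmul then non-linearity; layer 2: another matmul
    return matmul(relu(matmul(x, W1)), W2)

# one input vector (as a 1x2 matrix) through a tiny 2-3-1 network
x  = [[1.0, -2.0]]
W1 = [[0.5, -1.0, 2.0],
      [1.0,  0.0, 0.5]]
W2 = [[1.0], [2.0], [-1.0]]
print(forward(x, W1, W2))  # [[-1.0]]
```

Swap `relu` for another non-linearity or stack more `matmul` calls and you have, structurally, most of modern Deep Learning.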
Matrix multiplication was invented way back in 1812 by Jacques Philippe Marie Binet, but you’d be forgiven for thinking forward propagation was invented much later than that.
With Automatic Differentiation, the backward pass is essentially free and about as engaging to compute as 50-digit long division. Deriving long, complicated gradients is fake rigor that was useful before we had computers.
Mark Saroufim: “Numerical Computing > Calculus”
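A toy reverse-mode automatic differentiation sketch makes the point (the `Value` class and its names are mine, not from any library): once the forward pass is written, gradients fall out of a mechanical backward sweep, with no hand-derived calculus anywhere.

```python
class Value:
    def __init__(self, data, parents=(), backward=lambda: None):
        self.data, self.grad = data, 0.0
        self._parents, self._backward = parents, backward

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad   # d(xy)/dx = y
            other.grad += self.data * out.grad   # d(xy)/dy = x
        out._backward = backward
        return out

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad                # d(x+y)/dx = 1
            other.grad += out.grad               # d(x+y)/dy = 1
        out._backward = backward
        return out

    def backprop(self):
        # topologically ordered sweep from the output back to the leaves
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(3.0), Value(4.0)
z = x * y + x          # z = xy + x, so dz/dx = y + 1 = 5, dz/dy = x = 3
z.backprop()
print(x.grad, y.grad)  # 5.0 3.0
```

Each operator records its local derivative as a closure; `backprop` just replays them in reverse. That bookkeeping is all frameworks like PyTorch automate at scale.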
When I was a graduate student at UC San Diego, I remember shying away from Deep Learning because it was not considered serious Machine Learning: there were no good proofs for why these models should work.
Empiricism and Feedback Loops
I’ve learnt “the hard way” that Deep Learning is an empirical field, so why or how something works is often anecdotal as opposed to theoretical.
The best people in empirical fields are typically those who have accumulated the biggest set of experiences and there’s essentially two ways to do this.
1. Spend lots of time doing it
2. Get really good at running many concurrent experiments
Age is a proxy for experience but an efficient experimentation methodology allows you to compress the amount of time it would take to gain more experiences.
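The second path can be sketched in a few lines (the objective function here is a stand-in I made up; in practice each call would launch a full training run on a cluster). Running configs concurrently compresses the wall-clock time it takes to accumulate experience.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_experiment(config):
    # stand-in for a full training run; returns a mock validation score
    lr, depth = config
    return config, 1.0 / (abs(lr - 0.01) + abs(depth - 6) + 1.0)

# a small grid of (learning rate, depth) configs, launched concurrently
grid = list(product([0.1, 0.01, 0.001], [2, 6, 12]))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_experiment, grid))

best_config, best_score = max(results, key=lambda r: r[1])
print(best_config)  # (0.01, 6) wins on this mock objective
```

With a data center behind `run_experiment` instead of a toy function, this loop is exactly the multiplier the big labs enjoy.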
If you have a data center at your disposal this further multiplies your ability to learn. If you and all your peers have access to data centers this is yet another multiplicative feedback loop since you can all learn from each other.
This helps explain why the most impactful research in Machine Learning gets published in only a few labs such as Google Brain, DeepMind and Open AI. There are feedback loops everywhere.
The Rise of Transformers
The past 3 years in particular have been an unrelenting deluge of incremental work with paper titles that read like tabloid headlines:
“Attention is all you need!”
“Transformers on Proteins!”
“Transformers on Molecules”!”
“Transformers on Images”
“Fast Transformer!”
“Transformers are Graph Neural Networks!”
“Learning to Transform with Transformers“
“Small Transformer!”
“Long Transformer”!
“Useful” Machine Learning research on all datasets has essentially been reduced to making Transformers faster, smaller, and able to scale to longer sequence lengths.
This is a problem reminiscent of the discovery of NP-completeness — journals were flooded with proofs that yet another problem was NP-complete.
BERT engineer is now a full time job. Qualifications include:
- Some bash scripting
- Deep knowledge of pip
- Waiting for new HuggingFace models to be released
- Watching Yannic Kilcher’s new Transformer paper the day it comes out
- Repeating what Yannic said at your team reading group
It’s kind of like Dev-ops but you get paid more.
Graduate Student Descent and the Death of First Principles
Neural Network weights are learnt via Gradient Descent and Neural Network architectures are learnt via Graduate Student Descent.
The below flow chart describes how Graduate Student Descent works.
Mark Saroufim: “Graduate Student Descent”
Graduate Student Descent is one of the most reliable ways of getting state-of-the-art performance in Machine Learning today, and it’s also fully parallelizable over as many graduate students or employees as your lab has. Armed with Graduate Student Descent you are more likely to get published or promoted than if you took on uncertain projects.
The popularity of Graduate Student Descent stems from cargo-culting configs, where certain loss functions, depths, and architectures are generally regarded as good.
It’s quite difficult to actually reason from first principles because Machine Learning algorithms are complex systems with huge variability in parameters, where the interactions are nonlinear and unpredictable. Ablations help, but even then they aren’t entirely conclusive over such a wide range of parameters.
Technical papers that try to give intuition behind how or why a specific technique works often look closer to astrology than science, with sentences like “encourage the network to be confident in its predictions”.
Mark Saroufim: “The deep lines”
Scale is Trivial
I sometimes get the impression that academics think that transitioning to large models goes something like:
python model.py --tiny_model --local
python model.py --super_large --on_pristine_cluster
It’s trivial to think of running a large model but it’s certainly not trivial to actually do it. This misconception is best illustrated with this immensely popular meme.
If something simple like stacking more layers works better than statistical learning, then you have to wonder who the real clown is
To “STACK MORE LAYERS” you need to worry about model and data parallelism, pipelining, tuning your hyperparameters, hardware accelerators, network vs compute vs storage vs IO bottlenecks, early stopping, scalable architectures, distillation, pruning, etc.
You can do all the convergence bounds you like with Chernoff bounds and Markov’s inequality over some parameter that is assumed to be Gaussian, but if your proposed algorithm is worse than “STACK MORE LAYERS” then your proposed algorithm isn’t very good.
Every paper is SOTA, has strong theoretical guarantees, an intuitive explanation, is interpretable and fair but almost none are mutually consistent with each other
If Constantinople fell so can the institution of Theoretical Machine Learning. It doesn’t really matter how many angels can dance on the head of a pin.
I’m cautiously optimistic about Causal Reasoning, what I’d like to see is it graduating from a tool for meditation to libraries that people actually use daily.
Fake Rigor
This does not mean I’m opposed to mathematical formalism — if anything I love math and I want to see more of it in Deep Learning. I only caution against fake rigor.
Fake rigor looks like assuming certain “nice properties” about the data so that theorems work out, or gradient derivations that take up multiple pages of an appendix instead of just using Automatic Differentiation.
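To make the contrast concrete, here is a toy sketch — not any particular framework’s API; the `Dual` class and the function `f` are invented for this illustration — of forward-mode automatic differentiation with dual numbers. The derivative falls out of the same pass that computes the value, with no hand derivation:

```python
# Toy forward-mode autodiff with dual numbers (illustrative only).

class Dual:
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.grad + other.grad)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule applied mechanically at every step
        return Dual(self.val * other.val,
                    self.val * other.grad + self.grad * other.val)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(4.0, 1.0)                 # seed dx/dx = 1
y = f(x)                           # y.val = 57.0, y.grad = 26.0
```

One evaluation of `f` yields both the value f(4) = 57 and the derivative f′(4) = 26 — no appendix of hand-derived gradients required.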
But how do you prove that new theoretical results are useful?
Well, the worst way to do this is to simply combine complicated mathematical ideas into a neural network because you can: draw on the reader’s aesthetic sensibilities and discuss why Fourier is the heart of computation. The optimization community is often guilty of this — they propose activation functions like swish and then spend pages and pages talking about the nice properties of the loss landscape.
The most reliable way to get new ideas adopted is to create a benchmark where existing SOTA methods fail and then show how your technique is better. This is hard by design — it should be hard to displace proven ideas with unproven ones. This technique has the advantage of not requiring any Twitter arguments.
It’s important to avoid becoming Gary Marcus, criticizing existing techniques that work without proposing something else that works even better.
If you can’t do that then maybe you haven’t stumbled on something useful, maybe it’s just beautiful or elegant. Searching for warmth and fuzziness is still a worthwhile goal.
Where is the Innovation in Machine Learning?
While my introductory lament may have led you to believe that there is no innovation going on in Machine Learning, that we’ve settled on a cargo-culting monoculture — nothing could be further from the truth.
There is still substantial innovation happening in Machine Learning just not from Data Scientists or Machine Learning Researchers.
Here are the projects that I believe represent a glimmer of hope against the Stagnation of Machine Learning.
Keras and Fast.ai
Machine Learning is a Language, Compiler and Design problem
A programming language is a tool for thinking that needs to be designed with the same sensibilities as any other consumer product.
When I first got interested in Keras, some of my peers mentioned to me that it wasn’t as serious as doing “real ML work” in Tensorflow, and it made me silently wonder why they weren’t programming in FORTRAN.
Keras introduced me to the idea of viewing Machine Learning as a language design problem. Instead of thinking of matrices or neurons I was now thinking in terms of layers just like I do when I read a paper.
import tensorflow as tf
from tensorflow.keras import layers

# Create layers
layer1 = layers.Dense(2, activation="relu", name="layer1")
layer2 = layers.Dense(3, activation="relu", name="layer2")
layer3 = layers.Dense(4, name="layer3")

# Call layers on a test input
x = tf.ones((3, 3))
y = layer3(layer2(layer1(x)))
A matrix is a linear map but linear maps are far more intuitive to think about than matrices
You can then combine multiple inputs and outputs to build far more interesting networks.
model = Model(inputs=[x1, x2], outputs=y)
Keras is a user-centric library whereas Tensorflow — especially Tensorflow 1.0 — is a machine-centric library. ML researchers think in terms of layers; automatic differentiation engines think in terms of computational graphs.
As far as I’m concerned my time is more valuable than the cycles of a machine so I’d rather use something like Keras.
This doesn’t mean I’m happy with slow code, which is why it’s crucial to build good compilers and intermediate representations like XLA that run my user-friendly code fast.
Performance vs Abstraction is a false dichotomy — the history of computing is proof of this
Putting the user first is the approach that Fast.ai took when building their Deep Learning library. I think of Jeremy Howard as the Don Norman of Machine Learning.
Instead of just focusing on the model building part, Fast.ai builds tools around all of the below.
https://www.fast.ai/2020/02/13/fastai-A-Layered-API-for-Deep-Learning/ with permission
By tools I don’t mean a black box service, I mean software design patterns specific to Machine Learning: instead of Abstract Factories, abstractions like Pipelines to chain preprocessing steps, callbacks for early stopping, all the way to generic yet simple implementations of Transformers. Design patterns are more useful than black boxes because you can understand how they work, modify them, and improve them for both yourself and others.
Honorable mention to nbdev, which removes some of the common annoyances of working with notebooks by eliminating the obstacles between you and shipping your code as a library, including a human-readable git representation, continuous integration, and automated PyPI package submission.
Julia
The promise of Deep Learning is Differentiable Computing, where instead of writing programs to accomplish certain tasks you feed a model input/output pairs and ask it to generate a program for you.
Once you start thinking of neural networks as programs, you can also think of programs as neural networks and differentiate programs.
Unfortunately, Python is not differentiable, which is why Google and Facebook have each built their own frameworks in C++ with Python bindings that are automatically differentiable.
Julia on the other hand is a language made for scientific computing where everything is automatically differentiable by default. So if you build an ODE solver you get a neural ODE solver for free.
Ordinary and Partial Differential Equations are the way we formalize most relationships in Science from astronomy to pharmacology so being able to speed up these simulations by orders of magnitude means we’re in the middle of a golden age of Scientific Computing.
I also find the Neural ODE approach to robotic simulations extremely exciting. The main problem in Reinforcement Learning is that models are generally large and difficult to train, but you can make them much smaller and easier to train if, instead of treating a simulation as a black box, you treat it like a differentiable white-box ODE solver.
HuggingFace
What is the most important Machine Learning company today?
If you asked this question in 2018 the answer may have been Open AI. They dazzled the world with beautiful demos of agents besting expert video game players. They reminded me why I got into Machine Learning in the first place. Their reputation overshadowed everything, but over time it became their core product.
Open AI is a media and service company
Look at the pretty blog posts with beautiful typography, pay to use GPT-3. Open AI is not a platform company.
https://huggingface.co/ with permission
I can’t think of a single large company where the NLP team hasn’t experimented with HuggingFace. They add new Transformer models within days of the papers being published, they maintain tokenizers, datasets, data loaders, NLP apps. HuggingFace has created multiple layers of platforms that each could be a compelling company in its own right.
Billions of Dollars of value will be created from HuggingFace on problems which are a lot less speculative than AGI. HuggingFace avoids the typical ML startup trap of turning into either a consulting firm or citation farm.
Hasktorch
Haskellers often resemble a cult — everything is a function or a Monad or a Lens. It’s often hard to follow what a Bi-Category is or why you should care.
Real world Haskell functions try to model IO as a Monad or a web server as a function with state but there’s an application that I believe Haskellers don’t hammer on quite enough.
Haskell is the best functional programming language in the world and Neural Networks are functions
This is the main motivation behind Hasktorch, which lets you discover new kinds of Neural Network architectures by combining functional operators. Justin Le does this idea a lot of justice in his series Purely Functional Typed Models.
Training a model
Composing Logistic and Linear Regression
RNN
Admittedly the RNN code sample is a tad complicated, but consider that it’s orders of magnitude less code than what you’d see in a typical neural network library, and not something you can get by combining logistic regression with a couple of extra operators. I’m hoping I can convince Justin to one day create a lecture for us mere mortals to invent our own neural network architectures.
Unity ML agents
Open AI gym was a great attempt at creating a platform where various games were used in benchmarks for Reinforcement Learning research. The goal was to eventually advance Reinforcement Learning to deal with more complex problems and really push the boundary.
Unfortunately, SOTA chasing meant that the benchmarks became the goal, and an entire community of researchers has overfit techniques like “world models” to these benchmarks. So generally there’s a trend towards ever more complex algorithms on simple benchmarks, whereas the most complex benchmarks like Dota seem to be using the simplest algorithms scaled with impressive infrastructure.
Mark Saroufim: “The Bitter Lesson”
Unity is the most user-friendly game engine out today — I love it and use it for all my side projects. Unity ML agents is a way for you to turn a video game into a Reinforcement Learning environment.
Reinforcement Learning environments are essentially custom datasets, and I’m confident it will become the de-facto simulator for complex robotic applications. Create a complex multi-agent negotiation game with bluffing, where agents need to have a grasp of physics and an understanding of optical illusions. Go crazy!
Any intelligent behavior is best benchmarked with Unity ML agents
Biotech
I wrote this article before AlphaFold2 was released — I explain how it works on YouTube. Biotech is a huge deal and is most definitely not stagnating, I’ll probably address this in a future article. But as a sneak peek, most interesting scientific problems can be modeled as graphs which motivates why I’m so interested in Graph Neural Networks.
Parting words
I am sad to see that the most exciting work in Machine Learning is coming from outside of Machine Learning — I’ve spent 10 years in this field and I learn more from the crackpot outsiders on Twitter today than I do from peer-reviewed papers.
I want to hear more proposals that I know for a fact will probably work, I want more ideas, I want Machine Learning to be fun again.
Keep an open mind and most importantly don’t be this guy. And if you enjoyed this please let me know by subscribing!
https://www.wikiwand.com/en/Pedant CC BY 4.0
Acknowledgements
Thank you sudomaze, thecedarprince, krishnanpc and 19_Rafael for helpful feedback while I was livestreaming myself writing this article on twitch.tv/marksaroufim | https://towardsdatascience.com/machine-learning-the-great-stagnation-3a0f044e17e0 | ['Mark Saroufim'] | 2021-03-19 19:37:26.355000+00:00 | ['Deep Learning', 'Editors Pick', 'Machine Learning', 'Bert', 'Transformers'] |
40 Curious Nuclear Energy Facts You Should Know

- Over 450 nuclear power reactors are used around the world.
- About 31 different countries have operational nuclear reactors.
- Nuclear energy supplies 11% of the world’s electricity.
- It is the second-highest provider of low-carbon energy in the U.S., next to hydro energy.
- Nuclear power plants are safer work environments compared to offices.
- Enrico Fermi is the father of nuclear energy. He discovered nuclear fission and created the first nuclear power plant, the Chicago Pile-1.
- The word ‘nuclear’ stems from the nucleus of an atom.
- Nuclear energy is released from “nuclear fission”, the process of splitting an atom in two.
- The first commercial nuclear power stations started operating in the 1950s.
- There have been over 55 tests a year for the last 30 years.
- Uranium is the most common fuel for nuclear energy.
- The U.S., France, and Japan are the largest producers of nuclear power.
- Nuclear waste is radioactive and must be disposed of properly.
- In the history of nuclear energy, there have been 3 major disasters — Three Mile Island, Fukushima, and Chernobyl.
- Only 2 nuclear weapons have ever been used in warfare, although many weapons have been tested.
- A rem is a large dose of radiation. Every creature on Earth receives an average of 300–500 mrem (millirems) a year.
- France uses 80% nuclear energy as an electricity source.
- Nuclear fusion is the safest way to create power.
- Uranium was also used to color stained glass in the medieval ages.
Nuclear Energy Facts Infographics
The sun is the largest nuclear reactor.
The sun converts hydrogen into helium through nuclear reactions. Through nuclear fusion, the Sun fuses 620 million metric tons of hydrogen and makes 606 million metric tons of helium each second in its core. This is similar to how nuclear reactors produce heat, and consequently, energy. The heat created by nuclear fission makes steam that spins a turbine to generate electricity.
Nuclear power isn’t really more dangerous than “traditional” energy.
Over 10 major disasters have been caused by fossil fuel energy in the last 25 years — namely, the BP oil spill. On the other hand, only three have been caused by nuclear power.
Nuclear energy powers the Mars rovers.
Prior Mars expeditions relied on solar panels, but the exploration process was slowed down by dust build-up on the panels or days with little sunlight. To solve this, NASA devised the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). The MMRTG is an energy source that relies on the heat generated by decaying plutonium dioxide to power the Curiosity rover.
It costs over 6 billion dollars to build one new reactor for a nuclear power plant.
Nuclear power plants may be expensive to build, but they are relatively cheap to run. Nuclear energy competes with fossil fuels as a means of electricity generation. If you consider the environmental and economic consequences of long-term reliance on fossil fuels, nuclear energy shines as an alternative energy source. | https://medium.com/@sahas-dahal/40-curious-nuclear-energy-facts-you-should-know-facts-net-20879a355137 | ['Sahas Dahal'] | 2020-10-13 04:57:33.423000+00:00 | ['Facts', 'Energy', 'Nuclear', 'Energy Efficiency', 'Nuclear Energy'] |
Who is saying CERB is a disincentive to work and what is the outcome of removing CERB?

WW Winter · Jul 20, 2020 · 3 min read
What is the goal of this person or group of persons in removing CERB?
What has kept people away from work?
Is it CERB or is it the pandemic?
CERB is the Canadian Emergency Response Benefit. CEWS is the Canadian Emergency Wage Subsidy. The first is paid to individuals by the government. The second is paid to businesses by the government.
To blame the CERB for people staying out of work is like blaming the not-working-in-a-paid-job female for being the one to have a pregnancy. In this case, a pregnancy is a natural consequence of sex between humans who want to have a child. Unemployment in a pandemic is a natural consequence of risk avoidance by business (avoiding the expense of paying salaries). If removing the CERB from individuals and giving it to businesses is meant to change the effects of the pandemic, then we should have seen businesses doing their utmost to qualify for the subsidies that already existed. However, the reception was lukewarm. So the government is sweetening the pot. Does this sound like something we’ve seen before? Perhaps these businesses were shaky even before the pandemic, and the pandemic was just a fast-forward function in this reality show we’re all part of.
Obviously, there was no need for this by business in the first version of CEWS, as shown by the low actual use of the fund — and I suspect the same of the second version.
If the social experiment is to provide incentives to business, how about something that does not cost money today? How about tax credits for businesses that increase their full-time headcount?
Obviously, you want to fund business that will survive to pay taxes eventually. You’re not funding business just to survive another month and then go bankrupt the next. If you didn’t care if the business goes bankrupt, then it would be a bad solution because a bankrupt business means laid off employees, which brings you back to the original problem.
I’m wondering in what situation CERB is worse than CEWS if the goal is to keep people safe and employed.
Is it the “economy” that’s more important than people? Who is the economy, by the way? Isn’t it people, too? Isn’t it the same person that’s currently unemployed?
The people who are not working in paid jobs are contributing to the “economy” because the bread, utilities, and even the seeds for planting are bought and paid for within the economy.
In fact, I would argue that people not working in paid jobs are contributing more to the economy than they are getting out of it. These individuals still require the same utilities and calories to live, and yet they don’t get wages to go about their days. If you think about it, each person contributes the same to the earth each day: all the carbon dioxide exhaled into the environment and all the other excretions in a day go back to the earth, and at death, molecules return to the earth: mostly carbon. These same molecules have existed for billions of years, have formed part of stars, oceans, some other person in history, and consumed food, and in the future will be part of other stars, oceans, persons, and consumed food. What is the difference?
Is it the one year in 15 billion years that was spent working to earn wages in a paid job that is subject to a wage subsidy?
In the extreme situation imagined to justify removal of CERB where paid jobs are available to everyone who seeks them (which has not been the case for a few years now, as shown by the presence of many overcrowded employment agencies funded by the government whose sole mission is to help people find employment): If a person refuses to work, just as if a person refuses to have a pregnancy in my earlier example, is the consequence of that denial of basic income or denial of societal inclusion? With a denial of basic income, the person would eventually die. | https://medium.com/@wonder.wall.winter/who-is-saying-cerb-is-a-disincentive-to-work-and-what-is-the-outcome-of-removing-cerb-1d2f61e631e9 | ['Ww Winter'] | 2020-07-20 09:46:01.293000+00:00 | ['Cerb', 'Covid Diaries', 'Pandemic Diaries', 'Trudeau', 'Working In Canada'] |
Enviro-Art Books as Micro Ecologies | The following notes are from curator, Jeffrey A. Lee, for the 2009 exhibition, Opening a Book at The LAND/an art site gallery in Albuquerque, New Mexico.
My open format environmental materials book, Seasonal Shades (2009), was included in this group exhibition along with the following artists:
David Abel, Kathy Bruce, Basia Irland, Mary Ellen Long, Alastair Noble, Maria Rooks, Marilyn Stablein, Beata Wehr, and Margaret Whiting.
detail of Seasonal Shades (2009) | environmental materials as an open book | Abigail Doan
___
THE BOOK is, then, the container of provocation.[1]
Consider a book as a sequence (of texts, images, objects, ideas) bound or otherwise contained so that it’s kept together and in order. The binding or container also preserves its contents.
The book for me should be without limits, like the desert, thus an exploded book… a desert form whose only limits are the four horizons.[2]
Consider the sequence of seasons, of growth and perpetuation, of metamorphosis, or any other natural sequence. What binds it together or contains it? What preserves it? What interrupts or disturbs it, and what are the consequences?
Books, like ecologies, are local systems. Like an ecosystem, any part removed from a book compromises the whole. Unlike an ecosystem, anything put into a book can belong there.
THE SPACE we love is unwilling to remain permanently enclosed. It deploys and appears to move elsewhere without difficulty…[3]
I know from my experience that books ultimately convey their own unspeakable language to the solitary reader.[4]
A book is a sequence of spaces.[5]
[1] Dick Higgins
[2] Edmond Jabès
[3] Gaston Bachelard
[4] Harry Reese
[5] Ulises Carrión
___
Links to other environmental (art) book projects/investigations of mine:
The LAND/an art site, retrospective catalogue (2011)
Azimuth: Writing on Walls (2012)
Winter to Spring Nomad Archive (2013)
#CorrespondenceCourse with Brece Honeycutt Norte Maar (2016)
Walking Libraries (2016–2019) | online archive | https://medium.com/@abigaildoan/enviro-art-books-as-micro-ecologies-ce0dfbe43816 | ['Abigail Doan'] | 2019-09-10 17:37:35.245000+00:00 | ['Environment', 'Ecology', 'Book Art', 'Photography', 'Abigail Doan'] |
How to build an automated DIY irrigation system controlled by an app

In this article I will show you how to build an affordable automated irrigation system so you no longer have to water your plants by hand. The system can be used for house plants, raised beds, but also for plants in the garden or larger green areas, because the number of connected plants is scalable.
Here, the decision of when to irrigate is not made by a timer; instead, the soil moisture is measured directly, and irrigation is triggered only when it is actually needed.
The system has the following features, which can be controlled via the web app:
Monitor and display time series data at the minute, hour, day, week and month levels
Setting the water level from which automatic watering should be triggered
Setting how long the pump works during an irrigation
Manual activation of irrigation with a button
Switching between different sensor profiles
Switching between dark and light theme
Part List
Disclosure: These are affiliate links. As an Amazon Associate I earn from qualifying purchases.
The “n” in the amount is due to the number of pumps or different plants. For example, in a raised bed it is usually sufficient to have one pump and one sensor. However, if you have different potted plants, they all need to be watered separately, and therefore you have to get one pump and sensor for each potted plant.
Hardware Architecture
To measure the soil moisture, the NodeMCU ESP8266 microcontrollers read the analog signals of the capacitive sensors. The filtered and interpolated measured values are then sent through your local network to the Raspberry Pi. This is where you decide whether or not to trigger the relay associated with the sensor. When the relay is opened, the circuit to the pump is closed and the plants get watered.
The architecture was chosen so that the pump logic and the recording of measurement data are separate. This makes it possible to control up to 26 pumps with the Raspberry Pi (the number of GPIO pins available by default). It is also not possible to read the analog signals of the capacitive sensor with the Raspberry itself, because the Raspberry can only process digital signals. Admittedly, it is possible to read the sensors with an MCP3008 and the serial interface, but this requires more pins and the setup is not as clean. The pumps are also connected separately to a power supply whose circuit is controlled by the relay, so it is also possible to use pumps rated at 12V or higher.
Software Architecture
For the software architecture the MERN Stack was used. The software consists of a Node.js backend with Express.js, a Mongo database and a React frontend. A C++ script runs on the NodeMCU ESP8266, which sends data to the REST interface of the backend. The data is processed in the backend, where it is decided whether to irrigate or not. In addition, the data is then stored in the MongoDB. With the frontend, this data can also be requested from the backend via REST.
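As a sketch of that decision step — written here in Python for brevity, with made-up function names and thresholds, whereas the actual project implements this logic in its Node.js backend — the core of it is just a threshold check over the recent sensor readings:

```python
# Hypothetical sketch of the backend's watering decision
# (names and numbers are illustrative, not from the real project).

def should_water(readings, threshold):
    # Average the last few readings to smooth out sensor noise.
    recent = readings[-5:]
    return sum(recent) / len(recent) < threshold

def on_measurement(readings, threshold, watering_seconds, open_relay):
    # Called whenever a sensor posts a new value to the backend.
    if should_water(readings, threshold):
        open_relay(watering_seconds)   # closing the pump circuit waters the plant
        return True
    return False
```

For example, `should_water([40, 38, 35, 30, 28], 35)` is `True`, because the recent average (34.2) has fallen below the configured threshold.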
Setup the NodeMCU ESP8266
To record the measurement data of the sensors with the NodeMCU, you must first install the software. In order to flash the NodeMCU microcontroller you have to follow the steps described in this video.
Before you upload the program you have to set your wifi password, wifi name (ssid), the IP of the Raspberry Pi (host) and the sensor name (this name needs to be unique!). The sensor name will be the name that is displayed in the app, so it’s best to choose the name of the plant the sensor should be associated with.
If the Arduino IDE is successfully configured for the NodeMCU, you can upload the program you find in this repository under arduino-code/ESP8266_moisture/ESP8266_moisture.ino to the NodeMCU.
Installing the project on the Raspberry Pi
First you have to download the software on your Raspberry pi. You can find the source code on GitHub. The following command downloads the repository onto the Raspberry Pi:
git clone https://github.com/PatrickHallek/automated-irrigation-system.git
cd automated-irrigation-system
To avoid having to install the required programs manually, you can also run the application with Docker in containers. To do this, carry out the following steps:
curl -sSL https://get.docker.com | sh
sudo usermod -aG docker pi
sudo apt-get install -y libffi-dev libssl-dev
sudo apt-get install -y python3 python3-pip
sudo apt-get remove python-configparser
sudo pip3 install docker-compose
Now you have to pass the ip address of your pi into the REACT_APP_BACKEND_URL=http://<YOUR-RASPI-IP>:3000 environment variable in the docker-compose file:
sudo nano docker-compose.yml
You can find the IP with the command ifconfig . It should be something like 192.168.178.44. You can save your input in the Nano editor with ctrl + x , then type in yes , and finally exit with enter .
Now everything should be ready and you can start the application with the following command:
sudo docker-compose up
Attention: If you have a Raspberry Pi with a processor other than ARMv6, you need to adjust the image for the mongodb in the docker-compose file, since this version of the image is only suitable for that type of processor.
To run the software at startup, you can execute the following command: | https://medium.com/@patrickhallek1998/automated-smart-home-irrigation-system-9061c391f8e2 | ['Patrick Hallek'] | 2020-08-27 10:28:57.959000+00:00 | ['Smart Home', 'Irrigation', 'Watering', 'Automated', 'DIY'] |
IBUKUN AWOSIKA: LESSONS FROM A LEGENDARY ENTREPRENEUR

“The generation of our fathers have been entrepreneurs for so long.” — Ibukun Awosika
Mrs. Awosika began her entrepreneurial calling in the carpentry industry, selling handmade furniture, though she had always dreamt of working in a bank. She graduated as a chemical engineer from Obafemi Awolowo University (OAU) and sent application letters to financial institutions around the country, but after waiting too long, she never had the opportunity to work with a bank in the early stage of her life.
This disappointment spurred Awosika to venture into the wood industry, and few people know she is the brain behind the security doors you see at the entrance of most Nigerian banks.
However, Awosika was made the General Manager of First Bank early this year, which finally fulfilled her dream of working in a bank — now as a director, not an ordinary banker.
Awosika had resigned from a furniture company because its values were not to her liking, so she started her own company. Aside from leading First Bank, she has various business investments.
Having achieved this much, she shared her top advice for young Nigerian entrepreneurs at the AYESA Conference in Lagos last week.
CREATE VALUE
Having started her first enterprise many years ago with the intention of adding value to society and fulfilling her dreams, she advises young entrepreneurs to follow the same line, because business is about adding value. She said, ‘as young people there are so many areas for you to create value for yourself and the community.’ This advice is the bedrock of contributing to society: unlike most young people, whose first thought is money, Awosika advises that young people should think first about creating value.
BE A JOB CREATOR
As Nigeria is faced with many unemployment challenges, she advises the youth to stop thinking about graduating and then looking for a job. She said, ‘the problem of Nigeria now is due to the fact that thousands of graduates leave school to look for jobs.’ Many jobs could be created while still in school — most successful startups started while their founders were still in school.
TAKE RISK
My conclusion is that if you can’t take risks you cannot survive in this world, because entrepreneurship is about taking risks. Awosika said, “every individual who is positively engaged is a person of risk in the society.” She revealed how she took many risks in her career while starting up. In the same way, young entrepreneurs should be keen on taking risks and expect them on their ladder to success.
BE EDUCATED, BUT DON’T DEPEND ON IT
Many Nigerians graduate with the impression that “I have a certificate and I have a right to get a job”, but she made it clear that formal education does not mean the government must find a job for you. As a matter of fact, thousands of jobs are created every day, but they are not enough, which is why more entrepreneurs are needed. She said most young Nigerians have the ambition of graduating and going to work in a bank; according to her, this mindset should change, because very soon many people in banks will lose their jobs as technology takes over roles currently played by humans. She said: ‘Education offers you an exposed mind; your education is good, but don’t make it a handicap. Education is only a facilitator.’
BE FOCUSED; DON’T ENVY YOUR FRIENDS WHO ARE EMPLOYED
She advised young entrepreneurs never to envy their friends who are employed. It takes time before your fruit starts blooming as an entrepreneur, but it will come as a flood of surprises. There is no value in envying friends who are already making it big in their field. Awosika put it this way: ‘don’t envy your friends that already have work.’ You are creating yours.
With the points listed above, Awosika inspired over seven hundred young people at the AYESA conference held on December 5th, 2015 at the University of Lagos.
Market Neutrality: how it works in simple terms | Market Neutrality: how it works in simple terms
The CAPM model and the Beta
Source: Pikrepo
Proprietary trading firms and hedge funds normally use market neutral portfolios; they do not trade directionally. This is so common that the iconic figure of the long-term value fund manager is much less representative than we tend to think.
By directional trading I mean the way that small investors, retail traders, professional day traders and long-term value investors usually operate: a portfolio of assets (or just a single asset/instrument) is selected and a long or short position is taken with the expectation that its market value will increase (or decrease). For short-term traders the operation will last minutes; for long-term traders it will last months or years. But both groups base their strategy on an analysis that tries to determine the future direction of the asset/instrument price.
While directional trading is profitable when executed by a professional (much more profitable than market neutral portfolios, especially if leveraged instruments are used), it always poses much higher risks. Losses can be devastating in directional trading strategies if the wrong direction is chosen, or during high volatility periods when markets trigger risk thresholds, forcing traders to exit positions.
Investors (I mean professional investors with large amounts of money) usually value safety over profitability, therefore any investment strategy which reduces risk is usually more interesting than higher returns at higher risks. The industry is committed to model strategies that reduce risk over finding models with high profitability.
The industry is committed to model strategies that reduce risk over finding models with high profitability.
An additional point is that many directional strategies do not work with large capital, as they usually rely on the applied capital being small enough not to be noticed by the market. This is especially true for intraday trading.
Market Neutrality
Market Neutrality (also known as Dollar Neutrality) is a concept where the investor holds at the same time both long and short positions on the same underlying asset/instrument/commodity but with slightly different assets/instruments. The difference can be different options of the same underlying asset, different settlement periods in future contracts or two ETFs with the same underlying or even companies within the same industry (Ford/GM) — industries tend to move together —.
Pair trading
A well known and simple to understand market neutral portfolio is the one built with long and short positions on WTI and Brent. Both have the same underlying asset: oil. So whatever the price does in each instrument, both prices are highly correlated and cannot diverge much: if one rises, the other will rise too. The divergences found are related to short-term events (inventories, demand expectations, new pipelines) and any other aspect that might impact the fundamentals of supply/demand (macroeconomics, geopolitics, etc.). Those divergences tend to be corrected over time, and the difference in prices of highly correlated assets/instruments tends to show a strong mean reversion property because, in the end, it is the same underlying asset: an oil barrel.
The most basic strategy (which would be more accurately described as a pair trading strategy) implies buying WTI and selling Brent (or the opposite) when prices diverge. The strong mean reversion property will make both prices converge again so the operator is actually trading the difference. This means that the profitability is — theoretically — decoupled with the actual price of the oil, because we are both short and long at the same time, for the same amount of dollars on the same commodity (oil).
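A minimal sketch of this kind of pair trade can be driven by the z-score of the spread between the two correlated series. Everything here is illustrative, not from the article: the prices are synthetic, and the ±2 z-score entry thresholds, variable names, and signal convention are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic cointegrated pair: "Brent" tracks "WTI" plus mean-reverting noise
wti = 60 + np.cumsum(rng.normal(0, 0.5, 500))
brent = wti + 2 + rng.normal(0, 0.4, 500)

spread = brent - wti
z = (spread - spread.mean()) / spread.std()

# Hypothetical entry rules on the spread's z-score
signal = np.zeros_like(z)
signal[z > 2] = -1   # spread unusually wide: sell Brent, buy WTI
signal[z < -2] = 1   # spread unusually narrow: buy Brent, sell WTI

print("days with an open signal:", int((signal != 0).sum()))
```

Because we are long one leg and short the other for the same notional, the P&L of such a position depends on the spread converging, not on the direction of oil itself.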
Is market neutral risk-free?
A widely known example of how market neutral strategies are not necessarily risk-free investments is the LTCM case. LTCM operated hedged strategies and included Myron S. Scholes and Robert C. Merton (both awarded the Nobel Prize in 1997) on its board. You could not think of better travel companions. Their models outperformed the market by a significant margin, only to later suffer an organised liquidation. The case shows how theoretical models, no matter how good the people behind them are, can fail, and how there is no such thing as zero risk.
Figure 1. When maths fail: Long Term Capital Management Hedge Fund. Profitability per invested dollar. Chart data extracted from: https://en.wikipedia.org/wiki/Long-Term_Capital_Management
Despite this case, hedged strategies are much less risky than other investment strategies, especially when planned for moderate returns. The drawdowns can be smaller and the risk can be better controlled. As a trade-off, their implementation is more complex (often using derivatives) and some strategies require large amounts of money to create truly diversified neutral equity portfolios. This is why the approach is not widely known (or used) by individual investors: strategies such as passive investment achieve good profitability and are much easier to understand and implement.
A much more recent event showing that market neutral strategies involving correlated futures contracts are not risk-free is the case of the April 2020 CME WTI futures contract. In the middle of the worst pandemic in decades, the price collapsed into negative values (some people learnt that day that futures can have negative quotes). So it was as crazy as it sounded: the market would pay you for taking 1000 oil barrels (get the money and the barrels). You would become the distressed owner of 1000 oil barrels (that nobody wanted at that time, and likely without a single clue about how to sell them) to be delivered to a warehouse designated by a CME agent.
Anybody operating a pair with that contract in the wrong direction, with strong leverage and without risk control measures in place, would be in high distress (maybe broke). So there are risks; yes, there are.
Side note: COVID-19 WTI April Future Contract
Despite Terry Duffy of CME stating that the ‘futures market worked to perfection’ (whatever that means), the fact is that the role of market makers during exceptional market conditions is controversial, as it is common to find that they fulfil their contractual obligations by providing liquidity at absurd levels. For the average trader, the moral of the WTI COVID-19 case can be summarized in a couple of new rules that I personally incorporated into my trading plan, even though I do not trade crude oil: 1) never trade futures contracts with physical delivery that are close to their settlement dates, or, as a stronger and probably safer corollary, stay away from futures contracts with physical delivery; 2) stay away from contracts without enough liquidity (for example, monthly versus quarterly). These rules are normally observed by every small trader, but I never had them actually written down until the event took place.
While these examples might be scary, they are also very specific outliers. Market neutral strategies are by definition less risky than anything else because you are applying the concept of coverage and you are trading the marginal difference. Stop-loss thresholds are harder to hit because one position covers the other, but you still need to apply some common sense and risk management to avoid these “free-fall/trap” situations.
The CAPM model and the Beta
The Capital Asset Pricing Model can be considered the main pillar of market neutrality. The theory was originally proposed by William F. Sharpe and basically models asset returns as two components: one due to the general market (systematic) and another one (residual) due to the individual asset.
Figure 2. The CAPM model: market and residual components
What the CAPM states is that assets move with the market (any bullish day will tend to sweep prices up across all assets; similarly, an all-red day will drag prices down no matter what). What varies is how much the individual asset follows the market. That is modelled by the beta parameter, which represents how much the asset's returns vary with respect to the market returns (rm).
The theta value represents the residual component, and the model (this is controversial, though) says that it presents a strong mean reversion property.
The idea is that you build portfolios where the betas cancel each other out: you go long on a certain set of assets and short on another set. Here an inverse ETF is selected as the short reference and regular equities are used as long positions, but the alternatives for building market neutral portfolios are diverse and can involve much more complex portfolios (including partially neutral ones).
The trading activity takes place on the residual components, which are expected to be much more predictable due to their mean reversion property, while the overall market trend (or bias) is cancelled out by having the same amount of dollars in long positions as in short positions.
The model is also the basis of factor investing, where theta is estimated from the predictions of several factors.
Calculating the Beta
The first step in operating a market neutral strategy is to build a market neutral portfolio. The first portfolio I ever built is based on S&P 500 equities and an inverse S&P 500 ETF, my first objective being to obtain a flat P&L equity curve.
To calculate an asset's beta, the covariance between the asset under analysis and the market is divided by the market variance. The covariance represents the joint movement of the two series, while the variance represents how much the market moves around its mean.
In my example, daily equity data is downloaded from Polygon.io and incorporated into CSV files with daily OHLC data per asset. The daily returns are calculated for each asset and compared against the reference asset (the inverse S&P 500), obtaining each asset's beta. By doing this, a market neutral portfolio can be built in which we shall be at break-even most of the time.
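The covariance-over-variance formula above can be sketched directly with numpy. Synthetic daily returns stand in for the Polygon.io data (which needs an API key); the "true" beta of 1.2 and the variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

market = rng.normal(0, 0.01, 252)                  # one year of daily market returns
asset = 1.2 * market + rng.normal(0, 0.005, 252)   # asset built with a "true" beta of 1.2

# beta = cov(asset, market) / var(market)
# np.cov uses ddof=1 by default, so var must also use ddof=1 for consistency
beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
print(round(beta, 2))  # close to 1.2
```

The same computation applied over a rolling window of returns gives time-varying betas like the one in Figure 3.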
Figure 3. AAPL 60 days Beta vs S&P500 calculated using Polygon.io data
Figure 3 shows Apple's (NASDAQ: AAPL) beta, calculated from daily session returns versus an estimated value of the S&P 500. Calculating betas is the first step in building a market neutral portfolio, as it gives us a quantitative value of how closely the asset's price moves with the market.
Daily returns and equity curve
Figure 4. AAPL vs SP500, daily returns per 1000$ invested.
Figure 5. AAPL vs SP500 equity curve per each 1000$ invested.
Figures 4 and 5 show the daily returns and the equity curve for a neutral portfolio made of the S&P 500 index and its most capitalised asset (AAPL). Even with just one asset, it can be seen that mean reversion is stronger and that the equity curve gravitates toward zero. The beta selection and the fact that this is a really poor neutral portfolio (just one asset and the index) are the probable causes of the strong bias away from its zero average in 2016–2017.
While the example is not complete and cannot be used as a trading strategy, it shows the way a market neutral portfolio operates. The underlying idea is that the combined returns show a stronger mean reversion than assets show in directional trading.
The way betas are calculated can vary, as a certain period of time needs to be defined, but by no means can a beta be seen as a static figure (as some textbooks implicitly state).
Useful JavaScript Tips — Object Properties and Copying | Photo by Kouji Tsuru on Unsplash
Like any kind of apps, JavaScript apps also have to be written well.
Otherwise, we run into all kinds of issues later on.
In this article, we’ll look at some tips we should follow to write JavaScript code faster and better.
Comparing 2 Objects with Object.is()
An alternative way to compare 2 objects is with the Object.is method.
It’s almost the same as the === operator.
However, NaN is the same as itself.
For instance, we can write:
Object.is(a, b)
Then we can compare 2 variables a and b .
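A quick sketch of where the two comparisons differ, using NaN and signed zeros:

```javascript
console.log(Object.is(NaN, NaN)); // true
console.log(NaN === NaN);         // false

console.log(Object.is(0, -0));    // false
console.log(0 === -0);            // true
```

For every other pair of values, Object.is and === agree.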
Get the Prototype of an Object
We can get the prototype of an object with the Object.getPrototypeOf method.
For instance, we can get the prototype of an object as follows:
const animal = {
name: 'james',
age: 7
};

const dog = Object.create(animal);
We created a dog object with Object.create so that dog will inherit from animal .
So if we call Object.gerPrototypeOf with it:
const prot = Object.getPrototypeOf(dog);
Then:
prot === animal
would be true since animal is dog ‘s prototype.
Get the Non-inherited Symbols of an Object
The Object.getOwnPropertySymbols returns all the non-inherited symbol keys of an object.
If we have the follow object:
const name = Symbol('name')
const age = Symbol('age')
const dog = {
[name]: 'james',
[age]: 7
}
Then we can call getOwnPropertySymbols as follows:
const syms = Object.getOwnPropertySymbols(dog);
Then we get that syms is :
[Symbol(name), Symbol(age)]
Get Non-inherited String Keys of an Object
The Object.getOwnPropertyNames method lets us get an array of the string keys of an object.
The keys returned aren't inherited from any prototype.
For instance, we can write:
const dog = {
breed: 'poodle'
}

const keys = Object.getOwnPropertyNames(dog);
Then we get [“breed”] as the value of keys .
Get Key-Value Pairs of an Object
The Object.entries method returns an array of key-value pairs of an object.
For example, we can write:
const person = { name: 'james', age: 18 }
const pairs = Object.entries(person);
Then pairs would be:
[
[
"name",
"james"
],
[
"age",
18
]
]
where the first entry of the inner arrays is the key name, and the 2nd is the value.
Add a Single Property to an Object
We can call Object.defineProperty on an object to create a new property.
For instance, we can write:
const dog = {};
Object.defineProperty(dog, 'breed', {
value: 'poodle'
})
We added the breed property into dog using the defineProperty method.
Add Multiple Properties to an Object at Once
In addition to the Object.defineProperty , there’s also the Object.defineProperties method to add multiple properties to an object.
For instance, we can write:
const dog = {};
Object.defineProperties(dog, {
breed: {
value: 'poodle'
},
name: {
value: 'james'
},
})
Then dog.breed would be 'poodle' and dog.name would be 'james' (note that, by default, properties defined this way are non-enumerable and non-writable).
It’s a convenient way to add multiple properties to an object at once.
Creating an Object with Object.create
Object.create lets us create an object with a prototype.
For instance, we can write:
const animal = {
name: 'james',
age: 7
};

const dog = Object.create(animal);
Then dog has animal as its prototype.
It’ll inherit all the properties from animal .
So dog.name would be 'james' .
Photo by Victor Malyushev on Unsplash
Copying and Combining Objects with Object.assign
Object.assign lets us combine multiple objects into one or copy them.
To make a copy of an object, we can write:
const copy = Object.assign({}, original)
We make a copy of the original object and assigned it to the copy variable.
{} should be the first argument so that we won’t modify any existing objects and copy them into an empty object.
To combine multiple objects, we can write:
const merged = Object.assign({}, obj1, obj2)
We copy all the own enumerable properties from obj1 and obj2 into the empty object in the first argument.
So merged would have all the own properties from both.
Conclusion
We can compare and copy objects with static Object methods.
Also, we can define properties on an object with them.
Also, we can copy and merge objects with the Object.assign method.
It does a shallow copy: only the top level is copied, while nested objects are shared by reference.
Statistical analysis of a stock price | In 2008, for my Bachelor’s Degree in Theoretical Physics, I had to analyze stock prices in order to check the validity of a stock market model. In the following part of the article, I’m going to show you some of those analyses applied to Google stock price using Python.
The code can be found in my GitHub repository here: https://github.com/gianlucamalato/machinelearning/blob/master/Stock_market_analysis.ipynb
Download data
First, we need to get stock data. I’m going to use the yfinance library to download the price time series.
First, we have to install this library.
!pip install yfinance
Then we can import some useful libraries.
import pandas as pd
import numpy as np
import yfinance
import matplotlib.pyplot as plt
from scipy.stats import skew,kurtosis,norm,skewtest,kurtosistest
from statsmodels.graphics.tsaplots import plot_pacf,plot_acf
We can now get Google price data from 2015 to 2020.
name = 'GOOG'
ticker = yfinance.Ticker(name)
df = ticker.history(interval="1d",start="2015-03-15",end="2020-09-10") x = df['Close']
We are now going to work with the x object, which contains the daily close price of the stock.
Daily close price
Let’s plot the daily close price.
Image by author
As you can see, there’s quite a nice bullish trend. There are some drawdowns, but the drift seems to be quite positive.
Daily Returns
When you perform the statistical analysis of a stock, it’s very useful to work with its returns and not with the price itself.
The return from one day to another one is the percentage change of the closing price between the two days.
In Python, series objects have the pct_change method that allows us to calculate this quantity. The argument is the lag to use. In this article, I’ll use a 1-day lag.
returns = x.pct_change(1).dropna()
The result is this:
The first number is (x[1]-x[0])/x[0], the second one is (x[2]-x[1])/x[1] and so on.
This is the data we are going to analyze.
The probability distribution of returns
We are now going to calculate some insights about the probability distribution of returns.
Histogram and boxplot
Let’s make a first raw histogram of returns.
plt.hist(returns,bins="rice",label="Daily close price")
plt.legend()
plt.show()
Image by author
As you can see, it’s quite centered around zero and it seems symmetric. However, we can see that this histogram has tails that don’t seem to be neglectable.
Let’s make a boxplot to better understand the distribution of this dataset.
plt.boxplot(returns,labels=["Daily close price"])
plt.show()
As you can see, it’s full of outliers. The interquartile range (i.e. the height of the box) is quite narrow if compared with the distribution total range. This phenomenon is called fat tails and it’s very common in stock analysis.
Main observables
Let’s calculate some observables of our dataset.
The mean value is:
It’s quite similar to zero, but the fact that it’s positive explains the positive drift of the price time series.
Let’s now take a look at the standard deviation:
It’s more than an order of magnitude higher than the mean value. It’s clearly the effect of outliers. In stock price analysis, the standard deviation is a measure of the risk and such a high standard deviation is the reason why stocks are considered risky assets.
Let’s take a look at the median:
It’s not so different from the mean value, so we might think that the distribution is symmetrical.
Let’s check the skewness of the distribution to better assess the symmetry:
It’s positive, so we can assume that the distribution is not symmetrical (i.e. null skewness) and that the right tail as a not neglectable weight.
If we perform a test on the skewness, we find:
The very low p-value suggests that the skewness of the distribution can’t be neglected, so we can’t assume that it’s symmetrical.
Finally, we can measure the kurtosis (scipy normalizes the kurtosis so that it is 0 for a normal distribution)
It’s very different from zero, so the distribution is quite different from a normal one.
A kurtosis test gives us these results:
Again, a very small p-value lets us reject the null hypothesis that the kurtosis is the same as a normal distribution (which is 0).
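All the observables above can be reproduced with numpy and scipy. Since the actual Google download requires network access to yfinance, this sketch uses synthetic fat-tailed "returns" (a Student's t sample) as a stand-in; the seed and scale are arbitrary.

```python
import numpy as np
from scipy.stats import skew, kurtosis, skewtest, kurtosistest

rng = np.random.default_rng(0)
# Synthetic fat-tailed daily "returns" standing in for the real data
returns = rng.standard_t(df=3, size=1000) * 0.01

print("mean:    ", np.mean(returns))
print("std:     ", np.std(returns))
print("median:  ", np.median(returns))
print("skewness:", skew(returns))
print("kurtosis:", kurtosis(returns))  # Fisher definition: 0 for a normal distribution
print(skewtest(returns))
print(kurtosistest(returns))
```

On the real return series, the same calls give the values discussed in this section.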
Are returns normally distributed?
Although the statistically significant high values of kurtosis and skewness already tell us that the returns aren’t normally distributed, a Q-Q plot will give us graphically clear information.
t = np.linspace(0.01,0.99,1000)
q1 = np.quantile(returns,t)
q2 = norm.ppf(t,loc=np.mean(returns),scale=np.std(returns)) plt.plot(q1,q2)
plt.plot([min(q1),max(q1)],[min(q2),max(q2)])
plt.xlim((min(q1),max(q1)))
plt.ylim((min(q2),max(q2)))
plt.xlabel("Daily returns")
plt.ylabel("Normal distribution")
plt.show()
Image by author
The straight line is what we expect for a normal distribution, while the blue line is what we get from our data. It’s clear that the quantiles of our dataset aren’t comparable with the quantiles of a normal distribution with the same mean and standard deviation.
So, returns aren’t normally distributed and this makes models like Geometric Brownian Motion (which assumes normal returns) only an approximation of the reality.
Volatility clustering
Once we have ensured the non-normality of the returns probability distribution, let’s take a look at the raw time series.
plt.plot(returns)
plt.xlabel("Time")
plt.ylabel("Daily returns") plt.show()
Image by author
It’s clear that there are time periods with high volatility and other periods with low volatility. This phenomenon is called volatility clustering and it’s very common in the stock market. Practically speaking, the standard deviation changes during time, making the time series non-stationary.
A closer look at the 20-days rolling standard deviation will make everything clear.
plt.plot(returns.rolling(20).std())
plt.xlabel("Time")
plt.ylabel("20-days rolling standard deviation")
plt.show()
Image by author
It’s clear that it’s not a constant value, but it has spikes and oscillations. The fat tails of the distribution may be caused by these volatility spikes, which create non-neglectable outliers.
Autocorrelation function
Finally, we can plot the partial autocorrelation function, which gives us some ideas about the autocorrelation of the time series and the possibility that this autocorrelation can be used for prediction purposes in some ARIMA model.
plot_pacf(returns,lags=20)
Image by author
It’s clear from this plot that there’s no high correlation among all lags. There are some lags between 5 and 10 that show some correlation, but it’s quite small compared with 1.
So we can easily understand that using some ARIMA models would be quite useless.
Conclusions
In this article, I’ve shown a simple statistical analysis of Google stock price. The returns aren’t normally distributed, since they are skewed and have fat tails. The time series is not stationary because the standard deviation changes over time. The autocorrelation of returns time series is very low at almost any lag, making ARIMA models useless.
So, predicting stock prices using statistics and machine learning is a great challenge. Some results have been achieved using LSTM models, but we are very far from clearly modeling a stock market in a money-making way. | https://towardsdatascience.com/statistical-analysis-of-a-stock-price-e6d6f84ac2cd | ['Gianluca Malato'] | 2020-09-15 14:39:33.753000+00:00 | ['Trading', 'Stock Market', 'Data Science', 'Statistics', 'Data Visualization'] |
Defining the future of Machine Learning with Google and TensorFlow Lite | There are more than 20 million developers around the world[1]. Developer experience and adoption plays a pivotal role in the success or failure of any software. Apple when it opened their platform for developers in 2008, it changed the entire mobile industry, now there are more than 2 million apps in the appstore[2].
Machine learning is a difficult and complex discipline. Implementing a machine learning project involves significant effort and a deep understanding of the underlying principles. Thanks to Google's TensorFlow, this is far less daunting for a developer. TensorFlow makes it easy to acquire data, train models, serve them, and build machine learning solutions. Currently, TensorFlow is the most popular framework for developing machine learning solutions[3].
StackOverflow trends
What is TensorFlow Lite?
TensorFlow is an open source numerical computation library created by the Google Brain team to scale machine learning for large deployments. This framework can be used to run machine learning code on a large cluster of machines easily, thus freeing developers and researchers to focus on solving machine learning problems rather than on distributed systems.
TensorFlow Lite is the lightweight version of TensorFlow, built specifically for mobile devices. It abstracts away the optimisations and performance enhancements essential for running a machine learning model on a mobile device, where resources are constrained and vary significantly from user to user. With more than 2.1 billion mobile devices worldwide[4], there is a very strong need to create an engaging experience.
Innovation @ YML
At YML we have an open innovation culture and we constantly explore machine learning in various areas such as age and gender classification, text region and face detection, hand gesture recognition, and more. When Google approached YML to partner on their TensorFlow Lite developer experience, we were a natural fit.
Initially we brainstormed and came up with a collection of ideas. Working with the Google team, we narrowed them down and decided on 4 examples. Our goal was to simplify the problem broadly for 3 kinds of developers: Android developers, iOS developers and ML engineers. The breadth of knowledge required to conceptualize and implement such a solution included machine learning, frontend, iOS and Android. Not only was proficiency in machine learning essential, but the team also needed strong cross-functional expertise in all these technologies to execute well.
Our work specifically involves Gesture recognition, Image classification, Object detection and Speech recognition.
Once we started working together, we quickly adapted to collaborating on and creating multiple iterations of the solution. Agility and iteration were key to creating a long-lasting experience and delivering on the promise of developer experience.
www.tensorflow.org/lite
Be sure to check out TensorFlow Lite and learn the technology that will shape the next generation of machine learning products.
The story is far from over; there is still a lot more to be achieved and a lot more work to be done. Follow our publication to get more updates.
References & Links | https://medium.com/ymedialabs-innovation/ymedialabs-partnered-google-tensorflow-lite-machine-learning-228502c8d11a | ['Darshan Sonde'] | 2019-06-07 11:34:05.297000+00:00 | ['Machine Learning', 'Google', 'TensorFlow', 'Artificial Intelligence'] |
Part 06: November 14, 1966 (Mao Backs the Shanghai Rebels) to January 3, 1967 | 14/11/66 Mao convenes the Politburo Standing Committee to ratify Zhang Chunqiao’s decisively pro-rebel handling of the Anting rail incident.
14/11/66 Mao reminds the Politburo that the PRC constitution guarantees Chinese citizens’ right to organise: the Workers’ Rebel HQ is legal.
14/11/66 Tao Zhu makes a self-criticism for having opposed Zhang Chunqiao and the rest of the CCRG’s response to the Anting rail incident.
15/11/66 In Suzhou, Zhang Chunqiao signs an agreement with the Shanghai Workers’ Rebel HQ, agreeing to (a second set of) five demands.
15/11/66 Lin Biao comes under attack in a poster written by “Yilin Dixi” (a pseudonym), a Beijing Agricultural University middle schooler.
15/11/66 Yilin Dixi is disappointed that Lin’s command of theory is so poor, taking him to task for his 18/9/66 speech which belittled Marx.
15/11/66 Kang Sheng tells Xinjiang Red Guards that seizing US dollars from bourgeois households is “class struggle in the economic realm”.
15/11/66 The Central Committee voids all the damaging political labels applied to Red Guards, and suggests Party organs destroy their files.
15/11/66 The anti-Confucius liaison office in Qufu complains to the State Council for making the Sage’s “reactionary” tomb a protected site.
15/11/66 Outside Confucius’ family home, the Red Guards stage a mass rally and smash the State Council’s “National Cultural Relic” plaque.
16/11/66 Zhang Chunqiao reports to Mao on his handling of the Anting rail incident. Mao approves: “you can act first and report afterwards”.
16/11/66 On Mao’s instruction, Nie Yuanzi leads a delegation of Beijing students and teachers to Shanghai to support the rebels there.
16/11/66 Zhou Enlai addresses a mass meeting about the vast number of people coming to Beijing. He estimates 200 000 people enter every day.
17/11/66 Mao receives a delegation from Guinea. He congratulates Public Works Minister Ismaël Touré on their struggle against imperialism.
17/11/66 A conference of local leaders from all across China begins in Beijing, to discuss how to keep production going through the GPCR.
18/11/66 Copies of the 15/11/66 “Yilin Dixi” open letter to Lin Biao, which criticised Lin’s grasp of Marxism, are produced and distributed.
18/11/66 The Beijing Party Committee bans factories and schools from setting up their own prisons — violators will be “severely punished”.
18/11/66 Students at Beijing 6th Middle School write to the CCRG. Murders have been committed at their school, but the police will not act.
19/11/66 Unannounced, Chen Boda leads a CCRG delegation to inspect Beijing 6th Middle School, just over the road from the Zhongnanhai.
19/11/66 Chen Boda heads straight to the music classroom, which has been converted into a prison and “interrogation” room since mid-August.
19/11/66 Two adults and a student have perished in the gaol. “Long live the red terror” is painted on the wall in blood (but whose blood?).
19/11/66 Chen Boda strongly admonishes the young Red Guards in charge and closes down the classroom prison, freeing its remaining inmates.
20/11/66 The CCP transmits, to all levels, Beijing’s instruction against Red Guards running their own prisons. Mao wants captives released.
21/11/66 At the Ministry of Education, a big-character poster goes up criticising the minister for being “confused” by the May 16 Circular.
21/11/66 After weeks of being occupied by Nie Yuanzi’s Peking University student opponents, her mouthpiece Xin Beida resumes publication.
22/11/66 Mao rejects the cautious views coming out of a conference of local leaders about how to stop the GPCR ruining economic production.
23/11/66 A Red Guard faction from the Beijing Forestry Institute puts up posters carrying the unusual slogan: “Long Live Liu Shaoqi!”.
24/11/66 At the Aeronautics Institute, ex-Red Guards opposed to the Red Flag faction put up a poster asking pointed questions of the CCRG.
24/11/66 Zhou Enlai asks Mao for approval to intervene in a Red Guard “investigation” into North West China Party Chief Liu Lantao’s past.
25/11/66 Before sunrise at offices of the People’s Daily in Beijing, students emerge having spent the night talking politics with Chen Boda.
25/11/66 The last dregs of the Picket Corps storm into and smash the Red Guard 3rd HQ’s base near Tiananmen, injuring some of those inside.
25/11/66 The Qufu Party Committee start recruiting a 300-strong “Peasant Grave Digging Team”, preparing to help Red Guards dig up Confucius.
25/11/66 On Shanghai’s Cultural Plaza, 15 000 Red Guards hold a rally to “open fire on” the local “capitalist reactionary” Party Committee.
25/11/66 On a freezing cold Tiananmen Square, another Red Guard rally begins. Arranged at short notice, it is the eighth one since 18/8/66.
25/11/66 For four hours, Mao waves to a parade of “young revolutionary fighters”. Red Guards recite Mao quotes and sing The East is Red.
26/11/66 Beijing’s Red Guard rally resumes after yesterday’s parade. Students fill Tiananmen square and line the avenue to the airport.
26/11/66 Hundreds of different contingents of "Long March" Red Guards have walked to Beijing from all over China: from Jiangxi to Mongolia.
26/11/66 Mao is driven from Tiananmen Square to the airport, waving to crowds on the way. He reviews yet more Red Guards at the airfield.
26/11/66 As the rally ends, Red Guards try to cram through the airfield’s one and only exit. Many are trampled to death in the ice-cold mud.
26/11/66 The Party decides this should be the last rally, at least until spring. Mao has reviewed about 12 million Red Guards since August.
26/11/66 Shanghai workers opposed to the Workers’ Rebel HQ form the “Red Defenders Battalion” and start recruiting large numbers of members.
26/11/66 The Red Defenders Battalion call on Zhang Chunqiao to self-criticise. They reject the 5 “poisonous” demands he signed on 15/11/66.
27/11/66 Li Hongshan, a student at the Beijing Forestry Institute, puts up a poster criticising the Central Cultural Revolution Group.
27/11/66 At Peking University, members of a High School Red Flag faction criticise Lin Biao and the CCRG’s “new bourgeois reactionary line”.
27/11/66 The Red Flag members and sympathisers from other groups set up a “United Action” committee to coordinate resistance across Beijing.
27/11/66 Officials from the CCRG begin a set of meetings with the Red Guard 3rd HQ, to decide how to stamp out anti-CCRG student resistance.
27/11/66 Guan Feng completes a report for Jiang Qing, accusing Tao Zhu of following the Liu-Deng line since May. Jiang forwards it to Mao.
28/11/66 While reading out a standard list of top leaders considered Mao’s “comrades-in-arms”, Jiang Qing conspicuously misses out Tao Zhu.
28/11/66 The Central Military Commission announces Jiang Qing’s appointment as its cultural adviser, technically an active service role.
28/11/66 Tan Houlan leads a 20 000-strong rally outside the Confucius Temple in Qufu. Red Guards drag Confucius’ broken statue through town.
29/11/66 The August 1 Red Guards at the Beijing Aeronautics Institute publish a second big-character poster pointedly scrutinising the CCRG.
29/11/66 In Qufu, Red Guards break commemorative arches in Confucius’ cemetery. The local “Grave Digging Team” get to work with pick axes.
29/11/66 The anti-Confucius Red Guards write to Mao. In a “thrilling development”, they have removed or smashed various reactionary symbols.
30/11/66 On Portuguese-administered Macau, the local CCP newspaper calls for an escalation in protests against the “ruthless imperialists”.
30/11/66 Outside the colonial Governor’s Palace, 60 Red Guards join and politicise a local demonstration about planning regulations.
30/11/66 The “Capital Red Guard” paper prints an essay from an anonymous writer at the Geology Institute, satirising the majority faction.
30/11/66 The essay mocks those who only rebel to a bare minimum, protect leaders and "lick ass" — they just want a nice job after the GPCR.
30/11/66 In Qufu, Red Guards dig a 3-metre trench on the site of Confucius’ grave, but have failed so far to actually locate the corpse.
30/11/66 The anti-Confucius Red Guards dig up three of Confucius’ more recent relatives. They strip them naked and hang them from a tree.
30/11/66 Student Li Hongshan puts up a banner at the Forestry Institute, reading: “Kick aside the CCRG; let’s make revolution ourselves”.
30/11/66 Zhou Enlai forbids students in Jilin to “investigate” local leaders at mass meetings. Mao knows the history and there’s no problem.
30/11/66 Rebel Red Guards take over the offices of Liberation Daily — the Shanghai Party Committee’s official newspaper — and close it down.
30/11/66 The Shanghai Party Committee organises students to drive out the occupiers, but Wang Hongwen’s rebel workers arrive and repel them.
1/12/66 Red Guards resort to using dynamite to open Confucius’ tomb. But when they peer into the crater, nothing discernibly human is left.
1/12/66 The State Council issues a circular on Red Guard revolutionary networking: too many are doing it and it’s straining infrastructure.
1/12/66 Student Li Hongshan, who called for young people to sideline the CCRG, takes part in a public debate at the Forestry Institute.
1/12/66 Li Hongshan says it’s wrong to try to apply Mao’s writings from the 1920s to the 1960s, and for the unelected CCRG to lead the GPCR.
2/12/66 Allies of Li Hongshan mount a poster on Tiananmen, accusing the CCRG of continuing the bourgeois reactionary line of the work teams.
2/12/66 The Aeronautics Institute’s August 1 Red Guards publish a third poster “inquiring” into the CCRG’s arbitrary persecution campaigns.
2/12/66 In Macau, mainland radio broadcasts call colonial authorities “fascist” and accuse them of premeditated violence against protesters.
2/12/66 Mao reviews yesterday’s State Council circular on the issue of Red Guards’ revolutionary networking: Zhou Enlai should implement it.
3/12/66 In Macau, Red Guards demand to speak to the Governor. When this is denied, they storm the building. Outside, 2000 protesters gather.
3/12/66 Macau’s colonial authorities deploy riot police and tear gas. Chinese protesters set to work destroying symbols of Portuguese rule.
3/12/66 Portuguese colonial police shoot dead two protesters but still can’t quell Macau’s revolutionary rioting. Martial law is declared.
4/12/66 Macau remains under martial law. Tanks drive student protesters off the streets, with 8 killed and hundreds injured since yesterday.
4/12/66 The Red Guard 3rd HQ react to recent anti-CCRG dissidence with posters urging people to open fire on the bourgeois reactionary line.
4/12/66 Driving around Beijing with megaphones and loudspeakers, the Red Guard 3rd HQ pledge to “defend to the death” the CCRG and Lin Biao.
4/12/66 Lin Biao chairs an enlarged Politburo meeting to deal with a report into the GPCR’s impact on China’s industry and communications.
4/12/66 Lin Biao makes a speech to the Politburo on "grasping revolution; promoting production" and whether the GPCR is disrupting industry.
5/12/66 Founded secretly on 27/11/66, the United Action committee of Red Guards reveals its existence, swearing to defend the Party and Mao.
5/12/66 The first campaign of United Action is to leaflet in support of the four Marshals’ remarks on 13/11/66, which criticised extremism.
5/12/66 United Action declare that since the CCRG has made them into outlaws, they are going to rebel. They call the CCRG reactionary.
5/12/66 Student Li Hongshan criticises Chen Boda at a debate. Yi Zhenya attacks the Red Guard 3rd HQ’s “reactionary” obedience of the CCRG.
5/12/66 Wang Li tells the Politburo that Tao Zhu’s focus on “economic production” is a red herring; simply an excuse to constrain the GPCR.
6/12/66 Lin Biao closes the enlarged Politburo meeting. The economic planners' proposal to rein in the GPCR in factories is totally rejected.
6/12/66 People are gawping at Confucius’ exhumed relatives, hanging naked from a tree since 30/11/66, so Red Guards burn them in a ditch.
7/12/66 At a meeting of the China-Japan Youth Friendship Association, United Action and Red Guard 3rd HQ members fight, verbally, for hours.
8/12/66 As Red Guards meet Jiang Qing, she hints at who to go after next: mentioning Chen Yun and Bo Yibo in the same breath as Liu Shaoqi.
9/12/66 The Politburo adopts “Ten Points on Industry”, letting workers take part in the GPCR if it doesn’t get in the way of their day jobs.
10/12/66 At Peking University, big-character posters appear which criticise the Central Cultural Revolution Group — “Bombard the CCRG”.
10/12/66 Mainland authorities demand that Macau’s Governor accepts Chinese protesters’ demands and sacks officials responsible for violence.
11/12/66 Macau’s Governor meets local Chinese leaders, and agrees for the colony to compensate victims of the police (by borrowing $2m).
11/12/66 At a high school attached to Beijing Agricultural University, a poster appears which criticises Lin Biao, signed by “Yilin Dixi”.
12/12/66 Macau’s Governor accepts demands from the Guangdong Government, including banning the KMT in Macau and extraditing 7 KMT “spies”.
12/12/66 Incited by CCRG members, particularly Jiang Qing, rebel students invade the house of purged Mayor Peng Zhen and force him out.
12/12/66 In the Workers’ Stadium, Acting Beijing Mayor Wu De leads 120 000 Red Guards in “struggle” against his predecessor, Peng Zhen.
12/12/66 Wu De declares Peng Zhen's clique "the scum of the Party" and some members of it, including Wu Han and Wan Li, suffer beside Peng.
13/12/66 In the small hours of the morning, crowds gather at the State Council to salute the national flag. Jiang Qing addresses them.
13/12/66 Jiang Qing applauds the crowd’s “spirit of insurgency” and pledges her support to destroy “the small clique of capitalist roaders”.
13/12/66 The Red Flag journal accuses anti-CCRG dissidents of being “a small group of capitalist roaders” attacking the “genuine left”.
13/12/66 Red Flag says targeting the spearhead of criticism is about right versus wrong. Wang Li extols dictatorship of the proletariat.
13/12/66 The US Air Force admits that one of its jets crashed in China on 20/9/65 and that its pilot, Philip E. Smith, is still being held.
14/12/66 Jiang Qing, Kang Sheng and Chen Boda meet with leaders from the Red Guard 3rd HQ and the Aeronautics Red Flag group.
14/12/66 Kang Sheng tells the rebel students that Li Hongshan, who criticised the unelected CCRG, “is a petty counter-revolutionary leader”.
14/12/66 At the Forestry Institute, factional opponents of anti-CCRG student Li Hongshan ransack his base and hold him for "interrogation".
15/12/66 The Party Centre issues “Ten Points on Villages”, legalising rural Red Guard groups (but they still need to do their farming work).
16/12/66 Lin Biao writes the foreword to a reprint of “Quotations from the Chairman” (the Little Red Book) in which he calls Mao a “genius”.
16/12/66 Chen Boda cuts (purged propaganda chief) Lu Dingyi’s stipend. He suggests Lu’s house arrest end and he be handed to the Red Guards.
16/12/66 The Forestry Institute put on a “meeting to criticise and struggle against Li Hongshan and his counter-revolutionary activities”.
16/12/66 Chen Boda attends a mass meeting of Beijing middle schoolers, discussing the Picket Corps’ opposition to “real” revolutionaries.
16/12/66 The Red Guard “patrols” have been taken over by bad elements and will be disbanded, Chen Boda’s “oath-taking meeting” announces.
16/12/66 As the oath-taking meeting ends, participants ransack Western District Picket Corps and United Action committee bases in Beijing.
16/12/66 United Action respond to the attacks on their bases with their own invasion of the Public Security Ministry: long-time adversaries.
16/12/66 Jiang Qing addresses high school Red Guards about the Picket Corps and other patrol groups: a handful of reactionary “little kids”.
16/12/66 Zhou Enlai is at the meeting, alongside his subordinates Wang Renzhong and Zhou Rongxin. Without warning, Jiang attacks the pair.
16/12/66 Jiang forces Wang and Zhou to stand before her and bow their heads, saying they enabled the patrols to act like “military police”.
16/12/66 The patrols are guilty of using weapons and murdering, says Jiang. She hints at the fact that Zhou Enlai provided material support.
16/12/66 Kang Sheng speaks next. He brands anti-CCRG student Li Hongshan a counter-revolutionary, much to the delight of his young audience.
16/12/66 Zhou Enlai speaks, saying he agrees with Jiang Qing’s words but suggests leniency in dealing with the (very young) patrol members.
17/12/66 Public Security Ministry officers carry out an early-morning raid and arrest an Aeronautics Institute anti-CCRG dissident student.
17/12/66 Forestry Institute rebels put anti-CCRG Li Hongshan through struggle, then turn him over to the Public Security Ministry’s custody.
17/12/66 At a meeting of Red Guards visiting Beijing from across China, Chen Boda says “if the enemy does not surrender, make him perish”.
18/12/66 Kuai Dafu, the Qinghua student radical favoured by the CCRG, goes to the Zhongnanhai for a private meeting with Zhang Chunqiao.
18/12/66 Zhang Chunqiao tells Kuai to go after two certain reactionaries at the heart of the CCP — and to make sure he goes “all the way”.
18/12/66 Tan Lifu, blamed for the “bloodline theory” that class is mainly inherited, is arrested for spreading this “reactionary” idea.
18/12/66 Red Guards update the CCRG and Public Security Minister Xie Fuzhi on their collaborative arrests of anti-CCRG student dissidents.
18/12/66 Jiang Qing tells Red Guards to send their captives to the Public Security Ministry (not the untrustworthy Public Security Bureau!).
18/12/66 Jiang says Wang Guangmei should go to Qinghua University to be criticised. So should Bo Yibo, for attacking Kuai Dafu on 3/7/66.
18/12/66 CCRG member Guan Feng tells the rebel students that the Picket Corps’ elite sponsors must be dragged out and shot “without mercy”.
18/12/66 On Jiang Qing’s orders, Beijing Red Guard groups have been scouring Chengdu all month for long-purged PLA Marshal Peng Dehuai.
18/12/66 Geology Institute East is Red members locate Peng. They tell him Geology Vice Minister He Changgong (a friend) has been arrested.
19/12/66 Peking University Red Guards apprehend a student who criticised the CCRG. They turn him over to the Public Security Ministry.
19/12/66 Following Zhang Chunqiao’s instruction to unify the rebel students at Qinghua, Kuai Dafu founds the “Qinghua Jinggangshan” group.
20/12/66 Lin Jie, who works for the CCRG, says that those who criticise it are class enemies and US spies; to be “suppressed without mercy”.
20/12/66 Qi Benyu writes a letter to rebel student groups. He says it’s permitted to name Tao Zhu in critical posters (unbeknownst to Tao).
20/12/66 Kuai Dafu, chemistry sophomore and leader of the Qinghua Jinggangshan, sets up a team to compile “The Selected Works of Kuai Dafu”.
21/12/66 Zhou Rongxin and Wang Renzhong, top officials working for Zhou Enlai, are arrested for having provided support to the Picket Corps.
21/12/66 Provisions to let Red Guards travel around the country for free as “revolutionary networking” are, at least in theory, abolished.
22/12/66 At Qinghua University, the Jinggangshan group calls for Wang Guangmei, Wang Renzhong and Bo Yibo to return to campus for criticism.
22/12/66 Red Guards from the Beijing Geology East is Red group return to Marshal Peng Dehuai’s Chengdu house, after finding it on 18/12/66.
23/12/66 Xu Ming, the purged Vice-Secretary of Zhou Enlai’s State Council and mother of the Red Guard Picket Corps’ leader, kills herself.
24/12/66 Student Tang Wei resigns from Qinghua’s Jinggangshan group, criticising its leaders for condemning Liu Shaoqi without due process.
24/12/66 At the Beijing Workers’ Stadium, purged General Luo Ruiqing is dragged on stage to be struggled against before 10 000 Red Guards.
24/12/66 Lin Biao has arranged for the rally to seem spontaneous (“the cadres must not go on stage”). Luo’s associates are also persecuted.
24/12/66 East is Red members in Chengdu try to contact their Beijing HQ for instructions on what to do with Peng Dehuai, but get no reply.
24/12/66 Wang Dabin, leader of East is Red’s Chongqing branch, arrives in Chengdu. He is furious that nobody has abducted Peng Dehuai yet.
24/12/66 East is Red students in Chengdu are “rightists” and “traitors”, says Wang Dabin. They start discussing how to arrest Marshal Peng.
25/12/66 While the East is Red students are still plotting Peng Dehuai’s capture, Aeronautics Red Flag seize the Marshal during the night.
25/12/66 Wang Dabin leads the East is Red students to arrest Marshal Peng, but he’s already been taken. Instead, they ransack his house.
25/12/66 At 4AM, Marshal Peng’s bodyguards arrive at his home to find him gone. Aeronautics Red Flag are holding him at the railway station.
25/12/66 Around 6000 Qinghua students (mostly members of Kuai Dafu’s Jinggangshan group) march to Tiananmen Square for a “great action”.
25/12/66 The students pledge to “thoroughly crush” the reactionary line of Liu Shaoqi and Deng Xiaoping. They put up posters and sing songs.
25/12/66 In the freezing wind, the rally “criticises in absentia” Liu Shaoqi’s wife Wang Guangmei — bête noire of rally organiser Kuai Dafu.
25/12/66 The rally breaks into marching columns. People chant “Liu Shaoqi must confess his crimes!” and “Battle Liu-Deng to the bloody end”.
25/12/66 Zhou Enlai cables Marshal Peng Dehuai’s captors in Chengdu, ordering them to involve the PLA in his transport and not to abuse him.
26/12/66 Zhou Enlai tells the Qinghua Jinggangshan group, led by Kuai Dafu, that they ought to stop using the slogan “overthrow Liu Shaoqi”.
26/12/66 Zhou also tells Jinggangshan that Mao has not (yet) approved their request to bring Liu’s wife Wang Guangmei back to campus.
26/12/66 A Shanghai workers’ group meet with Jiang Qing, Chen Boda, Zhang Chunqiao and other CCRG members to talk about their contracts.
26/12/66 Workers complain that the contract system is too loose and allows enterprises to exploit them with the Labour Ministry’s blessing.
26/12/66 Moved, Jiang Qing tells workers to wipe away the “capitalist” contract system. “I am getting so angry!” she shouts through tears.
26/12/66 Trying to reduce tensions, United Action hold a “Rally to Establish Public-Spiritedness” and invite the CCRG and Red Guard 3rd HQ.
26/12/66 No CCRG or 3rd HQ members attend United Action’s rally, and so their attempt to mollify the left with self-criticism seems futile.
26/12/66 Film of autumn’s Red Guard rallies is played. United Action cheer when Mao or Zhou Enlai appear. They boo Jiang Qing and the CCRG.
26/12/66 Angry at the CCRG’s absence, members of the crowd storm the stage. They lead chants to overthrow “certain members of” the CCRG.
26/12/66 After the rally, United Action members exit the hall and march to protest outside a rebel faction’s base at the Forestry Institute.
26/12/66 The United Action breakaway group chant slogans attacking “arrogant” Jiang Qing, and calling for Chen Boda to be given a “scare”.
26/12/66 Mao is 73 today. He invites top leftists to dinner, although not Kang Sheng or Lin Biao (it is supposed to be a party, after all).
26/12/66 Mao toasts “to the unfolding of a nationwide all-round civil war!” and celebrates the rebels “rising up in Party and state organs”.
27/12/66 In a moment of privacy Jing Xizhen, Peng Dehuai’s aide, tells him of Zhou Enlai’s 25/12/66 instructions. The Marshal is stunned.
27/12/66 “Did he really call me comrade?” asks Peng. He did: Zhou is trying to protect him. Peng covers his face and trembles at the news.
27/12/66 In Chengdu, Aeronautics Red Flag (who are holding Marshal Peng Dehuai in custody) meet with Wang Dabin's East is Red group.
27/12/66 Red Flag and East is Red agree to share custody of Marshal Peng, overseen by the local PLA command (as per Zhou Enlai’s orders).
27/12/66 Both factions accompany Marshal Peng into a train. Zhou Enlai stipulated not to use aircraft (as rebels control Beijing airport).
28/12/66 In Beijing, Zhang Chunqiao receives a phone call telling him that his home in Shanghai has been ransacked by the “Scarlet Guards”.
28/12/66 The Scarlet Guards, a workers’ group opposed to Wang Hongwen’s Rebel Workers’ HQ, have not in fact done anything to Zhang’s home.
28/12/66 The PLA detonates a 300 kt dirty atomic bomb (Hiroshima was 15 kt). China's fifth nuclear test is another step towards a hydrogen bomb.
28/12/66 Members of United Action invade the Ministry of Public Security in Beijing, demanding that their arrested comrades be released.
28/12/66 Zhang Chunqiao phones his wife. He says “the peach of Shanghai is now ripe” — Mayor Cao Diqiu cannot be allowed to pick it first.
29/12/66 Zhang Chunqiao phones Wang Hongwen, leader of the Shanghai Rebel Workers’ HQ. It’s time to “wage a blow-for-blow struggle”.
29/12/66 Mao convenes Politburo and CCRG to hold a “joint meeting to offer Wang Renzhong some views”, under Zhou Enlai’s chairmanship.
29/12/66 Wang is a devoted follower of Mao, but Red Guards have discovered some of his slightly over-familiar poems about the Chairman and are angry.
29/12/66 Peng Dehuai arrives in Beijing. Student leader Wang Dabin leads him through the station in chains, claiming to represent the CCRG.
29/12/66 After dark, 100 000 members of the Rebel Workers’ HQ assemble on Kanping Road in Shanghai. They are met by 20 000 Scarlet Guards.
29/12/66 Mao tells Tao Zhu his problems aren’t so bad (Jiang Qing is just narrow-minded and intolerant) but he should soften his work style.
30/12/66 Two hours past midnight in Shanghai, Wang Hongwen leads 100 000 Rebel Workers HQ men into battle against the 20 000 Scarlet Guards.
30/12/66 After four hours of workers fighting with clubs and sticks on Kanping Road, 91 people are wounded and the Scarlet Guards retreat.
30/12/66 As the Kanping Road chaos unfolds, a visitor from Wuhan called Jiang Zemin watches, disguised in a floppy hat and old army tunic.
30/12/66 The “Wuhan to Guangzhou Rebel Troop for Grabbing Wang Renzhong” threaten to go on hunger strike unless Tao Zhu speaks to them.
30/12/66 Tao agrees to see the rebel Red Guards, but the second he does so they harangue him with accusations. A planned arrest is averted.
30/12/66 The State Council asks Aeronautics Red Flag to stop its “Red Sea” campaign to paint all of Beijing’s doors red — it’s confusing.
30/12/66 Jiang Qing visits He Long’s son at Qinghua University. She says she can prove his father’s crimes: “You tell him, I can touch him”.
30/12/66 Jiang Qing, Yao Wenyuan and other CCRG members praise Qinghua Jinggangshan. Their 25/12/66 march showed revolutionary leadership.
31/12/66 Rebels from the Propaganda Department write a big-character poster: “Tao Zhu practises the Liu-Deng Bourgeois Reactionary Line”.
31/12/66 The Shanghai Rebel Workers’ HQ issues an (illegal) arrest warrant for the defeated Scarlet Guard workers. They capture 240 members.
31/12/66 Thousands of Shanghai Scarlet Guards set off to lodge complaints in Beijing but are intercepted by Rebel Workers in nearby Kunshan.
31/12/66 For the second time, members of United Action make a foray into the Ministry of Public Security, trying to get detainees released.
31/12/66 At a PLA rally, Marshal Ye Jianying gives a self-criticism for belittling excessively fanatical Red Guards at a 13/11/66 rally.
1/1/67 At 3AM, Zhou Enlai tells Shanghai Party Chief Chen Pixian (convalescing after cancer surgery) to get back to work and restore order.
1/1/67 Two hours after Zhou’s phone call, Chen Pixian convenes leaders (including Wang Hongwen) to deal with the Scarlet Guards in Kunshan.
1/1/67 At Chen Pixian’s meeting, some rebels blame the city’s chaos on him paying back-dated wage demands. Others demand bigger pay-outs.
1/1/67 Wang Hongwen, leader of the largest mass organisation in Shanghai, sits in the corner and nods off. He thinks the meeting is a sham.
1/1/67 Shanghai Mayor Cao Diqiu approves measures to placate rebels, such as moving long-term “temporary” workers onto proper contracts.
1/1/67 A pair of staffers from the Zhongnanhai write slogans on President Liu Shaoqi’s house: “Down with China’s Khrushchev: Liu Shaoqi!”.
1/1/67 In its New Year editorial, the People’s Daily calls upon readers to “Carry Out the Great Proletarian Cultural Revolution to the End”.
1/1/67 The People’s Daily predicts a year of “nationwide all-round class struggle” against capitalist roaders; the wording finalised by Mao.
1/1/67 The United Action Committee pledges to “crush the left-deviationist line” and protect the Party’s “loyal and courageous cadres”.
1/1/67 Marshal Peng Dehuai writes to Mao and apologises for letting him down with his work in Sichuan. He gives Mao his “last salute”.
1/1/67 Liao Zhengguo, commander of the Shanghai PLA garrison, orders the city’s militia to hand in their guns for “inspection and repair”.
2/1/67 Yao Wenyuan writes in The People’s Daily: “Criticising the Reactionary Two-Faced Zhou Yang” (Tao Zhu’s Propaganda Department deputy).
2/1/67 Wang Hongwen, leader of the Shanghai Rebel Workers’ HQ, flies to Beijing to discuss developments in Shanghai with Zhang Chunqiao.
2/1/67 Zhang tells Wang Hongwen not to help Shanghai’s leaders end the chaos — “that would make them happy!” — but to attack them over it.
2/1/67 CCRG members Zhang Chunqiao and Yao Wenyuan are promoted to the Party’s informal but powerful Central Caucus, chaired by Zhou Enlai.
2/1/67 Having resigned from Kuai Dafu’s Qinghua Jinggangshan group, Tang Wei sets up the “Mao Zedong Thought Column” in opposition to it.
3/1/67 At Qinghua University, Tang Wei’s Mao Zedong Thought Column holds a massive criticism of fellow rebel Kuai Dafu’s Jinggangshan group.
3/1/67 Shanghai Scarlet Guards, protesting in Kunshan after having failed to reach Beijing with their petition, cease disrupting transport.
3/1/67 Liu Shaoqi’s daughter Liu Tao puts up a big-character poster at Qinghua University — “Witness the Despicable Soul of Liu Shaoqi”.
3/1/67 Rebel Red Guards clash with more pro-establishment factions in Nanjing’s city centre, blocking a vital Yangtze River crossing.
3/1/67 In freezing Shanghai, rebel workers and students equip themselves with safety helmets and iron bars to storm local newspaper offices.
3/1/67 Hundreds of Shanghai enterprises’ accountants queue outside banks to withdraw cash, preparing to satisfy rebel workers’ wage demands.
3/1/67 At Peking University, Jiang Qing and Kang Sheng defend Red Guard leader Nie Yuanzi from rivals, who they brand counter-revolutionary.
3/1/67 Jiang Qing calls Nie Yuanzi’s allies-turned-critics Kong Fan and Yang Keming reactionary, and suggests “smashing their social base”.
3/1/67 Shanghai mass organisations endorse a “message” to the city. Party Chief Chen Pixian approves a printing order for 200 000 copies.
3/1/67 Red Guards break into Liu Shaoqi’s house to lecture him for 40 minutes. They order him to recite passages from Mao’s Little Red Book.
3/1/67 Foreign Minister Chen Yi discusses the Foreign Languages Institute’s two Red Guard factions with his subordinate Qiao Guanhua.
3/1/67 Chen says “one lot wants me struck down, the other wants me roasted. I don’t know the difference”. Qiao suggests the latter is worse.
3/1/67 “Qiao you old bureaucrat,” teases Marshal Chen, “Still calling them children? Why, those people are Daring Revolutionary Generals!”. | https://medium.com/@GPCR50/part-06-november-14-1966-mao-backs-the-shanghai-rebels-to-january-3-1967-d1bbfa6cc773 | ['Cultural Revolution', 'Otd'] | 2020-02-10 20:29:48.674000+00:00 | ['China', 'Cultural Revolution', 'Communism'] |
Be Relentless | Journey Across The State
Life is going to test us at times. It's going to throw us curve-balls, and it's up to us to take those curve-balls and knock them out of the park. Nobody is coming to save us but ourselves, and more often than you think, that is all you need. It does help to have the kindness of strangers along the way as well.
I want to share a personal story that I think truly defines what it means to be relentless, at least in my view. When you find your true path and believe in it more than anything, life will find a way of helping you or getting out of your way. This was my journey: my trip from Ft. Hood, TX to Houston.
I grew up a relatively poor kid. We weren't completely homeless or starving, but we were far from well off. In school I was very bright, but my home life was a burden and I had trouble focusing on class. Needless to say, my grades weren't the best and my attendance was spotty. There was one class I always attended and got straight A's in, though.
Auto shop. The books were boring and the lectures were a bore, but something about engines inspired me. I could use my hands to create something and immediately see the results. I participated in competitions, but it was tough finding others who shared my passion the way I did, and everything was teams. So I competed, but I never won.
I just enjoyed doing what I loved at a high level and testing my skill. Close to my senior year, I started looking into schools. The top school in the country was Universal Technical Institute. Everyone at the top dealerships, as well as NASCAR, went to that school. So when it was time to choose, there were no other options: this was where I wanted to go.
I went to the school for a tour. I took the test and passed with flying colors. I was just about in. All I needed was the money. Well, once again, bad grades and being a poor kid were holding me back. I got denied financial aid and my hopes of UTI were done. I never forgot though…
I ended up getting forced out of high school and signed up for a lesser auto school. Eventually I was forced out of that school as well; it just wasn't the place for me. I joined the Army and developed my skills as a mechanic in the military. I learned what I needed and became very good, but I never forgot UTI.
When it came time to leave the military, I made it my mission to pick up where I left off. I had free college now. I could finally afford UTI. I didn't need my family. I didn't need my grades. I was in. I had found my own way.
I ensured my last duty station was in Texas and made a plan to attend UTI Houston. In the summer of 2015, everything was set. I had my DD214. I was enrolled at UTI. I had a cool Mustang I had built myself, and all I had to do was pack my bags and drive.
The easy part, right?
Wrong. A week before I left, the Mustang went down on me. I wasn't leaving it behind, though; this was the project I was going to be working on in school. My last check was $1,500, so I bought a $1,000 truck and a tow bar to pull the Mustang. Once again I was heading to Houston. I was going to UTI.
The night before, I packed up my truck and hooked the car to the back. I filled up both tanks of the truck, and it was off to Houston: just me, my puppy, and the two vehicles on a six-hour drive to the hotel I would stay in until I found a place.
One hour into the drive, I noticed the fender was messed up on the Mustang. I pulled over; it had torn up the tire. There was enough tread left, so I took off the fender, put it in the car, and journeyed on.
Another hour into the trip, I noticed one of the magnetic lights had come off the car. Not too big a deal: I got out and moved them to the truck instead. Thirty minutes later, the wind took the other magnetic light.
Hour three we are halfway through the journey and the first tank starts to run out so I hit the switch to the second tank.
The truck immediately dies. I’m in the middle of nowhere, Texas, 300 miles from my destination. No money, no gas, just a truck full of clothes, a puppy, and luckily a gas can.
The nearest gas station was three miles away. I wasn’t going to make my dog walk it, so I put my puppy on my shoulders, a gas can in my hand, and started walking.
Two miles in someone saw me and pulled over. They asked if I needed help. They took us to the gas station got us gas and water. Then took us back to the truck. This whole process repeated itself for the next 2 hours.
It was probably three or four more times. I have never been the type to ask for help. I even offered to sell some of my stuff off the back of my truck. Nobody wanted to buy anything; they just wanted to truly help. I owe a lot to them, and honestly I wouldn’t be where I am today if it wasn’t for kind people in this world, and I try my best to repeat what they have done for me. All they ever asked was that I paid it forward.
So we came down to the final hour of the journey. I was 5 miles from my hotel and lost as hell. My phone died, so I lost my GPS, and once again I was running out of gas. I decided to stop at a gas station to ask for directions, and just in case I ran out of gas again, I was already at a pump.
As soon as I turned the corner the tow bar broke. My truck went right; my mustang went straight into another vehicle, completely destroying it. I had put everything into this car. This was my project I built from the ground up. I knew every part of it and it was destroyed. I was devastated and once again I was out of gas. I know, go figure.
Well, I called my insurance and a tow truck came out, took the car, and luckily gave me gas and directions. After everything, I drove my truck, without my car, to my hotel. As soon as I pulled into the parking lot the belt on the truck snapped and it dropped coolant everywhere.
Mustang gone. Truck broken. Tired. Frustrated. I had made it. School started the next day, and I was the first one there and the happiest person in the room.
I had been hit with just about everything you could think of. I could have quit a million times. I could have turned around. I didn’t. I had wanted to go to this school my entire life, and I had an opportunity, and nothing was stopping me.
We will all have destinations in our lives we work hard to get to. There are going to be hurdles we are going to go through to get there. Sometimes we are going to run out of gas along the way. Sometimes we are going to have to sacrifice things we care about. People are going to come and go in our life but we just need to always remember….
“As Long As You Stay On The Road You Will Eventually Reach Your Destination”
Keep chasing your dreams, keep moving forward, take care of those you care about, don’t be afraid of help, and pay it forward.
I am Niko Hartman, and I thank you for taking the time to read my article. I understand there are plenty of articles on Medium, and you taking the time to read mine means the world to me. I am constantly trying to grow as both a content creator and a person so I can keep bringing better articles to you. I always look forward to your input and responses and would love to hear from you. You can reach me at the following. I will take the time to read all responses and look forward to your feedback.
Facebook @ NikoHartman
Email [email protected] | https://medium.com/nikorhartman/be-relentless-210a066dc810 | ['Niko Hartman'] | 2020-08-06 09:50:46.228000+00:00 | ['Personal Development', 'Goals', 'Automotive', 'Journey', 'Personal Growth'] |
Quicdent Is One of the Best On-Demand Dental Jobs Platform for iOS and Android | The app uses an advanced algorithm to connect dental professionals with dental offices looking for short-term jobs.
Quicdent was already introduced here as the business platform where dental professionals and dental offices match. Because we really liked this app and all that it offers for dental pros matching with dental offices in need of temporary help, we are naming it our Business App of the Month.
General Intro
Developed for Android and iOS users, this business app offers on-demand dental jobs, matching dental professionals with dental offices looking for temporary help. It’s easy to use, user-friendly, and intuitive, with advanced algorithms to match dental professionals with dental offices. Both web and mobile platforms are available.
App’s Features
This on-demand business platform is designed especially for dental professionals who are looking to connect with dental offices in need of help. With the help of this app, dental professionals will be able to earn additional income. Review, accept, and decline extra shifts through this on-demand app.
The app cuts out the middleman, helping dental professionals find temporary jobs and earn more per booking, and dental offices save big on booking fees. On this app, you can manage your appointments with ease. All the details about the job are available for you to make a decision before accepting or declining. Get matched and earn, all through this app. It’s the future of temp agencies.
Download the app on Google Play and App Store to get matched with dental offices for temporary jobs!
App Store Download Link: Quicdent
Google Play Download Link: Quicdent | https://medium.com/@hightechholic/quicdent-is-one-of-the-best-on-demand-dental-jobs-platform-for-ios-and-android-baecfb80f253 | ['Tech News'] | 2021-12-14 17:05:19.236000+00:00 | ['Dentistry', 'Apps', 'Jobs', 'Dentist', 'Dental'] |
2018 NFL Regular Season Team Stats | As always a-la CDM, my obsession with sports data visualization.
As we get ready for the first games of the NFL Playoffs here is the full picture of how each team performed during regular season, in order of playoffs ranking.
Each team is presented in a chart which contains all games with points scored and points against and a line that indicates the points difference, positive is a win, negative is a game lost.
It may be interesting to check how teams did during the season. For example, notice how the Chiefs always had strong scoring averages, and the few (4) games they lost were by only a few points (-3, -3, -2, -7); by the way, they lost to teams that are all in the Playoffs.
The Saints, if you remove the first and the last games, always had amazing scoring averages and point differences, except for the game they lost to the Cowboys, by 3 points.
The Rams had an impressive season: only once did they score fewer than 20 points (when they lost to the Bears), and only 4 times did they score fewer than 30.
AFC
Division Leaders | https://medium.com/sport-the-digital-r-evolution/2018-nfl-regular-season-team-stats-143f6e3889b1 | ['Carlo De Marchis'] | 2019-01-03 23:59:24.835000+00:00 | ['Sports', 'NFL Playoffs', 'NFL', 'Data Visualization'] |
LCX LCX is a new kind of global financial technology company | LCX
LCX is a new kind of global financial technology company
LCX is the Liechtenstein Cryptoassets Exchange, which is empowering the blockchain industry and whose goal is to become one of the world’s first licensed and supervised security token exchanges, a regulated marketplace for digital assets.
Its Vision is
👉 To bring the power of blockchain technology to the finance and banking industry.
Its initial product is
👉 An advanced crypto trading desk to trade on all major crypto exchanges within a single interface.
Mission
Its mission is
👉 To become one of the world’s first licensed and supervised blockchain ecosystems for professional investors.
Its products are:-
a) Terminal
b) Smart Order
c) STO Launchpad
LCX Token:-
👉 The LCX Token is a utility Token which may be used to pay all fees associated with the services offered by LCX AG.
🌟 The LCX Token’s Minimum Utility Value is 0.10 USD
We can buy LCX token here on:-
♦️LCX-Website: https://accounts.lcx.com/buy_lcx
♦️Uniswap: https://app.uniswap.org/#/swap?inputCurrency=ETH&outputCurrency=0x037a54aab062628c9bbae1fdb1583c195585fe41
♦️Liquid: https://app.liquid.com/exchange/LCXBTC
♦️IDEX: https://exchange.idex.io/trading/LCX-ETH
♦️Hotbit:
https://hotbit.io/exchange?symbol=LCX_BTC
♦️Probit: https://www.probit.kr/app/exchange/LCX-BTC
https://www.probit.kr/app/exchange/LCX-KRW
The USPs of the LCX Token are:-
A regulated exchange (security following) under the superior Liechtenstein blockchain law, a trading terminal with a sophisticated arbitrage trading algorithm, a DeFi terminal for Uniswap, a vault service, a price service partnering with Chainlink, and an STO launchpad. Tokenize everything: more than 100 million are in the pipeline.
LCX will be offering industry-leading crypto asset financial products and services. As a fintech company, LCX is focused on helping both traditional and blockchain market participants make the most out of opportunities arising from the growth of cryptocurrencies and tokenized assets.
The LCX Token ($LCX) is an exchange based token and utility token of LCX, the Liechtenstein Cryptoassets Exchange.
LCX Token Smart Contract: 0x037a54aab062628c9bbae1fdb1583c195585fe41
$LCX Token at Etherscan: https://etherscan.io/token/0x037a54aab062628c9bbae1fdb1583c195585fe41
$LCX Token at CoinMarketCap: https://coinmarketcap.com/currencies/lcx/
$LCX Token at CoinGecko: https://www.coingecko.com/en/coins/lcx
Its features are:-
👉Manage Multiple Exchange
👉Detailed Analytics
👉Crypto News
LCX supports crypto assets investors to be successful by delivering an advanced trading experience.
Its partners are:-
Animoca Brands, BioID, Blockchain Research Institute, Chainlink, Elliptic, FinTank, Global Digital Finance, ZenGo, World Economic Forum, Technopark Liechtenstein, Security Token Alliance, Sele Frommelt & Partner, Regula, Pliance, LunarCRUSH, Liquid, Ledger, JUN capital, and Icon Foundation.
The LCX Referral Program
The LCX Referral Program lets you earn a bonus for each successful referral registered to your account!
Earn 100 LCX Tokens for every new user.
🥰🥰
The LCX Referral Program gives you and your friends an opportunity to earn bonuses. All rewards will be paid out in LCX Tokens and will be shown in the Earn section of your LCX Account.
🌟 You will get 100 LCX Tokens for every new user registering via your personal referral link.
🌟 The new user, who joined LCX via your referral link, will receive 50 LCX Tokens in their LCX Account as well.
🌟 You will receive 50% of all subscription fees once this new user upgrades to a paid LCX Pro Account.
You can join here:-
https://accounts.lcx.com
KYC is the most essential thing here 🥰.
LCX is a member of the World Economic Forum C4IR and has been named a Blockchain Pioneer by the Blockchain Research Institute.
You can ask for support here:-
[email protected]
Please share this so that we can support this project 🥰🥰❣️
Magical moments for the festival of love — Christmas and AR | Christmas Time: the days get shorter, you see the traces of your breath outside, the cities and towns are lit up with fairy lights, and Christmas decorations all around. And of course the highlight of the year is right ahead of us. Christmas, the festival of love. All you need to add some emotional and magical moments to this already emotional day is Augmented Reality.
Regardless of whether it’s for business or pleasure, AR can create festive experiences and add to the contemplative atmosphere of the holiday season in an innovative way.
Merry Augmented Christmas!
And although Christmas is celebrated largely in a traditional way, AR can add an extra magical touch to the moment you all share your gifts, complementing the traditional elements. For example, the nativity figures can be brought to life through AR. Filters that allow you to transform into Santa Claus himself, or take selfies next to him complete the Christmas flair.
What’s more, Christmas cards or Advent calenders can also be enhanced with personalized videos or adorable Christmas videos that are guaranteed to put a smile on the recipients’ faces. Analog, outdated Christmas Cards and postcards are now given a new life and an entirely new personality.
Christmas treasure hunt with AR
Indoor and outdoor spaces can be embellished at Christmas time with interactive “treasure hunts”. Participants can play along by searching the rooms or places for the interactive clues, and scanning them with their smartphones. For example, they can find Santa’s missing elves, Christmas gifts, or other festive mascots in Augmented Reality. After completing the treasure hunt, participants then receive their personal treasures in the form of discount codes or small gifts, for example.
This way, brands can create unique experiences that are sure to leave a lasting impression on everyone involved.
Covent Garden London was transformed into one grand AR Christmas experience back in 2016. Over 140 restaurants and stores in Covent Garden joined together to create this unique experience. It was the largest showcase of Augmented Reality tech ever attempted in one place, and it was successful!
Shoppers had to hunt for Santa’s missing reindeer by scanning posters on storefronts and in participating stores. Once they found all the cute reindeer, they would win great prizes! 52% of participants completed the treasure hunt, 30% redeemed an offer, and 24% shared the experience on social media. It’s safe to say it was memorable for all!
Other companies making use of AR technology
The U.S. retail chain Target, for one, has developed an AR app that allows you to project Christmas trees into your own four walls.
When you open the app, an AR icon appears that, when tapped, displays a selection of virtual Christmas trees. You then select a virtual, artificial Christmas tree and place it in the desired location in your living room using AR. This way, you can test in advance whether the type, height, and width of the tree actually fit, and save yourself unnecessary furniture moving if it doesn’t.
Although you can only test artificial Christmas trees in the Target app, it can also reduce stress for people who are buying real ones. After all, you can easily imagine what a real tree with the same dimensions and decorations would look like once you’ve seen the artificial one!
Coca-Cola has also made use of this tech in 2019. The cans of the classic Holiday Edition could be brought to life through Augmented Reality. All that was needed was scanning the can of said edition with the Coca-Cola app for iOs & Android, and a winter wonderland magically opened itself up to the viewer, completing it with Coca-Cola’s signature polar bears and, of course, Santa Claus in his red suit.
Aside from Coca-Cola, Starbucks also enchanted its users with a wintry AR application in 2019, just in time for Christmas.
Starbucks added Augmented Reality to their Holiday edition cups. To do so, they created an AR Instagram filter called “Holiyay”, which was available to try exclusively on Starbucks’ Instagram profile.
The filter is an image tracker that could be used to scan Starbucks’ Christmas cups. The filter could be applied to several differently sized and different-looking Holiday edition cups, and opened the gates into a magical Christmas world: Christmas-decorated Starbucks buildings, lovely Christmas messages, dreamy falling snow, or an interactive Christmas-themed game with bursting bubbles. All this and even more impressive features were available to the user.
Augmented Reality at Christmas time
Last year, a study by the University of Federal Armed Forces in Munich showed that Augmented Reality applications deliver impressive results, especially at Christmas time, and that brands can benefit enormously from AR, particularly during Christmas Season.
“Augmented Reality is always particularly well suited when it comes to personal and emotional content. Christmas is certainly that kind of topic”, says Prof. Philipp Rauschnabel from the Chair of Digital Marketing and Media Innovation at the University of Federal Armed Forces in Munich. The vacation and Christmas seasons offer the perfect opportunity to get festive and run creative campaigns. This time of the year, people really want to get into the holiday spirit and be enchanted.
We really can not wait for Christmas to finally get here! How about you? | https://medium.com/@getbaff/magical-moments-for-the-festival-of-love-christmas-and-ar-327f96940864 | [] | 2020-12-21 11:51:24.704000+00:00 | ['Christmas Ideas', 'Christmas', 'AR', 'Digitalization', 'Augmented Reality'] |
The Evolution of CI/CD | The Evolution of CI/CD
Photo by Eugene Zhyvchik on Unsplash
Continuous Integration and Continuous Delivery have become a staple of any competent software house. They have become the lifeblood of the team and the engine that drives changes out into production.
Modern CI/CD solutions are all singing, all dancing masters of delivery. They come packed with useful, funky features. Yet, it’s easy to forget that the origins of CI/CD were very humble. This article is a high-level look at how CI/CD has changed over the years, to give a perspective of the direction of travel.
In the beginning…
In the early stages of continuous integration, build “pipelines” were nothing more than simple scripts.
#!/bin/bash
unzip file.zip
cd file/
mvn test
mvn package
ftp # And so on...
It would be easy for me to bash these scripts (pun definitely intended). To point at them and laugh at how horrible it must have been to work with them. To wonder, how the engineers couldn’t possibly see the gaping issues with this approach.
I hate to disappoint, but I won’t be doing that. These scripts worked. They were written in well understood, simple languages. They ran on the tin that was always there so when they did run, they ran predictably. Not to mention that they were blazing fast. They had absolutely no overheads. No docker containers to spin up, no VMs to wait for. In short, they did the job.
So what happened?
The job changed. As more and more organizations started to realize the power of these scripts, a set of needs emerged. Needs that would have been difficult to fulfill with only the bash shell.
Visibility
As more and more of these scripts began to break halfway through, we needed some sense of where the breaks were. It wasn’t enough to dig through the bash stderr stream and try to work out where the fire was. The build needed feedback quickly.
Centralization
As these scripts were being made, a common problem occurred. Drift. Drift is what happens when ten engineers solve the same problem in ten different ways. The absence of any strategic solution creates a problem. Soon, you’ve got scripts in every corner of your office. Every single developer machine could potentially dismantle the production environment with a single command. A knife in every shadow. So there arose the need to bring all of these scripts into one place.
And the CI server was born…
CI servers have taken many different forms, with many different responsibilities. Tools like TeamCity and Hudson began to take over the horizon. The functionality that tools like these (and many more!) brought to the table was fantastic.
At the time, this felt like strapping rockets to your engineering practice.
CI servers have morphed and shifted with the times, introducing ideas like “Stages” into “Pipelines” that create a finer level of granularity in the software. For those of you who recognize the Hudson UI above and don’t know why — Hudson is now known by its more modern moniker, Jenkins.
Keeping these damn servers going
In the early days, the problem wasn’t with the features and functionality. It was in the build quality of some of these applications. The nature of their jobs meant holding onto resources for minutes at a time. As we know, computers sometimes like to cling onto resources… until there are no more left. This is known as a memory leak.
And by God, there were memory leaks all over these tools. It became entirely commonplace to restart Jenkins regularly. I’ve worked in offices where a nightly cron job would restart the Jenkins server every night because their Jenkins instance had a half-life of about 30 hours.
But this wasn’t the fault of the vendors
Jenkins (or Hudson for that matter) were not bad pieces of software. Again, they solved new problems that were being posed by the industry. Now, all of your builds ran from the same place.
Jenkins had (and still has) a plugin-based system, where open source tools can be installed onto your Jenkins server via the UI. This creates a highly extensible platform that can perform all manner of interesting tasks, but it also means there is a risk of clashing.
Anyone who has worked with Jenkins for more than a few months will have experienced this. Install Plugin A, then plugin B and suddenly, Plugin A is broken. So is plugin Z, which came with the Jenkins core.
So then we found some new problems to solve…
Spinning up Jenkins takes knowledge. Keeping it running takes a little care and attention. You need to have a rough idea of what you’re doing. It isn’t rocket science, but it isn’t a walk in the park either. It’s somewhere in between. Picture yourself riding a rocket through a park. It’s like that.
As the industry moved to small, lean companies that were looking to get products quickly to market, the tooling needed to evolve. The constraints in CI/CD became obvious:
The time it took to get a CI/CD server running.
The trial, error and agony involved in building those first CI pipelines.
Jenkins responded, with vigor.
The introduction of Jenkins pipelines in Jenkins 2 was a big step forward. Like many other CI solutions, the focus became defining the pipeline in code. This involved far less work in Jenkins’ clumsy UI and far more coding. Something engineers are more comfortable with.
So the first problem still existed, but the second problem was limited. Engineers would quickly iterate on code, rather than clicking around a complex, procedurally generated UI that could be very confusing for even the most simplistic pipelines.
Interestingly, Jenkins’ solution was for you to write your pipeline in a groovy script and have the Jenkins server execute it for you. This is very similar to the old method we discussed at the start of this article, isn’t it?
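As a hedged sketch of what such a Groovy pipeline script tends to look like (the stage names and shell commands here are illustrative assumptions, echoing the Maven commands from the build script at the top of the article, not any particular project’s configuration):

```groovy
// Illustrative sketch only: a minimal declarative Jenkinsfile.
// Stage names and commands are assumptions, not a real project's config.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps { sh 'mvn test' }       // run the test suite
        }
        stage('Package') {
            steps { sh 'mvn package' }    // build the artifact
        }
    }
}
```

Each stage shows up as its own box in the Jenkins UI, which is exactly the finer granularity the pipelines feature introduced.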
So what does the future look like?
In recent years, new offerings have come out. Tools that lift a lot of the burden off the engineer. They come in the form of a SaaS offering. CodePipeline, CircleCI or Buddy, to name a few. You follow some conventions, use a couple of yaml files and boom — you’ve got a CI pipeline.
This low barrier to entry is a huge selling point for smaller organizations that don’t want to take on the burden of running a CI server. This is why many organizations are moving closer to a Serverless and Managed service approach. Offloading the maintenance burden onto a third party for a fee is definitely an empowering choice for any company.
These offerings try to tackle the two more modern problems listed above. You don’t need to run your CI server any more, so that issue evaporates into thin air, but what about the ease with which you can build pipelines?
Configuration rather than code, it seems
The common approach to modern CI/CD solutions is to provide a configuration file inside your application repository. yaml appears to be the syntax of choice for these tools. This approach offers the lowest possible barrier to entry.
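As a rough sketch, assuming a generic provider (the keys below are illustrative, not any specific vendor’s schema), such a configuration file tends to look like this:

```yaml
# Illustrative only: CircleCI, Buddy, and friends each have their own
# schema, but the overall shape is broadly similar.
pipeline:
  - name: test
    run: mvn test
  - name: package
    run: mvn package
```

You commit this file next to your code, and the service builds the pipeline from it on every push.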
CI/CD is moving in a declarative direction and, with modern tools like Buddy, we can simply declare what we would like to happen and see it done. | https://medium.com/the-devops-corner/the-evolution-of-ci-cd-763df723f05b | ['Chris Cooney'] | 2021-01-22 11:11:38.493000+00:00 | ['Continuous Integration', 'Continuous Delivery', 'Agile', 'Software Development', 'Programming'] |
Striving in the Time of COVID-19 — My Second Summer at Strava | Hi! I’m Kurt, I’m an Android engineer intern on Strava’s Trust and Safety team. After graduating from Gonzaga university last May I made the move out to Denver to join Strava’s Business team also as an Android Engineer Intern. Since then I’ve completed my first year of a computer science Master’s program at Eastern Washington University and have returned to Strava for yet another summer of running, cycling, striving, and Android development.
When COVID began closing gyms, cancelling plans, and urging people to practice social distancing, I saw a lot of my friends turn to running, cycling, and in turn Strava as a means for staying active and connected during such uncertain times. Despite getting the call that my internship was to be moved remote (which put a hold on my plans for tackling the remaining Colorado 14ers), I was very excited to return to work on a product that not only did I enjoy but that I was watching have a positive impact in those around me as well.
Working Remote
My trusty commuter bike that I rode through Denver each day last summer would see a lot less action this time around.
Right around when I was browsing apartment listings in Denver, I got the email that Strava was officially work-from-home and the entire intern experience was going to be pushed remote. While bummed I wouldn’t get to see everyone in person, I knew that experiencing the challenges of a remote working environment now would make me more adaptable and productive in the long run.
You can learn a lot about someone by their Zoom background. Whether it’s virtual or real, most people (though probably not willing to admit it) have put something behind their usual video conferencing spot that they would be happy to talk about. While the nuances of working remote were well dialed and the workday was smooth, the bikes, running race numbers, and mountain landscapes featured behind the grid of my coworkers on screen served as a bittersweet reminder that in less pandemic-y times, the Strava workday would often blend into early morning bike rides, lunchtime runs, and weekend hiking trips.
Android Engineering
Well now that you’ve indulged me in rambling about how I miss Colorado, we can get to what you’re actually here for: an exhaustive discussion about why introducing a Kotlin-Reflect dependency may not merit a 0.7MB APK increase. Just kidding. But let’s do get on to some of the technical contributions I made and the lessons I learned along the way.
Despite my 9 month hiatus from the Strava Android codebase, the first bug fix, the first feature implementation, the first commit and push were far less intimidating than they were at the same time last year.
Much like last year, shortly after my internship started the company shifted gears into “Guild Week,” a two week sprint where engineers pause the work on their usual product teams and instead meet with the other engineers on the same platform. These weeks allow each platform to catch up on tech debt, build new dev tools, make app-level upgrades, and work on other projects that are important but don’t fall under the jurisdiction of any particular product team. For me this meant building a network logger.
Network Logging and the Underrated Art of Dev Tools
When building features that interact with Strava’s API, ensuring the proper content and delivery of requests and responses that Strava sends can be a particularly tricky task. The network logger that I built serves to mitigate these challenges. Because it is a tool that is exclusively for our developers, a lot of considerations had to be made about the way it interacts with our non-developer users as well. In app development usability is the name of the game and with an app as feature-rich as Strava, even milliseconds of additional load time for users is not an acceptable sacrifice for a dev tool. The network logger needed to be as lightweight and streamlined as possible.
The network logger has three main components: the interceptor, local storage, and the display UI. Since the network logger is a superuser/debugging tool there is no need for it to change any data that it intercepts — thereby acting as a simple observer of network access. The interceptor receives any requests or responses going in and out of the Strava app, immediately makes a copy, and pushes the data on to the OkHttpClient. For regular users, the interceptor is blocked even from this activity, observing nothing and immediately pushing the data forward. For our superusers, the copied data is saved in a local Room database, ready to be presented should the user navigate to the tool’s UI. The local storage, when in use, is also streamlined to only cache a finite number of requests and responses to ensure a smooth experience for employees as well.
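The copy-then-forward design described above can be sketched in simplified form. To be clear, this is a hypothetical illustration, not Strava’s actual code: the real logger implements OkHttp’s Interceptor interface and persists copies to a Room database, whereas here plain strings stand in for requests and a bounded deque stands in for the local store.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical, simplified sketch of the copy-then-forward pattern.
// Strings stand in for HTTP requests; a bounded deque stands in for
// the Room-backed local store.
class NetworkLoggerSketch {
    static final int MAX_CACHED = 3;   // finite cache, mirroring the real tool

    final boolean isSuperuser;         // regular users: observe nothing
    final Deque<String> cache = new ArrayDeque<>();

    NetworkLoggerSketch(boolean isSuperuser) {
        this.isSuperuser = isSuperuser;
    }

    // Observe, never mutate: copy the request for superusers, then always
    // forward the original unchanged to the next handler in the chain.
    String intercept(String request) {
        if (isSuperuser) {
            if (cache.size() == MAX_CACHED) {
                cache.removeFirst();   // evict the oldest entry to stay lightweight
            }
            cache.addLast(request);
        }
        return request;                // passed through as-is in both cases
    }
}
```

The same shape carries over to the real implementation: the interceptor is registered on the OkHttpClient, forwards everything untouched, and the display UI reads from the bounded cache.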
Trust and Safety
Upon the completion of Guild Week, I fully moved into my role on the Trust and Safety Team. On the Trust and Safety Team, we focus on all the aspects of Strava that make users comfortable with their data, our product, and sporting in general. This means managing reporting tools, privacy settings, user data control, and more.
One of the things that sets apart a good programmer and a great programmer is their ability to create extensible solutions. It is one thing to be able to build a feature, but it is another to build tools that can be reused for building many later features. While I felt very adept at building singular features, I wanted to hold my code to a higher standard and see it become something that can be reused at Strava long after I return to school.
The first feature that I built while actively pursuing these goals was adding a new option to our past activities editor settings. Up until now it was only possible to update the visibility of an activity, but with minimal work server-side, we wanted to add the ability to show or hide an athlete’s heart rate data on past activities as well. Migrating a workflow from one possible option to multiple is usually the most complex hiccup when building the extensibility of a solution. Adding the heart rate flow required some refactoring of the data structures that maintained the state of the UI and the data submitted.
Options for editing past activities and our other, broader privacy controls settings.
Following the completion of the edit past activities project, the Trust Team as a whole was shifting gears to start a new feature of more detailed segment reporting. This new work would enable users to benefit from and contribute to the knowledge of the community — starting with specific, on-the-ground details about the conditions of a segment.
Last year the project that culminated my internship was closer to a solo effort. I had mentorship and plenty of people to turn to for guidance and it was very rewarding being able to own a project from start to finish, but this year I wanted to get more experience with the challenges that arise in parallel development. I decided that I would take the reins on some components of the front-end experience for segment reporting.
Besides making sure the UI matches the expected behavior for specific workflows, I also created one of the integral display elements for the community report. We needed to display a list of items (if you know Android development, you’ll know this calls for a RecyclerView), but in this particular case we wanted to make sure that the displayed list can handle a varying number of items without dominating the screen space. With the page already scrollable, it didn’t make sense to nest another scrollable window, so we decided we should have an expandable view that shows the full RecyclerView list if it’s small, or a dropdown expandable preview of the list if it is larger. A dropdown RecyclerView is not something I could find a library for online, and while it would be a bigger project, I knew it would be very helpful in the future to build an entirely new view object that could fit this use case and any like it that we may later encounter.
And thus the "DynamicallySizedRecyclerView" was born. Without getting too deep into the nitty-gritty, the DynamicallySizedRecyclerView is a child of ConstraintLayout and comes complete with an expand/contract caret, a linear gradient opacity for the collapsed list, and the animations and transitions to make a very clean, professional-looking component. I specifically designed the new view object to be as developer-friendly and customizable as possible. One function call gives access to the internal RecyclerView, and from there the behavior is identical. Other settings can then be configured manually, such as whether to expand or collapse the list by default, the thresholds for how many items to hide or display, and the number of items to display when collapsed.
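The collapse/expand behavior described above can be sketched in plain code. The real component is an Android view written in Kotlin, so everything here (the class name, the default thresholds) is a hypothetical illustration of the logic only, not Strava's actual implementation:

```python
# Hypothetical sketch of the collapse/expand threshold logic described
# above. The real DynamicallySizedRecyclerView is an Android view; these
# names and default values are illustrative only.

class CollapsibleListState:
    def __init__(self, total_items, collapse_threshold=5,
                 collapsed_count=3, expanded=False):
        self.total_items = total_items
        self.collapse_threshold = collapse_threshold  # lists longer than this can collapse
        self.collapsed_count = collapsed_count        # rows shown while collapsed
        self.expanded = expanded

    @property
    def collapsible(self):
        # Short lists are always shown in full, with no caret.
        return self.total_items > self.collapse_threshold

    @property
    def visible_items(self):
        if not self.collapsible or self.expanded:
            return self.total_items
        return self.collapsed_count

    def toggle(self):
        # The caret only does anything when the list is long enough.
        if self.collapsible:
            self.expanded = not self.expanded


state = CollapsibleListState(total_items=10)
print(state.visible_items)  # 3 (collapsed preview)
state.toggle()
print(state.visible_items)  # 10 (fully expanded)
```

The same idea maps onto the real view: the adapter always holds the full list, and only the number of visible rows (plus the caret and the gradient) changes on toggle.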
Non-Technical Growth
It is impossible to work at Strava without feeling at least a little inspired to get on your bike or lace up your running shoes. It’s one of the major things that drew me back for a second internship. Even though we couldn’t ride or run together in person I’ve been hitting the trails more than ever. I haven’t clocked this many miles on my running shoes since high school track and cross country. While I’m still comfortably behind the mile PR that I’m striving to get, I’m headed in the right direction thanks to Strava and it has been a pleasure working on a product designed to help others reach their own goals too. | https://medium.com/strava-engineering/striving-in-the-time-of-covid-19-my-second-summer-at-strava-452787e90720 | ['Kurt Lamon'] | 2020-09-23 16:02:55.662000+00:00 | ['Strava', 'Android', 'App Development', 'Remote Work', 'Internships'] |
Why so many Latinos are becoming Muslims | Just as the U.S. Latino population is on the rise — Hispanics are now the nation’s largest minority — so is the number of Latino Muslims. And it’s not just a result of Arab Latin Americans emigrating to the United States.
According to organizations like WhyIslam.org, Latinos are one of the fastest growing segments of the Muslim community. About six percent of U.S. Muslims are now Latino — and as many as a fifth of new converts to Islam nationwide are Latino.
The American Muslim Association of North America, based in North Miami, says heavily Hispanic South Florida in particular is home to a rising number of Latino Muslims. | https://medium.com/virtual-mosque/why-so-many-latinos-are-becoming-muslims-3f8cc5575770 | ['Virtual Mosque'] | 2016-04-09 17:56:10.884000+00:00 | ['Islam', 'America', 'Converting'] |
VERTIGO PRESALE 2 GOES LIVE TODAY💰 | A real #Defi world. Invest in VTG and its products to get rewards NOW and in the FUTURE. | https://medium.com/@vertigofinance/vertigo-presale-2-goes-live-today-99b4bd39412 | ['Vertigo Finance'] | 2020-12-09 18:20:08.341000+00:00 | ['Decentralized Finance', 'Presales', 'Blockchain', 'Cryptocurrency', 'Eth'] |
PETG vs PLA Filaments Comparison | Two popular filaments, but different features. In the rapidly developing industry that is 3D printing, sometimes it’s hard to follow the trends. Check our PETG vs. PLA comparison and learn about the main differences and sample applications.
Sample PLA prints you can put on your desk. Left: pencil holder; right: laptop stand.
What’s the Difference Between PLA and PETG?
PLA was always the go-to basic filament and is still recommended when you start your journey with 3D printing. The rising popularity of PETG makes some people switch to the latter material. The question is: should you? Hopefully, after you read this article you won’t have any problem with choosing a filament for your next project.
PLA (or polylactic acid) is one of the most popular thermoplastics for 3D printing due to its wide availability and color range. It’s cheap, easy to print, and doesn’t emit dangerous fumes. Because of its printability and versatility, it’s often recommended for inexperienced users. PLA is also partially biodegradable and doesn’t require a closed printing space (meaning that almost any FDM 3D printer on the market will do well with the material).
Sample model 3D printed with PLA on ZMorph Multitool 3D Printer using Dual Extruder.
PET (polyethylene terephthalate) is a thermoplastic polymer used for drink bottles or sailcloths. Mixed with glycol (hence the "G"), it exhibits good 3D printing properties. PETG filament is resistant to high temperatures and water, presents stable dimensions, no shrinkage, and good electrical properties. PETG combines ABS durability with PLA printability, and it comes as no surprise that it's frequently chosen by professionals seeking a reliable filament for their projects.
PETG electronics casing.
PETG gained a lot of recognition during the coronavirus pandemic because it can be easily sanitized and therefore is useful for medical applications such as face shields.
3D printed PETG headband for face shield.
PETG vs PLA — Technical Data
Below you’ll find the most important technical properties for PETG and PLA.
PLA
Vicat softening temperature: 55 ºC (131 ºF)
Heat deflection temperature: 55 ºC (131 ºF)
Impact strength: 16 kJ/m2
Flexural modulus: 3500 MPA
Density: 1.24 g/cm³
Melt Flow Index: 6 g / 10 min
PETG
Vicat softening temperature: 85 ºC (185 ºF)
Heat deflection temperature: 70 ºC (158 ºF)
Impact strength: 11 kJ/m2
Flexural modulus: 1880 MPA
Density: 1.27 g/cm³
Melt Flow Index: 11 g / 10 min
PETG vs PLA Temperature for 3D Printing
Because the two filaments are different and have distinctive technical properties, you should use different printing temperatures for PLA and PETG. Here are the recommended settings for each:
PLA
Printing temperature: 200–230 ºC
Bed temperature: 60 ºC
Printing space: Open
PETG
Printing temperature: 230–250 ºC
Bed temperature: 60–80 ºC
Printing space: Closed
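As a rough illustration, the recommended settings above can be collected into a small lookup table. The numbers come from this article; real slicer profiles (including ZMorph's own Voxelizer presets) vary by printer and filament brand, so treat this as a sketch rather than a ready-to-use profile:

```python
# Illustrative lookup table built from the settings listed above.
# Real slicer profiles differ by machine and filament brand.

PROFILES = {
    "PLA":  {"nozzle_c": (200, 230), "bed_c": (60, 60), "enclosure": False},
    "PETG": {"nozzle_c": (230, 250), "bed_c": (60, 80), "enclosure": True},
}

def midpoint_settings(material):
    """Return a single starting point inside each recommended range."""
    p = PROFILES[material]
    return {
        "nozzle_c": sum(p["nozzle_c"]) // 2,
        "bed_c": sum(p["bed_c"]) // 2,
        "enclosure": p["enclosure"],
    }

print(midpoint_settings("PETG"))
# {'nozzle_c': 240, 'bed_c': 70, 'enclosure': True}
```

Starting from the middle of the recommended range and then tuning in small steps is a common way to dial in a new filament.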
This tail lamp was 3D printed using PETG filament.
If you want to learn about how to 3D print with PLA or PETG, how to set up your machine or slicer, check our other articles:
PLA: 3D Printing Materials Overview →
PET-G: 3D Printing Materials Overview →
PLA figurines showcasing the Voxelizer supports.
PETG and PLA Prices
There is a slight difference in prices when it comes to PLA and PETG. Partly because of its popularity, PLA is slightly cheaper. Both come in a wide variety of colors, although it's easier to find PLA in the desired color. On the other hand, it's more common for PETG to be transparent.
In our online store, you can find PLA in the price of $34 / kg. PETG costs $39 / kg.
PET-G ZMorph 3D printing filament.
What’s the Difference Between PETG and PET?
You may encounter two options — PETG (sometimes written as PET-G) and PET. The “G” stands for glycol — a substance added to prevent corrosion and crystallization effects. Glycol modified PET is easier to 3D print. PET without glycol is more fragile and prone to breaking.
On the other hand, PETG has a delicate surface that's easy to scratch. That being said, the chemical properties of PETG are far better for 3D printing despite some minor disadvantages.
PLA vs ABS vs PETG
Another popular 3D printing filament is ABS, which is often used as an alternative to PLA or PET. It’s much more durable than PLA and frequently used for industrial applications.
Compared to PETG, ABS is more prone to warping and requires higher printing bed temperatures (100 ºC for ABS, and 60–80 ºC for PETG). Another advantage of PETG over ABS is that it emits much fewer fumes, a feature that you will appreciate after 3D printing with ABS.
3D printed ABS.
You can read more about the differences between ABS and PLA here, and a comprehensive comparison between PETG and ABS is coming soon.
Sample PETG and PLA Applications
It’s time we talk about where PLA and PETG are to be used. With different features, these two materials have miscellaneous applications.
PLA laptop stand.
PLA is designated for early-stage prototyping, showcase models, and quick fixtures, whereas PETG can be applied in more industrial operations, namely mechanical components, electronic device housings, transparent elements, etc. Thanks to its water resistance, PETG can also be used to 3D print containers and bottles. PLA doesn't go well with moisture and may start to decay when exposed to water.
PETG garden hose connectors.
PLA pencil holder.
Discover More Materials for 3D Printing
Now that you know the main differences between PETG and PLA, you may want to learn about other materials for 3D printing. Because ZMorph VX supports most of the filaments available on the market, we prepared the ZMorph Materials Library where you can find out about other materials — their main features, sample applications, and, most importantly, the technical data. It’s all available online for free.
3D Printing with ZMorph VX
ZMorph VX Multitool 3D Printer is designed to work with a vast array of available filaments including both PLA and PETG. We make sure that the workflow is as simple as possible.
Use Voxelizer software to slice the STL file and send it to ZMorph VX. The default presets prepared for our branded filaments ensure the outcome will be perfect with the first try if you follow the ZMorph workflow. | https://medium.com/@ZMorph/petg-vs-pla-filaments-comparison-dfa813403ea2 | ['Zmorph Sa'] | 2020-07-10 07:49:47.882000+00:00 | ['3d Printer', '3D Printing', 'Pla'] |
Data Points 2.0: 5 Must-Read Articles for Google Analytics for SMEs | Data Points 2.0: 5 Must-Read Articles for Google Analytics for SMEs
5 Articles Full of Google Analytics Tips for SMEs
Small and medium enterprises (SMEs) are one of our major audiences, so this week’s Data Points will focus specifically on them (and how Google Analytics can lend them a helping hand).
For SMEs, allocating precious time and resources to the most important marketing activities is one of the biggest obstacles on the path to growth.
The challenge does not stop there: doing marketing just for marketing’s sake isn’t enough — you need to balance marketing research, strategy and implementation all at once in order for your efforts to truly serve a business purpose.
One key to unlocking the mystery of efficient digital marketing is Google Analytics.
As a one-stop-shop for your customer data, Google Analytics can support you every step of the way, and provide beneficial insight into all aspects of your customer journey.
While the value of Google Analytics is evident, after interacting with over 200 SMEs in the past 2 years, we can confidently say that over 80% of SMEs either have their Google Analytics configured incorrectly, or lack a systematic approach to analytics in their company/organization.
Perhaps for this reason, becoming a master at Google Analytics is a daunting task for many SMEs. But don't worry a bit, because making Google Analytics easy to understand is one of our specialties at Analytics for Humans (it's in the name, after all)!
This week’s Data Points will open that mysterious Google Analytics door for SMEs, regardless of your expertise level. We will primarily focus on the following topics:
1. How to set up your Google Analytics account
2. The business value of Google Analytics
3. Measurement framework methodologies
4. An overall practical guide to Google Analytics
5. The best place to get Google Analytics resources
So let’s dive right in.
Article 1: Main Street ROI, “Guide to Google Analytics”
In the first leg of your Google Analytics journey, we are going to provide you with a comprehensive and thorough guide on how to set up Google Analytics and do basic analyses.
If that sounds complicated, don’t worry — even for an absolute rookie, you won’t feel lost after reading through this playbook.
See how clear they make each step?
This article breaks down each topic in separate, detailed steps, making it a process you can both follow and refer back to again and again.
In addition to basic information such as account configuration and the standard dashboard, the article also provides you with an "Advanced Tips" section, which contains bits of information that'll advance your GA setup even more.
Bill’s Thoughts:
This is a great article for anyone who is trying to get started using Google Analytics for their website.
I have one problem with this article — even though the “Google Analytics Configuration” part of the guide provides you with one way of setting up Google Analytics, we believe there is a better way — please see the article below for more details.
Article 2: Humanlytics, “4 Business Questions Google Analytics Can Answer About Your Website Visitors”
Our second article comes from yours truly (well… Bill, but you get the point). It is titled, "4 Business Questions Google Analytics Can Answer About Your Website Visitors."
As mentioned in the introduction, to allocate your precious resources to only your key marketing activities, you will need helps from Google Analytics.
In this piece, we help you with that by connecting your key business needs (such as understanding who your users are) with concrete analyses you can perform in Google Analytics to fulfill those needs.
In order to simplify what you’ll get out of Google Analytics, we explained the entire article in the perspective of Patrick: a visitor of your website whose behavior that you can measure through Google Analytics.
Bill’s Thoughts:
Hey, one of my articles ❤!
To me, all analytics/marketing activities in your business should serve a singular purpose — to better understand and better serve your core audience.
The logic here is quite simple: the better you understand and serve your core audience, the more likely they are to convert on your website, and therefore the more revenue your business is going to make.
After reading this article, I encourage you to ponder on the following question: what is the single most important question for me to answer right now?
Then, go to Google Analytics and try to answer it.
Article 3: Avinash Kaushik, "Rethink Web Analytics: Introducing Web Analytics 2.0"
Getting a grasp as to what data you should measure and why it matters can be a bit overwhelming. That's why we chose Avinash Kaushik's article, "Rethink Web Analytics: Introducing Web Analytics 2.0."
Avinash breaks down the value of data and how to extract insights at different levels to understand different aspects of your business.
If you have time, I would recommend listening to the entire hour-long video, as it provides a full circle in establishing a measurement framework.
If you don’t quite have the time (as many SMEs do not), the video is summarized in his blog for a faster (albeit less comprehensive) viewing experience.
As a highlight, take a look at this graph to see Web Analytics explained in “plain English.”
Bill’s Thoughts:
Avinash is probably one of THE lead thinkers in the field of web and digital analytics.
The impact of this web analytics 2.0 framework is extremely profound — it influenced a generation of web analysts and marketers, helping them make more data-driven decisions on how to improve the digital presence of their business.
While it was introduced over a decade ago, many of the points in this framework still ring true for organizations of all sizes.
I would recommend using this framework as a guiding principle to your entire analytics structure at your company, and explore technologies that can help you with each of the areas.
Article 4: Neil Patel, “How to Get Actionable Data from Google Analytics”
Neil Patel does an excellent job at being relatable to SMEs by opening the blog with his journey to discovering Google Analytics.
As he says in this article, “I went to Google Analytics to get those numbers, but I quickly got overwhelmed. There was way too much information! I didn’t know where to start!”
“Find out how people are finding you (1 minute)”
Knowing that, he simplifies and breaks down how to get actionable insight with your GA data in just ten minutes! That’s right, ONLY TEN MINUTES (although it might take just a bit longer). And to make it even more concise, he guides you through the journey in time intervals (see pic on the left).
The best part of this article is that it’s very high-level.
You may tell yourself, “I know I need to have a Google Analytics account, but I don’t have the time to become an expert, so just tell me what I need to know.”
This article does just that.
Bill’s Thoughts:
Okay, let’s be honest, it is going to take you more than 10 minutes to accomplish those tasks. However, those tasks are well-worth accomplishing.
I would encourage you to use similar principles when approaching analytics at your company: If you only have 1 hour every day to look at your analytics, what should you look at?
Having those analytics plans will help you focus ONLY on the analyses that are important to your business, increasing your chance of extracting valuable, actionable insights from your data.
Article 5: Google, Google Analytics Academy
Our last article comes straight from the big enterprise themselves — Google. But we thought it was fitting since their Analytics Academy is incredibly useful (I’ve used it myself)!
If you do have the time, or simply want to make the time, you can learn about the Google Analytics platform inside and out through the Google Analytics Academy. What better place to learn about Google Analytics than straight from the source?
Google offers free (yes, FREE!) online courses to make you proficient in GA. This place will be both your foundation and resource, and will ultimately show you to a whole world of useful information! You can also go at your own pace in finishing the course, so don’t feel pressured that it’s going to take all your time. The first step is getting started, so go for it!
Bill’s Thoughts:
The great thing about Google is that they actually dedicate a lot of resources in trying to make analytics easy to use and understandable for all marketers — and the analytics academy is one illustration of this commitment.
I would strongly encourage you to go through the process of getting the GA certification — the learning itself is extremely valuable, and the certification (which is free) can give you a great career boost.
To learn more:
That wraps it up for today’s Data Points 2.0. Stay tuned next week, where we’ll have a guest collaborator! | https://medium.com/analytics-for-humans/data-points-2-0-5-must-read-articles-for-google-analytics-for-smes-8b126999f475 | ['Victoria Apodaca'] | 2018-08-09 22:07:03.395000+00:00 | ['Google', 'Digital Marketing Tips', 'Analytics', 'Digital Marketing', 'Google Analytics'] |
Basketball without cable: A cord-cutter’s guide to the NBA | NBA norms have been upended like virtually everything else this pandemic-plagued year. The 2020-21 NBA season is set to start Dec. 22, just two months after the conclusion of the interrupted 2019-20 season.
The number of games will be reduced again—the league is aiming for a 72-game season running through May 16—but teams will return to playing in their home arenas (and receive daily COVID-19 testing), with the exception of the Toronto Raptors, who will play indefinitely in Tampa, FL, due to Canada’s COVID-19 restrictions.
Updated December 16, 2020 to report all your streaming options for the 2020-21 NBA season.
Although there has been talk of teams allowing some fans back into the stands, the safest place to view games this season is from your own home. Fortunately, there are plenty of options for cord-cutters to stream games. As in the past, big matchups will be broadcast nationally on ABC, ESPN/ESPN2, TNT, and NBA TV; you can get most, if not all, of these covered with a single streaming package.
Unfortunately, the same can’t be said for the regional sports networks (RSNs) that air the bulk of teams’ games. Fox Sports, NBC Sports, YES Network, Marquee Sports Network, and other RSNs have been dropped from YouTube TV, Hulu With Live TV, Sling TV, and fuboTV, because of disputes with providers. If you’re mainly interested in following your local team through the season, your best bet is to determine which streaming service has an agreement with your team’s regional network and go with that one. Chances are it will also stream some or all of the cable networks mentioned above. A few teams can still be found on over-the-air channels—the Chicago Bulls on the windy city’s WGN, for one—but that arrangement is rapidly going the way of the two-handed set shot.
Over the air
Martyn Williams/IDG
Winegard's Flatwave Amped antenna (model number FL-5500A) is one of our favorite indoor antennas.
The good news is you can access ABC for free if you have an over-the-air TV antenna (you’ll find our top antenna picks here) and are within the radius of a local ABC affiliate’s broadcast tower. The bad news is the network is scheduled to air only 10 of this year’s nationally televised games. These, however, include some of the league’s marquee matchups, including three Christmas Day games: Warriors vs. Bucks, Nets vs. Celtics, and Mavericks vs. Lakers.
You can watch the remaining games with some combination of the following services.
Sling TV
The easiest way to catch many of the cable telecasts is with the Sling TV streaming service, but you'll need to pony up for a monthly subscription. For $30 per month, Sling's Orange package will get you ESPN/ESPN2 and TNT. With a digital antenna to catch the ABC broadcasts, that leaves only the NBA TV games to account for.
AT&T TV Now
You can also get ABC, ESPN, ESPN2, and TNT with AT&T TV Now. All you need is the basic Plus package for $55 per month. But upgrading to the $80-per-month Max package will get you those networks as well as a host of regional sports networks, including NBC Sports, YES Network, MSG Network, and Fox Sports.
Hulu with Live TV and YouTube TV
Both Hulu with Live TV and YouTube TV give you access to ABC, ESPN/ESPN2, and TNT, though as with other services, the exact channel lineup varies by market. But only YouTube TV offers NBA TV, making its slightly higher monthly price ($65 per month compared to Hulu with Live TV's $55 subscription) worth the expense for hoops fans. With only the one channel package, though, you don't get the customizability of Sling TV or AT&T TV Now, so keep that in mind if you plan to use your subscription beyond basketball season.
Jared Newman / TechHive
YouTube TV uses a small number of menu sections to make navigation easier.
FuboTV
FuboTV recently added ABC and the ESPN channels to both its Family and Elite bundles. Unfortunately, it also dropped TNT and, earlier in the year, Fox regional sports networks. If you're a fan of the Kings, Warriors, Lakers, or Clippers, you might want to opt for the $80-per-month Elite package, as it also includes regional networks NBC Sports Bay Area and NBC Sports California. You can add NBA TV by purchasing the Sports Plus add-on for an extra $11 per month.
FuboTV
Fubo has lots of appeal to sports fans, and it includes NBA TV. But your subscription won't give you access to ABC, TNT, or ESPN.
NBA League Pass
If you're truly hardcore for the hardwood, you should consider a subscription to NBA League Pass, the league's official streaming service. For $200 a year or $29 per month, you can watch every live out-of-market game that isn't being broadcast nationally on one of the four networks we've mentioned.
A League Pass subscription allows you to watch up to four games at once in HD, and it works on computers, tablets, smartphones, and streaming devices. Games originally broadcast on ESPN, TNT, and ABC are available 24 hours after completion in the video archives. You also get anytime access to a curated selection of “classic” games.
For $250 a year or $40 per month, you can upgrade to NBA League Pass Premium, which adds an in-arena stream during game breaks.
NBA Media Ventures, LLC
With an NBA League Pass subscription, you can stream live out-of-market games to your TV, computer, or mobile device.
NBA Team Pass
Two bills is a big investment if you only want to follow your favorite team. NBA Team Pass is one less-expensive alternative. For $120 a year or $18 per month, you get access to all your squad's local broadcasts for both home and away games.
The rub is that NBA blackout rules still apply. If you live in your team’s “home” market—a Warriors fan residing in the Bay Area, for example—you still won’t be able to watch their games even with a Team Pass subscription (this goes for League Pass as well). Your team’s home market, however, isn’t necessarily defined by your town’s city limits.
In the NBA’s own words, the league determines blackout zones “using zip code (if watching via a satellite television provider), a combination of zip code and cable system distribution territory (if watching via a cable television provider), or by the IP address associated with your internet connection or your mobile device’s GPS coordinates.”
What that means is this isn’t a cord-cutting option for everyone. The only way to be sure of your situation is to enter your zip code in the Blackout Notice in the middle of this page before you sign up for one of the NBA subscriptions.
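In other words, the league resolves your location to a team's home-market territory and blocks in-market streams. A minimal sketch of that check, using made-up zip codes (the NBA's real market definitions are far more involved and also use cable territories, IP addresses, and GPS, as described above):

```python
# Hedged sketch of the blackout logic described above. The zip-code
# sets here are invented for illustration; they are not the NBA's
# real home-market definitions.

HOME_MARKET_ZIPS = {
    "Warriors": {"94102", "94607", "94612"},  # hypothetical Bay Area zips
    "Knicks":   {"10001", "10451"},           # hypothetical NYC zips
}

def is_blacked_out(team, viewer_zip):
    """True if a League Pass / Team Pass stream of `team` is unavailable
    because the viewer falls inside the team's home market."""
    return viewer_zip in HOME_MARKET_ZIPS.get(team, set())

print(is_blacked_out("Warriors", "94607"))  # True  -- in-market, blacked out
print(is_blacked_out("Warriors", "97201"))  # False -- out-of-market, watchable
```

This is why checking the Blackout Notice with your own zip code before subscribing is the only reliable way to know your situation.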
NBA streaming is still 50-50 ball
Streaming live NBA games continues to be a mixed bag for cord-cutters. The availability of national broadcasts through streaming services gives you a courtside seat for some of the biggest matchups of the season. But local fanbases who want to follow their team continue to be left on the bench, for the most part. Until streaming options for regional sports networks become more widely available, you might want to dust off your radio.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details. | https://medium.com/@kristi42164062/basketball-without-cable-a-cord-cutters-guide-to-the-nba-c695643b3c6a | [] | 2020-12-24 18:10:46.322000+00:00 | ['Mobile', 'Internet', 'Audio', 'Headphones'] |
Chicken Philosophy II | We continue to search for the secrets of the universe as the chicken crosses yet another road.
GEORGE W. BUSH: We don’t really care why the chicken crossed the road. We just want to know if the chicken is on our side of the road, or not. The chicken is either against us, or for us. There is no middle ground here.
If there’s any one man who exemplifies Either/Or thinking, it’s President W. In the film The Princess and the Warrior, the protagonist is an ex-soldier who is expertly skilled in fighting and adapting to life-threatening situations. Yet, in the midst of battle, he weeps and continues to fight. His tears do not display weakness; they show us his strength. In the Lord of the Rings films, Aragorn fights for his rightful place as king, yet he does so with a moving combination of conviction and humility.
Are there any leaders in real life who can call us to more while also acknowledging our humanness?
DONALD RUMSFELD: Now to the left of the screen, you can clearly see the satellite image of the chicken crossing the road. Now watch as the F-15… | https://medium.com/narrowridge/chicken-philosophy-ii-7f8ee4479158 | ['Tom Eckblad'] | 2018-01-02 01:31:13.637000+00:00 | ['Humor', 'Philosophy', 'Politics'] |
Sony Didn’t Make Enough PS4 Back Button Attachments | A couple of months ago, when the DualShock 4 Back Button Attachment was announced, I was skeptical. Would the gaming market really want a $30 plastic controller attachment with a costly OLED screen that adds two extra buttons?
Turns out, yes. And it sold out almost immediately.
Now the stock situation isn’t just dire… it’s nonexistent. I haven’t seen a single shipment even hit shelves in my local stores, let alone any restocks. I imagine that the seemingly small initial shipment was entirely pre-ordered. Best Buy has stocked a few sets of open-box returned units online, but over on Amazon it’s a crazy land of price gouging for the few third-party vendors who managed to snag one.
This $30 add-on is now going for around $90, an absurd price that no one should pay. I’m desperately curious as to whether the attachment turned out to be worth the money…but not so desperate as to pay three times its actual price.
Screenshot taken by Alex Rowe.
The Xbox Elite Controller Series 2 had similar shipment issues at launch, but since it was an update of a proven product, Microsoft probably had a good idea of how many units would sell. In contrast, the Back Button Attachment feels more like a shot in the dark, with its awkward name and lack of marketing. In grand Sony fashion, it was announced shortly before its release with no fanfare, and now that it’s a wild success, there have been precisely zero updates.
I thought perhaps the PS5 reveal would show a new controller with similar back button functionality, either built-in or through the same attachment, but Sony has been quiet about their new console's form factor. They don't really need to show their hand yet, since the PS4 just crossed an astronomical 109 million unit installed base (see page 8 here).
If even one tenth of that audience wants a back button attachment, that’s an easy 10 million units sold right there. It took Microsoft nearly a year to ship their millionth unit of the first Elite Controller, but that’s also a more expensive and complex product, even taking into account the OLED screen.
Official Sony Marketing Image. Source: https://blog.us.playstation.com/2019/12/17/introducing-the-dualshock-4-back-button-attachment/
With current manufacturing challenges due to the Coronavirus and other supply chain concerns, and the pressure mounting for the new consoles to make a big splash and meet aggressive demand this fall, I’m worried that Sony will quietly forget about their controller attachment, especially if it isn’t also designed for the PS5. They may just be content that the first shipment sold out, and call it good.
After all, once the manufacturing pipeline is back to full speed, won’t they want to spend all their money and energy on building PS5's?
I hope I’m wrong, because I recently dove back in and bought a new PS4 Pro so I could play Final Fantasy VII Remake. My last PS4 Pro killed itself with a rare internal firmware chip bug that often costs users money to repair even if their system is technically still under warranty. Hilariously, my system was such a new SKU (a non-bundled version of the quieter-running Red Dead 2 variant) that Sony didn’t have the serial number in their system yet when I went to see if I could somehow get a warranty repair.
Putting all that behind me, I now find myself desperately curious about Sony’s OLED screen and button attachment, with no easy way to buy one for a reasonable price. $30 was already the top price I’d be willing to pay for such an attachment. I hope to bring you a review of it some day if I can get my hands on one.
I'd love to be wrong about Sony's manufacturing focus shifting away from their surprise-hit new peripheral, and I hope that Sony will give their massive fan base some news soon about the PS5. Maybe the attachment will be compatible with the new system and come in a whole array of fun colors… or maybe it'll be a weird experiment in selling small OLED panels that never gets restocked. It's a coin flip at this point, and I'm eager to see the outcome.
Graco Milestone Review | All-in-1 Convertible Car Seat | Do you prefer a car seat that will grow with your kids? Then this Graco Milestone All-in-1 Convertible Car Seat review might be helpful for you.
In this Graco Milestone All-in-1 Convertible Car Seat review, we have covered the seat in detail to aid your research and to help you ensure the highest safety for your child.
Therefore, now you do not have to worry about buying a car seat every six months as this product will not upset you anyway.
Most of the parents had to buy several car seats as their child grows up quickly, and the car seat remains as it was before. However, Graco Milestone comes with many unique products, which provide both comfort and safety.
Moreover, this unit stands out among other products because of its flexibility to adjust to kids of different ages. Besides, it is compact as well as budget-friendly, which makes this product a truly versatile one.
If you need a car seat immediately for the next trip with your kid, then our recommendation will be to check out the highlighted features we have listed below.
It is essential to go through every aspect of a product before you invest a penny in it. So, to help you understand it well, we have described each of its features.
Well, let us see what makes this product the best for your child.
Disclosure
This review content is independent of editorial content; we may receive compensation if you purchase through our website. You don’t need to pay anything extra.
Key Features
Washable pads
Flexible with different aged child
Easy assembly
Multiple positions: two rear-facing and two forward-facing
Robust steel frame
No-rethread harness
Removable pillow
Adjustable cup holder
Well-designed and comfortable
Compact enough to fit in a car seat
Graco Milestone All-in-1 Convertible Car Seat Review
From the very next section, readers will be able to comprehend all the characteristics of this product, such as its mobility, safety, durability, and stability.
Besides, it will allow them to know how it works and why it should get priority while purchasing a product for their baby. So, let us check out what does it have to offer us.
Safe and Durable
Safety is the first concern when we look for something for our kids. Graco Milestone is one of the top-notch products on the market: it is crash-tested and engineered for safety. Therefore, it assures the highest safety for your child.
Besides, it has a steel frame, which makes it a robust and reliable unit. Moreover, some car seats are not safe at high temperatures, but that is not the case here: this one is tested and handles extreme heat well.
Machine Washable Cover
It is not easy to keep baby gear clean for long, because children tend to spill their food or drinks here and there.
Thus, this car seat comes with a washable machine cover, which you can quickly put into a cleaner. The padding is easy to remove, so you need to undo some loops, and it is ready to go in the washer.
Adjustable Cup Holder
Every child wants to keep their belongings close to them, especially when it is food or drink.
Therefore, this car seat comes with individual drink holders where your baby can keep their drink bottle and, if necessary, their snacks. Besides, these cup holders are adjustable and movable, which makes placing things easier.
Reclining
This car seat offers you four different reclining positions. It makes your growing child relax every time.
There is an adjuster in front of the chair, so you can easily adjust it by pulling the adjuster. Moreover, for a suitable recline angle, choose position 1 or 2; for forward-facing, you can go for position 3 or 4.
Furthermore, the luxurious inserts in the rear mode provide your child with a cozy and comfortable environment.
Latch Connectors
Graco comes with LATCH straps for both forward- and rear-facing modes. However, it has only one belt, so it has to be moved when you change modes.
It would be nice if it had two latch straps, but it is very easy to change: all you need to do is move the strap from the rear-facing (RF) to the forward-facing (FF) position, which is under the red bar.
Harness Option
According to the American Academy of Pediatrics, a 5-point harness is highly recommended for a child’s safety. With that said, Graco Milestone comes with a 5-point harness, which extends to fit children of around 65 pounds.
It is essential to have an extended harness at least until your kid reaches the maximum height for their seat. Hence, this car seat will conveniently fit your growing child.
Easy Assembly
One of the unique and special features of this car seat is its easy installation. It is most helpful for busy parents as they do not have to waste a lot of time to assemble it.
Besides, it does not even take more than five minutes to change the latch belt from rear-facing. The belts and labels are also color-coded, which makes your life easier.
Even though the latch and harness straps are secured to the seat, they can be easily loosened when needed.
Wrapping Up
Finally, if you are a person who is looking for the best car seat for your child, then this adjustable product can live up to your expectations.
The product we talked about in this Graco Milestone review is one of the top-notch units in the market, which offers you the best quality as well as lots of other advantages.
Therefore, we hope the review of Graco Milestone All-in-1 Convertible Car Seat is informative enough to aid your research.
Furthermore, this unit comes with enough cozy inserts, which provide your child with a safe and relaxed environment while traveling. Besides, features like easy washing, an integrated cup holder, and several reclining positions make this product extraordinary.
Before reviewing this product, we went through months of research. Therefore, as a team, we rate it five stars. We believe this product can make every dollar worth it.
| https://medium.com/@iamfat/graco-milestone-review-all-in-1-convertible-car-seat-ef2fad1d4639 | ['Serena Windy', 'Mssc'] | 2019-11-01 21:26:08.215000+00:00 | ['Baby Products', 'Parenting', 'Graco', 'Car Seat']
My life with PTSD | I was diagnosed with PTSD at the end of 2020. Along with PTSD, I was diagnosed with anxiety and depression. This wasn't really a surprise to me because I could tell something was off. I wasn't feeling myself. I hadn't felt myself in a very long time. I think these diagnoses were around for a long time.
I started my therapy in the midst of a pandemic. Therefore I have video chats with my therapist, in my apartment, with my 4-month-old next to me. It has been hard to adjust to my new diagnosis as well as a new life with a child. I am 22 years old, I have PTSD, anxiety, and depression, with a new child. All during a global pandemic that everyone has to deal with.
Prior to 2020, I lived in a different area with my wife who was going to school and I was working. Any time I could spare I was out in the mountains, exploring, taking photos, driving around, and so many more. I loved to do this. I was happy with my wife and my exploration. We didn't make much money, We scraped by with what we had, trying to pay for college each semester so we didn't have to get a student loan. We lived in a tiny one-bedroom apartment that was under 400 square feet. Our bathroom had a door that was just wider than two feet, the shower head was sitting at five feet, so you had to bend to get your head wet. The kitchen had a fridge that was five feet tall, and like two feet wide, the stove could barely fit a cookie sheet longways and a sink. I love to cook and I had two feet by two feet counter space. It was a nightmare. The entrance to the kitchen was about two and a half feet wide and had no windows. Oh, and the entire apartment was made out of cinderblocks. So the wall inside and out was concrete. It was a cozy apartment, and I really did like it, there were a few things that bothered me but overall I really did like the place. It was nice to be close with my wife when we were first married and it helped us learn how to better live together.
Fast forward to 2020 and we moved to a new area that we both were somewhat familiar with because my wife graduated and was offered a job. We moved and found out we were pregnant within the next month. It was an exciting time as we learned more about our baby growing and developing. We decided my wife would work and I would stay home with our child. I had just started college back up at the beginning of 2020. This just seemed to be the best option for us. We went to the hospital the day of the birth and it was a long 12-hour labor. This was one of the factors of my anxiety. (To clarify, I had all the signs of depression and anxiety prior to this, and my PTSD is from prior things that I might get into in another post.) The pregnancy and the labor were definitely hard. Labor especially was traumatic. My child got stuck in the birth canal, and my wife blew her IV and was bleeding out. It was definitely a hard day, as were the days after in the hospital due to complications with my wife and child. I think I got around three hours of sleep in three days. It was not good. This added to my anxiety; when I don't get much sleep, my anxiety goes up.
At the end of 2020, I started going to the doctor and talking about my mental health. I was recommended to my temporary therapist just to see if I needed to find a long-term therapist. I am now diagnosed with PTSD, depression, and anxiety, and I am six sessions in with my therapist. I will soon start with my long-term therapist and start working harder on my PTSD specifically. I had never been to therapy, prior to this, so it has been interesting to only go over the video call. It's not like what you see in any movie that's for sure.
I have seen an improvement since I have gone to my therapist. Especially in my suicidal thoughts, which is always something they want to focus on and tackle right away. It has been nice to be able to clear my mind from those thoughts or not even have them because I have taught my mind to think of other things. I am doing better in my anxiety and my depression, but it definitely is still there. Better as in its not at the top of the scale, it's more of the high middle, like a 1–10 scale I was at a 10 and now I'm at a 7 or so. All depending on the day.
My life has definitely changed in the last year. So has everyone's, though. Amidst a global pandemic that is well over a year old now, life for anyone is no longer the same. For me, the pandemic has affected everything we do, and adding to that a new baby, PTSD, anxiety, depression, and more suicidal thoughts, it has not been the best year. 2020 was amazing in that I got to spend more time with my wife and that we had a baby; don't get me wrong, that was amazing. The hardest parts were the effects of the pregnancy and the labor, as well as the pandemic and my diagnoses.
Trying to resume everyday life has been hard enough for everyone with this pandemic. Having to try and figure out my new normal with a child and PTSD has made it all the harder. The nightmares, the day terrors, and the flashbacks have been all too real in a world where I can't get out of my head. I honestly know that my wife and baby have saved my life. I don't think I would have any purpose without them. I would still be in the crappy situation that I was in prior, just going down an ever-deepening hole. I can thank my wife and baby for keeping me here and improving my life.
It has been a struggle to deal with all of this. Trying to relive past experiences with my therapist to help me deal with them has been the most triggering thing I have done. I have had triggered times in my life, but bringing them on, on purpose… has been a whole different story. Trying to go so far back in my head to pull out things that happened 15 plus years ago, has been a very hard and time-consuming process.
Trying to do this all while not sleeping due to a new baby, and my usual not sleeping. It has made things worse. I had a therapy appointment today, and we talked about my sleep and how my not sleeping has added to this spiraling effect. Where when I don't get much sleep I start to get more anxious and then that makes it harder to fall asleep so I get even more anxious so it's even harder to sleep and so on. I struggled to sleep prior to this at times but now that I add the lack of sleep from my baby, it has made it hard to get away from this spiral.
I started this blog to track my thoughts and emotions. I wanted to write them down and hopefully get them out of my head. I'm going to try and head to bed because it's 11:40 and I need all the sleep I can get. If anyone reads this, I hope it was understandable as well as helpful if you are going through something similar. And if you are going through something similar, hang in there, it will get better, and seek help if you can.
Thank you for reading, and I'll see you in the next one.
— Tanner
1/20/2021 | https://medium.com/@tannersstory/my-life-with-ptsd-ac28675a93e1 | [] | 2021-01-21 06:40:51.834000+00:00 | ['Baby', 'Life', 'Pandemic', 'Blog', 'PTSD'] |
How to become a social influencer. | As a teenager social media almost completely shapes my life. Like, follow, post, and share are almost like the bible to me, over the years I have built up a small social following on many apps. And I’m guessing you want one too? Here are the steps to becoming the next big hit.
Photo by Viviana Rishe on Unsplash
Step 1: Make people want to be you
The reason people follow influencers is that you want to be like them, be friends with them, be them. Make yourself the thing that everyone else wants: to be young, hot, and interesting.
Step 2: Move to LA
Isn’t the reason people are popular because of Los Angeles? To become the ultimate influencer you need to move to LA. People will follow you just because of F.O.M.O (fear of missing out). And while you’re in LA you might as well make a few friends just to social climb.
Step 3: Have a scandal or a few…
Isn’t having a scandal the perfect thing? It keeps your name out there and gives you so much attention. Some people today are only relevant because of scandals, like Trisha Paytas. Sure, saying the N-word and doing blackface are bad, but people forget over time.
How to Be Confident in Your Entrepreneurial Journey | What is the most important part of being an entrepreneur? Confidence. You have to believe with every fiber of your being that you‘re doing the right thing, on the right path, and that the hard work is totally worth it.
Sometimes I joke that working 14-hour days must make me a hustler, but it was true. I started my business while I still had a full-time day job, a sales career I’d built for almost 10 years and been successful in. I enjoyed my career and the opportunities I had within it.
I’ve met some of the most amazing people, learned from the best bosses, and gotten marketing, conference, travel, and training opportunities I never would have had otherwise. I appreciate what I accomplished, and in some ways the transition to entrepreneurship has been bittersweet.
I’d been working nights and weekends for a few months building my own writing business. Every free moment was spent communicating with clients, research for articles, writing articles and copy for websites, editing, sending it to the client, posting it online, sharing, etc. And when I wasn’t doing that, my time needed to be spent marketing myself and my services and looking for additional clients, then following up with potential clients, proposals that are out, checking on invoices, and applying to additional writing gigs.
I was so busy, and it was exhausting, but it was also pretty amazing to see what I’m capable of. I may have been tired sometimes, but I was also invigorated and interested in my clients and what I was doing. I was building something from nothing, which is extremely exciting and also terrifying.
Interestingly, many of my clients are entrepreneurs themselves, so I’m also surrounded by incredibly smart, funny, interesting people who completely understand my journey and are excited to be a part of it.
I’ve also had the very interesting discovery of learning to utilize Instagram as best as I can. I’d never really used it at all, but decided to give it a shot, and hey — free marketing. It’s been fantastic! I’ve gotten half of my client list from Instagram. I’ve been enjoying posting pictures, engaging with people, and really finding fascinating new people all over the world to follow. It’s a really cool visual platform.
As I got more clients, I took my leap of faith — in myself. I quit my day job to focus on writing and editing full-time.
That was over 18 months ago, now. And every day I am confident in my journey and my abilities and myself.
I’m busy, but I am supported and growing and learning and excited and some days I don’t know exactly what I’m doing, but I keep learning and researching and I will not stop. I will NOT STOP because I know I can do this.
What’s an important quality to being an entrepreneur? CONFIDENCE.
I am good at what I do, I deserve to do it, and I am bringing great value to my clients. I am completely confident in these things.
How can YOU be more confident?
Identify what you’re good at. What do you feel you are good at and like doing? What special skills do you have?
Once you know what you’re good at AND feel good doing, you’ll feel that spark of confidence — you KNOW this is something you’re great at, no matter what it is.
Body language. Act confident, walk with your shoulders back and head up. Project confidence.
Research if there is a way for you to use your skills in a career you’d find fulfilling. I am confident in my writing, and I found a way to be a writer.
I know it sounds a bit simplistic, but confidence boils down to how you FEEL about your skills and yourself. You don’t have to be confident in everything about yourself to be successful. That’s not realistic for most people.
Find something you ARE confident in and build from there.
And remember — you can fake it ’til you make it. When you project confidence and act confident, you will internalize that feeling and the reactions and will continue to act that way, which eventually becomes a real part of you. | https://jyssicaschwartz.medium.com/how-to-be-confident-in-your-entrepreneurial-journey-7bd0d8a6a11f | ['Jyssica Schwartz'] | 2018-07-24 16:49:18.633000+00:00 | ['Entrepreneurship', 'Writing', 'Confidence', 'Learning', 'Life Lessons'] |
Mexico WILL Pay For The Wall | Donald Trump says there will be a border wall, and Mexico will pay.
I hate to be the one to break this to you liberals out there, but if Donald Trump becomes President, and builds an effective wall across our southern border, then Mexico will pay for it whether they want to or not. Repeatedly.
Right now, Mexican immigrants in America send tens of billions of dollars of their income (or welfare) back to Mexico each year, in the form of what are called “remittances”. According to the Government Accountability Office (GAO), in 2014 about $25 billion in remittances were sent from America to Mexico (see this article in Breitbart).
Assuming that there are, very roughly, about 25 million legal and illegal Mexican immigrants, combined, in America, then I’ll give a very rough estimate that the average immigrant is sending back about $1000 per year. So if an effective southern border wall cuts the total number of Mexican immigrants in America by only 4%, or about 1 million, then that will represent a loss to the Mexican economy of $1 billion American dollars per year, in perpetuity. That same money that would otherwise go to Mexico would be earned by American workers instead, who would spend it back into the American economy, taxes and all.
This would also take those currently-unemployed American workers off of welfare, without replacement, as the Mexicans who would otherwise have taken those jobs would never have entered the country. If each of those 1 million Americans was using welfare services totaling only $20,000 per year, then that comes to $20 billion dollars in annual savings for the welfare system, which is probably enough money to finance an entire, very substantial border wall in significantly less than one year (although it would take longer than that to build).
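The back-of-the-envelope figures above can be reproduced in a few lines of Python. Every input below is the author's rough assumption (25 million immigrants, $25 billion in annual remittances, a 4% reduction, $20,000 in annual welfare per worker), not a measured statistic:

```python
# All inputs are the article's rough assumptions, not measured figures.
immigrants = 25_000_000            # rough count of Mexican immigrants in the US
remittances = 25_000_000_000       # annual remittances to Mexico, USD (2014)

per_immigrant = remittances / immigrants     # ≈ $1,000 sent per immigrant per year
reduced = immigrants * 4 // 100              # a 4% reduction ≈ 1 million people
lost_remittances = reduced * per_immigrant   # ≈ $1 billion per year kept in the US
welfare_savings = reduced * 20_000           # ≈ $20 billion per year in welfare

print(per_immigrant, reduced, lost_remittances, welfare_savings)
```

Changing any single assumption scales the conclusions linearly, which is why the author stresses how rough the estimate is.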
So it makes no difference whether or not Mexico literally writes us a check to pay for the building of a border wall between our two countries. The only thing that has to happen in order for Mexico to be forced to pay for a wall, is for the wall to function successfully as a wall. Like it or not, directly or indirectly, Mexico will pay.
Then, if we want to help Mexico with their economy, we find some way to do it that doesn’t involve importing all of their unskilled labor. | https://medium.com/the-insomniac/mexico-will-pay-for-the-wall-ad08b56a18a | ['Kevin Grant'] | 2016-11-28 05:27:09.270000+00:00 | ['Mexico', 'Payments', 'Unemployment', 'Donald Trump', 'Immigration'] |
Be a man of understanding | “Be a man of understanding and understand everyone is not going to have the same understanding.” Kevin Gates quote.
Message for all men out there
Every man needs to understand that violence is not needed in every situation where you disagree with another man. Understanding will help you see the other person's perspective, giving you the reason why they view things the way they do. Be aware: next time you are in any situation, be understanding before you react.
10 Interesting facts about Blockchain | Blockchain is a complex and exciting subject and the facts that accompany it are numerous.
Whether you are a novice or an expert in the field, there may be things you may not know about Blockchain. I hope this article will bring you some additional knowledge.
Here you will find 10 interesting facts about Blockchain:
1) Bitcoin was created by Satoshi Nakamoto. Is that a woman, a man, a group? Nobody knows. The 10 years of speculation isn’t just on the price of Bitcoin, but also on the creator’s identity. Over the years, several people (and I have one person in particular in mind when I say this) have tried to pretend to be him. How foolish!
2) Bitcoin was alone at the beginning of the road. That’s why it is the “master”. All other coins are called “altcoins”. The first altcoin was Namecoin, created in April 2011.
3) In October 2011, Litecoin arrived! Litecoin and Namecoin are both alive today, among more than 2000 cryptocurrencies. Bitcoin started the game. Some of the 2000+ cryptocurrencies are a scam, some are serious (as Horizen, who arrived in 2017), some are semi-serious. There is even a cryptocurrency built around… a dog!
4) There is no blockchain without cryptography, but that doesn’t mean that all blockchains are encrypted! Cryptography finds its roots in the Cypherpunk movement, best known from 1990–1992, but born in 1980 (cryptography already existed, the Cypherpunk Movement made it proactive). There is a famous mailing list, the Cypherpunks electronic mailing list. The ideas contained within were used by Satoshi Nakamoto to create Bitcoin. Is Satoshi also cypherpunk?
5) Bitcoin turned 10 years old in 2019. The first Bitcoin transaction was for two pizzas, for 10,000 BTC (now 10,000 BTC is worth approximately $102,620,000. I hope the pizza seller decided to HODL!).
6) Bitcoin isn’t anonymous; it is pseudonymous. Some of Tor’s users were a bit confused and regretted some sales and purchases.
7) Who owns one of the largest Bitcoin wallets after Satoshi Nakamoto? The FBI. After shutting down Silk Road, the darknet site involved in criminal activities, in 2013, the US government confiscated a large amount of Bitcoin. The authority decided to sell off some of the BTC through an auction.
8) It’s estimated that around 2 million bitcoins have been stolen and 4 million bitcoins have been lost to date (according to https://thinkmaverick.com). So keep your private key safe!
9) Bitcoin (and cryptocurrencies in general) is forbidden in some countries: you will find the list here.
10) The world banking sector will save up to $20 billion by 2022 by implementing blockchain, according to a study conducted by Accenture.
11) [BONUS] The last anecdote of this article, but not the least… Another fool made a public bet on the Bitcoin price. He stated that Bitcoin would reach $1 million by the end of 2020. If not, he promised to… Well, decency makes me tell you to look at this tweet on the subject instead.
You can learn more about blockchain and cryptocurrency at the Horizen Academy
And you, which fact do you find interesting and you would like to add to the list? Let me know! | https://medium.com/@manon_71723/10-interesting-facts-about-blockchain-51cc3db2309d | [] | 2019-07-01 16:10:43.233000+00:00 | ['Horizen', 'Bitcoin', 'Facts', 'Blockchain', 'Nakamoto'] |
Has Apple Won with Apple One? | Has Apple Won with Apple One?
Why I will be getting Apple One and if it makes sense for you.
All technology podcasts, magazines, blogs and websites have covered Apple One in great detail. They go over the things that could be better, like larger iCloud storage sizes for each option, and discuss Apple’s clear goal in getting more into services.
These things about Apple One are interesting but I wanted to talk about how one specific tier not only benefits me but feel it may be one of the best deals Apple has ever offered.
I have been a big fan of iCloud storage and the benefits of having all of my photos, messages, app data and other data in constant sync with Apple's cloud storage, making it available on my Mac, iPad, or iPhone anytime I need something. Not only does Ulysses sync each post that I have started writing, but if I forget to email a document that is in my downloads folder on my Mac, I can get that document through my iPhone and email it from there.
Apple’s 2 TB iCloud tier was my biggest reason for why I now only buy iPhones and iPads with the base storage option. If you have enough iCloud storage and you trust Apple to optimize and store your data that is not being used in the cloud, you don’t need a ton of local storage. It also makes it so whatever device you pickup you know you will always have access to the same data. It is magical.
So, iCloud storage for me is a big necessity. Not only do I have all of my data syncing but I also have my wife’s data syncing as well using the Family Share feature. So, all the same things that I can do with my gear, I also have set up for my wife. The biggest benefit of all though, is iCloud Photo sync. We are using about 500 GB of the 2 TB storage for just our photos syncing to iCloud.
Now looking at the other services the Apple is offering in the Premium bundle, I am currently only signed up for Apple Music (Family plan), News+, and AppleTV+ (still on the free trial from iPhone 11 Pro). So picking this option I will also be getting Apple Arcade and Fitness+. I had Arcade for a while but didn’t fall in love with any game and Fitness+ is not yet available, so we will see how good that is.
Overall, though, I am already paying for two-thirds of the services offered in the Premier bundle. Once the Apple TV+ trial ends, I will be paying $39.96 in total for all of my services without the bundle.
So, right off the bat I am going to be saving $10.01 by signing up for the Premier bundle, since it only costs $29.95. Even if you don't currently pay for News+ at $9.99 a month (I know many people think it isn't worth it), the bundle costs about the same as the remaining services combined, and you are getting News+ extra, let alone all the other services not mentioned.
If we break down the cost of all the services in the Premier bundle, it comes to about $54.94. That is almost double the $29.95 that Apple is charging for their top-tier Apple One bundle. Even if all I do is continue to use the four services I have and never touch the other ones, I am still saving money, which is pretty great.
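As a sanity check, the totals can be recomputed in a few lines. The per-service prices below are assumptions based on Apple's US pricing at the time (Music family $14.99, News+ $9.99, TV+ $4.99, iCloud 2 TB $9.99, Arcade $4.99, Fitness+ $9.99); verify against Apple's current price list before relying on them:

```python
# Assumed monthly US prices at the time of writing (not official figures).
current = {"music_family": 14.99, "news_plus": 9.99, "tv_plus": 4.99, "icloud_2tb": 9.99}
premier_extras = {"arcade": 4.99, "fitness_plus": 9.99}
premier_price = 29.95  # Apple One Premier

current_total = round(sum(current.values()), 2)                      # what I pay today
savings = round(current_total - premier_price, 2)                    # monthly saving
full_value = round(current_total + sum(premier_extras.values()), 2)  # all six, bought separately

print(current_total, savings, full_value)
```

Under these assumptions the bundle saves a little over ten dollars a month while adding two services on top.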
I look forward to seeing how Fitness+ goes. Since we are all stuck at home and I have an Apple Watch, I am very curious to see if I take advantage of this service for my new workout routine. I am also interested in Arcade more now that I have an iPad. When Apple Arcade was originally announced I only had a MacBook Pro and iPhone, so I am curious to try some of these games on my iPad.
I really hope that Apple comes up with more bundle options in the future. Right now the only one I see worth it is the Premier option. But if I were to recommend any of these to friends or family it would be difficult. This is mostly due to the small storage options for both the Individual and Family option. I think you can add more storage on top of the bundles if you want but that defeats the purpose of a bundle since you will be paying separately for more.
For now, though, I am going to jump on this deal as soon as it becomes available in the next month or so. The benefit of saving money is huge but also having a single charge each month will also be better. Right now all of my services are charged on different days for different amounts which can be annoying. Having a single charge each month makes my minimalist brain tingle with excitement. | https://medium.com/techuisite/has-apple-won-with-apple-one-adaa7d7a9186 | ['Paul Alvarez'] | 2020-09-22 14:13:04.680000+00:00 | ['Events', 'Gadgets', 'Technology', 'Apple', 'Services'] |
Innovative Mind Mapping System Connecting to Smart City IoT Networks | How it works
To create the semantic network of public services, CEFAT4Cities partners start from a few abstract templates that describe what a public service looks like (who can submit a form to get access to which service, providing which type of proof?) and what the interacting entities look like (are we dealing with an organisation or a citizen?). These abstract templates consist of nodes and links (hence the term “semantic network”) and are provided by the European Interoperability Framework which governs data standards to ensure that data can be used across as many applications as possible.
Next, these templates are used as extraction filters to transform unstructured human natural language (occurring on websites, online forms, etc.) into machine-readable semantic networks which can be utilised in any software application. In short, thousands of pages of raw text are transformed into structural representations that can be used by machines, such as chatbots.
The process runs as follows: data is collected automatically from websites, then only those pages containing public service information are selected. Next, paragraphs describing administrative procedures are extracted and syntactically analysed to identify nodes occurring in the template. Finally, relations between the extracted nodes are identified (the most challenging part of the process) and the information is delivered in a standardised Open Linked Data format.
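As an illustration, a heavily simplified version of the extraction step might look like the sketch below. The template, pattern, and node types are hypothetical stand-ins, not the project's actual models, which rely on trained classifiers and syntactic parsers rather than a regular expression:

```python
import re

# Hypothetical extraction filter: one template linking a public service to the
# evidence a citizen must provide. A regex stands in for the real parsers here.
SERVICE_PATTERN = re.compile(
    r"To apply for (?P<service>[\w\s]+?), citizens must provide (?P<evidence>[\w\s]+)\."
)

def extract_semantic_network(paragraph: str) -> dict:
    """Turn one administrative paragraph into template nodes and links."""
    match = SERVICE_PATTERN.search(paragraph)
    if not match:
        return {"nodes": [], "links": []}
    service = match.group("service").strip()
    evidence = match.group("evidence").strip()
    return {
        "nodes": [
            {"id": service, "type": "PublicService"},
            {"id": evidence, "type": "Evidence"},
        ],
        "links": [{"source": service, "target": evidence, "relation": "requires"}],
    }

network = extract_semantic_network(
    "To apply for a parking permit, citizens must provide proof of residence."
)
print(network)
```

The real pipeline applies the same idea — fill template nodes, then link them — but with multilingual AI components at each stage.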
Note that the process designed can deal with all sorts of raw text in various EU languages.
To achieve all this, multilingual artificial intelligence techniques are used, such as automated classification, topic modelling, clustering, syntactic parsing, machine translation, unsupervised bilingual language induction, shallow parsing, paraphrasing, and question/answer pair generation.
Through a dedicated data schema, the Open Linked Data format used to publish results, is compatible with the FIWARE Context Broker. Any follow-up effort or downstream software application can use this schema to publish or subscribe to the public services content created.
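For example, a downstream publisher might shape each extracted service as an NGSI-v2 entity before pushing it to an Orion Context Broker. The entity type and attribute names below are illustrative assumptions, not the project's published data schema:

```python
import json

def build_public_service_entity(service_id: str, name: str, language: str) -> dict:
    """Build an NGSI-v2 style entity; field names are illustrative, not the real schema."""
    return {
        "id": f"urn:ngsi-ld:PublicService:{service_id}",
        "type": "PublicService",
        "name": {"type": "Text", "value": name},
        "language": {"type": "Text", "value": language},
    }

entity = build_public_service_entity("parking-permit", "Parking permit application", "nl")
payload = json.dumps(entity)

# A real deployment would POST this to the broker's NGSI-v2 endpoint, e.g.:
# requests.post("http://localhost:1026/v2/entities", json=entity)
print(payload)
```

Subscribers registered with the broker would then be notified whenever such an entity is created or updated.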
Figure 1. Architecture
When developing the solution, several challenging issues were discovered. As described earlier, discovering links between nodes (connecting for example an administrative procedure and all the evidence a citizen must provide to fulfil it) proved to be a non-trivial task. A unique solution had to be built, combining syntactic parsing and classification, since no out-of-the-box components existed to do this. Throughout the pipeline, a balance was needed between using monolingual AI models and multilingual AI models using translated data, since many linguistic AI models only exist for a couple of languages.
Finally, often the language itself was problematic. Current AI models excel at “recognising” the meaning of a word when it appears within a larger body of text, but when words occur in isolation (for example in a title or a table) recognition and translation become more difficult. In addition, there is also the typical “call-to-action language” used on websites. This sort of language (for example, “Need more info on the necessary formalities to start as a self-employed professional?”) typically packages pieces of declarative information as questions, throwing the system's question/answer pair extraction off balance.
Burning out at Home: 4 Common Challenges for Remote Work | Written by Rebecca Lee, Lillie Sun & Christine Yip, edited by Crystal Li
With plans to work from home lasting longer than any of us expected, it's time to buckle down and address some of the real challenges that come with it.
When the stay-at-home orders first came into play, many of us thought we'd get a chance to take a breath and slow down from our day-to-day grind. Some have. For others, that expected sigh of relief has yet to come. Though our work environments may have drastically changed in the last few months, the risks of burnout have not. In fact, we may be at risk now more than ever.
The problem is, employees aren’t just working from home. They’re simultaneously adjusting to the unique needs that working remotely brings with it, all while balancing personal and family responsibilities. On top of all that, there is a big shadow over everyone’s heads — anxiety. Anxiety about job security. Anxiety about staying safe. Anxiety about what comes next. All of this adds stress to our day-to-day lives, and the more stress that we experience at one time, the higher risk we are of burnout (see our post “demystifying burnout” for a review of the impacts of burnout).
What does the research say?
Despite the extraordinary circumstances we are in, it is possible to adapt to our circumstances, and find ways to manage the impact of these demands on our psychological health. Researchers studying stress and burnout at work have long studied the relationship between job demands, stress, and burnout. The job demands-resources (JD-R) model is a framework of stress at work that is supported by decades of research. Using this model, researchers have demonstrated that high levels of job demands (e.g., high workloads, workplace conflicts, unstable schedules, etc.) will lead to stress and eventually burnout, but there are certain job resources (e.g., supervisor support, stability in work hours, clear role expectations, etc.) that can reduce stress and minimize the risk of burnout. For a full list of common job demands and useful resources supported by research and practice, check out our list of 10 Stressors Leading to Burnout.
We drew from the JD-R model and its supporting research to help guide our thinking, and serve as a foundation to understand which job demands might be playing a greater role for those working from home right now. The following is a list we compiled of some of the more pressing job demands many are likely experiencing, and some evidence backed recommendations on how to address them.
1. Blurred Boundaries
Right now, boundaries between work-life and home-life may feel non-existent. Many people are working from home with less than ideal “offices” and new “coworkers” (children, parents, pets). These circumstances make it difficult to concentrate on one task at a time, leaving many of us to feel like we are unable to juggle being an effective employee, partner, parent, friend, and/or caretaker.
Take Action
Define a daily routine for your work and life activities and stick to it. This might mean a dose of healthy communication (and perhaps negotiation) with your partner, your manager, and your team. If you share your calendar with your team, consider blocking out your off work times so people can easily see when you’re available (or not!).
Identify your boundaries. If you know you need a break from the endless screen time during the day, make that happen. This might mean you have to ignore the ping or email coming from your boss or your team over lunch. Though this may seem obvious or overly simplistic, knowing and enforcing our boundaries takes intention, but the rewards of doing so are worth it.
Similarly, ask your family members to respect your working hours and home office space. Loved ones can sometimes assume that since you’re home, you’re available, which can make it difficult to concentrate on working efficiently.
2. Uncertainty
One of the biggest challenges of working during these circumstances is the rising anxiety and uncertainty around us. We have quickly transitioned from worrying about whether we’re late to our next meeting, to whether our savings accounts will last, and whether the people we love will be safe. While focusing on your work can serve as an important distraction from the pandemic and provide some sense of stability, it is undoubtedly harder to maintain focus, keep motivated, and maintain productivity at this time.
Take Action
At organizations today, there should be nothing more important than the promotion and support of mental health and wellness of employees. A supportive supervisor (and team) must be able to adjust work expectations and show empathy and compassion for all employees, not just those that are visibly struggling.
If you need assistance, try to start a dialogue with your supervisor about how you're adjusting to the “new normal” and ask them for advice. Don't hesitate to take advantage of the mental health and wellness programs your company offers. These employee assistance programs (EAPs) can help you manage through these new uncertainties.
If this is one of your first times feeling overwhelmed or experiencing increased levels of anxiety, take a step back from your work and your thoughts. Though it won’t solve all your problems, regularly take walks, meditate, or close your eyes and listen to some music to detach your mind and body from your anxious thoughts, even if just for a little while.
3. Unclear Expectations
Unfortunately, COVID-19 has impacted many workplaces, resulting in furloughs, lay-offs, and staffing shortages. If you're currently still employed, you may be feeling mixed emotions — thankful to still have a job, guilty that your colleagues do not, and also fearful that you might be next. On top of this, you may find yourself having to pick up additional tasks and unfamiliar roles from those who were let go. This increased uncertainty and workload can add stress and pressure to an already difficult situation.
Take Action
It is important that supervisors help clarify with team members about how work will be redistributed, and what the changing expectations are. This is especially important if roles and responsibilities are shifting from one team or individual to another. If you manage a team, make sure to reiterate the change to the team so that everyone is aligned about who is responsible for what moving forward.
Two-way communication between you and your supervisor will be critical as everyone works through the new normal. Clarify what is expected of you, inquire about what help is available if you are overloaded, and work through what should be prioritized. Setting the tone of “we are all in this together” will be important.
It is important for managers and leaders to also provide positive reinforcement to their teams and ensure they are being recognized for their efforts. A surprise pizza delivered at lunch or a card or email to say thank you goes a long way.
4. Childcare Responsibilities
Many parents who are still employed are experiencing unprecedented challenges to balance their workloads with childcare responsibilities. Between endless video calls and homeschooling assignments, parents are being pushed to the limit. As we wait to see in the coming weeks whether businesses, childcare facilities, and summer day camps will be opened, it continues to be a juggling act, leaving parents little time to unwind and detach.
Take Action
Many parents have already figured out creative ways to adjust their schedules and coordinate “shifts” with partners or grandparents to take care of the kids. However, this leaves little time for parents to have some downtime to unwind.
Finding time to mentally detach from your responsibilities and give yourself time to “re-energize” will be critical to ensure you don't burn out before some of the childcare restrictions are lifted. Take a look at your schedule and find time to replenish your own energy; this could be 10 minutes of quiet meditation in the morning, a late-night run, or listening to a podcast before bed. As important as it is to take care of others, you can't be there for those you care about if you are not also taking time to take care of yourself!
Regardless of what the new “normal” is for you, your team, and your workplace, it's important to remember that we all deal with uncertainty and anxiety in different ways. The current demands and complexities we are facing make burnout a real risk if people do not have sufficient resources to cope with the added demands. The key to preventing burnout is first being aware enough to recognize the extra demands that are causing you significant stress. This awareness can help you identify the appropriate resources or strategies to minimize the strain on your well-being. By asking for help, making positive changes to the things you can control, and being patient and kind to yourself as you work through it, you can make your current situation a little bit easier to manage.
To help managers working remotely for the first time, we have published some helpful guidelines for managing remote teams on our website. Visit our Resources page to access them now. | https://medium.com/organizations-for-impact/burning-out-at-home-4-common-challenges-for-remote-work-521d168710c7 | ['Organizations For Impact'] | 2020-06-04 20:43:53.873000+00:00 | ['Work From Home', 'Burnout', 'Remote Work'] |
Reflection on Amal Totkay | Before reflecting on Amal Totkay it is essential to share why Amal Academy focuses so much on them and what they do?
Amal Totkay, as the name indicates, is a set of life hacks that will help anyone develop a growth mindset. In Amal, we are taught that a growth mindset is a leadership trait. Why so? Because people with a growth mindset have an underlying belief that their learning and intelligence can grow with time and experience. In people with a growth mindset, the brain is most active when they are being told what they could do to improve. They view setbacks as an opportunity to learn. They tend to try harder in an effort to overcome the problem. A leader with a growth mindset sees opportunities for their team, even during times of crisis.
The 5 Totkay that Amal shared are
1. Self-Talk
2. Get Out of Your Comfort Zone
3. Create New Habits
4. Ask People Help
5. Fake It Till You Make It
I find all 5 Totkay very effective, and all of them contribute to developing a growth mindset, but the one I like most is creating new habits. This one has the potential to make someone a lifelong learner, and the right habits will help me reach my goals.
“You’ll never change your life until you change something you do daily. The secret of your success is found in your daily routine” — JOHN C. MAXWELL
I am already working on changing my bad habits with good ones and the first on my list is to decrease my time on social media and spend that to read/write blogs.
To develop a growth mindset we all should start implementing these Totkays one by one rather than making a sudden change and ending up at the same spot. | https://medium.com/@zwaqas78/reflection-on-amal-totkay-3fb5eefd0a42 | ['Muhammad Waqas Zafar'] | 2020-12-25 11:06:49.320000+00:00 | ['Growth Mindset', 'Amal Fellowship', 'Amal Academy', 'Leadership'] |
I Created an Alexa Skill to Settle an Argument With My Wife | Then it comes the fun part:
1. Design what the skill should do if it's a standalone service
2. Find out how to implement it with Alexa
In my design, the skill should be able to give an “outdoor score” in the range of 0–100 for a city within a time frame. The score should reflect how comfortable it was, so I came up with a formula that takes temperature, humidity, cloudiness, precipitation into account. There’s a range for perfect temperature and humidity. Once it’s out of the range, the formula gives a linear penalty depending on how far it deviates. The percentage of cloudiness and precipitation also plays a big role. A place such as San Diego should consistently score very high.
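For illustration, here is a minimal Python version of such a scoring formula. The comfort ranges and penalty weights below are my own guesses at plausible values, not the skill's actual parameters.

```python
# A simplified "outdoor score": start from 100 and subtract linear penalties
# when conditions leave a comfort range. All ranges and weights here are
# illustrative assumptions.

def outdoor_score(temp_f, humidity, cloud_frac, precip_frac):
    score = 100.0
    # Linear penalty for temperatures outside a comfort band of 60-75 F.
    if temp_f < 60:
        score -= (60 - temp_f) * 2
    elif temp_f > 75:
        score -= (temp_f - 75) * 2
    # Linear penalty for humidity outside 30-60%.
    if humidity < 30:
        score -= (30 - humidity) * 0.5
    elif humidity > 60:
        score -= (humidity - 60) * 0.5
    # Cloudiness and precipitation fractions (0-1) weigh heavily.
    score -= cloud_frac * 20
    score -= precip_frac * 40
    return max(0.0, min(100.0, score))

print(outdoor_score(70, 50, 0.25, 0.0))  # a pleasant day scores high: 95.0
print(outdoor_score(45, 80, 0.75, 0.5))  # cold, humid, wet day: 25.0
```

With weights like these, a consistently mild, dry place such as San Diego would indeed score high most days.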
The skill should also be able to summarize the weather using these properties so that the users get an idea of the overall picture and the reason why a certain score was given.
There’s some nitty-gritty in timezone handling and data munging, but the logic isn’t too complicated to implement. Before I built the actual skill, I wrote the logic and tests in Python, then used FastAPI to make it a service and deployed it on Heroku so that perhaps the Alexa skill could directly call it later.
How to Build a Custom Alexa Skill
This part is where the learning is. I had no prior experience developing Alexa skills, so I searched for tutorials and followed their documentation. The most helpful example was the City Guide Skill Sample. You can follow its step-by-step setup instructions to build your own skill.
In a nutshell, one can register as an Alexa developer, create a skill in the dashboard, and build up each component of the skill all in their web UI. The components include
The skill’s invocation name, as in “Alexa, ask <skill_name> to …”
An interaction model that has intents, slots, and utterances
Optional multi-modal responses, such as audio and video.
An endpoint that serves the requests. It can be an Alexa-hosted Lambda function (recommended) or a third-party service
During the process, I found out that using a third-party web service as the backend is quite a hassle. Going serverless and using the recommended Alexa-hosted Lambda function is easier and more secure. It also handles scaling if the skill ever gets too many requests. Since I wrote my logic as an independent module in the FastAPI case, I can easily reuse it in the Lambda function. The only hoop I had to jump through was the IAM role access so that this Alexa-hosted Lambda can query my DynamoDB.
Once I got familiar with the Alexa Developer Console, I was able to create the Interaction Model right there with a few clicks.
If you know some natural language processing, this interaction model is basically named-entity recognition (NER) and intent classification. NER is for finding slots. They are the variables you can extract from an utterance (the sentence that the user says to Alexa). For example, if you say “Alexa, what’s the weather in {New York}?” New York is a {city} variable that you can extract and use in your code. If you provide enough utterances with labeled slots, the NER model can generalize and extract similar words or phrases as the slot from utterances it has never seen before.
Intent classification is for identifying the user’s intent. In the case of Weather Score, I defined 3 intents at the moment: score, summary, and detailed summary. If the user asks “what’s the score for Seattle last week”, the intent should be classified as score, and it should trigger the score intent handler in my Lambda function which then queries and calculates the score for the city and timePeriod slots mentioned in the utterance. If the user says “give me a detailed summary of Honolulu last month”, it should trigger the detailed summary intent handler and output an hourly summary I defined.
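The dispatch itself can be pictured with a dependency-free sketch. The real skill uses the ASK SDK's handler classes; the intent and slot names follow the text above, but the handler bodies are placeholders rather than the actual query logic.

```python
# A stripped-down version of intent dispatch in the Lambda function: the
# classified intent names a handler, which reads the extracted slot values.
# Handler bodies are illustrative placeholders.

def handle_score(slots):
    return f"The outdoor score for {slots['city']} is 62 out of 100 {slots['timePeriod']}."

def handle_summary(slots):
    return f"Here is the weather summary for {slots['city']} {slots['timePeriod']}."

HANDLERS = {"ScoreIntent": handle_score, "SummaryIntent": handle_summary}

def dispatch(intent_name, slots):
    handler = HANDLERS.get(intent_name)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(slots)

print(dispatch("ScoreIntent", {"city": "new york", "timePeriod": "yesterday"}))
# The outdoor score for new york is 62 out of 100 yesterday.
```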
The training of these models is completely encapsulated into a few clicks. This is quite nice. Even if the developer doesn’t know machine learning at all, its benefits are tangible with just a few lines of code.
VS Code Extension: Alexa Skills Toolkit
I like VS Code a lot, one reason being its powerful extensions. This Alexa Skills Toolkit extension makes me like VS Code even more. I can test and deploy the skill without leaving my editor! If you would like to learn how to set it up, just follow this guide here.
Screenshot of my VS Code Extension: Alexa Skills Kit
Local Debugging
One thing I particularly like is being able to step through my code line by line with breakpoints. It requires some extra steps to set up for an Alexa app, but the result was worth it. I just type into the test dialog box powered by the Alexa extension, and the execution stops at breakpoints I set for easy debugging. If you are interested in setting this up for local development, I have written a separate article for the details.
Deploy and Submit to Alexa Skill Store
Once my code was ready, I could deploy and promote it to live by clicking some buttons and filling in a form in the Alexa Developer Console. In the form, I described what this skill did and gave some sample utterances. After that, the console runs a validation and tells you what needs to be updated. With everything validated, you can submit your skill for review. It usually takes less than 24 hours. My first review took only ~2 hours.
Here are some sample utterances and responses.
Utterance: Alexa, ask weather score the weather in Seattle last week
Response: For the last 7 days, Seattle had a high of 52 degrees, low of 37 degrees, average temperature 46 degrees. 7 out of the 7 days had precipitation. There were a total 58 hours of precipitation.
This is the “summary” intent. It gives the highest and the lowest temperature at hourly granularity during the time period asked, along with the number of days/hours with precipitation. And wow, Seattle really does have a lot of rain!
Utterance: Alexa, ask weather score the score of New York yesterday
Response: The outdoor score for new york is 62 out of 100 yesterday. There was precipitation.
This is the “score” intent. The response is an outdoor score calculated based on my formula and a brief mention of whether or not it had precipitation.
Utterance: Alexa, ask weather score for a detailed summary of Honolulu for last month
Response: Here is a detailed hourly summary for Honolulu for the last 30 days: the temperature was comfortable most of the time, 68% of the time was humid, 21% of the time was cloudy, 3% of the time had precipitation.
This is the “detailed summary” intent. It summarizes the temperature, humidity, cloudiness, and precipitation over all the hours. It’s a more direct look under the hood for the outdoor score calculation.
To try it out yourself, go to the skill page to enable it if you have Alexa at home. Or you can use the Alexa app on your phone, navigate to the skill store, search for Weather Score, and enable it there. If you find it useful, please give it a 5-star! Any suggestions or ideas are appreciated.
Limitations and Future Work
The most obvious limitation is the range of time and space it can query for now using the free data source.
At this time, the supported time periods are yesterday (default if there’s no time period provided), last week, and last month.
The supported cities include Los Angeles, New York, Chicago, Dallas, San Jose, Denver, Honolulu, San Francisco, Seattle, San Diego, Beijing, Hong Kong, Tokyo, Singapore, Shanghai. This is just the initial list and can be expanded if there’s demand.
With more data, I’m thinking of several potential features to add
Query a specific month, e.g.
“What’s the weather like in {August} in New York?”
Compare two cities over a time period, e.g.
“Compare the weather in San Francisco and Seattle last month”
The response can include the outdoor scores for both cities, a comparison of the percentage of cloudiness and precipitation, and more.
If you have any suggestions, please let me know!
The End
Weather Score is a fun project for me to learn how to build an Alexa skill, and it serves a small niche that I personally wanted. The gist of this article is to let you know that it is quite easy to develop an Alexa skill. If you would like to build one yourself, a good place to start is to think of something you are too lazy to search online but repeatedly need. Find an API that has the data you are interested in, write a Lambda function, click some buttons in the Alexa Developer Console, and that’s it!
After building this project, I realized how cumbersome it is to say “Alexa, ask <skill_name> to …” every time I need to invoke a skill. In my opinion, for a more natural conversational experience, the user shouldn't have to explicitly say the skill name every time. Since the average user won't have too many skills enabled, one idea is for Alexa to add a layer that infers the skill the user wants based on the list of enabled skills they have. Though it may introduce conflicting intents, that is not impossible to solve. If anybody who works at Alexa reads this, let me know if it makes sense.
Thanks for reading! | https://medium.com/swlh/i-created-an-alexa-skill-to-settle-an-argument-with-my-wife-410fbcb6a45a | ['Logan Yang'] | 2020-12-22 21:01:34.152000+00:00 | ['Side Project', 'Artificial Intelligence', 'Conversational Ai', 'Alexa', 'Programming'] |
The Silence, by, Emma Ferguson inspired by Robert Hayden | Sundays, too, your voice echoed through the house,
with the silence that plagued me,
Only one of these accompanied me through those monotonous days,
Yet it was not the one that once called me beautiful,
With my cracked voice and tired eyes,
The silence was the one who cradled me.
I’d wake and wish you had stayed,
When I put up my walls, you would call,
And I’d fall again, just for you to leave,
Standing on shaky ground fearing that this would be the last,
You never stayed, It never was,
If pain had a name it would be yours.
Speaking indifferently towards me when others were around,
They couldn’t know,
Know that it was you who lifted me up to switch to another just as I fell.
My smile was polished,
What a fool,
The fault was mine for I somewhat know,
You were my Hireath, you weren’t mine to lose.
What did I know, what did I know
Of Clandestine meetings and a written elegy
My Poem “The Silence” is about losing someone and realizing that through all the time you spent together they were never really there, and they were never really yours. It played off of “Those Winter Sundays” by incorporating the beginning words of each stanza into my own. Both were about someone not being around and misunderstanding the reason, but one was because they needed money, the other was simply because they didn’t want to be there. | https://medium.com/@eferguson.24/the-silence-by-emma-ferguson-inspired-by-robert-hayden-a53f908afede | ['Emma R. Ferguson'] | 2020-12-22 18:03:17.268000+00:00 | ['Poem', 'Poetry Writing', 'Inspiration', 'Poetry', 'Poetry On Medium'] |
The future of self-driving cars | But the worries about operator less elevators were quite similar to the concerns we hear today about driverless cars. — Garry Kasparov
There is a lot of talk about self-driving cars one day replacing truck drivers, and many assume that the transition will happen all of a sudden. In fact, the change will happen in steps: it will start in a few locations and then expand rapidly. For example, Tesla releases software updates that make its cars more and more autonomous. It first released software that let its cars drive on highways, and then, with a later update, its cars were able to merge into traffic and change lanes. Waymo is now testing its self-driving cars in downtown Phoenix, and it would not be surprising if it soon rolls out the service in other areas.
The industry talks about five levels of autonomy to compare different car systems. Level 0 is when the driver is completely in control, and Level 5 is when the car drives itself and does not need driver assistance. The other levels range between these two. I am not going to delve into the details of each level because the boundaries are blurry at best, and I prefer to use other ways to compare them. As the systems improve, autonomous cars can prevent humans from making mistakes and help avoid accidents caused by other drivers.
Self-driving cars will reduce and nearly eliminate the number of car accidents, which kill around 1 million people globally every year. Already, the number of annual deaths per billion miles decreased due to safety features and improvements in the design of vehicles, like the introduction of seatbelts and airbags. Cars are now more likely to incur the damage and absorb the impact from an accident, reducing the injuries to passengers.
Number of miles driven by cars versus the number of annual deaths per billion miles driven
Autonomy will reduce the total number of accidents and deaths. In the United States alone, around 13 million collisions occur, of which 1.7 million caused injuries, and 35,000 people died. Driver error caused approximately 90% of the accidents, a third of which involved alcohol. Autonomy can help prevent these disasters.
Deaths are not the only problem caused by accidents. They also have a huge economic effect. The U.S. Government estimates a cost of about $240 billion per year on the economy, including medical expenses, legal services, and property damage. In comparison, U.S. car sales are around $600 billion per year. According to data by the U.S. National Highway Traffic Safety Administration (NHTSA), Tesla cars reduced their crash rate by 40% after the introduction of the Autopilot Autosteer¹ feature. An insurer offered a 5% discount for Tesla drivers with the assist feature turned on.²
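As a back-of-envelope illustration, applying the 40% crash-rate reduction uniformly to the U.S. figures above gives a sense of the scale. This is an optimistic simplification, since the NHTSA number covers a single driver-assist feature, not the whole fleet.

```python
# Back-of-envelope: scale the U.S. accident figures cited above by a uniform
# 40% crash-rate reduction. Treating NHTSA's single-feature result as
# fleet-wide is an illustrative assumption, not a forecast.

def scaled_accident_figures(reduction):
    collisions = 13_000_000          # annual U.S. collisions
    deaths = 35_000                  # annual U.S. road deaths
    cost_usd = 240_000_000_000       # annual economic cost
    keep = 1.0 - reduction
    return collisions * keep, deaths * keep, cost_usd * keep

collisions, deaths, cost = scaled_accident_figures(0.40)
print(f"collisions ≈ {collisions:,.0f}, deaths ≈ {deaths:,.0f}, cost ≈ ${cost / 1e9:,.0f}B")
# collisions ≈ 7,800,000, deaths ≈ 21,000, cost ≈ $144B
```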
Autonomy will have an effect on traffic overall. Cars will not necessarily need to stop at traffic signs because they can coordinate the best route among themselves or safely drive at 80 miles per hour 2 feet away from each other. So, the flow will improve, allowing more cars on the streets. With fewer accidents, there might be less traffic congestion. Estimates say that as much as a third of congestion happens because of car accidents, and each accident creates still more congestion around it. The impact remains unclear since, to my knowledge, no studies exist yet. Self-driving cars will certainly increase capacity, but as the volume increases, so does demand. If it becomes cheaper or easier for people to drive, then the number of people who drive will escalate.
Parking will also transform with autonomy because if the car does not have to wait for you within walking distance, then it can do something else when people do not need it. The current parking model is a source of congestion with some studies suggesting that a double-digit percentage of traffic in dense urban areas comes from people driving around looking for parking places. An autonomous car can wait somewhere else, and an on-demand car can simply drop you off and go pick up other passengers. But this new model might also create congestion because in both cases, the cars need to go pick up people. With enough density, the on-demand car might be the one that is already dropping off someone else who is close by you, similar to Uber’s model.
Parking is not only important for traffic but also for the use of land. Some parking is on-street, so removing it adds capacity for other cars driving or more space for people to walk. For example, parking in incorporated Los Angeles County takes up approximately 14% of the land. Adding parking lots and garages is expensive, driving up construction prices and housing expenses. A study in Oakland, California showed that government-mandated parking requirements increased construction costs per apartment by 18%.
Removing the cost of drivers from on-demand services, like Uber and Lyft, reduces the expenditure by around three quarters. Factor in the reduced price of insurance because of fewer car accidents, and the cost goes down even further. Transportation as a Service is the new business model.³
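A rough sketch of the arithmetic: if the driver accounts for about three quarters of a fare and insurance for a further slice, removing the driver and discounting insurance leaves only a fraction of the original cost. The component shares below are illustrative assumptions, not measured values.

```python
# Illustrative per-mile cost: the driver's share and the insurance share are
# hypothetical parameters chosen so that removing the driver alone cuts the
# fare by roughly three quarters, as described above.

def taas_cost_per_mile(base_fare=2.00, driver_share=0.75,
                       insurance_share=0.05, insurance_discount=0.40):
    driver = base_fare * driver_share
    insurance_saving = base_fare * insurance_share * insurance_discount
    return base_fare - driver - insurance_saving

print(f"${taas_cost_per_mile():.2f} per mile, down from $2.00")  # ≈ $0.46
```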
Transportation as a Service
Transportation as a Service (TaaS), also referred to as Mobility as a Service (MaaS)⁴, will disrupt not only the transportation industry but also the oil industry with the addition of electric vehicles (EV). TaaS goes hand in hand with EVs. Electric cars are much less expensive to maintain because, for one, their induction motors have fewer moving parts than the internal combustion engines (ICE) of gas-powered cars⁵. For autonomous vehicles in the TaaS sector, low maintenance costs are essential as car rental companies know pretty well.
The average American family spends $9,000 on road transportation every year. Estimates are that they will save more than $5,600 per year in transportation costs with TaaS, leaving them to use that money in other areas like entertainment. Truly cheap, on-demand services will have even more consequences. As TaaS with self-driving cars becomes cheaper, we must rethink public transportation. If everyone uses on-demand services, then no one will need public transportation.
Adding all the people who were moving through the subway system to cars above ground can increase congestion on the roads. In high-density areas, like New York City, people live in stacked buildings on different floors. If everyone needs to move at the same time, such as during rush hour, and go through only one “floor,” meaning the road above ground, then congestion will invariably happen. Therefore, self-driving vehicles need a way to move around that does not depend only on the roads above ground. Autonomous vehicles should travel through many levels. Currently, subways go underground.
Kitty Hawk’s first prototype of its flying car
One possibility is self-driving drones, something like a Jetsonian future. Kitty Hawk Corporation, a startup developed by Sebastian Thrun, already has a few prototypes of these flying cars. Some argue that this solution might not work inside highly dense areas because these drones produce too much noise. And if they fail and crash, they can damage property or injure humans. The most recent prototype, however, is not as noisy as some claim. From a distance of 50 feet, these vehicles sound like a lawnmower and from 250 feet like a loud conversation. And their design is such that if the motor or one of the blades fails, they will not fall to the ground.
Another possibility for adding more levels for on-demand vehicles is to add them under the ground, i.e., creating tunnels. But creating tunnels is a huge financial and construction investment. Elon Musk’s Boring Company focuses on reducing the cost of tunneling by a factor of 10 by narrowing the tunnel diameter as well as increasing the speed of the Tunnel Boring Machine (TBM). Their goal is to make them as fast as a snail. Musk thinks that going underground is safer than flying vehicles and provides more capacity by adding more tunnels on different levels. The Boring Company is already developing one for the city of Chicago so that people can travel from the Chicago O’Hare Airport to downtown.
TaaS will have a direct impact on the driving industry as well as employment. In the U.S. alone, self-driving cars will impact around 200,000 taxi and private drivers and 1.5 million long-haul truck drivers. Displacing truck drivers, in particular, will significantly impact the economy since truck drivers are the largest profession in the United States.
Given that only 10% of cars are in motion even during peak driving hours, fewer vehicles will be needed for the same capacity, and that could affect production numbers. Over 10 million new cars are sold in the U.S. market every year, so the total number introduced to the market might go down. Also, the cost of transportation will decline by a large factor because fewer vehicles, and fewer resources to make them, will be required. Using TaaS will be much cheaper than owning a car because of the reduced usage as well as the fuel and maintenance savings when using EVs for autonomous driving.
Switching to TaaS is easy and requires no investment or contract, so I believe that the adoption rate will be high⁶. And as consumers’ comfort levels rise due to increased safety and less hassle, usage will spread. First, the switch will occur in high-density areas with high real-estate values, like San Francisco and New York, and then it will spread to rural, less dense areas.
Cost difference of autonomous EV cars versus ICE cars
As this shift occurs, fewer people will buy new cars, resulting in a decline in car production. We already see this trend with young adults who use car-sharing services in cities and do not buy vehicles. According to one study, young people drove 23% less between 2001 and 2009. The car types that people drive will change over time as well. If you move to the city, you might not need an F-150 but rather a much smaller car. Or, if you relocate from one highly dense area to another, it might make sense to have autonomous vehicles that transport more than 10 passengers at a time.
Percentage of drivers for different age groups
The availability of on-demand, door-to-door transport via TaaS vehicles will improve the mobility of those who are unable to drive or cannot afford to own a car. Because the cost of transportation will go down, more people will travel by car. Experiments with TaaS already exist in different areas of the U.S. For example, Voyage, a Silicon Valley startup, deployed cars with “remote” drivers that run its software in The Villages in Florida, a massive retirement community with 125,000 residents. Voyage is already experimenting with what will become mainstream in a few years. Residents of the retirement community summon a car with a phone app, and the driverless car picks them up and drops them off anywhere inside the community. The vehicles are monitored by workers who check for any problems from a control center. Transportation will completely change in the next decade, and so will cities. Hopefully, governments will ease the transition.
Links: | https://medium.com/@giacaglia/why-elon-musk-is-building-the-boring-company-395b18fb32c | ['Giuliano Giacaglia'] | 2019-05-29 01:49:06.713000+00:00 | ['Self Driving Cars', 'Elon Musk', 'Boring Company'] |
Category: Rug Store | Category: Rug Store
About US:
Shopping for rugs can be a real pleasure when you work with us. We have hundreds of different styles available to choose from, each of which is made with the highest quality materials and craftsmanship possible. If you live in or around the Atlanta area, take some time to browse through the different types of rugs we have in stock, and please don’t hesitate to get in touch with us if you have any questions or concerns.
If you are interested in an Oriental rug, you have certainly come to the right place. Our team brings in beautiful Persian rugs directly from that region, and even from other parts of the world where they have been stored. This gives you the opportunity to shop one of the largest selections of these fine rugs you’ll ever see. You can look through these incredible pieces and purchase the one that will fit best with your overall decorative style.
Modern rugs are becoming more and more popular with every passing day. They are a great way to add a lot of character to a room, and really bring out the best in any decorative style. Here in Atlanta, we have a huge selection of great modern rugs that take advantage of beautiful colors, shapes, patterns, and designs. We can also have custom made modern rugs created based on your specific desires. Whatever it is you need, we’ll make sure it gets done for you.
Related Searches:
Moroccan rug | modern rugs | blue rug | antique rugs | vintage rugs | oushak rugs | contemporary rugs | square rugs | mid century modern rugs | persian carpet | art deco rug | indian rugs | scandinavian rugs | tabriz rug | antique oriental rugs | oversized area rugs | khotan | extra large rugs | Atlanta GA.
Additional Details:
Mon-Sat 9 a.m.-5 p.m.
Payment Method:
Cash, Check, All Major Credit Cards.
Social Profiles:
https://web.facebook.com/Modern-and-Custom-Rugs-by-Doris-Leslie-Blau-1224419427733920/
https://twitter.com/ModernDoris
https://www.instagram.com/moderncustom22/
https://www.pinterest.com/leslieblauby/
GMB Listing: | https://medium.com/@leslieblauby/category-rug-store-8802ed7e40f | ['Modern', 'Custom Rugs Doris Leslie Blau'] | 2019-05-06 11:55:51.221000+00:00 | ['Interior Design'] |
Studist’s approach to the fast-changing frontend | https://medium.com/studist-dev/frontend-team-at-studist-a91580d97bb7 | ['Murata Katsuyasu'] | 2020-12-16 04:50:09.659000+00:00 | ['Vuejs', 'Frontend', 'Web Team', 'Storybook', 'Vue 3'] |
Considering Matthew Shepard: A Beautiful Unexpected Journey | Considering Matthew Shepard: A Beautiful Unexpected Journey
A Celebration of the Life of Matthew Shepard in Song
I’m streaming choral group Conspirare’s Considering Matthew Shepard album. My playlist tends to be more Cardi B than chorister, so I’m not quite sure what to expect. Plus, my rude millennial gayness obliges me to resent the image of the tragic, victimized queer. Enough already, we’re fine now, right? RIGHT?!
I finally press play. The album starts quietly with a haunting piano progression. Then voices rise in pristine harmony setting up a sort of beat. I hear what sounds like a cowboy yodeling and the clank of hooves — unexpected. Then post-modern, staccato-y voices describe the Wyoming landscape:
These are the things that sway and pass
Cattle, sky, and grass
Dance and dance
Never die, never die, they circle
There is something both ancient and very new about the sounds. I feel this joy pulse through me, this connectedness to earth, humanity, my queer ancestors. The pulsing voices sweep me far away while also reassuring me that I belong there and everywhere. A few tracks later and the album has dipped into rock, folk, blues, country, hymn, chanting. The lyrics weave from news accounts and Matthew’s journal entries to interview material and words from the likes of Rumi and Michael Dennis Browne. I remember the times I was bullied and overcame. The way theatre and Sufi poets saved me and so many of my friends. I saw all those things and the beauty of the grit and hope that overcame them.
I am now mesmerized and on a beautiful journey I never expected.
Luckily for LA, there are two opportunities to hear the stunning three-part fusion oratorio live at the Ford on June 15 and 16. The piece was created by GRAMMY®-winning conductor Craig Hella Johnson, who will also lead the 30-voice Austin-based Conspirare choir and a small instrumental ensemble at the Ford.
In case you’re not familiar, Matthew Shepard was a 21-year-old gay student who was brutally beaten and left to die on a fence in Laramie, Wyoming back in 1998. His story ultimately — though forever tragic and enraging — became a groundbreaking moment, one which helped galvanize, unite and ultimately strengthen the LGBTQ community. The landmark Matthew Shepard and James Byrd Jr. Hate Crimes Prevention Act of 2009 has saved and continues to save countless lives.
Similarly, Considering Matthew Shepard transcends tragedy. It is sad at times, but mostly hopeful, reaffirming, full of life and sometimes even funny. It has a daring honesty too — like when Johnson, who is also gay, explores what he could have in common with Matthew’s killers in the song “I Am Like You.”
“If I’m being honest, I think there are certain things I have in common with the murderers.” Johnson said in a recent interview. “I have never met them, but I do know that they were incredibly lost. To do something like this, how could they not be completely lost? And I know that sometimes I get lost.”
That level of complexity makes Considering Matthew Shepard a completely different experience from other tellings of the story that I have encountered. I think the difference is the medium of music, which enables Johnson to explore the bounds of love and compassion in a visceral, unifying and ultimately, hopeful way.
“In the midst of these unfathomable, dark, confounding aspects of life which we all experience, is love anywhere to be found? Is it something that is only available when we are in happy circumstances? Or is love still somewhere present, even when we are faced with some of life’s hardest circumstances?” Johnson poses. “It was an honest and simple question, and this piece was born out of that. So, the focus is really on the ‘consideration,’ the listeners and creating a space for their interior journey with these kinds of questions. I also attempted to find a way to transition from Matt’s story to contemplate whether hope is possible, and whether true healing is something we can even envision. In performing it, we leave each performance renewed.”
Johnson himself grew up in rural Minnesota and spent much of his early life struggling with his sexuality. “There was nothing in my world that said it was a positive thing to be gay,” Johnson said.
I am struck, even with our generational differences, how connected my story is with Matthew and Johnson. And how powerful and empowering that connection is. I can only hope that one day everyone can hear the healing words and sounds Johnson has assembled:
Life over death
Love over hate
Light over darkness
See Considering Matthew Shepard at the Ford on June 15 & 16. For tickets and more information click here.
- Written for the Ford Blog by Amin El Gamal. | https://medium.com/ford-theatres/considering-matthew-shepard-a-beautiful-unexpected-journey-de7ae0f1f62f | ['Ford Theatres'] | 2018-06-08 22:40:37.704000+00:00 | ['Queer', 'LGBTQ', 'Music', 'Matthew Shepard', 'Performing Arts'] |
A musician applied for your sales opening? Here are six reasons you should hire them. | They are competitive and they love to win
Photo by Annie Spratt on Unsplash
If you’re not a competitive, achievement-driven individual, you cannot succeed in the performing arts. In orchestral music, you “win” a job by going through rounds of blind auditions, playing a trial period, and, if you’re lucky, winning tenure after a few years. The pressure to win is intense. In addition to practicing for hours upon hours every day for many years, you have to learn how to win an audition. Most music students and professionals read books like The Inner Game of Tennis to prepare for the psychological challenges of the process.
If you have a job applicant who has professional credentials in the performing arts, they are likely in the top <1% of their field. They have worked tirelessly for years, maybe decades, to cultivate the mindset required to win a job in a highly competitive industry. Some people, like myself 10 years ago, ultimately decide that they lack the talent or patience to win a fulfilling job as a performer. (For context, there is only one orchestral clarinet opening in the US right now). But professionally trained performers can transfer that drive, that hunger to succeed, to a new professional setting, and bring in big wins for your company.
They have been trained to achieve perfection in their work
During my freshman year at Yale, the concert band performed a piece by our director, Dr. Tom Duffy, called A+. In this piece, Duffy demonstrates what would happen if every member of the ensemble made a small number of mistakes, straying from what’s written on the page. The result is, of course, preposterous, and it reveals how critical it is that every member of an ensemble achieve perfection in their playing. 97% accuracy is nowhere near good enough. You’d be fired so fast if you averaged 97% accuracy in an orchestra.
Musicians have to devise strategies to overcome the mechanical challenges of operating their instruments, like slow, repetitive practice; playing different rhythmic permutations of tricky passages; using alternate fingerings; adding visual cues to their written music; etc. Ultimately, operating the instrument is just the first step — they need to interpret the music once they have a foundation for execution. If they can set and achieve ambitious goals in performance, they can do the same at your company.
Once you train them, they can self-direct and require minimal management.
When you’re studying music performance, you meet with your teacher once per week for one hour. They coach you on whatever repertoire you’re preparing, perhaps offering some specific exercises related to fundamentals, but most teachers expect a music student to select repertoire, ask questions about what’s working and what’s not, and generally guide the direction of their own development. For the rest of the week, the music student practices for hours a day with no direction or coaching. They are essentially on their own in working through individual challenges.
Without necessarily knowing it, musicians cultivate the ability to solve problems on their own, plan long-term projects (like selecting and preparing repertoire for a recital), and manage up. If you hire a musician to join your firm, you’ll need to set them up for success with a strong onboarding program, but if you invest that time upfront, you’ll save yourself time in the long-run.
They might have more operational and administrative experience than you realize
Photo by Lala Azizli on Unsplash
Musicians work on a range of non-musical projects as students and professionals. These might include:
Running a teaching studio, which requires coordinating schedules, communicating logistics and expectations, booking space for recitals, marketing the studio to potential new students, understanding unit economics, handling extremely complicated personal tax filings, etc.
Planning a recital, which requires selecting and booking a venue, finding collaborators and coaches and coordinating schedules with them, marketing the event, writing program notes, hiring a recording engineer, potentially arranging catering and clean-up, etc.
Writing grant proposals
Maintaining a website for a teaching studio or ensemble
Contributing to labor negotiations with a players union
If you see this sort of experience on a resume, odds are good that the candidate can quickly come up-to-speed with administrative processes at your firm and maybe even improve them.
They can sell anything (they literally went to school to learn to perform)
Admittedly this one doesn’t apply to every musician and might be more broadly applicable to the theater crowd. However, musicians are trained to perform. A skilled performer can communicate something meaningful to their audience, draw them in, and keep them engaged. They are also well-practiced in memorization. Over the past decade or so, music schools have added oral communication or presentation skills to their curricula, so that performers can speak directly to an audience and provide context that might help bring the program to life.
Many sales, recruiting, and customer success interactions involve a performance of sorts. Sometimes you have a script to guide you through a call or an in person meeting, which you’ve rehearsed and perfected over time. If a performer expresses interest in your company, odds are good they can use their performance training to influence others and close deals for you.
Photo by Conor Samuel on Unsplash
They are exceptional collaborators who can navigate difficult conversations with colleagues
Playing in a small ensemble or even a garage band cultivates exceptional interpersonal skills by necessity. Without them, the ensemble will fail. A small ensemble performs without a director, and practices without a coach for the most part, so they have to listen to and watch each other to make music as a unit. The individual becomes far less important than the greater whole in these situations, as every player operates like an organ within a larger organism.
Per my earlier point, the ensemble is striving for perfection, and they need to hold each other accountable. If someone is out of tune, or isn’t rhythmically aligned with others in the ensemble, it needs to be addressed with sensitivity and tact. Chamber musicians learn to say, “Can we tune this?” as opposed to “You are out of tune.” They can then iterate and become more direct — “How about just the two of us? Can you bring your pitch up to meet mine?” These communication skills are invaluable in a professional setting where you are accountable to a larger team. If you hire a musician to join your team, they will communicate with the tact and precision required to get things done. | https://medium.com/@melanie-lahti/a-musician-applied-for-your-sales-opening-here-are-six-reasons-you-should-hire-them-c9c560ba553a | ['Melanie Lahti'] | 2020-12-15 17:46:06.376000+00:00 | ['Careers', 'Hiring', 'Talent'] |
How to setup Google Payments to pay for Cryptonite anti-phishing security service | Some Cryptonite users are complaining that they are unable to pay for the service. Google Payment is a bit of a pain, so I apologize for having to use this payment method. We will look at implementing a more convenient payment system in the near year — when we can dedicate more time and resources to the project.
Below are instructions that must be followed in order to be able to pay for Cryptonite. I have also included instructions on how to use Cryptonite on multiple computers using the same account.
Using Cryptonite on multiple computers
To use Cryptonite on multiple computers, follow the steps above — making sure you sign into the same account that you used to pay for Cryptonite. Make sure you turn on sync or Chrome won’t know that you’ve already paid for Cryptonite.
After you have signed into the same Google Account and turned on sync, install Cryptonite. Don’t install Cryptonite and then sign into Google — that might not work.
Cryptonite is built for desktop browsers
Cryptonite does not work on iPhone or Android devices — it’s a desktop browser extension. You can, however, request access to our email security service that offers the same level of protection for your native (pre-installed) email client https://metacert.com/email ← if you like Cryptonite, wait until you see this working.
If you REALLY want to pay in crypto
If you have serious difficulty in paying for the service using Google Payments, you can use crypto. To enquire about using crypto please send a private message to Billy on Telegram — https://t.me/RobotComms ← Billy will NEVER DM you unless you message him first. You can also connect with Billy on Slack — the link can be found at the bottom of metacertprotocol.com
If you still have problems trying to pay for Cryptonite please contact us through our ticketing system — it’s the fastest way to get help.
Get Support Here. | https://medium.com/metacert/how-to-setup-google-payments-to-pay-for-cryptonite-anti-phishing-security-service-c0d82f3312ad | ['Paul Walsh'] | 2019-01-02 22:07:19.324000+00:00 | ['Chrome', 'Cybersecurity', 'Security', 'Blockchain', 'Google'] |
Is the World Telling You Who You Are? | Image by Gerd Altmann from Pixabay
“The world will ask who you are, and if you don’t know, the world will tell you.” Carl Jung
Who are you really?
If the opinions of others didn’t exist, how would you act? What would you believe? How would you live?
It’s a funny balance between the training to comply and get along with others and the deep soul need to truly be who we were meant to be.
Isn’t it strange that these two things are in opposition to each other?
At the beginning of covid, a friend said to me “Why are you such a rebel?”. She was responding to the fact that I had a very different reality about what was going on… and because of that, I couldn’t comply with the measures being taken.
Why does this mean I’m a rebel?
It’s important to note that we have a deep training within us that you are either with the current narrative or you are a rebel… someone just out there causing trouble.
One of the great blessings I have found through this covid situation is that I am getting to observe my “infrastructure”. I am getting to see the beliefs that I have been programmed with that I wasn’t even aware of.
One piece of this infrastructure is the belief that we are not allowed to believe our own realities if they do not agree with those in power. If we go against the clan leader, then we are being difficult, we don’t trust people, and we don’t care.
Wow… talk about powerful programming to keep us in line.
How amazing to get to see this clear as day.
What I love about Carl Jung’s quote is that I do know who I am. So, when the world tries to say that I am a rebel or anything else, I can quietly smile, and know that “no… I simply don’t agree with you… I know who I am.” | https://medium.com/morning-coffee-meditations/is-the-world-telling-you-who-you-are-afd849e93ddf | ['Katrina Bos'] | 2020-12-16 12:38:31.816000+00:00 | ['Relationships', 'Autonomy', 'Truth', 'Spiritual Journey', 'Freedom'] |
Why Conor Lamb Won | Newly Elected U.S. Representative Connor Lamb (D) Pennsylvania
Why Conor Lamb Won
At this time yesterday, all we could tell you was that Democrat Conor Lamb had “probably” won a special election in Western Pennsylvania that reduces the Republican majority in the House by 1. Now we can tell you for sure.
Even though a few absentee and overseas votes still haven’t been counted, even if they all come in for Republican Rick Saccone, according to the New York Times, it won’t be enough to make up the gap.
Lamb’s margin of victory: just a little more than 600 votes out of about 230,000 total, still small enough that Republicans will almost certainly fight it. According to Politico, they plan to challenge on 3 things: Republican party attorneys were not permitted to watch the tabulation of absentee ballots, changes to the Secretary of State’s website confused voters about polling locations, and some touchscreen voting machines might’ve been “mis-calibrated”, so they recorded votes for the wrong candidate. (The fact that the district is almost 100% white limits Republicans’ ability to use their more “typical” arguments: that “illegals voted”, etc.)
Lots of people are batting around lots of ideas about why Lamb, a former marine and Federal Prosecutor prevailed, with all kinds of spin going every which way.
We think there are 3 major factors:
• Enthusiasm! One thing President Trump deserves complete credit for: Americans are more interested in politics than they’ve been in a couple of generations. That higher level of engagement is leading to higher voter turnout, according to FiveThirtyEight’s Nate Silver. And we believe there’s a 2nd trend in play: there’s less interest in party and more interest in who is actually running and what their message is. The Republican in this race tried to skate by with no message other than “I was Trump before Trump was Trump.” Trump alone is Trump.
• People Are More Concerned About Health Care And Social Security Than They Are Happy About Tax Cuts And Steel Tariffs (Even In “Steel Country”): And they’re right! We think this is going to be the biggest takeaway for Democratic leadership from this race: that they can run robust campaigns built around focusing on social safety net issues and how Trump policies imperil them. Lamb emphasized strong support of health care programs and Social Security in person and in his TV ads. Here’s one:
• Lamb Flipped The Script On Nancy Pelosi: Years ago, Republican pollster Frank Luntz found groups of voters he talked to instinctively reacted negatively to the California Representative, long before she first became Speaker of the House in 2007. He strongly urged Republican candidates to “run against Pelosi” and coastal Liberalism regardless of who they were actually running against. Turned out this strategy really worked, and it especially worked well when the Democratic candidate wasn’t well known to the public beforehand, which was certainly the case with Lamb. So why didn’t it work this time? Especially with as much as $12-million in Conservative dark money pouring in to finance ads taking this exact approach? Because Lamb swerved, and made it known very early on that he doesn’t support Pelosi either. Rendering the Republican line of attack ineffective, with millions wasted on ads like these:
House Speaker Paul Ryan says Lamb won because he “ran as a Conservative”.
We say: Conservatives are going to have to find a new way to run. | https://ericjscholl.medium.com/why-conor-lamb-won-96a0e4f1e8ec | ['Eric J Scholl'] | 2018-03-15 05:29:15.214000+00:00 | ['Congress', 'White House', 'Politics', 'Donald Trump', 'Democrats'] |
The Outsiders — Notes. CEOs have five essential choices for… | in MIT Initiative on the Digital Economy | https://medium.com/@sohilgupta/the-outsiders-notes-c0ef71a0a717 | ['Sohil Gupta'] | 2020-12-23 18:43:04.255000+00:00 | ['Warren Buffett', 'CEO', 'Leadership', 'Management', 'Capitalism'] |
DO SOMETHING ACTIVE EVERY DAY WITH RED JANUARY ON THE CHALLENGE ACCEPTED APP
Kick Start 2021 in a positive way — RED January & the Challenge Accepted app
RED January returns next month with its annual month-long fundraising campaign challenging people across the UK to get active every day in January to support their own mental health and raise funds to help others. This year participants can keep track of their progress and stay motivated using the Challenge Accepted app.
Participants can take part in any activity from walking, cycling to kicking a football about in the garden, and have the option to fundraise for Sport in Mind, the UK charity that uses sport and physical activity to improve the lives of people experiencing mental health problems.
More than 150,000 people have taken part in the RED January campaign since it started, raising millions of pounds for mental health charities. There are numerous benefits to regular physical activity, with a key mental health gain being the reduction in the risk of depression by up to 30%.
Hannah Beecham, Founder of RED January comments: “We know how tough 2020 has been for everyone, and we strongly believe that RED January can help people to start their 2021 in a really positive way. I was originally inspired to start RED January after witnessing the transformative effect that regular exercise had on my mum’s mental health. I hope we can help to get the nation’s mental health moving in a positive direction again with RED January 2021.”
Launched July 2020, and already winner of ‘Theo Paphitis Small Business Sunday’ Challenge Accepted is a motivation app that helps people discover, track and complete personal challenges. Be it 30-day health and fitness, reading lists and more. The app comes with challenges for people to complete immediately and also gives them the option of adding their own.
“Like RED January, the Challenge Accepted app is designed to help you stay motivated in a fun, flexible and inclusive way — with whatever goal you’re working on. It’s all about setting manageable goals whilst being active and entertained — with the flexibility that allows you to skip days, or restart your challenge if you need to without feeling guilty about it.” Steph Mandeville, Challenge Accepted Co-Founder
Signing up is easy and free, people can register at joinredjan.co.uk and download the Challenge Accepted app on the Apple App Store or Google Play Store to tick off each day they’ve been active.
Over 70% of REDers are likely to continue what they started during RED January if they’re given the support and motivation to do so. Following RED January, people will be able to continue to track their days of doing something active on the Challenge Accepted app and get support through RED Together to stay motivated and active beyond January. | https://medium.com/challenge-accepted/do-something-active-everyday-with-red-january-on-the-challenge-accepted-app-ecb57321e036 | ['Steph Mandeville'] | 2020-12-16 14:44:58.040000+00:00 | ['Mental Health Awareness', 'Challenge Accepted', 'Self Care', 'Fundraising', 'Mental Health'] |
WOZX Token Dual Listing Announcement! | EFFORCE is proud to announce its official token listing on two leading exchanges this coming December 3rd, 2020.
After spending most of this year in stealth mode to build its architecture and service the first batch of test projects, EFFORCE is ready to begin its journey to become the gateway towards a more sustainable future by merging cutting edge technology with over a decade of direct market experience in the energy sector.
As previously announced, the WOZX token will be listed on HBTC.com on December 3rd, followed by Bithumb Global next week.
Steve Wozniak — EFFORCE and APPLE Co-founder
WOZX gets its name from EFFORCE co-founder Steve Wozniak (Apple), who spent his whole life striving for more efficient systems. As an engineer, Woz created some of the most beautiful computers in the world, and now, decades later, he has decided to revolutionize the way people interact with energy efficiency projects.
Efficiency improvements span across all industries and geographies, however as of today, these opportunities are still not equally accessible to everyone; for this reason EFFORCE created a borderless participation system tearing down boundaries and lowering barriers to entry.
Through the Platform, ESCO’s (Energy Service Companies) will increase their operational capabilities by being able to effectively list their current projects in front of a wider audience, while Contributors from all over the world will be benefiting alongside Companies for their support.
Thanks to Energy Efficiency improvements, Companies immediately begin to lower their consumption of energy, which results in the direct ability to decrease their cost structure and reduce their carbon footprint on the environment.
This alignment of interest creates a win-win ecosystem for all parties, making EFFORCE the only viable technology bridge into a $250Bn market with 10% annual growth which can bring meaningful change to our environmental future.
A must for digital marketers! Three simple habits that make creative thinking muscle memory in everyday life. | https://medium.com/begonia-design/%E6%95%B8%E4%BD%8D%E8%A1%8C%E9%8A%B7%E4%BA%BA%E5%BF%85%E4%BF%AE-%E4%B8%89%E5%80%8B%E7%B0%A1%E5%96%AE%E5%B0%8F%E8%A1%8C%E5%8B%95-%E8%AE%93%E5%89%B5%E6%84%8F%E6%80%9D%E8%80%83%E5%9C%A8%E7%94%9F%E6%B4%BB%E4%B8%AD%E6%88%90%E7%82%BA%E8%82%8C%E8%82%89%E8%A8%98%E6%86%B6-b9b3a83ea836 | ['Nan Chu'] | 2020-12-11 04:22:04.157000+00:00 | ['邏輯思考', '專案管理', 'Creative', '創意發想', 'Social Marketing'] |
Quit Google Analytics and use self-hosted Gatsby statistics with Ackee | Photo by Hal Gatewood on Unsplash
There are many different goals one can have when it comes to hosting your own website or blog. For me, it means just having a place where I own the content of my words and can customize to my liking. When it comes to analytics, my needs aren’t many — most of my audience reads my content via platforms like dev.to or Medium. All I need to know is how many people visit my site, which posts are doing well, and where users come from (referral links).
Given my recent obsessive elimination of all things tracking and advertising in my life, I chose to stop supporting Google and move from Google Analytics to something self-hosted. It wasn’t an easy product to use and most of the features were useless to me as I don’t sell anything on my blog. This way I own the data and am not contributing it to a company that could potentially use it in malicious ways.
I set out to search for a new tracking tool for my blog. My criteria for choosing a new product were:
Be simple.
Have features I will use.
Have a focus on privacy.
Be built with a programming language I know, so making changes is easy.
Be able to easily host on a Platform-as-a-Service like Heroku
Have the ability to be easily added to a Gatsby blog
Have an option to not collect unique user data such as OS, browser info, device, and screen size
Meet Ackee
Beautiful, isn’t it
I came across Ackee, a self-hosted analytics tool, and found that it fit my requirements almost perfectly. It’s built using Node.js, which I have experience with, and it focuses on anonymizing the data that it collects. Read more information on how Ackee anonymizes data here.
The steps you need to take to start collecting statistics with Ackee are to start running it on a server, Heroku in my case, add the Javascript tracker to your Gatsby site and test to see if the data is flowing correctly.
This is a detailed guide on how I went about deploying it to Heroku. Afterward, I contributed a Deploy-to-Heroku button that deploys it in one click. Find the button here.
Up and running on Heroku
First, you have to start running the server which is going to receive the tracking data from your website.
Create a new Heroku app instance:
Use the heroku-cli to upload the code:
Configure a MongoDB add-on — this is where the data is stored:
Configure the environment variables:
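Put together, those four steps might look like the following in a terminal. This is a sketch: the app name is a placeholder, and the add-on plan and `ACKEE_*` variable names are my assumptions based on Ackee's docs at the time, so verify them against the current Ackee Heroku guide before deploying.

```shell
# 1. Create a new Heroku app (the name is a placeholder)
heroku create my-ackee-server

# 2. Push the Ackee code to Heroku with the heroku-cli
git push heroku master

# 3. Attach a MongoDB add-on for storage
#    (add-on/plan names change over time; check the Heroku marketplace)
heroku addons:create mongolab:sandbox

# 4. Set the credentials Ackee uses for its login screen
heroku config:set ACKEE_USERNAME=admin ACKEE_PASSWORD=change-me
```

Once the config vars are set, Heroku restarts the dyno and the server comes up with your credentials.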
Voila — you’re finished! That was easy, wasn’t it?
Open up the webpage that Heroku automatically configures for you — it should be https://ackee-server.herokuapp.com/ — and you should see this:
The login page!
Adding the tracker
Now we need to send data over from the website to the server we now have running on Heroku. If you’re using Gatsby, this is incredibly easy with the plugin.
Install the tracker:
npm install gatsby-plugin-ackee-tracker
Create a domain on Ackee and get the domain id. Find this option in the settings tab of your Ackee instance.
Add it to your Gatsby config.
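A minimal `gatsby-config.js` entry might look like this. The `domainId` and `server` values are placeholders (use the id from your Ackee settings tab and your own Heroku URL), and the option names are taken from my reading of the plugin's README, so double-check them against the version you install:

```javascript
// gatsby-config.js — values below are illustrative placeholders;
// confirm option names in the gatsby-plugin-ackee-tracker README.
module.exports = {
  plugins: [
    {
      resolve: "gatsby-plugin-ackee-tracker",
      options: {
        // The domain id from the settings tab of your Ackee instance.
        domainId: "YOUR-ACKEE-DOMAIN-ID",
        // The Ackee server you deployed, without a trailing slash.
        server: "https://ackee-server.herokuapp.com",
        // Skip sending events while developing locally.
        ignoreLocalhost: true,
        // Keep false to collect only anonymized data.
        detailed: false,
      },
    },
  ],
}
```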
Run the site locally:
gatsby develop
Testing to make sure it worked
Open up your site at http://localhost:8000 and go to a new URL.
Observe the network requests your site is sending. You will notice it now sends requests to your Heroku instance.
The Brave dev tools
We now have the server running Ackee and our Gatsby sending analytics!
What you get
Let’s explore Ackee.
Home page with total site views
List of referrers | https://medium.com/better-programming/quit-google-analytics-self-hosted-gatsby-statistics-with-ackee-846a2b4be634 | ['Alec Brunelle'] | 2020-02-26 23:14:43.813000+00:00 | ['Analytics', 'Privacy', 'Programming', 'JavaScript', 'Gatsbyjs'] |
HAVE YOU SEEN ENOUGH? (PART VI) | The unfortunate magnetic pull for death to call so early almost seems unfair. How could it be that the Creator, Universe, God, and representatives of the infinite can allow or even accept for these things to take place? How could it be that those so full of life in sharing all of who they are must come face to face with a spell and sign that their show has come to an end. Life can be viewed very much so as a stage in being observed even by some you do not even realize are watching. To be honest, it seems to be unfair that life can deal such a tough hand to those we might feel deserve just a little bit more consideration. Annelies Marie “Anne” Frank, a young Jewish diarist, wrote one of the most well known published works, “The Diary of a Young Girl”, in being a victim of the Holocaust and dying at the age of fifteen (15). This prior example of the hatred directed through human strife can have no consideration to who it decides to take aim at.
The proficiency to be popular versus smart is a tremendous struggle. Could it be much easier to just fit in? When attempting to fit in, do you still stick out like a sore thumb? Would it be better for you to do what everyone else is doing, which could provide tacit approval? For some this scenario cannot work because the fulfillment of innovation to transcend beyond cultural norms must be breached. The assimilation prevents the necessary acceleration which causes you to be ordinary. For some leaders there must be more. Therefore, instead of letting that cup pass, some must drink from the water that flows even at times when the taste leaves much more to be desired.
Click the link below to receive a complimentary printable with your initial purchase from the CCEL Art & Design Store by sending your contact name, email address, and contact number to [email protected]
RSL CG5 loudspeaker review: High-end sound with a budget price tag | California-based RSL has been delivering monitor speakers that defy their size and price for 50 years. The company has manufactured speakers that are the darlings of some Hollywood record producers. RSL’s newest flagship lineup, the CG5, is the successor to the highly acclaimed CG4, and it consists of two models: A two-way monitor, the CG5 (reviewed here) and an MTM monitor design called the CG25.
The CG5’s sonic performance, build quality, and price point ($800 for the pair) left me awe-struck. Read on to learn why you just might want a pair of CG5s under your Christmas tree.
RSL’s History
RSL (the acronym stands for Rogersound Labs) is a well-known brand in die-hard audio circles, but it’s far from being a household name. Founder Howard Rogers started selling speakers factory direct from his store in North Hollywood in 1970. This approach allowed Rogers to use high-quality components while keeping prices low by cutting out the middleman.
RSL got a big break when a Warner Brothers Records producer purchased a set of RSL speakers and told his friends about them. Soon, RSL speakers started popping up in record companies throughout Southern California.
Rogers developed and patented his Compression Guide speaker technology in the 1980s, and it remains the hallmark of all modern RSL speakers.
Theo Nicolakis / IDG Top view of the CG5’s rounded front baffle.
Fast forward and Howard’s eldest son, Joe, now helms the company with his dad, maintaining the same focus on bang for the buck and an emphasis on personalized customer service. You can audition any RSL speaker with a free, 30-day in-home trial. The company offers free shipping in the continental United States, and there’s no restocking or return shipping fees in the unlikely event you decide you don’t like the speakers. Now that is a true no-risk trial.
Build quality beyond their price point
This is my third review of RSL speakers. My first was the CG4 series in 2015, followed by the CG3 series in 2017. Each time, I’ve come away shaking my head in disbelief at the build quality and sound of the RSL setups. This time? It’s déjà vu all over again.
Theo Nicolakis / IDG The CG5’s gorgeous gloss finish is fingerprint resistant.
Unboxing the CG5, I was immediately struck by the CG5’s solid build quality. The 16-pound weight is your first clue. The CG5 is like one of those elements on the periodic table whose physical mass is far greater than its physical size would imply.
The speaker is physically deeper than it is wide, measuring 7.625 x 10.75 x 12.625 inches (WxDxH). When looked at face-on, these dimensions make the CG5 speaker look smaller than it actually is. Spouse-challenged audiophile households rejoice. The front baffle’s side edges are rounded, giving the speaker a subtle, cool-looking aesthetic that tricks your eye into thinking that speaker is smaller still.
Theo Nicolakis / IDG Detailed view of the curved, metal grille. The grille has velvet-like pads that prevent any scratching or marring of the speaker’s beautiful high-gloss finish.
The included metal grilles are magnetic, with velvet-like pads that prevent the grilles from marring the CG5’s gorgeous finish. Simply place the grilles in the general vicinity of the front baffle and they snap instantly and firmly into place.
The speaker’s cabinet is fabricated from .75-inch thick panels on all sides. I gave the CG5’s cabinet a couple of good knuckle raps that only reaffirmed a solid, sturdy enclosure. This is no flimsy speaker.
When it comes to building a speaker, it’s not just the drivers and cabinetry that contribute to a speaker’s sound. The crossover network is vital. It handles that all-important division and hand off of frequencies between the drivers. The crossover network is where some speaker manufacturers choose to cut costs. Not RSL. RSL’s crossover network is made up of high-quality parts, including a polypropylene capacitor and an air-core coil for high performance and power handling.
Theo Nicolakis / IDG Top view of the CG5 with the magnetic grille.
The CG5’s build quality is matched by its aesthetics. My GC5 review pair came in a beautiful white gloss finish (they’re also available in a high-gloss piano black). Despite the gloss, all of RSL’s finishes are fingerprint resistant, and I didn’t detect any smudges no matter how many times I handled these speakers.
Design, adjustments, and Compression Guide
The CG5 are a two-way design consisting of a 1-inch, translucent, silk dome tweeter and 5.25-inch, aramid-fiber cone woofer. The tweeter is crossed to the woofer at 2,500Hz.
The CG5’s frequency response is rated at a respectable 54Hz-35kHz ± 3dB. In layman’s terms, the CG5’s frequency response resembles that of a true monitor speaker. I mention that because the CG3 and CG4, by contrast, only went down to 100Hz and really needed a subwoofer’s help even for music.
Tweeter adjustment
The CG5 comes with a Tweeter Adjust dial on the speaker’s rear panel just above the binding posts that allows you to attenuate the tweeter. The Tweeter Adjust dial’s default position is set to “Low” when you first unbox the CG5. I immediately noticed a warmer and slightly unbalanced sound that put bass notes slightly forward with the Tweeter Adjust in the “Low” position.
Theo Nicolakis / IDG The Tweeter Adjust dial comes in the default low position. I found the sound warmer but slightly unbalanced in the low position. The “Reference” position produced the most neutral sound at the expense of some warmth.
Turning the dial to the “Reference” position evened out the tonal balance and gave the speaker a slightly analytical quality at the expense of some bass emphasis and warmth. At no time did the CG5’s smooth, non-fatiguing quality suffer. You can adjust the dial to your preference.
As I’ll detail more below, the CG5 does a fine job rendering its rated audio band cleanly and authoritatively. Indeed, the RSL CG5 does a superb job with bass frequencies in its range. If you intend to use the CG5 with movies or content with deep bass, you’ll want to add a subwoofer to the mix. On tracks such as James Blake’s “Limit to Your Love” and Dido’s “Northern Skies,” the speaker’s frequency limits came through. I’d strongly recommend pairing the CG5 with RSL’s Speedwoofer 10S, whose superb performance still stays with me three years after I first reviewed it.
Theo Nicolakis / IDG Detailed view of the CG5’s driver.
Compression Guide technology
RSL’s patented Compression Guide outwardly manifests itself as a horizontal, rectangular opening along the speaker’s front baffle. It’s inside the speaker cabinet where the magic really happens.
RSL RSL’s patented Compression Guide technology.
RSL’s Compression Guide design divides the interior of the speaker cabinet into high and low pressure zones. Resonances are reduced as sound waves travel through these pressure zones, leading to tighter bass. Compression Guide technology works astoundingly well, delivering a smooth, distortion-free sound that you must hear to appreciate.
Silky smooth sound
All the aforementioned is well and good, but how do these $800-per-pair speakers sound? I set up the CG5 on solid wood 30-inch speaker stands in my Dolby Atmos/DTS:X/Auro-3D home theater setup, where I typically have RBH Sound SVTR Tower Reference Speakers and SVS Ultra speakers powered by my Denon X8500H receiver, Monoprice Monolith 7-Channel amplifier, and Oppo UDP-203 universal disk player.
For this review, I decided to use the outstanding Naim Uniti Atom that I’ve had on loan and used with my recent JBL L82 Classic and Focal Chora 806 speaker reviews. It’s not that Denon’s flagship X8500H wasn’t up to the task—on the contrary—but given that RSL only shipped a stereo pair, I wanted to prove you can have a superlative two-channel music setup with a minimal physical footprint.
Theo Nicolakis / IDG Detailed view of the GC5’s patented Compression Guide outer port.
I fed the Roon-ready Naim Uniti Atom from my Roon Nucleus with content comprised of high-res music files, ripped CDs, and content streamed from Tidal.
Smooth is the word that struck me when I fired up the CG5—and that impression never changed. The CG5 deliver an extremely smooth, non-fatiguing sound that will let you get lost in music or movies for hours. The CG5 reveled in Lisa Gerrard’s haunting vocals in “Elegy” from Immortal Memory. Synthesizer notes were smooth and free of compression or distortion. Indeed, it didn’t matter if I cranked up the volume, the CG5 just purred along.
As is typical with high-quality monitors, imaging was solid, with vocals dead center and instruments placed firmly in space and time. Recalling classics from Patricia Barber was a case in point. Unlike higher-end speakers, the CG5 couldn’t quite conjure the uncanny dimensionality around a sonic image.
Theo Nicolakis / IDG The CG5’s back panel has a threaded opening for optional wall mounting.
RSL’s CG5 speakers excelled at revealing musical layers. For example, on Sarah McLachlan’s “Elsewhere,” you’ll find that some speakers smear the harmonies in the refrain. You can tell there are multiple vocals, but you can’t quite focus on individual voice. Not here. Through the CG5, I could distinguish each vocal track in distinct space and time.
In my setup, the CG5 created a wonderfully large, deep, and wide soundstage. The CG5 also exhibited a noticeably relaxed presentation, recessing the image well behind the speakers’ baffle. Whether you prefer a forward or a relaxed presentation will be key to determining whether or not you like these speakers.
Bass lines were respectable, though I want to emphasize that the quality of the bass lines within that frequency range was outstanding. The CG5 gushed chest-thumping bass on Natasha Bedingfield’s “King of the World.” On Sade’s “Soldier of Love,” the CG5 delivered bass lines with a good, tight, clean, detailed attack. The same was true on Katie Melua’s “Love is a Silent Thief” and Dido’s “Northern Skies:” Clean and controlled.
I wasn’t quite sure how the CG5s would handle the intense, pulsating bass punch on James Blake’s “Limit to Your Love.” Needless to say the CG5 was an unbelievably cool customer, thwacking me with chest-punching bass while maintaining precise, detailed control. Of course the CG5s couldn’t reproduce the foundation-rattling, subterranean bass Blake’s classic is capable of. Overall, the CG5 was very good, though not outstanding with dynamics.
Theo Nicolakis / IDG The CG5’s back panel has a Tweeter Adjust and five-way binding posts.
But don’t you dare interpret the CG5 as a polite speaker. Pulling out some Van Halen—in tribute to Eddie Van Halen—proved these babies can rock—hard. The CG5 brought forth all the best qualities of Eddie’s immortal “Eruption,” revealing nuances of the grand master at his craft. The CG5 unrelentingly pumped out the raw rock grit and edge of “Tattoo.” And the CG5 reveled in the increased power when I turned up the volume and immersed myself in Van Halen’s rock anthem classic, “Dreams.”
Treat yourself to a pair for Christmas
Every RSL speaker I’ve auditioned has left me impressed, and the CG5 are no exception: Impeccable build quality; an intoxicatingly alluring, smooth sound; and superb top-to-bottom performance are just the opening act. The more time I spent listening to the CG5, the more I loved them.
For $800 a pair, the CG5 will give you pure sonic bliss far exceeding their price. Pair them with RSL’s Speedwoofer 10S subwoofer and you’ll have a high octane, sonic setup that will excel at both a two-channel and full-on home theater assault. You’ll be the envy of the neighborhood.
While there are lots of speaker options priced less than $1,000, the CG5 rank among my favorite. Whether it’s an upgrade to your current setup or a dive right into sonic bliss, take RSL up on their free in-home trial in time for Christmas. You won’t be disappointed. Enthusiastically recommended.
MySQL, YourSQL, OurSQL | there really aren’t enough memes about SQL
What the hell is SQL?
Hey, glad you asked. First of all, SQL stands for Structured Query Language. Second, it isn’t pronounced like that — it doesn’t rhyme with “hell” — or does it? This is a standard argument in some circles, so while it may seem trivial it’s more like Vim vs Emacs. Call it what you want, S — Q — L or Sequel. The official word is:
The official way to pronounce “MySQL” is “My Ess Que Ell” (not “my sequel”), but we do not mind if you pronounce it as “my sequel” or in some other localized way. -MySQL official docs
So what is it? According to Wikipedia, SQL “is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS).” This means nothing to me. How about you?
In simple terms, SQL is a language used to view or change data in a database. (A database is a system for storing and taking care of data.) The statements — or programs — used in this language are called SQL queries. With these queries we can:
INSERT data INTO a database,
DELETE data FROM a database,
UPDATE data in a database,
SELECT (extract) data FROM a database.
And more!
A diagram that satisfies project requirements
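The four query types listed above can be sketched in a tiny runnable example. This uses Python's built-in sqlite3 module purely for illustration; the table and column names are invented, and the same statements work (with minor dialect differences) in MySQL or any other RDBMS:

```python
import sqlite3

# In-memory database for illustration; table and column names are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a table: rows of data organized by columns (fields).
cur.execute("CREATE TABLE people (name TEXT, age INTEGER)")

# INSERT data INTO the table.
cur.execute("INSERT INTO people (name, age) VALUES (?, ?)", ("Tabitha", 39))
cur.execute("INSERT INTO people (name, age) VALUES (?, ?)", ("Sam", 25))

# UPDATE data already in the table.
cur.execute("UPDATE people SET age = 40 WHERE name = 'Tabitha'")

# SELECT (extract) data FROM the table.
cur.execute("SELECT name, age FROM people ORDER BY name")
print(cur.fetchall())  # [('Sam', 25), ('Tabitha', 40)]

# DELETE data FROM the table.
cur.execute("DELETE FROM people WHERE name = 'Sam'")
cur.execute("SELECT COUNT(*) FROM people")
print(cur.fetchone()[0])  # 1

conn.close()
```

Each statement tells the database engine *what* you want, and the engine works out how to do it.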
More acronyms will be thrown at you if you get any deeper into this. (Save yourself!)
By the way, you may feel confused by terms like SQL, MySQL, and so on. The gist is that SQL is the language and there are many different database systems that use the language. MySQL was one of the first open-source database available in the market. The database software itself is called an RDBMS— “Relational Database Management System.” Why relational?
It is “relational” because the data contained in each table are related to each other. Tables can also be related to other tables. The relational structure makes it possible to run queries on multiple tables at once. (Relational as in Tabitha: 39. (name, age))
“Relational” describes the type of database an RDBMS manages, but the RDBMS also refers to the database program itself. It is the software that executes queries on the data and provides a visual representation. It may also display data in tables like a spreadsheet, viewable and editable like a Google Sheet.
But… why?
Another good question. We have a lot of information to move around, whether it’s a class roll call or Stack Exchange for lazy programmers. (Thanks, Stack Exchange!) Usually these databases are kept secret from outside eyes, but sometimes you can interact with them:
Going back to What SQL and Databases are:
Essentially, a database is an organized stack of information. The database includes numerous tables and a table stores rows of data in an organized format characterized by the table’s columns (also called fields). The greater part of this leads to the fundamental progression of the database. So SQL queries control the rows of information that is stored in tables, and the tables, in turn, are contained within a database. -CodingSight
SQL gives us numerous important commands so we can interact with data. Once you know how to use them, they are very powerful tools to manage and modify vast volumes of data. Below is a very simple example of listing databases and the tables inside one of them.
an example of listing MySQL databases and tables
I could also add to those tables if I wish, say to indicate votes from my cohort on how to pronounce SQL. I could delete them, add them, and retrieve them in many ways. But using SQL is beyond the scope of this article. Suffice it to say it is a simple language that does what you want it to. (Thanks, SQL!)
How Do SQL Database Engines Work?
An SQL engine is defined as software that “understands” SQL commands to access a relational database and manipulate data. An SQL engine can also be referred to as an SQL database engine or an SQL query engine.
When a user interacts with the database, their query is translated into a SQL request for processing. The SQL storage engine writes to and retrieves data from a data warehouse server, often by converting the data to a format such as a JSON file. (Please don’t ask me about JSON files. It’s just a list, just a list *rocks back and forth*)
To complete the query, the query processor accepts, reads, and executes SQL commands for the data warehouse to forward to it’s application server. The application server processes the SQL request and sends it to a web server where the user can access the information from the SQL data tables. (Thanks, SQL!)
An SQL engine processes data in stages. In general, the first stage of SQL processing begins with parsing an SQL statement (query) via a parse call, to prepare for execution. The statement is translated into a data structure that other routines can process, then there are three checks: syntax check, semantic check, and shared pool check. (Thanks, SQL!)
The second stage is query optimization. This optimizes the query and chooses the best algorithms for searching and sifting through data. (Thanks, SQL!) The final stage is to execute the SQL statement by running the query.
That’s a lot to digest, I know, but the beautiful thing about SQL is that you don’t need to know any of the inner workings. When you write the query or statement you are telling the engine what to do, not how to do it. It figures all that out for you.
Thanks, SQL! | https://medium.com/the-innovation/mysql-yoursql-oursql-sql-in-341a31bc6532 | ["Tabitha O'Melay"] | 2020-08-05 09:41:45.988000+00:00 | ['Sql', 'MySQL', 'Holberton School', 'Sql Server', 'Software Engineering'] |
Self-employed vs. Employed: Pros & Cons | This is not going to be some deeply controversial rant about why one way of making a salary is better or worse than the other.
This is not a verbose way to toot my own horn and make you spend your time and energy reading about how great I am.
This is not an article filled with cat pictures (sorry).
Because “Which is better?” is an unfair question. The “right” answer is completely subjective.
I am currently self-employed and I love it, though of course there are drawbacks.
I previously worked for 10 years in corporate America and it was also both good and bad.
There are “dream jobs,” of course, but the reality is that even your dream job has drawbacks and days that suck sometimes.
Both self-employment and traditional employment have advantages and disadvantages, it is truly about what is best FOR YOU as an individual and for your family.
As Quora user Kelven Swords points out:
Pros:
YOU make the decisions, no one else… and you thus reap the rewards.
YOU control the finances, no one else… and you thus reap the profits.
YOU determine who is on staff, no one else… and you thus control the social structure.
Cons:
You make the decisions… thus have no one else to blame for your errors.
You control the finances… thus have no one else to blame for any wasted money.
You determine who is on staff… thus you have no one else to blame for any parasitic staff members who poison the well.
Let’s take it a step further than what Kelven has described above.
There are obvious advantages to working for yourself.
You can set your own working hours.
You choose who to work with…and who NOT to work with.
You have significantly more control over processes, contracts, clients, work, time, and everything else.
You can work in your pajamas — and even sleep in!
You get to build great relationships with your clients because you’re steering the ship and choosing how to cultivate those relationships.
There are some obvious disadvantages, as well.
You have no one else to rely on.
You do not have a manager setting tasks or deadlines, so all deadlines are self-imposed, which can be difficult for some to manage and stick to.
Time management becomes extremely important, which is hard for many.
No company insurance or other benefits.
No sick time, paid vacation time, or maternity leave.
Less stability in terms of income.
You will find yourself working far more than 40 hours most weeks.
You do not have coworkers and it can be sometimes lonely and isolating.
You are probably not an expert in every single thing a business needs: processes, sales, closing sales, marketing, website building and maintenance, creative stuff, contracts, organization, admin work, etc.
Higher potential for burnout/overworking.
Doing your taxes is harder.
When it comes to working for a company, you are getting some very specific advantages, in terms of a stable, dependable income, medical and other benefits, having people to ask when you need help, and being told what you should be doing.
Something people rarely think about when dreaming of being self-employed is the lack of structure and organization.
You have to create your own schedule, keep yourself on task, make sure work gets done, track deadlines, invoices, payments, all business expenses, and create a structure to your day.
It is incredibly easy to lose track of time or lose focus and end up spending half your day on social media when no one is watching!
There are many tools out there to help you get organized and create a structure for your day. Some are free and some cost money — which you need to keep track of so that you can make sure to deduct it on your taxes as a business expense.
Taxes are different and a bit more difficult when you work for yourself, and you have to save some of your income to pay it, and it WILL be a difficult check to write.
If you have personal assets, you’ll need to consider if it makes more sense for you to be a sole proprietor, LLC, S-Corp, or several other options, each with their own benefits and drawbacks. There is much research involved in starting your own business!
For Me
Being my own boss has been fun, challenging, interesting, and lonely. I love being a writer and being able to choose what I write and who I work with, and I created a business model which works well for me.
I also continuously refine and evolve my business offerings, update my own website, look for clients, maintain my social media accounts, and blog regularly. All of which is part of running my business, but is ultimately unpaid work.
I love my business and what I do, but I also enjoyed my work as a Business Development Director in the recruitment industry. I had a great boss, cool coworkers, a stable and dependable paycheck, and a set end time to my workday, none of which I now have.
However, I have the freedom to do the work I want, charge the rates I want, and am much more flexible with my schedule. I can go to the gym in the middle of the day, run errands whenever I want, work in the middle of the night if I am so inclined, and pet my cat all day.
For You
It’s about what works best for you. Don’t put pressure on yourself to be one way or the other or let people tell you one is “better” or more “right” for you than the other.
Make plans, do research, interview people, and figure out what is best for you and make sure you have a clear idea of both the advantages and disadvantages so you are well informed! | https://jyssicaschwartz.medium.com/self-employed-vs-employed-pros-cons-d97b4bdc4f70 | ['Jyssica Schwartz'] | 2020-02-04 16:23:57.033000+00:00 | ['Life Lessons', 'Freelancing', 'Entrepreneurship', 'Writing', 'Business'] |
“How I Generate 6 Figures From a YouTube Channel”: Tube Mastery and Monetization | How To Grow And Monetize a YouTube Channel
Introducing Tube Mastery and Monetization
Tube Mastery and Monetization teaches how to start, grow, and monetize a hyper-profitable YouTube channel from complete scratch.
It doesn’t matter if you don’t have any tech skills or any previous business experience. Everything you need to know is provided step-by-step in this training program.
When You Join Tube Mastery and Monetization, You Get Full & Immediate Access To:
Module 1: Overview of The Blueprint
The 3 Stages To YouTube
Beta Phase: Choosing a niche and planning your content
Choosing a niche and planning your content Intermediate Phase: Uploading 33 videos
Uploading 33 videos Scaling Phase: Outsourcing the work
Module 2: Choosing a Niche
Ways of Going About YouTube
The Best High CPM Niches
Doing Market Research
BONUS: List of 100+ Profitable Niches
Module 3: Setting Up Your Channel For Success
The 33 Rule and how to use it
The Best YouTube Tool Ever Made
My Secret SEO Keyword Process
Planning Your Content Strategy
Module 4: Uploading Videos
Anatomy of a Viral Video
How to Systematize Your Videos
Where to Find FREE Content
How to Edit Videos for Free
Making High Click-Through-Rate Thumbnails
Module 5: The Growth Module
Understanding YouTube Analytics and The Algorithm
How to Truly Go Viral on YouTube: Breaking the BIGGEST myths and misconceptions and laying down some truth.
Breaking the BIGGEST myths and misconceptions and laying down some truth. The Best Time of Day to Upload
Module 6: The Monetization Module
How to Make More Money Than Most YouTubers
The Many Ways of Monetizing Your Channel
My Personal Favorite Way of Making Money Utilizing YouTube
Module 7: Scaling Your Channel
Hiring One Person to Do All The Content Creation
Making a Video Creation Assembly Line
BONUS: Fill In The Blank Scripts for Finding and Hiring Employees
Tube Mastery and Monetization
10 Fun Blog Posts about ‘Blog Posts’ | There’s a reason why blogging is so popular. I’ve been posting to my blog for 29 straight days. For those that have been blogging for years, I applaud you for your commitment and creativity.
Continuing to come up with exciting content is a challenge I face every day. One day soon, I’m hoping to have a schedule locked down and a real plan in place. Like you do, I’m sure.
As I was looking for ideas today, I turned to my SEO tool, Ubersuggest, for ideas on the keyword ‘blog posts’. I thought it would be fun to share the most popular posts about ‘blog posts’ with you.
Yes, I consider keywords when looking for my content marketing. Ubersuggest has a Chrome extension to quickly peek at how the keywords are ranking while searching in Google. Check it out.
Interesting fact, none of these blog posts are recent. Even though your post doesn’t go viral today doesn’t mean it doesn’t have potential in the future. As long as your article is evergreen, you could see the popularity grow over time.
From the top down
6,761 visits to this article that was initially published on February 24, 2016
Humor is good. Finding a way to twist it around to make it the best blog ever. I wish I could be that creative — all the time. Since I can’t, I will just have to read someone else’s funnies.
This one was shared over 5,000 times
Another great one for blog post ideas that work. Follow the breadcrumbs that have been left by others. Trailblazing is for people with oodles of time — not me.
Shared over 1.8k and read 272k
“Build it, and they will come” is the tagline for Field of Dreams, but it isn’t the best move for your blog. Just like any publication, you have to share it. And it starts with you. That’s the secret to going viral: sharing. It looks like they found some more places to share.
Viewed by 2,501 people on Facebook
If you’re into audio posts, this is your jam. More and more people are listening on the go. I do as well; in the car, while riding my bike, and out for a walk. I’m always plugged in.
Viewed 2,250 times on Facebook
I’m a big fan of Thrive Themes; I use them for my website theme and highly recommend them for everything. They are super smart people who have reliable content along with a fabulous product.
Viewed over 2,333 times on Facebook
Heck yeah, tell me what types of blog posts work the best. Why spend your days wandering around aimlessly when someone is pointing the way to the gold? Test out their material to see what will work for you.
Viewed 30,398 times from Facebook, although it looks like they changed the title
Right after searching for content ideas, the next question is, where do I find the images I need. There are many sites with beautiful photos; however, sometimes you’re looking for just the right one to drive your message home.
Viewed by over 4,641 people from Facebook
Take advantage of the hard work that others have put in when doing your blog. You can pick their brain (or blog) for the information you need to make your stories enjoyable. With so much information to choose from, the world is your oyster.
Viewed 4,612 times from Facebook
Life needs a little humor, and this is an exciting point of view. Especially if your target audience also likes cats. Blogging doesn’t have to be dull and boring; find your hook.
Viewed 3,219 times from Facebook
Quick SEO tips are like taking a great course and breaking it up into little chunks. Don’t let strategies like SEO trip you up; learn enough to keep you in the game. Ignoring it is foolish; I speak from my personal experience. I wasn’t a fan of learning something that’s like a moving target.
Finding the right keywords is like backing into a problem. You have to ask yourself what someone would be searching for to end up on these words. I’m no SEO expert, but I do try to follow the general guidelines. In any case, these posts have useful information for you.
The Next Steps
In summary: Blogging is fun and positions you as an authority in your field. Pick your topic and find all the ways to express yourself. Share it to create an omni-channel presence where people will say, “I heard that before.” And they’ll be talking about you.
Commit to your blog, and it will pay off. It’s not an overnight success by any stretch, but it gives you a chance to spread your wings and fly. If you’d like to chat about your next project, reach out at [email protected]
Linda James Bennett; day 29 of 365 writing an article every day, making you a shiny object in the world. | https://medium.com/@lindajamesbennett/10-fun-blog-posts-about-blog-posts-2d9266872c6e | ['Linda James Bennett At Shinyobjectmarketing.Com'] | 2020-12-18 01:25:01.973000+00:00 | ['Life Coaching', 'Business', 'Coaching', 'Copywriting', 'Business Coaching'] |
Having fun with HTML5 — Range type input | Besides the much talked about video tag, HTML5 also introduced a couple of new input types, one of which is a rather interesting range input type which basically translates to a slider bar control:
The range input type is currently supported by the latest versions of Safari, Chrome and Opera while other browsers such as Firefox simply treat the field as a textbox.
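For the browsers that fall back to a textbox, you can detect support up front and degrade gracefully. The helper below is a sketch of the common detection pattern (it is not from the original post): an unsupported browser silently resolves the input’s type back to "text".

```javascript
// Hypothetical helper (not from the original post): detect whether the
// browser natively supports <input type="range">. Browsers without
// support leave the input's resolved type as the default, "text".
function supportsRangeInput(doc) {
  var input = doc.createElement("input");
  input.setAttribute("type", "range");
  return input.type === "range";
}

// In a real page: if (!supportsRangeInput(document)) { /* show a fallback control */ }
```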
There are four available attributes — min, max, step and value. min and max should be self-explanatory; step determines the size of each increment/decrement, and value is the current value of the slider and also determines the placement of the slider when the element is loaded (if not specified, the slider will appear at the centre of the control when it’s first loaded):
[code lang="javascript" light="true"]
<input type="range" min="0" max="200" step="5">
[/code]
[code lang="javascript" light="true"]
<input type="range" min="0" max="200" value="0" step="5">
[/code]
I talked about the new border radius property being introduced in CSS3 in my previous blog post, and with a little JQuery I’ve put together a quick demo here of how a 400px by 400px square looks as the border radius changes.
If you look at the source of the page you will see that I added a changeBorder javascript function to handle the onchange event fired by the slider, which dynamically updates the HTML content of an internal CSS class called mySliderBarStyles using the current value of the slider control:
[code lang="javascript"]
<input id="mySliderBar" type="range" min="0" max="200" value="0" step="5"
onchange="changeBorder(this.value)">
…
<script type="text/javascript">
function changeBorder(newValue) {
  var newRadius = newValue + "px";
  // set the html content of the label showing the current radius value
  $("#rangeValue").html(newRadius);
  // set the style with the new radius value
  $("#mySliderBarStyles").html(
    "#myDiv { -webkit-border-radius: " + newRadius + "; }"
  );
}
</script>
…
<style type="text/css" id="mySliderBarStyles"></style>
[/code]
References:
Dive Into HTML5 | https://medium.com/theburningmonk-com/having-fun-with-html5-range-type-input-cf0fe6124a10 | ['Yan Cui'] | 2017-07-03 20:57:17.999000+00:00 | ['Html5', 'Programming'] |
Jenkins on Kubernetes: From Zero to Hero | Jenkins on Kubernetes: From Zero to Hero
Using Helm and Terraform to get an enterprise-grade Jenkins instance up and running on Kubernetes
TL;DR
This post outlines a path to getting an enterprise-grade Jenkins instance up and running on Kubernetes. Helm and Terraform are used to make the process simple, robust, and easy to repeat.
A GitHub repo containing the code snippets from this post can be found here.
The Objective
With Kubernetes adoption growing like crazy, many organizations are making the shift and want to bring their favorite DevOps tool, Jenkins, along for the ride. There are a lot of tutorials out there that describe how to get Jenkins up and running on Kubernetes, but I’ve always felt that they didn’t explain why certain design decisions were made or take into account the tenants of a well-architected, cloud-native application (i.e. high availability, durable data storage, scalability). It’s pretty easy to get Jenkins up and running, but how do you set up your organization for success in the long run?
In this post, I will share a simplified version of a Kubernetes-based Jenkins deployment process that I have seen used at some of the top brands in the world. While walking you through the process, I will highlight each key design decision and give you recommendations on how to take the solution to the next level.
When evaluating a cloud architecture, I like to use the Well-Architected Framework from AWS to make sure I cover my bases. That framework focuses on operational excellence, reliability, security, performance efficiency, and cost optimization. For the purposes of brevity, we will just focus on a few key elements of reliability and operational excellence in this post, namely high availability, data durability, scalability, and management of configuration and infrastructure as code.
Helpful Prerequisites
To get the most out of this article, you should have a relatively good understanding of:
Jenkins, how to use it, and why you would want to deploy it
Kubernetes and how to deploy applications and services into a Kubernetes cluster
Docker and containerization
Unix shell (e.g. Bash) usage
Infrastructure as code (IaC) principles
Technology Dependencies
To follow along and deploy Jenkins using the code samples in this post, make sure you have the following resources configured and on-hand:
1. A Kubernetes cluster (tested on v1.16): this can be an AWS EKS cluster, a GKE cluster, a Minikube cluster, or any other functioning Kubernetes cluster. If you don’t have a cluster to work with, you can spin one up easily in AWS using the Terraform module enclosed here.
2. Permissions to deploy workloads into your Kubernetes cluster, forward ports to those workloads, and execute commands on containers.
3. A Unix-based system (tested on Ubuntu v18.04.4) to run command-line commands on.
4. The kubectl command-line tool (tested on v1.18.3), configured to use your cluster permissions.
5. Access within your cluster to download the Jenkins LTS Docker image (tested on Jenkins v2.235.3).
6. The Helm command-line tool (tested on v3.2.4). Helm is the leading package manager for Kubernetes.
7. The Terraform command-line tool (tested on v0.12.28). Terraform is a popular cross-platform infrastructure as code tool.
If you don’t have these dependencies ready to go, you can easily install and configure them using the links above.
Our Approach
This article will walk you through deploying Jenkins on Kubernetes with several different levels of sophistication, ultimately ending in our goal: an enterprise-grade Jenkins instance that is highly available, durable, highly scalable, and managed as code.
We will go through the following steps:
1. Deploy a basic standalone Jenkins primary instance via a Kubernetes Deployment.
2. Introduce the Jenkins Kubernetes plugin that gives us a way of scaling our Jenkins builds.
3. Show what our basic setup is missing and why it is insufficient for our goals.
4. Deploy Jenkins via its stable Helm chart (code for that chart can be found here), showing what this offers us and how it gets us close to the robustness we are looking for.
5. Codify our Helm deployment using Terraform to get the many benefits of infrastructure as code.
6. Go over additional improvements you can make to take Jenkins to the next level and highlight a few remaining key considerations to take into account.
Photo by Christopher Gower on Unsplash
Step One: Creating a Basic Jenkins Deployment
First things first, let’s deploy Jenkins on Kubernetes the way you might if you were deploying Jenkins in a traditional environment. Typically, if you are setting up Jenkins for an enterprise, you will likely:
1. Spin up a VM to be used as the primary Jenkins instance.
2. Install Jenkins on that instance, designating it as the Jenkins primary instance.
3. Spin up executors to be used by the primary instance for executing builds.
4. Connect Jenkins to your source control solution.
5. Connect Jenkins to your SSO solution.
6. Give teams access to Jenkins — most will not need UI access and can make do with webhook access maintained by a set of administrators — and empower them to run builds.
7. Actively manage the system over time to make sure it is operating effectively.
Let’s walk through the first three steps, but do so in a “Kubernetic” way.
Pro tip: To run the examples in this post efficiently, I recommend opening up two shell instances. This will help when you go to port forward and want to run other commands at the same time.
First, let’s make sure our Kubernetes context is set up. If using AWS and EKS, this would mean running something like the following: aws eks update-kubeconfig --region us-west-2 --name jenkins-on-kubernetes-cluster .
Next, let’s create a namespace to put all of our work in: kubectl create namespace jenkins .
For simplicity, let’s change our Kubernetes context to look at the jenkins namespace we created above: kubectl config set-context $(kubectl config current-context) --namespace jenkins . Because we’ve done this, we won’t have to pass the --namespace argument to all of our future commands.
Now, we need to create several RBAC resources to enable our Jenkins primary instance to launch executors. To do this, save the following configuration to your file system as 1-basic-rbac.yaml :
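The manifest itself is not shown at this point in the text; a representative reconstruction, with resource names assumed from the surrounding kubectl commands, could look like this:

```yaml
# 1-basic-rbac.yaml -- a sketch, not the author's exact manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins
  namespace: jenkins
rules:
  # The primary instance needs to manage executor pods.
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/log"]
    verbs: ["get", "list", "watch", "create", "delete", "patch", "update"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins
```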
Navigate to the folder containing that configuration on your file system and run the following command: kubectl apply -f 1-basic-rbac.yaml .
This will create the RBAC resources we need in the cluster.
Now, save the following Deployment configuration to your file system as 2-basic-deployment.yaml :
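The Deployment manifest is likewise not shown; based on the Deployment name and image referenced in the surrounding commands, a sketch might be:

```yaml
# 2-basic-deployment.yaml -- a sketch inferred from the surrounding commands.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-standalone-deployment
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      # Assumes the ServiceAccount created in the RBAC step is named "jenkins".
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - containerPort: 8080   # web UI
            - containerPort: 50000  # executor (JNLP) traffic
```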
Navigate to the folder containing that configuration and run the following command: kubectl apply -f 2-basic-deployment.yaml .
This will create a Jenkins primary instance Deployment from the jenkins/jenkins:lts Docker image with a replica count of one. This Deployment will make sure that, assuming the cluster has enough resources, a Jenkins primary instance is always running in the cluster. If the Jenkins primary instance pod goes away for any reason other than Deployment deletion (e.g. crash, deletion, node failure), Kubernetes will schedule a new pod to the cluster.
Okay, wait for that Deployment to finish provisioning, checking every once in a while to see if the Deployment is in a Ready state using the kubectl get deployments command.
Once that Deployment is in a Ready state, we can forward a port to the container in our pod to see the Jenkins UI locally in our browser: kubectl port-forward deployment/jenkins-standalone-deployment 3000:8080 .
You should now be able to see the Jenkins Getting Started page in your browser at http://localhost:3000 .
Pro tip: I highly recommend sticking to port 3000 if your setup allows you to. That should save you from some unexpected troubleshooting as you go through this post.
Alright. Because we have simply deployed the Jenkins base image and are not using Jenkins Configuration as Code, we have to configure the Jenkins primary instance manually.
On the first setup page, it asks you for your Jenkins administrator password. You can grab that password quickly by running the following: kubectl exec -it deployment/jenkins-standalone-deployment -- cat /var/jenkins_home/secrets/initialAdminPassword .
Copy that password, enter it into the password input field, and select continue .
On the next page, select Install suggested plugins and then wait for the plugin installation to complete.
Once the plugin installation completes, create a new administrator user with a name and password you will remember.
If this seems like a lot of work, and you are already convinced that this is not the best way to go about things, go ahead and jump to the section on Helm below.
Next, you can configure your Jenkins URL how you see fit. Its value is not important for this tutorial, but it is very important when you are actually using Jenkins in production. Go ahead and do that now, and then continue on until you have completed the initial setup wizard.
Hooray! If you followed the above steps, you now have a Jenkins primary instance running on Kubernetes and accessible in your browser. Of course, this is not how you would access your Jenkins UI in production — you would likely use an Ingress Controller or Load Balancer Service instead — but it is good enough for demonstration purposes.
Okay, let’s create a simple Jenkins pipeline to make sure our Jenkins instance actually works. Go to http://localhost:3000/view/all/newJob or the equivalent URL based on your port forwarding setup and create a Pipeline with the name jenkins-on-kubernetes-pipeline — you may be prompted to enter your new admin user’s password. This will look like the following:
Now, go to http://localhost:3000/job/jenkins-on-kubernetes-pipeline/configure — you should be taken there automatically. In the box at the bottom of that page, paste the following simple pipeline code:
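The embedded pipeline snippet is not shown here; a minimal declarative pipeline along these lines would reproduce the behavior described (running directly on the primary instance, since no executors exist yet):

```groovy
pipeline {
    agent any  // runs on the primary instance at this point
    stages {
        stage('Hello') {
            steps {
                echo 'Hello from Jenkins on Kubernetes!'
            }
        }
    }
}
```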
Now you can save your pipeline job and run a build by clicking Build Now .
Wait a few seconds, and then you can view the console output here: http://localhost:3000/job/jenkins-on-kubernetes-pipeline/1/console . On this page, you will see the output from your pipeline. It should look approximately like this:
Terrific! We just ran a Jenkins build successfully on Kubernetes. Let’s move onto the next step.
Step Two: Introducing the Jenkins Kubernetes Plugin
The build that just ran successfully is awesome and all, but it ran on our Jenkins primary instance. This is not ideal because it limits our ability to scale the number of builds we run effectively and requires us to be very intentional about the way we run our builds (e.g. making sure to clean up the file system by clearing up workspaces).
If we stick with the current configuration, we will almost certainly run into trouble with space issues, unexpected file system manipulation side effects, and a growing backlog of queued builds.
So, how do we fix these issues seamlessly?
Enter the Jenkins Kubernetes plugin.
The Jenkins Kubernetes plugin is designed to automate the scaling of Jenkins executors (sometimes referred to as agents) in a Kubernetes cluster. The plugin creates a Kubernetes Pod for each executor at the start of each build and then terminates that Pod at the end of the build.
What this ultimately provides us with is scalability and a clean workspace for each build (i.e. better reliability and operational excellence). As long as you don’t exceed the resources available to you in your cluster — cluster autoscaling removes some of this concern — as many pods as you allow will be spun up to accommodate the builds in your queue.
Deploying Jenkins in this fashion also offers several additional benefits over a traditional VM deployment, including:
- Executors launch very quickly (i.e. better performance efficiency).
- Each new executor is identical to other executors of the same type (i.e. increased consistency across builds).
- Kubernetes service accounts can be used for authorization of executors in the cluster (i.e. more granular security).
- Executor templates can be made for different base images. This means you can run pipelines on different operating systems using the same Jenkins primary instance as the driver (i.e. improved operational excellence).
So, let’s get this set up.
First, we need to add a Kubernetes Service to Jenkins to allow it to be accessed via an internal domain name. To do so, save the following configuration to your file system as 4-basic-service.yaml :
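The Service manifest is not shown; a sketch exposing the web UI and executor (JNLP) ports, with names assumed, might be:

```yaml
# 4-basic-service.yaml -- a sketch; with this name, the in-cluster URL
# would be http://jenkins.jenkins.svc.cluster.local:8080 (an assumption).
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: ClusterIP
  selector:
    app: jenkins
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: jnlp
      port: 50000
      targetPort: 50000
```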
Navigate to the folder containing that configuration and run the following command: kubectl apply -f 4-basic-service.yaml .
Now we can configure the actual Kubernetes plugin.
Go to http://localhost:3000/pluginManager/available and search Kubernetes in the search box.
Select the box next to the plugin named Kubernetes and click Download now and install after restart at the bottom of the page.
Wait a few seconds for the plugin to download.
Now, go to http://localhost:3000/restart and click Yes . Wait a few minutes for Jenkins to restart. When it does, log back in with your admin credentials.
Pro tip: Jenkins restarts can be a little finicky. If your browser window does not reload after a minute or so, close that tab and open up a new one.
Once you are logged back in, go to http://localhost:3000/configureClouds/ . This will look a little different on older versions of Jenkins.
On this page, add a Kubernetes cloud and then click Kubernetes Cloud details .
Configure the Kubernetes Namespace, Jenkins URL, and Pod Label like so:
Once you have that configured, click Save .
Now, let’s go modify our job ( http://localhost:3000/job/jenkins-on-kubernetes-pipeline/configure ) to use the new Kubernetes executors.
Change the contents of the pipeline to the following:
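The updated pipeline snippet is also not shown; a sketch that targets the new Kubernetes executors might look like the following (the 'jenkins-agent' label is an assumption; use whatever Pod Label you configured in the cloud settings):

```groovy
pipeline {
    // Assumed label; it must match the Pod Label set in the
    // Kubernetes cloud configuration.
    agent { label 'jenkins-agent' }
    stages {
        stage('Hello from an executor') {
            steps {
                echo "Running on pod: ${env.NODE_NAME}"
            }
        }
    }
}
```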
Now, try running Build Now again.
Wait a few seconds — it might even take a few minutes to launch the first executor — and then you can view the console output here: http://localhost:3000/job/jenkins-on-kubernetes-pipeline/2/console .
The output should look approximately like this:
Sweet! We now have a highly-available Jenkins primary instance, we can create jobs on that instance, we can run builds, and the number of Jenkins executor Pods will automatically scale as the build queue fills up.
We’re done, right?
Not so fast. Let’s keep going.
Step Three: Bringing the House Down
Though some might argue our setup is highly-available and scalable, what we just deployed is still not anywhere near the enterprise-grade solution we are looking for. To demonstrate that, let’s simulate a failure.
A common way to simulate failures on Kubernetes is to delete a Pod from a Controller (e.g. a Deployment or ReplicaSet). To do that, let’s first determine what Pods exist in our namespace: kubectl get pods . You should get something like this:
Now, you can delete that Pod like so: kubectl delete pod jenkins-standalone-deployment-759b989cf4-6ptvc . This will kill your port forwarding session, so just expect that.
If you move quickly enough, you can then run kubectl get deployments and see a Ready state of 0/1 like the following:
Wait a few more seconds, though, and you will see something different. Your Deployment will eventually return to a Ready state of 1/1 . This means that Kubernetes noticed a missing Pod and created a new one to return the Deployment to the right number of replicas.
With our Deployment in a Ready state, we should be able to access the Jenkins UI again. Let’s confirm that by forwarding a port to the Deployment again: kubectl port-forward deployment/jenkins-standalone-deployment 3000:8080
Now, go back to http://localhost:3000 in your browser and refresh the page.
Uh oh…do you see what I see?
You should be seeing the Getting Started page again.
This is happening because we didn’t configure our Jenkins instance to have persistent storage. What we have deployed is roughly the equivalent of a Jenkins instance deployed on a VM with ephemeral storage. If that instance goes down, we go back to square one.
This is just one way in which our current setup is fragile. Not only is our Jenkins primary instance data not persistent, but this lack of persistence almost entirely nullifies the high availability benefits we get from using a Kubernetes Deployment in the first place. On top of that, our Jenkins configuration is largely manual and, therefore, hard to maintain.
Alright. Let’s fix this using our trusty friend, Helm.
Step Four: Deploying an Enterprise-Grade Jenkins With Helm
Helm is a package manager for Kubernetes. It operates off of configuration manifests called charts. A chart is a bit like a class in object-oriented programming. You instantiate a release of a chart by passing the chart values. These values are then used to populate a set of YAML template files defined as part of the chart. These populated template files, representing Kubernetes manifests, are then either applied to the cluster during a helm install (or helm upgrade ) or printed to stdout during a helm template.
For our Jenkins installation, we will use the handy dandy stable helm chart. If you want to look at all of the values you can set for the version we use, take a look at the documentation on GitHub here.
To save some time, let’s keep most of the default values that come with the chart but pin the versions of the plugins we want to install. If you don’t do this step, you will likely run into issues with incompatible plugin versions as Jenkins and its plugins change over time.
Pro tip: If you are working with a chart and want a values.yaml file to work off of, repos usually come with one that holds all of the default values for the chart.
By default, out of the box, this helm chart provides us with:
1. A Jenkins primary instance Deployment with a replica count of one and defined resource requests. These resource requests give Jenkins priority over other Deployments without resource requests in the case of cluster resource limitation.
2. A PersistentVolumeClaim to attach to the primary Deployment and make Jenkins persistent. If the Jenkins primary instance goes down, the volume holding its data will persist and attach to the new instance that Kubernetes schedules in the cluster.
3. RBAC entities (e.g. ServiceAccounts, Roles, and RoleBindings) to give the primary instance Deployment the permissions it needs to launch executors.
4. A Configmap containing a Jenkins Configuration as Code (JCasC) definition for setting up the Kubernetes cloud configuration and anything else you want to configure — to change this configuration, you would modify the values file you pass into the Helm chart.
5. Configmap auto-reloading functionality through a Kubernetes sidecar that makes it so JCasC can be applied automatically by updating the JCasC Configmap.
6. A Configmap for installing additional plugins and running other miscellaneous scripts.
7. A Secret to hold the default admin password.
8. A Service for the Jenkins primary instance Deployment that exposes ports 8080 and 50000. This makes it easy to connect Jenkins to an Ingress Controller, but also allows the Jenkins executors to talk back to the primary instance via an internal domain name.
9. A Service to access the executors with.
Sounds a lot more robust than what we just built ourselves, doesn’t it? I agree! Let’s get that up and running in our cluster.
Before we do, let’s clean up our previous work by running the following:
kubectl delete -f 1-basic-rbac.yaml
kubectl delete -f 2-basic-deployment.yaml
kubectl delete -f 4-basic-service.yaml
Now, let’s set up a values file to pass into the Helm chart. To do so, save the following to a file named 6-helm-values.yaml :
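The values file contents are not shown; a sketch that pins plugin versions for this chart might look like the following. The master.installPlugins key is the chart's mechanism for pinning plugins, but the specific versions here are placeholders to check against current compatibility, not the author's exact pins:

```yaml
# 6-helm-values.yaml -- a sketch; update the versions before using.
master:
  installPlugins:
    - kubernetes:1.25.7
    - workflow-job:2.39
    - workflow-aggregator:2.6
    - credentials-binding:1.23
    - git:4.2.2
    - configuration-as-code:1.41
```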
Now we are good to deploy Jenkins using Helm. To do so, the process is super simple. Run:
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install jenkins stable/jenkins --version 2.0.1 -f 6-helm-values.yaml
Pro tip: It is almost always helpful to pin a version to entities like a Helm chart, a library, or a Docker image. Doing so will help you avoid a drifting configuration.
This will create a Helm release named jenkins that contains all of the aforementioned default features. Give that some time, as it does take a while to get Jenkins up and running.
It is important to note that there are a lot of extra features (e.g. built-in backup functionality) that we don’t take advantage of here with the default chart configuration. This is meant to be a starting point for you to build on and explore. I highly recommend evaluating the chart, creating your own version of the chart, and customizing it to meet your needs before deploying it into a production environment.
Once you have given the release time to deploy, run helm list to verify it exists.
To get the admin password for your Jenkins instance, run printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo . This is a little different from the last command we ran to fetch the password because the password is now stored as a Kubernetes Secret.
Then, run the following to forward a port:
export POD_NAME=$(kubectl get pods --namespace jenkins -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=jenkins" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace jenkins port-forward $POD_NAME 3000:8080
Now you will be able to see Jenkins up and running at http://localhost:3000 again.
Now, let’s test our release and make sure we can run builds.
Once again, go to http://localhost:3000/view/all/newJob and create a Pipeline with the following code:
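The pipeline code is not shown; a sketch that exercises the chart's built-in Kubernetes cloud might be (the label is an assumption; check the pod template label under the cloud configuration in Manage Jenkins):

```groovy
pipeline {
    // Assumed default pod template label for this chart; adjust to match
    // your cloud configuration.
    agent { label 'jenkins-jenkins-slave' }
    stages {
        stage('Smoke test') {
            steps {
                echo 'Helm-managed Jenkins is alive!'
            }
        }
    }
}
```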
Pro tip: You could also use the Helm chart to add jobs on launch or the Job DSL plugin to add jobs programmatically. I would recommend the latter in a production environment.
Click Build Now again, and let’s marvel at what Helm just did for us.
Hopefully, your build ran successfully. If it did, awesome! Let’s make this setup even better.
Step Five: Moving Our Deployment to Terraform
In the current state, we have checked most of the boxes that we were looking for in an enterprise-grade Jenkins deployment, but we can improve our setup even more by managing it with infrastructure as code.
Terraform is the infrastructure as code solution I see used most in industry, so let’s use it here.
Essentially what Terraform does is take a configuration, written in HCL, convert that configuration into API calls based on a provider , and then make those API calls on your behalf with the credentials made available to the command-line interface. Because we are already authenticated into the cluster, and we are the ones calling Terraform commands, Terraform will be able to deploy the Jenkins Helm chart.
First, let’s delete our existing helm release: helm delete jenkins . Verify the release no longer exists with helm list .
Now, save the following Terraform configuration to a file called 7-terraform-config.tf :
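The Terraform configuration is not shown; a representative sketch using the Helm provider, matching the release name and chart version used earlier, might be:

```hcl
# 7-terraform-config.tf -- a sketch, not the author's exact configuration.
provider "helm" {
  kubernetes {
    # Assumes your local kubeconfig is already authenticated to the cluster.
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "jenkins" {
  name       = "jenkins"
  repository = "https://kubernetes-charts.storage.googleapis.com"
  chart      = "jenkins"
  version    = "2.0.1"
  namespace  = "jenkins"

  values = [
    file("${path.module}/6-helm-values.yaml")
  ]
}
```

Running terraform plan against this should show a single helm_release named jenkins to be created, as described in the text.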
Awesome, now we can use Terraform to deploy our chart.
Run terraform init in the directory where you put 7-terraform-config.tf . This will initialize a state file that holds Terraform state — typically you would store this state remotely using something like Amazon S3.
Now, run terraform plan . This will print out what is going to change in the cluster based on the configuration you have defined. You should see that Terraform wants to create a helm release named jenkins .
If this looks good to you, run terraform apply . This will give you one more chance to make sure you want to apply your changes. If everything seems to check out, which it should, reply yes when Terraform asks for validation input.
After a few minutes, you should have an enterprise-grade Jenkins up and running again! Booyah!
Pro tip: Terraform is great for a lot of things, but one of my favorite things it provides is drift detection. If someone goes and changes this helm chart ad-hoc, you will be able to tell by running a terraform plan and checking to see if any changes show up in the plan.
We’re almost done here, but I would be remised if I didn’t go over a few additional recommended improvements and considerations with you. Check those out below.
Step Six: Improvements and Considerations
Now that you have an enterprise-grade Jenkins up and running, you probably want to make it more useful to your organization by making the following improvements:
Set up SSO via a plugin like OpenId Connect Authentication or OpenID.
Open up your firewall to allow communication with your source control solution.
Create a custom executor image and custom pod template to be used for builds. This will allow you to install common dependencies across builds and will decrease build time.
Create a set of Jenkins Shared Libraries for teams in your organization to call upon when running deployments. Using these, you can define guardrails for what teams can and cannot do with their Jenkins usage. In fact, with this, you can set it up so very few people actually need access to the UI, and developers can just run builds by pushing to source control.
Convert the stable Helm chart we used above to a custom one stored in source control. This will allow you to manage changes to the chart over time and make sure you have more control over its lifecycle.
Set up monitoring via Prometheus and Grafana to evaluate the health of your Jenkins instance over time.
When deploying applications and services, configure Jenkins jobs to perform zero-downtime and blue-green deployments. This link will help you do that.
Add theming to make Jenkins a little cleaner by using something like the Simple Theme plugin.
I also recommend you take the following considerations into account:
If you want to stick with block storage for storing data from the primary Jenkins instance, you will want to add an automated volume backup system to restore from if the volume is ever corrupted. Using a tool like Velero simplifies the backup process, and you can actually use it to back up and restore your entire cluster in the event of a disaster.
If you want a more dynamic storage option and are using EKS, you might want to set up something like the EFS Provisioner for EKS to back Jenkins with a scalable file system.
If you want to deploy non-Helm workloads to Kubernetes using Terraform, take a look at the alpha provider for Kubernetes. It allows you to deploy any Kubernetes configuration that you can deploy without Terraform, something the original Kubernetes provider could not do. More information on your options for this workflow can be found here.
Regularly check to make sure your Jenkins plugins are up-to-date. In general, I recommend limiting the number of plugins you install as much as possible. From a security perspective, plugins are one of the weakest links for Jenkins, so limit their use and keep them up to date.
Finally, empower your developers to dive in and build great stuff!
Conclusion
If you’ve made it this far, fantastic! I hope that this post was valuable to you and you learned something new. You rock!
A GitHub repo containing the code snippets from this post can be found here. If you find it helpful, go ahead and give it a star. If you run into any issues, please submit a pull request to that repo, and I will do my best to respond.
Take care!
Addendum
PPC Benefits For New and Small Businesses
Pay Per Click (PPC) is one of the most effective tactics for uplifting a new business, and it can benefit a new or small business in a number of ways. With the help of PPC, a new business can get top placement, which is really important for attracting customers. It helps produce profitable results faster. Small businesses with poor search rankings can easily gain an advantage from PPC. PPC also helps polish the online profile of a new business. It is one of the most effective and popular methods of online advertising.
Benefits of PPC
1. Instant and quick result
Though SEO is free and it has a long-term effect, PPC shows faster results than organic SEO. PPC can instantly place a new website at the top while organic SEO takes months to do the same. In fact, SEO and PPC should work together for better performance.
2. Attracting the right audience at the right time
An important way to grow a new or small business is to reach its target customers at the right time. Unless it attracts a steady stream of visitors, a new business can never be successful. This is where Pay Per Click ads come in: because PPC ads appear at the top of the results page, more and more visitors find and visit the site.
3. Helps build organic rankings naturally
As PPC ads are seen at the top, they attract many visitors, which eventually helps organic SEO.
4. Can work with Small Budget
PPC is excellent for small-budget businesses, as there are no budget restrictions. You can fix your own budget, then have a look at how your campaign is performing. If you are getting a good ROI, you can increase the budget accordingly.
5. Can target any particular geographical Region
If you don’t want your ad to be displayed everywhere, that’s no problem. You can select the region where you want your ad to be displayed, and it will be shown only in that particular region.
6. Helps in Increasing Revenue
PPC ads help new businesses to increase their revenue.
Now, after reading this article, it should be very clear to new business owners why PPC is so valuable. PPC is one of the most powerful ways to uplift a new or small business. And as it is budget-friendly, owners with small budgets can decide how much to spend on an ad and benefit from it easily. It is a fantastic way to increase local visibility and get good, fast results.
How to measure the code quality of your project?
Long story short
Don’t measure it. Measure consequences of the code quality which matter the most:
Customers’ satisfaction;
Stakeholders’ satisfaction;
Engineers’ satisfaction.
The problem
Since the moment I read “Clean Code” by Robert C. Martin and “The Pragmatic Programmer” by Andrew Hunt and David Thomas 10 years ago, I have been trying to improve my skills in writing the “good code.” Moreover, I also have been thinking about how to measure the current quality of the code, search for weak points and control the improvements process.
So far, what do we have for our engineering excellence? We have a lot of best practices such as TDD, BDD, something-else-DD, design patterns, pair programming, code review, continuous delivery, etc. We have many tools like linters, type-checkers, formatters, generators, IDEs, and so on. We even have a ton of metrics we can calculate on our code-base: code coverage, cyclomatic complexity, code churn, cohesion and many more.
They all are very useful, but can we be sure that if we have used all of them, we are safe? Imagine you are a technical lead, and you are asked: “is our technical solution good enough?” or “should we worry about any technical issues?” You have found all possible tools, applied all possible best practices, and measured all possible software metrics — all checks are green. Then we can relax and go on a vacation — right?
I don’t think so. In my career, I have seen several examples when applying some of these practices and tools didn’t help or even made the situation worse. For example, one day, we decided to improve our code excellence by integrating TDD and pair programming, pushing up code coverage and adopting some other best practices and tools. After some time, we noticed that the result was quite the opposite of what we expected. Engineers started to leave the company, product owners complained about the development speed, and the customers’ churn increased. Later we understood that some people hate pair programming because they think it is equal to supervising another person’s coding process for several hours without switching. The coverage was almost 100%, but the tests were flaky and complex, so people spent more time fixing tests and keeping coverage high than implementing features. All of these decreased the speed of the development and the motivation of employees. As a result, our application became slower and buggier, which could not improve our revenue.
How is it possible, and why does it happen? Based on my experience, the root cause is the wrong target metrics. With the wrong metrics, we are blind to the real consequences. We have to measure what matters. It does not mean that we should not measure code coverage, for example, but we must understand that this metric is collateral. In other words, the high code coverage is not what we really need at the end of the day. But what do we need? What really matters, and what makes the difference?
What matters
To answer this question, let’s answer an even more fundamental question first: Why do we write this code? What is the purpose? If you work in a commercial company, the purpose is profit, right? I know all companies have their missions, visions and all of them want to change the world. But if it is not a charitable foundation, they need profit at least to survive. No profit — no salaries — no results — no changes of the world. Ok then, we write our code to make a profit.
How can we make a profit from it? What makes our users pay our company? This answer is specific for each company, but there is one common principle for the competitive market.
Your product should be valuable for your customers, and your customers should be happy using it.
Small remark that it matters only for a competitive market because otherwise, it can be that you can somehow force customers to use your software even if it has no value or your customers just have to use your product if there are no other options.
Customers’ satisfaction
One of the metrics we can use to measure customer satisfaction is NPS. Of course, you can find more ways and more metrics, and there are people who know much more about how to measure customer satisfaction. I just suggest using it as an example. I like this metric because it is representable and straightforward, in my opinion. In other words, if somebody is ready to promote your product and recommend it to their friends — it looks like your product is really making a difference for them.
Wait a second! Why are we talking about NPS — how is it related to the code quality?
The truth is it is directly related. We should make our code good not only to satisfy our own ambitions but to make our product work good and satisfy customers, not technical architects. As a result, the first key metric for code quality is — “How happy are our customers?” I suggest using NPS for that, but it is up to you how to measure it.
Stakeholders’ satisfaction
Let’s go forward. The NPS can show us the current situation, but in every company, there are people whose duty it is to think about the future. They investigate the market, customer needs and plan the following steps to improve revenue and beat competitors. In all companies where I have worked, we called them “stakeholders.” It can be product owners, product managers, marketing experts, sales gurus, executives and even literally shareholders.
All of them should be happy about engineering results. For now, I don’t know a better way to measure this than just ask. To be honest, in my experience, it has never been a problem at all. These people are providing feedback even more often than you may think about it. At the same time, just in case some of your stakeholders are not mature regarding feedback, it is crucial to keep in mind that you have to be sure about their goals, their measures and their feelings about your job. If, for some reason, you are not sure what plan they have or how they evaluate your work in the last quarter — reach out to them and ask as many questions as needed till you have a complete picture. Your stakeholders have to be happy.
Engineers’ satisfaction
Eventually, what about our engineers?..
Well, we might think that, when making technical decisions and choosing the development strategy in the engineering department, we should consider only objective technical consequences. I think that is possibly one of the greatest mistakes technical leadership can make. Based on my experience, motivation to do technical improvements, confidence in the current technical choices, comfort in the daily development process and all other things related to engineers’ feelings about their work environment are at least as important as, or even more important than, the technical characteristics of the chosen tools and solutions.
For example, let’s imagine the situation when you need to choose between frameworks A and B. What lines in the compare table first come to mind? Probably, it can be performance, size, tooling, the number of stars on GitHub, licence, contributors, documentation, etc. All of them are important for sure. At the same time, what if you, as a leader with your closest subordinates, have chosen the best framework B based on these characteristics, but the rest of the engineers hate this framework? I have to assume that it will be a disaster. No matter how good the framework is if people don’t understand this choice and don’t accept it the result of their job will be messy.
The key conclusion here is that you should always keep your eyes on your engineers’ feelings about technical decisions and improvements. It is important to understand that it is not only about framework choices, it works in the same manner with any changes: integrating new practices, refactorings and standardizations.
cNPS
How to use NPS to measure code quality?
If you just measure NPS in your team or a whole department, it can be hard to differentiate the impact of the code quality and other work-related factors in this number. To solve this problem, we can try to ask engineers specifically about their opinion of the code quality in the same manner as NPS does.
My suggestion is to use the following question: “Based on your engineering experience with our project, how likely is it that you would recommend our project to a colleague or peer?”. The idea behind this question is pretty much the same as behind the NPS itself — the more you like to work on the current project from the technical perspective the more you are ready to promote it to your friends. We can call it “Code Net Promoter Score” or just “cNPS”.
The technique is to send a poll with this question to your engineers periodically (monthly for example), collect data and calculate it as usual NPS. Then you organize meetings and a technical backlog based on the feedback from such polls.
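For concreteness, the score can be computed exactly like classic NPS: responses of 9–10 count as promoters, 7–8 as passives, and 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. Here is a small illustrative sketch (the function name and the sample poll are ours, not part of any particular survey tool):

```python
def compute_nps(scores):
    """Compute an NPS-style score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; the result is the
    percentage of promoters minus the percentage of detractors.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Example monthly cNPS poll: 10 engineers rate the project 0-10.
poll = [10, 9, 9, 8, 7, 6, 9, 10, 4, 8]
print(round(compute_nps(poll), 1))  # -> 30.0
```

Tracking this number month over month, rather than its absolute value, is what surfaces whether the technical backlog work is actually paying off.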
Eventually, with all these metrics you will have the best technical solution your engineers can offer, and be sure that your customers and stakeholders are happy as well.
p.s. if you have any concerns, suggestions or questions about the idea described above, please leave a comment and I will do my best to address it.
Now or Never Bounty Campaign from Beldex!
It’s the time to get your Beldex coins for the ICO, and the price is $0.23. However, that’s the usual way of getting hold of our privacy enabled coins. What’s the different way, you ask?
Welcome to Beldex Bounty Program!
Earn coins by discussing the Beldex project in our Telegram group for at least seven consecutive calendar days. Participants who send spam messages or advertisements for other projects will be removed from the whole Bounty Program and the group.
Rules
a. Raise/answer questions about Beldex
b. Your own opinions about the Beldex ecosystem and its related technologies
c. Constructive discussion on the Beldex token sale
Coins will be rewarded as per the activity percentage; here are the numbers for your reference. Please note: a bad “flood ratio” (spam messaging) could disqualify you from the campaign.
Rewards are as follows:
If you are in top 3: 200 Coins
If you are ranked 4 to 10: 100 Coins
If you are ranked 11 to 20: 50 Coins
If you are ranked 21 to 50: 20 Coins
Combot will determine the winner based on the activities executed by members. And coins will be sent to the top 50 most active participants indulging in constructive promotions in the group. So join the telegram group and start earning BDX coins!
In case you wish to purchase coins the traditional way, the ICO will last till August 15th, 2018. The price of one BDX right now is $0.23, and it’ll be $0.25 from August 1st.
Website | LinkedIn | Twitter | Telegram | Facebook
Australian space startup cluster launched
The SmartSat Cooperative Research Centre (CRC) has established an organisation to represent space startup companies in Australia.
A rocket launch from Woomera, South Australia in 2017. Photo: Australian Defence
Called the Aurora Space Startup Cluster, the new company is based at the Lot Fourteen innovation neighbourhood in Adelaide, South Australia that is also home to the Australian Space Agency.
SmartSat CEO Prof Andy Koronios said Aurora already has more than 65 member companies from all sectors of the space supply chain but is looking for more.
He said they had startups offering rocket launch services, in-space computing, precision sensors, satellite digital twin technology, in-orbit and deep space operations, right through to ground station antennae development and Earth data applications for agriculture, resources and sustainability management.
“We invite space start-ups to join Aurora and help us build the space industry,” Koronios said.
Dr Tim Parsons, from Delta-V Newspace Alliance, chaired the Aurora Steering Group in the past year through its formation phase and said the new company aims to provide a framework for startups to grow through commercial collaboration with one another, research organisations, and local and international organisations.
“Startups are, by definition, companies looking to grow fast by leveraging new technologies and disruptive business models,” Dr Parsons said.
“If we’re to have any chance of meeting the nation’s ambitious growth targets for space, we need to help our space startups grow faster, in technical readiness level, in capability to execute, and commercial acumen.”
Dr Parsons will chair the inaugural board comprised of Directors Andrew Barton from Southern Launch, Troy McCann from Moonshot, Conrad Pires from Picosat Systems, and Dr Anastasia Volkova from FluroSat.
Prof Koronios and Peter Nikoloff will represent SmartSat CRC.
The SmartSat CRC is a consortium of universities and other research organisations, partnered with industry that has been funded by the Australian Government to develop know-how and technologies in advanced telecommunications and IoT connectivity, intelligent satellite systems and Earth observation next generation data services.
The impact of this research will be to develop intellectual property and a specialist industry expertise that will spawn new businesses, create export economic value and generate new high-tech jobs for all Australians.
Australia’s space industry is relatively young, with the Australian Space Agency founded in 2018.
The South Australian space ecosystem has grown in recent years, with collaborations with NASA and Japan Aerospace Exploration Agency, and the presence of numerous successful startups such as Myriota, Lux Aerobot, and ResearchSat.
Scaling our inventory cache reads to 1000X
In this article, we will talk about how we scaled our inventory services to handle ~100M inventory cache reads. Inventory services at Myntra are expected to handle millions of API hits, and a subset of these calls gets translated to ~100M inventory lookup hits on our underlying application layer. With that as the goal, we redesigned our core inventory services to handle the expected scale with a very low resource footprint. We will talk in detail about our initial approach and how we finally arrived at the proposed solution by leveraging the near cache functionality offered by Hazelcast.
What are Myntra’s inventory systems and what do they do?
Like in most other retail supply chain systems, Myntra’s inventory systems play a critical role in maintaining inventory data that is always available, accurate, and accessible in a reliable manner. These systems are directly responsible for ensuring that Myntra shows the right products to customers based on inventory availability, and they make sure that there is no underselling or overselling. We do this by keeping track of the total inventory available at any given time and the total number of orders taken against it, along with a bunch of other details, to accurately determine if there is any inventory available for a given product. The inventory is stored at different granularities such as locations, sellers, etc., and also at an aggregate level, i.e. the total inventory available for a given product across sellers and locations.
The inventory sub-system has multiple microservices to block/read/write the inventory data and offers all other supporting functionality related to inventory management. Currently, the inventory data is not directly accessed by the order-taking flows(customer order placement path) as the inventory data is mirrored in the user-facing system’s central cache through an async pipeline with an exception of inventory blocking for which there are APIs that are hit in the customer order taking flow.
Current Architecture
In the current setup, all of the inventory data is stored in a MySql cluster with master-slave replication. We have a single master and multiple slaves for redundancy; all writes go to the master and reads are distributed among the slaves, with async replication between the MySql nodes. There is no sharding, so the data is not partitioned.
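As a rough illustration of this topology, reads can be fanned out round-robin across the slaves while writes always go to the master. The sketch below uses stand-in node objects and is purely illustrative, not Myntra’s actual data-access code:

```python
import itertools

class Node:
    """Stand-in for a database connection; records the queries it sees."""
    def __init__(self, name):
        self.name = name
        self.queries = []

    def execute(self, sql):
        self.queries.append(sql)
        return self.name

class ReplicatedDb:
    """Route writes to the master and reads round-robin across slaves."""
    def __init__(self, master, slaves):
        self.master = master
        self._slave_cycle = itertools.cycle(slaves)

    def write(self, sql):
        return self.master.execute(sql)

    def read(self, sql):
        # Reads may be slightly stale due to async replication.
        return next(self._slave_cycle).execute(sql)

db = ReplicatedDb(Node("master"), [Node("slave1"), Node("slave2")])
db.write("UPDATE inventory SET qty = qty - 1 WHERE sku = 'X'")
print(db.read("SELECT qty FROM inventory WHERE sku = 'X'"))  # -> slave1
print(db.read("SELECT qty FROM inventory WHERE sku = 'X'"))  # -> slave2
```

The staleness noted in the comment is exactly the replication-lag concern that shows up again when scaling out reads is considered below.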
The service layer has a bunch of Java-based microservices, each having a separate schema in MySql. For this discussion, we would like to focus on the inventory promise engine, which today offers two functionalities -
Blocking the inventory for order taking
Pushes inventory changes to the central cache service
Central Cache Service
Central cache service hosts a lot of other data fetched from multiple source systems (inventory data is one among them). It is primarily used by the storefront (user-facing tier 1 services) for different use cases. It uses Redis as a backing data store. Even though it is called a cache, there is no fallback to the source systems when there is a data miss, and it is almost treated as a persistent store.
Whenever there is a change in inventory data, the inventory promise engine pushes a notification event (async) to the central cache service; the cache service then invalidates the inventory and re-fetches it from the inventory database by making a read-inventory call to the inventory promise engine. We have employed different mechanisms to make sure the inventory database and the central cache service are in sync and free from any discrepancies. With this design, the read throughput on the inventory promise engine is only limited by the rate at which the inventory changes, which is much lower compared to the actual inventory reads made on the central cache service. To be more precise, whenever a user searches for products on Myntra, the inventory check happens at the central cache service level and the core inventory services are untouched. It is only when the user places an order that we make a call to the inventory promise engine to block the inventory.
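The invalidate-and-refetch flow on the cache side can be sketched as follows. The source store and the class name here are stand-ins for the real promise engine and cache service, not their actual interfaces:

```python
class CentralCache:
    """Sketch of the invalidate-and-refetch flow described above.

    On an inventory-change notification, the cached entry is dropped
    and re-read from the source (the inventory promise engine is
    faked here as a plain dict).
    """
    def __init__(self, source):
        self.source = source          # stand-in for the promise engine
        self.entries = {}

    def get(self, sku):
        return self.entries.get(sku)

    def on_inventory_changed(self, sku):
        self.entries.pop(sku, None)             # invalidate
        self.entries[sku] = self.source[sku]    # refetch from source

source = {"sku-1": 5}
cache = CentralCache(source)
cache.on_inventory_changed("sku-1")   # initial propagation
source["sku-1"] = 3                   # seller updates inventory
cache.on_inventory_changed("sku-1")   # async event arrives later
print(cache.get("sku-1"))             # -> 3
```

The gap between the seller update and the async event arriving is the window where the cache is momentarily stale, which is the consistency issue called out below.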
Why was this setup this way?
Historically, we have been using a centralized cache service as a single source for different types of data. Data from different sources arrives here, and the application layer stitches all of this data together for rendering the search and product detail pages on the Myntra platform. Inventory systems also followed the same pattern, and inventory data is one among a few other types of data stored in the central cache.
Why do we want to change this?
The primary reason why we want to change this is not because of the read throughput as the cache service backed by Redis is quite capable of handling the expected scale. We want to redesign this to eliminate the cache layer altogether for inventory reads and scale the core inventory service(promise engine) to directly handle the user-facing inventory reads (~100M cache hits per min.). Following are some of the reasons why we need to remove the central cache layer for inventory reads -
As the inventory data is propagated to the central cache layer in an async way, there is a possibility of momentary inconsistencies creating issues, these issues are more prominent during sale days where there is a sudden spike in inventory changes from the sellers, one of the biggest issues is overselling.
There is an operational cost involved as the data is stored at two places, we need to make sure data is consistent, etc.
Whenever there is a change in the data model in the inventory database, the same needs to be changed at the central cache service. This is a big maintenance issue given that the two are handled by completely different teams. Every change needs to be communicated explicitly.
Central cache service is becoming a single point of failure
So, for the reasons mentioned above, we decided to remove the dependency on the central cache service and make the inventory system power the inventory reads for user-facing flows. What this means is scaling the inventory promise engine from less than a few thousand inventory reads to ~100M hits per min. at the inventory cache.
Proposed Solution
Scale the inventory promise engine to directly handle the inventory reads (~100M hits per min. at the inventory cache) and eliminate the need for pushing the inventory data to the central cache.
Initial Approach (Iteration 1)
Our first attempt was to look at scaling out MySql, as it was the biggest bottleneck. The service layer is already horizontally scalable, so we thought that if MySql could support ~100M inventory reads per min., then with small improvements in the service layer we could handle the expected load. We looked at the following changes at the MySql and service layers -
Add more read slaves in the MySql cluster
Enable sharding
But then we quickly realized that this would create the following issues -
It would need a lot of resources, as MySql is not an ideal datastore for such high throughput
Adding more slaves means added replication lag, causing momentary inconsistencies in the data
Maintenance and operational overhead, as we are looking at so many slaves and monitoring their replication delays
The complexity involved in application-managed MySql sharding, such as addition/removal of shards, query routing, etc.
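To illustrate where some of that routing complexity comes from, a minimal application-managed shard router might simply hash each key to a shard number; the hard part (not shown) is re-balancing data whenever shards are added or removed. The function below is purely illustrative:

```python
import hashlib

def shard_for(sku, num_shards):
    """Pick a shard for a SKU by hashing the key.

    Illustrates the query-routing piece of application-managed
    sharding; real setups also need data re-balancing on shard
    addition/removal, which is where most of the complexity lives.
    """
    digest = hashlib.md5(sku.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Every query for a SKU must always be routed to the same shard...
assert shard_for("sku-42", 4) == shard_for("sku-42", 4)
# ...but changing the shard count remaps most keys, forcing data moves.
print(shard_for("sku-42", 4), shard_for("sku-42", 5))
```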
Out of all this, the biggest challenge was the number of resources required. This approach might have been inevitable, and we would have explored it further, if the requirement had been to also scale inventory writes. But as the writes are very minimal and a single master can handle the current and projected load, we are currently only interested in scaling the reads. So we decided to move on and find a better approach.
Let’s Cache — Again? (Iteration 2)
From the previous iteration we figured that scaling MySql is not a good idea, but we also can’t replace MySql with something else, as we need a highly reliable persistent store with strong ACID properties for the inventory data. MySql was the de facto choice, and we would like to retain it. So we decided to add a caching layer on top of it (MySql) just for reads, as our aim was to scale only the reads while the write throughput stays in the tens of thousands.
We introduced Redis as a cache and decided to update it synchronously whenever there is an update to the inventory data. The idea was to use MySql for writes while making a parallel write to Redis, and to let Redis power all the inventory reads. Redis was our de facto choice when it comes to caching, as it is used extensively at Myntra for various use cases.
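A minimal sketch of this write-through idea, with plain dicts standing in for MySql and Redis (this is not the actual service code, just the shape of the flow):

```python
class InventoryStore:
    """Write-through sketch: MySql for durability, Redis for reads.

    Both stores are faked as dicts; in the real service the cache
    update happens synchronously right after the DB commit.
    """
    def __init__(self):
        self.db = {}      # stand-in for MySql (source of truth)
        self.cache = {}   # stand-in for Redis

    def update_inventory(self, sku, qty):
        self.db[sku] = qty        # 1. commit to MySql
        self.cache[sku] = qty     # 2. synchronous write to Redis

    def read_inventory(self, sku):
        # Reads are served from the cache; fall back to the DB on a miss.
        if sku in self.cache:
            return self.cache[sku]
        qty = self.db.get(sku)
        if qty is not None:
            self.cache[sku] = qty
        return qty

store = InventoryStore()
store.update_inventory("sku-1", 10)
print(store.read_inventory("sku-1"))  # -> 10
```

Because the cache is owned by the same system that owns the database, a read miss can fall back to the source, which the shared central cache could never do.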
How is this different from the current setup?
It may seem like this approach is the same as the current setup — a central cache powered by Redis. However, if we put aside the fact that we also want to use Redis, this setup is a bit different and solves the problems we talked about -
In the case of the central cache, we were using the caching service that is shared by multiple teams and we have no control over it
The central cache used async notifications to consume changes, we can’t change this if we decide to propagate changes synchronously.
Having our own cache means there is a better chance of keeping the data model consistent
The central cache can no longer become a single point of failure as we can have a fallback if we have our own cache
It is fair to say this approach is nothing but bringing the central cache into the inventory systems for deeper integration and better control, but then since it has other data that can’t be owned by inventory systems, we figured that having our own central cache kind of setup is a better approach.
With this in mind, we did a quick POC and performance benchmark to only figure that this approach works fine and we are able to achieve the scale we are looking at, but then this doesn’t seem to be the optimal approach for the following reasons -
We required a ~80 to 100 node Redis cluster to handle the scale of ~100M RPM. This is a lot for the given data size of ~10 Gigs.
Underutilization of resources, as each node handles a tiny amount of data.
Operability/maintenance overhead, as we are looking at a huge Redis cluster.
Even though we felt that we needed to rethink this approach of adding an external caching layer, we proceeded to explore two other caching solutions with the hope that we could improve resource utilization and bring down the total infra requirement. Along with Redis, we also explored Hazelcast and Aerospike.
Redis
Hazelcast
Aerospike
When we benchmarked all three by setting them up as standalone clusters, we saw comparable performance for our use case. There were some differences in the numbers, but not enough to pick one over the other. The payload size for each key was a maximum of 2500 bytes and an average of 1500 bytes. Also, different settings were used for benchmarking:
Redis only supports benchmarking for a cluster-based setup starting from Redis 6. The default Redis benchmark utility (redis-benchmark) also doesn’t provide a way to run the benchmark on custom data in contrast to Aerospike.
Application Embedded Cache (Iteration 3)
As we concluded in the previous iteration that a standalone cache would require huge resources, we decided to move on from the idea of a standalone cache to an embedded cache, as the data size is very small and we could comfortably fit the entire dataset in the memory of the application. This approach is super efficient in terms of resource utilization, as we are keeping the data within the application memory, which drastically improves throughput and latencies. However, there are a few challenges -
Every time we restart the node, we need to warm up the cache thereby increasing the application startup time
We need to keep all the nodes in sync with the MySql
Even with a distributed, shared, replicated embedded cache, we see the following issues -
A multi-node failure can cause data loss, requiring us to rebuild the cache from MySql
Every time we restart a node, we need to warm up the cache (node-specific shards), thereby increasing the application startup time
As restarting application services is a routine activity and the frequency is quite high we feel that application startup time is a big concern.
Near Cache for Rescue (Iteration 4)
Even though there are different ways to solve the problems mentioned in the above iteration, we wanted a cleaner and easier approach with less complexity, so we decided to explore further and figured out a hybrid approach where we have a standalone cache cluster with client-level caching (more like an embedded cache). To be more specific, we would like to have a cache system where the data is kept in a standalone cache cluster but also cached at the client side for faster retrieval. By pushing the data closer to the application proactively/reactively, we not only improve the throughput dramatically but also improve the overall availability and resiliency of the cache.
This approach has many advantages -
We don’t have to set up a huge standalone cluster, as most of the data is already cached within the application and the standalone cache is just a fallback
The application can be restarted anytime without the worry of data loss, and it doesn’t need to warm up the cache at the start
The frequency of cache rebuilds from MySql will be lower, as the standalone cache cluster is used as a fallback
The network chatter between application and cache cluster nodes is very low
With this approach in mind, we started hunting for caching solutions that offer the above functionality, and we quickly found that it exists in both Hazelcast and Redis under the name of Near Cache.
What exactly is Near Cache?
Near cache effectively means creating a local copy of data on the client side whenever data is read for the first time from the cache cluster. This eliminates the network trips between the cache cluster and the app nodes and hence reduces latency, as data is read from the application’s local cache in subsequent calls.
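As a minimal, self-contained sketch of the read path (plain Java with illustrative class and field names, not Hazelcast's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative near-cache read path: a client-side map in front of the
 *  remote cache cluster. Names are our own, not Hazelcast's API. */
class NearCache {
    private final Map<String, Integer> local = new ConcurrentHashMap<>(); // client-side copy
    private final Map<String, Integer> remote;                            // stands in for the cluster
    int remoteReads = 0;                                                  // counts "network trips"

    NearCache(Map<String, Integer> remote) { this.remote = remote; }

    Integer get(String key) {
        Integer v = local.get(key);
        if (v != null) return v;           // near-cache hit: no network trip
        remoteReads++;                     // miss: one trip to the cluster
        v = remote.get(key);
        if (v != null) local.put(key, v);  // populate the local copy
        return v;
    }

    void invalidate(String key) { local.remove(key); } // applied when the cluster pushes an invalidation
}
```

On a hit, `get` never touches the network; the cluster is consulted only on a miss, and an invalidation (pushed by the cluster on writes) forces the next read to refresh the local copy.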
Near Cache support in Hazelcast
We found that Hazelcast has great support for near cache, and our benchmarks and POCs proved it to be quite reliable. Near cache support in Redis seems to be currently available with only one client type, and overall we feel that Hazelcast is a clear winner in providing excellent support for it. It offers the following features out of the box -
Both client and cache server-side support (Near caching among cache cluster nodes)
Automatic client-side invalidation
Limit on number of near cache entries
Preloading of cache
Here is a snapshot of configuration options available in Hazelcast for the near cache -
More details here — https://docs.hazelcast.com/imdg/4.2/performance/near-cache.html
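For illustration, a client-side near cache for an inventory map could be declared along the following lines in Hazelcast's declarative client configuration (the map name and all values here are hypothetical, not our production settings):

```yaml
hazelcast-client:
  near-cache:
    inventory:                      # hypothetical map name
      in-memory-format: OBJECT      # keep deserialized objects for read-heavy access
      invalidate-on-change: true    # cluster pushes invalidation events to clients
      time-to-live-seconds: 300     # safety net in case an invalidation is missed
      eviction:
        eviction-policy: LRU
        max-size-policy: ENTRY_COUNT
        size: 100000                # cap on near-cache entries per client
      preloader:
        enabled: true               # persist keys and repopulate after a restart
        directory: /tmp/near-cache
```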
With this hybrid approach of a standalone cache plus the near cache support of Hazelcast, we were able to cut down the resource requirement by 80%, as we were pretty much serving the inventory data right out of the application nodes; the standalone Hazelcast cluster is just a minimal fallback. Also, we found that the Hazelcast community license was good enough for our use case, and we are using the same in production.
Data Consistency (MySql → Hazelcast)
While the above architecture, with near caching on the client side and a standalone Hazelcast cluster as a fallback, worked beautifully with the least amount of resources, there was still one more problem we needed to solve with respect to data consistency between the source (MySql) and the cache. Since we push the data to Hazelcast in an async way after committing the changes in MySql, there is always a possibility of discrepancies, for reasons ranging from the way we deployed the cache to failures in publishing or consuming the event. While momentary discrepancies are automatically resolved by employing retries on failures, any prolonged discrepancies need to be addressed proactively. We figured out the following two approaches to solve this issue -
Use-case-triggered reconciliation — Certain scenarios can be used as trigger points for automatically reconciling specific inventory data. For example, say a new order is created but we find that there is no inventory available; this is a good indicator to re-sync that specific product’s inventory, because if the cache had the actual inventory picture, the order should not have been created in the first place.
Incremental reconciliation — We run a periodic job that incrementally reconciles data between MySql and the cache. The job is pretty lightweight as it reconciles the data from the last checkpoint; since our data volumes are pretty low, we can run this job every few minutes without any issues.
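The incremental job can be sketched in a few lines of plain Java. This is an illustrative simulation only (the real job reads changed rows from MySql and writes corrections to Hazelcast; the names here are our own):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Illustrative incremental reconciliation: re-sync cache entries whose
 *  source rows changed after the last checkpoint. Not production code. */
class Reconciler {
    /** A source-of-truth row, e.g. one inventory record in MySql. */
    record Row(String sku, int qty, long updatedAt) {}

    long checkpoint = 0L; // timestamp up to which the source has been reconciled

    /** Fixes out-of-sync cache entries and returns the SKUs that were corrected. */
    List<String> reconcile(List<Row> source, Map<String, Integer> cache) {
        List<String> fixed = new ArrayList<>();
        long maxSeen = checkpoint;
        for (Row r : source) {
            if (r.updatedAt() <= checkpoint) continue; // covered by a previous run
            maxSeen = Math.max(maxSeen, r.updatedAt());
            Integer cached = cache.get(r.sku());
            if (cached == null || cached != r.qty()) { // discrepancy found
                cache.put(r.sku(), r.qty());           // source of truth wins
                fixed.add(r.sku());
            }
        }
        checkpoint = maxSeen; // the next run only looks at newer changes
        return fixed;
    }
}
```

Because each run starts from the previous checkpoint, the job stays lightweight even when run every few minutes.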
By employing the above two approaches, we can pretty much lower the possibility of discrepancies to almost zero. Even in the worst-case scenario, a momentary discrepancy is not going to be a big issue, because a discrepancy impacting the customer experience or revenue requires a lot of parameters to align at exactly the same time — the inventory becoming zero/available, a discrepancy arising at that moment for the same product, and a user trying to place an order for that same product at the same time — which is a rare possibility.
Scaling the service layer — Final Piece of the Puzzle
We initially thought we would plug Hazelcast into the current service layer (the promise engine) and scale the service layer horizontally by adding more nodes, but then we figured that
The current promise engine is written in Java + Spring as a traditional servlet-based application and does not offer great throughput. If we were to scale the same application to handle ~100M hits per minute at the inventory cache, we would again need huge resources.
As there is a lot of legacy code, a lot of effort would be required to upgrade the current setup to a better framework; it would more or less be a complete rewrite.
The promise engine is a tier-1 service offering the critical functionality of blocking inventory, so we don’t want to risk making big changes, as we don’t have a fallback for this functionality.
Due to the above reasons, we decided to create a new service only to serve the inventory reads, with a plan that we would eventually and incrementally migrate the entire promise engine’s functionality to this new service.
Choosing the right framework
As the throughput requirement was in the order of millions, we decided to explore event-based non-blocking frameworks, as these frameworks are proven to offer great throughput. We explored Vert.x and Spring WebFlux and benchmarked them against the traditional Spring + Netty-based setup.
To run a performance test, we used a setup that consisted of:
One application node [8 cores, 32G RAM]
Hazelcast cluster of 3 nodes [3 x 8 cores, 32G RAM]
The first run was for a batch size of one, meaning that one API call results in one Hazelcast read operation. The response payload size was 1 KB on average.
With the above RPM, the application node was running at full capacity in terms of CPU.
Based on the above numbers, we picked Vert.x and ran more tests with increased batch size. A batch size of N means that one API call will result in N Hazelcast operations.
We decided to go with Vert.x for two reasons -
It offered maximum throughput per core.
It is already being used in Myntra, so we have some prior experience.
Conclusion
After experimenting with multiple caching solutions and application frameworks to scale our inventory read layer, we finally proceeded with Hazelcast with near cache support and Vert.x, as this combination offered the maximum performance per core for our use case. We learned that this design can be leveraged in multiple places to effectively reduce resource usage.
References
Credits
Coauthors: Neeraj Gangwar, Priyank Pande
Thanks to Ramalingam R and Manoj Kansal for their review and support. | https://medium.com/myntra-engineering/scaling-our-inventory-cache-reads-to-1000x-84a8be1f576e | ['Vijay Muvva'] | 2021-06-08 15:06:44.372000+00:00 | ['Caching', 'Hazelcast', 'Redis', 'Inventory', 'Near Cache'] |
About Me — Kathy Chaffin Gerstorff | About Me — Kathy Chaffin Gerstorff
Writer, Poet, Flower Child, Perpetual Student, Bibliophile, Technophile, Nature Lover, Mermaid…
To find love, give love. ~ Kathy Chaffin Gerstorff
I’m excited to find a home for my blog writing on Medium. I have written on many platforms over the years, starting with AOL in the late ’90s. When Medium first started, I was writing on WordPress and Blogger. I wish I had paid more attention when I first heard of this platform for passionate writers and voracious readers, but no worries, I’m here now with you!
I love Medium’s clean design with no annoying pop-ups or banner ads. It’s so refreshing and very much worth the cost of a cup of coffee to read unique stories and discover talented writers. | https://medium.com/about-me-stories/about-me-kathy-chaffin-gerstorff-b6a8cea5e6f2 | ['Kathy Gerstorff'] | 2020-12-17 19:48:40.551000+00:00 | ['Entrepreneurship', 'Introduction', 'Poetry', 'Writer', 'About Me'] |
Should You Leave That Hard Ministry… or Stay? | Don’t get me wrong. There are times to make a change. But we need to be really honest about why we would leave.
Consider getting advice from a trusted mentor. Pray for clarity and peace. Don’t make rash decisions or major changes in the dead of winter, when our moods have a tendency to get more depressed than in the other three seasons.
“We need to be really honest about why we would leave.”
Timothy was in a challenging ministry when he received his first letter from the apostle Paul, his pastoral mentor. Rather than allowing Timothy to take the easy way out, Paul tells him to “stay” (1:3) and continue a hard work that included conflict and controversy, opposition, and criticism.
Those words may have been hard to swallow.
“Can’t someone else do this?”
“No, Timothy. You don’t realize it, but staying is as much for you as it is for them.” Okay, Paul didn’t say that. But I suppose he could have. Or at least I like to think he would.
Because running from the hard sometimes is a missed opportunity to grow. It is kind of like how a butterfly develops strong wings, by having to struggle with the cocoon. Pushing, straining, and developing. Through trial.
“Running from the hard sometimes is a missed opportunity to grow.”
I suppose our spiritual growth is often the same way. The hard is not just hard. There is always a purpose. This is why it may be more important to stay and face the hard in the power of the Spirit than jump the fence to what looks like greener grass.
But green grass always fades. Like a mirage.
This is a broken world and every field has its thorns.
It is Okay to Leave, But…
It really is okay to leave. Sometimes, the signs are very clear that we should.
But when the going gets hard, it just might be the most important time to “stay” and let God reveal his grace to you and through you.
As Jesus told Paul, “My grace is sufficient for you. For my power is made perfect through weakness.”
This is the Jesus who could have run from the cross. Instead, for the joy set before him, he endured it, knowing that there would be glory beyond the grief.
“My grace is sufficient for you. For my power is made perfect through weakness.”
Maybe there will be glory beyond your grief. Maybe not. But if our ministries are not about us but about him, we, too, for the joy of magnifying his grace, can endure it… to the praise of his glory.
By the way, if you are in a tough spot or feel as if you want to try something different, it really is okay to find a new ministry. But before you go, do some heart work by asking why.
“For the joy set before him, Jesus endured the cross, knowing that there would be glory beyond the grief.”
Questions to Ask for Discovering Your Why | https://medium.com/the-mustard-seed/should-you-leave-that-hard-ministry-or-stay-c41df199cb24 | ['Dr. Mckay Caston'] | 2019-10-03 14:56:00.702000+00:00 | ['Religion', 'Christianity', 'Church Leadership', 'Leadership Development', 'Spirituality'] |
Prayer for December 16, 2020 - | Prayer for December 16, 2020: Please pray with me. God who forms us, you built your image right into us. Some have trouble accepting this. Many more have trouble seeing your image in others — especially their enemies and anyone who is different in any way. But your image is imprinted on all people — regardless of how we like it. Clear our eyes to better see your image in others, and in ourselves. Open us to encounter you through people. Amen. | https://medium.com/@pastormatthewbest/prayer-for-december-16-2020-faef9e1cfca9 | ['Pastor Matthew Best'] | 2020-12-16 14:03:35.060000+00:00 | ['Christianity', 'Prayer'] |
Why account for innovation? | Why account for innovation?
To dismiss accounting practices for any innovation project is to deny that which spans industries, countries and time itself, writes Matt Kerr.
Portrait of Luca Pacioli, author of the first known published text on double-entry accounting.
This is Part 1 of our series on Innovation Accounting. Part 2 can be found here.
The use of traditional double-entry accounting stretches back to ancient times, so the impulse to associate Innovation Accounting with its traditional ‘bean counting’ namesake is entirely understandable. In the 1300’s the great trading empires of Venice, Florence, and Genoa were using double-entry accounting to keep track of their loaves, fishes, finery and more.
So what does something that once helped Venetian merchants keep track of salted cod imports actually have to do with innovation?
“People are accustomed to thinking of accounting as dry and boring, a necessary evil used primarily to prepare financial reports and survive audits, but that is because accounting is something that has become taken for granted.” Eric Reis — The Lean Startup
Putting innovation and accounting together in a corporate environment can often result in innovation being crushed under the weight of unrealistic financial targets. When customer-led product development meets premature revenue targets, it tends to be the revenue-target tank division which wins the battle.
“Our prototype has been testing well with customers”. “Great, let’s give it some financial targets and throw it into battle.”
If Innovation Accounting isn’t about traditional double-entry accounting and it also isn’t about applying traditional business case metrics to innovation initiatives that can’t yet support them, what actually is it about?
Our friends over at The Corporate Startup have come up with a really useful definition for Innovation Accounting:
“Innovation Accounting is about managing three key activities:
Making investment decisions
Tracking and measuring the success of specific innovation projects
Assessing the impact that innovation is having on the business as a whole”
Measuring and managing innovation initiatives across these three dimensions allows us to both hold ourselves to account, and to have an ongoing dialogue with our executive sponsors and boards about how our innovation efforts are tracking.
So, how do we go about measuring our innovation efforts across the three key activities?
At IE, we have developed a framework to help our clients navigate the new world of innovation accounting. Our framework breaks innovation metrics into four layers:
The IE Innovation Accounting framework
Given our growth focus, we think it is important to split traction metrics (backwards-looking) and growth metrics (forward-looking) into their own layers. The distinction is made to help with adopting a Venture Capitalist mindset — evidence of traction will get a VC interested, but the final decision to invest is based on the future growth story. It also helps when managing the transition from achieving traction (product-market fit) to explicitly driving growth as our business model matures and we begin to scale more aggressively.
While we may associate measures and targets with corporate bureaucracy, having complete freedom to innovate inside an organisation often brings its own set of challenges. At IE, we regularly talk to leaders of Corporate Innovation programs which have been cut short and we find broadly two types of demise — short-lived programs which closed down because they weren’t delivering, and longer programs that survived by changing their reason for existing a number of times before they are eventually closed down.
Programs that aren’t delivering tend to fall into the gap between traditional and Innovation Accounting. Innovation projects shouldn’t be expected to deliver against short-term revenue and profit targets. However, in the absence of these metrics business leaders will often only support ‘trust me, I’m an innovator’ for a finite period of time. The progress layer of our framework helps support a conversation around the level of activity, the learnings that are being uncovered, and the cost/runway remaining. The traction and growth layers extend beyond basic activity to inform an evolving conversation about the realistic growth potential of an initiative.
In the case of both short-lived and constantly morphing innovation initiatives, we believe the core problem can also be traced back to a lack of agreed metrics which can itself stem from the lack of a clear Innovation Thesis.
An Innovation Thesis clearly sets out why the innovation program exists and how it fits into the future view of the organisation.
“In order to lead innovation successfully, every company needs a clear innovation thesis. Leaders need to take a point of view about where the world is going and how they plan to use innovation to respond. This Innovation Thesis must be aligned with the overall corporate strategy.” — Tendayi Viki.
Our view at IE is that the best way to align corporate strategy with an Innovation Thesis is through a growth lens. While we should avoid saddling individual innovation initiatives with unrealistic growth targets, we can still frame the growth outcome we are seeking from our overall innovation program.
When we have determined an innovation growth frame, we can then use Innovation Accounting to measure how we are tracking toward the overall growth objective. We refer to this process of setting a holistic growth target and then managing many individual initiatives as ‘innovation portfolio management’. Not only does this help us adjust course over time, it anchors innovation within the corporate strategy and supports alignment around what the program is (or perhaps isn’t) delivering back to the organisation. This is the reason our Portfolio layer looks at metrics such as aggregated market and growth potential. While these metrics are uncertain by nature, they inform the growth frame for the innovation program and support more strategic conversations about innovation.
If we can create alignment between our corporate strategy, our innovation thesis, and our innovation metrics, we will be in a powerful position to avoid being short-lived or existing in a state of constant morphing. While this may sound ambitious for a new discipline like Innovation Accounting, the tools and concepts we need already exist, and it is more about how we adapt them to our emerging needs. Knowing what metric to use when is the critical piece — is our Idea Funnel being effective? Are our validation sprints running at an acceptable velocity and quality to test assumptions? Are our projects tapping into addressable markets that create a large enough return to justify an investment to scale? How much should our overall program invest, and in which parts of the innovation system, to deliver the ambition of our innovation fund?
Back in their day, those Venetian merchants had to rise to a similar challenge. Faced with the twin disruptive innovations of a monetary economy and the emergence of ‘trade finance’ they adopted double-entry accounting to manage their increasingly complex affairs.
Those merchants, with their double-entry accounting and foreign trade, were actually the ‘international change agents’ of their day. It turns out that we were never just counting beans, or loaves, or fishes — instead we have been using accounting to help navigate uncertainty for a long time. Its significance in history should provide solid justification for its use as a tool for navigating modern innovation projects.
Matt Kerr is an innovation consultant at IE. To learn more about our thoughts and expertise on corporate innovation and how to innovate for growth, visit our website. | https://medium.com/ie-company/why-account-for-innovation-39d6f6e9bd77 | ['I.E.'] | 2019-07-30 05:54:13.323000+00:00 | ['Corporate Innovation', 'Innovation', 'Innovation Accounting', 'Innovation Consulting', 'Innovation Management'] |
BLK LUV — Game Of Spreading Love To End Hate | Welcome!
Welcome to BLK LUV!
😍 BLK is LUV. As a blockchain nonprofit, we studied the race-ism game and realized that it’s a rigged game of divide and conquer that keeps changing the rules every time certain 🧑🏾🦱 players level up. So we created a new game of spreading Love (LUV) that only has one team #TeamLUV because studies show that we identify each other by teammates, before our race.
The # social movements are great, but every campaign needs a call to action, so we created a Role Playing Game that uses Telegram, Twitter, $LUV, the $LUV App wallet, and NFTs (Non-Fungible Tokens). The NFTs can be bought, sold, and traded, and they give you powers in the game and the real world.
To start the game, you must join the BLK LUV lobby here: https://t.me/luvmagiclobby
You need a BLK $LUV App Wallet to play the game. Once you enter Telegram the bot will instruct you on how to obtain the private link.
Once you have your wallet you will start the game with 10 $LUV.
BLK LUV Goals
Diversify the blockchain industry. In the past, only 1% of black-owned startups received investments. We solved this problem by creating an NFT marketplace called https://luvnft.com.
Earn, buy and sell your own NFTs on our platform LUV NFT (The average NFT seller makes $6k a month). NFTs use blockchain technology to verify ownership, so the days of anyone stealing your unique creations are over.
70% of black-owned businesses closed due to a lack of capital during the pandemic. We solved this issue by building a new digital economy powered by love ($LUV). Money, business, and love can co-exist. $LUV won’t shut down your business or evict you from your home over missed payments.
Diversify the gaming industry. Our game of spreading $LUV will lead to our LUV Metaverse game, where you will get paid to play and have the ability to create a virtual business.
By completing quests of spreading $LUV, you will level up your king or queen status with access to new temples.
Meet and network with others in our tribe.
Market your LUV-owned business without paid ads or crazy algorithms.
Onboard your business to accept $LUV for products and services.
Educate our tribe about the new digital economy.
Help build the game, lore… use your imagination
More to come!
What Is An NFT?
What Is LUV NFT?
NFTs — Gaming Items
“NFT” is just a fancy way to say an in-game asset.
For example, Ankh key, scroll, staff, coin…
The NFTs in BLK LUV can be bought, sold, traded, and they give you powers and special perks in the game and the real world.
True ownership, baby!
Here are some items the game has already.
☥ Ankh Key — Keys allow you access to different Chambers
🍩 Doughnut — Digital doughnuts are healthy
🍕 Pizza — Royalty eats pizza
✂️ Scissors — Everyone in the beauty industry needs a pair
🎤 Mic — Are you in the music industry? Then you need a mic
🍣 Sushi — Kings & Queens love sushi
🍰 Slice of cake — #HBD celebrate your bday with a slice of cake
🍷 Wine — Kings & Queens love wine
🍕 Pizza (Base) — Royalty eats pizza
🍿 Popcorn — Eat popcorn while you watch the spread $LUV movement
🌭 Hotdog — Eat or share a meal with someone
🩺 Stethoscope — Every doctor needs these
🍾 Campaign — Celebrate our solutions
🍔 Vegan Burger — Health is wealth
🥄 Spoon — You can’t eat certain meals without a spoon
🍭 Candy — Everyone gets a sweet tooth every once in a while
Quests
You will need an Ankh key ☥ in order to participate in Quests.
Once you have an Ankh key you gain access to the Base Quests Temple and you can do any of the Base Quests.
There will be *many* Base Quests.
Each Quest completed will gain more items, and make you a more powerful King or Queen.
Temple
The Lobby is the first temple.
Your purpose in the Lobby is to ask questions and obtain an Ankh Key ☥.
Don’t be shy!
Once you obtain an Ankh key you officially start the game and gain access to the “Key Holders” room.
(Tweeting or Retweeting content with #TeamLUV #OneLUV helps!)
Here are some of the Chambers already built in the game (in Telegram).
Game Pieces
BLK LUV Twitter: twitter.com/blkluvorg
Where to buy BLK LUV NFT items: https://opensea.io/assets/blk-luv
Donate: wefunder.com/luvnft | https://medium.com/@blkluvorg/blk-luv-game-of-spreading-love-to-end-hate-8119679a1974 | ['Blk Luv Org'] | 2021-04-25 19:43:50.585000+00:00 | ['Blackownedbusiness', 'BlackLivesMatter', 'Racial Justice', 'African', 'African American'] |
Social Media Will Eat You Alive | Photo by Prateek Katyal from Pexels
Instagram. Facebook. Twitter. We’re all consumers. We scroll. We double-tap. We swipe.
It’s online shopping, and we’re shopping for others and selling ourselves. I want to think I can stop consuming, but whenever I delete the app I find myself downloading it within a few days — or hours. Sometimes minutes.
I am constantly thinking up ways to commodify myself. What’s the best thing about me? What can I sell that others will buy?
My body — okay, show more skin.
My creativity — write a more clever caption.
My happy family — film every moment.
My depression — use authenticity for likes.
Maybe it’s the dopamine hits. The little pecks of joy that come with each like. For me, it’s the affirmation. Look — I am worth something. 129 people told me I looked cute in this photo. Now I have tangible proof that I am pretty. 215 people watched my story; I’m important.
Honestly, I never even look at the people who are doing the watching or liking. It’s the number I’m after. The number tells me how much I matter.
But somehow in all my searching to find what in myself might matter, I lose myself. It’s all smoke and mirrors. Pushing yourself to be better for others is a house of cards. There’s nothing behind the curtain.
When I exist for others to consume, often there’s nothing left for me.
Nothing left for me to be silent with and be present with. I can hardly be alone or sit still. I turn to my phone to fill my cavernous mind with gifs and photos of celebrities and more likes. I can’t be alone with myself; it’s other people that remind me I matter.
I realized lately that I don’t think people matter unless they’re actively achieving and growing. I don’t know how to see worth in other humans — or myself — for just being human. For only existing. For being in the world. I can see the worth in a long dirt road, or in a leaf floating down, or in a mountain; but I can’t see worth in myself unless someone acknowledges me and gives me the double-tap of a gold star.
“I can quit any time I want” by By Yasin Osman
Lately, I’ve attempted to sit in silence. I’ve tried to look at my phone less and look around me more. I’m shit at it. Without a screen to quiet the whispers that I am not enough the harsh silent stillness of the world is horrifying.
I wonder if I’m the only one like this, or if all the people watching and liking are, on some level, in search of the same thing: proof of worth flowing from a river of consumers, eating me alive.
I haven’t solved the problem yet — I keep downloading the app. I keep sharing my stories. I continue commodifying myself and my life. There’s not a quick fix solution to cutting something out of your life that was perfectly engineered to keep you as an addict.
The problem is only going to get worse. The more data collected, the more targeted the addiction. Comment below to share your own experiences, and if you’ve discovered the antidote yet.
And you can, of course, follow me on instagram here. | https://medium.com/age-of-awareness/social-media-will-eat-you-alive-c002dd23834c | ['Alyssa Grenfell'] | 2020-12-13 16:23:45.103000+00:00 | ['Mental Health', 'Life Lessons', 'Social Media', 'Digital Life', 'Technology'] |
prostate healthy foods | prostate healthy foods
Do THIS Simple 60-Second Trick For Breakfast, Save Your Prostate
The #1 method to help support a healthy prostate
Hello, my name is Sam Morgan!
And what I have to share with you today is actually something that may help with a very embarrassing problem men are told they HAVE to get used to:
Frequent bladder issues...
This doesn't have to be a young man's game, so if you ever thought to yourself that you should settle for this...
Trust me, this is the last thing you'd want to do!
Because after many years of personal trials and experiments, I believe that I have finally discovered a special blend of ingredients which may help anyone support a healthy prostate.
The following formula is calculated and adjusted to give THE BEST POSSIBLE RESULTS when it comes to helping maintain the good health of your prostate!
Read More | https://medium.com/@loseweight648/prostate-healthy-foods-7c68e4c224a5 | ['Health'] | 2020-12-24 02:43:20.610000+00:00 | ['Healthy Lifestyle Tips', 'Health', 'Fitness', 'Health Foods', 'Healthy Lifestyle'] |
Are you looking for Reliable Road Freight and Express Road Freight transport to Bosnia and Herzegovina solutions ? | Are you looking for Reliable Road Freight and Express Road Freight transport to Bosnia and Herzegovina solutions ? Maxi Logistics Services ·Oct 28, 2021
Are you looking for Reliable Road Freight and Express Road Freight transport to Bosnia and Herzegovina solutions?
Do you have a Road Freight order to Bosnia and Herzegovina?
Let’s do it! More information: https://www.maxitransport.eu/en/hizmetlerimiz/detay/international-land-transportation
You can also connect with us by phone or e-mail.
[email protected] & +90 216 335 55 97
For reliable Road Freight and Express Road Freight logistics to Bosnia and Herzegovina, ask us.
#LogisticsAskUs
#Europa #Avrupa #LogisticsAskUsEU
https://www.maxitransport.eu/en/
https://www.maxitransport.uk
#shipping #cargo #freight #freightforwarding #supplychain #export #import #transport #Poland #MaxiLogisticsServices #Czechia #Slovakia #Hungary #Prevoz #Turska #Logistika #Lojistik #Minivan #ExpressTransport #Taşımacılık #Nakliye #BeyondOfLogistics #Bosnia #Albania #CrnaGora #Srbija #Croatia #Slovenia #LojistiğinÖtesinde #Bulgaria #Macedonia #Netherlands #Kosovo #Belgium #Greece #Romania #Germany #France #Denmark #Minivan #Speedy #Panelvan | https://medium.com/@maksitransport/are-you-looking-for-reliable-road-freight-and-express-road-freight-transport-to-bosnia-and-161ada010be | ['Maxi Logistics Services'] | 2021-10-28 12:23:27.528000+00:00 | ['Export', 'Maxi Logistics Services', 'Bosnia And Herzegovina', 'Supply Chain', 'Logistics Ask Us'] |
What Are You So Afraid Of? | What Are You So Afraid Of?
Photo by Alex Iby on Unsplash
Few things paralyze us harder than fear. No one is left untouched by it, not even the bravest warriors in history. The difference between normal people and those who seem fearless is that the latter have learned to control fear, to use it to their advantage, to turn it into power. First step: discover what you are so afraid of.
Fears can be multiple: fear of loneliness, illness, change, new, unknown, darkness, death, loss of a loved one, loss of freedom, humiliation, etc. They are so powerful that they block us, grab all our attention and concentration, distract us from many things we should be doing. It often keeps us from evolving for years.
Photo by Aarón Blanco Tejedor on Unsplash
What Is Fear?
Fear is an emotional state created by our minds in some uncertain situations. When we do not know what is going to happen, we imagine the future, positive or negative. When the scenario is negative and we think something bad will happen, we are afraid.
Most often, it is caused by a lack of self-confidence. There is the fear that we are not enough. We think we are not good enough, talented enough, smart enough, beautiful enough, etc. There are also situations in which fear has nothing to do with our abilities, as in the case of fear of illness or death.
Photo by Helena Lopes on Unsplash
What Is Social Anxiety?
Social anxiety is the fear of a social situation in which we are exposed to the evaluation of others. The fact that we may become embarrassed at some point comes to control our lives and the way we operate, so we end up avoiding certain social contexts in which we could be rejected.
There is a permanent and irrational fear of going to a party where there might be people we don’t know and where we risk making fun of ourselves; when we believe that our point of view may be inappropriate in the eyes of others; when we are observed while eating or drinking. Or simply when we refuse to start a conversation, because we don’t think we have anything interesting to say.
Photo by Viktor Keri on Unsplash
Being Afraid of Fear — A True Story
From the moment I started to know myself and realize what I wanted, I became aware that I was afraid of fear. I had a clear case of social anxiety, and at the age of 15, I decided to control it. We are social animals, and the desire to be accepted makes us overcome any impediment.
So, I became what can best be called a social chameleon. Was I in a dance club? I danced first (although I hate to dance). Was I in a reading club? I exposed all my inner opinions and feelings (although I hate to expose my soul on a platter). Was there a drinking contest? I was always the one who won (although I hate the person I become when I drink). Was there someone telling a joke? I laughed first, even if I thought it was the lamest joke ever.
At the age of 25, I realized for the first time how mechanically I had lived all this time. I turned into a completely different person and lost my identity, just because I was afraid of fear and I wanted to be accepted. I began to feel the pressure to change; to become ME again.
Well, I’ve done it. Somewhat. Let’s just say I’ve reached an existential balance. I don’t dance anymore, I drink only when I want to, I express my feelings only with people that I care about, I laugh only at the best jokes. The old me would be sad. Well, I’m not. For the first time in my life, I feel totally free.
Photo by Caleb Woods on Unsplash
Is It Normal to Be Afraid?
Yes. It is normal to be afraid. It is real that people judge us; that others fight for the same resources and will act ruthlessly to get their hands on them first. The desire not to be alone and the desire to maintain our present state of mind are also normal.
All these things come from hundreds of millions of years of evolution. It is molded into our genetic code, and we will not get rid of this reality as long as we live.
No. It is not normal to be afraid… Wait? What? I’ve just contradicted myself! Yes, only intelligent people can understand that a question can be answered with yes and no at the same time, and that both answers can be valid. To fully understand that, you should first grasp the meaning of “normal”.
Normal means, in this context, something that has statistical prevalence or comes from genetic programming. For example, it is normal not to be gay for the simple fact that most people are straight. But that doesn’t mean that’s the way it should be, or that anything outside the statistical majority is wrong. Is it normal for people to have brown eyes? Statistically, it is the most common color. Does it make you bad if you have blue or black eyes? It doesn’t make you less normal. It just makes you special.
Photo by Greg Rakozy on Unsplash
Final words
I beg you: don’t change. You are beautiful, smart and strong. You should not be afraid of fear. Get to know yourself and every time you are afraid of something, just keep telling you that you are better than that. And believe it. | https://medium.com/thethinktank/what-are-you-so-afraid-of-c50a9e636a26 | ['Helen Bold'] | 2020-12-03 09:00:20.324000+00:00 | ['Life', 'Anxiety', 'Life Lessons', 'Fear', 'Social Change'] |
Esports, Gamification & Virtual Reality | Esports, Gamification and Virtual Reality
Yesterday I went for a 26km cycle through the streets of London with my father 🚲
There was no traffic and the weather was perfect ☀️🛣
We felt the burn on the uphills and on the downhills we cruised 🥵 🚀
We used each other’s slipstream for energy saving and we motivated each other throughout the ride 🚴🏽♂️🚴🏽♂️💨
But here’s the catch: My dad was at home in Cape Town, South Africa 🇿🇦 and I was in my apartment in London 🇬🇧. Indoor trainers combined with VR software (Zwift) are an incredible way to exercise, socialize and compete 🏆
The simulation is real. Your weight and power are used to determine your speed on the route depending on the gradient of the road, and the indoor trainer modulates the resistance accordingly ⚙️🕹
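Zwift’s exact physics model is proprietary, but the behavior described above can be approximated with the standard cycling power equation, where pedaling power is balanced against rolling resistance, gravity (the road gradient), and aerodynamic drag. Here’s a minimal sketch — all parameter values are illustrative assumptions, not Zwift’s real constants:

```python
import math

def speed_from_power(power_w, mass_kg, grade, cda=0.32, crr=0.004, rho=1.225, g=9.81):
    """Solve P = (rolling + gravity) * v + 0.5 * rho * CdA * v^3 for v (m/s).

    Uses bisection: on flat or uphill roads the power demand is strictly
    increasing in v, so there is a single root inside the bracket.
    """
    def demand(v):
        theta = math.atan(grade)                      # road angle from rise/run gradient
        rolling = crr * mass_kg * g * math.cos(theta) * v   # rolling resistance power
        gravity = mass_kg * g * math.sin(theta) * v         # power spent climbing
        aero = 0.5 * rho * cda * v ** 3                     # aerodynamic drag power
        return rolling + gravity + aero

    lo, hi = 0.0, 30.0  # bracket 0–108 km/h
    for _ in range(60):
        mid = (lo + hi) / 2
        if demand(mid) < power_w:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 250 W from an 85 kg rider+bike on a flat road → roughly 10–11 m/s (~37 km/h)
v_flat = speed_from_power(250, 85, 0.0)
# the same rider at the same power on an 8% climb is far slower
v_climb = speed_from_power(250, 85, 0.08)
```

On the flat, the cubic aero term dominates, which is why riding in someone’s slipstream saves so much energy; on a climb, the gravity term takes over and speed collapses even at the same power.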
Esports is a hot topic not only for gaming but for other areas where massive innovation is coming and the traditional sports and entertainment industry will be changing dramatically as a result. Ride on, play on and watch on this space 🚴🏽♂️🎮 | https://medium.com/@david.nathan092/esports-gamification-virtual-reality-885669b81cd2 | ['David Nathan'] | 2019-04-12 07:57:47.236000+00:00 | ['Esports', 'Virtual Reality', 'Sports', 'Cycling', 'Zwift'] |
Top 5 VR Games of 2020 | Alright, so with the year winding down I’ve opted to go with mostly tech related discussions for the next few posts. This one being focused on one of my favorite purchases from 2020: My Valve Index. Having had it since early September, I’ve logged about 240 hours on a number of different games, and have definitely seen the light on why people have gone for more enthusiast grade VR setups. However, I’m primarily here to discuss the top games I’ve played this year so far (plus some honorable mentions).
5. Pavlov
Pavlov was one of the first co-op games that I purchased for my Index, combining a whole new experience with something that is otherwise very familiar. I played CS:GO, TTT and a number of Garry’s Mod variants with friends when I was younger, and Pavlov offers a few of those as a VR experience with some twists. You can play on maps from a number of non-Valve titles (Rust from MW2 being one of them), including some of the TTT maps from back in the day. If that’s not your cup of tea, there’s always deathmatch or classic bomb defusal in public or private lobbies. 1v1 is also an option, and the game includes a decent selection of weapons and maps, a shooting range, and a mock run that’s perfect for speedrunning (think the run you do at the beginning of CoD 4) and honing your shooting. The game doesn’t come without flaws, such as spawning without a weapon at times or the double barrel ejecting your shells upon reload, but it provides a nostalgic experience with a fresh spin.
4. SUPERHOT
I played the original Superhot on desktop and heard the VR version was an infinitely better option. And you know . . . it’s true. In short, SUPERHOT VR is a game where you’re spawned into hostile situations, and it’s up to your quick thinking and, strangely enough, slower reactions to survive them. Playing in a large space allows greater immersion into a world where time only moves when you do, and it’s wildly entertaining. I’ve found myself practically on the floor trying to dodge bullets at times, and when you familiarize yourself with some of the levels, you can ratchet it up and get far more aggressive with your play style. Cutting a bullet in half with a knife, slashing the polygonal enemy before you, taking his weapon and then gunning down the rest of the room is pretty satisfying. The game offers an endless mode, which is seeing how long you can survive an onslaught of enemies while blinking around the room to gain a better advantage. In a world where things move at the speed you do, it often doesn’t feel slow enough (firing your gun causes time to speed up for a brief moment), but that’s due to how often you find yourself actually making minute movements. It’s a game that should be experienced by everyone who gets to play VR, even if you don’t think you’ll keep it in your library.
3. Beat Saber
To be honest, it was really difficult putting Beat Saber here, given that it’s incredibly addicting and the skill curve for those who play it is very impressive. I also haven’t played it as much as my top two, but that doesn’t make it any less of a fantastic title (because it is). Waving around lightsabers in rhythm to cut blocks has never felt so fun, and coupling it with mods and sites like Beat Sage can fill hours with fun and challenging new songs. Throw a leaderboard rank system on top of it and you’ve got a whole new way to get up and active. The campaign recycles harder versions of the same songs you play in the beginning with new challenges, including limited missed blocks, total distance you’ve waved the controllers, and overall speed. It wasn’t long before I installed ModAssistant and began to put some of my favorite songs into the game’s custom maps. They’re not always note-for-note perfect (I’d say anywhere from 85–90% accuracy), but if you want the hardest version of a song, the user-created versions give you that extra degree of difficulty to make you want to practice a song until you can 100% it. I think it’s safe to say that Lightsaber Guitar Hero should be in every VR user’s library.
2. Blade & Sorcery
Blade & Sorcery was practically made to be used with NexusMods. You can mash up some of your favorite fantasy titles in wave/endless arena-style combat. A game that takes advantage of the Index’s Knuckles controllers gives you an extra depth of immersion when picking up objects or harassing the NPCs. The flow and weight of objects in-game feel great, from the faster daggers and short swords to the claymores and axes (two-handed weapons can behave oddly, however, such as holding a staff as Darth Maul would). Even more fun are the downloadable maps and extras. You can wield Mjölnir and smite enemies with lightning, twirl it to fly (which can be disorienting) and turn the NPCs into baseballs. It will always return to you, and as fun as it can be to pound your foes into a pulp, it’s still not the most fun you can have. That’s reserved for the Outer Rim mod, a mod dedicated to putting dozens of blasters, lightsabers, and other accessories from Star Wars into your hands (albeit virtual ones). The fact that you can Force-pull someone into your lightsaber, dismember them, and customize the blades of the sabers for FREE is incredible to me; it doesn’t even require a small donation to download. The multiplayer mods aren’t updated at the moment and I look forward to getting around to using them, but for now the single player is enough enjoyment, especially with the higher-volume waves (given that your computer can handle the number of player models).
1. Half-Life: Alyx
This shouldn’t come as a surprise to anyone: Half-Life: Alyx takes the number one slot. It’s the best AAA game offered in VR by a major studio, and the story is incredibly well written, with a number of fun recurring characters whose dialogue never comes close to being annoying. The ending is wonderful and surprising for those who have been waiting nearly thirteen years for another addition to the Half-Life saga (and disappointing for those who don’t own VR and can’t experience it). The number of ways you can interact with the Combine, the Xen beings, and the overall environment as a whole is riveting. You can block with objects around you, gain the upper hand by using your environment to your advantage, such as peeking around doors and blind firing, and to a degree you can customize your weapons. Throughout the game you find resin that can be used to upgrade aspects of your three guns (pistol, shotgun, and a Combine SMG), and if you don’t take time to search for it, you’ll be forced to ration which upgrades you deem more important if you’re trying to save for the more expensive ones. The laser sight is the most impactful in my opinion, next to the auto-loader for the shotgun. This brings me to another aspect Half-Life does well, given that the HL games are inherently thrillers: horror. I can’t describe the number of times in a dark room with a flashlight that I accidentally dropped a magazine while having to reload and had a small panic attack trying to pick it back up before I was almost killed. There’s a sneaking part of the game where you have to evade a mutated creature (which I will not spoil) that was genuinely one of the spookiest experiences I’ve ever had to go through in a game. It even gives you the option to put on hard hats that protect you from initial barnacle grabs, and to cover your mouth if you can’t find a respirator, preventing effects such as coughing that alerts enemies or impairs your vision and senses.
It’s probably my game of the year choice, and was the most fun I’ve had playing a game since Dark Souls. It’s a game everyone should take the time to experience, and it is the culmination of the best experiences that VR has to offer. | https://medium.com/@destrampenick/top-5-vr-games-491807b8271d | ['Nick Destrampe'] | 2020-12-27 16:02:07.606000+00:00 | ['Virtual Reality', 'Videogames', 'Gaming'] |
Let Us Reconcile | Let Us Reconcile
27th DECEMBER 2020
Christ for Youth International
“Be angry, and do not sin”: do not let the sun go down on your wrath.” [Ephesians 4:26 NKJV]
Life is full of relationships. Throughout life’s journey, the people we interact with at home, our workplaces, at church or school are bound to offend us just as we are likely to offend them. Getting offended sometimes is inevitable since we are all imperfect people.
Have you been offended by someone close to you this year? Has it caused a rift in your relationship? If so, one way to end the year well is to forgive those who have hurt or offended you during the year. The Bible says the sun should not go down on your anger. As the year ends, we must do well to let go of every hurt or bitterness. Bitterness robs us of God’s best in life.
Pray about that hurt and ask God to help you forgive. If it’s possible, try to reconnect with any person who hurt you and let them know you are not holding anything against them. Even if you face some resistance in the process of reconciliation, go ahead and offer your forgiveness.
Don’t let the journey ahead of you next year become burdensome by carrying pain, hurt, bitterness or offences into the year. The year ahead is too full of promise and hope to be weighed down by ‘baggage’ that will set you back.
Don’t let the sun go down on your anger. Don’t let the year end while you still hold onto a hurt or pain caused by someone. Embrace the peace of God by letting go of any hurt you’re holding on to.
Dear Lord, thank you for forgiving my every sin. I pray for strength and courage to let go of any pain, bitterness and hurt as I enter a new year.
Further Reading: Romans 12:19–21
Prophetic Declaration: Psalm 91 | https://medium.com/christ-for-youth-international/let-us-reconcile-393a194037ca | ['Precious Moments'] | 2020-12-27 03:45:38.843000+00:00 | ['Christ', 'Reconcile', 'Family', 'Precious Moments', 'Youth'] |
Trust Wallet community votes on supporting the KIN migration and adding support for Solana tokens. | This is the extent of my art skills.
Trust Wallet invited the KIN community to submit a governance proposal; Trust Wallet governance enables existing holders of $TWT to vote and decide on proposals relating to the development of Trust Wallet. Our proposal has been submitted and is located here.
In order for Trust Wallet to support Kin’s migration to Solana, the Trust Wallet dev team needs to add support for Solana’s tokens (SPL tokens) before 8 December 2020. Currently, Trust Wallet supports Solana’s native coin (SOL) but does not presently support SPL tokens.
The Trust Wallet community is currently voting on our proposal. If you already held $TWT prior to the proposal submission time, you too can cast your vote.
KIN and SOLANA: working together for a win-win
This proposal benefits both the KIN and Solana communities. The KIN community will be able to keep using their favourite (and trustworthy) wallet post-migration. The Solana community will be able to use Trust Wallet to store SPL tokens, and as their token platform continues to grow, the demand for a reliable wallet solution to support their community will increase.
The Kin community and Solana community can work together to rally the support of the Trust Wallet community. In this way, it’s a great opportunity to have our communities all involved together.
I hold $TWT — where can I vote?
Make sure you read: How to Vote in Trust Wallet Governance.
In the Trust Wallet dapp browser:
1. Navigate to: https://governance.trustwallet.com/
2. Select our proposal
3. Tap on the ETH logo at the upper right corner and then choose Smart Chain on the list of networks
4. To connect your Smart Chain wallet, tap on Connect and then select Trust Wallet
5. Scroll down and vote “Yes” on our proposal.
The amount of $TWT you hold in your wallet when you cast your vote will determine the weight of your vote. You must already hold $TWT. If you have no $TWT, or you buy $TWT now, your voting power will be 0.
When does voting close?
Voting closes on Friday, 4 December 2020.
Stay tuned for the results and as always, thanks for your support! | https://medium.com/kinblog/trust-wallet-community-considers-supporting-the-kin-migration-and-adding-support-for-solana-tokens-87540cf7c8df | ['Matt Hannam'] | 2020-11-24 23:35:04.008000+00:00 | ['Migration', 'Kin', 'Trust Wallet', 'Governance', 'Solana'] |
Voice Tech Trends in 2021 | 2020 has been a real “annus horribilis” for the entire global economy and for each one of us. The COVID-19 pandemic has affected every aspect of our lives, crashing businesses, and having a significant impact on Conversational AI.
AI-powered chatbots and virtual assistants were at the forefront of the fight against Covid, helped screen and triage patients, conducted surveys, provided vital Covid-related information, and more. And quite naturally, we saw more conversational agents in telemedicine — from FAQ-chatbots and virtual consultants to chatbot-therapists — that made health services more accessible to people who can’t leave their homes (and there were billions of them in the spring of 2020). Needless to say, conversational agents combating the virus are becoming a long-term trend, and next year we will see more solutions complying with the ‘less touching, more talking’ rule.
Despite the difficulties, consumer data show that hearables ownership among US adults has risen by about 23% over this period, while voice assistant use through hearables grew by 103%, from 21.5 million users in 2018 to 43.7 million in 2020. The data show that hearables and voice assistant adoption are complementary technology trends.
The voice and speech recognition market is expected to grow at a 17.2% compound annualized rate to reach $26.8 billion by 2025. From this perspective, we can conclude that voice UX has become a genuinely pragmatic innovation.
In the coming year, we expect to see more technically advanced smart devices and more support for users. Assistants are expected to become proactive and to distinguish between users in order to deliver personalized content. This especially applies to kids’ content, where certain restrictions are necessary.
In 2021 we will definitely see:
Voice assistant in a mobile app
Voice in mobile apps is the hottest trend right now, and it will stay so because voice is a natural interface.
Natural interfaces are about to make swiping and typing obsolete, displacing them entirely. Voice-powered apps increase functionality, saving us from complicated navigation, form-filling, overlaid menus, support calls, etc. They make it far easier for an end user to submit a request, even if they don’t know the exact name of the item they’re looking for or where to find it in the app’s menu. Pretty soon, users won’t just appreciate the greater functionality and friendliness of a voice-powered mobile app, they’ll expect it.
Outbound calls and smart IVR powered with an NLU
It’s not about hollow-hearted cold calls. These are smart solutions that will replace agents in call centers pretty soon, because they are effective, performant, and easy to customize. More and more companies are offering such services, and it seems like a reasonable guess that this is where phone-based sales are moving.
Voice Cloning
Or voice replication technology. Machine learning techniques and the development of GPU power are commoditizing custom voice creation and making synthetic speech more emotional, which makes computer-generated voices indistinguishable from real ones. You simply use recorded speech, and a voice conversion technology transforms your voice into another. Voice cloning is becoming an indispensable tool for advertisers, filmmakers, game developers, and other content creators.
Voice assistants in smart TVs
Smart TV is an obvious placement for a voice assistant — you don’t really want to look for that clicker and spend some more time clicking when you can use your voice to navigate. All you need to do is press and hold the microphone button and speak normally — there’s no active listening, and no need to shout your commands from across the room. With a smart assistant on your TV, you can easily browse the channels, search for the content, launch apps, change the sound mode, look for the information, and many more, depending on the TV model.
Smart displays
Last year, we said that smart displays were on the march because they, pretty much, expanded voice tech’s functionality. Now, the demand for these devices remains high because smart displays showed a huge improvement over the last year as more customers preferred them over regular smart speakers. In the third quarter of 2020, sales of smart displays hit 9.5 million units. In other words, they grew by 21% year-on-year. As a result, the market share of this product category rose to 26% from 22% last year. Therefore, we expect there will be more customized, more technologically advanced devices in 2021. Smart displays, like the Russian Sber portal or the Chinese smart screen Xiaodu, are already equipped with a suite of upgraded AI-powered functions, including far-field voice interaction, facial recognition, hand gesture control, and eye gesture detection. What’s next?
Voice for business
In 2021 we will definitely see more solutions that improve business processes: voice in meetings and voice for business intelligence. Voice assistants will be highly customized to business challenges and integrated with internal systems like CRM and ERP and with business processes. Furthermore, SMBs see enterprises making profits with voice, so in 2021 there will definitely be more companies looking for voice-first solutions.
Games and content
More games and more learning and entertainment content are expected, since tech companies like Amazon, Google, and other voice-first platform developers are pushing their builder tools to the market. Advertising via smart speakers and displays is a great chance to promote a product, so communications and entertainment majors like Disney Plus or Netflix partner up with new tech platforms to become first movers. 2021 will bring us more partnerships like these, and more games and education skills from third-party developers.
Voice in the gaming industry
When talking about Conversational AI and gaming, one cannot fail to mention text-to-speech (TTS), synthetic voices, and generative neural networks that help developers create spoken, dynamic dialogue. Today it takes a lot of time and effort to record voice lines for every character’s spoken dialogue within a game. In the upcoming year, developers will be able to use sophisticated neural networks to mimic human voices. In fact, looking a little ahead, neural networks will even be able to create appropriate NPC responses. Some game design studios and developers are working hard to create and embed this dialogue block into their tools, so pretty soon we will see the first games built with dynamic dialogues.
Multimodal approach
More and more developers come to the conclusion that a device ecosystem and multimodal approach are much needed. A voice assistant may simultaneously live on your mobile phone, smartwatches, smart home, and smart TV. Obviously, ‘1 assistant, few devices’ is the right approach here.
Interoperability
On the contrary, another idea could go pretty strong in 2021 — bundling multiple virtual assistants in a single device, if that’s practical for a user. A year ago, Amazon launched a project that could allow Alexa to be bundled together with other virtual assistants in a single device. It made sense, for customers could choose which voice service would best support a particular interaction. And although this could be a great marketing moment for Amazon, which could gather smaller marketeers; some of the largest players in the space like Google, Apple, and Samsung abandoned the idea for obvious reasons. Still, we’ll see where this idea goes pretty soon. | https://medium.com/voiceui/voice-tech-trends-in-2021-34dfbf726a6d | ['Anna Prist'] | 2020-12-22 14:23:21.489000+00:00 | ['Trends', 'Tech', 'AI', 'Technology', 'Artificial Intelligence'] |
EYE-POPPING JAMES HARREN COVER FOR THE WEATHERMAN REVEALED | PORTLAND, OR, 07/19/2018 — Image Comics is pleased to reveal a jaw-dropping, limited THE WEATHERMAN #3 cover featuring eye-popping artwork by James Harren (RUMBLE), with colors by Nathan Fox.
In THE WEATHERMAN #3, Nathan and Cross are cut off, on the run, and hunted by The Marshal, as they try to stay alive long enough to find Nathan’s lost memory and the key to stopping another world-ending attack. But they’ll have to survive each other first…
THE WEATHERMAN #3 hits stores on Wednesday, August 15th. The final order cutoff for comics retailers is Monday, July 23rd.
WEATHERMAN #3 Cover A by Fox — JUN180272
WEATHERMAN #3 Cover B by Martin — JUN180273
WEATHERMAN #3 Cover C by Harren, colors by Fox (Limited) — JUN188042
###
ABOUT IMAGE COMICS
Image Comics is a comic book and graphic novel publisher founded in 1992 by a collective of bestselling artists. Image has since gone on to become one of the largest comics publishers in the United States. Image currently has six individuals on the Board of Directors: Robert Kirkman, Erik Larsen, Todd McFarlane, Marc Silvestri, Jim Valentino, and Eric Stephenson. It consists of five major houses: Todd McFarlane Productions, Top Cow Productions, Shadowline Comics, Skybound Entertainment, and Image Central. Image publishes comics and graphic novels in nearly every genre, sub-genre, and style imaginable. It offers science fiction, fantasy, romance, horror, crime fiction, historical fiction, humor and more by the finest artists and writers working in the medium today. For more information, visit www.imagecomics.com. | https://medium.com/comicbookindustry/eye-popping-james-harren-cover-for-the-weatherman-revealed-92a1b8018681 | ['Mike Gagnon'] | 2018-07-24 16:03:07.606000+00:00 | ['Comics', 'Image Comics', 'Press Release', 'Comic Books'] |
6 Christmas Movies You Might Have Missed | 1. The Family Man (2000)
This list starts as all good lists should, with a movie featuring Nicolas Cage. Say what you will about his odd career choices and the neverending stream of bad action flicks on his resume, but the man is a decent actor when the material is good and has a true talent for comedy (not just when he’s making fun of himself).
I remember seeing The Family Man with my family when it was released in theaters (20 years ago?!) and enjoyed it so much that we have watched it repeatedly around Christmas in the intervening years. A modern takeoff on It’s A Wonderful Life (best movie ever…I will fight you on it), the movie thankfully manages to be a reimagining without trying to be a full-fledged remake, and features solid performances from other A-List stars like Teá Leoni, Jeremy Piven and Don Cheadle. Briefly, the premise is that Nicolas Cage is a self-centered businessman who chased a career over marrying the girl of his dreams (Leoni), but is given a Christmas Eve glimpse of what his life would be like if he had chosen love over money. I’m sure some people think this movie is too schmaltzy, but if you don’t enjoy a little schmaltz, you probably shouldn’t be reading a list about Christmas movies!
Streamable on HBO Max
2. The Lemon Drop Kid (1951)
This movie is a fairly run-of-the-mill 1940s-50s Bob Hope comedy (if you’re familiar with that genre), but the plot centers around Christmastime and how a con man (Hope) enlists other local hoodlums to dress up like Salvation Army-style Santas to raise the money he needs to get a mobster to spare his life.
While this movie definitely won’t change your life and might not be worth annual viewings, it is notable for introducing the song “Silver Bells” for the first time ever, which you can see in the clip below. You will probably recognize William Frawley (Fred Mertz from I Love Lucy) as a curmudgeonly Claus. My wife and I watched this movie while we worked on a Christmas puzzle, and it served as festive and amusing background entertainment.
Streamable on YouTube
3. The Man Who Invented Christmas (2017)
I remember having low expectations and very young children when this movie came out, so I knew there was no way I would go see it in a theater. When I noticed it on Amazon Prime this year, I made a mental note to check it out when the time was right, and I’m so glad I did! I love a good retelling of A Christmas Carol, and this one offers a unique spin on that theme, as it focuses on Charles Dickens’ efforts to write the novella in 1840s London.
Much like Scrooge is visited by three spirits, Dickens is visited by Scrooge himself (inhabited delightfully by Christopher Plummer) as he tackles writer’s block, family issues and the looming deadline for his manuscript. As both a big fan of all things Christmas and a writer myself, I really enjoyed this one.
Streamable on Amazon Prime
4. Klaus (2019)
I happened to have a Netflix trial last Christmas, so my wife and I thought we would give the newest animated Christmas flick a try. I’m so glad we did! The biggest draw for this one is the animation, and it is a true crime that it lost at the Oscars to Toy Story 4. The heartwarming plot offers an origin story for Santa Claus, fueled by great voiceover performances from celebs like Jason Schwartzman, J.K. Simmons, Joan Cusack and Norm MacDonald. It’s definitely worth your time!
Streamable on Netflix
5. Trapped in Paradise (1994)
I understand if you’re questioning my sanity, but yes, Virginia, there is another Nicolas Cage movie on my list. Trapped in Paradise is everything you could want from a 1990s screwball romantic comedy. Nic Cage is the straight man in a trio of brothers (rounded out by Jon Lovitz and Dana Carvey at the height of their SNL-era powers) who decide to rob a bank on Christmas Eve in the small town of Paradise, Pennsylvania.
My family rented this movie from Blockbuster when I was a kid, and it has been viewed nearly annually since then. We quote it frequently and I still laugh heartily at so many scenes. Sure, the plot is pretty bonkers and the love interest for Cage develops literally out of nowhere, but it’s all worth it to watch the three brothers interacting and enjoy this 1990s time warp. Florence Stanley also steals each of her scenes as their foul-mouthed mother.
Streamable on HBO Max
6. Arthur Christmas (2011)
I watched this movie for the first time this year and can’t believe that I never saw it sooner. I vaguely remember when it was released, but really had no opinion on it, and only tried it this year after a friend was raving about it. My friend was right! This delightful animated romp offers its own version of how the whole Santa Claus thing works and features several generations of the Claus family living together at a state-of-the-art North Pole. Overseen by the current Santa Claus’ self-important older son, the elves use all manner of inventive technology to optimize Christmas Eve gift delivery. The film focuses on Arthur, Santa’s starry-eyed-but-hapless younger son, whose main job is answering letters to Santa and staying out of the way of the Christmas operation — until one gift accidentally goes undelivered and he makes it his mission to deliver the present before sunrise Christmas morning.
I fell in love with the world this movie creates within the opening two minutes, and my admiration grew as it shifted into high gear. The script is hilarious, the vocal performances are wonderful and the numerous sight gags and hidden jokes demand a second viewing to take it all in. I could see this one becoming a somewhat annual tradition for me, especially when my kids are old enough to watch.
Rent on Amazon | https://medium.com/@mattpaolelli/6-christmas-movies-you-might-have-missed-81a06271aae9 | ['Matt Paolelli'] | 2020-12-27 22:52:22.774000+00:00 | ['Movie Review', 'Christmas Movies', 'Christmas', 'Movies', 'Movies To Watch'] |
Data science collaboration: Why it’s often difficult and how cloud services can help | In these strange times, we’ve been forced to adapt our ways of working to accommodate our new reality of isolation. It’s difficult for those who are used to sitting side-by-side, pair programming the day away. However, there are some silver linings to be found. Society may become more flexible when it comes to working from home after this is all over. We could all do with a little more flexibility around our working hours and location. Furthermore, we will have found better ways of collaborating, with greater resilience in what we create, better integrated systems, and collaborative practices which will last beyond Covid-19.
When I look back, my journey in becoming a data scientist was a hodgepodge of experiences. My first programming language was VBA. I learned R mostly through Stack Overflow. I ran too many focus groups to remember. I almost got a PhD in graph theory. I’ve never officially taken a computer science class and now I code in Python most days of the week. There are many other data scientists like me, and many who aren’t.
I’ve worked in multiple consulting firms, startups, a food unicorn, and one of the world’s biggest tech companies. There’s one thing that I see over and over: collaboration struggles, both amongst data scientists and between data science and engineering.
The reason is this: data scientists come from a really wide variety of backgrounds. Data touches everything.
In one team, I worked with an astrophysicist and rocket scientist (he literally built rockets for his country’s space program). In another team, I worked with an MD-PhD with little coding experience and a backend engineer who coded only using VIM and didn’t believe in IDEs. My background is in behavioral studies. And we all called ourselves data scientists.
How I imagine my former colleague’s past life
As much as data bonded us together, our different experiences and processes drove us apart. We were used to doing things “the way we’d always done it” and this meant for some of us, adhering to the git flow bible. For others, it meant writing 10,000 lines of code in a single python file on one’s local machine.
We argued over speed (yes, writing 10,000 lines of code in a single file technically is faster than splitting it into folders and files) and resilience (but no, it’s bad practice to store that code on a laptop that you lost yesterday, plus no one can read your code). We argued about security (no, you can’t have admin access to install this random package you found in the depths of the web which hasn’t been updated since 1996). Most of all, we argued about working together (“Pair programming is a waste of time!” But you want to spend an extra week explaining it to me when it took you two days to write?). | https://medium.com/swlh/data-science-collaboration-why-its-often-difficult-and-how-cloud-services-can-help-a71f1fbcfee3 | ['Cindy Weng'] | 2020-05-22 12:34:00.025000+00:00 | ['Azure', 'Collaboration', 'Machine Learning', 'Cloud Computing', 'Data Science'] |
What Raising $20 Million in Venture Capital Taught Me About Entrepreneurship | What Raising $20 Million in Venture Capital Taught Me About Entrepreneurship
Fairfax Media—Getty Images
The Entrepreneur Insiders network is an online community where the most thoughtful and influential people in America’s startup scene contribute answers to timely questions about entrepreneurship and careers. Today’s answer to the question “How do you stay inspired to run a business?” is written by Hari Ravichandran, founder and CEO of Endurance International Group.
The thing that inspires me to run my business is finding ways to overcome challenges and move the business forward — during every phase. When you start a business, you’re faced with very different challenges than when you begin to grow and achieve more scale.
Initially, the key things you need to keep in mind are developing a sustainable business model that can get traction with customers, securing enough revenue to pay your bills, and building a foundation for long-term success.
As you grow, you’ll have to look more closely at how your company should evolve to stay competitive, how you should define and create the culture of the company, and how to keep everyone in the company — including employees, partners, investors, and customers — moving in the same direction.
In 1997, I started a company called BizLand.com with my own seed money. Our goal was to help businesses get online. We had three offerings: web hosting, an e-commerce plug-in tool, and a shopping cart. What started as a small consulting business quickly grew much larger after we raised $20 million in venture capital and hired around 100 people.
However, in 2000, things started to take a turn. Our business model was based on ad-supported revenue, so when the dot-com bust hit that year, our revenue declined as the advertising market went down. Suddenly, our plan for a sustainable business dried up along with the ad-based business model.
With declining revenue and no way to pay employees or our business expenses, we were faced with a tough choice: reinvent the business or shut down for good. Because I believed that there was a market for our product and services, I was inspired to find a way to save the business rather than closing up shop. In late 2001, we reduced the team to 14 employees and were able to convert about 2% of our customers to a subscription-based model. We broke even in 2002 and renamed the company Endurance International Group in 2003.
Although this reset was a challenging experience, it ultimately set us up to achieve more scale and allowed us to ramp up to what Endurance has become today: a publicly traded company that employs more than 3,800 people in 15 offices globally.
These experiences have taught me to take one day at a time and focus on the big picture. Have a strategy in place that you believe in, and one that your team can embrace. Don’t let anything deter you from reaching your goal, even if you have to adapt the strategy along the way. Pushing myself to tackle problems and embrace risks is what continually keeps me motivated to grow the business.
If you’d like to read more articles like this, recommend this one by clicking the heart below. | https://insiders.fortune.com/what-raising-20-million-in-venture-capital-taught-me-about-entrepreneurship-7ed99b023a19 | ['Hari Ravichandran'] | 2017-04-27 18:30:25.644000+00:00 | ['Business', 'Venture Capital', 'Entrepreneurship', 'Entrepreneur', 'Startup'] |
S1/E5 : Beyond Oak Island Season 1, Episode 5 <Full Stream> | WATCH FULL EPISODES Beyond Oak Island Season 1 Episode 5 [ULTRA ᴴᴰ1080p]
⚜ — Official~Watch Streaming !! Beyond Oak Island (2020) Se1Ep5 Season 1 Episode 5 : 5 — Full Episodes
Show Info
Name Episode : Deep Water Gold
Network : History
Genres : Reality
🔴 Now Streaming :: https://Tvmoon.site/tv/113346-1-5/Beyond-Oak-Island.html
From pirates such as Blackbeard and outlaws like Jesse James, to Aztec gold, priceless historical artifacts from American history and sunken treasure ships, “Beyond Oak Island” digs deep into the many treasure quests across the globe, revealing amazing new details and clues from past searches — and in some cases, advancing the hunt.
⚜ Beyond Oak Island Season 1 Episode 5 [4KHD Quality]
⚜ Beyond Oak Island 1x5 Watch Full Episodes : Deep Water Gold History
⚜ Official Partners “[History]” TV Shows & Movies
▼ Watch Beyond Oak Island Season 1 Episode 5 Eng Sub ▼
Link WATCH Eps .1
⚜ — 360p : CLICK HERE
⚜ — 480p : CLICK HERE
⚜ — 720p : CLICK HERE
Beyond Oak Island S1E5 Streaming Online On History
Beyond Oak IslandBeyond Oak Island 1x5 ,Beyond Oak Island S01E5,Beyond Oak Island History,Beyond Oak Island Cast,Beyond Oak Island Season 1,Beyond Oak Island Episode 5,Beyond Oak Island Premiere,Beyond Oak Island New Season,Beyond Oak Island Full Episodes,Beyond Oak Island Watch Online,Beyond Oak Island Full HD,Beyond Oak Island Season 1 Episode 5,Watch Beyond Oak Island Season 1 Episode 5 Online
❖Enjoy And Happy Watching❖
Within the many cinematic tales that have been produced, legal drama films have certainly been up there; hitting viewers with heated (and sometimes poignant) narratives that showcase a variety of multi-faceted viewpoints that deliver the truth and unmask the falsehood of the system. From the jail cells of a prison to the presiding courtrooms, legal dramas display plenty of human emotions of the individuals; projecting tales of injustice and who is really to blame for the wrongdoings, as well as demonstrating the views of the case on today’s society (i.e. social standing, race, religion, gender, etc.). Of course, Hollywood has produced many legal / courtroom tales that have demonstrated such cinematic-level feature films, including several memorable ones like 1957’s 12 Angry Men, 1962’s To Kill a Mockingbird, 1992’s A Few Good Men, 1993’s Philadelphia, 1996’s Primal Fear, 2019’s Dark Waters, and many others. Now, Warner Bros. Pictures and director Destin Daniel Cretton present the latest legal drama endeavor with the film Just Mercy; based on the biographical memoir “Just Mercy: A Story of Justice and Redemption” by Bryan Stevenson. Does the movie find strength within its story, or does it get lost within the legal courtroom narrative?
✓ I do not own this song or the image; all credit goes to the owners,
It’s so awesome. Subscribe to my channel and share it with your friends! See it for more videos! I want to say ‘thank you’ for being a friend!
A television show (often simply TV show) is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are typically placed between shows. Television shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings.
THE STORY
After graduating from Harvard, Bryan Stevenson (Michael B. Jordan) forgoes the standard opportunities of seeking employment from big and lucrative law firms; deciding to head to Alabama to defend those wrongfully condemned, with the support of local advocate Eva Ansley (Brie Larson). One of his first, and most poignant, cases is that of Walter McMillian (Jamie Foxx), who, in 1987, was sentenced to die for the notorious murder of an 18-year-old girl in the community, despite a preponderance of evidence proving his innocence and a singular testimony against him by an individual that doesn’t quite seem to add up. Bryan begins to unravel the tangled threads of McMillian’s case, which becomes embroiled in a relentless labyrinth of legal and political maneuverings and the overt, unabashed racism of the community as he fights for Walter’s name and for others like him.
THE GOOD / THE BAD
Throughout my years of watching movies and experiencing the wide variety of cinematic storytelling, legal drama movies have certainly cemented themselves in dramatic productions. As I stated above, some have better longevity of being remembered, but most showcase plenty of heated courtroom battles of lawyers defending their clients and unmasking the truth behind the claims (be it the wrongfully incarcerated, discovering who did it, or uncovering the shady dealings behind large corporations). Perhaps my first legal drama was 1994’s The Client (I was a little young to get all the legality in the movie, but still managed to get the gist of it all). My second one, which I loved, was probably Primal Fear, with Norton delivering my favorite character role. Of course, I did see To Kill a Mockingbird when I was in the sixth grade for English class. Definitely quite a powerful film. And, of course, let’s not forget Philadelphia and what it meant / stood for. Plus, Hanks and Washington were great in the film. All in all, while not the most popular genre out there, legal drama films still provide a plethora of dramatic storytelling to capture the attention of moviegoers of truth and lies within a dubious justice system.
Just Mercy is the latest legal crime drama feature and the whole purpose of this movie review. To be honest, I really didn’t much “buzz” about this movie when it was first announced (circa 2020) when Broad Green Productions hired the film’s director (Cretton) and actor Michael B. Jordan in the lead role. It was then eventually bought by Warner Bros (the films rights) when Broad Green Productions went Bankrupt. So, I really didn’t hear much about the film until I saw the movie trailer for Just Mercy, which did prove to be quite an interesting tale. Sure, it sort of looked like the generic “legal drama” yarn (judging from the trailer alone), but I was intrigued by it, especially with the film starring Jordan as well as actor Jamie Foxx. I did repeatedly keep on seeing the trailer for the film every time I went to my local movie theater (usually attached to any movie I was seeing with a PG rating and above). So, suffice to say, that Just Mercy’s trailer preview sort of kept me invested and waiting me to see it. Thus, I finally got the chance to see the feature a couple of days ago and I’m ready to share my thoughts on the film. And what are they? Well, good ones….to say the least. While the movie does struggle within the standard framework of similar projects, Just Mercy is a solid legal drama that has plenty of fine cinematic nuances and great performances from its leads. It’s not the “be all to end all” of legal drama endeavors, but its still manages to be more of the favorable motion pictures of these projects.
Just Mercy is directed by Destin Daniel Cretton, whose previous directorial works include such movies as Short Term 12, I Am Not a Hipster, and The Glass Castle. Given his past projects (consisting of shorts, documentaries, and a few theatrical motion pictures), Cretton makes Just Mercy his most ambitious endeavor, with the director getting the chance to flex his directorial muscles on a legal drama film, which (like I said above) can manage to evoke plenty of human emotions within its undertaking. Thankfully, Cretton is up to the task and never feels overwhelmed with the movie; approaching (and shaping) the film with respect and a touch of sincerity by speaking to the humanity within its characters, especially within the lead characters of Stevenson and McMillian. Of course, legal dramas usually do shine their cinematic lens on these respective characters (be it the accused / defendant and his attorney), so it’s nothing original. However, Cretton does make for a compelling drama within the feature; speaking to some great character drama within its two main lead characters; staging plenty of moments of these two individuals that ultimately work, including some of the heated courtroom sequences.
Like other recent movies (i.e. Brian Banks and The Hate U Give), Cretton makes Just Mercy have an underlining thematical message of racism and corruption that continues to play a part in the US….to this day (incredibly sad, but true). So, of course, the correlation and overall relatively between the movie’s narrative and today’s world is quite crystal-clear right from the get-go, but Cretton never gets overzealous / preachy within its context; allowing the feature to present the subject matter in a timely manner and doesn’t feel like unnecessary or intentionally a “sign of the times” motif. Additionally, the movie also highlights the frustration (almost harsh) injustice of the underprivileged face on a regular basis (most notable those looking to overturn their cases on death row due to negligence and wrongfully accused). Naturally, as somewhat expected (yet still palpable), Just Mercy is a movie about seeking the truth and uncovering corruption in the face of a broken system and ignorant prejudice, with Cretton never shying away from some of the ugly truths that Stevenson faced during the film’s story.
Plus, as a side-note, it’s quite admirable for what Bryan Stevenson (the real-life individual) did for his career, with him as well as others that have supported him (and the Equal Justice Initiative) over the years and how he fought for and freed many wrongfully incarcerated individuals that our justice system has failed (again, the poignancy behind the film’s themes / message). It’s great to see humanity being shined and showcased to seek the rights of the wronged and to dispel a flawed system. Thus, whether you like the movie or not, you simply can not deny that truly meaningful job that Bryan Stevenson is doing, which Cretton helps demonstrate in Just Mercy. From the bottom of my heart…. thank you, Mr. Stevenson.
In terms of presentation, Just Mercy is a solidly made feature film. Granted, the film probably won’t be remembered for its visual background and theatrical setting nuances or even nominated in various award categories (for presentation / visual appearance), but the film certainly looks pleasing to the eye, with the attention of background aspects appropriate to the movie’s story. Thus, all the usual areas that I mention in this section (i.e. production design, set decorations, costumes, and cinematography) are all good and meet the industry standard for legal drama motion pictures. That being said, the film’s score, which was done by Joel P. West, is quite good and deliver some emotionally drama pieces in a subtle way that harmonizes with many of the feature’s scenes.
There are a few problems that I noticed with Just Mercy that, while not completely derailing, just seem to hold the feature back from reaching its full creative cinematic potential. Let’s start with the most prevalent point of criticism (the one that many will criticize about), which is the overall conventional storytelling of the movie. What do I mean? Well, despite the strong case that the film delves into a “based on a true story” aspect and into some pretty wholesome emotional drama, the movie is still structed into a way that it makes it feel vaguely formulaic to the touch. That’s not to say that Just Mercy is a generic tale to be told as the film’s narrative is still quite engaging (with some great acting), but the story being told follows quite a predictable path from start to finish. Granted, I never really read Stevenson’s memoir nor read anything about McMillian’s case, but then I still could easily figure out how the movie was presumably gonna end…. even if the there were narrative problems / setbacks along the way. Basically, if you’ve seeing any legal drama endeavor out there, you’ll get that same formulaic touch with this movie. I kind of wanted see something a little bit different from the film’s structure, but the movie just ends up following the standard narrative beats (and progressions) of the genre. That being said, I still think that this movie is definitely probably one of the better legal dramas out there.
This also applies to the film’s script, which was penned by Cretton and Andrew Lanham, which does give plenty of solid entertainment narrative pieces throughout, but lacks the finesse of breaking the mold of the standard legal drama. There are also a couple parts of the movie’s script handling where you can tell that what was true and what fictional. Of course, this is somewhat a customary point of criticism with cinematic tales taking a certain “poetic license” when adapting a “based on a true story” narrative, so it’s not super heavily critical point with me as I expect this to happen. However, there were a few times I could certainly tell what actually happen and what was a tad bit fabricated for the movie. Plus, they were certain parts of the narrative that could’ve easily fleshed out, including what Morrison’s parents felt (and actually show them) during this whole process. Again, not a big deal-breaker, but it did take me out of the movie a few times. Lastly, the film’s script also focuses its light on a supporting character in the movie and, while this made with well-intention to flesh out the character, the camera spotlight on this character sort of goes off on a slight tangent during the feature’s second act. Basically, this storyline could’ve been removed from Just Mercy and still achieve the same palpability in the emotional department. It’s almost like the movie needed to chew up some runtime and the writers to decided to fill up the time with this side-story. Again, it’s good, but a bit slightly unnecessary.
What does help overlook (and elevate) some of these criticisms is the film’s cast, which are really good and definitely helps bring these various characters to life in a theatrical /dramatic way. Leading the charge in Just Mercy is actor Michael B. Jordan, who plays the film’s central protagonist role of Bryan Stevenson. Known for his roles in Creed, Fruitvale Station, and Black Panther, Jordan has certain prove himself to be quite a capable actor, with the actor rising to stardom over the past few years. This is most apparent in this movie, with Jordan making a strong characteristically portrayal as Bryan; showcasing plenty of underlining determination and compelling humanity in his character as he (as Bryan Stevenson) fights for the injustice of those who’s voices have been silenced or dismissed because of the circumstances. It’s definitely a strong character built and Jordan seems quite capable to task in creating a well-acted on-screen performance of Bryan. Behind Jordan is actor Jamie Foxx, who plays the other main lead in the role, Walter McMillian. Foxx, known for his roles in Baby Driver, Django Unchained, and Ray, has certainly been recognized as a talented actor, with plenty of credible roles under his belt. His participation in Just Mercy is another well-acted performance that deserve much praise as its getting (even receiving an Oscar nod for it), with Foxx portraying Walter with enough remorseful grit and humility that makes the character quite compelling to watch. Plus, seeing him and Jordan together in a scene is quite palpable and a joy to watch.
The last of the three marquee main leads of the movie is the character of Eva Ansley, the director of operations for EJI (i.e. Stevenson’s right-hand employee / business partner), who is played by actress Brie Larson. Up against the characters of Stevenson and McMillian, Ansley is the weakest of the three main leads; presented as a supporting player in the movie, which is perfectly fine as the character gets the job done (so to speak) throughout the film’s narrative. However, Larson, known for her roles in Room, 21 Jump Street, and Captain Marvel, makes less of an impact in the role. Her acting is fine and everything works in her portrayal of Eva, but nothing really stands out in her performance (again, considering Jordan and Foxx’s performances), and the part really could’ve been played by another actress and achieved the same goal.
The rest of the cast, including actor Tim Blake Nelson (The Incredible Hulk and O Brother, Where Art Thou) as incarcerated inmate Ralph Meyers, actor Rafe Spall (Jurassic World: Fallen Kingdom and The Big Short) as legal attorney Tommy Champan, actress Karan Kendrick (The Hate U Give and Family) as Minnie McMillan, Walter’s wife, actor C.J. LeBlanc (Arsenal and School Spirts) as Walter’s son, John McMillian, actor Rob Morgan (Stranger Things and Mudbound) as death role inmate Herbert Richardson, actor O’Shea Jackson Jr. (Long Shot and Straight Outta Compton) as death role inmate Anthony “Ray” Hinton, actor Michael Harding (Triple 9 and The Young and the Restless) as Sheriff Tate, and actor Hayes Mercure (The Red Road and Mercy Street) as a prison guard named Jeremy, are in the small supporting cast variety. Of course, some have bigger roles than others, but all of these players, which are all acted well, bolster the film’s story within the performances and involvement in Just Mercy’s narrative.
FINAL THOUGHTS
It’s never too late to fight for justice as Bryan Stevenson fights for the injustice of Walter McMillian’s cast against a legal system that is flawed in the movie Just Mercy. Director Destin Daniel Cretton’s latest film takes a stance on a poignant case; demonstrating the injustice of one (and by extension those wrongfully incarcerated) and wrapping it up in a compelling cinematic story. While the movie does struggle within its standard structure framework (a sort of usual problem with “based on a true story” narrations) as well as some formulaic beats, the movie still manages to rise above those challenges (for the most part), especially thanks to Cretton’s direction (shaping and storytelling) and some great performances all around (most notable in Jordan and Foxx). Personally, I liked this movie. Sure, it definitely had its problem, but those didn’t distract me much from thoroughly enjoying this legal drama feature. Thus, my recommendation for the film is a solid “recommended”, especially those who liked the cast and poignant narratives of legality struggles and the injustice of a failed system / racism. In the end, while the movie isn’t the quintessential legal drama motion picture and doesn’t push the envelope in cinematic innovation, Just Mercy still is able to manage to be a compelling drama that’s powerful in its story, meaningful in its journey, and strong within its statement. Just like Bryan Stevenson says in the movie….” If we could look at ourselves closely…. we can change this world for the better”. Amen to that! | https://medium.com/beyond-oak-island-se-1-episode-5-4khd-quality/s1-e5-beyond-oak-island-sries-1-episode-5-full-stream-41004e047a7d | ['Cinta Tai'] | 2020-12-15 17:25:03.753000+00:00 | ['Reality TV', 'Reality', 'Drama'] |
Round 2: Win More Games, More Often | Round 2: Win More Games, More Often
by: Live Better
Last week, the Wake Up Wednesday newsletter was titled, “Make the “Game of Life” A Game Worth Winning.” We asked a few questions, such as:
“What does “winning” at your job look like?”
“What does it feel like? Who is there alongside you? Is it about money, notoriety, internal fulfillment, legacy, or a combination?”
“What will success cost you?”
“What is the cost of making money? Is it time away from your friends, family, or places you want to visit?
In the end, we decided that the “strategy should be to set up games worth playing. The end doesn’t justify the means; the means IS life. Set up your “game” so that BOTH the end and the means make sense, provide fulfillment, and leave you feeling successful. You will not find success without sacrifice; just make sure you’re sacrificing the right things for you.”
This week, we’ll be discussing the art of actually “winning” at those games. If we build on the premise of last week, that the game should be set up so that we get something out of it (e.g. fulfillment), regardless of outcome, this week is about the strategy to win those games.
We’re really talking about breaking down these high-level, esoteric ideas of success into daily action items. We’ll begin with the last question we asked last week:
“Do you have your health? Without it, no game is worth playing.”
Here are some quick actionable items as you move through these holiday weeks:
Take more breaks, more often. Your health is the single most important “to-do” list item you seek to “check off” every day. Stop being in such a rush. 1) Amazon delivers in like 2 minutes. 2) It’s no fun to collapse on a day of celebration while everyone else is enjoying themselves (note: this is the funny thing about vacation — we tend to work extra hard before it starts and completely waste the first few days being exhausted). Also, check in with yourself. How’re you doing, really? Don’t buy things you can’t afford. Financially, we get ourselves into a bit of trouble around this time of year. We knew you’d be buying gifts for all these people (i.e. you should have been saving…), but somehow we seem to get blindsided. Give them an “IOU” of your time and make it special when you can see them again. Spending too much isn’t winning, it’s irresponsible. So, send wine. Really, any wine will do. Check in with friends every day. Everyone will be on their phones without work to do. People appreciate the text, call, or my personal favorite — the voice memo. Tell your people you love them; there are far too many stories of life cut short this year. If you are suffering, look outside yourself and help someone else. This might be the most difficult item on the list, but the most impactful. Being in service to others is the name of the (ultimate) game. Leave people better than you found them. It won’t take long and you won’t have to look far to find someone worse off than you (no matter how bad we sometimes feel). Life this year has been absurdly difficult, and the holidays aren’t the Hallmark channel vibe for every family — these are the people in your life that deserve your attention to item #3 above.
We wish you a safe and very happy holiday season from the family here at Live Better.
Tell us how you’re setting up your daily wins! Tag us on Instagram in your story with #livebettercollective. We love to see people doing life on their terms, always in pursuit of better.
Have the best day ever. | https://medium.com/@livebetterco/round-2-win-more-games-more-often-7c621fd55455 | ['Live Better', 'Best Day Ever'] | 2020-12-26 19:10:03.100000+00:00 | ['Mindset', 'Best Day Ever', 'Routine', 'Health'] |
Know Your Customer | Know Your Customer
While it might seem glaringly obvious to many of us in the industry (the so-called veterans), it may not be that obvious to everyone.
That is, no matter what rankings you achieve or how much you spend on PPC in a month, you should really only think about one thing: your customers. 100% of your efforts should revolve around who your customer is.
In this article I offer a few tips to ensure that your marketing campaign revolves around your customer.
While I couldn’t attend Search Engine Strategies in New York this year (I went last year and it was an incredible experience), I was surprised and pleased to hear that there was an overriding theme emerging from the sessions, both from the SEMs presenting and from the engines themselves.
That theme is customer intent.
That means one must understand what the customer wants when they visit the search engine and ultimately click on a search result, whether it is paid or organic.
I started thinking about my clients over the years and some of the things they have said when I ask them, “What are you looking for in this campaign?”
Many of the responses include: “I want to be number 1 for [insert keyword here],” or “I want to lower my PPC costs by [insert value here],” or “I want to beat my competitor [insert competitor’s name here].”
What is the key element missing from these comments (and, I think, from many businesses in general)? The focus on the customer.
In all the years I’ve been asking the question “What are you looking for in this campaign?”, I could count on one hand the number of people who responded with, “I want to reach my target audience effectively.”
While it’s true that many businesses are beginning to realize this now, it hasn’t been that way for very long.
I think what is changing in today’s world is that companies are beginning to realize that search is big business. As a result, they have marketers working with SEM firms, not IT people. It is these marketing people who are asking the right questions or, in many cases, answering them the right way.
Here’s an example: I recently took part in a call with a client, and one of the first things they said was, “We have come up with seven unique personas for our site and want to target each one individually.”
What? You’ve already done the personas? We always do those. But guess what? They did an excellent job. After reviewing their persona information, I could see the persona. I knew what that person looked like, and what their intent was on the site.
And that is the most important consideration in today’s SEM world: you MUST know who your customer is. You MUST understand their intent if you are going to succeed.
For example, if your customers tend to have completed the research phase and are in the buying phase, don’t send them to a PPC landing page with product specs. Instead, send them to a landing page with pricing and shipping information.
Better yet, send them to the pricing/checkout page and give them free shipping! That will surely help boost the sale.
Also, in the event that you are finding an item particular page positioning exceptionally for a buy search term, attempt and locate a superior page to enhance for that term and de-streamline the item page so the buy page will rank higher.
This is the place where personas are incredibly useful. On the off chance that you can see your client, you can decide their expectation. Also, on the off chance that you can decide their purpose you can adequately make your whole showcasing effort around it.
For the most part what you will discover, as you become more private with that persona, is that you likely dont need to rank #1 naturally for an exceptionally serious term. You could most likely pull off over the overlay perceivability and still establish a connection.
Additionally, you will most likely find that the information on your objectives expectation can assist you with enhancing your offering procedures by cutting costly terms, performing day separating, or whatever else you need to do to guarantee that your site is obvious to them at the perfect time.
Many web crawler advertisers are discovering that the clients goal, gotten from a precise persona, is more important than anything.
Its more important than a #1 positioning. Its likewise more important than a costly PPC term.
Indeed, such information will probably affect the terms you use inside and out. While you may believe that the searcher will utilize certain terms, truth be told you may find that you are misguided base.
But it is not just search marketers who are concerned with intent. The search engines are also investing heavily in technology to help them figure out the intent of the searcher and serve the right results.
For example, I could be searching for "Manhattan," and the search engine needs to figure out what my intent is. Am I looking for city information? Am I hoping to find a hotel or book air travel? Or do I simply want a drink recipe?
Simple terms like this can mean many things, so the search engines are trying to use their technology to figure out what the searcher wants.
What's more, often when you perform a search and see a PageRank 3 site outranking a PageRank 6 site, this is why. It is because the search engine has tried to determine that intent and is therefore trying to surface the site that best suits that searcher.
But why would a search engine be so concerned with intent? It is quite simple. Right now most of us have Google set as our home page. But what does it take to change that to MSN or Yahoo? A few clicks of the mouse and you have just changed search results providers.
This is what scares the engines the most: that one of their competitors, or a completely new dark horse, will come along and win over users with technology that improves relevance for the user.
So, if you are planning your SEM campaign for 2006, my advice is to go back to the drawing board. First, determine your goals. Then, figure out who it is you are trying to reach and why they would want to use your product or service. If you can determine this intent, then you can effectively create an SEM campaign that will be both successful and cost-effective. | https://medium.com/@dominionglobaltech/know-your-customer-12fc1e4dc38a | ['Dominion Blog'] | 2020-12-23 09:12:11.314000+00:00 | ['Digital Marketing', 'Affiliate Marketing', 'Social Media', 'Business', 'Advertising'] |
Dear Mrs. Biden, why can’t you be hidin’, Your brain, and your title of Doctor. | Would the Wall Street Journal have had this attitude toward a male doctor of education, married to the next president?
I think not.
I imagine he would be lauded as a learned, respected and eminently qualified source of wise counsel.
“I think (misogyny) is like a disease that needs to be cured. And if we could eradicate polio, I don’t see why we can’t eradicate misogyny.”— Alan Alda.
If nothing else, this has inspired me to write more about the issues women face, going about their everyday lives. | https://medium.com/@wendyscottfromauckland/dear-mrs-biden-why-cant-you-be-hidin-your-brain-and-your-title-of-doctor-5d347007a6b5 | ['Wendy Scott'] | 2020-12-16 19:35:24.703000+00:00 | ['Feminism', 'Equality', 'Womens Rights', 'Diversity And Inclusion', 'Equal Rights'] |
Twitter Sentiment Analysis Dashboard On Event Management | In this blog post, we will see how to effectively use Syncfusion Dashboard and Data Integration Platforms in analyzing the sentiments of the attendees of a workshop through effective processing of live Twitter tweets.
Recently, Syncfusion had an opportunity to give a workshop on big data and dashboards on the cloud at the Microsoft campus in Chennai. It was an all-day event at which we encouraged the attendees to post tweets with two hashtags (#syncfusion, #dashboardcloud) with their honest feedback on the session and products on Twitter. Attendees from various organizations at varied levels came to this workshop. In it, we focused on providing introductory content for our Big Data and Data Integration Platforms and then showcased the data in the Syncfusion Dashboard Cloud. Finally, we showed them how effectively we can bring in data from Twitter in real time, process it in Spark, integrate it within SQL Server, and finally showcase it in the Dashboard Platform during the hands-on session.
Here, I am going to briefly explain the tweet processing, how it helped us gauge the sentiment of the attendees on the overall workshop, and the insights we gained as a feedback tool.
Steps involved:
Using Syncfusion Data Integration Platform:
1. Bring in real-time tweets with the hashtags #syncfusion and #dashboardcloud.
2. Clean and process the JSON data into the required flat schema.
3. Perform sentiment analysis on the tweeted text using Stanford CoreNLP.
4. Move the final processed data, along with the sentiment score, into a SQL database.
Using Syncfusion Dashboard:
5. Create a dashboard to showcase real-time Twitter sentiment analysis.
For steps 1 to 4, we will be defining a data-flow in the Data Integration Platform as shown in the following image.
Data flow in Data Integration Platform
Step 1: Use the Get twitter component to bring in real-time tweets with the hashtags #syncfusion and #dashboardcloud.
Ensure that you get the consumer key, consumer secret, access token, and access token secret from the Twitter developer site by referring to this guide before creating the dataflow. Refer to the following configurations, where you can provide your own hashtags under the Terms to filter on property.
Step 2: Data preparation — clean and process JSON data into required fields (attributes) using the processors — Evaluate Json Path and Update attribute.
Data preparation workflow
Evaluate Json Path is used to filter the fields like user details, tweeted text, created date, retweet details, language, friends count, followers count, favourites count, and location.
Note: These attribute and property names will be used as column names in the SQL table.
Property names as column in SQL table
In the update attribute processor,
We use a conditional expression to fetch the exact tweeted text based on the following conditions. We then add hashtags as an additional attribute. To learn more about tweets, refer to the Twitter documentation.
We also cleanse the data to extract a created date in a format suitable for building the dashboard.
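As a concrete illustration of that date cleansing step, Twitter delivers created_at in its classic string format, e.g. "Wed Oct 10 20:19:24 +0000 2018". A minimal Python sketch of the conversion (the target "YYYY-MM-DD HH:MM:SS" format is an assumption chosen to sort well in SQL and the dashboard, not something the Data Integration Platform mandates) could look like:

```python
from datetime import datetime

def clean_created_at(raw):
    """Parse Twitter's created_at string into a dashboard-friendly timestamp.

    Twitter sends e.g. 'Wed Oct 10 20:19:24 +0000 2018'; the dashboard is
    easier to build against 'YYYY-MM-DD HH:MM:SS'.
    """
    parsed = datetime.strptime(raw, "%a %b %d %H:%M:%S %z %Y")
    return parsed.strftime("%Y-%m-%d %H:%M:%S")

print(clean_created_at("Wed Oct 10 20:19:24 +0000 2018"))  # 2018-10-10 20:19:24
```

Inside the Data Integration Platform itself the same transformation is expressed with the Update Attribute processor rather than standalone Python.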
Step 3: Perform sentiment analysis on the processed tweet using Stanford CoreNLP.
We use the Execute Stream command processor to run the Python script with Stanford CoreNLP service to process the tweeted text and evaluate its sentiment (mood and the score).
To configure the environment to run Python sentiment analysis script within DIP, follow these steps:
Install Stanford NLP package from this location. Start the server using the following command.
C:\<installed location>\stanford-corenlp-full-2018-10-05> java -mx5g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -timeout 10000
Note: java in the command refers to the Java executable. If JAVA_PATH is already set, just run the above command; otherwise, use the full Java path in place of java in the command. Next, install the "pycorenlp" Python package using the command below.

pip install pycorenlp

We use the following Python script to perform sentiment analysis. It is a very basic one that uses CoreNLP's default settings to gauge sentiment.
from pycorenlp import StanfordCoreNLP
import sys
import re

nlp = StanfordCoreNLP('http://localhost:9000')

# handle the invalid characters in input data
text = re.sub('[^a-zA-Z0-9 \.]', '', sys.argv[1])

res = nlp.annotate(text,
    properties={
        'annotators': 'sentiment',
        'outputFormat': 'json',
        'timeout': 1000,
    })

for s in res["sentences"]:
    print("'%s', %s , %s" % (" ".join([t["word"] for t in s["tokens"]]),
        s["sentimentValue"], s["sentiment"]))
Save the previous code in a file named sentiment.py and provide this file's location in the command-line arguments for the Execute Stream Command processor, as depicted in the following screenshot.
Command Arguments: <python script file location> <tweeted text>
Command Path: Python exe (installed location)
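Because the processor simply captures the script's stdout, you can preview what it will emit without a running CoreNLP server by feeding the same print logic a hand-built response. The res dictionary below is a mocked example of the shape of CoreNLP's JSON output, not real server output:

```python
# Mocked CoreNLP response containing only the fields the script reads.
res = {
    "sentences": [
        {
            "tokens": [{"word": "Great"}, {"word": "workshop"}],
            "sentimentValue": "3",
            "sentiment": "Positive",
        }
    ]
}

for s in res["sentences"]:
    # Same format string as in sentiment.py above.
    line = "'%s', %s , %s" % (
        " ".join(t["word"] for t in s["tokens"]),
        s["sentimentValue"],
        s["sentiment"],
    )
    print(line)  # 'Great workshop', 3 , Positive
```

This one-line-per-sentence output is what the downstream Extract Text processor later parses for the mood and score.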
Step 4: Move the processed data along with sentiment score into a SQL table.
We use following processors in this step:
Extract Text — Extract the sentiment results from the Python script into an attribute. Update Attribute — Update sentiment and sentiment score attributes from the sentiment results. Attributes to Json — Create JSON out of the attributes (fields) we want to track in the dashboard. ConvertJsontoSQL — Convert the JSON string into SQL insert statements. PutSQL — Execute the insert statements generated.
Data flow — Moving predicted output to SQL server
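Under the hood, the AttributesToJSON, ConvertJsontoSQL, and PutSQL chain turns a flat record into a parameterized INSERT statement. A rough Python equivalent of that conversion (an illustrative sketch, not NiFi's actual implementation; the column names come from the tweetssentiment table defined next) is:

```python
def record_to_insert(table, record):
    """Build a parameterized INSERT from a flat dict, as ConvertJsontoSQL does."""
    cols = list(record)
    placeholders = ", ".join("?" for _ in cols)
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (table, ", ".join(cols), placeholders)
    return sql, [record[c] for c in cols]

sql, params = record_to_insert(
    "tweetssentiment",
    {
        "tweetid": 1,
        "tweetedtext": "Great session",
        "sentiment": "Positive",
        "sentimentscore": "3",
    },
)
print(sql)
# INSERT INTO tweetssentiment (tweetid, tweetedtext, sentiment, sentimentscore) VALUES (?, ?, ?, ?)
```

PutSQL then executes the generated statement with the bound parameter values against the configured connection pool.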
Note: Make sure you have created a SQL table (tweetssentiment) using the following query and create a controller service for it in the Data Integration Platform. For more details on controller settings, refer to our documentation.
CREATE TABLE [dbo].[tweetssentiment](
[tweetid] [bigint] NULL,
[userid] [bigint] NULL,
[username] [varchar](500) NULL,
[screenname] [varchar](500) NULL,
[tweetedtext] [varchar](500) NULL,
[language] [varchar](500) NULL,
[location] [varchar](500) NULL,
[created_at] [varchar](500) NULL,
[hashtag] [varchar](500) NULL,
[retweetcount] [int] NULL,
[favouritecount] [int] NULL,
[friendscount] [int] NULL,
[followerscount] [int] NULL,
[sentiment] [varchar](500) NULL,
[sentimentscore] [varchar](500) NULL
) ON [PRIMARY]
GO
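If you want to try the pipeline locally before provisioning SQL Server, a SQLite analogue of this table behaves closely enough for a smoke test. This is an illustrative sketch only: the types are loosened, since SQLite ignores varchar lengths, and the aggregate at the end mirrors the kind of sentiment breakdown the dashboard will chart:

```python
import sqlite3

# In-memory SQLite stand-in for the SQL Server tweetssentiment table.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE tweetssentiment (
        tweetid INTEGER, userid INTEGER, username TEXT, screenname TEXT,
        tweetedtext TEXT, language TEXT, location TEXT, created_at TEXT,
        hashtag TEXT, retweetcount INTEGER, favouritecount INTEGER,
        friendscount INTEGER, followerscount INTEGER,
        sentiment TEXT, sentimentscore TEXT)"""
)
conn.execute(
    "INSERT INTO tweetssentiment (tweetid, tweetedtext, sentiment, sentimentscore)"
    " VALUES (?, ?, ?, ?)",
    (1, "Great session", "Positive", "3"),
)
# The dashboard's sentiment breakdown is essentially this aggregate:
rows = conn.execute(
    "SELECT sentiment, COUNT(*) FROM tweetssentiment GROUP BY sentiment"
).fetchall()
print(rows)  # [('Positive', 1)]
```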
The data integration workflow can be scheduled in real time by setting the run schedule to "0 sec" so that it looks for input tweets constantly, or it can be scheduled at intervals.
Step 5: Final step is to create a Twitter sentiment analysis dashboard like the following in the Syncfusion Dashboard Cloud.
To learn the basics of creating a dashboard, refer to these links:
You can follow the steps covered in these links to create your own dashboards easily.
Twitter sentiment analysis dashboard
Conclusion:
With the final dashboard, we can understand that, overall, the experience of the workshop ended on a positive note. We also got valuable feedback asking us to slow down and suggesting a few more aspects to cover.
This is just one of the sample use cases gauging sentiments using Dashboard Cloud and using the Data Integration Platform. You can further extend usage of this to any number of social media sites, online feedback surveys for any event, or you can track TV shows for positive remarks on relevant sites like IMDB, Twitter, or Facebook. When you think out of the box, you can always find any number of practical applications to review the sentiment for products, events, TV shows, companies, and industries. You can then make better-informed decisions to improve your services, product recommendations, and other aspects of your business.
Apart from sentiment analysis, you can also run other machine-learning algorithms using extensible Python libraries and showcase the results in a dashboard with more insights for decision-makers.
References:
You can find the Data Integration Platform sample template and the Python script file in this GitHub location.
Tweets and access tokens:
Stanford CoreNLP: https://stanfordnlp.github.io/CoreNLP/
Data Integration Platform:
Dashboard Cloud: | https://medium.com/syncfusion/twitter-sentiment-analysis-dashboard-on-event-management-fd97c3154671 | ['Rajendran S P'] | 2019-06-03 07:02:34.131000+00:00 | ['Dashboard', 'Python', 'Data Integration', 'NLP', 'Sentiment Analysis'] |
What is Computer Graphics? | CG In Action
I believe it is the time to see some of the stuff we do in CG before this post becomes boring.
Let’s take up an exercise of modeling and rendering a cube on a screen. When I say modeling, it means storing the definition of the cube (its representation: length, breadth and height), and rendering means drawing that 3D cube on a 2D screen which we can see.
GOAL — Generate a realistic drawing of a cube
But before we can actually begin, there are some questions we must address about modeling (how do we describe the cube?) and rendering (how do we visualize a 3D object in 2D?).
Part I: Modeling — How do we describe the Cube
The problem in modeling a cube is how we save a 3D object to disk. For that, we make a few assumptions to begin with:
The cube’s centre is at the origin in 3D space (0,0,0)
The cube’s length, breadth and height are 2 units
We are looking at the cube such that our eyes are aligned to the x-y-z-axis
But these assumptions are not enough for complex objects like a face or a car, so we capture the coordinates of the cube's vertices.
Coordinates of the cube vertices
But this is still not enough: if we draw these coordinates on a 2D canvas, we will not be able to perceive the shape of the cube, so we also capture the edges of the cube.
Edges of the cube
We now have a digital description of the cube: vertices and edges. We can encode this data as binary to save it to memory, and hence the problem of modeling is solved.
Part II: Rendering — How we visualize a 3D object in 2D
Rendering is tricky, you will see why in a moment but for starters how do we represent a 3D object on a 2D-Canvas, we have two options:-
Opt 1 — Throw away one axis, i.e. the z-axis, but we will see only a square in that case, and we certainly don't need that — Rejected
Opt 2 — Map 3D vertices to 2D points in the 2D plane, and connect the 2D points with straight lines…
Okay but how??
So my dear readers here come projections and matrix…and from now things will start getting little freaky, but we will not go all the way in. We will see just a glimpse of it and gradually will turn up the heat
Perspective Projection — Have you ever wondered why objects get smaller as we move further away from them? We have all stumbled upon this question at some point in our lives. This is one of the key concepts in CG, and we will try to understand it now.
In the image below you can see the concept on which the camera works, well at least the old cameras where you have to keep a film which you have to get processed later to get the photograph…
This is how the camera captures images
Here you can see we have a tree (A 3D Object), and on the left side of the image, we have our simple camera. Lights enter in the camera from the pinhole and a 2d image is created on the projection screen. The longer you keep the pinhole open more exposure you will get (all that camera stuff).
So now let's turn our simple camera into not so simple and overlay some geometry on it.
I have intentionally kept only one ray starting from the point — p on the tree and ending on point — q in our camera on the projection screen.
Here we are going to assume a few things so that we can converge to an equation which we will use to create our rendering algorithm
Assumptions:-
There is a Z-axis passing from the centre of the cube through the pinhole.
The Y-axis is vertically perpendicular to the Z-axis, positive in the up direction.
The X-axis is perpendicular to both the Z- and Y-axes (not visible in the diagram for the sake of simplicity). We assume the Y-axis is vertical and the X-axis is horizontal.
The size of our cube camera is 1 unit.
The distance of the point p on the 3D object (the tree) along the Z-axis is z units.
Point p is y units above the Z-axis, along the Y-axis.
A light ray starts from point p = (x, y, z) on the 3D object and meets the projection plane at point q = (u, v, 1).
Note: That the u-unit and x-unit is the distance on the horizontal plane that is perpendicular to both Y and Z-axis, in the above image Y and Z-axis are clearly visible and imagine X-axis on a horizontal plane perpendicular to Y and Z-axis
We can see two similar triangles on this image.
Considering the similar triangle property : Ratio of edge length is same
From property above we can say that v/1 = y/z → v = y/z
v = y/z — vertical coordinate
therefore we can say that v is the slope and it is equal to y/z
In a similar way, we can get the horizontal coordinate
i.e. u/1 = x/z → u = x/z
u = x/z — horizontal coordinate
Therefore we can infer from the above two equations that if z becomes bigger (that is, if we move further away from the 3D object), the values of u and v become smaller and the image shrinks on the projection plane.
Now replace the pinhole camera with a human eye, the pinhole becomes pupil in the eye. The projection plane is the retina that sends the image to the brain via the optic nerve. Now you know why objects looks smaller as we move further away from it.
From the above two equations, we can create our algorithm for the rendering of the cube (remember our problem …How to draw 3D on 2D?)
Assume the camera is at c = (2,3,5)
Convert the (x,y,z) points on 3d object to (u,v) for all 12 edges as below
Subtract the camera c from the vertex (x,y,z) — as we have assumed the camera is on (2,3,5) — to normalize the position of the cube w.r.t camera
Divide (x,y) by z to get the (u,v)
Draw line between (u1,v1) and (u2,v2)
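Putting those steps together, here is a short Python sketch of the projection (the camera position c = (2, 3, 5) and the unit cube of side 2 come from the assumptions above; the drawing call itself is left as a comment since it depends on your canvas API):

```python
# Cube vertices for a cube of side 2 centred at the origin, and the
# camera position assumed above.
vertices = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
camera = (2, 3, 5)

def project(p, c):
    """Map a 3D point to (u, v): subtract the camera, then divide by depth z."""
    x, y, z = (p[0] - c[0], p[1] - c[1], p[2] - c[2])
    return (x / z, y / z)

projected = {p: project(p, camera) for p in vertices}
# Edges connect vertices that differ in exactly one coordinate; a drawing
# routine would now draw a line between projected[a] and projected[b]
# for each of the 12 edges.
print(projected[(1, 1, 1)])  # (0.25, 0.5)
```

Vertices farther from the camera get a larger |z| after the subtraction, so their (u, v) values shrink, which is exactly the perspective effect derived above.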
BAAM!!! 😎
phew!!!
And here we go, we have successfully found out the coordinates of the 3d representation on a 2d screen without losing the shape of the cube.
In the above steps, we saw how we convert digital information into purely visual information with an algorithm that we derived with the help of high-school maths. | https://medium.com/xrpractices/what-is-computer-graphics-5ac24606a8c5 | ['Nipun David'] | 2021-01-10 05:21:17.911000+00:00 | ['General', 'Computer Graphics', '3d', 'Virtual Reality', 'Augmented Reality'] |
Your Grandchildren will hate you if you fail to read this to the end. | Your Grandchildren will hate you if you fail to read this to the end.
Have you heard of the Igbinedions of Edo? Do you know Dangote did not come from a poor family?
They built generational wealth for themselves.
Here is your opportunity to build generational wealth for yourself and your successors.
Dan Lok, who failed English twice, makes .... from what I am about to show you.
What I am about to show you is the highest paid skill in the world. The skill that can make you millions if you master it.
Copywriting skill.
I like helping others. I have decided to take you hand-in-hand and walk you through the hooks and crannies of copywriting.
I will be revealing the secret of getting high paying Copywriting jobs that can make you six figures from local clients.
If you feel you are making enough money to cater for your future generations, it is not for you. If you feel comfortable with what you have been able to achieve this year, it is not for you. If you are absolutely satisfied with your financial situation, it is not for you.
I have just 14 slots left and a slot goes for just 3k. | https://medium.com/@badmusrasheed97/your-grandchildren-will-hate-you-if-you-fail-to-read-this-to-the-end-c452e3f5a83d | [] | 2020-12-10 19:04:54.795000+00:00 | ['Entrepreneurship', 'Sales', 'Wealth', 'Copywriting'] |
In honor of Earth Day 2020 | It almost feels like humans are getting their karma and the earth is having a good laugh while also healing. By no means do I mean to diminish the human tragedy and suffering so many are going through with our current pandemic, but I am lifted by the thought that our Earth is healing, if only just a bit, from so much of the abuse we've imposed on her.
In honor of Earth Day 2020, I’ve made a compilation of all of the articles that I’ve written to support positive ocean and earth technologies and solutions. This is a salute to the innovators and entrepreneurs committed to Earth’s health and healing.
Many of these companies are young startups in search of funding —it is my hope that in amplifying their stories, we help get them noticed and in front of the right eyeballs with the funds and abilities to invest. What the earth needs now more than ever are viable solutions to climate change and investment in the right technologies that can accelerate positive ocean and earth impact.
Happy Earth Day 2020!
Ocean Spray Embraces Regenerative Agriculture And Goes 100% Sustainable
Eat Cranberries and Feel Good
Now you can eat cranberries year-round and feel great, not only because cranberries are a superfruit rich in antioxidants, vitamins B, C, and E and carry many other health benefits such as improving blood pressure and are anti-inflammatory, but also because they are great for the environment!
Ocean Spray’s cooperative farmers are committed land stewards, conserving an average of 5.5 acres of natural lands for every 1 acre of cranberry bog, the name for the sandy beds in which cranberries grow. In fact, their farmers have adopted and are committed to the following sustainability principles:
Farm productivity for generations
Soil health and responsibility
Fertilizer management
Water stewardship
Ecosystem conservation and biodiversity
Integrated Pest Management (IPM)
Wellbeing for workers and the community
… all of which make regenerative agriculture so important.
Can Technology Save the Ocean?
Can Technology Save the Ocean? It’s a provocative question and one with many answers.
I recently hosted a virtual conversation with three ocean tech entrepreneurs who are looking to deploy their technologies to help tackle some of the biggest issues the ocean is facing.
Hyper Moves With Ben Lamm Of Hypergiant
In the age of Coronavirus, ocean acidification, plastic overload, and insect apocalypse, what does the world need to think more boldly and optimistically about our future? Some say it's more entrepreneurs like Ben Lamm, whose company Hypergiant recently announced its EOS algae bioreactor, which is 400 times more effective at sequestering CO2 than trees.
I recently sat down with Ben to talk about his journey as an entrepreneur, what drives him and what keeps him optimistic in these apocalyptic times. It was an insightful conversation and an encouraging window into the brain of a hyper-focused man looking to leave his mark on humanity in a positive way.
Hypergiant Is Using AI And Algae To Take on Climate Change
Algae, that green scum often seen on the surface of ponds, and credited with harmful ocean algal blooms that kill ocean life might just hold an important key to addressing climate change. Algae, much like trees, uses carbon dioxide to conduct photosynthesis, sequestering CO2 as it grows.
Hypergiant, an AI products and solutions company, is harnessing this unique power of algae in its latest technology, the EOS bio-reactor which uses AI to optimize algae growth and carbon sequestration.
Its bio-reactor is built to hook up to HVAC systems found in large industrial buildings, skyscrapers and apartment buildings which are some of the biggest contributors to global warming from the CO2 emitted through their energy usage and air conditioning systems.
Profit Meets Purpose at Latest “Tech4Nature” Encounter
Are we entering a new era of innovation where profit meets purpose?
In Silicon Valley and beyond the world of technologists are turning their attention to the critical issues our earth and oceans face, ushering us into a new era of innovation. An era with a purpose and an intention to remedy much of the damage we’ve caused to the Earth as a result of industrialization and unchecked corporate growth.
Adrian Grenier on Why Environmentalism is a Spiritual Journey
Adrian Grenier shares how environmentalism is a spiritual journey.
I had a lovely sit down with Adrian Grenier at the recent Planet Home event in San Francisco. Adrian and I have a shared passion for saving the ocean and his organization The Lonely Whale is focused on saving the oceans in three ways:
Bringing scalable solutions to market that are creating new business models that minimize environmental impact.
Empowering youth through education and community activism.
Sparking viral global movements that create positive and measurable impact.
Can AI Save Our Oceans? Let’s Start With The Data.
The ocean is in a dire crisis that puts the entirety of humanity at risk. The gravity of its issues range from climate change to plastics pollution to overfishing, all of which are overwhelming issues to tackle individually, and seemingly insurmountable when looked at together.
Scientists have noted that even if we were to halt all of our fossil fuel activity today, we are still on track to lose 90% of the ocean’s corals by 2050. Coral is the ocean’s life system and without it, we will soon also have an ocean without life.
In spite of the terrible news, there are some glimmers of light and hope spots that we can point to, especially in the areas of AI for the benefit of the ocean. This will be the first in a series of articles that puts a spotlight on the top innovators and innovations that are using the power of technology, and specifically AI to restore and regenerate our precious oceans.
How thredUP is Driving the Circular Fashion Movement with AI
Fresh on the heels of a $175 million raise, thredUP is poised to capitalize on the growing $24 billion second-hand market through its use of artificial intelligence to bring efficiencies and scale to every area of its operations , while fueling the circular fashion trend among traditional retail brands with the launch of its “resale as a service” offering.
The company’s mission is to inspire a new generation of shoppers to think second hand first, keeping clothing out of landfills so that people can look great without being part of the problem.
Jeremy McKane Using Superpowers for Ocean’s Good
What happens when the artist can no longer sit back and play the passive observer in his creations but becomes compelled to become part of the art itself?
That much happened to Jeremy McKane, a former startup executive, turned artistic photographer that realized creating stunning photography was not enough, he had to do more to inspire action, create innovative solutions and unite a fragmented ocean community to work together to improve our oceans’ health. I had a chance to speak with Jeremy McKane as a cohost on the Break it Down Show, here’s what we discussed.
Alternatives To Plastic Are On The Rise
Alternatives to plastics are on the rise, and that’s a good thing!
We’ve all heard the sobering and depressing statistic that by 2050, there will be more plastic in the ocean than fish, which is not surprising considering we are dumping the equivalent of one garbage truck of plastic into the ocean every minute.
Annually, over 300 million tons of plastic are produced and 50% of that is meant for only a single use, such as the plastic utensils used for a take-out meal or the top of a coffee cup. These objects are in use for just a few moments, yet because they are non-biodegradable, they will exist on the planet for hundreds of years. The amount of plastic production this requires means that by 2050, the plastics industry will consume 20% of total oil production, and be responsible for 15% of the annual carbon budget.
National Science Foundation Awards Litterati Grant to Advance its AI for a Cleaner Planet
On Earth Day, 2019, the National Science Foundation has awarded Litterati a second grant in the amount of $750,000 to advance its machine learning and further its mission to create a litter-free world by using data to plot and identify litter across the planet. The grant is part of the National Foundation’s Small Business Innovation Award and is the second grant awarded to Litterati, bringing the total amount of the award to $1 million.
We are pleased to accept this grant from the National Science Foundation. To many people, litter is someone else’s problem to solve, but unfortunately, it is something that impacts us all — from degrading our environment to killing wildlife and infecting our food supply,”
said Litterati Founder and CEO, Jeff Kirschner.
Are we Entering a New Era of Regeneration?
Regeneration is a relatively recent idea that has been gaining traction amongst climate action circles worldwide.
In contrast to sustainability, which aims to maintain a state that avoids continued depletion of natural resources in order to keep ecological balance; regeneration refers to restoration, renewal, and growth.
In other words, regeneration does not sustain, but rather improves upon the previous state.
The Role Of Smart Grids And AI In The Race To Zero Emissions
The term “smart grid” encompasses much more than just power delivery, though that is an important factor. At its core, the main pillar of a smart grid is a two-way connection of energy and information, but it goes much deeper than that. For maximum effectiveness and efficiency, a smart grid infrastructure should also include two more pillars: distributed generation and AI.
According to Harvard’s Science in the News, the current power plant system in the US was “not built to accommodate the diversification in energy sources, especially not the rise in renewable resources.” In our current system, when demand outpaces supply, utility providers source energy from ‘peaker plants’ (backup fossil fuel-powered plants) at a minute’s notice — just barely avoiding catastrophe.
Why It’s Time to Abolish Marine Theme Parks
Caging the Ocean’s Wild, a film presented by the International Ocean Film Festival is a tough, but important film to watch. The barbaric practice of putting the ocean’s most majestic creatures into small tanks for our enjoyment must stop.
In the US, the movie Blackfish, as well as activist groups like PETA have created awareness about the cruel practices of this industry. This consumer awareness has impacted Seaworld’s bottom line, its former CEO resigned and the company has been trying to improve its image by focusing on animal rescue and rehabilitation efforts, as well as announcing that it would stop breeding orcas.
Innovation drives the Blue Economy at Oceanology International 2019
Last week’s 50th anniversary Oceanology International conference in San Diego hosted a unique gathering of scientists and technologists working in partnership to preserve and grow “The Blue Economy”.
The economic activities that create sustainable wealth from the world’s oceans and coasts. — Center for the Blue Economy.
And the World Wildlife Fund estimates the total value of the blue economy to come in at over $24 trillion. This is the frontier of ocean research — which builds upon and integrates our digital understanding of the ocean into an understanding of how humans can interact with the oceans in positive economic ways.
Ocean Economy Takes Center Stage at Davos
The “ocean economy” took center stage last month at the World Economic Forum’s annual meeting in Davos — arguably the most important economic conference in the world.
While scientists have been sounding the alarm about climate change for a while, it seems the world is suddenly taking notice. Perhaps it was last year’s report from the IPCC that laid out a bleak future for humanity if we don’t change our ways, with the alarming reality that we have less than 12 years to turn the tide.
https://gritdaily.com/ocean-economy/
An Open Letter to Love

Dear Love,
I have been meaning to talk to you for a while. But you just exist in so many places and so many things that I couldn’t figure out which version of you I should talk to, until I realized maybe you’re the same, and it’s my perception of you which changes.
You have made me laugh and you have made me cry and you have had your moments where you have also said goodbyes. You have been kind, patient and sometimes you have made me reckless. You have been there in the form of family, friends and boys but most importantly you reminded me that I need some of you for myself.
You have taught me that you can mean different things to people, for some you are a person, for some you are their work and for some you are food! But over the years I have been trying to find what you are to me. Let’s start from the beginning, shall we?
I think when I was a toddler I only saw love from my parents. Love then meant “someone who got you what you wanted”. My parents looked after me and they took care of me; I never saw the bigger picture, the sacrifices or the commitments they made, and that’s okay. As long as I saw them and I got all my favorite toys, I was happy.
When I became a teenager, things became a little more complicated. You created delusions. I thought you were “something I never felt before”. I fell into your trap, but now I know you were only teaching me how to grow up and how to be more mature. I didn’t know this back then. So I spent years trying to change everything for that one boy who would never appreciate me. I was almost the smartest in my class, but I always thought I had to be skinnier, fairer and more outgoing to deserve you. And the worst part is, even though you made me see the worst in me, I didn’t want to let you go. But that wasn’t you, was it? You were only trying to teach me what you weren’t, and I, so naive, didn’t learn that lesson.
I came to college, you put me through ups and downs again. You taught me someone who only sees the best of me isn’t you. You taught me someone who loves only my tangible parts isn’t you.
That’s when it dawned on me. You aren’t all of you until I am all of me. And from that day on, I started being all of me. It wasn’t easy, I am flawed. Not easy to love. Quite damaged. But I tried. I tried until I finally found the slightest presence of you, within me. And you whispered, “you only let the ones in, who know the real you”. Thank you.
You played until I think I finally met you. The you, who loves all of me. I am still trying. But I accepted me and that’s how you knew that I was ready for you, the real you.
And then I met you. The you, who loves the good and the bad, the kind and the sad. I treated you badly, but he always came back. You didn’t stop me and neither did he, and that’s when I knew it was really you. My quirks didn’t bother you. You appreciated them. Thank you.
However, I am glad that you completed me. You taught me how to grow, not just with other people but with me. It’s been a journey from not wanting to look in the mirror to looking at myself and being proud of who I am. You also taught me that you will always be constant in some ways. For me, that has been my parents, of course, and writing. At first I thought writing was an escape, but now it feels more like a way to connect, to you and to me. It’s one of the ways I express my love for myself. Thank you.
Over the years, you have taught me lessons. Some painful, but most of them happy. You have shown me what it means to truly let go. Sometimes you have tested me. I have let you change me. Sometimes for the better, and sometimes I have become someone else who isn’t really me. And that has been the most important lesson: that I shouldn’t be becoming someone else for you. I will only see the true you if I can be me, honestly.
You have taught me that to earn you, I will sometimes have to compromise. But you have also taught me that the love I have for myself should define the compromises I am willing to make and the risks I am willing to take. And I took my time to learn that.
I think you wanted me to learn from my mistakes rather than handing me the best the first time. And for that I am thankful, because not only did you teach me to love everyone else, but first you taught me to love myself. Thank you.
Regards,
Always your keeper.