Blockport April update
In this article we provide a quick update on some of the main developments of Blockport in the past and coming weeks.
Testing Blockport 1.0 Beta
The last two weeks have been relatively quiet but very intensive for the Blockport development team. Since the Blockport application is decomposed into a collection of microservices that all have their own set of functionalities and responsibilities within the whole, it takes substantially more effort to properly test all functionalities and interdependencies. However, more time spent on testing will certainly pay off in the long run.
Read more about microservices here
We have been making great progress on testing the integration of all microservices and are now moving to the next phase where we will mostly be stressing the security and stability of the entire architecture. In this phase we are intensively collaborating with multiple cybersecurity companies that have a lot of experience with testing performance and security for financial services’ applications.
For instance, we are closely working with Bugcrowd who facilitate a global crowd of trusted security researchers that will help us in managing our upcoming bug bounty and responsible disclosure program. Together with Bugcrowd and several other cybersecurity companies, we are working hard to ensure a secure and stable product so that we can soon start on-boarding more users with confidence.
Next to assuring our system is secure and stable, we also focus our efforts on preparing our incident response strategy by mapping and standardizing all processes supporting our production readiness. ‘Production readiness’ is one of the concepts you hear often at Blockport, as it is the one thing that will allow our team to constantly improve, manage and repair our production services. You can think of relevant documentation, a great focus on automation of tasks, and eventually even something like ‘chaos engineering’.
Currently, there are around 20 participants who are extensively testing the beta. As many of you know, these testers are under NDA and cannot disclose any developments due to security reasons, which is in the best interest of Blockport and the community. The Blockport team confidently expects to increase the number of testers over the next few weeks. Make sure to sign up for the waiting list if you have not done that already.
Marketing
As many of you know, we have invested a lot of time and resources into the development of the beta. Therefore we chose to lower our marketing efforts during its private stage, since it is not very effective to promote a product that is not publicly available yet. However, as of this week we are ramping up our marketing efforts again to work towards the public release of the Blockport 1.0 Beta.
Essentially, this means that we will gradually develop:
Strong brand awareness
Content about product and team developments
Relationships with potential partners in the blockchain or financial services market
In the past few months most of the big advertising platforms (Twitter, Facebook, Google, etc.) decided to implement stricter policies on crypto and blockchain related advertisements. Due to these radical changes in their advertising policies we had to adjust our marketing activities. We have run several tests to find out what kind of ads still perform well, so that we can utilise them in future campaigns.
Besides advertising, the Blockport Beta Access Campaign was set up last week, and the exact details and launch date will soon be announced. Participating will give you the opportunity to claim access to our open beta and to take a shot at winning additional BPT. We have allocated 25,000 BPT and Blockport merchandise to reward the community members participating in the campaign.
In line with the Blockport Beta Access Campaign, we will launch our Community Content of the Month contest, where we reward community members for creating quality content. More information on the structure of this contest will be published later this week.
Community involvement
For the past few weeks we have been conceptualising the Social Trading features with the product and design team. We started with interviewing members from our community to gather feedback and insights on how users would rate, rank and compare experienced “social” traders on our platform. In the coming weeks you can expect us to increase the involvement of the community even more by performing surveys and interviews. We already received a lot of valuable feedback in last week’s interviews, but if you wish to participate you can still sign up here. Thank you to everyone who helped us so far!
Team and organization
The Blockport team grew enormously in the past few months (mentally and physically) and therefore we are now actively searching for a bigger office where we can expand our team in a nice and comfortable environment. If you have any tips or suggestions in Amsterdam, please feel free to contact us!
Additionally, we noticed that managing 22 developers in a single team is a real challenge, so last week we decided to switch to a new team division. This enables us to maintain cross-functional teams that have a clear scope, goal and responsibility.
Switching team set-up is always a bit of a trade-off. In the short term it may lead to lower team velocity as a result of task switching. However, in the long run it should lead to more focus and more empowered teams.
As a result of our new team division, one team will now completely focus on trading, which encompasses everything we need to build and improve on the trading related features. Our second team will work on account functionalities, which enable our users to access and configure the platform. Think of on-boarding processes, KYC, settings and security configuration. Finally, our third team will solely focus on the development of the new Social Trading features.
Risk, Compliance & Regulations
Together with our advisors Geert Blom (Enigma Consulting) and Johannes de Jong (Osborne Clarke) we visited the innovation team of the DNB (Dutch Central Bank) last week. This was an initial meeting in which we described our future plans, and they provided us with valuable feedback on the developments of Dutch and EU regulations. We aim to stay in touch with them as much as possible to enable short feedback cycles between us and the Dutch and EU regulators.
At present, we are talking with multiple banks and PSPs in Europe to build a long-term relationship and work towards diversifying our options to reduce risk and to keep ahead of the ever-changing environment. Unfortunately, for the sake of developing a trustworthy relationship, we cannot disclose any particular companies we are talking with, but we will disclose important news once we can.
If you are excited about what we do, please join our community Telegram channel! | https://medium.com/blockport/community-update-16-4-2018-85b0262240fd | ['Sebastiaan Lichter'] | 2018-04-16 16:05:43.871000+00:00 | ['Ethereum', 'ICO', 'Bitcoin', 'Cryptocurrency', 'Startup'] |
Optimizing Bulk Load in RocksDB
What’s the fastest we can load data into RocksDB? We were faced with this challenge because we wanted to enable our customers to quickly try out Rockset on their big datasets. Even though the bulk load of data in LSM trees is an important topic, not much has been written about it. In this post, we’ll describe the optimizations that increased RocksDB’s bulk load performance by 20x. While we had to solve interesting distributed challenges as well, in this post we’ll focus on single node optimizations. We assume some familiarity with RocksDB and the LSM tree data structure.
Rockset’s write process contains a couple of steps:
In the first step, we retrieve documents from the distributed log store. Each document represents one JSON document encoded in a binary format. For every document, we need to insert many key-value pairs into RocksDB. The next step converts the list of documents into a list of RocksDB key-value pairs. Crucially, in this step, we also need to read from RocksDB to determine if the document already exists in the store. If it does, we need to update secondary index entries. Finally, we commit the list of key-value pairs to RocksDB.
We optimized this process for a machine with many CPU cores and where a reasonable chunk of the dataset (but not all) fits in main memory. Different approaches might work better with a small number of cores or when the whole dataset fits into main memory.
Trading off Latency for Throughput
Rockset is designed for real-time writes. As soon as the customer writes a document to Rockset, we have to apply it to our index in RocksDB. We don’t have time to build a big batch of documents. This is a shame because increasing the size of the batch minimizes the substantial overhead of per-batch operations. There is no need to optimize the individual write latency in bulk load, though. During bulk load we increase the size of our write batch to hundreds of MB, naturally leading to a higher write throughput.
Parallelizing Writes
In a regular operation, we only use a single thread to execute the write process. This is enough because RocksDB defers most of the write processing to background threads through compactions. A couple of cores also need to be available for the query workload. During the initial bulk load, query workload is not important. All cores should be busy writing. Thus, we parallelized the write process — once we build a batch of documents we distribute the batch to worker threads, where each thread independently inserts data into RocksDB. The important design consideration here is to minimize exclusive access to shared data structures, otherwise, the write threads will be waiting, not writing.
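As a rough illustration of this fan-out (a conceptual Python sketch, not Rockset’s actual implementation, which sits on RocksDB’s C++ API), each worker gets its own slice of the batch and sorts it independently, so there is no shared mutable state to fight over:

from concurrent.futures import ThreadPoolExecutor

def build_sorted_run(kv_pairs):
    # Each worker sorts only its own key-value pairs; in the real system this is
    # where a write thread would build its own output before handing it to RocksDB.
    return sorted(kv_pairs)

def write_batch_in_parallel(batch, num_threads=8):
    # Split the batch into one independent chunk per worker thread.
    chunks = [batch[i::num_threads] for i in range(num_threads)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(build_sorted_run, chunks))

runs = write_batch_in_parallel([(b"key2", b"v2"), (b"key1", b"v1")])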
Avoiding Memtable
RocksDB offers a feature where you can build SST files on your own and add them to RocksDB, without going through the memtable, called IngestExternalFile(). This feature is great for bulk load because write threads don’t have to synchronize their writes to the memtable. Write threads all independently sort their key-value pairs, build SST files and add them to RocksDB. Adding files to RocksDB is a cheap operation since it involves only a metadata update.
In the current version, each write thread builds one SST file. However, with many small files, our compaction is slower than if we had a smaller number of bigger files. We are exploring an approach where we would sort key-value pairs from all write threads in parallel and produce one big SST file for each write batch.
Challenges with Turning off Compactions
The most common advice for bulk loading data into RocksDB is to turn off compactions and execute one big compaction in the end. This setup is also mentioned in the official RocksDB Performance Benchmarks. After all, the only reason RocksDB executes compactions is to optimize reads at the expense of write overhead. However, this advice comes with two very important caveats.
At Rockset we have to execute one read for each document write — we need to do one primary key lookup to check if the new document already exists in the database. With compactions turned off we quickly end up with thousands of SST files and the primary key lookup becomes the biggest bottleneck. To avoid this we built a bloom filter on all primary keys in the database. Since we usually don’t have duplicate documents in the bulk load, the bloom filter enables us to avoid expensive primary key lookups. A careful reader will notice that RocksDB also builds bloom filters, but it does so per file. Checking thousands of bloom filters is still expensive.
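As a self-contained sketch of that idea (illustrative only, not the actual implementation), a single bloom filter over all primary keys lets the writer skip the expensive lookup for keys that were never inserted:

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1_000_000, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False positives are possible, false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

def document_exists(primary_keys_filter, key, lookup):
    # Only pay for the expensive primary-key lookup when the filter says the key
    # might already be present.
    return primary_keys_filter.might_contain(key) and lookup(key)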
The second problem is that the final compaction is single-threaded by default. There is a feature in RocksDB that enables multi-threaded compaction with option max_subcompactions. However, increasing the number of subcompactions for our final compaction doesn’t do anything. With all files in level 0, the compaction algorithm cannot find good boundaries for each subcompaction and decides to use a single thread instead. We fixed this by first executing a priming compaction — we first compact a small number of files with CompactFiles(). Now that RocksDB has some files in non-0 level, which are partitioned by range, it can determine good subcompaction boundaries and the multi-threaded compaction works like a charm with all cores busy.
Our files in level 0 are not compressed — we don’t want to slow down our write threads and there is a limited benefit of having them compressed. Final compaction compresses the output files.
Conclusion
With these optimizations, we can load a dataset of 200GB uncompressed physical bytes (80GB with LZ4 compression) in 52 minutes (70 MB/s) while using 18 cores. The initial load took 35min, followed by 17min of final compaction. With none of the optimizations the load takes 18 hours. By only increasing the batch size and parallelizing the write threads, with no changes to RocksDB, the load takes 5 hours. Note that all of these numbers are measured on a single node RocksDB instance. Rockset parallelizes writes on multiple nodes and can achieve much higher write throughput.
Bulk loading of data into RocksDB can be modeled as a large parallel sort where the dataset doesn’t fit into memory, with an additional constraint that we also need to read some part of the data while sorting. There is a lot of interesting work on parallel sort out there and we hope to survey some techniques and try applying them in our setting. We also invite other RocksDB users to share their bulk load strategies.
I’m very grateful to everybody who helped with this project — our awesome interns Jacob Klegar and Aditi Srinivasan; and Dhruba Borthakur, Ari Ekmekji and Kshitij Wadhwa. | https://medium.com/rocksetcloud/optimizing-bulk-load-in-rocksdb-f3589786966c | ['Igor Canadi'] | 2019-09-13 17:43:11.450000+00:00 | ['Database', 'Performance', 'Data Ingestion', 'Software Engineering', 'Rocksdb'] |
New Kind of Ice in Deep-Earth Diamonds
Latest discovery of ice in diamonds hints at water pockets in the mantle
To us, diamonds are renowned for their sparkle and colour, but to scientists, they’re the key to unlocking the mysteries of the Earth’s interior. (Source: Pixabay)
Inside the rigid structure of deep mantle diamonds, scientists have discovered ice crystals, hinting at the existence of water pockets in the Earth’s mantle. The ice was discovered in the form of Ice VII — a high-pressure form of water — which was not known to naturally occur on Earth. Further analysis of the ice by scientists also showed that inside the mantle, the ice is actually liquid.
The mantle is one of Earth’s layers and consists of solid and hot rocks under intense pressure. It’s divided into three more layers: the upper layer, transition zone and a lower layer. While there is some water in the upper layer, scientists suspect that the transition layer has 10 times more water. In fact, evidence already exists for water in the mantle region. However, Ice VII is a special kind of ice crystal since it forms under high pressure.
A graphic cross-section showing the interior structure of Earth (Source: BBC)
Put simply, when diamonds form, they can trap some parts of their chemical environment in their ‘inclusions’. So, it might have encapsulated some water from around the transition zone while the zone’s high pressure prevented crystallization. Once the mantle’s convection current moved some of these diamonds to the surface, temperatures would drop while the structure of the diamonds maintained the transition zone’s pressure. As a result, the water could freeze as Ice VII.
And as mentioned earlier, the mantle is mostly believed to be solid or made up of solid rocks and elements. However, before the discovery of Ice VII down in the mantle, some mantle maps had already shown hidden water pockets. This discovery of Ice VII strengthens the possibility of a differently composed mantle.
In fact, “geologists discovered these diamonds in mines in southern Africa, Zaire, Sierra Leone and China” (Gizmodo), indicating that this might be more than just a regional abnormality.
In the end, the discovery of water pockets in the mantle could influence the rate at which heat escapes from the planet and lead to new approaches for understanding the mechanisms of the planet’s exterior.
Surprisingly though, the discovery was not deliberate, but rather an accident. It was found while scientists were looking for evidence of unusual phases of carbon dioxide in those same diamonds.
Further Reads
Popular Science — Why so many diamonds are making science headlines this week
Science Magazine — Ice-VII inclusions in diamonds: Evidence for aqueous fluid in Earth’s deep mantle
Nature: CaSiO3 perovskite in diamond indicates the recycling of oceanic crust into the lower mantle | https://medium.com/newscuts/new-kind-of-ice-in-deep-earth-diamonds-f1ad0261f9d0 | ['Yash Talekar'] | 2018-03-12 00:13:11.691000+00:00 | ['Technology', 'Diamonds', 'Earth', 'Science', 'Water'] |
Real People Don’t Use UTM Codes
UTM codes are a great way to track the success of your online activities, but you shouldn’t use them. Avoid link shorteners too. Especially if your goal is to engage in authentic human-to-human conversations.
Caveat: If you are in a marketing organization driving marketing campaigns, you should totally use UTM codes. For example, this post from Lee Hurst begs marketers to start using more UTM codes.
So why shouldn’t you use UTM codes?
Human filtering
Every time we dive into social media (email, reddit, Twitter, LinkedIn, Facebook, …) our brains drown in a sea of information overload. We are in this site to find interesting links, but our brains have had to develop quick strategies to separate real content from ads.
Which of these links would you rather click on? Picture three versions of the same link: a clean URL, the same URL with a long string of utm_ parameters appended, and an opaque bit.ly short link.
The answer is clear: the first one looks like a link that a friend would send to me. The second one looks like a link I would find in an email campaign. The third one tells me nothing, and will probably remain un-clicked.
These 3 links go to the same place, but the first looks way more trust-worthy than the rest.
Machine learning filtering
A savvy internet expert will quickly counter: “People don’t need to see the links, you can hide URLs behind text”.
That’s true — and there will be people that click on a link without first snooping into what URL is hidden behind. But not all filtering is done by human brains. Computers try hard to filter authentic links from noise for you.
Take email, reddit, Twitter, LinkedIn, Facebook: They all try to surface links that are interesting to you, while filtering out spam. They all use machine learning to look at a link and score how interesting it might be to you. All of these ML models have been fed millions of annotated links, and all of these models have learned: If the link has an UTM code or if the link goes through an URL shortener, the probability of spam is much higher.
Let’s look at some data
For example, on reddit.com/r/programming during a given month 491 links had a score>5. Only one of these links had utm codes embedded:
/r/programming links don’t use utm codes (except 1 of 491)
Same on Hacker News: Looking at all the links with score>5 during 2020 so far, only a very small proportion of them have UTM codes embedded:
Hacker News links don’t use utm codes (except 49 of 36,170)
What about shorteners? 260 bitly URLs have been submitted to Hacker News this year, and their average score is 0:
Bitly links on Hacker News get an average score of 0
Queries
SELECT fhoffa.x.median(ARRAY_AGG(score)) median_score, url LIKE '%utm%' with_utm, COUNT(*) c
FROM `fh-bigquery.reddit_posts.2019_08`
WHERE score > 5
AND url>''
AND subreddit='programming'
GROUP BY 2;

SELECT fhoffa.x.median(ARRAY_AGG(score)) median_score, url LIKE '%utm%' with_utm, COUNT(*) c
FROM `bigquery-public-data.hacker_news.full`
WHERE EXTRACT(YEAR FROM timestamp)=2020
AND score > 5
AND url>''
GROUP BY 2;

SELECT ROUND(AVG(score)-1,1) avg_score, url LIKE '%bit.ly%' is_bitly, COUNT(*) c
FROM `bigquery-public-data.hacker_news.full`
WHERE EXTRACT(YEAR FROM timestamp)=2020
-- AND score > 5
AND url>''
GROUP BY 2
LIMIT 1000
UTM codes make it harder to re-share your content
Whenever I want to re-share an interesting link, I do so. But if the link I got has UTM codes, I now have to work harder: before sharing said link, I will need to clean it up and remove all the UTM noise from it. Even worse, if re-sharers don’t remove the existing UTMs, then all your analytics will become messed up with the wrong attribution.
If you want top social media influencers to naturally share your links, help them do so with minimum effort. They know their shares will be more successful without UTM codes, so help them by giving them clean links.
Shorteners are against the rules
On Medium, their rules:
To prevent fraud and abuse, you may not: Include shortened URLs in your posts.
Medium rules against shorteners. You may not “include shortened URLs in your posts”.
In fact, one of my friends was recently banned from Medium. The only flaggable behavior we found on their posts was the use of bitly links.
“Account suspended. Error 410. This account is under investigation or was found in violation of the Medium Rules”. If you use link shorteners on Medium, this might happen to you too.
This is because URL shorteners are commonly used by spammers and abusers.
Quora seems to have similar policies (friends have had their content removed). People also have strong negative reactions to UTM links.
In summary
If you are driving marketing campaigns, use UTM codes.
If you are engaging in human-to-human conversations, don’t use UTM codes.
To avoid your links getting tagged as spam, don’t use URL shorteners or UTM tags.
To measure results without UTM codes… well, that’s for a future post.
Want more?
I’m Felipe Hoffa, a Developer Advocate for Google Cloud. Follow me on @felipehoffa, find my previous posts on medium.com/@hoffa, and all about BigQuery on reddit.com/r/bigquery. | https://medium.com/swlh/real-people-dont-use-utm-codes-30e6c12ea60 | ['Felipe Hoffa'] | 2020-08-12 22:41:54.717000+00:00 | ['Social Media', 'Analytics', 'Developer Relations', 'Marketing', 'Bigquery'] |
Golem Network vs. Render Farms
Whenever a rendering task is too demanding to be done on a local machine, 3D artists can rely on so-called render farms to do the job. Golem Project aims to decentralise these computing tasks by splitting them up and distributing them onto a network of nodes. The promise is to make the rendering task cheaper and faster for 3D artists. We put Golem’s claims to the test and checked how it performs vs. 4 established render farms on the market — namely Garagefarm, Renderstreet, Superrenders, Sheepit.
Pricing Model
Render Farm
You are paying a fixed fee per CPU hour. This can vary from provider to provider, and it also depends on how powerful the CPU you are paying for is.
Golem
Compared to classical render farms, Golem is an open marketplace, so there is no fixed pricing per hour. Instead of a fixed fee, you make an offer of how much you are willing to pay for the job, which nodes can then accept or decline. You set the desired resolution and frame range, and how many subtasks you want to split the render job into.
Example:
File: BMW benchmark file from Blender
Settings: 1 Frame @ 10 Subtasks (10 minutes timeout)
In this case, the image will be split into 10 evenly sized subtasks. Every subtask is 1 job done by 1 individual node. The maximum time a node is allowed to take is 10 minutes, so in the worst case scenario we are paying 10 nodes for 10 minutes of computing time.
10 subtasks x 10 minutes timeout x (0,1 GNT/60) = 0,16 GNT maximum amount paid
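As a small illustrative calculation (assuming the 0,1 GNT per hour bid from the example above):

subtasks = 10
timeout_minutes = 10
price_gnt_per_hour = 0.1

max_payment = subtasks * timeout_minutes * (price_gnt_per_hour / 60)
print(max_payment)  # about 0.16-0.17 GNT if every node uses its full timeout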
Things to notice
You are only paying a node when it has completed at least 50% of the subtask.
If a node can’t deliver in time, a new one will be assigned for the job.
The higher the amount you are willing to pay, the faster your request will be processed because nodes have an incentive to pick up the most lucrative jobs.
Setting the right timeout/price can be tricky, but if you master the bidding process Golem can save you a lot of money/time for your rendering jobs.
To set the correct timeout values, I used my computer as a point of reference:
To render one frame of the above scene, my machine took 20 minutes. One subtask is 1/10th of the whole scene, so that would be 2 minutes. So I expect a node to take a maximum time of 4 minutes (20 minutes total / 10 subtasks + 2 minutes as a security margin).
Test
For the test setup, I used the BMW example mentioned above. My target resolution is 1920x1080 and I’m going to render the 1st of 1000 frames.
I rendered this with the same settings on Garagefarm, Renderstreet, Superrenders, Sheepit and Golem.
Garagefarm
I had a little trouble installing the Garagefarm plugin “renderbeamer” in Blender. However, the Garagefarm live chat was very fast and helpful. I chose the fastest render option and it worked quite well. Compared to the other render farms it rendered the fastest, but also had the highest cost per frame.
Time per frame: 15 seconds
Cost per minute: 2,95$
Cost per frame: 0,76$
Renderstreet
Renderstreet was very transparent and user friendly. After registration you can directly upload your file and start the rendering process. No need for a plugin.
Time per frame: 1 minute 20 seconds
Cost per minute: 0,4$
Cost per frame: 0,53$
Superrenders
Superrenders felt quite convenient and easy to use. You register, provide your Google Drive (or Dropbox, or Box etc..) account to get a sharing folder where you then drop the file you want to render. No Blender plugin needed. It was fast and by far the cheapest commercial render farm.
Time per frame: 36 seconds
Cost per minute: 0,05$
Cost per frame: 0,03$
Sheepit
Sheepit is a free distributed render farm for Blender. In order to get your projects rendered, first you have to render a minimum of 10 frames of other people’s projects. To do so, register, download the client, and start rendering other projects with your machine. After 10 frames are completed, you are free to add your own project to the network. The more you render for others, the higher your own projects will be prioritized.
Sheepit is free, but more time-consuming than other render farms. It took a few hours until the render process started, and the render process itself was also much slower compared to the others. If you are not in a rush, it is definitely a good option though.
Time per frame: 16 min
Cost per minute: 0,00$
Cost per frame: 0,00$
Golem
Prices and times on Golem can vary due to it being an open marketplace. To make the results comparable I submitted multiple rendering jobs and averaged the prices and times:
Time per frame: 20 minutes
Cost per minute: /
Cost per frame: 0,3 GNT = 0,09 $
*Note: Prices can vary dramatically due to the volatile nature of crypto assets, you can check the GNT prices here*
During my test I couldn’t manage to render multiple frames successfully. /u/jamuszyn from the Golem team suggested these settings to me:
BMW 1920x1080
Frame range: 1–10
Task timeout: 2h
Subtask amount: 10
Subtask timeout: 15 minutes
Price: 0.2 GNT/h
It computed in 28 minutes with the performance slider changed to 4, so it is quite probable that it could compute even faster with better nodes.
Sadly I was unable to replicate this result myself.
Result Overview:
Service | Time per frame | Cost per minute | Cost per frame
Garagefarm | 15 seconds | 2.95$ | 0.76$
Renderstreet | 1 minute 20 seconds | 0.4$ | 0.53$
Superrenders | 36 seconds | 0.05$ | 0.03$
Sheepit | 16 minutes | 0.00$ | 0.00$
Golem | 20 minutes | / | 0.3 GNT ≈ 0.09$
Conclusion
From the results I got during my test, it seems that Golem indeed offers quite a cheap rendering alternative in the render farm landscape. However, I found that Superrenders was even cheaper than Golem. Also, Sheepit is of course a free alternative for non-commercial projects.
One problem I see with the bidding model is its complexity and the unpredictability of prices. Setting up the correct bidding is essential to achieve good prices, and there is also the risk that a render job won’t be finished if the bid is too low.
Also, setting up the client is not super straightforward, as it requires port forwarding and knowledge of how to buy crypto assets.
These problems might vanish over time when Golem is further developed and moves out of the beta stage.
🚀 Interested in Tokenized Securities? Check out our newest project: STOCheck.com
Have you already tested Golem, what are your results ? | https://medium.com/trusteddapps/golem-network-vs-render-farms-7a1101bda79c | ['René Füchtenkordt'] | 2018-08-31 14:43:04.372000+00:00 | ['Review', 'Blockchain', 'Cloud Computing', 'Ethereum', 'Decentralization'] |
How to foster a culture of customer obsession
It’s a common refrain nowadays that companies need to be customer-centric. This often manifests itself in routine activities such as user research, customer interviews, and surveys.
However, I believe product teams can go much deeper than this.
Being customer-centric isn’t just about talking to your users; that’s table stakes. To truly be customer obsessed is to fully immerse yourself in your customers’ world.
And there may be no company that better operationalizes this than Amazon, who is highly renowned for its list of Leadership Principles; a set of tenets that guides its overall strategy and decision-making process.
The most important one, any employee will tell you, is Customer Obsession. They define this as:
Leaders start with the customer and work backwards. They work vigorously to earn and keep customer trust. Although leaders pay attention to competitors, they obsess over customers.
Every Amazon employee is expected to lead with this value, and it is a central piece of their interview process — whether you’re an engineer, marketer, or part of any other department.
Many other companies have since followed suit in adapting this as one of their values, and I’d like to share a few actionable ways that you can implement this in your teams.
Build relationships with your customer-facing teams.
(Yes, I realize the irony here)
Your customer support (and others like enablement, onboarding, account management, etc.) teams are some of the most valuable, yet often under-utilized assets. Get to know them, and engage with them regularly by:
Inviting them to your customer interviews
Sharing out your roadmaps with them
Showing them prototypes to get feedback
These teams have the strongest pulse on what customers are saying; why they’re dissatisfied with your product, what problems they have in their day-to-day, what alternatives they’re using, and so on.
The same goes for your Sales team. These folks have an incredible amount of competitive intel, can speak to blockers that commonly come up, and always know what moves the needle in closing a deal.
Walk a mile in your customers’ shoes.
News flash: your customers (likely) aren’t all using $2000 MacBook Pros and $1000 iPhones with HD Retina displays or blazing WiFi speeds.
To truly understand the norm for some of your customers, especially those in developing markets, try some of the following:
Access your website via Internet Explorer
Download your app on a low-end PC laptop
Load a large volume of data while on airplane WiFi
Moreover, if your product is a common utility, try using it as your go-to for your own day-to-day.
Whether it’s a calendar app, note-taking software, or any other tool you’re building, implement the products you’re building into your day-to-day routine. Check that calendar throughout the day for all your events, take all your meeting notes on that app, and so on.
Then you’ll really know what it feels like to put up with the potential bugs, UX flaws, or workflow constraints your product has. This is referred to as dogfooding.
Consume every customer interaction you can find.
There are so many tidbits of customer interactions that may not be reflected in feature requests, NPS comments, or tweets.
I call these ‘breadcrumbs’; bite-sized in nature, they leave a telling path to a land of promise, à la Hansel and Gretel. Try implementing these as a weekly routine:
Read customer support tickets
Listen to Sales calls
Read interview transcripts
Get a Zendesk, Intercom, Salesforce, etc. account and spend 30 minutes each week on there searching for keywords or terms that you’re interested in, find relevant chat transcripts or call recordings, and take notes on what you feel.
“Hi Company. Thank you for nothing. You made my daughter cry for 30 minutes yesterday, because we couldn’t login to stream Dora The Explorer. Switched browser, which seemed to solve half of the issues :-(”.
Unless you are a soulless robot, the above statement will probably trigger more emotions and requirements for actions than:
“Week 27 — Users experiencing issues on service: 57%”
We often put too much stock into quantitative data; users are people at the end of the day. Emotional sentiment is an incredibly strong signal, and that will never be reflected on any analytics dashboard. | https://uxdesign.cc/how-to-foster-a-culture-of-customer-obsession-3baaa5647474 | ['Samir Javer'] | 2019-05-28 23:47:06.217000+00:00 | ['Product Management', 'Product Design', 'UX', 'Customer Experience', 'Startup'] |
Optimizers in Deep Learning — Everything you need to know
Now, the Loss function is calculated as (y — ŷ) ^2. Then, optimizers are used during backpropagation to adjust the weights and this is done repeatedly until y = ŷ
Let’s learn about various types of optimization algorithms -
1. Gradient Descent
Intuition — Consider a dataset with 100k records. The entire dataset is considered in forward propagation and then the loss function is calculated. After this, the loss function is minimized using a Gradient Descent Algorithm in backpropagation.
Disadvantage:
Weight updating may take a very long time on huge datasets, so it takes time to reach the global minimum.
High RAM is needed.
Note: The global minimum is the point on the loss function vs weight graph which gives the optimal weights for the network.
The formula used for updating the weights is:
w_new = w_old − η × (∂L/∂w_old)
where η is the learning rate and ∂L/∂w_old is the gradient of the loss with respect to the weight.
2. Stochastic Gradient Descent
Intuition — Consider a dataset that has 100k records. Within 1 epoch, at a particular point 1 record is taken through forward propagation, then the loss function is calculated, after which the weights are updated using SGD during backpropagation.
Likewise, this process is done for all of the remaining 99,999 records.
Note: An epoch is 1 cycle of forward and backward propagation over all records of the dataset.
Advantage
Comparatively less RAM is needed
Disadvantage
As only 1 record is considered at a time, it takes a lot of time just to complete 1 epoch.
3. Mini Batch Stochastic Gradient Descent
Intuition: Consider a dataset with 100 records. First, we must decide the batch size. For example, let’s consider batch size as 20 and we run for 3 epochs.
So, in the 1st epoch, 5 iterations are run and in each iteration, 20 records are processed thus in 5 iterations all the 100 records are processed.
In the 2nd epoch, the same process repeats that is (5 iterations * 20 records per iteration = 100 records)
Likewise, the 3rd epoch is processed in a similar fashion as above.
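A minimal sketch of that bookkeeping (illustrative Python, with the numbers from the example above):

records = list(range(100))   # 100 records
batch_size = 20              # 100 / 20 = 5 iterations per epoch
epochs = 3

for epoch in range(epochs):
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]  # one iteration: forward pass, loss
                                                   # and weight update on these 20 records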
Advantage
Less RAM
Less computationally expensive
But,
In the loss function vs weight graph, the convergence towards the global minimum is not a smooth path but zig-zags, as we see in the diagram below, and this behavior is termed noise.
Comparison with and without noise
Why does this happen? Because we consider records in the form of batches and records in a particular batch might not be representative of the complete dataset, or else think of it as — the batch considered might have outliers.
Due to the noise, reaching the global minimum takes some time, but Mini Batch SGD is still better than SGD and GD.
4. Stochastic Gradient Descent with Momentum
It works similarly to Mini Batch SGD but with slight modifications. In Mini Batch SGD we faced a problem of noise, and this is resolved here. The word ‘Momentum’ refers to smoothing out the noise.
This is achieved by using the exponentially weighted moving average concept while updating the weights in backpropagation.
Note: As per researchers, it is good to set the smoothing factor (β) of the moving average to around 0.9–0.95.
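A toy version of that update rule (illustrative numpy code; β is assumed to be 0.9 here):

import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    velocity = beta * velocity + (1 - beta) * grad  # exponentially weighted moving average of gradients
    w = w - lr * velocity                           # update the weights with the smoothed gradient
    return w, velocity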
Let’s recap -
High RAM was needed — Taken care
Computationally expensive to train the network — Solved
The noise was affecting convergence — Not anymore.
So, we conclude SGD with Momentum is the best optimizer!? Wait…
Read on,
5. Adagrad (Adaptive Gradient Descent)
Researchers found out that having a fixed Learning rate is not efficient. So, they introduced the concept of Dynamic Learning rate
Why do we need a dynamic learning rate? Consider a scenario where there is nothing new for the network to learn from a set of records; in this case, using a dynamic learning rate helps us reach the global minimum faster. Refer to the below image -
Dynamic Learning rate vs Fixed Learning rate
In Adagrad, the learning rate is made dynamic by dividing it by the accumulated squared gradients: η_t = η / √(α_t + ε), where α_t is the sum of the squared gradients up to time step t and ε is a small constant.
Now there’s a problem here: in a very deep neural net with around 100+ hidden layers, α_t keeps growing, the effective learning rate shrinks towards zero, and there is a chance that our new and old weights become practically the same.
Let’s talk about the optimizer which solves this and then we get the best optimizer!!!
6. AdaDelta and RMS Prop
By using the concept of an exponentially weighted moving average in the learning rate formula, we solve the problem that caused the new weight in the network to end up the same as the old weight.
Formula -
w_new = w_old − (η / √(S_t + ε)) × (∂L/∂w_old), where S_t = β·S_(t−1) + (1−β)·(∂L/∂w_old)² is the exponentially weighted moving average of the squared gradients, used in place of the ever-growing α_t of Adagrad.
We’ve come to the end and the best optimizer to be used is the combination of RMS Prop and SGD with momentum which is called the Adam Optimizer.
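As an illustration, here is a toy Adam update step (with the commonly used default hyperparameters; t is the step count starting at 1):

import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # momentum part: EWMA of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # RMSProp part: EWMA of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias correction for the early steps
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v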
Thanks for reading!
Reach me at -
Email — [email protected]
LinkedIn — https://www.linkedin.com/in/tejasta/ | https://medium.com/analytics-vidhya/optimizers-in-deep-learning-everything-you-need-to-know-730099ccbd50 | ['Tejas T A'] | 2020-12-16 11:19:24.820000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Adam Optimizer', 'Gradient Descent'] |
Too Much Too Fast: Black Girls Need Therapy to Process the Racial Trauma of Everyday Life
In predominantly white institutions, Black girls are often left out of the narrative when decisions are made on who receives resources and support.
In direct response to the global pandemic and schools transitioning to remote learning, recently, there has been a surge in conversations at the intersection of mental health and Black girls. It is evident that school communities need to think critically about what resources they are providing to help support Black girls as they navigate the realities of racial trauma in their everyday life. At school, Black girls are the target of racialized experiences, negatively impacting their emotional safety and well-being. The cumulative effect of systemic racism has created institutional barriers that make it challenging for Black girls to receive the therapy that they both need and deserve.
In predominantly white institutions, Black girls are often left out of the narrative when decisions are made on who receives resources and support. There is a perception that Black girls are not in need of any help, an example of deficit thinking that suggests Black girls possess super strength and can handle everything thrown their way. Mirroring society, the dangerous stereotype of the ”strong” Black woman, posits that Black girls do not feel pain, and that they are obligated to take on the emotional labor and effects of intersectional racism and sexism. This harmful trope leaves many Black girls to suffer in silence, with little to no support to process their experiences that carry over into Black womanhood.
Statistics point to Black girls facing harsher disciplinary action in schools, resulting in higher suspension and expulsion rates, in comparison to white girls. This is due in part to Black girls being susceptible to adultification, a process in which their child-like behavior is viewed more punitively or not seen as child-like at all, stemming from racial discrimination. Black girls are also hyper-sexualized, where their bodies are objectified, further dehumanizing them, robbing them of their innocence and vulnerability as children. Both in and out of the classroom, Black girls are regularly tone and body policed for their hairstyles, their dress, and the way in which they speak. Their bravery and assertiveness is often miscategorized as “sassy” or “bossy”, negatively impacting their interactions with adult administrators and teachers. Black girls are constantly under a microscope, in which their every move is scrutinized or up for debate. This burden of representation can also be felt academically, where Black girls feel more pressure to perform well in the classroom because they know that all eyes are on them. This pressure extends beyond the classroom to the social dynamic, where Black girls feel the impact of being held to Eurocentric beauty standards, while simultaneously being culturally appropriated for their hair and likeness, all while being on the receiving end of daily racial microaggressions that strip away at their confidence and sense of belonging. These collective experiences are racially traumatizing, taking a toll on the psyche and mental health of Black girls.
Black girls deserve mental health resources that seek to nurture and support their growth, specific to their identity and experience as Black girls. Schools need to be proactive in their approaches to ensure that Black girls feel seen, heard, affirmed and valued. A first step for schools is to establish processes that make mental health and wellness resources accessible to Black girls. This would include identifying therapists with expertise in dealing with racial stress and trauma. Additionally, schools can work to connect Black girls to Black female mentors through online resources such as Therapy for Black girls, that offer strategies for navigating racially stressful environments.
Schools can meet this moment and its call for racial justice when they consider what it means to support the Black girls in their care. Schools must go deeper in how they respond to the racialized trauma inflicted upon Black girls within their schools. The stakes are too high and the implications are far too great to not develop practices that help Black girls thrive emotionally, mentally, and academically.
It is no secret that racism and COVID-19 continue to adversely impact the mental health of Black girls as they navigate everyday life and school. Too much is happening too fast, and the repercussions on not addressing this issue will have significant impact for generations to come. Educators must step up and get involved to combat these challenges that stand to get in the way of the promise and power of young Black girls, everywhere. Our Black girls deserve therapy and care so that when they are older, they can in turn, heal everyone else, just as Black women have done and continue to do everyday for the betterment of humanity. Black girls deserve to show up as their full authentic selves at school where they can go beyond merely surviving, but thriving. As educators, our commitment must be to support Black girls in their journey of life, helping them to unlock their fullest potential, harnessing and cultivating every ounce of their Black girl magic.
Ralinda Watts, a native of Los Angeles, is a diversity expert, consultant, educator and writer who works at the intersection of culture, identity, race, and justice, sparking thoughtful conversations on what matters most; authenticity! Her podcast, #RalindaSpeaks, is available on Apple, Spotify, and Google Podcasts. Connect with me on Instagram & Twitter @RalindaSpeaks | https://medium.com/an-injustice/too-much-too-fast-black-girls-need-therapy-to-process-the-racial-trauma-of-everyday-life-284d7e8a5371 | ['Ralinda Watts'] | 2020-11-01 21:57:44.493000+00:00 | ['Mental Health', 'Black Girls', 'Stress', 'Racism', 'Therapy'] |
100 lições para 2021 (100 lessons for 2021) | https://brasil.uxdesign.cc/100-li%C3%A7%C3%B5es-para-2021-3916c1d5db69 | ['Fabricio Teixeira'] | 2020-11-29 19:47:24.005000+00:00 | ['Visual Design', 'UI', 'Product Design', 'UX', 'Design'] |
Snap Snap
Daily comic by Lisa Burdige and John Hazard about balancing life, love and kids in the gig economy. | https://backgroundnoisecomic.medium.com/snap-snap-4232e688db2f | ['Background Noise Comics'] | 2019-10-10 07:11:01.412000+00:00 | ['Mental Health', 'Body Image', 'Comics', 'Addams Family', 'Humor'] |
N25: from Violence to Non-Violence
baby steps…
© Tim Gallo. It’s a wet, dark, and long tunnel into darkness.
I wonder if there is no way of cutting through a circle of violence…
my grandmother used to beat me and close me in a dark cupboard. (though it was in my memory vault till like early 30-s when I finally made myself face my past and made myself remember it).
In kindergarten, teachers used to beat me cause I was different; post-soviet education was a mess at those times. My mixed blood look added to confusion.
All this made me a violent kid. I beat the shit out of my kindergarten and school classmates up until 12–13 y.o. Never felt good about it, and hated myself for the inability to communicate my inner struggle with violence and loneliness to people.
Parents tired of being called into school. My parents were always great, though, never took any freedom from me.
My father miraculously pulled me away from a bad crowd. Moved me to a different school.
There was a moment when my violence made me lose half of my front tooth. Lying under the light waiting for the dentist to appear, I remember that I was fed up with myself so much that when the doctor finally arrived, to his surprise I asked him to not repair my tooth. I wanted a reminder that this is what it leads to — eventual self-destruction.
At 14, I made a conscious choice to become vegetarian and chose the least violent religion — Hinduism. The Advaita-Vedanta is accepting all there is… it helped me and continues to help me a lot.
I spent a lot of my youth just studying the nonviolent way of life. Becoming vegetarian calmed my heart, made me just a little bit more compassionate.
The next years were very “normal” for me. I prayed. Meditated a lot, otherwise lived an adventurous life. Moved to Japan. Built a career in photography in Tokyo. Then I became a father… that’s when I realized that I never entirely left the circle of violence. The seeds of old were so much deeper and so much more stubborn than I imagined.
I unconsciously started to behave towards my son the same way my grandmother used to act towards me. A realization struck me when I told my 3 y.o. boy that I would close him in a room if he didn’t behave himself. I was repeating words of my grandmother…
It’s funny how other people’s voices live inside us and affect us through life. What seemed insignificant when you are grown-up — for a small, sensitive child it is bigger than the world…
my own anger spiraled me into depression. I became a kid again — angry at the whole world and everything around me.
All those years of meditation and prayer, reading of spiritual books — all crumble in front of a scared, lonely kid that carnivorously inflates inside me. But I can’t ignore him anymore. I am looking for ways to embrace him. Embrace my kid for the way he is. Words and creativity should help… if only words and creation worked on sheer will.
I am re-learning to control myself, and it starts with accepting that I have an anger management problem. I re-learn to pray and meditate with a heavy and disturbed heart — this is how real spiritual work is done.
Feeling all this struggle inside me made me more sensitive to the violent world we live in. I feel disappointed in the world. I don’t feel patience dealing with the ignorance of others. It's a wet, dark, and long tunnel into darkness.
When one feels disappointed in the world, one loses the ability to enjoy a creative life.
So here I am. Standing in darkness. Going nowhere, facing myself through words and honesty. Not feeling anything creative coming… even writing this is a struggle. But baby steps… | https://medium.com/by-this-side-of-heaven/n25-from-violence-to-non-violence-39b1873bf9f2 | ['Tim Gallo'] | 2020-07-25 02:29:30.776000+00:00 | ['Artist', 'Self-awareness', 'Personal Growth', 'Self Healing', 'Nonviolence'] |
RE:[Cryptography] Paid SMTP (PSMTP)-5
Dear all,
Thank you for your time. The discussions have moved forward each round helping me see different aspects and develop the idea further. I hope you also found answers to your questions and understand PSMTP better.
I remembered the question I asked my Molecular Biology professor:
“If mutations are harmful, how can they help evolution through natural selection?”
“Thank you for asking this question, Ersin. Keep in mind that the environment is evolving as well. A feature that does not fit the current environment can fit the future environment as long as it can survive the hostile environment. The bad genes of the past may become the good genes of the future and vice-versa.”
I think this explains why good ideas in the history of technology were considered bad when they were first offered, until they became successful. They become good ideas and succeed when the environment changes accordingly. Adaptation is a double-arrow.
If you think about PSMTP again, try to imagine the new environment of the near future rather than the present and the past. You may think about the impact of Cryptocurrency, end-point TEE, DANE etc. on the environment. Judging PSMTP becomes fun and fruitful when you do so.
Best regards,
Ersin | https://medium.com/crypto-mails/re-cryptography-paid-smtp-psmtp-5-d6597d86d450 | ['Ersin Taşkın'] | 2018-03-09 07:30:57.037000+00:00 | ['Psmtp', 'Spam', 'Antispam', 'Science', 'Smtp'] |
I Miss Libraries
We checked out our last stack of Wimpy Kids and Timmy Failures a couple months ago. We’re fortunate in that, for us, missing libraries doesn’t have to mean missing new books — in fact, I just placed an order at my local bookstore. But what I miss about libraries isn’t really the books. It’s the sense of possibility. I thoroughly researched the books I ordered from the store. At the library, you wander in, planless, browse the shelves, blithely judge books by their covers. My busy branch never has the book you’re looking for, which inevitably leads to finding the book you need.
Because libraries are places of abundance. Where else can you look at something, be only 13% sure you want it, and bring it home anyway, risk-free? Where else can you say to your children, “Pick out anything you want, anything at all.” I’m not rich, but at the library I could be extravagant. My kids and I could shuffle home under the weight of our books and scatter them on the living room floor, picking through our choices like Roman emperors.
What’s more, public libraries are forgiving in a way many cultural institutions aren’t. Kids destroy most things they touch for a good percentage of their growing years, or at least mine do. But whenever I’d sheepishly take a battered Boxcar Children book back to our branch, showing our local librarian the page that had been torn in a reading frenzy, she would shrug and tell us not to worry, and show my daughter a great Boxcar read-alike she might want to tear through next.
Yes, library books were always kind of gross, especially the children’s ones (maybe because of my children, actually — sorry about that). But it used to not matter. I used to be able to think, Okay, this Elephant and Piggie is mysteriously sticky? Fine, we’ll wipe it down, whatever. My casual attitude toward germs used to mean I was a relaxed mother, not a pathogen-spewing killer.
I miss not caring who had my book last, and even delighting in the evidence left behind by previous readers: the to-do list tucked between the pages, the bookmark from another era. Who knows how we’ll feel about sharing books with strangers in the future? | https://humanparts.medium.com/i-miss-libraries-53138f959b6e | ['Amy Shearn'] | 2020-04-29 15:36:12.689000+00:00 | ['Family', 'Kids', 'Books', 'Life Lessons', 'Parenting'] |
Web Hosting using Python
Python is a general-purpose, high-level programming language with a growing audience, and it is most popular in the field of Data Science due to its packages for Machine Learning and Deep Learning. However, Python also has some packages that are very useful in the field of web development. Flask is one such Python package, used to host websites.
What can Flask do???
Host a single HTML template, or a group of them, on a local server.
Integrate your website with CSS/JS and Databases.
What Flask can’t do???
Write the Website for you.
While it is true that you can basically do anything using Python, the most common way to develop websites is still HTML/CSS and JS. Flask can only render the HTML code in the Python application; it cannot create the website for you in Python (traditionally at least).
Now that we have a brief idea about Flask, let’s start using it by creating a Hello World page and hosting it with Flask.
Install the dependencies
First things first, we will install the dependencies that will enable us to run flask on our local computer.
You need to have Python and Pip on your system. To install Python go to your terminal and write:
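(The exact command depends on your operating system; on Ubuntu/Debian, for example:)

sudo apt-get install python3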
2. After you have Python installed, you need to get pip. Pip is a package manager that will enable us to install Flask. To install pip, go to your terminal and type:
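(again assuming Ubuntu/Debian:)

sudo apt-get install python3-pip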
3. After having Python and Pip installed on our system, we need to get Flask. To do this, go to the terminal and execute the command
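pip install flask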
Let’s start Coding!!!!
Once we have all the dependencies set up, we are ready to make our first flask application. We will start by making a really basic application that will greet us with ‘Hello World’ on the homepage.
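The application looks like this (your file can have any name; mine is 1.py):

from flask import Flask

app = Flask(__name__)
@app.route('/')
def index():
    return 'Hello World'


if __name__ == '__main__':
    app.run(debug=True)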
These 10 lines of code are enough to get you started with your first Hello World Flask application. We will have a detailed look at the code later in this article. For now let’s run the application.
Running the Flask Application
To run the application, go to the folder containing the Python file via terminal and execute:
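python 1.py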
Note: Here, the name of my file is 1.py. Your file can have a different name than mine.
2. Once you execute the command, your terminal should look something like this:
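(The exact wording differs between Flask versions, but it should include a line along the lines of: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit).)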
3. This tells us that the flask application is running successfully. The default port for flask is 5000. So to check the website you need to go to
localhost:5000. In your browser tab type localhost:5000 and you will see your application running.
Congrats, you have successfully created your first flask application.
Let’s take a detailed look at the code.
1.
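from flask import Flask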
This is used to import the Flask module that we installed earlier into our Python file.
2.
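app = Flask(__name__)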
This line defines app as the Flask application. The __name__ argument passed to Flask is used to help Flask determine the root path of the application.
3.
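@app.route('/')
def index():
    return 'Hello World'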
@app.route is used to determine the structure of the web application, i.e. how different links are structured in the web application. Here (‘/’) is used to denote the homepage of our web application. A web application having multiple pages will have multiple different routes.
Line 5 is the function that gets executed once the user reaches the page mentioned in @app.route (the homepage in our case).
Line 6 is the content of the index function. It is the body of our web page and is reflected by the return statement.
4.
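if __name__ == '__main__':
    app.run(debug=True)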
Line 9 is boilerplate that is included in almost every Flask application that is run locally on your own server. This line of code ensures that the server runs only if this application is being run directly and not through an import statement.
Line 10 actually runs the application once it is created, hence these 2 lines always go at the bottom of the file. debug is an argument for app.run. It enables us to make changes to the HTML/CSS or the Flask application and see the results without restarting the server. This is usually used in the development phase to make things easier for developers.
Thanks for Reading
In the coming publications we will cover how to add CSS/HTML/JS to your flask application and how to connect our application to a SQL Database.
Please comment your views about the publication in the comment box. | https://medium.com/datadriveninvestor/web-hosting-using-python-3dbb00abdcba | ['Ayush Kalla'] | 2020-06-02 10:29:06.816000+00:00 | ['Flask', 'Python', 'HTML'] |
Introduction To Apache Kafka | Apache Kafka
What is Apache Kafka?
Kafka is a publish-subscribe based messaging system for exchanging data between processes, applications, and servers. One application may connect to the system and publish a message onto a topic (we will see in a moment what a topic is), while another application may connect to the system and process messages from that topic.
What is Kafka Broker?
A Kafka cluster consists of one or more servers, also known as Kafka brokers, which are running Kafka. Each broker is identified by its ID (the broker.id property is the unique and permanent name of each node in the cluster). To connect to the entire cluster, you first need to connect to a bootstrap server (also called a bootstrap broker). Bootstrap servers are nothing but a list of host/port pairs used to establish the initial connection to the Kafka cluster.
Kafka Cluster
What is Kafka Topic?
A Topic is a category/feed name to which messages are stored and published.
Messages are byte arrays that can store any object in any format.
All Kafka messages are organized into topics.
If you wish to send a message you send it to a specific topic and if you wish to read a message you read it from a specific topic.
Producer applications write data to topics and consumer applications read from topics.
How does Kafka Topic Partition Look?
Kafka topics are divided into a number of partitions, which contain messages in an unchangeable sequence.
Each message in a partition is assigned and identified by its unique offset.
A topic partition is the unit of parallelism in Kafka, i.e., two consumers in the same consumer group cannot consume messages from the same partition at the same time. A single consumer can, however, consume from multiple partitions at the same time.
Kafka Partitions
Replica In Kafka
In Kafka, replication is implemented at the partition level.
The redundant unit of a topic partition is called a replica.
Each partition usually has one or more replicas meaning that partitions contain messages that are replicated over a few Kafka brokers in the cluster.
Example of 2 topics (3 partitions and 2 partitions)
Replica In Kafka
Data is distributed and broker 3 doesn’t have any topic 2 data
Concept of Leader for a Partition
At any time only a single broker can be a leader for a given partition and only that leader can receive and serve data for a partition
The other brokers will synchronize the data
Therefore each partition has only one leader and multiple ISRs (in-sync replicas)
Concept of leader
Consumers
Consumers can join a group called a consumer group.
Each consumer in the group is assigned a set of partitions to consume from.
Kafka guarantees that a message is only read by a single consumer in the group
Data/messages are never pushed out to consumers, the consumer will ask for messages when the consumer is ready to handle the message.
The consumers will never overload themselves with lots of data or lose any data since all messages are being queued up in Kafka.
Consumers read data in consumer groups
Each consumer within a group reads from exclusive partitions
You cannot have more consumers than partitions (otherwise some will be inactive)
Consumer groups
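As an illustration only (this code is not from the original article, and it assumes the kafka-python client; the topic name, group id, and broker address are made up), a consumer that joins a group looks roughly like this:

```python
from kafka import KafkaConsumer

# Every process started with the same group_id joins the same consumer group,
# and Kafka assigns each of them an exclusive subset of the topic's partitions.
consumer = KafkaConsumer(
    'orders',                            # hypothetical topic name
    bootstrap_servers='localhost:9092',  # hypothetical broker address
    group_id='billing-service',          # hypothetical consumer group
    auto_offset_reset='earliest',
)

for message in consumer:
    # The consumer pulls messages at its own pace; nothing is pushed to it.
    print(message.partition, message.offset, message.value)
```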
Producers
Producers can choose to receive acknowledgment of data writes:
Acks=0: Producer won't wait for an acknowledgment (possible data loss)
Acks=1: Producer will wait for leader acknowledgment (limited data loss)
Acks=all: Leader + replicas acknowledgment(no data loss)
Producer
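For illustration (again assuming the kafka-python client; names are made up), the acks setting described above maps directly onto a producer configuration option:

```python
from kafka import KafkaProducer

# acks=0: fire and forget; acks=1: wait for the leader;
# acks='all': wait for the leader plus all in-sync replicas.
producer = KafkaProducer(
    bootstrap_servers='localhost:9092',  # hypothetical broker address
    acks='all',
)

producer.send('orders', b'order #42 created')  # hypothetical topic and payload
producer.flush()  # block until every outstanding message is acknowledged
```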
Here are important concepts that you need to remember
Producer: Application that sends the messages.
Consumer: Application that receives the messages.
Message: Information that is sent from the producer to a consumer through Apache Kafka.
Connection: A connection is a TCP connection between your application and the Kafka broker.
Topic: A Topic is a category/feed name to which messages are stored and published.
Topic partition: Kafka topics are divided into a number of partitions, which allows you to split data across multiple brokers.
Replicas: A replica of a partition is a “backup” of a partition. Replicas never read or write data. They are used to prevent data loss.
Consumer Group: A consumer group includes the set of consumer processes that are subscribing to a specific topic.
Offset: The offset is a unique identifier of a record within a partition. It denotes the position of the consumer in the partition.
Node: A node is a single computer in the Apache Kafka cluster.
Cluster: A cluster is a group of nodes, i.e., a group of computers.
Additional Note | https://medium.com/analytics-vidhya/introduction-to-apache-kafka-88ebd24eb962 | ['Henok Micael'] | 2020-12-14 16:13:41.950000+00:00 | ['Kafka', 'Distributed Systems', 'Messaging Queue', 'DevOps', 'Big Data'] |
Focus on Your Digital Marketing Goals, Not Your Software Tools | We get it. We have a great job here at Element. It’s exciting to deliver content marketing and marketing automation platforms for our clients, delivering results for them and, yes, “playing” online all day. It can be tempting to dive right in with us, helping to choose the many products and services that we use to deliver our digital marketing programs. Frankly, we love it that our customers are so invested in partnering with us.
But it’s far too easy for organizations to lose sight of what they are really working to accomplish with their digital marketing efforts. It can be fun, in that marketing geek sort of way, to dive into the details on marketing automation platforms, email services, social media sites, and content marketing tactics. And with over 4,000 companies out there providing digital marketing services and tools, there’s a lot to sort through!
As the real drivers of marketing for your company (and yes, you are the boss!), it’s important to take a step back. A big step back, all the way back to that 10,000-foot level, to your high-level business goals. To achieve success, drive the most important details from the top, and leave the minutiae to your agency partner.
Lots of Marketing Services, Lots of Sameness
The truth is that most of the digital marketing services out there offer 99% of the same features as their competitors. Compare HubSpot and Marketo, for example. What do these two common marketing automation platforms have in common?
Email? Check.
Landing Pages? Check.
Forms? Check.
Lead Scoring? Check.
CRM Integration? Check.
You get the idea. All the major tools out there cover the basics. The differences between them all often come down to price, reporting features, and differences in the user interface. It’s true that some of them provide extras that others don’t, like social media posting, for example, but often times more is not better. Simply having a feature is not the same as doing it well, and there are usually purpose-built services out there that do it better.
More Choices Aren’t Always Better
Of course, having all these digital marketing products to choose from can cause decision paralysis: the difficulty in telling all of them apart makes it hard to just say “yes” and go for it. It’s the infamous paradox of choice. Psychologist Barry Schwartz explained it this way in Harvard Business Review:
“As the variety of snacks, soft drinks, and beers offered at convenience stores increases, for instance, sales volume and customer satisfaction decrease. Moreover, as the number of retirement investment options available to employees increases, the chance that they will choose any decreases.”
Fortunately, there’s an alternative: trust your agency.
You Already Have Too Much to Do
As a corporate marketer, your job is to focus on producing results for your organization. Between all the other things you need to get done (handling corporate communications, sales enablement and presentation support, internal training and product education, market planning, and budgeting), spending time on selecting marketing tools is “out of the wheelhouse.” Spending too much time selecting technology is a sure way to lose focus on your customers, both internal and external.
The Staffing Isn’t There
I’m sure you can hear your CEO now reviewing next year’s budget request: “This looks great, but you need to cut it by 15%.” It can be incredibly difficult to justify even one additional FTE (full-time-equivalent) staff member. Marketing’s funds are viewed with envy by other departments, and are often the first to be cut in times of revenue pressure. Your team is already too crunched to spend their time evaluating technology platforms.
You’ll Be Free to Focus on Strategy
Close your eyes for a minute, and think about all those requests you get every year that go by the wayside because of lack of time. You’ve been asked to tackle new strategic initiatives, provide more sophisticated research and comparison benchmarks, and so much more. If you have any extra time to spend during the year, shouldn’t it be on these meatier projects?
Your Agency Knows the Technology
Who has the capacity to stay up to date on constantly changing trends and challenges in digital marketing? Your agency, that’s who. Isn’t that why you engaged them in the first place? They’re the experts on this stuff. Not only that, but having the viewpoint from outside the company can be a huge advantage when it comes to selecting the right marketing technology.
We can help your organization move from indecision to action and execute a successful content marketing strategy. Talk to us. | https://medium.com/goelement/focus-on-your-digital-marketing-goals-not-your-software-tools-e48538c9c1ed | ['Element Creative'] | 2017-07-13 18:16:44.569000+00:00 | ['Business Development', 'Marketing Strategies', 'Marketing Technology', 'Marketing', 'Digital Marketing'] |
Reimagining Black Safety | My conversation with these three seers led me to create a list of seven ways that we might establish true safety and make it a way of life going forward.
1. Bring back our connection to our ancestors. One of the most common indigenous African philosophical constructs is the idea of ancestral veneration. A basic scientific principle is that energy is never destroyed; it is simply transformed. This principle must also apply to human energy. Ancient people understood this as a basic truth of human experience and felt profound, ongoing connection with their ancestors. This connection helped them face challenges from a position of strength and with a perception of protection and safety.
2. Embrace Ubuntu. Ancestral veneration is rooted in the concept of Ubuntu. This idea says that “a person is a person because of other persons.” In other words, none of us exists alone; we are not really individuals. We belong to a collective of people who have existed for millennia. This understanding stands in direct conflict with the Western notion of rugged individualism. True safety may be found by tapping into the genetic memory of that indigenous wisdom.
3. Take the painting off the wall. If you go back far enough (and sometimes not that far) almost every ethnic group has used song and dance, not for the purpose of artistic display, but as an integrated part of daily life. Somehow, in Western civilization, art has become something we collect and view at a distance, like a painting on the wall of a museum. Perhaps re-engaging with spontaneous music and dance would bring us closer to a way of life more in keeping with a safe environment. There is safety in the release provided by song and dance.
4. Focus on quality of life. One of my own anecdotal observations is that when people have a decent quality of life, they are less likely to accept an unsafe environment. They become committed to maintaining their safety, the safety of their family, and the safety of their community. Conversely, when people’s lives are precarious, they will often end up in unpredictable, perilous, and unsafe circumstances. The math here is not that hard.
5. Defund the police. Yes, I know this phrase is a lightning rod, but that’s only because it has been highly politicized. Something is genuinely amiss with police funding. According to a New York TImes report from June 12th, 2020, city police budgets across the country have risen by millions of dollars annually — even during lean years for city finances, and even despite a steep nationwide decline in violent crime that began in the early 1990s. More dollars are being funneled to police departments to fight the “ghost” of an out-of-control crime rate, and this has come at the expense of funding for other city services. Police departments have also grown more militarized, equipped with assault-style weapons and even tanks from arms makers. “Defunding the police” doesn’t mean eliminating police from your city. It simply means that resources should be focused on necessary services that address mental health, food scarcity, addiction, and other challenges that have been exacerbated by massive income inequality, and not merely on adding more armed men and women on the streets, which often only invites violent altercations. Police budgets should not be tied to political whim, but to overall crime trends and statistics. Science, data, and truth should still matter.
6. Support Relationships. The more we devote time and resources to developing and strengthening relationships among family and friends, the more we can tap into those relationships to help family members overcome challenges instead of leaving that responsibility to the police. Traditionally, family has been the first line of defense in a crisis. Without this support, we call on the police to step in where family could be the first call. Of course, there are times when the police should be called, but imagine a society that supports families and provides more family-based services.
7. Stop creating the “Procariat.” If you combine the words precarious (unstable) with proletariat (working class), you create the word Procariat. Millions of Americans live the life of the Procariat — the unstable working class — especially African Americans. If you cannot plan for the next eighteen months, you are living an unsafe life. If your household is food insecure, you are living an unsafe life. If you don’t know where your next paycheck will come from, you are living an unsafe life. When we start to address these issues, we will improve safety for African Americans and halt the perpetuation of the Procariat.
We have opportunities to do something about safety; they are not beyond us. They don’t even require us to leave the comfort of our homes. They do, however, require a shared philosophy of Ubuntu that helps us understand that when others around us are unsafe, we are all unsafe. | https://medium.com/an-injustice/reimagining-black-safety-c0e1c60adc32 | ['T Lt West'] | 2020-10-04 21:05:27.241000+00:00 | ['Safety', 'Wellness', 'Police', 'Community', 'Racism'] |
Have You Had *Too Many* Sexual Partners? | What would you think of a fictional female character who had four lovers in one year?
Take a second and think of the label you’d give her. Got it?
In the first manuscript I ever wrote, the female protagonist went on dates with 24 different men over the course of one year and had sex with four of them — but only after having had at least three dates with said men.
The man whom the protagonist was most interested in having a relationship with was someone she’d known for years. And although she never did sleep with him in the story, their relationship was central to the character’s growth arc.
I had an early draft of the manuscript read by a developmental editor — a woman who was quite progressive in her social beliefs — and she was unequivocal in her advice:
She: Do you want to try to sell this as a mainstream book?
Me: Yes.
She: Then reduce the number of men your protagonist dates from 24 to fewer than five and have her sleep with just one man — the one she knows at the beginning of the story and ends up with at The End.
Me: But this is not a romance. Certainly, it doesn’t have to follow the expectations of romance readers.
She: Shaking her head at the naïve writer questioning her good advice. No matter what genre you’re writing you need your readers to like your main character. Nobody will cheer for a promiscuous woman.
Me: Wait, what? Serial monogamy is promiscuous? And what you’re suggesting is an entirely different story to the one I’ve written.
She: Well, if you want to save it, make your protagonist the man. Readers don’t mind a guy who plays the field, looking for Ms. Right. But they’ll put the book down if your female protagonist gets action from more than one man.
And this is where you insert language that would make any raping, pillaging male Viking from a historical romance novel blush.
That conversation took place in 2006. Certainly, things are different in 2019, the ever-hopeful, but apparently, this still naïve writer thought…
Sadly, not at all. And worse, not just in novels, but in flesh-and-blood real life.
What would you think of me if I told you I’ve had ten lovers in my lifetime?
In my research for my current work-in-progress, I read some academic research published in December 2018 titled, The Sexual Double Standard in the Real World: Evaluations of Sexually Active Friends and Acquaintances.
What this study found is that the sexual double standard that readers accept in fiction is alive and well in the real world, too. It’s a world where
men are socially rewarded for engaging in sexual activity and women are socially denigrated for sleeping with the men who are reaping those rewards.
What does “socially denigrated” mean in real terms?
Well, if you were part of this study of 4,455 American men and women between the ages of 18 and 35, it means that if you have a male friend and a female friend who have both had, let’s say, ten sexual partners, you’d give your female friend lower scores for attributes related to her values, likability, success, and intelligence than your male friend.
Even though I was the one who read the study and wrote that line I need to pause and take that fact in. The fact that I’ve slept with ten men (in theory) means that my own friends will think I’m less intelligent than our mutual friend, John, who’s also had ten lovers.
Quoi le phoque?
This double standard has dangerous social implications
Imagine you’re on a jury in a case where a man has been accused of sexually assaulting a woman. It comes to light that the woman has had ten sexual partners in the last two years.
But wait! So has the man, so they’re equal in morality, trustworthiness, intelligence, making good decisions and so on, right?
You know the answer. This virtually invisible double standard allows us to judge the woman as less moral, with less a trust-worthy testimony, lower intelligence and more prone to have made a bad decision that landed her in this pickle.
Silly woman shouldn’t have led him on. What did she expect? Blah blah blah.
And this isn’t just opinions of people who self-identify as being conservative or part of the religious right — this is how friends label their own friends based on their sexual activity.
This is depressing and disappointing and generally makes me feel too sad to be as angry as I’d like to be.
In 1969, author E. B. White said,
“Writers do not merely reflect and interpret life, they inform and shape life.”
But, how can writers inform and shape a new way of thinking about sexual norms and mores when the majority of readers (being regular people) are, apparently, not willing to cheer on women who challenge these sexual double standards?
Thank the goddess for the exceptions to the rule
This little bubble inside Medium is one of those places where hundreds of writers push the boundaries, sharing first-person stories about sexual activities that are generally not acceptable to engage in, let alone write about.
Photo by VINICIUS COSTA from Pexels
And thank goodness for them — writers like traceybyfire, Emma Austin, Darcy Reeder, Shannon Ashley, Zita Fontaine, James Finn, Joe Duncan and so many others.
These are just some of my Go To writers when I need to know the entire world is not as narrow-minded as mainstream fiction readers and the 4,455 Americans who were part of that depressing study. | https://medium.com/love-and-stuff/have-you-had-too-many-sexual-partners-f9040c704871 | ['Danika Bloom'] | 2020-04-29 17:38:48.621000+00:00 | ['Life Lessons', 'Sexuality', 'Culture', 'Feminism', 'Books'] |
The skaters | The canal has frozen, and I’ve been handed a sharp pair of skates.
Look, everyone is racing by, propelled by the swift chock-chock-swish of their lusty strides.
The tree trunk that you’ve set me down on is rigid with cold.
I can’t put the skates on until I find the warm center of the tree, pirouette in the biggest storm it ever thrashed through, coat myself in the mud that froze around the scaly trunk when it fell, vanish into the watery center of its emptiness.
Look at them skate past. How did they learn all these things? What have they left me to discover? | https://medium.com/drmstream/the-skaters-66f2ea194f67 | ['Dan Mccarthy'] | 2016-11-10 02:25:11.977000+00:00 | ['Skating', 'Writing', 'Holland', 'Canal', 'Desire'] |
An Injustice Newsletter #6 | An Injustice Newsletter #6
Editor’s pick and Shoutouts!
Photo by Keila Hötzel on Unsplash
Hello everyone!
Really sorry for bringing this one to you a day late. Yesterday was crazy and I didn’t have the time. Unfortunately, that means there won’t be an editor’s letter this week; however, things will be back to normal come next week!
Editor’s Pick
The Impossible for the Ungrateful: Confessions of An American Educator — Solomon Hillfleet
So many educators find that the students they fight the hardest to save can’t be saved. The collateral damage is the students who love us and we never notice. The ones who could bloom if they only knew their worth. Students like myself who never got that attention because of the fires their teacher put out.
This one is just a must read guys.
Shoutouts!
To the Gender Queer Person Who Served Me Ice-Cream Today — Charlie Bartlett
Seeing other gender queer folks out and about during my day — shout out to You in Jane bakery; to You in Salt and Straw and to You who struts into my workplace to find a quiet spot in the poetry section on your lunch break.
What If There Are No Leaders Of Tomorrow? Only Servants Of Today! — Stephen Uba
You can only prepare for tomorrow, but you can’t change it, the only zone that supports change is the present, that is the only zone where our alleged ‘Leaders of Tomorrow’ can truly lead and majestically wear their crowns of transformation.
You Can’t Hate the Sin and Still Love the Sinner — Ellie Heskett
“Love the sinner, hate the sin” is bullshit. That “sin” is a huge part of who I am as a person. You can’t truly love me if you don’t accept that part of me. That’s why I have family members who think they love me, but they don’t know the whole me.
Poetry;
I’m Not a Coward I Just Never Been Tested — Joe Váradi
I’m not a coward I just never been tested
Looked down upon
Made to feel small ‘n
Detested
Never been called out and compelled to face
A resentful mob
On account of
My Race | https://medium.com/an-injustice/an-injustice-newsletter-6-6bf59b6dca79 | [] | 2019-08-22 10:56:09.925000+00:00 | ['Millennials', 'An Injustice', 'Nonfiction', 'Culture', 'Publication'] |
Bitcoin a disaster for our planet and should be heavily taxed, researchers claim | There has been a lot of discussion about the environmental impact and sustainability of Bitcoin. A scientific article from the University of Qatar now comes to the conclusion that, due to its high energy consumption, government intervention will be necessary in the future. Corresponding taxes could reduce its demand. In addition, incentives are needed for alternative blockchain technologies and new consensus modes. While these are finding more and more applications, the Association Research Center for Energy Management emphasizes that the energy problem of blockchain is “solvable”.
Environmental impact of Bitcoin calls for government intervention
The growing energy demand in the crypto sector endangers the environment and sustainable survival on earth. This is the basis on which Jon Truby, a researcher at Qatar University, argues in his recent research. His thesis: by consuming huge amounts of energy every year, Bitcoin undermines global efforts to combat climate change and the Paris climate agreement. This overshadows the benefits of blockchain, he claims: the great trustworthiness of transaction security is outweighed by Bitcoin’s deliberately resource-intensive design.
These verifications (known as mining) are now threatening the climate we depend on to survive, he writes. The answer? The government should limit the demand by introducing taxes.
Consequences of the Bitcoin boom: Mining creates a huge ecological footprint
While Bitcoin owners are happy about the record highs, the environment suffers. As the cryptocurrency analyst Digiconomist has calculated, theoretically 215-kilowatt hours of electricity are consumed for a single Bitcoin transaction. This is as much electricity as an average US household uses in a week!
The reason for this is the rising price of the currency. For businesses, it pays to upgrade their own computing infrastructure to mine for bitcoins or act as network nodes and receive transaction fees. Along with this, electricity consumption is rising.
In addition, that much electricity could charge the largest Tesla battery currently available twice, run a refrigerator for a full year, or boil 1,872 liters of water. Currently, around 300,000 Bitcoin transactions are performed per day. Even with a conservative calculation, the energy consumption of Bitcoin is enormous; the minimum value assumed is 77 kilowatt-hours per transaction.
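A quick back-of-the-envelope calculation (my own, based only on the figures quoted above) puts those per-transaction numbers into daily terms:

```python
transactions_per_day = 300_000
kwh_per_tx_low, kwh_per_tx_high = 77, 215  # conservative vs. Digiconomist estimate

daily_low_gwh = transactions_per_day * kwh_per_tx_low / 1_000_000
daily_high_gwh = transactions_per_day * kwh_per_tx_high / 1_000_000

print(f"Roughly {daily_low_gwh:.1f} to {daily_high_gwh:.1f} GWh per day")
# -> Roughly 23.1 to 64.5 GWh per day
```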
The impact of high energy consumption raises the question of the consequences of the Bitcoin boom. The design of the cryptocurrency means that increasingly complex encryption puzzles have to be broken by computers in order to obtain new bitcoins. While Bitcoin’s idea is to bypass middlemen and facilitate more efficient money traffic, mining is a compromise to the environment. | https://medium.com/behaviourexchange/bitcoin-a-disaster-for-our-planet-and-should-be-heavily-taxed-researchers-claim-7150890ec71 | ['Behaviourexchange', 'Bex'] | 2018-08-07 08:32:27.099000+00:00 | ['Bitcoin', 'Environment', 'Electricity', 'Ecology', 'Energy'] |
The One Simple Trick I Use in the Morning to Start the Day Right | Why does it work?
It works because it forms a habit, and habits keep you going.
You are basically creating a habit of “if I do XYZ, then I will also start working”. Your brain will eventually connect an action that you were already doing every morning with starting to work.
What’s easy about this method is that you’ve already won half of the battle. Half of this trigger is something you are already doing.
Notice that this trick has nothing to do with motivation. Because motivation is a myth. You can have all the motivation in the world but it’s useless if you don’t start doing it.
Motivation is something you get, from yourself, automatically, from feeling good about achieving small successes — Jeff Haden (The Motivation Myth)
But as mentioned, starting is the hardest part. The entire goal of this trick is to force you to START DOING. Once you start, everything else will fall into place. This will bring incremental success, which generates more motivation and momentum to keep you going.
By having a trigger, it can put you in a focused state of mind and allow you to concentrate on the things that are truly important.
Motivation comes and goes, but habits will stick with you for life. Even on the weekends, I find myself searching for meaningful work to do after I take the first sip of coffee. If I don’t start working right away, I feel bad.
Photo by Corey Agopian on Unsplash
Think about pre-pandemic times, when most of us had to physically go to the office to work. In a way, that is a focus trigger.
Because you had to psychically go to the office every single morning to work, your mind starts to associate that period of time with what’s coming next: work.
On the bus ride or walk to the office, your mind starts to prepare you for a day’s work by narrowing your focus.
This is why a lot of us experienced some loss of discipline when first starting to work from home because we lost that association. But we can create another habit to get ourselves “in the zone”.
It will be hard in the beginning, because forming a habit is hard. But hey I promised you this trick is simple but never said it was easy. But if you put in the effort, you will reap the reward.
You can also take this trick to other things. It doesn’t have to be work, but anything you want to pursue that you find yourself having a hard time getting started. | https://medium.com/the-ascent/the-one-simple-trick-i-use-in-the-morning-to-start-the-day-right-211aed11e277 | ['Michael Chi'] | 2020-12-14 15:57:55.906000+00:00 | ['Self Development', 'Focus', 'Habit Building', 'Self Improvement', 'Productivity'] |
What Kind of a Writer Are You? | I’m so excited to share Aimée’s feedback on my analysis with you! Here’s what she wrote:
“As I’ve learned more about Human Design, more and more resonates with me. I’ve been nudged and called to lead throughout my life, but have been hesitant to do so until reaching my forties. As a young child and up to my forties, I struggled with accepting my unique idea and standing out from the pack. Now that I embrace my uniqueness I find it rewarding to lead and inspire.
Nature has been a grounding force throughout most of my life. I felt much more grounded and mindful when I worked as a gardener at a botanic garden. Now that it’s not a requirement to show up to garden, I need to remind myself to do so. Even going in the backyard and observing the creatures around me lights me up and also grounds me. It inspires me to daydream, and generate new ideas to share. I think being in nature is where I find the most wonder and joy. It’s where I feel the most grateful for this experience of living.
It is reassuring to see my Human Design chart reflect the aspects of myself that I’ve begun integrating in my forties. Yes, I am empathic. Yes, I am an unique. Yes, I am a leader. Yes, nature is vital to my grounding and mindfulness. Yes, I feel deeply and am inspired to share what I learn with others. I’ve begun saying yes to my innate abilities and it feels so good!” | https://medium.com/bingz-healing-light/what-kind-of-a-writer-are-you-16ca167450f4 | ['Bingz Huang'] | 2020-12-27 15:01:36.660000+00:00 | ['Case Study', 'Writing', 'Human Design', 'Writer', 'Writers On Writing'] |
FPL Gameweek 1: And so it has begun.. | State of things
Last week, we concluded that we would use the FDR strategy: focus on choosing players from teams with a higher chance of victory. With a higher chance of victory, we also gain valuable points in defense, our points should generally be higher across the field, and we allow for the possibility of significant points from goals scored as well.
After taking a look at the data we accumulated and analyzed, we came to the conclusion that it would be a good starting strategy to pick players from either Everton, Wolves, Tottenham, Leicester, or Chelsea. Of course, we’re all learning along here as we try different methods (especially me), so there are several things we can observe from our results.
Here is also a link to the Gameweek 1 results by fixtures.
Fantasy Data Gameweek 1 Team [FPL]
For our 15 man squad, we chose:
3 Players from Wolves (Jota, Traoré, Patricio), 3 Players from Everton (Keane, Richarlison, Doucouré), 3 Players from Tottenham (Kane, Alderweireld, Moura), 3 Players from Leicester (Schmeichel, Barnes, Söyüncü)
Since we can only choose a maximum of 3 players from each team, and choosing the players above left only 1 FWD spot and 2 DEF spots, we picked Azpilicueta from Chelsea, plus Van Dijk and Firmino from Liverpool, for the remaining spots.
Pros of our selections:
12 of the 15 players we chose played in a winning team. The only loss came from Tottenham, but this was bound to happen, as we chose players from both Tottenham and Everton; the result could have gone the other way around. The reason for choosing both was the fixtures over the first 6 Gameweeks. We did sacrifice some points here in Gameweek 1, but we may have a better chance over time with our strategy. Our defensive players did quite well, accumulating the most points in the team with 22, versus 5 from midfielders and 6 from attackers. This suggests a hypothesis that we generally do not need the best attackers to score points; a strong defense is also valuable. Granted, we are dealing with very limited data at this point. This is something to explore.
Cons of our selections:
Captain points matter so much — we haven’t yet taken this critical point into consideration. For each week captain points are doubled. Taking a look at the best team of the week below, we can clearly see that if we had chosen Salah as our captain, we would have been able to accumulate over 40 points from a single player! That’s 2 points more than our current team total. As the saying goes, When in doubt, choose Salah. We also didn’t consider the free transfers allowed per week. In Fantasy Premier League, the rules state that we are allowed one free transfer per week. We had strategized with the first 6 Gameweeks in mind without any transfers.
Highest Scoring Players of Gameweek 1 [FPL]
This scenario allows for more dynamic team selections. Needless to say, FPL has so many things one must consider that it’s easy to miss something. We can start to factor this into our future team selections. One could also argue about whether it is better to keep the same team for 6 Gameweeks to see its progression or to switch players often; that is a study worth considering.
This past week, I came across a wonderful lecture by Joshua Bull of the University of Oxford. Joshua is a post-doc who researches in the area of Mathematical Biology at Oxford but in his free time, enjoys playing Fantasy Football. He attempted to tackle FPL with data as well last year and goes into great detail over all the strategies he attempted during the 19/20 season. He also discusses the struggles of trying to create a magic formula for a winning team each Gameweek. Something we just witnessed above with our own team. It was a great listen and I definitely recommend it for anyone interested in this. If you’re reading this, I’m guessing you are. Oh, and it should be mentioned that Joshua also won the 19/20 Fantasy Premier League season. So don’t miss it!
So why these players?
Using the data from FBRef, we can see that there are a large number (well over 150) of varied stats to choose from, and it can get quite difficult to analyze each and every one. So, as a starting point, we will choose a few stats that we consider to be key separators when it comes to picking the most effective players; a rough sketch of how the per-90 versions of these stats can be computed follows the list below.
npxGxA90 - Non-penalty Expected Goals & Assists per 90. This is another key stat that reveals the most creative and productive player in a team.
SCA90 - Shot-creating actions per 90. This stat reveals the most creative player in the team. The true playmaker of the team. This stat is valuable when trying to find key players that might otherwise go unnoticed with npxGxA90. These are those players that are next in line to add points after the most effective players.
PrgDistPass - Progressive Distance Passing. These are the players that have passed the longest and the most towards the opponent’s goal. This could be a useful stat to find the most attacking players, possibly from defense. The effectiveness of this stat is still subjective but we did use it for the past Gameweek.
KP - Key Passes. This stat helps us find the most effective passers.
AerialAcc - Aerial Duels Won % - These players have the highest chance of scoring from set-piece plays due to their great heading ability.
Clr - Clearances. The number of times a player cleared a ball out of their own goal-end. Possibly a valuable stat for defenders in FPL Value.
PassesCompAcc - Pass Completion Accuracy. These are the most accurate passers in the game. The passes of these players have a higher probability of reaching their team-mates.
Ast90 - Assists per 90. The most amount of Assists over 90 minutes by any player.
Gls90 — Goals per 90. The most amount of Goals over 90 minutes by any player.
SoT - Shots on Target. The players who score higher here are the most clinical shooters.
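As a rough sketch (not from the original article) of how such per-90 figures can be derived from raw FBRef season totals with pandas; the file name and column headers here are illustrative, since the actual export labels vary:

```python
import pandas as pd

df = pd.read_csv("fbref_2019_20_players.csv")  # hypothetical FBRef export
df = df[df["Min"] > 0]                         # drop players with no minutes

nineties = df["Min"] / 90                      # number of full "90s" played

# Per-90 versions of the raw counting stats.
df["Gls90"] = df["Gls"] / nineties
df["Ast90"] = df["Ast"] / nineties
df["SCA90"] = df["SCA"] / nineties
df["npxGxA90"] = (df["npxG"] + df["xA"]) / nineties

# Keep only players from the teams targeted by the FDR strategy
# (spellings must match whatever the export uses).
teams = ["Wolves", "Everton", "Tottenham", "Leicester", "Chelsea", "Liverpool"]
picks = df[df["Squad"].isin(teams)]
```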
Using the stats above, let’s take a look at just the players from the teams we chose. We excluded new signings: they might start, but there is also a good chance they won’t for a new team, especially in Gameweek 1.
Defenders
One of the key areas in our team selection was defense, and in hindsight we can see how many points we accumulated there. A key criterion to consider was the number of matches each player played last season. So, as a base requirement, we only highlight those players that played the most games for their team. Barring injuries, this gives a solid chance that they will feature in most games this season as well.
Taking a look at the number of clearances a player made last season and combining it with the most capable aerial players gives you an interesting view. This combined selection allows us to consider defenders that have a higher chance of scoring from set pieces while also being strong at the back.
No doubt, Player of the Year Van Dijk stands out. Last season, he scored 178 FPL points and even more the year before! (208)
So it was a no-brainer to add him to our team. We also see that some other solid additions would have been Evans, Saiss, Söyüncü, Keane, and Alderweireld.
Evans would have been a solid choice but he was injured for Gameweek 1. So we added all the other players except Saiss to our list. We will regret not choosing Saiss as he scored 15 points this past Gameweek. Part of it is luck. But looking at this data, It could have been any one of these players. Another hypothesis we could consider is opposition’s defensive ability by pairing these teams against each other. This is an area worth exploring for any of you out there.
Clearances vs Aerial Duels Won % (2019/2020 Season) [FBRef]
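A spread like the one captioned above can be reproduced in a few lines, continuing the sketch from earlier (column names remain illustrative):

```python
import matplotlib.pyplot as plt

defenders = picks[picks["Pos"].str.contains("DF", na=False)]

plt.scatter(defenders["Clr"], defenders["AerialAcc"])
for _, row in defenders.iterrows():
    plt.annotate(row["Player"], (row["Clr"], row["AerialAcc"]), fontsize=7)

plt.xlabel("Clearances (2019/20)")
plt.ylabel("Aerial duels won %")
plt.title("Set-piece threat vs. defensive workload")
plt.show()
```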
Another player you might have noticed in the list is César Azpilicueta. Looking at the key passes players made last season combined with the progressive distance they covered, we see the following players stick out. Considering both Alexander-Arnold (£7.5) and Robertson (£7.0) cost the most among all the defenders, we were left with 2 options, Digne and Azpilicueta. It was a toss-up between the two. However, Chelsea as a team seemed more formidable than Everton (who faced Tottenham in Gameweek 1), so we chose Azpilicueta. In the end, Azpilicueta didn’t start and Digne scored a goal. C’mon Azpi! We failed to consider the right-back competition at Chelsea with Reece James. Azpilicueta played all last season (36 matches) until an injury forced him out at the end of the season. A parameter that was missed. Another reason for Digne’s omission was the selection of Keane, who cost only £5 (£1 less) and did fairly well himself. Alas, we will tinker with the squad and consider more metrics as time progresses.
Key Passes vs Progressive Distance Passes (2019/2020 Season) [FBRef]
Mids and Forwards
Take a look at this following spread where we combine the most creative attackers with the most effective attackers. You’ll see some familiar expensive faces. Salah (£12), Mane (£12), Firmino (£9.5) but some valuable cheap options too in Jimenez (£8.5), Barnes (£7), Jota (£6.5), Maddison (£7.0), and Richarlison (£8.0). Instead of going for the most expensive options, we chose a strategy of continuing to choose from our preferred teams. Jota seems like an insanely good value-for-money considering how he ranks in this spread. Unfortunately, he seems to be down the pecking order at Wolves and only came on as a substitute. Salah should have been a no brainer with his 20 points but we also have to take into consideration that if we had picked Mané, we would have scored just 2 points this past Gameweek. So more expensive isn’t always the better option.
Shot-Creating Actions per 90 vs Non-penalty Expected Goals and Assists per 90 (2019/2020 Season) [FBRef]
Barnes played exceptionally well in his first Gameweek. He had 5 shots overall (1st among all LEI players) and 2 Shots-on-target as well. It was only a pity he couldn’t join in on the fun with Vardy and Castagne.
Also Richarlison did this over the weekend.
When did Richarlison turn into Fernando Torres? via Streamable
Additionally, taking a look at the Pass Completion Accuracy tied into the total Assists per 90, we can see that in addition to the popular choices, Traoré is a good cheap buy and a good addition from Wolves. We’re hoping he produces more over the coming Gameweeks. We also found another cheap but great pick in Lucas Moura who believe-it-or-not compared fairly well to Salah himself on SoT value. | https://towardsdatascience.com/gameweek-1-and-so-it-has-begun-b38e090ccc74 | ['Tom Thomas'] | 2020-10-05 13:33:22.575000+00:00 | ['Premier League', 'Fantasy Football', 'Fantasy Sports', 'Soccer', 'Data Science'] |
How Well You Define a Problem Determines How Well You Solve It | How Well You Define a Problem Determines How Well You Solve It
A productivity tip from Albert Einstein.
Here’s a great Albert Einstein quote:
“If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.”
Einstein believed the quality of the solution you generate is in direct proportion to your ability to identify the problem you hope to solve.
With that in mind, he believed a key to productivity was to invest your time in defining the problem as opposed to jumping right into dreaming up solutions to it.
To better define the problem, you can do the following:
1. Rephrase the problem.
2. Expose and challenge assumptions.
3. Consider a broader or narrower version of the problem.
4. Rewrite your problem statement from the perspective of different stakeholders.
5. Assume there’s more than one possible solution.
6. Use language to frame the problem in a more exciting way.
7. Reverse the problem and think about what you would do to generate an opposite intended result.
8. Ask yourself questions about the problem. Do research.
Learn more: Einstein’s secret to problem solving.
Subscribe to my newsletter to get 10 helpful ideas each week. | https://medium.com/an-idea-for-you/how-well-you-define-a-problem-determines-how-well-you-solve-it-847090979898 | ['Josh Spector'] | 2016-10-25 20:08:08.065000+00:00 | ['Life Lessons', 'Productivity', 'Leadership', 'Problem Solving', 'Work'] |
What is Machine Learning? | What is Machine Learning?
Learn what machine learning is, how it works, and its importance in five minutes
Last updated on October 8, 2020
Who should read this article?
Anyone curious who wants a straightforward and accurate overview of what machine learning is, about how it works, and its importance. We go through each of the pertinent questions raised above by slicing technical definitions from machine learning pioneers and industry leaders to present you with a basic, simplistic introduction to the fantastic, scientific field of machine learning.
A glossary of terms can be found at the bottom of the article, along with a small set of resources for further learning, references, and disclosures.
What is machine learning?
Computer Scientist and machine learning pioneer Tom M. Mitchell Portrayed | Source: Machine Learning, McGraw Hill, 1997, Tom M. Mitchell [2]
The scientific field of machine learning (ML) is a branch of artificial intelligence, as defined by Computer Scientist and machine learning pioneer [1] Tom M. Mitchell: “Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience [2].”
An algorithm can be thought of as a set of rules/instructions that a computer programmer specifies, which a computer can process. Simply put, machine learning algorithms learn by experience, similar to how humans do. For example, after having seen multiple examples of an object, a compute-employing machine learning algorithm can become able to recognize that object in new, previously unseen scenarios.
How does machine learning work?
How does machine learning work? ~ Yann LeCun, Head of Facebook AI Research | Source: Youtube [3]
In the video above [3], Head of Facebook AI Research, Yann LeCun, simply explains how machine learning works with easy to follow examples. Machine learning utilizes various techniques to intelligently handle large and complex amounts of information to make decisions and/or predictions.
In practice, the patterns that a computer (machine learning system) learns can be very complicated and difficult to explain. Consider searching for dog images on Google search — as seen in the image below, Google is incredibly good at bringing relevant results, yet how does Google search achieve this task? In simple terms, Google search first gets a large number of examples (an image dataset) of photos labeled “dog” — then the computer (machine learning system) looks for patterns of pixels and patterns of colors that help it guess (predict) whether the queried image is indeed a dog.
Query on Google Search for “dog” | Source: Google Search
At first, Google’s computer makes a random guess about which patterns are reasonable for identifying a dog. If it makes a mistake, a set of adjustments is made so that the computer gets it right. In the end, this collection of patterns is learned by a large computer system modeled after the human brain (a deep neural network), which, once trained, can correctly identify dog images on Google search, along with anything else you could think of. This process is called the training phase of a machine learning system.
Machine learning system looking for patterns between dog and cat images [5]
Imagine that you were in charge of building a machine learning prediction system to try to distinguish between images of dogs and cats. As we explained above, the first step would be to gather a large number of images labeled “dog” for dogs and “cat” for cats. Second, we would train the computer to look for patterns in the images to identify dogs and cats, respectively.
Trained machine learning system capable of identifying cats or dogs. [5]
Once the machine learning model has been trained [7], we can throw at it (input) different images to see if it can correctly identify dogs and cats. As seen in the image above, a trained machine learning model can (most of the time) correctly identify such queries.
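To make the two phases concrete, here is a deliberately tiny, illustrative sketch using scikit-learn. Real image classifiers use deep neural networks, as noted above, but the workflow of fitting on labeled examples and then predicting on unseen ones is the same; the data here is randomly generated stand-in data, not real photos:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-ins for 1,000 images that have been resized and flattened into rows of
# pixel values, with labels 0 = cat and 1 = dog.
X = np.random.rand(1000, 32 * 32)
y = np.random.randint(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                 # the "training phase"

# The trained model is then queried with images it has never seen before.
print("accuracy on unseen images:", model.score(X_test, y_test))
```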
Why is machine learning important?
“Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.” ~ Andrew Ng | Source: Stanford Business Graduate School [4]
Machine learning is incredibly important nowadays. First, because it can solve complicated real-world problems in a scalable way. Second, because it has disrupted a variety of industries within the past decade [9] and will continue to do so, as more and more industry leaders and researchers specialize in machine learning and take what they have learned to continue their research and/or develop machine learning tools that positively impact their own fields. Third, because artificial intelligence has the potential to incrementally add 16%, or around $13 trillion, to the US economy by 2030 [18]. The rate at which machine learning is creating positive impact is already surprisingly impressive [10] [11] [12] [13] [14] [15] [16], made possible by the dramatic change in data storage and computing processing power [17]. As more people become involved, we can only expect it to continue on this route and keep driving amazing progress in different fields [6].
Acknowledgments:
The author would like to thank Anthony Platanios, Doctoral Researcher with the Machine Learning Department at Carnegie Mellon University, for constructive criticism, along with editorial comments in preparation of this article.
DISCLAIMER: The views expressed in this article are those of the author(s) and do not represent the views of Carnegie Mellon University, nor other companies (directly or indirectly) associated with the author(s). These writings do not intend to be final products, yet rather a reflection of current thinking, along with being a catalyst for discussion and improvement.
You can find me on My website, Medium, Instagram, Twitter, Facebook, LinkedIn, or through my SEO company. | https://medium.com/towards-artificial-intelligence/what-is-machine-learning-ml-b58162f97ec7 | ['Roberto Iriondo'] | 2020-10-09 18:46:14.345000+00:00 | ['Machine Learning Course', 'What Is Machine Learning', 'Artificial Intelligence', 'Machine Learning', 'What Is Ml'] |
I Just Want Whatever Reopening Phase Lets Me Back in the Library | I Just Want Whatever Reopening Phase Lets Me Back in the Library
As a child, books helped me find myself; now, maybe they can help society find its way back to normalcy
I’m an unashamed and unequivocal book nerd. Give me a choice between a thousand dollar shopping spree and an evening curled up with a favorite author’s much-awaited new release, and I’ll choose the written word every time. Save your judgment and scorn for someone else — my Myers Briggs personality profile says they don’t bother me.
I pretty much came out of the womb reading. Not literally, but I think it was maybe a few days after I figured out phonics and could string a few sentences together that my family lost me to the world of Enid Blyton. I started with Noddy, progressed to the Famous Five, and gradually disappeared into the Enchanted Wood.
The next fifteen years were mostly a book-induced blur. I emerged now and then to participate in quotidian tasks like school, or mandated activities like violin practice and holiday gatherings. Sometimes, I even deigned to make conversation at the dinner table, but mostly I preferred my fictional friends to the company of real people. | https://medium.com/curious/i-just-want-whatever-reopening-phase-lets-me-back-in-the-library-a6f32865f0a | ['Purnima Mani'] | 2020-10-22 23:02:57.009000+00:00 | ['Memoir', 'Libraries', 'Covid-19', 'Books', 'Reading Life'] |
The plasticity of the anti-single-use-plastics movement | Each time a state or municipality bans or limits single-use plastic, Amy Perry Basseches has a “what took them so long?” moment.
She’s had many such moments of late.
In just the past three months, Maine, Vermont, Connecticut and New York state have passed laws banning some forms of single-use plastic, ranging from plastic bags in grocery stores to “expanded polystyrene foam” coffee cups and restaurant takeout containers.
Perhaps most intriguingly, the first veto issued by Florida’s newly elected Republican governor, Ron DeSantis, blocked an industry-backed bill aimed at preventing local governments from banning plastic straws. That allows ordinances passed in Coral Gables, Miami Beach, Ft. Lauderdale and St. Petersburg to stay in effect.
There are many lines between those victories and the work that Amy did in the late 1980s and the early 1990s. Back then, she ran the solid waste program for the Massachusetts Public Interest Group (MASSPIRG). She was one of the country’s leaders in the drive to, as she now puts it, win “a change in the overall mentality and practice about the use of raw materials, design, marketing, and disposal.”
Her stature was bolstered by MASSPIRG’s long track record of winning against long odds (such as the Bottle Bill in 1981–82) and willingness to push the policy envelope (such as the Acid Rain Cap-and-Cut Bill in 1985 and the Toxics Use Reduction Act in 1988).
Photo Credit: Chris Yarzab via Flickr CC BY 2.0
So when Amy proposed what eventually became the Massachusetts Recycling Initiative (Question 3 on the 1992 ballot), a lot of people took notice.
The proposal required “all packaging used in Massachusetts on or after July 1, 1996, to be reduced in size, reusable, or made of materials that have been or could be recycled.”
Some praised the measure’s comprehensive approach to one of the biggest sources of waste: the boxes, cans, bottles, wraps, tubes and more used to cover or contain the stuff you buy. In one word, “packaging.”
Some lauded its nuanced approach, using detailed language to specify deadlines, close potential loopholes, and allow exemptions for health and safety.
In a TV ad, Republican Gov. William Weld called the initiative “pro-environment and pro-business” because it would “help create a whole new recycling industry to process and sell these materials.”
Of course, others were less enthusiastic. Companies — including Dow Chemical, Exxon and Union Carbide — that produced packages from raw materials weren’t thrilled with the prospective loss of billions of dollars to people who would recycle their handiwork. National and international businesses — such as Proctor & Gamble — that shipped their products into Massachusetts didn’t like being told what to do and, perhaps worse, the prospect of dealing with different laws in Massachusetts than elsewhere.
In response, these companies rustled up $4 million in loose change for an advertising blitzkrieg against Question 3. Their arguments ranged from “civilization as you know it will end” proclamations (“many … products and the products they contain are going to be banned from Massachusetts”) to “we’re all pro-recycling, but this goes too far” expressions of empathy (“The general rule is the more recycled material you have in your packaging, the weaker the package becomes. When somebody goes to the store to buy cookies, they want cookies. They don’t want crumbs.”)
Alas, we lost the battle at the polls, by a 59–41 percent margin. But 27 years later, there are signs that we are increasingly winning the war.
The recent victories are partly the result of applying lessons learned from the Massachusetts Question 3 experience and Oregon’s similar “Mandatory Recycling of Packaging” initiative (Question 6), sponsored in 1990 by OSPIRG. Among them:
People care more about impacts than virtues. Much of our 1988–92 messaging focused on the abstract sin of “waste” and virtue of “recycling.” The opposition’s message hammered on beloved products you could no longer buy and on crumbling cookies.
During the past couple of years, we’ve switched the emphasis to visible suffering caused by our throwaway society: a sea turtle with a plastic straw stuck in his nostril; a young whale washing ashore with a belly full of plastic containers; beer and water contaminated by plastic microbeads.
Timing matters. Based on our original polling in 1989, we had what I considered a 30–40 percent chance of winning. But when the economy soured (Massachusetts’ unemployment rate went from 3.5 percent at the beginning of 1989 to 8.8 percent by the time we started collecting initiative signatures in September 1991), our prospects were greatly diminished. All things considered, getting even 41 percent was rather miraculous.
It’s not a coincidence that support for plastic bans — like most environmental proposals — has picked up steam this decade as the unemployment rate has dropped and economic prospects have improved.
Keep it simple. Especially for an initiative campaign, it’s much easier to describe and understand the concept of banning a single product or category of products than a proposal that, as the Associated Press summarized it in 1992, “requires that packaging be reusable at least five times, or use increasing amounts of recycled or recyclable material.” Programmatically laudable nuances tend to create more opportunities for question and doubt (“Why five times?”), which can be fatal.
A partial loaf is better than no loaf at all. At the same time that MASSPIRG’s bold, nuanced, comprehensive proposal was nitpicked to death, Massachusetts voters passed an initiative calling for a modest tax increase on cigarettes and smokeless tobacco to fund health programs by a 54–46 percent margin. Incrementalism by the anti-tobacco movement (of which the PIRGs were a part) won the day, despite the tobacco industry spending 80 percent more against that measure than the packaging industry spent against Question 3.
Putting the Massachusetts and Oregon recycling initiatives on the ballot was perhaps the high point for anti-solid waste efforts for the next two and a half decades. But the victory of the tobacco tax initiative encouraged activists throughout the country to take the next step and the step after that on that topic. One measure of the difference: In 2015, the average American generated 1,634 pounds of municipal solid waste, just 2 percent less than in 1990. In 2015, 11.4 percent of American adults smoked cigarettes daily, about half of the 22.1 percent figure in 1992.
The decision a few years ago to focus on small, winnable reforms, starting perhaps with the Environment California-sponsored ban of single-use plastic bags in California in 2014 (which followed 50+ victories in cities and counties), helped turn things around.
Getting a partial loaf doesn’t mean settling for that share. Just as the anti-tobacco tax movement didn’t stop at $0.51 per cigarette pack in Massachusetts (the rate is now $3.51), the “7 Rs Movement” (rethink-refuse-reduce-repurpose-reuse-recycle-rot) has graduated — where politically appropriate — from targeting plastic straws or bags to Vermont’s new law covering bags, stirrers, cups and takeout food containers. We’ve gone from winning in tiny, environmentally hip towns to entire states, countries (Canada) and federations (the European Union).
The result has been not just millions fewer plastic bags littering the landscape or winding up in the Great Pacific Garbage Patch. There’s been an increased questioning — by the public and by producers — of the value of every use of plastic, of every piece of packaging, of all of our natural resources… just as Amy Perry Basseches sought to provoke three decades ago.
Nowadays, Amy is based in Toronto, where she stays in touch with The Public Interest Network’s solid waste work while running her international “active travel” business.
As she gleefully wrote on Facebook in response to a recent Toronto Star article describing a test by Canada’s largest grocery store chain of a shift to more reusable containers, “In the late 1980s . . . we were completely rad when we said ‘ban wasteful packaging,’ and drafted / campaigned for that legislation in several US states. Thanks to all who have carried the advocacy torch onward.”
Here’s to giving Amy many more “what took them so long?” moments in the days ahead. | https://medium.com/the-public-interest-network/the-plasticity-of-the-anti-single-use-plastics-movement-9184dda462bc | ['Kirk Weinert'] | 2019-07-22 17:43:26.337000+00:00 | ['Pollution', 'Single Use Plastic', 'Environment', 'Organizing', 'Plastic'] |
How to use Pinterest for Marketing your Business | How to use Pinterest for Marketing your Business
Pinterest is an extraordinarily powerful tool for consumers, but is still misunderstood by marketers. I worked at Pinterest for 2 years helping build their core content understanding technology, and learned a lot about what makes for a successful Pinterest marketing strategy. I hope this guide helps demystify how marketers can get the most from Pinterest as a marketing channel.
If you have any questions, please Tweet @jmilinovich!
Why is Pinterest marketing important?
Pinterest as a distribution channel
Pinterest is a powerful tool that helps people all over the world discover ideas for things to do in their lives. Whether it’s finding recipes, figuring out what to wear, finding new beauty tips or literally any other use case imaginable… people are doing it on Pinterest.
While search engines like Google focus on the bottom of the purchase funnel (ie, once someone knows that they have a need and are actively looking for it) and social networks like Facebook and Instagram focus on the top of the funnel (ie, when customers are passively looking to consume content with no intent), Pinterest is the only place on the internet that lets marketers reach consumers in the consideration phase.
This creates a big opportunity for businesses to get their products, goods and services in front of potential buyers while they’re deciding what they want to buy, but haven’t made the decision yet. That’s one of the most powerful things about Pinterest- people go there to find ideas, not just to make purchasing decisions. This means that marketers are able to reach consumers before they’ve made up their mind on what they’re looking to do.
Pinterest as a source of inbound links
While the most clear first-order effect to a strong Pinterest presence is creating a powerful new referral traffic source, there’s also a misunderstood but very powerful second-order effect: creating more inbound links to your website.
Have you noticed how no matter what you search for, it seems that you almost always see Pinterest results on the first page of Google? Pinterest’s core growth strategy has been about getting excellent at SEO, or search engine optimization. Practically speaking this means that the company has spent a lot of effort creating millions of high quality landing pages with the explicit purpose of being indexed by Google. Each of these landing pages shows dozens of Pins, and each contains a link to that Pin’s page on Pinterest.
As your content becomes more popular, it will begin showing up on more of Pinterest’s SEO pages, which means that it will be more readily indexed by Google. Since Google gives Pinterest’s domain a high authority and quality score, this means that over time you will start to accumulate some of this authority if your Pins are shown in prominent places. So, getting good at Pinterest doesn’t just help improve your Pinterest referral traffic, it can also improve your traffic from places like Google!
How does Pinterest marketing work?
Pinterest is a search engine
The most important thing to understand about Pinterest is that at its core it’s a search engine, not a social network. People don’t use Pinterest to “follow” specific brands but rather to follow interests and search for ideas. A “Pin” is simply a visual bookmark to a webpage, and under the hood Pinterest’s technology stack is focused on figuring out what interests a Pin is about, and which users are interested in which interests. Content on Pinterest isn’t temporal like other social networks, but evergreen like on Google.
Help Pinterest understand your content
This means that the most important thing to get right for Pinterest marketing is helping Pinterest understand what interests your Pins are about. This means that the key underlying concept for Pinterest marketing is to create Pins for all of your web content, and then make sure that Pinterest has a clear understanding of what interests they align to.
Once Pinterest understands what a given Pin is about, it can start showing it to users to see how they interact with it. If they engage with it (ie, Save it to one of their boards or Click on it to see the underlying content), Pinterest uses this as a positive sign that this is quality content and will begin showing it to more people.
As such, one of the most important things to get right is having a clear strategy for how to communicate to Pinterest what your Pins are about and getting engagement signals on the Pins early.
How to create a Pinterest marketing plan?
Choose your Interests
The most critical thing to get right in your Pinterest marketing plan is determining what Interests are most important to your business. There are over 10,000 interests on Pinterest today, ranging from highly broad to highly specific. Start by brainstorming what interests your target audience has today, as well as what interests your content is actually about. Look for the overlap of these two sets and choose the 10–15 that have the most promise to focus in on first.
Create your Boards
Once you’ve chosen the interests that you want to focus on, the next step is to decide on the architecture of your Pinterest for Business account. Pinterest accounts for users and businesses alike are defined by the boards that they create and post Pins to. You can think of a board as a folder of visual bookmarks that are public by default. When someone looks at your Pinterest account, the fastest way that they will understand what you’re about is by the names of the boards that you create.
Start by creating boards whose names are the same as the 10–15 interests you chose to focus on. It’s OK if they have more words in them as well, but make sure that the Interest name itself is very prominent. Make sure that each board has a very specific description that explains the core ideas that you’ll be pinning to the board.
Next, you need to decide what content to start posting onto these boards.
Pinning existing content
There are only two kinds of Pins on Pinterest: Pins from your own website, and Pins from other people’s websites. Both are equally important to a strong Pinterest strategy. The first thing you should do is to fill your boards with Pins that are already on Pinterest and were created by other people. Spend some time saving 20–30 Pins to each of your boards. As you do this, Pinterest will also begin recommending new Pins in your homefeed that are related to what you’ve been Pinning.
The reason you’re seeing the Pins that Pinterest is recommending to you is because Pinterest already knows a lot about them and has a high confidence that users like you will find them interesting. When you save them to your boards, you’re giving Pinterest even more signal about what your board is about. This is extremely important, because Pinterest learns a lot about new Pins based on the other Pins that it shows up on boards with.
Each week you should also continue to save new, existing content to your boards to keep giving Pinterest more signal and context for what your Pins are about.
Pinning your own content
Once you create a good base of existing Pins on your boards, you can start to plan your strategy for getting your own original content into Pinterest. First, go through all of your existing website content and map out what content would be relevant for the Pinterest audience. Generally speaking, the best content will be things like blog posts or eCommerce product landing pages. You should skip things like your homepage, about page, contact us page or other informational pages that don’t provide highly specific and useful content about a specific concept.
On social networks, it’s important that you have a steady pace of posting content into your feed so that you stay top of mind and also don’t inundate followers by posting 100 things at once. Pinterest is much more like Google, however, where you want them to know about your content upfront and all at once.
You should post all of your existing, relevant content to Pinterest upfront and then consistently add new content as it’s published online. Save it to the most relevant board to give Pinterest a clear understanding of what your content’s about. You can also post it to more than one board if it’s relevant to multiple categories.
How to make Pinterest Pins?
The Pins that perform best on Pinterest have been created specifically for Pinterest following their creative best practices. Practically speaking this means that each Pin will require some editing work within a graphics editor tool. Generally speaking, each Pin can take anywhere from 5–20 minutes to create by hand if using a tool like Photoshop, Canva or Adobe Spark. This can be quite burdensome, especially if you’re trying to create dozens or hundreds of Pins for your site.
Aesthetic’s software is able to generate on-brand Pinterest Pins from a company’s website automatically. Simply enter a URL, and our app will create dozens of variations of graphics to promote that webpage, including several that follow Pinterest’s best practices guide. We’ve seen our users cut down the time it takes to create a Pin by 95% using our tool.
Once you’ve created your Pin graphics, you can upload them into the Pinterest system. Add the URL for each Pin along with a detailed description that touches upon what the Pin is about and ideally mentions the specific interests that it’s related to. Post these to the right boards, and you’re off to the races!
How to make Pinterest Pins popular?
Once you’ve uploaded your content to Pinterest, you will see the impressions slowly start to trickle in as the system understands more about what your content’s about. Generally speaking it can take months for new content on Pinterest to get enough exposure for Pinterest to determine whether it’s sufficiently interesting enough for it to be promoted more widely within the system.
Another option to fast-track the distribution of your Pins is to run small-budget ads for your own Pins, targeting the Interests that they’re related to. Spending $2–3 per Pin per day should be enough to start getting hundreds of thousands of paid monthly impressions of your content. This is enough for the content that truly resonates with the market to start being shown more organically.
An interesting phenomenon is that once your own Pins are deemed popular and start to be shown to more people, other content from your boards will start to show up more prominently as well. Pinterest will begin recommending other content from your boards to relevant users, and will also start suggesting people follow your boards more prominently.
Conclusion
Pinterest is a highly popular consumer destination that’s still misunderstood by marketers. By focusing your marketing strategy on getting your content to rank highly for a specific set of interests, you can benefit from Pinterest in a way many didn’t think possible.
If you’re looking to help supercharge your Pinterest marketing strategy, check out Aesthetic to dramatically amplify the quality and scale of Pins that you can create! | https://medium.com/plus-marketing/how-to-use-pinterest-for-marketing-your-business-236527af2ef | [] | 2020-12-27 16:49:48.204000+00:00 | ['Marketing', 'Pinterest', 'Social Media', 'Social Media Marketing', 'SEO'] |
Is Your Cat Acting Aloof? Do This, And You’ll Be Best Buds In No Time | Is Your Cat Acting Aloof? Do This, And You’ll Be Best Buds In No Time
Stop struggling to bribe, pet, and (cat) nip your way into the hearts of frigid felines once and for all.
Image by author via Canva.
Let’s face it; we all feel judged by cats.
You know how it is.
You start a new relationship, they have a cat, and casually mention that they judge people based on how the cat takes to them. (Cut to jerky cat acting repulsed by you.)
Or you go to your friend’s house for the millionth time, and the cat still acts like they don’t know you.
Image by author via Canva.
Or maybe it’s your cat.
You saved that poor creature from a cruel life on the streets only to have them take you for granted and treat you like dirt. Like some common snuggle whore and meal ticket.
Stop struggling to bribe, stroke, and (cat) nip your way into the heart of these frigid felines once and for all. Science says the solution might be easier than you think.
A new study in the Nature journal Scientific Reports shows that it is possible to make a cat like you more. | https://medium.com/illumination/is-your-cat-acting-aloof-do-this-and-youll-be-best-buds-in-no-time-192486d2e172 | ['Erin King'] | 2020-10-19 03:25:32.452000+00:00 | ['Humor', 'Funny', 'Pets And Animals', 'Pets', 'Science'] |
We Need to Stop Separating Athletes by Sex — It Diminishes Women | When my parents enrolled me in my first soccer league at the tender age of seven, I never questioned the fact that my team was made up of only girls.
In high school playing on the Varsity soccer team with all women, I never questioned that either.
Then after moving to New York after college, I joined a recreational soccer league for adults. For the first time in my life, I was playing organized soccer with both men and women.
But only on the physical plane.
On the mental plane, I was still only playing with women. When I judged my own individual performance, I only took the other women into account. I only wanted to be better than they were.
And it seemed to be generally agreed upon by the men and women on both sides of the field, that the women would defend one another, further emphasizing the fact that we were playing with each other as opposed to with everyone else.
The “female” standard
The standard I held myself to then and throughout my whole life was a standard of womanhood.
And without anyone saying it outright, we all understood this standard was the less-talented standard. The good, but never quite good enough standard. The even if you’re “the best”, you still won’t really be the best because you’re not a man and you never will be standard.
Yup, that one.
And that is the problem. As a female athlete, you automatically hold yourself to a lower standard than whatever the best is. Without even consciously realizing it, you do it.
The separation of men and women in sports is psychologically diminishing to women.
But what about biological differences?
And I know what you’re thinking right now: Men are better than women at sports, it’s just biology.
And I concede there are real biological differences between men and women. One has, on average, more testosterone. The other has, on average, more body mass. One has, on average, more muscle… okay, you get the point.
But the keywords here are on average.
The biggest woman in the world is not smaller than the largest man. The strongest woman is not weaker than the weakest man. The fastest woman is not slower than the slowest man. Far from it.
The world’s 10th fastest men’s marathon runner finished roughly 10 minutes before the world’s fastest women’s marathon runner. That’s a lot of minutes, but it’s not insurmountable.
And according to a study of ultramarathon runners, “when people race beyond 195 miles, the average pace of a woman is slightly faster than the average pace of a man, at 17:19 min/mile for women, and 17:25 min/mile for men.”
So why separate based on gender? Aren’t there other ways we can separate the competition?
If we want to somehow level the playing field, giving people who are smaller, let’s say, the opportunity to compete in sports such as basketball, then maybe we should separate based on physical height, not based on gender.
Sort of like we do in wrestling with weight.
A photo from my first soccer team // Photo provided by author
In a high school, you could have a varsity team where any height is allowed. Then you might have the Varsity II team where only people up to six feet tall are allowed. And so forth.
One could argue, in that case, the Varsity team would still be all men. That might be true. It might not, but let’s continue with that line of reasoning.
Even if the varsity team were all men, it wouldn’t be any different from how things are currently.
Even though currently there is usually an all-women varsity team as well, we’re not fooling anyone when we declare that team the best, whether it’s college or professionally. We know they are still not really the best.
They’re only the best when you exclude half the population from competing against them.
When we separate teams based on other metrics, such as height or weight, there will still be more men at the top, at least initially, but that is not any different from the current reality.
Separate but equal doesn’t work
In today’s world of sports, women are not even given a chance to be the best. At anything. Because we’re always separate.
As Joe Duncan, who also wrote about this topic says, separate but equal doesn’t work in race, so why should it work with gender?
In 1954, the Supreme Court ruled against separate but equal because they realized you could never be separate and equal.
Furthermore, as the very definition of gender expands and blurs it’s going to be increasingly hard to draw a line between men and women.
About 0.6% of Americans are transgender and the number is growing. And that number does not include the growing number of individuals who prefer not to take on a gender label at all.
Additionally, roughly 1.7% of individuals are intersex, meaning they have biological features associated with both sexes. For example, a person born with a vagina, the female sex organ, who has the XY chromosome, typically associated with men.
From a logistical standpoint, separating men and women will also become more and more difficult. All the more reason to solve this issue now by not separating athletes based on gender at all. | https://medium.com/an-injustice/we-need-to-stop-separating-athletes-by-sex-it-diminishes-women-a1e02704fd8b | ['Sarah Stroh'] | 2020-12-08 22:32:27.757000+00:00 | ['This Happened To Me', 'Culture', 'Feminism', 'Psychology', 'Women'] |
Borith Lake and Majestic Hunza — Published in The More Magazine | As I whizzed past the chilly Chilas, I was thinking about what lies ahead of me. I had never gone past the lower Kashmir let alone the princely state of Hunza before.
The thought of a medieval princely state with Mirs still perched upon the throne, and riveting folklore rampant among the fair-skinned denizens of these barren mountains, aroused a feeling of mixed exuberance.
My passion for mountaineering and high altitudes has taken me to scenic Zermatt, where the mighty Matterhorn stands tall looking over the small but delightful town. I was charmed by the winding train rides in the Austrian Alps and their funny German accent, which they proudly claim to be the only way of speaking it. I drove to the Norwegian fjords through a spectacular landscape, driving through hordes of reindeer.
My homesickness only grew through my interactions with travelers who had been to Pakistan and their stories. I was told about a world I never knew existed, very close to Nanga Parbat, and that the spot where the three greatest mountain ranges meet is a visual delight.
I was crossing into the land where I had spent a good half of my life. The Himalayan folklore was nothing like Bavarian dresses and customs. It was elegant, a story of perseverance, valor, and belief, very different from the pastoral Gesellschaft of the Alps. I was northbound to Shimshal, one of the remotest settlements in Hunza, a good 80 km off the Karakoram Highway and known for its remoteness and peace. Chronicles from Mustansar Hassan Tarrar and word of mouth were intriguing enough to lure me away from touristic routes and places without a story.
It was only in the coming hours, though, that my destiny was revealed to me. After a rainy and rocky journey to serene Dasu, we hit a stop because of a landslide. An ironic felicity subdued my natural confusion over the time it took to clear it up. That was something that would never happen in the Alps, but then again, I was on a journey like Frodo’s, and fewer chances of rescue meant a slightly more palatable sense of adventure.
As I perched on a rock with a local Baba willing to impart his wisdom through his wizened eyes, the tumultuous Indus river kept roaring, a kind of unintimidated welcome reserved only for the brave. As our happy caravan set off again, the older among them grunted their disdain at the inconvenience they had had to endure. Our public transport snaked along the miraculous KKH, the highest paved international highway, sometimes sarcastically called the joy-ride to death.
It takes a type of sadistic pleasure to enjoy a barricade-less, treacherous ride that can punish your reluctance in a lethal manner. Connecting Islamabad to historic Kashgar, the KKH was ranked third among the best tourist attractions by The Guardian. Catching glimpses of the snow-clad Diamir district, we entered the fairy-tale state of Gilgit. The Alps were already fading away against the majestic elevations of the terrain.
The tallest rock sculpture of the Alps, Mont Blanc, which literally means ‘White Mountain,’ stands at an elevation of 4,807 meters, and already in Gilgit I was struggling to find a peak below that altitude. After an acclimatization overnighter, I took a local bus to Aliabad, the de facto capital of the Hunza state. A travel tip that has always served me well since the good old days: ‘When in Rome, do as the Romans do.’ A simple shalwar kameez and low profile will do you wonders.
Showing off your North Face climbing gear will only make you another tourist, the kind infamous for their Mallorca-style havoc. My new travel partner in Aliabad, whom I befriended over a cup of tea in a local bus stop cum hotel, was heading towards a small village called Shiskat on Attabad Lake. The lake now covers what used to be a part of the KKH, before a disastrous landslide in January 2010. Attabad village came crumbling down onto the road, merging it with the river and leaving behind a 23 km lake with changing water colors.
As is the way with us Pakistanis, when life gives us lemons, we make a perfect lemonade. Years on from the incident, there are boats that take on cars, cattle, motorbikes, luggage and sometimes a moving hotel across the lake. As we sat and sipped our strong teas, in walked a lanky young lad, over-tanned by the high-altitude sun, and a hazel-eyed, blond foreigner. They ensconced themselves in their seats and ordered some snacks as the foreigner started talking to an eager youth. | https://medium.com/minhaajmusings/borith-lake-and-majestic-hunza-712649e50c48 | ['Minhaaj Rehman'] | 2017-11-05 02:28:59.193000+00:00 | ['Hunza', 'Borith Lake', 'Pakistan', 'Travel', 'Trekking'] |
Unlocking a Stats Hidden Potential | The Day Job
For anyone that works in a sales-based job, you’re probably all too familiar with call boards and call reports. I know I am, and I hate them. Ok, maybe hate is a bit strong, but I’m not the biggest fan.
I get the sentiment behind them; no one wants to be bottom, and everyone wants to be at the top. It’s a pride thing. It’s natural, the issue is, at what cost?
I’m fortunate that the company I work for is forward-thinking. We still have call boards mounted on the walls, but they seem to be more for our benefit than anyone else’s. It wasn’t always this way, though.
As recent as a couple of years ago, the calls boards were the be-all and end-all.
Every morning and afternoon, without fail, they were monitored - For every incoming and outgoing call. They would display the first call of the day, the most recent call, call duration, amount of calls and average call time.
If the figures were low, according to the powers that were, you weren’t doing your job. If you weren’t at the top of the board, then you weren’t doing enough — This resulted in many employees merely playing the game.
They’d make hundreds of calls, top the list, but the calls weren’t moving them forward, e.g. phoning numbers that they knew would hit a voicemail instead of trying to speak to someone. It was ineffective and not a good use of anyone’s time, but it got them to top spot. It kept the boss off their back. | https://medium.com/swlh/why-stats-are-limiting-db43a11bc4bb | ['Dave Sellar'] | 2020-08-08 06:03:21.907000+00:00 | ['Management', 'Life Lessons', 'Leadership', 'Development', 'Life'] |
The Forgotten Memory | The plane bounced around, swaying back and forth in the air. The pilot came on the speaker to announce they were experiencing some turbulence. Glancing at her boyfriend who was sound asleep, she contemplated waking him up. A familiar voice sounded in front of her. The woman from the waiting area was sitting in the next row, telling the man next to her about flying with her husband.
Kara tried to read the book she brought, but the woman talking reminded her of the first flight she ever took. Something she hadn’t thought of in a very long time. A memory buried deep in the back of her memory bank, tucked away and forgotten.
At ten years old, she had found herself waiting in a room for her flight from Phoenix to Boston. The tall lady with blonde hair, wearing too much makeup, told her she would be switching planes in St. Louis before heading to Boston. The woman told her to stay in the room, she would be back to get her when it was time to get on the plane.
She was traveling to stay with her father who had been living with his new girlfriend, a woman Kara had not met yet. Her father had already told her on the phone the girlfriend would be her new mom.
Nothing happened on the flight to St. Louis. The blonde lady handed her off to another lady for the second flight. This lady was mean, leaving her alone for the entire flight. When they landed in Boston, the woman escorted her to the baggage area, telling her to find her suitcase and wait for her father to meet her there.
Kara waited for her suitcase to come around on the carousel, the purple tag slightly ripped. Her name was written in black marker. Kara turned around and found an empty seat near the window. Sitting down, she waited for her father to arrive.
Soon she found herself alone, the passengers disappearing after finding their own luggage. But her father was nowhere to be found, still she sat there waiting as she had been told. A large group of people walked toward her as the carousel started turning again. Bags and suitcases falling out of the chute and spinning around slowly.
She began to wonder if her father had forgotten her; the night before he insisted he would be there to pick her up, even confirming her flight information with her aunt on the phone. Maybe he was lost or standing somewhere else. Standing up she looked up and down the area, but she couldn’t see him anywhere.
The second wave of people dispersed and Kara found herself alone again. It would take a few hours before anyone realized she was still hanging around the airport. The security guard asked her name and who she needed to call. She told them her name but that she wasn’t sure who to call.
After making a few phone calls, they reached someone who said they would be there shortly to pick her up. A short time later, a woman showed up in the room. She said she was Kara’s father’s girlfriend and she was going to bring Kara back to her place to wait for her father.
About four hours later, her father walked into the tiny apartment, slurring his words and swaying back and forth. His eyes were red, and he was oblivious to the fact that he had forgotten his ten-year-old daughter at the airport. Breaking yet another promise in the short span of her life. | https://medium.com/literally-literary/the-forgotten-memory-5e67b10a980c | ['Tammi Brownlee'] | 2019-12-25 13:26:01.159000+00:00 | ['Nonfiction', 'Short Story', 'This Happened To Me', 'Exulansis', 'Travel'] |
Responding to COVID-19: A Guide for City Budgets | A PERFECT STORM
This pandemic has plunged your local economy and budget planning processes into a sea of uncertainty. Needs — and expenditures — are rising, revenues are plummeting, and a liquidity crisis looms. Budget shortfalls are a certainty, but their relative size and severity are unknown. The lack of coordination at the federal level has left cities and states to chart their own courses, and the patchwork approach only exacerbates the uncertainty. The CBO is predicting budget shortfalls for states twice as high as they were following the 2008 financial crisis. States may not only cut aid to local governments but also impose additional charges and rules.
Many of the usual tools for raising revenues (e.g., increasing fees for parking, services, or utilities) are unusable for reasons both practical and humanitarian, and uncertainty is roiling the municipal bond market. It looks unlikely that any further stimulus will be structured in a way that offers much help to local government. How can you produce a budget that responds to all that has changed, is changing, and will continue to change? How can you make informed trade-offs? What can you do in your role to provide stability amid the uncertainty?
PHASE 1: RESPOND
It is not enough right now to simply revise your budget; you must reimagine it. This means questioning all your assumptions, giving serious thought to worst-case scenarios, and making adjustments to your operating and capital budgets as you learn more from residents, businesses, and other levels of government. Begin with a frank assessment of your situation:
What are your short-term revenue losses?
What are your short-term expenditure increases?
Who in your community is suffering the most, and how can you provide relief?
Next, identify critical tasks:
Create a team to coordinate budget-related tasks and manage hard decisions.
Identify urgent cash shortages.
Organize supports for high-priority sectors and underserved groups.
Communicate your priorities, decisions, and plans clearly.
To cut your budget strategically, in a way that is aligned with your priorities, embrace activity-based budgeting. Think about the outcomes you want, the activities you think will produce them, and budget for those activities. Resist across-the-board cuts. Departments may not share your priorities, or may slash the line item guaranteed to turn out the public in defense of their budget. Ask them to break down their budget by activities (for most departments, these are the tasks that involve direct contact with clients or customers), then:
Suspend ongoing activities that are not making a clear contribution to your strategic priorities.
Prioritize essential activities and eliminate or scale back those that are lower priority.
Cut overhead costs aggressively.
Look for opportunities for productivity gains.
Ask yourself:
Are you budgeting appropriately for new and emerging essential activities during the crisis?
Are there any possible new sources of revenue?
Have you reviewed (and renegotiated) your contracts?
On the capital side, examine every item with the same rigor you bring to operations — every project and every lease in your portfolio. Are there opportunities for refinancing debt or extending borrowing capacity?
To produce a budget, plan for two scenarios: your best guess about what will happen based on what you know right now and your worst-case scenario. For each of these, think carefully and creatively about costs and revenues. Build both versions of the budget in a way that allows you to update, revise, and recombine them on at least a quarterly basis. Your budget is based on the information you have, and information is changing fast.
Use microsamples for forecasting purposes: take a small sample of taxpayers and use surveys or focus groups to understand their needs, priorities, behaviors, and thinking around issues that will affect your bottom line. (This is how the CBO forecasts the effects of new changes in tax codes.) Use what you learn as a rough indicator, and incorporate it into your thinking and planning. | https://medium.com/covid-19-public-sector-resources/responding-to-covid-19-a-guide-for-city-budgets-5cbe17486a61 | ['Harvard Ash Center'] | 2020-05-11 14:53:40.940000+00:00 | ['Covid 19', 'Leadership', 'Cities', 'Coronavirus', 'Finance'] |
How Tech Bootcamps Are Supporting the Enterprise World | How Tech Bootcamps Are Supporting the Enterprise World
Tech schools may have the key success factor for the urgent need for updating the workforce with technology skills
Image by vectorjuice available in freepik
In 2020, tech bootcamps are not a new topic anymore. They have been around since at least 2012, offering courses in diverse subjects (typically, web development, mobile development, UX/UI design, data science, and project management) and flooding a digital-starved market with new graduates each year.
Last year, something changed: the number of corporate training bootcamp graduates surpassed the number of classic bootcamp graduates. This reveals a new trend, not only in the bootcamp market itself but also in enterprise operations. Companies are acknowledging the benefits of intensive, accelerated learning programs to qualify their workforce for digital roles.
The advantages of bootcamps for companies
Workforce Reskilling
Reskilling: learning new skills for a different job function
Digital Transformation is shaping industry after industry. Companies are resorting to technology to improve their business operations, to launch new products and services, and to keep up with their competitors. This situation leads to high demand for specialized tech workers, a demand that is expected to keep growing over the next few years.
Currently, even non-technical roles require the use of mobile phones, computers, and the Internet on a daily basis. Therefore, companies find themselves with a large number of long-time professionals in roles that are (or are about to become) obsolete, at the same time that they have a high need for tech specialists, who are hard to find and retain in a very competitive market.
Since firing long-time workers and hiring tech workers are both expensive processes, some companies are turning to a new solution: providing their workers with the necessary knowledge to take on new roles.
Workforce Upskilling
Upskilling: learning new skills within the same job function
At 13.2%, the technology industry has the highest turnover of any industry. This means enterprises not only face the challenge of hiring new skilled workers, but they also face difficulties in retaining them.
Although high demand and rising compensation seem to be the main reasons for tech turnover, 94% of employees state they would stay longer if the organization was willing to invest in their learning and development.
Training programs to turn entry-to-mid-level team members into experts in a specific technology is especially common practice in consulting companies because these firms focus on delivering specific projects for their clients.
New Hires
The tech sector demands a workforce with very specific skills that change quickly. College programs have trouble following the demands of such a fast-paced market. Students often graduate with general knowledge, rarely matching the specific skill set and expertise level required for a tech role. On the other hand, entry-level developers are cheaper to hire than experts. Therefore, many companies end up hiring professionals with low expertise levels and submit them to training programs in which they acquire the necessary skills, knowledge, and behaviors to quickly become effective contributors to the organization.
Image by startup-stock-photos on Pexels
Bootcamps vs old training formats
Training programs come in several formats: online courses, training on the job, tuition reimbursement, etc. All of these training formats have pros and cons. Bootcamps appear as a valid alternative, offering the following advantages:
A big number of employees receive the same training at the same time. The bootcamp ensures the teams will receive cohesive knowledge, and it may also work as a team-building event.
Unlike self-paced online courses, bootcamps have planned classes, so the company knows how their employees’ days are structured. Also, we must note that online courses without real-time communication with an instructor may cause some confusion on the part of the employees if a particular point isn’t explained to their satisfaction.
Bootcamps have a teaching team exclusively dedicated to helping students overcome any difficulties they may encounter. A lot of learning to code involves trial and error and time spent bug-fixing; you can get stuck on a problem for a long time, which can be frustrating and stops people from learning on their own.
One of the downsides of bootcamps used to be that they required on-site classes, either in the bootcamp cohort or in the company, but after the 2020 pandemic, methods to provide remote classes have been adopted by many bootcamp organizers and are here to stay.
Are bootcamps ready to meet the challenge?
Bootcamp organizers have identified companies’ need of providing specialized training for their employees. For some time now, many tech schools have offered customizable corporate training programs and many enterprises already trust them.
As of December 2020, Course Report maintains a list of more than 50 tech schools that already provide corporate B2B bootcamps all over the world.
Are companies willing to accept bootcamps as a solution?
Companies already have acknowledged the rapid pace of business and technology advance is something they have to deal with. They are invested in understanding what skills will be most in-demand in upcoming years. Reskilling employees requires investment. However, the alternative to letting go of long-term employees and finding new workers with in-demand skills may not be cheaper or easier. Therefore, reskilling is an attractive option and many business leaders are leveraging training programs for filling key roles.
We know bootcamps are already considered alongside older training formats: last year, more than 22,000 workers acquired digital skills via corporate bootcamps.
What to expect in the future?
It is impossible for a bootcamp to replace long-term education offered by college degrees. However, universities have acknowledged the advantages of accelerated education programs and have started to launch their own bootcamps. In some cases, these programs are created through partnerships with established bootcamp schools but adapted to the college’s educational purposes and student needs. By participating, students develop projects to add to their portfolios and get an additional learning experience in their curriculum.
Companies have also taken the initiative of creating their own programs. Google, for example, offers its own digital marketing bootcamp and also partnered with General Assembly to create an Android intensive course. By building the curriculum and contents, companies make sure the graduates of these courses have acquired the knowledge they are looking for.
Summing up, the bootcamps’ quick and intensive education method earned its early reputation through a business-to-consumer model, but in the most recent years it is gaining relevant traction through business-to-business solutions. Companies, and even universities, have already taken the step of trusting this type of education to prepare the current and next generations of the workforce for the challenges of a fast-paced digital world.
Mariana Vargas is a UX Engineer based in Lisbon, Portugal. In 2019, she was part of the teaching team in the first Ironhack’s B2B Bootcamp: a training program for MediaMarkt in Ingolstadt, Germany. This Bootcamp re-skilled more than 20 white-collar workers to assume developer positions.
| https://medium.com/datadriveninvestor/how-tech-bootcamps-are-supporting-the-enterprise-world-cb5fa076442 | ['Mariana Vargas'] | 2020-12-25 09:08:51.845000+00:00 | ['Programming', 'Technology', 'JavaScript', 'Business', 'Software Engineering'] |
A New Year’s Eve Like No Other | A New Year’s Eve Like No Other
A Rebel’s Prompt: A Grand Party.
Photo by Michel Oeler on Unsplash
“No, I don’t want to attend your f***ing zoom party!”
I erased the message, and just hit the “maybe” button instead, knowing full well I wasn’t going to be sitting at the computer, pretending to be happy, tonight, on this New Year’s Eve. The year 2020 was going out with a groan, and I wasn’t even vaguely interested in participating in yet another virtual celebration as if there was anything at all to celebrate anyway.
The chiming clock sounded at 10:30. I decided there was no reason to wait to open the bottle of bubbly I had shipped to me, weeks ago, for this evening. All of the admittedly pathetic plans I had tried to make had fallen apart. The woman I was hoping to start dating didn’t want to risk the drive to my place. My other girlfriends were all completely stuck in the mud and had never been an option. I wasn’t even babysitting for the grandkids, which had been canceled when the little girl got the sniffles.
I put on my favorite, cozy, pajamas. There was a fire in the fireplace and I had gathered a few snacks for the evening. It was clearly time to start drinking. At least it was good champagne.
By 11:30 I was staring at the painted flowers on the empty bottle and wondering how they got them to look so real as I pulled the cozy sage-colored blanket around me on the couch. I began to drift off to sleep, then awoke with a start.
It was cold and windy, and it was starting to snow. How did I end up outside? I looked down and noticed I was wearing a green dress the color of grass, that I didn’t actually recognize. My legs were covered in coppery colored stockings that almost looked like snakeskin, which explained why I felt so cold, but nothing explained what I was doing out here in the first place. I was wearing boots — the first thing that made any sense at all — made of a soft dark brown leather.
I take it back about the boots making sense. They had tiny little bells on them and they sparkled.
Looking around to get my bearings, I saw there was a bench close by, and on it was what looked like a blanket. I was starting to shiver, so I walked over and picked it up. As soon as I touched it I realized it was a fine wool cloak, with a brown collar and buttons. It went with the dress I was wearing very nicely. I’m not sure why that mattered, but it seemed like it did somehow.
When I placed the cape on my shoulders, I instantly felt warmer. I also felt a little dizzy and sat down on the bench.
The sound of water drew my attention to a nearby stream. Suddenly a tall man, dressed in skin-tight pants and a shirt that was as thin as my stockings appeared.
Photo by Joey Nicotra on Unsplash
“We’ve been waiting for you, my Queen.”
His voice was deep, and of such a timbre that I felt it in my chest. I started to ask him what was going on, where were we and why was he calling me his Queen, but he had already turned and begun walking towards a nearby hill. I followed because it seemed like the right thing to do, and in any case, a quick look around had yielded no other places to go, in what was beginning to whip up into quite a snowstorm.
Then things got really weird.
As suddenly as he had appeared, he was gone. He seemed to have simply vanished into the hillside.
I moved closer and could hear the sound of voices, musical voices that were not quite human; voices that made me want to lean into the trees. I got a little closer still, and then gasped as a delicate hand with long slender fingers reached out and grasped me by the wrist.
“Morgaine, what are you waiting for? Aalton is already at the festival.” The hand was attached to a very fae young woman, dressed in shimmery blue layers of silk. She pulled me into the trees and the earth dissipated around us like smoke, forming back into the solid ground behind us.
We emerged into a New Year’s party, or really parties, that definitely qualified as a “rockin’ New Year’s Eve.” To the left, there were giant fountains of champagne. The fountains were decorated with anemones, and the entire tableau seemed to be from a time apart from the year that was ending. Perhaps I had entered a faerie Belle Époque, an otherworld where there has been no pandemic; where people still party like there is no tomorrow, where they dress up and dance and laugh with glee and more.
In fact, the further I looked past the fountain, the more joyful the party seemed to be. There was a live band and couples and threesomes and moresomes were dancing to the throbbing beat. I saw several groups peel off from the dance floor and slide behind some sheer curtains.
It looked like that room was where the real fun was happening.
I forced myself to turn to the right, where a whole different party was taking place. Now I was looking at table after table with a feast, beyond which, people were sitting in large and small groups.
“This meal looks fit for a king,” I thought to myself. “Or a queen, I added.”
As if on cue, Aalton appeared. “My queen, what would be your pleasure?” he asked. Then he added, “Aerwyna said she found you standing outside. I am sorry I rushed ahead without you. Let me make it up to you!”
Then he looked at my boots, which had experienced some minor damage when I followed him in the woods. “Oh, I really am sorry! Please allow me to take care of these.” He quickly removed my boots and handed them to someone who had silently appeared by his side with a pair of elegant slippers.
Time stopped. I ate sparingly but well, rolling my tongue around tastes that were new and yet hauntingly familiar. I drank champagne and danced with men and women, and some who were perhaps neither, or maybe both. I found my way to the other side of the curtains and was rewarded with a series of hot tubs and pools, saunas, and massage tables.
I remember lying naked on a massage table with Aalton on one side and Aerwyna on the other. They were saying something about they hoped I would remember them this time and come back soon.
And then I awoke on the couch.
I felt uplifted and hopeful. I realized it must have been a vivid dream, and I laughed. “Wow, that sure wasn’t on my 2020 bingo card!”
On the floor, near where I was seated, there were a pair of soft brown boots with a tiny patch on one of them. I picked up the one that had been repaired and closed my eyes. I could feel the touch of a soft kiss on my neck, her long blonde hair caressing my back and shoulder. My back remembered the strength of his hands kneading the muscles until they were soft and pliant.
I whispered into the air, “I won’t forget you.” | https://medium.com/the-rebel-poets-society/a-new-years-eve-like-no-other-b2e16be2a250 | [] | 2020-12-04 01:06:31.103000+00:00 | ['Storytelling', 'Fairy Tale', 'Fiction', 'New Year', 'The Rebel Poets Society'] |
Some tips of a Data Analytics student at Ironhack! | Photo by Jude Beck on Unsplash
Update: Added a tip for installing Python on your machine for the first time. It might seem very easy, but if someone is just starting to learn programming, this will be very useful.
Install Python in your environment
conda install python=3.x  # replace 3.x with the version you want
If you don’t know what version of Python you have installed on your computer, you can always type the following in the terminal:
python --version
To install pip, you only need to write a few lines:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
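If you want to double-check that pip is available and up to date after running the script, these two standard commands should do it (they are generic pip commands, not something specific to this setup):

pip --version

# optional: upgrade pip to the latest version
python -m pip install --upgrade pip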
Use a conda environment in a Jupyter notebook
A great trick is being able to use your own environments, with all their libraries, inside a Jupyter notebook. I learned this from two great teachers at Ironhack (David and Pedro).
Below you have the step-by-step instructions to create a virtual environment to work on your projects.
1.- Create a new environment:
conda create --name the-name-of-environment
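If you also want the environment to come with its own Python interpreter, you can pin a version at creation time. The version number and the extra packages below are only examples — swap in whatever your project actually needs:

conda create --name the-name-of-environment python=3.8

# you can even pre-install some libraries in the same command
conda create --name the-name-of-environment python=3.8 pandas jupyter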
2.- To activate the environment:
conda activate the-name-of-environment
3.- To get a list of your environments:
conda env list
4.- In order to use our envelopes in Jupyter we must install ipykernel:
conda install -c anaconda ipykernel
5.- To use the-name-of-environment in a Jupyter notebook:
python -m ipykernel install --user --name=the-name-of-environment
6.- Now run a Jupyter notebook:
jupyter notebook
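Two optional extras that I find handy. You can confirm that the kernel was registered, and later remove both the kernel and the environment when you no longer need them (the-name-of-environment is the same placeholder name used above):

# check that the new kernel shows up
jupyter kernelspec list

# when you are done with the project, clean up
jupyter kernelspec uninstall the-name-of-environment
conda deactivate
conda env remove --name the-name-of-environment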
Now you can select the environment you have created with all the libraries installed and run your projects in Jupyter Notebook! | https://medium.com/datatau/my-git-conda-cheat-sheet-c27e969fb456 | ['Borja Uría'] | 2020-10-27 08:53:40.898000+00:00 | ['Git', 'Environment', 'Conda', 'Cheatsheet'] |
Stop Keeping Score | COLUMN
Stop Keeping Score
How to quit measuring success by net worth, fancy titles, or TikTok views
Photo illustration; source: Stuart Walmsley/Getty Images
Every other week, Paul Ollinger investigates how redefining success can help us lead better lives.
A few years ago, when I was looking for a new workout routine, my wife suggested I take a spin class at a place called Flywheel. The last time I had biked en masse was at a fancy California health club, so Flywheel’s spandexed clientele, neon lighting, and ebullient instructor were not new to me. But one thing did stand out: Behind the coach hung a flat-panel screen displaying each rider’s name, bike number, and total “power points.” It was a scoreboard.
Giving it little thought, I spent the next 45 minutes just trying to follow the leader’s directions. But that first session ended with my pulse racing, my T-shirt dripping, and my name near the bottom of the list. Over subsequent classes, I became obsessed with moving up.
Eventually, I cracked the code: To score high, one had to pick one of the “fast” bikes, ignore the instructor, and do nothing but pedal like crazy. The scoreboard soon replaced the fitness benefits as my main driver. Thirty seconds into a ride, I’d see “CycleFella” rise above “PaulO69,” and think, “Not today, CycleFella!” Then I’d spend the rest of class proving that I was better than the guy, even on days when my body was begging me to take it easy.
The whole experience reminded me of how scoreboards drive our behavior in many other areas of our lives. Even though Teddy Roosevelt reminded us “comparison is the thief of joy,” consciously or not, we design our days around improving our positions on invisible ladders that supposedly quantify success. When I was a kid, I’d throw a fit if one of my siblings got a bigger piece of cake than I did. Today, I look in the neighbor’s driveway and can’t help but evaluate my life based in part on what I see parked there.
Escaping this pattern isn’t easy, but it can be done. On my podcast Crazy Money, I spoke with Yoshee and Diana Sodiq, the hosts of a podcast about intentional living called F the Joneses. Years ago, Yoshee’s work as a consultant and Diana’s as a doctor had earned them fancy cars and a big house with a pool, but they both yearned to do something more creative with their careers. They realized they had spent years trying to “keep up with the Joneses,” and that while they’d reached the spot on the ladder that they thought would bring them happiness, the life they’d created just wasn’t working for them. As Yoshee told me, “One day, we decided, ‘eff the Joneses.’” They traded Diana’s Mercedes SUV for a Honda minivan, moved into a smaller home, and began living on their own terms.
For Yoshee, the most important part of the process of taking control was “defining who we are,” he told me. It’s a critical insight. We define who we are by picking — or not picking — the metrics to which we assign value. Left to their own devices, our brains will measure success by our net worth or TikTok views because those things are far easier to quantify than the amount of creativity, joy, and connection we experience every day.
Try this: Instead of counting the number of friends you have on Facebook, count the number of meaningful conversations you’ve had in real time this month. Rather than comparing your results against your neighbor’s efforts, set a personal goal, and track your performance over time. People may judge your decisions. “It’s human nature,” Diana said on a recent episode of F the Joneses. “But guess what? It has nothing to do with you.”
Rather than relentlessly grinding out in your fancy spin class, maybe it would be more satisfying for you to pedal to the beat of the music. Or better yet, take your bike outside and cruise around with the people you love. Your name might not end up at the top of a leaderboard, but finishing first is not the same as having enjoyed the ride. | https://forge.medium.com/stop-keeping-score-c05048ff2a0a | ['Paul Ollinger'] | 2020-11-11 19:33:48.243000+00:00 | ['Social Media', 'Life Lessons', 'Productivity', 'Success', 'Self'] |
A Strategic Method for Making 2021 Not Suck | I’ve had this idea for a long, long time that I’ve never really put into action.
I think 2021 is the year. Because if there was ever a year that really needs to not suck, this is it.
Here’s the idea: I want to have a personal theme for each month. I’ve tried this before and gotten as far as creating the themes. But in the past, I think, I’ve always gotten too caught up in the planning to really dig into implementing.
Last night I was playing around with the idea in my bujo, as one does, and I hit on the idea of something simpler that I think will work. What if I have a theme for each month and just challenge myself to three things related to it.
Read a book. Form a habit. Meet a goal.
The theme can guide other things. I follow James Altucher’s advice about making a list of ten ideas per day. My theme could help guide those idea lists. And they could help me with prioritizing my time and efforts.
But if I just read a book, worked on developing a habit, and met a goal for each theme? That could be epic, right?
My Twelve Themes for 2021
To be honest, if this sticks, these are likely to be my themes for pretty much every year going forward. Because it was hard to come up with twelve at all. That’s a lot of themes, guys.
Anyway, here they are:
January: My word of the year. (For 2021 it’s prepare.)
February: Love
March: Home
April: Money
May: Body
June: Spirit
July: Adventure
August: Work
September: Learn
October: Self-Care
November: Family
December: Organize
I just sat down and spent an hour coming up with a book, a habit, and a goal for each theme. They’re subject to change, of course. A year is a long time. But I wanted to come up with something now, because some of them I want to work on all year — and just have top of mind during the assigned month.
So, I also came up with a list of things I want to start or do in January.
A Deeper Look at the Themes
WOTY: Prepare
I’ve picked a word-of-the-year every year for the last five years. For 2021, my word is prepare. Not very sexy, but it’s resonating with me and I’m going with it.
My book is The Prepper’s Blueprint by Tess Pennington. It has 52 weekly tasks that appeal to my poor, disorganized little heart. My habit is to spend some time every week thinking about preparation and my goal is to complete all the tasks in the book.
Love
Between Covid and my husband’s parents both having Alzheimer’s and living with us, I haven’t spent two minutes alone with him in most of a year. Sucks. I’m determined to figure out some way to get back to dating Kevin, even with the weird state of the world.
My book is The 5 Love Languages by Gary Chapman. My habit is a weekly date. My goal is . . . none of your business. ❤
Home
We bought a house in 2020 — which was the last of my big, big life goals from the last couple of decades or so. It’s a 100-year-old house that we’ve already been living in the last two years. There’s plenty that needs to be done. But I realized that task one needs to be something a little different: getting the stuff in the house under control.
My book is Minimalism Room-by-Room by Elizabeth Enright Phillips. My habit is to institute a one in/one out policy for new purchases. And my goal is to declutter my entire house, from attic to basement.
Money
For the first time since I was about fourteen years old, my money situation is stable at the moment. That feels really good. Also, for the first time, my money-related goal for the year doesn’t include some kind of increase in income. That feels really — weird.
My book is I Will Teach You to be Rich by Ramit Sethi. My habit is to track my spending. My goal is to have one full no-spend month in 2021.
Body
Body is code word for health/wellness/fitness. I just wanted a single word that encompassed it all for me.
My book is How Not to Diet by Dr. Michael Greger. My habit is to use my brand-spanking new Shapa scale — which doesn’t weigh weight. Or at least, it doesn’t share my weight with me. I’m intrigued by it. My goal is to get my insomnia under control.
Spirit
I have a fairly traumatic, tumultuous relationship with spirituality. For me, the spirit theme could as easily have been called ‘mind.’
My book is The Holy Wild by Danielle Dulsky. My habit is to figure out a way to incorporate a weekly spirituality practice into my life — which will include figuring out what that looks like. And my goal is to create a grimoire — which is just a word I like and my way of thinking about a notebook dedicated to my spiritual practice.
Adventure
My life has been so work-focused the last few years. I want to incorporate a little more — something that’s not work. I thought about calling this theme ‘fun,’ but adventure called to me, so here we are.
My book is Sisters on the Fly by Irene Rawlings. My habit is to take one risk every month — AKA do something scary. My goal is to buy a little travel trailer.
Work
I’m so work focused that it hardly seems like I need to have a whole month themed to it. But I went for it anyway.
My book is Dotcom Secrets by Russell Brunson. I’m slightly skeptical about how useful a book called Dotcom Secrets that was written in 2015 will be, but I’ve heard a lot about it, so I’m going to read it anyway. My habit is to implement one new thing per month to increase my business revenue. And my goal is to increase my revenue to the point where I can comfortably pay my employees without giving myself an anxiety attack every month. So, about a 25 percent increase.
Learn
I challenge myself to learn something new every year. It’s one of my favorite things. My 2021 something-to-learn is yoga.
My book is Every Body Yoga by Jessamyn Stanley. My habit is to spend ten minutes a day doing yoga and my goal is to establish a daily practice.
Self-Care
I really thought hard about what my self-care theme would look like for 2021. In the end, I decided that what I really need is to do something every now and then that isn’t work. I’ve missed my Reno garden since moving to PA, so I picked that.
My book is The Suburban Micro-Farm by Amy Stross. My habit is to spend some time every month doing Sharon Astyk’s Independence Days exercise. And my goal is to have a three-season garden. This is one of those themes that I’m not going to be able to actually wait for the assigned month — since that’s October.
Family
This ties into my self-care theme, sort of. I really need to be intentional about not working all the freaking time. I’m seriously on work from the moment my eyes open until I fall asleep at night.
I actually have two books for this theme. Dinner: A Love Story and Dinner: The Playbook by Jenny Rosenstratch. My habit is to have a monthly Zoom call with my dad and my brothers and sisters. And my goal is to have a family dinner that’s not takeout 90 percent of the time.
Organize
This one, for 2021, ended up being an extension of last year’s learning topic, which was personal style.
I have two books for this theme as well. The Curated Closet by Anushka Rees and it’s accompanying workbook. My habit is to log my outfits, which I hope will help me tame my tiny, overpacked closet. And my goal is to have a totally organized closet.
I had some other ideas for themes — maybe they’ll be useful to you if mine don’t work for your life. I thought about: mind, writing, creativity, food, aging. Maybe one or all of those will show up in future years for me.
Things I’ll Start in January
So, it’s not the point of the themes to only think about these things for one month out of the year. Some of these habits and goals will need to be at least thought about starting in January.
Here’s my list:
Start the weekly Prepper Blueprint challenges. (Convenient, since prepare is the January theme.)
Have a monthly date with Kevin — that’s Covid friendly and won’t, you know, kill us.
Declutter for 15 minutes a day.
Start tracking my spending.
Start using my Shapa scale.
Do daily stretches.
Start those monthly family Zoom calls.
Start logging my outfits.
Keep using the Every Plate meal delivery service, which has seriously cut down on our takeout habit.
And start a dinner notebook, which I’m super excited about.
Is that a lot? Yeah, I guess so. Will it all stick? Who knows. But I feel better, after a seriously insane year, having a solid plan for 2021.
Recap: Setting Up Your Own Themes for 2021
Step One
Come up with twelve one-word themes and assign one to each month of the year. I started with my word of the year in January, then added in other obvious months. Money in April, because taxes. Learning in September, because school. Self-Care in October, because it’s my birthday month. Etc. Then fit in the rest.
Step Two
Decide on a book, a habit, and a goal for each theme.
Step Three
Figure out what you need to start now to make those habits and goals a thing. It maybe the whole habit/goal— like logging my outfits for my organizing theme. Or it might be a path into the habit/goal like starting with stretching in January for my body theme.
Step Four
Schedule your theme work in your planner or calendar. Order your first couple of books. Make sure you have what you need for your first habits and goals of the year.
Step Five
You know. Do the things. | https://medium.com/the-write-brain/a-strategic-method-for-making-2021-not-suck-e4c1b1f89799 | ['Shaunta Grimes'] | 2020-12-23 00:32:59.349000+00:00 | ['Goals', 'Productivity', 'Lifestyle', 'Organization', 'Life'] |
How to Be a Quitter | How to Be a Quitter
What I learned from Jon Acuff about quitting and why it is good to be a quitter (when writing)
Photo by Romain V on Unsplash
I’m not a quitter. From my early childhood memories, I remember if my Mom or Dad asked me to do something, I would not quit until it was done. Being persistent, having grit, getting over the dip — these are all things successful people do.
So it surprised me when I heard the idea that successful people have all quit. It makes sense. To be successful in one area of your life, you have to ‘quit’ other areas that are taking away your time and energy from being the best. In management consulting, I noticed a lot of partners (the big bosses) who had divorced from their spouses. No idea if it was because of the work, if it was a coincidence, or what, but management consulting has long hours and sometimes intense travel and this may not be what the spouse originally signed up for in the relationship.
Jon Acuff wrote a book called Quitter: Closing the gap between your Day Job and your Dream Job. It’s not quite about quitting, but I did learn a few things and thought about how it applied to writing:
Most people want things to be perfect
Would you rather not complete something and leave it at 0% complete? Or would you rather start something and have it 50% complete? Most people would rather not start it if they can’t complete it.
The takeaway for writing: How do you get out of this mindset, especially for writing? If you are starting out and struggling to find ideas to write about (maybe you are ‘blocked’), it is because you are not writing enough poor material. You want to write something ‘good’, but because you cannot come up with a good idea, you do not want to write. Everybody who has had success in writing has also had a lot of bad writing. It’s part of any creative job to come up with a lot of awful stuff.
Cut your goal in half
Do you want to lose 10 pounds? Cut it in half. Why? Imagine that you lost 8 pounds in 2 months. You did not reach your 10-pound goal and you would feel you failed. Cutting your goal in half is another way of really saying that you should create the smallest goals for yourself in order to accomplish them easily and then use that momentum and motivation from reaching your first few set of goals to create more ambitious goals. What is a great way to practice this?
Take B.J. Fogg’s Tiny Habits course — it’s free, and it teaches you this exact idea — set a super tiny goal, reward yourself, and then use that as a starting point to build bigger habits. For example, if you want to get into the habit of flossing your teeth, floss one tooth. Even if you floss no more teeth aside from that one tooth, you have accomplished your goal and celebrate (though chances are, you will feel like flossing more teeth).
In writing, it means setting smaller writing goals. If one thousand words a day feels like a struggle to you, set your goal to be 100 words a day. Or if even that is too much, set it at two sentences a day. The idea is to write every day, no matter the word count, and to build the identity of yourself as a writer.
Choosing to fail at certain things
We all know that we cannot multi-task and we also know that if we divide our focus on too many things, we are just not going to have enough time, energy, or focus to achieve everything. The implication of this is that we can and should choose to ‘fail’ at specific things that we work on. Maybe you feel that you cannot be a husband, a father, and an entrepreneur all at once. This does not mean that you should choose to be a terrible father but maybe in order to run your business, drop your kids off with your parents or daycare so you have the time to work on your business. If you haven’t already, it may also mean that you watch less television or avoid social media, so you have the time to do other things.
The takeaway for writing: In writing, I see the idea of experimenting with certain topics. When you are starting out as a writer, you may not know where your ‘niche’ or area of expertise is. Experiment with different topics and expect each one to fail (or at least do not expect to go viral with every article you write) — then you will be pleasantly surprised when something hits.
Make it fun
I like this concept a lot, and I think Gretchen Rubin describes a similar concept, though not quite in Jon’s words that summarizes this nicely. It’s the concept of pairing fun things with not so fun things to motivate yourself to get things done. When you don’t really want to exercise in the morning but know that that is the only time you can watch Game of Thrones is when you are on the treadmill, you are going to complete the run in the morning. When you do not really want to do the dishes or the laundry but have an exciting podcast to listen to while doing so, you’re going to have clean dishes and fresh smelling clothing. Pair up the fun things with your not so fun things and watch yourself get more things done.
The takeaway for writing: Do you have a favorite snack you like eating? Or you have a favourite TV show or movie you like watching? Reward yourself at the end of your writing session with something you enjoy. Use the reward to motivate yourself to write.
Secret rules to accomplish more things
Jon shares a fantastic story about when Will Smith first moved to LA to break into the movie industry. His manager asked him what goal he wanted to set for himself and Will replied “I want to be the biggest movie star in the world.” Remember, this was before Will became a huge mainstream rapper and even further before he became a blockbuster actor. His manager did some research and looked at the ten top grossing movies of all time to understand if there are patterns. They both realized that 10 / 10 of the movies had special effects. 9 / 10 had special effects with creatures. 8 / 10 had special effects with creatures and a love story.
Now look at Will’s six most successful movies:
Independence Day — special effects, creatures, love story
Suicide Squad — special effects, creatures, love story
Hancock — special effects
Men in Black 3 — special effects, creatures, love story
Men in Black — special effects, creatures, love story
I am Legend — special effects, creatures
Find someone with an amazing diploma and then borrow it for yourself.
The takeaway for writing: Which writers do you admire? Tim Denning? Tom Kuegler? Ayodeji Awosika? I’m not saying to copy their ideas, but read carefully through their popular articles and identify the elements you can steal for your articles. Are they well-researched? Full of quotes? Formatted well? Anecdotes? Interviews?
Use data to help motivate you
If you were trying to improve your running, if you went from running 3 miles/hour to 4 miles/hour, you would improve your 20-minute mile to a 15-minute mile. That’s quite significant. However, if you ran 8 miles/hour and increased it to 9 miles/hour, you improve your mile time by 40 seconds. I know right? It’s not a lot, and it seems insignificant (and it also amazes me when people tell me how quickly they run marathons or half marathons because even a small amount of time shaved off can be significant knowing that fact). My point is you can use data to motivate you and show the progress that you are making because when you do not have the data, progress may not be visible to you. I’d also point out, as Scott Adams did in his book “How to fail at everything and still win big” that systems are superior to goals — rather than trying to lose 10 pounds, my system is to eat healthily (say at least 3 servings of vegetables every meal) and be active (exercise 3 times a week for 30 minutes or fewer). The way you set your goals can help you achieve them as you can see.
The takeaway for writing: I was pleasantly surprised to see an email from Grammarly, a free tool that checks your spelling and grammar. The email told me I had written over 45,000 words and gave me several stats on how my writing compares to other writers, the tones I used in my writing, and how my spelling and grammar compare to others. I don’t care about any of the stats, other than the 45,000 words — I want to make sure I am writing every day and the email is a glorious reminder that even a bit every day can add up. | https://medium.com/bulletproof-writers/how-to-be-a-quitter-742df0a6b82c | ['Wang Yip'] | 2020-12-01 18:58:15.596000+00:00 | ['Writing Tips', 'Quitting', 'Writing', 'Writing Habit', 'Jon Acuff'] |
About Publishous | About Publishous
How we’ll help you get published
We set out to create a place where we could publish delicious content, and came up with the name PubLishous, pronounced “Pub-LISH-ous” (rhymes with delicious) — this was the genesis of our “PL” logo, in case you were wondering. But the more we played around with the pronunciation, the more we heard the cries of so many writers and started pronouncing it “Publish Us.” We’re here to be, a publishing house.
We’re looking for professional quality pieces to help others improve their lives and to help writers, creatives, and artists get noticed. We publish a limited number of the highest-quality pieces each day.
Get noticed
We’ve been where you are and we know how difficult it is to get noticed. To get approved as a writer fill out the form at the bottom of this page. After you’re approved you’ll be added as a writer. This way you can submit future stories to us through Medium’s writing tools (the three dots).
After submitting the application, we’ll review the published work on your profile and see if you are a good fit. If you are, you’ll be added as a writer. This way you can submit future stories to us through Medium’s writing tools. We accept drafts, sent to the publication that are previously unpublished and fit our categories.
What kind of articles are we looking for?
We want to motivate and entertain people. We want fresh content to help people live healthier lives, help writers, improve productivity, increase faith, and entertain people through creative output all while helping you get published.
Please submit unpublished drafts. Here’s why: Medium’s algorithm favors fresh stories, and our homepage is sorted by date of when the article was published ( not when it was accepted into the publication). The longer it’s been since a story’s original publish date, the poorer it will usually perform. Here’s how you send a draft to the pub.
when it was accepted into the publication). The longer it’s been since a story’s original publish date, the poorer it will usually perform. Here’s how you send a draft to the pub. All submissions require a featured image directly below the title and subtitle. Images must be Creative Common license, or owned by you and notated as such. Name the place of origin on Pexels, or Unsplash, etc. and provide the actual link. Graphics should be visually striking and should not reflect your website. Here’s a link to our style sheet. We may change graphics to fit our style or those that do not comply with Medium’s guidelines. Original graphics created by you are great and should be cited “Graphic created by author.”
Use Medium’s built-in title and subtitle formatting (T/t). A kicker is permissible. Make your story easy to read for mobile readers and short paragraphs instead of large blocks of text.
Titles should be informative and answer a question rather than creating one. Avoid clickbait, see this.
All submissions should fit the spirit of the publication. Stories should be edited to be as error-free as possible and, at minimum, should be run through a free Grammarly spellcheck and grammar check. We feel so strongly about this that we’ve become a Grammarly affiliate. We also use a title case converter for titles and optimize them for performance. The best writer resources we’ve found can be found here.
Send your best work. We consistently publish a limited number of the highest quality submissions. We’re a small staff and cannot always offer feedback, although we aim to be as helpful as possible. Please do not overwhelm the queue with multiple submissions or expect responses on nonbusiness days. About once a week feels right for frequency.
Submissions should comply with Medium’s content and curation guidelines and should not include affiliate links.
We expect more (See what we mean here). Stories should be well developed and have around 750–1000 words, at minimum. We want your piece to perform at the highest level possible. We know you work hard and put a lot of effort into your writing. We work hard, as a publication partner, to make editorial changes to help your story perform at its highest level, based on the feedback we’ve been given. Edits will stay true to the intent of the story, so please do not make changes after we have edited your story. We make great efforts to respond to each submission with care and respect, but if a story is in the queue five days you may assume that piece will not be published. Submit your best stories so we can publish more stories faster.
Please don’t write about the same ‘ole subjects that have been covered repeatedly unless you have an obviously fresh take.
Bring your personality with you, leave your snark behind. Use your experience to help people understand their options. Do research. Text link to your external sources in every piece. Be creative.
Many of our authors write about their faith and this new page is focused to help readers who want to grow and explore their faith. You’ll find stories here using the tags Faith, Spirituality, and Religion.
Everyone is an expert in something, and our writers love to share their expertise to help others. Keeping you healthy and happy is a full-time job. So, to help you keep focused on being the best you you can be. On this new page, you’ll find stories using the tags Self-Improvement, Life, Life Lessons, Health, Relationships, Psychology.
This is Medium, after all, so we would be remiss if we didn’t have a place to showcase our writers who write to help other writers. Stories under this category should be about writing or improving writing, but not specific to Medium. If you’re interested in improving your writing, this is where you’ll find stories with the tags Writing, Writing Tips, and Writing Life.
We don’t want your stories that address hot-button topics. We’re a family-friendly publication.
What are we doing? Crowdsourcing Readership.
That’s right. We’re a crowdsourcing platform for readership. In crowdsourcing, large numbers of individuals get involved at various levels of commitment to help launch a project. At PublishousNow, authors are our projects and we ask our contributors to bring their followers to help launch them. See Shoplishous.
Think of it this way: Each of our contributors has his/her own group of followers, with varying degrees of loyalty. And, your most loyal followers subscribe to your blog so they’re among the first to read each of your works. You may guest post your content with another author to expose yourself to that person’s followers with the hope that some of them will also follow you, and that will grow your list faster.
What if you brought your followers to see your work in a place where scores of other authors are bringing their followers? Then your 2,000 followers join someone else’s 800, who are joining another author’s 5,200 followers, who are joining tens of thousands of followers other authors are bringing?
That’s what we mean by Crowdsourcing Readership. And, with nearly 1.5 million eyes on the website each month, that’s not too shabby.
Offer us the piece for the first 30 days, then treat it as a guest post and repost to your blog with backlink to us like this: Originally published on Publishous (with link). You retain the right to your work. Any publication who suggests the work belongs to it in perpetuity is not in accordance with Medium guidelines.
And, what’s better than first publish?
Not one…
Not two…
Not even three…
But four Publishous, because we publish to the Medium publication, the website, our Facebook page, Twitter, and on Quora. . Better yet, each piece goes in our library so it gets shared again in the future.
Ready to write with us?
Go here.
Best practices are shared here.
Consider following the founder and editors, especially if they relate to your niche. | https://medium.com/publishous/about-publishous-ff8811f34ba9 | ['Nicole Akers'] | 2020-12-22 16:23:54.619000+00:00 | ['Self Improvement', 'Writing', 'Submission Guidelines', 'About', 'Life'] |
Bunny Rabbit’s Adventure in Suffering | Dale is a full time artist, writer, and poet. Born and raised in Indiana he eventually moved to Chicago. While there he graduated from SAIC. From there he wandered over to Ohio where he had a leather crafting business. On a chance visit to Asheville he decided it was too good to pass up. He immediately moved to Western North Carolina where he still resides. He’s hip deep in a daily writing challenge, and busy in the studio all the time. | https://dhbogucki.medium.com/bunny-rabbits-adventure-in-suffering-f7ed3c6ab7fa | [] | 2018-04-07 14:58:14.021000+00:00 | ['Cartoon', 'Mental Health', 'Online Dating', 'Graphic Novels', 'Depression'] |
Clean Up Your Home Office and You’ll Boost Your Productivity | Clean Up Your Home Office and You’ll Boost Your Productivity
Messy home, messy mind, messy results?
It’s Monday morning, you want to start into your new work week full of power. You sit down at your ‘office desk’, the kitchen table. You look up and the other half of the table is full of dirty dishes from your partner's breakfast.
Your energy is transforming into anger. Especially when looking towards the sink which is still full of stuff from your family’s late-night snack. You feel the urge to quickly clean up and send an angry message into your family group chat, but the 8:30 am team meeting call starts in 5 minutes. You have barely time for a quick coffee.
Your Zoom meeting begins and you try to position yourself so nobody can see the mess in the back. You would love to focus on your work, but with the dirty dishes dancing around you, there is no way. Studies show our brains have a hard time focusing in chaotic environments.
We are all facing the same problem. Our home offices are a mess. As long as we cannot escape back to the office, do we have to clean more to keep up our productivity and focus? | https://medium.com/age-of-awareness/clean-up-your-home-office-and-youll-boost-your-productivity-d0ea0d014882 | ['Karolin Wanner'] | 2020-09-22 12:16:23.381000+00:00 | ['Work', 'Minimalism', 'Work Life Balance', 'Productivity', 'Remote Work'] |
The Force | Fast Access Blockchain’s scheduled hard fork will include the complete network capabilities outlined in our Whitepaper.
What is “The Force”?
“The Force” is a planned hard fork of the Foundation Chain of the Fast Access Blockchain. We are currently ahead of the development schedule. It will be introduced in two phases:
Phase 1: Basic Edition, end of August 2018/beginning of September 2018.
Specific features will be deployed to meet user and market needs, prepare the network for listing on our new exchanges, and provide better security features against 51% attacks. This version does not have all the design elements of the white paper.
Phase 2: Full Implementation, Test version October 2018, Main net version end of 2018
Will include all features outlined in the whitepaper
“The Force” was announced on March 28, 2018 by Founder and President of the FAB Foundation, Paul Liu.
A hard fork as it relates to blockchain technology, is a massive change to the protocol that makes previously valid blocks invalid (or vice versa) and creates a new protocol version. During a hard fork, nodes (or users) need to upgrade to the latest version of the protocol software.
During “The Force” hard fork, the forked chain will inherit all the data on the original chain and become the main chain of the network.
Why is it important?
The design laid out in the FAB whitepaper is specific in features to enable a decentralized platform for global enterprise-level applications.
Legacy blockchain networks have low transaction throughput and high transaction costs making them ill suited for large-scale application uses.
“The Force” will solve these big problems that prevent blockchain networks from being mass adopted by both businesses and individuals alike.
“The Force” will introduce scalabilty to the network, providing to 1 million transactions per second for quick payment settlements and data transfers.
Additionally, it will make blockchain development more accessible by introducing a development layer that makes blockchain application development and onboarding quick and easy for businesses of all sizes to create decentralized applications.
Finally, it introduces powerful features such as:
interoperability with other blockchain networks such as Ethereum or NEM, allowing for cross chain transactions of cryptocurrency and data
with other blockchain networks such as Ethereum or NEM, allowing for cross chain transactions of cryptocurrency and data more than 1 million gigabytes of data storage
What technical innovations will “The Force” bring?
Currently there is no blockchain that is secure, decentralized and scalable. Some blockchains are quite secure and decentralized while lacking scalability, while other new innovators in the space have scalability and security features while lacking true decentralization (ex. the use of a limited number of validators). This is the main problem all blockchain projects are trying to solve.
Fast Access Blockchain has solved this problem through several unique innovations introduced in “The Force”. The fork will inherit all the data on the original Foundation Chain, which carries all the security features of the Bitcoin blockchain but will add Annex Chains, SCAR, KanBan and CCUA functions that were laid out in the whitepaper.
Annex Chains are similar to side chains and will carry a large number of transactions for a specific business use case, such as for an exchange, e-commerce transactions, supply-chain automation, an Internet-of-Things platform or a medical platform. There will be several thousand Annex Chains which will bare the transaction load, creating scalability.
Smart Contract Address Router (SCAR) is a unique account issued by a smart contract that executes transactions between the Annex Chains and Foundation Chain.
KanBan is designed to provide real-time updates and querying capabilities for the Annex-chain transactions in a global context without significantly increasing the burden on the Foundation Blockchain. It is a special module designed to prevent double-spending attacks and maintains decentralization across the network. Annex Chains are unable to be created to defraud users because Kanban validates all transactions.
The Cross Chain Unified Address (CCUA) protocol provides a convenient means for implementing transaction verification and simplifying the management of cross chain transactions. In fact, the CCUA is not limited to the FAB system only. It can be used as a universal cross-chain address protocol, adapted to any blockchains, for the implementation of generalized management for decentralized transactions. At this time, a new coin will be released.
How will “The Force” affect my Fabcoin?
During Phase 2 of “The Force”, a new coin will be created and be the official coin of the FAB project. Current coin holders will receive the same amount of the new coin as they currently hold of Fabcoin at that time. Details on the process will be released in the near future.
The new coin will be compatible with all the functions of the network.
How can I get more details?
Interested users and developers who intend to use the FAB network for their applications can simultaneously participate in the development of the Force along with their application. We encourage everyone to contact the team for more information. The convergence of wisdom and expertise will lead to success!
Email us at [email protected] or join our telegram at
or visit our developer Slack at
You can also see our codebase on github | https://medium.com/fast-access-blockchain/the-force-749fa40d5b3 | ['Fab Info'] | 2018-08-13 15:10:51.156000+00:00 | ['Articles', 'Blockchain', 'Development', 'Cryptocurrency', 'Updates'] |
dLab Cardano Fellowships. We’re funding 5 research and… | Our first dLab/emurgo accelerator cohort starts in just a couple short months, and we couldn’t be more excited to get to work. If you haven’t seen our Call for Startups yet, please check it out! We’d love to hear from you if you’re building a blockchain or DLT startup.
But I don’t want to talk about startups today. I want to talk about fellowships. In addition to five startups, we’re also interviewing and hiring five Cardano Fellows. Individuals selected for the fellowship will be funded for a full calendar year to explore new topics in decentralization and distributed ledger technologies with applications for the Cardano blockchain.
Or, said another way, we’ll provide a $60k USD stipend for you to come hang out with us and build neat stuff for a full year that will improve the Cardano ecosystem. We aren’t expecting candidates to pitch us a fully baked solution; you just have to convince us that the problems you want to investigate are worth working on, and that you’re the right kind of person to find useful solutions and produce tangible results.
If the startup accelerator is about funding people who have identified solutions, our fellowships are about funding people who have identified problems.
In most cases, we expect that selected fellows will be developers, but we’re also happy to fund researchers or others who may be learning to develop as part of their journey. We expect some fellows to work on open source, while others may be creating educational resources, products or infrastructure. Some projects we fund may have startup aspirations, with obvious routes to commercialization, in which case accelerated acceptance into a future accelerator cohort is an option. Others may achieve sustainability through a non-commercial route, for example, by building an active open source community around a critical new software library. Others might be pursuing pure research, where the outcome is publishing and advocacy. We’re open to all of these options — because the blockchain ecosystem is still at an early stage — and many different types of thinking are needed to push us forward.
Throughout the residency period, the selected fellows will work with SOSV and EMURGO leadership, publish regular updates on their progress, participate in events, and receive access to development resources, mentors and program partners. We’ll work with fellows early on in the process to establish goals, and make sure they’re getting the help and support they need to be successful. In addition to the core topic that the fellowship is exploring, fellows will also be expected to explore areas of overlap and work with resident accelerator companies.
The following is a list of topics we’re actively thinking about that are certainly appropriate for fellowships. If you’re interested in working in one of these areas, we’d love to talk with you. But we’re also interested in hearing about different areas we haven’t included here if you feel that they’re relevant to dLab and Cardano.
Blockchain Education
We need to work together to onboard the next generation of developers who understand blockchain technology, and how to write and test smart contracts using Solidity, Cardano’s new Plutus language, and existing languages like Python, Java, and Rust. That means creating content and scalable educational resources. We’re looking forward to working with fellowship applicants who are passionate about educating others, particularly those who are interested in unconventional or experiential approaches.
Of course, in the process of creating educational materials to explain difficult things, we often stumble on better, simpler, more abstract ways to do those things, which leads us to…
Developer Tools + Frameworks
Developing and deploying serverless code to a blockchain still feels very foreign to most developers. We’d like to simplify that experience in order to accelerate network growth and create more decentralized apps. This means that we’re looking for eager developers who are interested in tooling, OSS libraries, adapters, and frameworks (have you messed around with those new React Native bindings yet btw?). With the release of the IELE language and its VM, now deployed on the second Cardano testnet, it’s a great time to consider the tooling that needs to be developed around these innovations to get more developers in the front door and building useful things.
As usual, some inspiration can be found from tools that already exist. Embark, for example, is a framework for developing and deploying decentralized applications on the EVM, encapsulating common development, testing, and deployment operations and easing developer workflows. Another example is Ganache, part of the Truffle suite, which provides developers with a local blockchain that can be used for development and testing, without having to sync or deploy code to a live testnet; it’s both a great learning tool and a crucial piece of a decentralized development kit. Smart contract security testing is another area that needs more tooling focus.
Integrated Development Environments (IDEs) play a big role in tying this all together and providing streamlined tooling for developers; Remix, for example, is an IDE originally funded by the Ethereum Foundation that’s been forked to support the IELE VM. It’s a great starting place but we’d like to take this to the next level and talk about what the next logical evolution of these tools will be.
Infrastructure + Extensibility
The flip side of the developer tools coin are production services that make it easy to deploy and support certain classes of applicants. A good example is Emurgo’s recently-launched Yoroi web wallet, which leverages a hosted full node API infrastructure in order to provide lightweight access to clients. There’s a clear need for hosted enterprise services like this that enable other applications and we believe that there’s a real opportunity to create a product, or a self-sustaining public service to fill that gap. Best yet, we’ve already got a head start on this one and a first customer ;-). Interested?
There are similar needs in areas like interoperability and security. And, as far as Yoroi is concerned, there are also opportunities to build services and integrations on top of the wallet, transforming it into an application platform. If you have specific ideas here, we’d love to discuss them.
Accessibility + User Experience
As we wrote about in our Call for Startups article, one of the biggest obstacles to widespread adoption of any blockchain is the end user experience. We’re looking to fund fellows who have radical new ideas for humanizing the way we interface with wallets, smart contracts, and decentralized applications.
For example, if we’re going to scale demand beyond early adopters and special purpose financial instruments, we need to rethink the way that users interface with these services. The average user paying for a service in ADA or transacting with a dApp probably shouldn’t need to know anything about cryptographic hashes or QR codes. As such, one area we’re particularly interested in is exploring alternate ways to identify (e.g. naming) and discover individuals, contracts, and apps. Not that we hate QR codes or anything. But there’s certainly some room for improvement here.
Developing World + Social Impact
“If your project is solving for transparency, fraud and reducing the costs of moving money or identity, blockchain has a lot of potential to help,” wrote Doug Galen of Stanford’s Graduate School of Business, in an article written for FastCompany earlier this year about blockchain and social impact. Cardano and other blockchains have obvious potential to address many societal issues that plague us. Many of those problems are exacerbated in the developing world, where we expect many of the most important developments to have the greatest impact, likely leapfrogging adoption in the rest of the world.
With one of EMURGO’s stated goals being to create a more connected and equitable world using the Cardano blockchain, it’s no surprise that creative ways to achieve financial inclusion is at the top of our list of interest areas for fellowships. In addition to making financial products more inclusive, blockchains have the potential to grant digital identities and personal data management to millions of individuals with no formal economic identity, such as refugees. Other areas where we’d like to see fellows investigating include agricultural data exchanges (empowering rural agricultural producers), systems for recording and enforcing land use rights, decentralized energy markets, and preserving freedom of the press. Government use cases are frequently discussed; in addition to direct democratic process (admittedly not a near-term objective), the immutable and transparent nature of blockchains lends itself well to recording and auditing government activity, and can be used for compliance, verification, and distribution of benefits. Frameworks for inter-jurisdiction identification and compliance in smart contracts are a particularly interesting area for us.
Can blockchain applications curb government corruption? Put an end to human trafficking? Prevent hyper-inflation? We’re anxious to find out and we’d like to talk to people who are working on practical near-term solutions to society-scale problems, who also understand that solutions here are more than technical; they’ll require careful coordination with the right progressive policymakers.
Join Us
I hope this post has given you a better idea of what we’re trying to accomplish with the Cardano fellowship program. Our goal is to provide an alternative type of funding to interesting people who want to come experiment with us for a year, build interesting new things, and accelerate adoption for the Cardano ecosystem.
If you’re a hacker, researcher, or an ambitious learner who is passionate about the potential of decentralization, we’re accepting applications for our first five fellowships now. Apply by November 30th to be part of the first wave. | https://medium.com/dlabvc/dlab-fellowships-15498a9e0859 | ['Nick Plante'] | 2019-01-17 19:31:50.176000+00:00 | ['Funding', 'Fellowship', 'Startup', 'Blockchain', 'Research'] |
Wandering Warblers | Alaska’s Songbirds
Wandering Warblers
Tiny Migrant Songbirds with Arctic Aspirations
Townsend’s Warbler (Setophaga townsendi) is arguably the most striking warbler species that breeds in Alaska. 📷 Intermountain Bird Observatory/Zak Pohlen
Of fifty warbler species regularly found throughout the U.S. and Canada, 11 make their way to Alaska each summer to breed. Like many other migratory birds, warblers take advantage of abundant insects and prime nesting habitat to raise young in the U.S. and Canada, before traveling to warmer areas like Mexico, Central and South America, and the Caribbean to spend the winter.
Several species of warbler breed in Arctic National Wildlife Refuge in the summer, all using the treed or shrubby habitats found in the southern portion as well as the Brooks Range and its foothills. Here, Arctic Refuge’s Porcupine River cuts through the surrounding boreal forest creating picturesque cliffs and canyons. 📷 USFWS/Callie Gesmundo
Migration is no easy task for any bird, let alone warblers, which average less than the weight of one AAA battery! Prior to our understanding of bird migration, people had many interesting ideas about where migratory birds went during the colder winter months. Centuries ago, people thought birds hibernated underground or underwater, while others proposed they transformed into new species (as people noticed seeing different birds at different times of the year). Some even thought birds flew to the moon and spent the winter there. The truth is that these minute creatures (made mostly of feathers and hollow bones) traverse the globe, some doubling their body weight to survive non-stop flights across oceans.
In the 17tth century an English educator named Charles Morton wrote the first extensive work on bird migration, in which he wrongly suggested birds migrate to the moon. 📷 Michele Lamberti via Flickr
As birds go, Alaska is unique among North American states and provinces. Numerous species travel several flyways (migration routes) to meet in Alaska during the summer months. These migratory birds come from wintering grounds in the Americas, eastern Asia, Oceana, and Africa before arriving on their breeding grounds here in Alaska. And Alaska’s warblers are no exception to the incredible journeys and diversity of migratory routes and wintering locations.
Blackpoll Warblers
Blackpoll Warbler (Setophaga striata). 📷 USFWS/Zak Pohlen
Blackpoll Warblers are a summer breeder in the northern coniferous forests throughout Alaska and Canada. Males sport a striking black cap during summer (superficially similar to the Black-capped Chickadee) and sing an insect-like high pitch trill to attract a mate. Though not as flamboyant as other warbler species, it makes up for it with its fascinating migratory feats.
This map shows animated weekly abundances of Blackpoll Warblers throughout the calendar year. The data used to create this animation was collected by citizen scientists who submitted sightings of Blackpoll Warblers to eBird.
Attaching small devices known as geolocators, scientists tracked Blackpoll Warblers from their breeding grounds to their wintering grounds, uncovering one of the most impressive migrations among birds. Blackpoll Warblers that breed in Alaska travel up to 12,400 miles roundtrip each year, crossing the entire North American continent before making a non-stop 3–4 day transoceanic flight to northern South America.
Scientists use geolocators to collect light level data (i.e. daylight) to estimate an individual’s location. In order to collect the location data, the geolocator must be retrieved from the tagged individual, downloaded, and analyzed. 📷 USFWS/Zak Pohlen
Before they complete this amazing leap across the ocean, they spend a month fattening up along the eastern seaboard, doubling their body weight with fat reserves in order to survive the arduous oceanic crossing.
Birds do not grow feathers on every part of their body, but rather in uniformed sections called feather tracts or pterylae. Scientists check a bird’s condition by lightly blowing along the featherless spaces (called apteria) to check the amount of muscle and fat present. 📷 Intermountain Bird Observatory/Callie Gesmundo
Blackpoll Warblers have lost over 90% of their population since the 1970s. Understanding where these birds go is critical to our understanding of what’s driving this population loss.
Wilson’s Warbler
Wilson’s Warblers can be spotted in nearly all parts of the United States at some point during the calendar year. | https://alaskausfws.medium.com/wandering-warblers-ef003a77e358 | ['U.S.Fish Wildlife Alaska'] | 2020-11-11 15:18:17.651000+00:00 | ['Arctic', 'Migration', 'Birds', 'Science', 'Alaska'] |
How Code-Switching Causes More Harm Than Good | How Code-Switching Causes More Harm Than Good
Let’s talk about how we feel impacted by switching it up
When you see a chameleon, you may notice it possesses a unique ability. It can use camouflage, concealing itself in plain sight. These reptiles adapt their appearance for self-protection. Similarly, people assimilate into a variety of social circumstances. When social environments require strict codes of behavior, we often feel that assimilation benefits us — changing ourselves to fit in at school, work, and informal spaces. However, we do not make changes for the same reasons. Black people and other minorities often make changes to avoid prejudice, which causes more harm than good.
Changing the way you speak, particularly in the face of discrimination, can take a mental toll (Retta, 2019).
While it is perfectly normal to modify your behavior, it can harm your health when that transition happens under duress. Within America, Black people experience intense pressure to assimilate because of a resounding rejection of Black culture in the mainstream. White people incentivize compliance by providing better educational and professional opportunities to Black people who successfully adopt Europeanized behaviors. They also use negative reinforcement by engaging in microaggressions to curve nonconformist behavior. When white people laugh at a minority’s speech, fashion, or lifestyle, they ensure that this behavior will be less likely to occur in white-dominated spaces.
Some white people are only beginning to understand “the talk” that Black parents must have with their children. Our parents try to prepare us for a world that views us as a threat. However, there is another kind of “talk” that Black parents must give. They have to teach their children how to assimilate when around white people. As a child, I felt joy in my heart when my father proudly told me, “You speak the King’s English.” I knew that he discouraged my siblings and me from using the word “ain’t.” He did not want us to use African American Vernacular English (AAVE) because he wanted to protect us from ridicule.
Black children know that they must assimilate, but they must do so because their culture is considered inferior. Like a chameleon, Black people change their behavior as a form of self-defense. It is through demonstrating proficient use of American Standard English that Black people feel welcome in white spaces. It is their way of letting white people know that they understand American Standard English, attempting to counter stereotypes that Black people are intellectually inferior.
No matter the situation, it is clear that a person who must present different versions of themselves in different environments will be faced with feelings of stress, confusion, frustration, or even inferiority. These feelings can affect that person’s mental health (Adikwu, 2020).
When we see a chameleon change its hues, we often revel at the beauty. Nevertheless, we rarely consider that fear that motivates this behavior. Similarly, white people may admire a Black person’s ability to assimilate without understanding why they feel the need to do so. Given the discrimination that Black people and other minorities experience, code-switching is an extra burden to bear.
Code-switching, when it happens out of obligation, does more harm than good. When Black people and other minorities associate making changes with compliance to white people’s expectations, it normalizes systematic racism. As long as society treats European culture as the standard, we will live in a society that justifies discrimination. If we want to live in a more inclusive community, it must begin by accepting linguistic diversity in professional and educational settings. | https://medium.com/an-injustice/how-code-switching-causes-more-harm-than-good-18ede1a57ba0 | ['Allison Gaines'] | 2020-10-29 23:56:47.923000+00:00 | ['BlackLivesMatter', 'Mental Health', 'Equality', 'Race', 'Code Switching'] |
Ghost or stalker? Meet Phooey, Donald Duck’s fourth nephew | Ghost or stalker? Meet Phooey, Donald Duck’s fourth nephew
When an artist’s mistake becomes an urban legend
You don’t have to be a big Disney fan to know that Donald Duck has three twin nephews named Huey, Dewey, and Louie. They have been around for some time: artist Al Taliaferro introduced the trio in the comic strip “Donald’s Nephews”, published on October 17, 1937.
But have you ever heard of Phooey Duck, Huey, Dewey, and Louie’s long-forgotten fourth brother?
If you’ve never heard of him before, don’t worry: Phooey is not part of the Disney canon and, in fact, was born from the mistakes of some artists, who ended up fueling a curious urban legend that spread among Disney fans even before the internet!
The fourth nephew first appeared in “Mastering the Matterhorn”, a story drawn by Carl Barks and published in 1959’s comic book Vacation in Disneyland.
On one panel (left), we can see one of the nephews running from a Beagle Brother, while three others appeared in the background.
Wait, what? Four identical little ducks in the same panel? Apparently, Barks drew an extra duckling by accident. But the mistake would be repeated by Barks’ own pen in “Medaling Around” and “Beach Boy”, respectively published in issues 261 and 276 of Walt Disney’s Comics (from June 1962 and September 1963).
In the latter, the fourth duckling appears in two of the panels, and he’s particularly noticeable in the second — in which he observes his three “brothers” from the right end of the panel, with a strangely sad expression (below)!
The reasons for Barks’ slips are the most varied: from the tiredness of drawing stories practically on an industrial scale to the fact that the fourth duckling could be just an extra very similar to Donald Duck’s nephews — the prolific artist just forgot to give features that could easily distinguish him from Huey, Dewey, and Louie.
But except for specialists of Barks’ work, no one really noticed the wrong math in these stories.
Decades later, other artists began to make similar mistakes. In Bob Gregory’s “The Missing Mogul”, published in 1976, the silhouette of a fourth duckling very similar to Huey, Dewey, and Louie appears in one of the panels. In “River Run”, produced for the foreign market in 1980, artist Tony Strobl drew, for the first time, all four nephews together and fully visible (left), without shadows, silhouettes, or the possibility of being just someone very similar to the siblings.
José Mascaró did the same thing in “The Moving Island” (1984), where we can see without any doubt that there are four twin ducklings (below). Interestingly, Disney wiped out the fourth nephew in some of the reprintings of this story, which didn’t happen with other appearances. | https://medium.com/fan-fare/ghost-or-stalker-meet-phooey-donald-ducks-fourth-nephew-bbd1409a2bd6 | ['Felipe M. Guerra'] | 2020-10-28 09:37:57.334000+00:00 | ['Funny', 'Comics', 'Disney', 'Pop Culture', 'Books'] |
Robert Bussard on IEC Fusion Power & The Polywell Reactor | Robert, let’s start out with the current state of fusion research. I think most people are aware that fusion research is a collection of big dollar research projects and government & academia, but I’m not sure how much anybody knows about the progress being made in this field. Can you describe for us the current state of fusion research in the United States and how much money this progress is costing us?
Tim, that’s a very complicated topic because controlled fusion research goes back to 1952. Lyman Spitzer at Princeton University invented a machine he called the Stellarator to try and make controlled fusion happen, and from that point on until 1956 it was a classified program.
Dr. Robert W. Bussard
It was finally declassified after the Geneva atomic energy conference because the Russians appeared and spoke openly about it. From then on, there has been a continuous government investment in a particular line of research that was adopted by nearly everyone in the Western world.
So far, over $18 billion has been spent in 56 or 58 years, and they are really no closer to success than they were at the beginning, except in the sense that they’ve learned more about why things aren’t working as they wish they would.
The problem these fusion programs have is that they’re trying to control & confine fusion reactive ions using magnetic fields, and unfortunately, magnetic fields don’t really confine plasmas. A plasma is a combination of equal numbers of negative electrons and positive ions — it’s the positive ions that make fusion.
A plasma is a neutral thing overall, made up of both positive and negative charges, and magnetic fields will constrain their motion to a predictable level, if you manage to avoid instabilities, but they‘re not able to hold them in place, and that’s a fundamental physics problem that plagues all these Maxwellian local equilibrium plasma machines that everyone is trying to build.
It’s a fundamental physics difficulty that drives the machines they try to build to huge sizes. If you looked at the press releases from the government over the last few decades, you’ll see that these toroidal Tokamak big magnetic donut things are the size of small factories, and cost tens of billions of dollars if you scale them to the size where anybody thinks they might make net power.
They’re simply not economical — they won’t do what the utility companies want, and utilities have been telling them that for 30 years. The government programs go on anyway because it is good & interesting science — but it doesn’t necessarily mean we’re ever going to get to an economical fusion power plant.
I think OPEC and the 1973 oil crisis really highlighted our need for energy independence from foreign oil decades ago. Given that a lot of this research was born from those events, I’m wondering what the timeline is for the big government approach to Tokamak style fusion?
Well, I was an Assistant Director of the Thermonuclear Division of the AEC from 1971 through ‘73, when it was headed by Dr Robert Hirsch. He’s a brilliant man who earlier had worked with Philo T. Farnsworth on the things that we’re now pursuing.
At the time, Hirsch was the head of the thermonuclear fusion program at the AEC when OPEC decided to astound the world by raising oil prices arbitrarily, and suddenly there was an energy problem.
A Tokamak style fusor resulting from the Hirsch-era AEC programs
The AEC under Hirsch decided we’d capitalize on that to try and raise enough money to get fusion research really moving in the Atomic Energy Commission, because up until that point it had been run at a relatively small level. It was split amongst five national labs, with nobody really getting anywhere except understanding instabilities and the problems of confining neutral plasmas.
So we went to Congress and created a program that eventually reached something like $800 million a year in 1970-type dollars, because we could say, “look, if you can get fusion to work, you don’t have to keep using oil”.
However, the problem with those types of Maxwellian systems is that they all have to use deuterium, the second isotope of hydrogen, and tritium, the third isotope. Tritium is a radioactive material you have to manufacture by neutron capture in lithium-6, and it’s an enormously complicated process in an engineering sense. However, it’s probably the easiest way to make fusion between ions, and it’s also the only thing that can possibly work with a magnetic Maxwellian equilibrium system.
So we started the program in the early seventies and raised the money through Congress. The three of us who really put that through, Dr Hirsch, myself, and Dr Alvin Trivelpiece, said, “look, let’s get a lot of money here, make the national labs feel happy so they will be able to pursue their own interest in this at levels that they’re happy with and we’ll take 20% off the top to study the things that we know should really be done at our smaller scale.”
The problem was that all three of us left within nine months, and the people who inherited the program thought it was all real and the program should go forward with magnetic tokamaks, and it’s been that way ever since.
Well, in 1995 you wrote a letter to most of the physicists and government administrators in the hot fusion field, as well as the influential members of Congress on the funding committees in the House and Senate, saying that you, as well as the other two gentlemen, had supported Tokamak designs in the ‘70s for the political reasons you just discussed. I’m wondering what kind of response you got from them.
No response at all, because the plain fact is that what we were saying was that the program had been completely derailed after we left. Nobody understood that we were doing it to try to raise enough money to scoop some off the top to try new real things, and it became a budget program.
I think Professor Larry Lidsky at MIT said it best. He wrote an article in the MIT Technology Review called The Trouble With Fusion, which basically said that when the fusion program became very large in budget, it became a big-budget program and ceased to be a fusion research program.
In other words, everybody spent their time worrying about continuing the large budgets year after year, so that they could keep on with their large scale laboratories. That’s a human failing - I mean, people want things to stay the way they are and not be bothered with change.
Well, now that we’ve painted a bleak picture for the conventional approach, I want to talk about your technology. You’ve been working on a form of inertial electrostatic confinement fusion for years now that’s producing big results. Can you start by describing what IEC fusion is?
Yeah, let’s go back. It all begins in 1924 with Irving Langmuir and Katharine Blodgett in the East who wrote several papers in the Physical Review on how to produce negative potential energy wells by having oppositely moving ions and electrons in spherical, cylindrical and slab geometries.
Later, in the 1950s, Philo T. Farnsworth — the inventor of raster-scan television — conceived of a way to use these negative potential wells for controlled fusion. He produced the wells by injecting opposite-sign particles in opposite directions inside of spheres, and then he used a radial electric field to concentrate ions and make fusion in the center of the sphere.
Inventor Philo T. Farnsworth with a prototype fusor.
Farnsworth was a very ingenious man, and filed very extensive patents on his process in the late 1950s. Then he proceeded to try to build some of these machines.
The idea is to make a spherical negative electric potential well, and when you drop ions into it, they’ll circulate back and forth like rolling marbles — hitting each other now and then at the center.
As they come to the center, the density increases because of 1/R² convergence, and when they collide with another ion they’ll either scatter out of the center & give their energy back to the well or they’ll create a fusion event.
The fusion products are very energetic, so they leave the system radially and fly out toward the walls. This creates heating on the walls that you can capture with steam pipes and then use to run steam turbines to produce electricity. If the particles are charged, you can also use a grid to capture them for direct-electric conversion.
The point is that an IEC fusion generator is a spherical colliding beam machine, not a Maxwellian mixed plasma machine. The IEC is completely out of equilibrium.
Farnsworth achieved the spherical wells by putting in spherical screen grids — like two sieves back-to-back inside a sphere. He biased the internal screen grids to a high negative potential, which let him accelerate ions from outside through the screens and they would come to a focus in the center.
Farnsworth had a young postgraduate student, Dr. Robert Hirsch, working with him and together they built little machines that gave them record-breaking results, generating 10¹⁰ and 10¹¹ fusions per second on D-T out of these little devices only six to eight inches in diameter. That was the beginning of it.
A modern Farnsworth-Hirsch fusor generating fusion reactions in the lab.
There was a problem, though. Farnsworth knew — and his patents disclose — that his IEC reactors could never generate net power because the ions had to go back and forth through the sieve-like grids, and every time they went through the grids, they had a chance of hitting the wires of the grids and losing their power.
Remember, the ions had to go back & forth through these grids over a thousand times before they’d create a fusion event in the center of the reactor, and there’s simply no grid that’s transparent enough for that many transits. So ultimately, the machine could never generate net power, despite the fact that it did generate fusion output.
Along about that time, three other people at Los Alamos, Bill Elmore, Jimmy Tuck, and Ken Watson, wrote a paper, known as Elmore-Tuck-Watson, in which they inverted the potential geometry.
Instead of negative voltage on the screens, they put positive voltage on the screens and injected electrons, which would be accelerated inside of the positive screens and make a negative potential well. You could then drop ions inside the screen, and they wouldn’t have to go back & forth through the screens, so the ions wouldn’t hit the screen.
So they solved the problem of ions hitting the screens — but in their design, the electrons would have to make over 100,000 transits before a fusion event occurred, so the electrons ended up hitting the screens instead — which killed that idea.
Ultimately what this means is that no matter how you build these screen-driven systems, you’re likely to produce fusion, but not enough to ever generate net power.
Dr Hirsch was a key developer of this work going back to Farnsworth’s day, and from what I’ve heard he still has one of the machines on his desk at his office in Virginia.
It sounds like Farnsworth’s original idea had a lot of merit, and even produced fusion reactions. The only problem was ions hitting the screens. So what did you do differently to avoid this problem?
We got rid of the screens! I realized after some time that you couldn’t solve the problem of ions hitting the screens as long as you use screens to provide the electric potential. There’s only one other way you can provide the potential without screens — containing the electrons with a magnetic field.
The Bussard Polywell fusor, using magnetic fields instead of charged screens.
As it turns out, even though magnetic fields don’t confine neutral plasmas very well, they do confine electrons, because electrons have very little mass. So if you inject electrons into a quasi-spherical magnetic field that goes towards zero at the center and has a big field at the surface, the electrons will come back, be reflected by the field, and continue going back & forth — and they’ll never see a screen.
Now the problem in our design was how to keep electrons from migrating across the fields, hitting the magnetic coils that produce the fields, and losing power. So you trade the predictable losses from hitting screens, which you can’t solve, for a design loss to the magnetic fields and coils outside, which you can control by design.
That’s the basis of our patents for Polywell. We use a quasi-spherical magnetic field into which you inject energetic electrons that are trapped inside, go back & forth and create a negative potential well, and you drop the ions in and they never see any screens and circulate until they collide and create fusion events.
So if I’ve got this right, instead of using an electrostatic field to confine the ions, you’re using a magnetic field — and the only way for hydrogen to leave the reaction chamber is to convert into helium. You know, if ions are fed in as fuel & the reactor automatically ejects helium exhaust, this sounds more like a type of fusion engine than a traditional reactor.
Oh, it is. It’s more like a turbojet engine, in the sense that it’s not a device that you load up and ignite. It doesn’t do that. It’s a continuous dynamic through-flow machine where you are injecting electrons, which make the well and take the losses. The ions are independent of the electrons. You drop them in, they can circulate and make fusions without those losses.
It’s like a turbojet engine because you inject something in the front end, add fuel in the combustion chamber, then the reaction takes place and the exhaust products go out the back. It’s a continuous through-flow machine, really a type of power amplifier — not an ignition machine.
So the fuel that you’ve chosen for this device is deuterium, right? You’re running a pure D-D fusion reaction in it?
No, no, no. If Polywell works as it’s supposed to and as we seem to have proven it, it will work with any fusionable ions. It will work with deuterium-deuterium (D-D), deuterium-tritium (D-T), deuterium-helium-3 (D-³He), and with hydrogen-boron-11 (pB¹¹) — it just has to be driven at different voltages for these different kinds of fuels.
D-D is the simplest and cheapest because deuterium is available in all the seawater of the world. Believe it or not, in every glass of water you drink, and in every gallon of water on Earth, about 1/6000th is heavy water. You can buy deuterium at any gas welding shop — it’s a perfectly good fuel and generates heavy net fusion power.
However, D-D actually makes a neutron every now & then, so it’s not a non-radioactive fuel. It makes about as many neutrons as you get when you run a pressurized water fission reactor — without any fission products, of course.
Aneutronic fusion reactions convert pB¹¹ into helium nuclei
In the long run, what you want to do is run them on hydrogen-boron-11 because hydrogen + boron-11 makes three helium atoms and no neutrons. It’s the only aneutronic fusion reaction we know of, and produces three helium atoms — three alpha particles — that escape with high velocity from the center of the machine.
You can put grids outside the machine to electrically bias these alpha particles, slow them down, and use them to make electricity directly without the benefit of turbines. This gives us an interesting long range prospect for clean, aneutronic nuclear power that’s directly converted to electricity at maybe 80% efficiency. That’s the eventual goal.
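For reference, the reaction Bussard is describing is usually written as p + ¹¹B → 3 ⁴He, with the three alpha particles sharing an energy release commonly quoted at roughly 8.7 MeV per reaction. Because essentially all of that energy is carried by charged particles rather than neutrons, it can in principle be collected by the biased grids he mentions instead of a steam cycle.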
What kind of power output do you anticipate seeing in the near future from this technology, such as perhaps a successful final prototype?
Well, the next major step in the development program we hope to find support for is about $150 to $200 million, to build a 100 megawatt demonstration plant. That’s not a commercial plant — it’s only a demo plant.
Commercial plants might run at 100 megawatts, but you could scale them up to run as a standard 1,000 megawatt central station plant. However, smaller sizes seem to be desirable to utilities people because they reduce transmission line losses and make the grid more distributed.
How does the price tag for this compare watt-for-watt with the big Tokamak fusion projects?
We’ve done studies over the years of what we think our plant costs and what a “big fusion” plant would cost, and we seem to conclude that the actual cost of electric power would drop by about a factor of two from the thermal systems that produce steam and run turbines. It might drop even more if we ran it on pB¹¹ with direct electric conversion.
The problem with pB¹¹ direct conversion is that you have to redesign the plants, because you don’t have any turbines and a lot of the infrastructure that conventional plants have. If you use D-D, you can use some of the existing infrastructure because it’s a steam turbine driven system, and you’ll have to put the reactors in pits just like for fission. That’s an easy retrofit, though.
You can put a D-D power system down next to an existing steam plant, cut into the steam lines, and shut off either the oil boilers or the fission reactors to run it on fusion. That immediately saves you money, because the fuel is a very important part of the cost of electricity.
This current project has been underway since 1986, right? How much progress were you able to make in that period of time?
We had 20 years of small-scale research in our laboratories, with $21 million of government money invested in it and over 200,000 man-hours by technical staff working on this project.
During that time, we’ve tested 15 different prototypes of these machines, and we‘ve managed to define, understand and solve all 19 of the critical physics issues that we found — and we solved them one at a time.
It’s been a very long & tedious process because our funding was always very small. The Navy, which supported us, couldn’t ever put the right amount of money into it. If they had, it would have raised eyebrows on Capitol Hill and generated complaints from the DOE about the allocation.
The Polywell WB6 fusor design
In any case, they said “we can only fund you at a small level — can you do anything with that?” Well, we did. It took us 20 years, but we did finally succeed in solving the last basic physics problem during our final test. All the physics was done, and we were ready to do engineering development.
The Navy discontinued funding for your project, and I’m wondering what their reasoning was after 21 years of support?
The reason our funding died was because of the Iraq war. The war budgets have been just consuming everything in sight in Washington — and when it came time for the annual budget process, the total Navy R&D budget was cut by 26%.
So the Navy didn’t just cut us — there were cuts all across the board. One of the things that was cut out was a thing called the Navy energy program, and we were a victim of that.
We managed to find some friends in the Office of Naval Research who kept us alive for about nine months, but that was it, and at end there was no more money.
After they pulled funding, I understand that you kept running tests up until the very moment the power was shut off, and that you had some promising results that emerged from later analysis of those final tests.
Yeah, it wasn’t the power being shut off. We were looking at our budgets; we had to pay leases on our lab space and commit to yearly leases, and we couldn’t do that with the budget monies that were left.
We had a plan to close down by the 1st of November, and we’d just started closing the lab and getting rid of the equipment when we ran some tests on a big machine called WB5, which opened our eyes to some things we should’ve seen for 10 years and didn’t.
We realized we had missed a critical point in the old problem of electron losses and quickly designed & built a final machine called WB6, our solution to the problem.
By then we were approaching November 1st, and we still hadn’t tested WB6 in heavy fusion conditions. But I told the team, “we have to do this, we have to finish this”, so we kept right on working past the time to shut the lab down.
On November 9th & 10th we ran the machine four times, and finally produced fusion from D-D at a rate 100,000 or more times higher than had ever been achieved by Hirsch and Farnsworth at the same voltages. We realized that we’d solved the electron loss problem. Finally, at last it was solved.
If you had the solution, what made you stop when you did? What ultimately led you to shut down the lab?
On November 11th, we tried another test run, but the hasty construction caused a short in the magnet coils and the thing arced and blew — it didn’t blow up, but arced & melted down. We didn’t have time or money to rebuild it, so on Monday the 14th we started shutting the lab down.
In six weeks we took the whole lab to zero and it all disappeared. We didn’t even know our final results for a month because we didn’t have time to reduce the data until December.
When we finally reduced the data, we looked at it and said, “Oh My Lord, look what we’ve done! It’s actually worked….the last piece is there & the puzzle is solved.” That’s it. Quite ironic. | https://medium.com/discourse/robert-bussard-on-iec-fusion-power-the-polywell-reactor-be4a59dc7318 | ['Tim Ventura'] | 2019-12-13 04:49:30.830000+00:00 | ['Fusion', 'Futurism', 'Science', 'Nuclear Energy', 'Energy'] |
City of Pain | And after one of those journeys that involved the drawing of an imaginary line across the Earth, a flight that brought her to the gleaming nowhere of an airport in the early hours of the morning, the traveler caught another flight, around noon, and continued her journey. She spent six hours on this second plane, or it might have been 16 hours; the difference between the two was difficult to tell in that airborne suspension. The traveler’s wristwatch made one claim, her calendar made another, and her jet-lagged body made a third. Finally, around midday, the plane began its approach for landing, and the traveler could see from the air what resembled, in almost every respect, a familiar metropolis: the same twisting highways, the same elongated parks, the same repeating towers. It reminded her, as aerial views always did, of what her mother had once told her about the collapse of enormous stars, which could shrink to a width no greater than that of a city. Judging only by distance covered, the traveler might well have just circled the globe and returned to the place where she had started. But there was something about this view that convinced her otherwise: The immense city was circular, and the tangle of highways in its center resolved neatly into several major roads leading outward, like the spokes of a wheel. This cartographic regularity was how she knew that she had arrived, for the first time, in the city of Reggiana.
In the terminal, she saw a signboard featuring the crest of the city, illustrated with three dolphins. She’d arrived in late winter, and the weather was inconstant. There were snow flurries one moment and the softest sunshine the next, and yet, she was soon to discover, no matter what the weather looked like on a given day, the temperature was always higher than expected. She who knew how similar cities could be was now interested only in their differences. When she discovered that all the citizens of Reggiana were refugees, recent arrivals from elsewhere, she knew she had come to the right place. The city had been rapidly constructed, everyone coming in at almost the same time; the founding myth said that between the city’s establishment and its peak population, hardly a full season had passed. This collective newness meant that learning the culture of Reggiana was itself central to the culture of Reggiana. Like Qom or Touba, Reggiana was a holy city. As in Jerusalem, Lhasa, Mecca, and Ile-Ife, the experience of the numinous was pervasive in all its streets and on all its walls. To touch a railing or open a gate in Reggiana was to be reminded of human mortality. There was no interaction that was not imbued with a tremendous weight; each was like a sugar cube as heavy as Mount Everest. All this was true, but the atmosphere of the city was neither ascetic nor dramatic. The citizens had, instead, a sober sense of living in an ensorcelled world in which the conflict was with what they could not see. They knew that what was invisible was not thereby imaginary. They were under the scourge of a Visitation, and it was for this reason that all Reggiani carried a curfew in their heads and a knot in their hearts.
At the time of the traveler’s visit, the hand of Death was heavy on that careful city. As in all earthly places, people were born and people died, but Reggiana also daily endured numerous additional bereavements. So extensive was the scourge that, for many of the citizens, the day’s first activity was to check the obituaries. As one carter said to the traveler, taking her to collect her rations: “Each one of us checks the obituaries not only to see who has died during the night but also to confirm that it was not us.” On his cart was painted the city’s crest, which featured a doorknob. “Reggiana,” the carter added, “is one of the few true democracies in the world. Anyone at any moment might succumb to the Visitation: the rich, the poor, the educated or simple, the well-known or anonymous. The actual population of the city is unclear, for we stubbornly include the dead on our census rolls.” He told her that one of the first to die during the Visitation was a customer of his, who had been a leading architect of the city. That man was still referred to, the carter said, in the present tense.
Photo: Teju Cole
The Reggiani traded stories the same way merchants in other cities traded spices, leather goods, perfumes, rugs, and carvings. Each story told in Reggiana was different, and only in memory, only when the traveler attempted to recount to herself the story that she heard, did it become evident that the stories were all one story, variations on a single tale which, in one way or another, were connected to the Visitation. But she soon forgot this realization, and, the next day, encountering new stories, each was as fresh in her hearing as it was in the storyteller’s telling. One afternoon, writing down an account of one who had recovered from the Visitation, she heard the church bell ring, summoning no one, and the azaan sound from the next neighborhood over, unattended by the hurry of feet. No one gathered in the churches or synagogues, no one assembled in the temples or mosques, the schools were empty, the shops remained shuttered, the people stayed home. But the inhabitants of Reggiana were deeply interconnected, and all their civic and social life was conducted within domestic walls. In each house in the city was a means of communication. Families from all over the city reached each other from these humble enclosures; businesses large and small were operated from kitchen tables and bedrooms; lovers, ex-lovers, and future lovers engaged in all the stratagems by which desire could be cultivated in the absence of the beloved’s body.
There are particular forms of knowledge possessed by those who have had to rebuild their reality. The Reggiani were great gourmands, cooking with flagrant disregard for borders, with palm oil, fish sauce, garri, beets, yoghurt, harissa, anchovies, yuca; but what really marked out the cuisine of Reggiana was the love of concentration in all its forms: Given a choice, they would choose demi-glaces instead of stocks, spirits instead of beer, wine instead of water, paste instead of tomatoes, chiles instead of capsicum. Alongside this love of whatever was reduced, preserved, dehydrated, spicy, and pickled, was an inclination toward frugality and an abhorrence of needless waste, culinary or otherwise. No city of comparable size produced less garbage. These habits came naturally to whoever came to the city. The traveler had only been with them a few days when she noticed that she was writing her notes down with the same pencil, day after day, sharpening it each time it became blunt. She felt no inclination to go outside to buy a new one nor would she have been able to as the stationery stores were closed. Shorter and shorter the pencil became until it was the size of a golf pencil. The traveler, writing, marveled at this, not only because she had not persisted with a single pencil since childhood, but also because she was thrilled to share the modesty of the citizens of the city in this way, though she was, herself, only a temporary resident. | https://level.medium.com/city-of-pain-1f77a5eae1e9 | ['Teju Cole'] | 2020-04-02 18:59:33.115000+00:00 | ['Creative Writing', 'Short Story', 'Fiction', 'Literary Fiction', 'Writing'] |
First Draft vs. Final Draft | First Draft vs. Final Draft
What is the difference between the first draft and the final draft of a story or novel?
The first draft contains everything you wanted to say. The final draft contains everything you needed to say—those things that are essential to the story.
The first draft is likely to have more abstractions, while the final draft should be brimming with significant detail.
The final draft should not contain every detail you find interesting or clever, every detail that came to you during your many inspired and challenging hours of writing. It should, instead, contain relevant details that add meaning. Purple flowered couch may be less meaningful, for example, than the broken pot beneath the window. The purple couch is merely a matter of taste, whereas the broken pot indicates that something has happened—a break-in, maybe, or a more general state of disrepair in the lives of the characters.
The final draft may be longer or shorter than the first draft, depending on your inclinations, but it should be more focused.
I usually edit out many thousands of words over the course of my revisions, but some writers create a skeletal first draft and flesh it out later. I tend to write an overblown first draft and pare it down over time. Whether you pare down or expand upon your first draft, in the end, your final draft should be more focused. The associations among the various parts of your narrative will be clearer, and the themes will have been strengthened by the actions and observations of the characters.
The first draft contains everything you wanted to say. The final draft contains what is essential to the story.
The first draft is your baby, the thing you can’t let go of. The final draft is your concession that a book must be interesting, it must be cognizant of an audience, and it must make the reader want to keep turning pages.
By “concession” I do not mean that you have sold your literary soul, only that you have found a way to combine your best vision and your hard-won narrative skills, in order to make a thing of beauty that is both meaningful and entertaining.
Michelle Richmond is the author of four novels and two story collections. Get my weekly writing and publishing tips, or like my facebook page for book giveaways and more. | https://medium.com/a-writers-life/first-draft-vs-final-draft-8b3c9378518f | ['Michelle Richmond'] | 2016-05-30 16:44:55.794000+00:00 | ['Drafts', 'Writers', 'Writing'] |
Beyond Coding: Watson Assistant Entities — Part 4, New System Entities | Photo by Franck V. on Unsplash
In the previous article we looked at system entities — a group of Watson pre-packaged entities available for commonly used concepts. We also saw that this useful feature allows us to quickly build a powerful assistant without having to spend time creating synonyms or Regular Expression patterns.
But system entities are going through an improvement process! In this article we’ll be looking at the upcoming new system entities and the major improvements upon the current system entities.
System Entities, New And Old
Photo by Samuel Zeller on Unsplash
We should first understand that at the time of this article, the new system entities are in Open Beta. This means that:
• New system entities may slightly change in the future
• New system entities are available to users, not just Beta users
• They are only available for certain languages (see the System entity feature support details table on the Supported Languages page for more information)
Also, not all system entities have gone through changes. We’ll only be looking at system entities which have been overhauled. If a system entity is not mentioned in this article it means that it will continue with its current behaviour (see the previous article Beyond Coding: Watson Assistant Entities Part 3 — System Entities for system entity behaviours).
Now let’s look at some of the new system entity features to help you get an understanding of what’s available and what might be useful to you.
The Date System Entity
Photo by Eric Rothermel on Unsplash
As we’ve seen, the Date system entity identifies dates and date ranges, but the new Date system entity allows for additional date detection.
With the current Date system entity, a customer might mention an incomplete date, such as “Show me the invoice for 11th of May”. The Date system entity will assume the customer is talking about a future date (such as 11th May 2020) — in other words, the following year will be inserted into the date to form a complete date structure.
With the Alternatives feature, the system entity will now supply a list of alternative dates — including the current year. This allows us to get a better understanding of the specific date without having to create complex structures to define aspects such as year.
The Date/Time link now improves the connection between dates and times mentioned in a customer’s query. For example, if a customer were to say “I want to make an appointment for Monday at 2pm”, the current Date and Time system entities would detect the relevant information. With the new Date system entity, both date and time are detected by the Date entity.
This means the assistant has detected a link between the two components, that the customer is likely talking about a time for the specified date. With the current system entity no such link is detected and the system treats the information as two separate items.
If you’ve ever had to deal with public holidays you know that sometimes they occur on slightly different dates. The Festival improvement helps alleviate some of the holiday headaches!
With the Festival improvement the system entity will detect that a significant day has been mentioned, such as New Year. The added advantage is that the system will also use the date and populate the Alternatives feature to help determine the year — does the customer mean New Year from the current year, or the following year?
The Range Link feature allows the assistant to detect a range of dates mentioned in the customer’s query. For example, if a customer were to say “Can I make a booking from 16th May to 1st June?”, the assistant would detect that a range of dates has been mentioned, as well as the start and end dates — in this case starting at 16th May and ending at 1st June.
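To see what your application actually receives for an utterance like this, you can send the message through the Watson Assistant SDK and inspect the entities in the response. The sketch below uses the Python ibm-watson SDK; the API key, service URL, assistant ID, and version date are placeholders, and the exact fields returned for the new system entities (such as interpretation and alternatives) reflect my reading of the beta documentation, so check the current API reference rather than treating this as authoritative.

from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials: substitute your own service details.
authenticator = IAMAuthenticator('YOUR_API_KEY')
assistant = AssistantV2(version='2019-02-28', authenticator=authenticator)
assistant.set_service_url('YOUR_SERVICE_URL')

# Create a session, send the date-range utterance, and print what came back.
session = assistant.create_session(assistant_id='YOUR_ASSISTANT_ID').get_result()
response = assistant.message(
    assistant_id='YOUR_ASSISTANT_ID',
    session_id=session['session_id'],
    input={'message_type': 'text', 'text': 'Can I make a booking from 16th May to 1st June?'}
).get_result()

for entity in response['output']['entities']:
    # For @sys-date with the new system entities enabled, I would expect the
    # start and end of the range to be linked and any ambiguous year to show
    # up under 'alternatives'; verify the field names against the docs.
    print(entity['entity'], entity.get('value'),
          entity.get('interpretation'), entity.get('alternatives'))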
The Number System Entity
Photo by Mika Baumeister on Unsplash
The Number system entity also has a Range Link addition. This is similar to the Date Range Link, but based on number detection instead of date detection.
For example, if a customer were to say “I want to buy items 12 to 15”, the system entity would detect the start and end of the range — allowing for a smoother understanding of the customer’s interaction, rather than forcing the customer to identify every item number.
The Time System Entity
Photo by noor Younis on Unsplash
The current Time system entity automatically assumes a time indicator. For example, if a customer says “Can I make a booking at 2?”, the system entity assumes the time discussed is 2am. In some cases this might be correct, but not in all cases.
This is where the Alternatives feature helps to determine the correct time. This feature provides an alternative time to that which was identified — in this case it would detect that the alternative is 2pm and allow the system to confirm the correct time.
If a customer uses the 24-hour time format, for example, “Can I make a booking at 14?”, the entity will detect that 2pm is being discussed and will not provide an alternative time.
The Time system entity also has a Range Link feature. If a customer were to say “Can I make a booking from 2pm to 5pm”, the system entity would detect the range starting at 2pm and finishing at 5pm.
The advantage is that the system entity will also recognise a time range when the time indicator isn’t provided — and customers often do this!
For example, “Can I make a booking from 2 to 5?”, would be detected as a time range beginning at 2am and ending at 5am. When associated with the Alternatives feature, the system can begin to identify if the customer was truly talking about 2am, or 2pm.
However, if the customer were to say “Can I make a booking from 14 to 17?”, the time range would be detected as 2pm to 5pm and an alternative wouldn’t be provided.
More often customers will use 12-hour time rather than 24-hour time, so the Alternatives feature is a great way to clarify the customer’s interaction.
One pitfall to look out for is the case where customers omit time indicators and don’t use time formats — for example, “Can I make a booking from 230 to 530?”. The Time entity won’t identify this as a time range.
This is where the Number Range Link can be used. The Number system entity will identify the range starting at 230 and ending at 530. This would allow your assistant to then ask subsequent questions to confirm the time range, or to ask the customer to include the time indicators (am/pm), providing a much better experience than not detecting the range at all. A rough sketch of that fallback logic follows.
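Here is a minimal sketch of that application-side check, written in Python. It assumes your code has already pulled the entity list out of the message response and that each item is a dict with 'entity' and 'value' keys (the usual shape for Watson runtime entities); adjust the field names to whatever your API version actually returns.

def clarify_bare_time_range(entities):
    # If two plain numbers were detected but no @sys-time, ask about am/pm.
    times = [e for e in entities if e['entity'] == 'sys-time']
    numbers = [e for e in entities if e['entity'] == 'sys-number']
    if not times and len(numbers) >= 2:
        start, end = numbers[0]['value'], numbers[1]['value']
        return ("Just to confirm, is that {} to {} in the morning, "
                "or in the afternoon or evening?").format(start, end)
    return None  # nothing to clarify; let the normal dialog flow continue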
One thing to note, if the customer provides a time format, such as “Can I make a booking from 2:30 to 5:30?”, the system will detect the time range (and provide alternatives where required).
The last improvement we’ll look at is the Part Of Day feature. Currently the Time entity interprets time-of-day references as follows:
• “Morning” is interpreted as 9am
• “Afternoon”, “Evening” and “Night” are interpreted as 6pm
The Part Of Day feature extends this existing component to provide a wider range of times by returning a range of hours, as follows:
• “Morning” is now interpreted as 6am to 12pm
• “Afternoon” is now interpreted as 12pm to 6pm
• “Evening” is now interpreted as 6pm to 10pm
• “Night” is now interpreted as 10pm to 11:59pm
This wider range of interpretations allows for a better understanding of a customer’s interaction.
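If your own code needs to turn one of these labels into concrete booking hours, a small lookup that mirrors the ranges above may be all that’s required. The label strings and the 24-hour tuples below are assumptions about how your application names things, not part of the Watson response itself.

# Hour ranges in 24-hour time, taken from the list above.
PART_OF_DAY_HOURS = {
    'morning': (6, 12),
    'afternoon': (12, 18),
    'evening': (18, 22),
    'night': (22, 24),
}

def booking_window(part_of_day):
    # Returns (start_hour, end_hour) for a label, e.g. 'evening' -> (18, 22).
    return PART_OF_DAY_HOURS[part_of_day.strip().lower()]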
Other System Entity Improvements
Photo by Roman Kraft on Unsplash
The features discussed above are some of the more commonly used, but there are several other features available which you may find useful.
You can find more information about the features discussed here and technical changes for new system entities at the New system entities page.
More About Entities
Now that we’ve taken a look at system entities, we’ll be taking a closer look at implementing user-generated entities. In future articles, we will examine:
• How to use entities
• How to determine which entities we need
• Entity Best Practices
Stay tuned for future updates! | https://medium.com/ibm-watson/beyond-coding-watson-assistant-entities-part-4-new-system-entities-5db4447524a3 | ['Oliver Ivanoski'] | 2019-11-06 21:39:05.210000+00:00 | ['Artificial Intelligence', 'Chatbots', 'Wa Editorial', 'Editorial', 'Watson Assistant'] |
5 Tips on How to Write Every. Single. Day. | Here’s how I write every single day:
Remove the preciousness.
Let go of the idea that the thing you write has to be perfect, has to generate a lot of attention, and has to be both captivating and meaningful at the same time. Sometimes something can be captivating and not meaningful, and sometimes something can be meaningful and not captivating. But if you wrap yourself up in expectations, you probably won’t write anything at all.
I’ll be honest with you: the first blog post you create probably won’t get much traction. Some do, but those are the outliers. And that’s fine! Even if you don’t create the most click-baity of articles, if you publish consistently and your work has value for people, you’ll find your audience.
But first, remove the preciousness. The blank page is not pristine and pure and perfect — It is a coloring book. It wants to be scribbled on, messed up, crumpled, and begun again.
Remove the preciousness, and begin playing. This is the internet…there are unlimited pieces of blank paper. They only have worth when you give them worth.
Walk. Swim. Be active. Socialize. Step away.
Don’t write.
At least, don’t set aside large chunks of time for writing.
Never say to yourself: I’m going to write all day today.
Instead, tell yourself: I have to write this blog post in an hour, because I’m going for a walk then.
Human beings work when we have time limits. The more time you give yourself, the more likely you are to procrastinate. If you do need eight full hours to write one blog post, you can break it up and write a little bit each day.
If you want to write…do other things. Give yourself a time limit. Give yourself that pressure of “If I don’t write this right now, I’ll have to wait until the kids are asleep/the laundry is done/after I come back from my walk.”
When writing is something special that you have the opportunity to do…as opposed to a chore you have to do…you’ll do it with more determination and eagerness.
Recognize that no one is making you do this.
Writers and other artists sometimes feel a strange pressure that we put upon ourselves.
Society doesn’t ask us to write. Literally no one asks us to write. It’s something we do because we want to.
But sometimes, we don’t feel that way. We might see creativity as a way out of our current lives and problems…If I could just write a novel, if I could just have a successful blog, if I could do this, it’ll be my way “out.”
Especially now, with people feeling insecure about their jobs, this pressure is mounting.
It’s important to keep track of the fact that no one is making you do this. You could create a viral blog post and then never make money again. Hell, you could create a viral blog post and not make a single dime. Alternatively, you could spend years writing and just make a moderate amount of $100-$300 a month.
Which, hey, it’s the phone bill and then some. That’s not bad!
Remove the pressure that writing will utterly transform your life, and remember that this is something you’re doing primarily for fun.
So don’t feel bad about not writing — You don’t have to, any more than you have to play video games.
Realize you’ll always have topics.
Sometimes, writers are afraid they’ll run out of things to say.
You won’t. Forget about this immediately. You might continue to revisit the same topics, over and over. I frequently write about women who regret marrying and having kids, medical trauma and anxiety, and PTSD. This isn’t everyone’s cup of tea, but my stories around these topics definitely still get traction.
You don’t need to worry about being trendy. Just keep writing. Eventually, something you write will line up with the news cycle. But if you’re not a news writer or present yourself as someone who keeps up with trends, you don’t have to keep up with the ever-changing news cycle.
Write about topics you enjoy. Even if it’s dated…like gender roles, for example…it will resonate with someone. If you thought to create it, then there are still people out there who thing about it, too. Many people, I’ve learned, still struggle with the idea that as a woman, there are certain duties and obligations they have to fulfill, even though it’s a topic I used to fear was “dated.”
Ask what feeling you want to provoke in others.
I think this is more important than the topic. What impact do you want your work to have? Do you want people to laugh? To be angry? To be driven to act?
I try not to use anger and fear as motivators. Anger, fear, and shock are good clickbait, and I probably would have more readers and clicks if I utilized these emotions.
But I have made a personal choice to try instead to evoke humor, hopefulness, and introspection. That’s not to say my pieces never provoke anger — Sometimes they do, especially when I talk about sexism, my experiences with racism, and when I’m satirical.
However, when I create something and imagine my audience, I am trying to provoke a specific positive reaction in them.
Instead of worrying about the craft, format, and timeliness of a piece, ask yourself what kinds of feelings you want your ideal reader to come away with.
I usually imagine my reader as one person. It’s impossible to write something for everyone. I imagine she’s one person, probably a woman, who has similar values as myself.
This might seem limiting, and I know that not all of my readers fit that description, and I’m not saying that they should. It’s just a tool to help me write. Thinking of your audience as the entire internet is not helpful. You know around who your audience is — How they lean politically, what their hobbies are, what they like and dislike.
Your audience shouldn’t always dictate everything you write, but if you’re writing to be read, and not writing a diary, eventually you must consider how the work will be received. After all, you do want people enjoying what you write…I think any writer will agree that they want people to read their work by choice. | https://medium.com/energy-turtle-expressions/5-tips-on-how-to-write-every-single-day-94360e793ff6 | ['Lisa Martens'] | 2020-12-28 14:15:54.116000+00:00 | ['Writing Tips', 'Writing', 'Blogger', 'Creative Writing', 'Blogging'] |
Writing for global audiences | Draft clear content so users know how to take action
Illustration by Alexa Ong, Next Billion Users illustrator
I wrote this story with assistance from Luke Easterwood, LeAnn Quasthoff, Jessica Caimi, and Erik Ninomiya, UX writers who have focused on the needs of the Next Billion Users.
John Steinbeck said that “Poetry is the mathematics of writing and closely kin to music.” The Nobel Prize winner in Literature understood the challenge of writing well. Writers must always consider the effect their word choice and grammar have on the meaning and flow of the text, as well as how those choices might make their readers feel.
It can be especially challenging to write for a global audience made up of different cultures, languages, abilities, and expectations. Before you begin, it’s essential to question your own assumptions, deeply understand your users’ contexts, and recognize the potential impact of your writing choices.
The writing guidance you see here includes principles and recommendations for both new and experienced internet users that were formulated after doing user testing around the world. Before going over writing examples, let’s review some of the main challenges involved in creating content for diverse groups of users.
Writing challenges
Challenges users face include limited literacy, limited technology experience, or both. Language and technical terms should be simple to accommodate a wide range of user fluency.
Tech fluency
Tech fluency means understanding the capabilities of devices and apps. Where mobile use is less common, gestures, such as swiping and dragging, can be unfamiliar. Visual cues and videos are needed as instructional tools. How to swipe might be communicated with a simple animation of the horizontal scroll or swipe as the user lands on the screen. (To learn more, consider following the Material Design guidelines for gestures.)
Language
Users interact with language in a variety of ways. For example, many users in India find inputting text in Indic scripts, like Hindi, difficult due to the complexity of the script. Users might be more comfortable typing in English, in Hinglish (a mix of Hindi and English, written in the Latin alphabet), or in Hindi written in Latin letters. In Kenya, many use a Swahili interface, but search in English.
Translating many technology terms can be counterproductive to enhancing comprehension and can, in fact, make terminology more confusing. Many users first learned technological terms, such as Bluetooth, software, and hardware, in English and might be unfamiliar with the translations in their native language. To determine which technical vocabulary to keep in English, it is best to consult with a localization and translation expert.
Writing strategies for global users
As you’re designing experiences for global users, approach language with care and consideration, and keep the following best practices in mind:
Reduce ambiguity for users with clear and simple language
Provide translation guidance to translators and linguists
Create relevant content that matters in users’ daily lives
Examples
Here are some examples of these best practices in action.
Keep language simple
As explained in the Material Design writing guidelines, it’s important to keep words simple. This principle is especially important for emerging markets since non-native English speakers might be using apps in English and could find complex sentence structures hard to understand. New internet users might not understand technological concepts, so explaining steps, like opt-ins and permissions, requires even more care.
When possible, avoid long introductory clauses.
Do (green): The welcome message has a short introductory phrase. Don’t (red): The welcome message has a long introductory clause that may be hard to follow.
Use basic sentence structure. Try to target a single thought per sentence.
Do (green): The UI text has 2 sentences with a different thought per sentence. Don’t (red): The UI text has one long sentence with 2 different thoughts.
In general, choose clarity over brevity. Longer messages can add needed context and increase trust, while short sentence fragments can lead to confusion. Add clarifying details if you have the space.
Do (green): The longer message clearly states how much storage is available on the device. Don’t (red): The shorter message doesn’t explicitly state how much storage is available on the device or SD card, possibly causing some users to feel uncertain.
To learn more about the importance of clarity when writing about data storage and cost, read Nurture Trust Through Cost Transparency.
Consider using more words to avoid a potentially complex or unfamiliar term.
Do (green): The question about knowing more than one language is clear and understandable. Don’t (red): The term multilingual might be too hard to understand.
Refer to word lists available online. Different word lists focus on different goals. For example, Ogden’s Basic English lists 850 of the simplest and most useful words. It’s composed of simple interactions, such as put, give, and go, and picturable words, such as arm and ticket. Consider using this word list — or one like it — to keep your UI language easy for everyone to understand.
You can also consider creating your own product, category, or location-based word lists as needed.
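One lightweight way to act on this advice is to lint your UI strings against whichever word list you adopt. The sketch below is a rough Python example; the file name, its one-word-per-line format, and the sample strings are all placeholders for your own assets, and anything it flags still needs human judgment.

import re

# Assumed format: one allowed word per line, e.g. an export of Ogden's list
# or your own product-specific vocabulary.
with open('basic_word_list.txt') as f:
    allowed = {line.strip().lower() for line in f if line.strip()}

ui_strings = {
    'storage_warning': 'Not enough storage. 250 MB available on your device.',
    'language_question': 'Do you know more than one language?',
}

for key, text in ui_strings.items():
    words = re.findall(r"[a-z']+", text.lower())
    unusual = sorted(set(words) - allowed)
    if unusual:
        # Not every flagged word is wrong (product names, units will show up),
        # but each one is worth a second look for a simpler alternative.
        print('{}: review wording for {}'.format(key, unusual))

This works better as a review aid than as a hard build gate, since plenty of necessary words will never appear on a basic-English list.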
Lost in translation
As your app might be localized for other languages and regions, it’s key that your UI language can be easily translated.
Writers sometimes find it tempting to use idioms or slang to create a conversational and approachable tone. But it’s important to understand that colloquial expressions can prove difficult or impossible to translate, reducing global comprehension.
Where there is imagery, think about how the words and images go together. Idiomatic language that plays off an illustration might make it challenging for visually impaired users who rely on screen readers to understand the context and meaning. The language should be comprehensible on its own.
For example, a ride-sharing app might use the colloquial term “Your chariot awaits” when a car is approaching the customer’s location. The phrase might be cute for a native North American English speaker, but not make sense to approximately 20% of people in the US who speak another language at home and to those outside of the US whose device language is set to North American English. If a Bosnian-speaking user translates “Your chariot awaits” with an automatic computer translation service, it is literally translated to “Vaša kočija čeka,” not to the Bosnian equivalent of “Your ride is waiting.” The rider might wonder where their 2-horse chariot is! If the same app showed a notification saying “Hold your horses” to inform a waiting passenger that their ride is late, the non-native North American English speaker might also not understand that “Hold your horses” means to “wait.”
Do (green): “Your ride is here” is easy to understand and translate. Don’t (red): “Your chariot awaits” is too colloquial and may get translated literally.
Do (green): “New features are available” is clear and easy to understand. Don’t (red): “New awesomeness ahead” is too colloquial.
Do (green): “One moment” is clearly understood. Don’t (red): “Hang on” is too colloquial.
If you’re using slang or colloquial phrases, provide a description of what this phrase means to your localization partner or translator so they can properly translate the phrase to other languages.
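In code, one common place to put that description is a translator comment right next to the string. The Python gettext example below is only a sketch; the comment prefix and extraction step depend on your toolchain (xgettext and Babel, for instance, can be told to copy comments that start with “Translators:” into the file translators receive), and the strings themselves are illustrative.

from gettext import gettext as _

# Translators: Shown when the driver has arrived at the rider's location.
# Plain meaning: "Your driver is waiting for you."
arrival_message = _("Your ride is here")

# Translators: "Hang tight" is informal English for "please wait".
# The whole string means: "Please wait, your driver will arrive soon."
status_message = _("Hang tight, your ride is almost there")

The same idea exists on most platforms, for example the comment parameter of NSLocalizedString on iOS or an XML comment above a string resource on Android.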
Use global examples and highlight local references
When you reference examples in UI text, try to use global examples (information that’s understood by people around the world) when possible. Certain references, like local places and holidays, won’t always work for global audiences. If it doesn’t make sense to use a global example, be sure to explain the reference to your translator or localization team so that the translator can substitute a locale-specific example. Some instances where local references should be called out include:
Providers (internet and cable)
Locations (cities)
Names (common first names and nicknames)
Currencies | https://medium.com/google-design/writing-for-global-audiences-d339d23e9612 | ['Susanna Zaraysky'] | 2020-10-08 14:01:44.780000+00:00 | ['Next Billion Users', 'UX', 'Internationalization', 'Content Strategy', 'Writing'] |
I’m a Graduate Student. Why Am I Still Waiting Until the Due Date to Start My Essays? | Photo by Andrew Neel from Pexels
I’m a Graduate Student. Why Am I Still Waiting Until the Due Date to Start My Essays?
In everyday life, I am not a procrastinator.
A typical day for me includes a little bit of tidying up my apartment, getting to work early, and cooking dinner at the same time. I find time to write in between my daily work schedule and my tutoring schedule, even striving to wake up early each day just to get some extra writing in. When I don’t have any assigned responsibilities, I assign them to myself and make sure that my novel receives the 1,600 words per day that it deserves.
When it comes to schoolwork, I cannot say the same thing. No matter how old I grow and how much more responsibility I accept, I still seem to find myself waiting until the day-before or day-of my deadlines to complete my academic work. In fact, as I write this, I am supposed to be writing a paper that is due tomorrow at midnight.
Why am I such a responsible person except for when it comes to the most integral aspect of this time in my life?
It’s not as though I haven’t already planned out the essay and concocted my list of scholarly articles to use for this analysis of methodology I’ve been assigned. I have a plan and I’m ready to write it — I just won’t do it until I can begin to feel my skin crawl with tension and the ticking of the clock intensifying.
Over the years, I’ve learned that some of my best work is produced when I am under pressure. In the entirety of my five years learning at the college level, I have never written a paper once gradually over time and instead have always opted to save my work for the last minute. I faithfully believe in my ability to produce quality work under a time crunch and know that I will successfully meet all of the criteria I’m supposed to when the anxiety hits.
Isn’t that the problem with procrastination, though? The anxiety that it causes is often so stressful that it stifles us. Would I write better if I didn’t have a deadline? I was made aware of the due date of this paper roughly six weeks ago!
Fellow procrastinators, I believe that if we didn’t have a deadline then the work would never get done. If we didn’t subject ourselves to the variable fear and anxiety of closeness to the due-date, pure intimacy with the possibility of failure, we would not produce any work at all. It’s just the way that we are.
Am I going to be this way forever? Are you going to be this way forever?
Probably.
I’m working on a Masters degree and I still can’t seem to stop acting out like this when it comes to my studies, even amidst being a shining example of adulthood when it comes to everything else.
Maybe the answer is to simply stop bullying ourselves for being procrastinators. There’s a stigma to procrastinating that I know I don’t appreciate, and you probably don’t, either.
As long as the paper is getting done efficiently and correctly within excellent standards, does it really matter whether it was written over two weeks or over two hours?
No.
This is the system that works for me and if it works for you too, then relax, kick back, and remember: sometimes that saying about the journey being more important than the destination is nonsense. | https://medium.com/curious/im-a-graduate-student-why-am-i-still-waiting-until-the-due-date-to-start-my-essays-1b1435eaa9ad | ['H. M. Johnson'] | 2020-12-15 02:33:35.999000+00:00 | ['Humor', 'Students', 'Writing', 'Procrastination'] |
In praise of praise | In praise of praise
Acknowledging your employee’s efforts is powerful, motivational, and inexpensive.
Photo courtesy of Kelly Sikkema on Usplash
In the workplace, offering praise to employees is powerful, motivational, and inexpensive.
It’s also grossly underutilized.
Studies show that only around 25 percent of employees are fully engaged with their work. The rest are somewhat or fully disengaged.
Often the problem isn’t the employee; it’s leadership. Employees routinely feel under-appreciated or worse: invisible.
One of the most frequent complaints I hear from coworkers is a lack of recognition from their managers.
“It’s like she has no idea how hard I’m working or what I’m working on,” a coworker told me.
“I drafted the report he requested,” another said. “I stayed late and did a great job. He didn’t even read it and never acknowledged my effort. Instead he asked for a one-sentence summary and then changed the subject.”
Nothing is more demotivating to employees than feeling their efforts aren’t appreciated or noticed. Employees who feel unrecognized are less happy, less productive, and more likely to leave the company. They also are more likely to feel resentment, a corrosive attitude that can spill over to their coworkers.
The good news is that acknowledging and expressing gratitude for your employees’ efforts is easy. It costs next to nothing. And it takes almost no time.
Praise is one of the most powerful things a leader can offer an employee. When delivered sincerely, praise can give people the drive and motivation they need to continue their work at a high level.
Simple gestures that show acknowledgment or appreciation don’t mean that constructive feedback isn’t warranted or should be withheld. They simply make it that much more likely that constructive criticism will be heard.
Of course, praise shouldn’t be dispensed randomly or when undeserved. Doing so would undermine managerial credibility and dilute the impact of the message. But neither should praise be held back for fear of giving too much. The issue is rarely too much praise, but too little.
In their book The Carrot Principle: How the Best Managers Use Recognition to Engage Their People, Retain Talent, and Accelerate Performance, leadership consultants Adrian Gostick and Chester Elton analyzed data from a 10-year study of more than 200,000 employees. They found that managers who consistently praise their employees see significantly lower employee turnover rates. Seventy-nine percent of employees who quit their jobs during the study cited a lack of appreciation as a key reason for leaving.
Gostick and Elton also found that frequent workplace praise helps employees achieve better results. Sixty-five percent of respondents cited “appreciation” as a primary motivator.
Advisory firm Towers Watson found in their Global Recognition Study that expressing praise and appreciation are among the most important factors in building trust between employees and management. More than 40 percent of respondents who said they felt unappreciated said they didn’t fully trust their managers. They also believed their managers didn’t trust them.
The study also found that managers who frequently recognized their employees saw increases in engagement of nearly 60 percent.
Small gestures a manager takes can go far toward motivating him to stay engaged and productive.
Recognition can take many forms: it can be monetary; it can be a formal performance-related program; it can be public recognition; or it can be simple words of praise or encouragement from a manager.
Organizations tend to spend a great deal of time setting up complicated recognition programs. What may instead be needed are well-trained managers who provide praise on a regular basis.
The fact is that it takes very little to offer praise and express gratitude. A thoughtful note and a $20 gift card might be sufficient to show an employee her efforts are noticed and appreciated. This small gesture may lead to deeper employee engagement, higher productivity, enhanced trust, and better results.
Four reasons to offer praise and express gratitude to your employees
It costs the company nothing. Unlike bonus programs, the cost of small tokens of gratitude and verbal praise is negligible. It requires little effort. Writing a quick note or offering well-timed verbal praise is nearly effortless and represents a minuscule investment of managerial time and energy. It makes employees feel good. Employees who feel good about their jobs are happier, more engaged, and more productive. It makes employees feel valued. Employees who feel valued are motivated to contribute to the organization.
Easy ideas for expressing verbal gratitude to employees
“Great job. You made a difference.” “I really appreciate the effort you’re putting in.” “I don’t know what we would do without you.” “Thanks for working late.” “Keep up the good work.” “I have been busy lately, but now I want you to know how much I value you you and your work.” “I’ve noticed how hard you’ve been working.” “Thank you. Your efforts mean a lot to the organization and to me personally.” “Thanks for being there when we needed you.” “You’ve really gone above and beyond.”
Easy Ideas for offering material gratitude | https://medium.com/datadriveninvestor/in-praise-of-praise-6365b92369f3 | ['Tom Johnson'] | 2020-11-12 07:26:55.466000+00:00 | ['Management', 'Employee Engagement', 'Productivity', 'Leadership', 'Work'] |
NLP — Zero to Hero with Python. A handbook for learning NLP with basics… | NLP — Zero to Hero with Python
A handbook for learning NLP with basics ideas
Photo by Sincerely Media on Unsplash
Topics to be covered:
Section 1: NLP Introduction, Installation guide of Spacy and NLTK
Section 2: Basic ideas about a text, Regular expression
Section 3: Tokenization and Stemming
Section 4: Lemmatisation and Stop words
Section 5: Part of Speech (POS) and Named Entity Recognition (NER)
Let’s talk about one by one step about these.
Section 1:
Introduction about NLP
Natural Language processing comes under the umbrella of the Artificial Intelligence domain. All computers are good with numerical data to do processing, this class of section is dealing with text data to analyze different languages in this world.
In this article, we will do a morphological study in language processing with python using libraries like Spacy and NLTK.
If we consider raw text data, the human eye can analyze some points. But if we try to build a mechanism in programming using Python to analyze and extract maximum information from the text data.
Let’s consider we will use a jupyter notebook for all our processing and analyzing language processing. Jupyter comes in anaconda distribution.
Installation guide
First, do install anaconda distribution from this link. After installation, anaconda installs Spacy and NLTK library in your environment.
To install the Spacy link is here.
To install the NLTK link is here.
To download the English language library for spacy is
python -m spacy download en #en stands for english
Section 2:
Basic Concept
We all know that the data is the same in both the files. We will learn how to read these files using python because to work on language processing. We need some text data.
Starts with basic strings with variables. Let’s see how to print a normal string.
print('Amit') #output: Amit
Take an example:
The name of the string is GURUGRAM, the name of my city. When we need to select a specific range of alphabet, then we use the slicing method and indexing method. When we go from left to right, the indexing starts from 0, and when we want the alphabet from right to left, then it starts from minus (-1), not from zero.
Photo created by the author
With python
#first insert the string to a variable string = GURUGRAM #get first alphabet with index
print(string[0]) #output: G #printing multiple alphabets
print(string[2], string[5]) #output: RR #for getting alphabet with negative indexing
print(string[-4]) #output: G
Now get the character with slicing
print(string[0:2]) #output: GU print(string[1:4]) #output: URU
Let’s do some basic with sentences. An example of cleaning a sentence with having starred in it. Lets
I came across a function named is strip() function. This function removes character in the starting and from the end, but it cannot remove character in the middle. If we don’t specify a removing character, then it will remove spaces by default.
#A sentence and the removing character from the sentence
sentence = "****Hello World! I am Amit Chauhan****"
removing_character = "*" #using strip function to remove star(*)
sentence.strip(removing_character) #output: 'Hello World! I am Amit Chauhan'
We see the output of the above, and the star is removed from the sentence. So, it’s a basic thing to remove character but not reliable for accuracy.
Like strip function, I also came across a different operation is join operation.
Example:
str1 = "Happy"
str2 = "Home" " Good ".join([str1, str2]) #output: 'Happy Good Home'
Regular Expression
A regular expression is sometimes called relational expression or RegEx, which is used for character or string matching and, in many cases, find and replace the characters or strings.
Let’s see how to work on string and pattern in the regular expression. First, we will see how to import regular expression in practical.
# to use a regular expression, we need to import re
import re
How to use “re” for simple string
Example:
Let’s have a sentence in which we have to find the string and some operations on the string.
sentence = "My computer gives a very good performance in a very short time."
string = "very"
How to search a string in a sentence
str_match = re.search(string, sentence)
str_match #output:
<re.Match object; span=(20, 24), match='very'>
We can do some operations on this string also. To check all operations, write str_match. Then press the tab. It will show all operations.
All operations on a string. Photo by author
str_match.span() #output:
(20, 24)
The is showing the span of the first string “very” here, 20 means it starts from the 20th index and finishes at the 24th index in the sentence. What if we want to find a word which comes multiple times, for that we use the findall operation.
find_all = re.findall("very", sentence)
find_all #output: ['very', 'very']
The above operation just finds the prints a string that occurs multiple times in a string. But if we want to know the span of the words in a sentence so that we can get an idea of the placement of the word for that, we use an iteration method finditer operation.
for word in re.finditer("very", sentence):
print(word.span()) #output: (20, 24)
(47, 51)
Some of the regular expressions are (a-z), (A-Z), (0–9), (\- \.), (@, #, $, %). These expressions are used to find patterns in text and, if necessary, to remove for clean data. With patterns when we can use quantifiers to know how many expressions we expect.
Section 3:
Tokenization
When a sentence breakup into small individual words, these pieces of words are known as tokens, and the process is known as tokenization.
The sentence breakup in prefix, infix, suffix, and exception. For tokenization, we will use the spacy library.
#import library
import spacy #Loading spacy english library
load_en = spacy.load('en_core_web_sm') #take an example of string
example_string = "I'm going to meet\ M.S. Dhoni." #load string to library
words = load_en(example_string) #getting tokens pieces with for loop
for tokens in words:
print(tokens.text) #output: "
I
'm
going
to
meet
M.S.
Dhoni
.
"
We can get tokens from indexing and slicing.
str1 = load_en(u"This laptop belongs to Amit Chauhan") #getting tokens with index
str1[1] #output: laptop #getting tokens with slicing
str1[2:6] #output: belongs to Amit Chauhan
Stemming
Stemming is a process in which words are reduced to their root meaning.
Types of stemmer
Porter Stemmer Snowball Stemmer
Spacy doesn’t include a stemmer, so we will use the NLTK library for the stemming process.
Porter stemmer developed in 1980. It is used for the reduction of a word to its stem or root word.
#import nltk library
import nltk #import porter stemmer from nltk
from nltk.stem.porter import PorterStemmer
pot_stem = PorterStemmer() #random words to test porter stemmer
words = ['happy', 'happier', 'happiest', 'happiness', 'breathing', 'fairly'] for word in words:
print(word + '----->' + pot_stem.stem(word)) #output: happy----->happi
happier----->happier
happiest----->happiest
happiness----->happi
breathing----->breath
fairly----->fairli
As we see above, the words are reduced to its stem word, but one thing is noticed that the porter stemmer is not giving many good results. So, that's why the Snowball stemmer is used for a more improved method.
from nltk.stem.snowball import SnowballStemmer
snow_stem = SnowballStemmer(language='english') for word in words:
print(word + '----->' + snow_stem.stem(word)) #output: happy----->happi
happier----->happier
happiest----->happiest
happiness----->happi
breathing----->breath
fairly----->fair
Section 4:
Lemmatization
Lemmatization is better than stemming and informative to find beyond the word to its stem also determine part of speech around a word. That’s why spacy has lemmatization, not stemming. So we will do lemmatization with spacy.
#import library
import spacy #Loading spacy english library
load_en = spacy.load('en_core_web_sm') #take an example of string
example_string = load_en(u"I'm happy in this happiest place with all happiness. It feels how happier we are") for lem_word in example_string:
print(lem_word.text, '\t', lem_word.pos_, '\t', lem_word.lemma, '\t', lem_word.lemma_)
Description of words in the lemmatization process. Photo by Author
In the above code of lemmatization, the description of words giving all information. The part of speech of each word and the number in the output is a specific lemma in an English language library. We can observe that happiest to happy and happier to happy giving good results than stemming.
Stop Words
Stop word is used to filter some words which are repeat often and not giving information about the text. In Spacy, there is a built-in list of some stop words.
#import library
import spacy #Loading spacy english library
load_en = spacy.load('en_core_web_sm') print(load_en.Defaults.stop_words)
Some Default Stop Words. Photo by Author
Section 5:
Part of Speech (POS)
Part of speech is a process to get information about the text and words as tokens, or we can say grammatical information of words. Deep information is very much important for natural language processing. There are two types of tags. For the noun, verb coarse tags are used, and for a plural noun, past tense type, we used fine-grained tags.
#import library
import spacy #Loading spacy english library
load_en = spacy.load('en_core_web_sm') str1 = load_en(u"This laptop belongs to Amit Chauhan")
Check tokens with index position.
print(str1[1]) #output: laptop
How to call various operations this token
#pos_ tag operation
print(str1[1].pos_) #output: NOUN #to know fine grained information
print(str1[1].tag_) #output: NN
So the coarse tag is a NOUN, and the fine grain tag is NN, so it says that this noun is singular. Let's get to know what is POS count with spacy.
pos_count = str1.count_by(spacy.attrs.POS)
pos_count #output: {90: 1, 92: 1, 100: 1, 85: 1, 96: 2}
Oh! You are confused about what these numbers are, and I will clear your confusion.
Let’s check what this number 90 means.
str1.vocab[90].text #output: DET
DET means that the 90 number belongs to determiner and the value 1 belongs to it is that this DET repeated one time in a sentence.
Named Entity Recognition (NER)
Named entity recognition is very useful to identify and give a tag entity to the text, whether it is in raw form or in an unstructured form. Sometimes readers don't know the type of entity of the text so, NER helps to tag them and give meaning to the text.
We will do NER examples with spacy.
#import library
import spacy #Loading spacy english library
load_en = spacy.load('en_core_web_sm') #lets label the entity in the text file file = load_en(u" I am living in India, Studying in IIT")
doc = file if doc.ents:
for ner in doc.ents:
print(ner.text + ' - '+ ner.label_ + ' - ' +
str(spacy.explain(ner.label_))) else:
print(No Entity Found) #output: India - GPE - Countries, cities, states
In the above code, we see that the text analyzed with NER and found India as a countries name or state name. So we can analyze that the tagging is done with entity annotation.
Conclusion:
These concepts are very good for learners and can get an idea of natural language processing.
Reach me on my LinkedIn | https://medium.com/towards-artificial-intelligence/nlp-zero-to-hero-with-python-2df6fcebff6e | ['Amit Chauhan'] | 2020-11-15 09:40:26.139000+00:00 | ['Programming', 'NLP', 'Analytics', 'Data Science', 'Python'] |
Your Intuition Will Whisper To You | Difference Between Intuition And Fear
From my own experience, I have often mistaken my intuition for fear. It can be confusing and it’s important to know the difference. After all, I don’t want you to pass up your perfect opportunity out of fear because you’ve mistakenly felt there was an intuitive warning keeping you from moving in that direction.
Intuition can be recognised as an inner guidance, a kind of knowing, or you might say an internal compass. We use terms like “hunch”, “a gut feeling”, “just a feeling”, and “instinct” to describe the way intuition influences our behaviour. Intuition lead towards a path that makes us feel comfortable, even if we are not certain.
Fear or negative emotion, on the other hand, can express itself through a physical response such as aggressiveness, sweating, an adrenaline rush, or a racing heart. Fear may cause us to run away and hide and avoid, and dictates a decision that makes us feel relieved, as though we just survived a threat to our very existence.
Don’t get me wrong, a sudden rush of intuition may be felt strongly as fear and certainly should be looked at, but understanding the difference is important if we really want to properly access our intuition.
Also, try not to use your intuition if you are using your head too much because then you’ll be using your ego and your intuition will not be able to come through. Usually, this is a place of fear, so it is not a good time as you will be too reactive. Instead, use your intuition when you are calm, not rushed and have time to think things over. | https://medium.com/swlh/your-intuition-will-whisper-to-you-told-you-so-67db8b206eda | ['Ye Chen'] | 2018-04-16 12:08:01.789000+00:00 | ['Awareness', 'Life Lessons', 'Self Improvement', 'Intuition', 'Entrepreneurship'] |
Michael Chabon Is A Nerdy Night Owl | Michael Chabon Is A Nerdy Night Owl
A profile of the author in advance of his new novel, ‘Moonglow’
Any profile of Michael Chabon is also necessarily a portrait of his marriage to the writer Ayelet Waldman—the two are “wholly intertwined with each other,” writes Awl pal Doree Shafrir. Waldman does a fair bit of the legwork herself in describing her husband:
“He’s never doing what’s fashionable,” Waldman said. “He’s always just doing what sparks his interest.” That, said Waldman, extended to Telegraph Avenue, which she characterized as “one of his great unheralded masterpieces.” She continued, “He engages with the emotional life of an African-American midwife in a way that’s so believable and authentic and nuanced and complicated. I know that there are some writers who feel like unless they’re actually fucking an African-American midwife they couldn’t write that character, but he did it, and I think he did it beautifully because he approaches writing women now in the same way he approaches men, with a humble openness.”
The fifty-three-year-old Chabon is most notable for this kindness and open-hearted approach to life. Perhaps it’s not unrelated to his resistance to Twitter, and his preference instead for Instagram as his main social network!
Moonglow is fictionalized autobiography (or would you call it autobiographical fiction)—the grandparents in the book are loosely based on Chabon and Waldman, down to Waldman’s struggle with mental illness. Indeed, the only thing that seems to have changed in the Chabon-Waldman household since we last checked in with them in the Times Fashion & Style section in an article entitled “Parents Burning to Write It All Down,” is a greater openness about writing about each other.
Compare 2009:
When they do write about their children, Mr. Chabon and Ms. Waldman check with them first. If the topic might be sensitive, they read the child sections aloud and ask for their permission to publish.
with 2016:
Typically, no one in the family asks permission to write about another member of the family. “We don’t ask that question. Our attitude always is, ‘You get what you get.’ That’s what it means to be the child, spouse, parent of a writer.
I presume this has more to do with writing about the children while they’re still figuring themselves out, but now that they’re mostly grown up and the youngest was the subject of a viral essay in GQ, all bets are off. (In the photo that went along with that 2009 Times piece, the husband and wife are pictured with fingers, hands, wrists, and elbows intertwined, Chabon leaning against Waldman, who sits sideways in a leather armchair, her feet resting atop what appear to be oversized leatherbound books. It’s perfect.)
You can read the rest here: | https://medium.com/the-awl/michael-chabon-is-a-nerdy-night-owl-4445e04fbf49 | ['Silvia Killingsworth'] | 2016-11-16 16:29:44.037000+00:00 | ['Writers', 'Writing', 'Ayelet Waldman', 'Michael Chabon', 'Profiles'] |
4 Lessons Typos Have Taught Me About Grace | This is what happened this spring as I read my Passion Week meditations each evening at the dinner table for “family time.” In the middle of the devotion, I would catch a mistake, which would cause me to pause, grimace, and lose my place. Not the spiritually enriching experience for the family I had envisioned.
Then I realized those posts had been published on multiple platforms. Oh, no! Others were grimacing, too. Not because of their mistake, but in judgment of mine. At least it felt that way.
As I scrambled from the dinner table to my basement study, I began to update each post — on Medium, on Wordpress, on my paperback and Kindle versions. It was exhausting.
Yes, typos are like mosquitoes. Always annoying. Usually frustrating. Sometimes, downright infuriating.
This time a silver lining appeared in the clouds in the form of four lessons my typos have been teaching me about grace. I thought I’d share them in hopes that some other writer out there (or anyone else who makes mistakes) might be encouraged and helped. If you can relate, this is for you.
1) I Have Blind Spots.
Sometimes when I’m driving, I’ll ask a passenger if there is anything in my blind spot, the area just behind the car outside the view of the rear or side mirrors. New cars and trucks now come equipped with “blind spot detection,” a little light that illuminates in the side mirror if a vehicle is occupying the driver’s blind spot. Even though my wife’s car has that snazzy tech feature, I don’t trust it. I still have to look for myself just to make sure before changing lanes.
Writers have blind spots, too, which is why even professional writers employ trained editors. For example, I can read over a manuscript ten times and not catch the misspelled or misplaced word. Usually, I already have re-written the piece so many times that scraps of editorial content I intended to cut out get left in. That scrap content becomes blind spot material.
Every writer who is continually producing new material has blind spots, and that’s okay. It is the same reason why professional athletes have coaches. Regardless of how much we practice our craft, we will never reach perfection. We can only pursue improvement and progress. To that end, we need editors, coaches, and friends, which leads to the second lesson.
2) I Need the Insight and Input of Others.
As a writer and as a human being, I need others to help me see my blind spots. Sometimes they are grammatical, but other times they are moral. Not only does a writer need editors, but believers need each other to help us see areas of our lives that need to come under the influence and control of the Holy Spirit. Self-righteous anger. Pride. A spirit of entitlement. Jealousy. Greed. These are just some of the things that hide in dark spaces my mirrors can’t see.
A huge lesson my typos have taught me is not only how critical community is for the Christian, but how vital teachability is for the disciple of Jesus. As a writer, I often will push back against the suggestion of an editor. By nature, I want compliments, not critique and correction. But I will not make progress with the pen until I am willing to confess the limitations of my ability.
That is what disciple means, anyway. He or she is a learner who follows the example and guidance of someone else. Obviously, a Christian is a disciple (a learner-follower) of Christ, the professor in God’s school of grace, where students make it their aim to conform every aspect of their lives to that grace which defines those who follow Jesus as Savior and Lord.
Every disciple is given various talents and abilities with which they are empowered to contribute to the advance of the gospel as members of the Kingdom of God. But these gifts are limited and imperfect. As a result, we need the insight and input of others as we live in community as teachable believers.
3) The Illusion of Perfection is a False Gospel.
When I have used a proofreader, I am stunned by how many typos were still present in a document I assumed was relatively “clean” from errors. Ah, but remember the blind spot. The moment I deny the blind spot is the precipice of authorial stupidity — thinking far too highly of one’s self. And the proverb becomes reality, that pride goes before a fall.
When a typo is revealed in a published work, either a simple blog post or in a full-length book, it tends to have a disproportionate impact on me as a writer. For all the good that is there, one mistake can bring the entire edifice down upon my heart like a house of cards.
Why do I get so panicked to discover a simple mistake?
The answer: I want to be perfect.
I do not want to just be a good writer, but a great writer. The best writer. At this point, if I could accurately analyze my heart, I’d find the idol of self-righteousness functionally defining my identity. I may claim that Jesus is my righteousness with my lips, but my anxiety would reveal where my true hope is being found — in writing perfection, which is the illusion of a false gospel.
4. Jesus Uses My Imperfection to Magnify His Grace.
In 2 Corinthians 12:7b-10, the apostle Paul wrote about his lesson with grace,
In order to keep me from becoming conceited, I was given a thorn in my flesh, a messenger of Satan, to torment me. 8 Three times I pleaded with the Lord to take it away from me. 9 But he said to me, “My grace is sufficient for you, for my power is made perfect in weakness.” Therefore I will boast all the more gladly about my weaknesses, so that Christ’s power may rest on me. 10 That is why, for Christ’s sake, I delight in weaknesses, in insults, in hardships, in persecutions, in difficulties. For when I am weak, then I am strong.
Like Paul’s boasting, maybe to keep authors from pursuing “writer righteousness,” the Lord has given us typos. Like mosquitoes for the rest of humanity, we must endure the annoyance, frustration, and infuriation of spelling blunders and grammatical gaffes. But the point is clear enough. Even typos are intended to show us that regardless of how flawed our writing (and our lives), we are those who live by grace through the all-sufficient sacrifice of Jesus, who shed his blood, giving his life as white-out for all our sin.
Receive a Free Gift
Receive the digital version of my short book, The Grade Exchange: An Introduction to the Gospel (with Discussion Guide) as a gift and you also will receive a free subscription to Rest for the Weary.
Get Your Free Copy | https://medium.com/the-mustard-seed/4-lessons-typos-have-taught-me-about-grace-65f80e2cabb4 | ['Dr. Mckay Caston'] | 2020-08-19 03:33:56.258000+00:00 | ['Writing', 'Spirituality', 'Religion', 'Grace', 'Christianity'] |
Google mT5 multilingual text-to-text transformer: A Brief Paper Analysis | Unlike the C4 dataset, which was explicitly designed in such a way that any page whose probability is below 99% of being English would be discarded when detected by langdetect,mC4 uses cld3 to detect over 100 languages. ( すごい 🤩)
For anyone curious how this C4 and mC4 originally data is collected you can check out Common Crawl. It is an open-source text data corpus consisting of petabytes of data collected since 2008 scraped from the internet. It contains raw web page data, extracted metadata and text extractions.
Apart from this, some preprocessing steps were applied to mC4 dataset to maintain data quality while training.
A “line length filter” that requires pages to contain at least three lines of text with 200 or more characters.
cld3 was used to detect each page’s primary language was and discard any page whose confidence score less than 70%.
After these filters, the pages were grouped by languages and include in the corpus of all languages.
Pretrained Model: mT5
The model architecture of mT5 is quite similar to that of T5(In mT5 authors used GeGLU instead of RELU).
One of the major tasks in pre-training a multilingual model is; how do we sample data from each language(Since there is quite a disparity in the size of the dataset collected for each language). The authors used the term “zero-sum game” for this problem. Let's take a little detour to learn what is Zero-sum😅;(you can skip this part!)
Zero-sum is a situation in game theory in which one person’s gain is equivalent to another’s loss, so the net change in wealth or benefit is zero. A zero-sum game may have as few as two players or as many as millions of participants. In financial markets, options and futures are examples of zero-sum games, excluding transaction costs. For every person who gains on a contract, there is a counter-party who loses.
So coming back mT5, the main issue here is if they sampled a low resource language too often, the model would end up overfitting it; but if high-resource languages are not trained on enough the model will remain under fitted on them.
Therefore the authors to boost lower-resource languages use the following technique.
p(L) ∝ |L| ^(α),where
p(L) is the probability of sampling text from a given language during pre-training,
|L| is the number of examples in the language, and
α (typically with α < 1) is a hyperparameter that allows us to control how much to “boost” the probability of training on low-resource languages.
Finally, α = 0.3 is a reasonable compromise between high and low resource languages.
Other multilingual related models
Now that we have explored mT5 thoroughly let’s compare it with other existing massively multilingual pretrained language models.
Photo by Annie Spratt on Unsplash
mBERT is one such model. Similar to mT5, mBERT show strong resemblance to BERT.The primary difference in BERT and mBERT is the training dataset. Instead of training on English Wikipedia and the Toronto Books Corpus, mBERT is trained on up to 104 languages from Wikipedia.XLM is also based on BERT but includes improved methods for pre-training multilingual language models.
XLM-R was later released which is based on Roberta model. It is trained with a cross-lingual masked language modeling objective on data in 100 languages from Common Crawl.
mBART was the first sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in 25 languages using the BART objective. In terms of similarity, mBART is very similar to mT5.
Experimentations and Results
To validate the performance of mT5 it is been evaluated on XTREME, XNLI(consisting of task covering 14 languages), the XQuAD, MLQA and TyDi QA.
The largest model mT5-XXL reaches state-of-the-art on all of the task's authors considered. Below is the table showing the performance of mT5 and compared to other models on similar benchmarks.
So here I conclude my brief blog on mT5. Hope you find this an interesting read. If you have any suggestions on any research paper or Machine learning-based topic for you need a blog post for me, do comment.
References | https://medium.com/ai-in-plain-english/google-mt5-multilingual-text-to-text-transformer-a-brief-paper-analysis-30591a3cb7d5 | ['Parth Chokhra'] | 2020-11-26 14:07:00.452000+00:00 | ['Programming', 'Technology', 'Data Science', 'Artificial Intelligence', 'Machine Learning'] |
Is AlphaZero really a scientific breakthrough in AI? | As you may probably know, DeepMind has recently published a paper on AlphaZero [1], a system that learns by itself and is able to master games like chess or Shogi.
Before getting into details, let me introduce myself. I am a researcher in the broad field of Artificial Intelligence (AI), specialized in Natural Language Processing. I am also a chess International Master, currently the top player in South Korea although practically inactive for the last few years due to my full-time research position. Given my background I have tried to build a reasoned opinion on the subject as constructive as I could. For obvious reasons, I have focused on chess, although some arguments are general and may be extrapolated to Shogi or Go as well. This post represents solely my view and I may have misinterpreted some particular details on which I am not an expert, for which I apologize in advance if it is the case.
Chess has arguably been the most widely studied game in the context “human vs machine” and AI in general. One of the first breakthroughs in this area was the victory of IBM Deep Blue in 1997 over the world champion at the time Garry Kasparov [2]. At that time machines were considered inferior to humans in the game of chess, but from that point onwards, the “battle” has been clearly won by machines.
Garry Kasparov vs IBM Deep Blue. 1997. Source: Reuters
On a related note, DeepMind released a couple of years ago AlphaGo, a Go engine which was able to beat some of the best human players of Go [3]. Note that the complexity of Go is significantly larger than in chess. This has been one of the main reasons why, even with the more advanced computation power available nowadays, Go was still a game on which humans were stronger than machines. Therefore, this may be considered a breakthrough in itself. This initially impressive result was improved with AlphaGo Zero which, as claimed by the authors, learnt to master Go entirely by self-play [4]. And more recently AlphaZero, a similar model that trains a neural network architecture with a generic reinforcement learning algorithm which has beaten some of the best engines in Shogi and chess [1].
This feat has been extensively covered by mass media [5,6] and chess-specialized media [7,8], with bombastic notes about the importance of the breakthrough. However, there are reasonable doubts about the validity of the overarching claims that arise from a careful reading of AlphaZero’s paper. Some of these concerns may not be considered as important by themselves and may be explained by the authors. Nevertheless, all the concerns added together cast reasonable doubts about the current scientific validity of the main claims. In what follows I enumerate some general concerns:
Availability/Reproducibility. None of the AlphaZero systems developed by DeepMind are accessible to the public: the code is not publicly available and there is not even a commercial version for users to test it. This is an important impediment, as from the scientific point view these approaches can be neither validated nor built upon it by other experts. This lack of transparency makes it also almost impossible for their experiments to be reproduced.
None of the AlphaZero systems developed by DeepMind are accessible to the public: the code is not publicly available and there is not even a commercial version for users to test it. This is an important impediment, as from the scientific point view these approaches can be neither validated nor built upon it by other experts. This lack of transparency makes it also almost impossible for their experiments to be reproduced. 4-hour training. The amount of training of AlphaZero has been one of the most confusing elements as explained by general media. According to the paper, after 4 hours of training on 5000 TPUs the level of AlphaZero was already superior to the open-source chess engine Stockfish (the fully-trained AlphaZero took a few more hours to train). This means that the time spent by AlphaZero per TPU was roughly two years, a time which would be considerably higher on a normal CPU. So, even though the 4-hour figure may seem impressive (and it is indeed impressive), this is mainly due to the large capacities of computing power available nowadays with respect to some years ago, especially for a company like DeepMind investing heavily on it. For example, by 2012 all chess positions with seven pieces or less had been mathematically solved, using significantly less computing power [9]. This improvement on computing power paves the way for the development of newer algorithms, and probably in a few years a game like chess could be almost solved by heavily relying on brute force.
The amount of training of AlphaZero has been one of the most confusing elements as explained by general media. According to the paper, after 4 hours of training on 5000 TPUs the level of AlphaZero was already superior to the open-source chess engine Stockfish (the fully-trained AlphaZero took a few more hours to train). This means that the time spent by AlphaZero per TPU was roughly two years, a time which would be considerably higher on a normal CPU. So, even though the 4-hour figure may seem impressive (and it is indeed impressive), this is mainly due to the large capacities of computing power available nowadays with respect to some years ago, especially for a company like DeepMind investing heavily on it. For example, by 2012 all chess positions with seven pieces or less had been mathematically solved, using significantly less computing power [9]. This improvement on computing power paves the way for the development of newer algorithms, and probably in a few years a game like chess could be almost solved by heavily relying on brute force. Experimental setting versus Stockfish. In order to prove the superiority of AlphaZero over previous chess engines, a 100-game match against Stockfish was played (AlphaZero beat Stockfish 64–36). The selection of Stockfish as the rival chess engine seems reasonable, being open-source and one of the strongest chess engines nowadays. Stockfish ended 3rd (behind Komodo and Houdini) in the most recent TCEC (Top Chess Engine Competition) [10], which is considered the world championship of chess engines. However, the experimental setting does not seem fair. The version of Stockfish used was not the last one but, more importantly, it was run in its released version on PC, while AlphaZero was ran using considerably higher processing power. For example, in the TCEC competition engines play against each other using the same processor. Additionally, the selection of the time seems odd. Each engine was given one minute per move. However, in the vast majority of human and engine competitions each player is given a fixed amount of time for the whole game, and then this time is administered individually. As Tord Romstad, one of the original developers of Stockfish, declared, this was another questionable decision in detriment of Stockfish, as “lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move” [10]. Tord Romstad also pointed out to the fact that Stockfish “was playing with far more search threads than has ever received any significant amount of testing”. Generally, the large percentage of victories of AlphaZero against Stockfish has come as a huge surprise for some top chess players, as it challenges the common belief that chess engines had already achieved an almost unbeatable strength (e.g. Hikaru Nakamura, #9 chess player in the world, showed some scepticism about the low draw-rate in the AlphaZero-Stockfish match [11]).
In order to prove the superiority of AlphaZero over previous chess engines, a 100-game match against Stockfish was played (AlphaZero beat Stockfish 64–36). The selection of Stockfish as the rival chess engine seems reasonable, being open-source and one of the strongest chess engines nowadays. Stockfish ended 3rd (behind Komodo and Houdini) in the most recent TCEC (Top Chess Engine Competition) [10], which is considered the world championship of chess engines. However, the experimental setting does not seem fair. The version of Stockfish used was not the last one but, more importantly, it was run in its released version on PC, while AlphaZero was ran using considerably higher processing power. For example, in the TCEC competition engines play against each other using the same processor. Additionally, the selection of the time seems odd. Each engine was given one minute per move. However, in the vast majority of human and engine competitions each player is given a fixed amount of time for the whole game, and then this time is administered individually. As Tord Romstad, one of the original developers of Stockfish, declared, this was another questionable decision in detriment of Stockfish, as “lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move” [10]. Tord Romstad also pointed out to the fact that Stockfish “was playing with far more search threads than has ever received any significant amount of testing”. Generally, the large percentage of victories of AlphaZero against Stockfish has come as a huge surprise for some top chess players, as it challenges the common belief that chess engines had already achieved an almost unbeatable strength (e.g. Hikaru Nakamura, #9 chess player in the world, showed some scepticism about the low draw-rate in the AlphaZero-Stockfish match [11]). 10 games against Stockfish. Along with the paper only 10 sample games were shared, all of them victories of AlphaZero [12]. These games have been praised by all the chess community in general, due to the seemingly deep understanding displayed by AlphaZero in these games: Peter Heine Nielsen [13], chess Grandmaster and coach of the world champion Magnus Carlsen, or Maxime Vachier Lagrave [11], #5 chess player in the world, are two examples of the many positive reactions about the performance of AlphaZero against Stockfish in these games. However, the decision to release only ten victories of AlphaZero raises other questions. It is customary in scientific papers to show examples on which the proposed system displays some weaknesses or may not behave as well in order to have a more global understanding and for other researchers to build upon it. Another question which does not seem clear from the paper is if the games started from a particular opening or from scratch. Given the variety of openings displayed in these ten games, it seems that some initial positions were predetermined.
Game between AlphaZero and Stockfish. Last move: 26. Qh1! Top Grandmaster Francisco Vallejo Pons defined this game as “science-fiction”. Source: chess24
Self-play. Does AlphaZero completely learn from self-play? This seems to be true according to the details provided in the paper, but with two important nuances: the rules and the typical number of moves have to be taught to the system before starting playing with itself. The first nuance, although looking obvious, is not as trivial as it seems. A lot of work has to be dedicated to find a suitable neural network architecture on which these rules are encoded, as also explained in the AlphaZero paper. The initial architecture based on convolutional neural networks used in AlphaGo was suitable for Go, but not for other games. For instance, unlike Go, chess and shogi are asymmetric and some pieces behave differently depending on their position. In the newest AlphaZero, a more generic version of the AlphaGo algorithm was introduced, englobing games like chess and Shogi. The second nuance (i.e. the typical number moves was given to AlphaZero to “scale the exploration noise”) also requires some prior knowledge of the game. The games that exceeded a maximum number of steps were terminated with a draw outcome (this maximum number of steps is not provided) and it is not clear whether this heuristic was also used in the games against Stockfish or only during training.
Does AlphaZero completely learn from self-play? This seems to be true according to the details provided in the paper, but with two important nuances: the rules and the typical number of moves have to be taught to the system before starting playing with itself. The first nuance, although looking obvious, is not as trivial as it seems. A lot of work has to be dedicated to find a suitable neural network architecture on which these rules are encoded, as also explained in the AlphaZero paper. The initial architecture based on convolutional neural networks used in AlphaGo was suitable for Go, but not for other games. For instance, unlike Go, chess and shogi are asymmetric and some pieces behave differently depending on their position. In the newest AlphaZero, a more generic version of the AlphaGo algorithm was introduced, englobing games like chess and Shogi. The second nuance (i.e. the typical number moves was given to AlphaZero to “scale the exploration noise”) also requires some prior knowledge of the game. The games that exceeded a maximum number of steps were terminated with a draw outcome (this maximum number of steps is not provided) and it is not clear whether this heuristic was also used in the games against Stockfish or only during training. Generalization. The use of a general-purpose reinforcement learning that can succeed in many domains is one of the main claims in AlphaZero. However, following the previous point on self-play, a lot of debate has been going around with regards to the capability of AlphaGo and AlphaZero systems to generalize to other domains [14]. It seems unrealistic to think that many situations in real-life can be simplified to a fixed predefined set of rules, as it is the case of chess, Go or Shogi. Additionally, not only these games are provided with a fixed set of rules, but also, although with different degrees of complexity, these games are finite, i.e. the number of possible configurations is bounded. This would differ with other games which are also given a fixed set of rules. For instance, in tennis the number of variables that have to be taken into account are difficult to quantify and therefore to take into account: speed and direction of wind, speed of the ball, angle of the ball and the surface, surface type, material of the racket, imperfections on the court, etc.
We should scientifically scrutinize alleged breakthroughs carefully, especially in the period of AI hype we live now. It is actually responsibility of researchers in this area to accurately describe and advertise our achievements, and try not to contribute to the growing (often self-interested) misinformation and mystification of the field. In fact, this early December in NIPS, arguably the most prestigious AI conference, some researchers showed important concerns about the lack of rigour of this scientific community in recent years [15].
In this case, given the relevance of the claims, I hope these concerns will be clarified and solved in order to be able to accurately judge the actual scientific contribution of this feat, a judgement that it is not possible to make right now. Probably with a better experimental design as well as an effort on reproducibility the conclusions would be a bit weaker as originally claimed. Or probably not, but it is hard to assess unless DeepMind puts some effort into this direction. I personally have a lot of hope in the potential of DeepMind in achieving relevant discoveries in AI, but I hope these achievements will be developed in a way that can be easily judged by peers and contribute to society.
— — — — — -
[1] Silver et al. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.” arXiv preprint arXiv:1712.01815 (2017). https://arxiv.org/pdf/1712.01815.pdf
[2] https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
[3] https://www.theguardian.com/technology/2016/mar/15/googles-alphago-seals-4-1-victory-over-grandmaster-lee-sedol
[4] Silver et al. “Mastering the game of go without human knowledge.” Nature 550.7676 (2017): 354–359. https://www.gwern.net/docs/rl/2017-silver.pdf
[5] https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours
[6] http://www.bbc.com/news/technology-42251535
[7] https://chess24.com/en/read/news/deepmind-s-alphazero-crushes-chess
[8] https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match
[9] http://chessok.com/?page_id=27966
[10] https://hunonchess.com/houdini-is-tcec-season-10-champion/
[11] https://www.chess.com/news/view/alphazero-reactions-from-top-gms-stockfish-author
[12] Link to reproduce the 10 games of AlphaZero against Stockfish: https://chess24.com/en/watch/live-tournaments/alphazero-vs-stockfish/1/1/1
[13] https://www.twitch.tv/videos/207257790
[14] https://medium.com/@karpathy/alphago-in-context-c47718cb95a5
[15] Ali Rahimi compared current Machine Learning practices with “alchemy” in his talk at NIPS 2017 following the reception of his test of time award: https://www.youtube.com/watch?v=ORHFOnaEzPc | https://josecamachocollados.medium.com/is-alphazero-really-a-scientific-breakthrough-in-ai-bf66ae1c84f2 | ['Jose Camacho Collados'] | 2017-12-23 17:47:08.544000+00:00 | ['Chess', 'Alphago', 'Artificial Intelligence', 'Machine Learning'] |
This AI Predicts Online Trolling Before It Happens | (Sander van der Werf/shutterstock)
Stanford’s Dr. Srijan Kumar’s AI is being used to weed out fake reviews, but could also help predict and curb online harassment.
By S.C. Stuart
How do you keep online trolls in check? Ban them? Require real names?
Dr. Srijan Kumar, a post-doctoral research fellow in computer science at Stanford University, is developing an AI that predicts online conflict. His research uses data science and machine learning to promote healthy online interactions and curb deception, misbehavior, and disinformation.
His work is currently deployed inside Indian e-commerce platform Flipkart, which uses it to spot fake reviewers. We spoke to Dr. Kumar ahead of a lecture on healthy online interactions at USC.
PCMag: Dr. Kumar, how do you counteract online harassment using data science and machine learning? How does your system identify the trolls?
Dr. Srijan Kumar: In my research, I build data science and machine learning methods to address online misbehavior, which transpires as false information and malicious users. My methods have a dual purpose: first, to characterize their behavior, and second, to detect them before they damage other users. I have been able to investigate a wide variety of online misbehavior, including fraudulent reviews, hoaxes, online trolling, and multiple account abuse, among others.
How are you teaching the AI to spot these patterns?
I develop statistical analysis, graph mining, embedding, and deep learning-based methods to characterize what normal behavior looks like, [and] use this to identify abnormal or malicious behavior. Oftentimes, we may also have known examples of malicious behavior, in which case I create supervised learning models where I use these examples as training data to identify similar malicious entities among the rest.
Your research is currently being used in Flipkart. What problem were they trying to solve and how are they measuring results?
The key problem that I helped address on Flipkart was of identifying fake reviews and fake reviewers on their platform. This is a pervasive problem in all platforms; recent surveys estimate as much as 15 percent of online reviews [are] fake. It is therefore crucial to identify and weed out fake reviews, as our decision as consumers is influenced by them.
What’s the method called here?
My method, which is called REV2, uses the review graph of user-review-product to identify fraudsters [who] give high scoring ratings to low-quality products or low scoring ratings to high-quality products. REV2 [compares] our recommendations to previously identified cases of fake reviewers.
Is it possible for AI to keep an eye inside social networks and raise the alarm when bad behavior is about to arise? Is this purely pattern-based analysis with sentient data crunching or something entirely different?
It is possible to proactively predict when something may go wrong by learning from previous such cases. For instance, in my recent research, I showed that it is possible to accurately predict when one community in the Reddit online platform will attack/harass/troll another. This phenomenon is called “brigading,” and I showed that brigades reduce future engagement in the attacked community. This is detrimental to the users and their interactions, which calls for methods to avoid them. Thus, I created a deep learning-based model that uses the text and community structure to predict, with high accuracy, if a community is going to attack another. Such models are of practical use, as it can alert the community moderators to keep an eye out for an incoming attack.
Do you see a logical extrapolation of your work used in “nudges” to prompt users to clean up their act prior to prosecution? Akin to a teacher at the front of the class keeping a wary eye on the troublemakers in the back row before they fall into criminal masterminded gangs?
Absolutely! A natural and exciting follow-up work is how to discourage bad actors to do malicious acts and to encourage everyone to be benign. This will help us to create a healthy, collaborative, and more inclusive online ecosystem for everyone. There are many interesting challenges to achieve this goal, requiring new methods of interventions and better prediction models. Enabling better online conversations and nudging people to be their better self is going to be one of my key thrusts going forward.
Have you had personal experience with online harassment or was this more of an interesting AI problem to solve for you?
One of the major reasons for me to follow this direction of research was seeing some of my friends being harassed by social media trolls. This led to look for non-algorithmic ways to curb this problem. Being a challenging task, it piqued the interest of the scientist inside me and I eventually learned to create data science and machine learning methods to help solve these problems.
You’re collaborating on the $1.2 million DARPA-funded project “Active Social Engineering Defense,” which continues until 2022. What has the agency asked you to prove?
In this project, we are studying how malicious actors carry out social engineering attacks on unsuspecting victims. Social engineering attacks are very nuanced and complex personalized attacks, with the aim of compromising sensitive information. So the key questions we want to answer are: can we predict when a social engineering attack is happening; and how can we defend against them?
Finally, as a scientist, what really excites you about predicting human behavior? Do you feel we’re getting closer to understanding what makes us “tick” at last?
Human behavior is highly volatile and unpredictable, which makes it fun and challenging to predict. That being said, I do feel that AI is indeed getting better at understanding human behavior. To give one example, recommendation systems are significantly better than years ago at predicting what we want. However, one key piece of this puzzle that needs to be solved is to forecast how malicious entities will recreate themselves after being caught and banned. Thus, I am enthusiastic to build new machine learning and AI models to address this problem. | https://medium.com/pcmag-access/this-ai-predicts-online-trolling-before-it-happens-3bb45aac008c | [] | 2019-03-20 12:33:20.372000+00:00 | ['Technology', 'Solutions', 'Artificial Intelligence', 'Machine Learning', 'Online'] |
No, You Don’t Have a Right to Not Wear a Mask | Image by Sumanley xulx from Pixabay
Coronavirus cases are skyrocketing in a third wave across the U.S., so you know what that means: More lockdowns.
And when lockdowns, stay-at-home orders, and mask mandates are issued, you know what that means: Contrarians will insist that they don’t have to wear a mask.
Known as “covidiots,” some of them claim that the government can’t tell them to wear a mask. That idea is debunked, here:
Others, however, make an argument that is similar, but different in important ways: “I have a right to not wear a mask.”
Here is a tactical guide for responding to this absurd claim when you encounter them recklessly putting other people at risk of infection in stores.
The Preliminary Step: Shift the Burden of Proof
The most important first step to take is to shift the burden of proof. Covidiots say that they have a right to not wear a mask.
Ask: Which right?
This stops a lot of covidiots dead in their tracks. They’ll stammer or stumble into a tautology like, “My right to not wear a mask.” You may have to coax them, a bit.
Ask: Where does that right come from? Do you get it from a statute? Do you get it from the Constitution?
Asking in this particular way — do you get it from a statute or the Constitution? — is designed to trigger them into shouting what most covidiots¹ think is akin to checkmate:
“My constitutional right!”
Of course, some covidiots hope that someone stands up to their mask-less spittle-spewer precisely so they could bellow those very words. That hope forms the second they walk through the door, nay, from the moment they left the house and held aloft the frayed strings of the lone surgical mask on the doorknob, asking it, “Do I care about the well-being of my fellow man, today?”
But hark! What is? That sound?
’Tis the author, a legal writer with a law degree, chuckling softly and popping his knuckles.
Ask: Which constitutional right?
There’s No First Amendment Right to Go Mask-less
If the covidiot says that it’s their First Amendment right to go without a mask, in violation of mask mandates, there are three outcomes. None of them go well for covidiots.
First, they say it violates their freedom of speech. Lawyers have actually advanced this argument in court. They lost, of course, but they did try. Obviously, masks don’t prevent free speech — the covidiot just said something, so it doesn’t stop speech or idiocy from happening — and they also don’t penalize speech, after the fact. That should end the discussion but, if the covidiot doesn’t think that it does, you can say that there’s a significant government purpose for the pandemic response, mask mandates and lockdowns are narrowly tailored to that purpose, other avenues for speech are left open, any speech restrictions are content neutral, and this a “time, place, or manner” restriction, much like a noise ordinance.
Say: Clark v. Community for Creative Non-Violence.
If they point out that they’re talking about wearing a mask and demand to know why you’re talking about noise ordinances, tell them they they’re the ones who were stupid enough to make this all about free speech.
Second, they can say that it violates their religious rights. You can just laugh at this one.
Third, they might claim that it violates their right to assemble.
Say: You can assemble wherever you want… With a mask on.
Regardless of which avenue the covidiot has chosen, end by calling the manager.
They Don’t Understand the Fifth Amendment
Once in a while, a covidiot will stumble on the Fifth Amendment, which forbids:
Deprivation of life, liberty, or property without due process of law
Taking private property for public use, without just compensation
Ask: Would a mask deprive you of life, liberty, or property?
Their only response is “liberty.” They will say it in all caps.
Ask: And you can’t go to court to have your case heard?
They can, of course. They can file a lawsuit and claim that the government is “TYRANNICAL” and their rights have been violated. They just don’t want to.
Say: You have due process available. Use it. In the meantime, put on a mask.
If their idea of the Fifth Amendment was the Takings Clause, rather than the Due Process Clause, it’s much easier.
Ask: What piece of property has been taken from you and is being used for a public purpose?
While the covidiot flounders, call over the manager.
“This Mask is Violating My Eighth Amendment Rights!”
Lord help him. The piece of cloth on his face is a cruel and unusual punishment.
Say: Thoughts and prayers!
Then call the manager.
“I Have Rights Under the Ninth and Tenth Amend –”
Say: No you don’t.
Interrupt them and end on a full stop. Cut the covidiot right off. They will bluster.
Say: There are no rights guaranteed by the Ninth or the Tenth Amendments.
The Ninth and Tenth Amendments are like a key to interpreting the rest of the Constitution. If the Constitution gives, or enumerates, certain powers to the government, then the exercise of those powers by the government do not violate constitutional rights. Hence, why the federal government can tax people without violating the Takings Clause.
Not only does these amendments not give people any substantive rights, at all, let alone one that specifically allows someone to walk mask-less in a pandemic: They explicitly give state governments their “police power,” which lets them govern for the public health, safety, and welfare.
Say: So, you see, you thought that the Ninth and Tenth Amendments gave you the right to go without a mask, but they actually give the governor the power to make you wear one, and the manager the right to kick you out.
“But My Equal Protection Rights!”
You don’t have to say anything. Just point to your own mask. Point to everyone else’s mask.
Oftentimes, though, the covidiot won’t get it.
Say: You’re not being treated unequally.
In Conclusion
No, you do not have a right — constitutional or otherwise — to be in public without a mask on. As cases spike, a vaccine has not yet arrived, and our government has shrugged, it has fallen on our shoulders to keep Americans safe.
Please wear a mask and please keep this article open on your phone so you can confront the mask-less and make the scene that saves us.
Endnotes
[1]: Tricky blighters might respond with, “a federal statute,” because that’s actually a more reasonable retort. That’s okay, though. If they haven’t said which statute gives them the right to go mask-less in public, ask which one it is. If they say anything other than the Americans with Disabilities Act, or the ADA, you can laugh at them because they’re wrong. They’re likely to say the ADA, though, because of that card that made the rounds. If that’s what they say, you ask which disability they’re claiming gives them the right to violate your state’s mask mandate. Even if they do raise a legitimate respiratory health condition that makes mask-wearing difficult for them, the appropriate response is, “Let’s talk to the proprietor of the store to get you a reasonable accommodation.” Just because someone has a breathing issue doesn’t mean they can never be told to put on a mask in a pandemic. That would be stupid. It just means that stores have to make reasonable accommodations available to them, like curbside shopping or home delivery. If you ask what disability they’re claiming and they scream, “HIPPA” back at you, ignore it and proceed directly to the reasonable accommodation part. | https://medium.com/politically-speaking/no-you-dont-have-a-right-to-not-wear-a-mask-bf0b4e64f5d4 | ['Sean Myers'] | 2020-11-21 13:02:40.394000+00:00 | ['Covid 19', 'Pandemic', 'Politics', 'Coronavirus', 'America'] |
Heidi.news’ recipe for growing its members and expanding into new areas during the pandemic | 2. Les flux (The flow) — 5–15 pieces of content per day covering science, health, innovation and climate. Les flux also includes Geneva Solutions, an English section, which is a solutions journalism project. It focuses on the UN and humanitarian community and is published in English. Les flux is published on weekdays.
3. Explorations — a deep dive into a specific topic published in 6–12 episodes. These long read articles allow people to slowly discover a topic and are published on weekends. Topics include digital addiction and the Swiss people who wait for the end of the world. Some of them are turned into a quarterly print edition called La revue des explorations (The review of the explorations). These allow readers to slowly discover a topic and are sold online and in bookstores. | https://medium.com/we-are-the-european-journalism-centre/heidi-news-recipe-for-growing-its-members-during-the-pandemic-24a14f0bfb68 | ['Tara Kelly'] | 2020-09-29 12:51:56.134000+00:00 | ['Journalism', 'Resilience', 'Case Study', 'Membership', 'Media'] |
4 Powerful Books to Help You Be Successful | A Favorite Pastime
Avid readers would no doubt agree that reading books is one of the time-honored, evergreen pastimes that never gets old, mainly because of all the endless possibilities to choose from and because reading is simply enjoyable.
A Pew Research study shows that reasons people love to read are many — learning, discovery, entertainment, enrichment, expanding the worldview, and escaping reality.
Pandemic reading is probably a way of escaping reality for some, getting lost in the pages of a good book — or better yet, a great book.
Great books are powerful and reading is amazing, and the four books that I’m recommending below are great books that do great and powerful things to the brain when read.
These benefits can help you on your road to success in anything that you wish to do or achieve. | https://medium.com/books-are-our-superpower/4-powerful-books-to-help-you-be-successful-d894cde4d64 | ['Audrey Malone'] | 2020-12-29 20:46:35.824000+00:00 | ['Life Lessons', 'Book Recommendations', 'Advice', 'Reading', 'Books'] |
Calculate and run a Flutter migration project | Developing mobile apps with Flutter is a good choice for most projects that start from the scratch. But what about companies that are already run native apps? Does it make sense to migrate them to Flutter? How much would the migration costs be and how much can you save afterwards? We would like to share some insights from our past projects and give you answers to the above questions.
Determining the migration cost and amortization period of such an investment depends on many factors, such as the characteristics of your native code, your business situation, and your project setup. The more information you have, the more accurate your estimate will be. But what if you need to give a quick answer? Imagine you are standing in the elevator with your CTO, only 2 floors to go, and he wants to know the migration costs and the expected payback time of this investment? In this case, the best possible answer is that the cost is 40% of the effort it took to build a native app, and that this investment will pay off in 2 years.
40% and 2 years is a blind estimate, a starting point that will never be the final result, but a good point to start calculating. Based on this assumption, we will look at the three most important factors for migration costs:
1) How best to rewrite the code?
The most efficient and probably only viable way to migrate native apps to Flutter is to write new code in Dart. This may sound like it is super inefficient, but it’s actually the opposite. If your native source code is in good shape, well-structured, and sufficiently enriched with in-code documentation, rewriting the code will only take a small fraction of the time that was originally spent creating the first version of the code.
When calculating the initial effort, consider the effort for your first release and add the sum of all efforts for change requests. Typical maintenance efforts like updates should not be considered. If you rely on many external libraries or SDKs, you should add an additional effort to your calculation. On the other hand, you can reduce the estimated migration effort if you run large parts of your business logic on your back-end server.
I do not want to list all the factors for increasing or decreasing your initial blind estimate. We have put all of our experience into an online calculator. With our “Flutter Migration Cost Calculator” we have created a small online tool that helps you to make an initial estimate of the expected migration costs and realizable cost savings.
2) Consider your business situation
While the characteristics of your native code determine the migration cost, the main factor to calculate the amortization period is your business situation. Let us start with the assumption that you plan to run your business as usual for the next 1–2 years.
In this case, migrating your two native apps to a single code base will reduce your maintenance costs by almost half, since there is only one code to maintain and test, and your support unit also only needs to be trained on one code. So, your estimated cost savings are about half of what it costs to run your two native apps today. Your payback period can be easily calculated by dividing your migration effort by your monthly cost savings.
If you are planning major changes to your apps, the situation changes significantly. Migrating to Flutter while implementing your changes increases your potential cost savings, as you will not only save half of your future maintenance costs, but also half of the development costs for your planned changes. If you plan to change your native app code by about 1/3, then your migration to Flutter would pay for itself immediately.
3) Optimize your project set-up
If your previous calculations have shown that a flutter migration could be a good business case, there are still many things you can do to screw it up or outperform. Here is a short list that can help you avoid the biggest mistakes:
Mixed development teams:
The smoothest migration projects we have seen have been those where 50% of the developers came from the client’s existing iOS and Android development team and 50% were Flutter experts that we provided for the project. Having the native app creators on board accelerates the project because they know the code that needs to be migrated in and out. Combining this with external Flutter expertise will speed up development and provide a steep learning curve for the former native developers. This is critical as they will most likely be the ones maintaining the new Dart code once the project is complete.
Do a planning sprint up front:
Even if it is a minor violation of pure SCRUM principles, start with an architecture and planning sprint. List all the features and epics you will migrate in a chronological migration timeline. The main criterion for prioritizing the feature and epic list is that there are no, or as few as possible, dependencies from anything you do to anything you will do in the future. Forward dependencies should be avoided on both sides, technology, and business. That is why this sprint requires a lot of involvement from the business owner of the project. In most cases, this leads to starting with the security items and working forward from there.
Continuously delivery and testing:
Even though you are sticking to your existing code, you are producing brand-new code and brand-new bugs. To do this, do not underestimate the amount of testing you will need to do. Ideally, include crowd testing from the beginning. Keep your sprint cycles short, no longer than 2 weeks, and have real users testing all the time.
I hope I was able to answer some questions and give you some insight into this exciting topic. I would be happy if we all can expand the Flutter universe by migrating more existing apps out there to this platform. | https://medium.com/flutter-community/calculate-and-run-a-flutter-migration-project-8ef10fdcd8e6 | ['Tobias Kress'] | 2020-12-20 01:21:20.225000+00:00 | ['Project Management', 'Software Migration', 'Flutter', 'Mobile App Development'] |
Issue #2: Updates from the Dev Cave | Welcome to a jam-packed, head-spinning, ultra-caffeinated Updates from the Dev Cave. We have so much dev goodness going on at XYO that we actually had to cut back on what to talk about this week.
Sidenote: For that crying XYO engineer in the corner who cracked the MIT Time-lock puzzle in less than 45 seconds — while driving — sorry, fam. We’ll get to you next time.
Put your seatbelts on, folks, here’s a breakdown of XYO projects and other news!
Arie and LayerOne built a mapping framework and tools roadmap:
While XYO Founder and Architect Arie Trouw was visiting the new XYO Sacramento team (formerly LayerOne), the group dug deep into the roadmap. They immediately settled on how to quickly turn the LayerOne stack into an open tool for anyone to build mapping solutions on Ethereum. In addition to the short-term roadmap, the team worked on a framework for how the two tools will begin to become part of the largest XYO toolset.
We held an XYO Network demo:
The entire XYO team, from King’s Landing (SD headquarters) to Winterfell (Sacramento) was extremely pumped to see a demo of the XYO network in action. This was a breakthrough for XYO as a company, and a real moment for the team. Here’s what our own Lord of Winterfell, XYO Sacramento’s Graham McBain, had to say:
“Seeing such brilliant work consistently coming from the San Diego team inspires us to work harder and more intelligently to keep up. This breakthrough is more than inspiring, it’s another step in our journey to build something larger than ourselves.”
We have a brand new website:
Clean and crisp new graphics. Eye-catching colors that pop. A mesmerizing 3D graphic that you can stare at for hours and never, ever get bored. It’s all there — have a look!
We’ve released a Cooperative Data Handling white paper:
How do you incentivize good behavior on the blockchain? XYO crypto! Resident Research and Data Scientist Erik Saberski came up with a plan to reward users who share correct info across the XYO Network, and wrote a white paper to explain it all. Watch the video for Cooperative Data Handling in the XYO Network, or read the paper in full.
XYO Devs in the Community
Just over a week ago, XYO Network Founder and Architect Arie Trouw teamed up with Blockchain Architect and R&D Lead Sajida Zouarhi of Consensys for a deep dive into the decentralized ecosystem. Like a pair of tech superheros with the Power of Ultimate Knowledge, the two took questions, doled out answers, and gave the audience plenty to think about.
Dozens of people poured into San Diego headquarters for the talk, which covered some of the most exciting areas of blockchain today. Some of the questions discussed included:
What is the cost of privacy in a decentralized ecosystem?
How is blockchain technology forcing businesses to redefine their models in a global economy?
Will the concept of borders, from a physical, geo standpoint, even matter in the future, or will people be aligned by interest instead?
The talk was a massive hit, and it’s just one of many in the works!
XY Oracles not only play a huge part in the XYO Network, the essentially are the XYO Network. And since all-around XYO genius Arie Trouw knows the ins and outs of how they work, he headlined the San Diego Blockchain Developers Meetup to talk about all things XY Oracle.
At “Intro to Oracles”, an EdgeSecure Office event in downtown San Diego, Arie started with the fundamentals — what are oracles — and went deeper by explaining why they are important, what the difference is between centralized and decentralized ones, and more.
Stay tuned for more meetups featuring Arie and XYO talent!
Mark your calendar: On Wednesday, August 26th, we’re holding our next Developer Geospatial AMA! This is your chance to ask the nagging questions about the geospatial and blockchain universe, and get answers from the developers themselves.
Have a question for the hosts? DM us on Facebook, Twitter, or Reddit.
In the meantime, watch our previous AMA!
Thank you for reading — see you in two weeks!
Jenn Perez
Senior Content Manager
XYO Network | https://medium.com/xyonetwork/issue-2-updates-from-the-dev-cave-e4808547d588 | ['Jenn Perez'] | 2018-08-02 18:56:08.162000+00:00 | ['Ethereum', 'Blockchain', 'Bitcoin', 'Cryptocurrency', 'Development'] |
Are You A Distant Dad? | Are You A Distant Dad?
Fear of failure hides our power. Practicing love shares our power.
Photo credit: iStock
By Robert Rannigan
Keeping distance from danger is a universal self-protective response. Fathers who hold themselves away from their children often experience their own children as dangerous — -emotionally. What can men do to resume their full emotional power as fathers?
Men who stay away from their children don’t understand or believe in themselves enough. They haven’t had sufficient exposure to healthy fathering. They haven’t watched and learned how to communicate their own loving presence, intimacy, and learning. Men who distance from their children, whether it’s in front of the T.V., at work or across the country, are avoiding hurt in themselves, which lingers from not receiving their own fathers’ loving attention.
As a therapist, my work is helping people understand where self-doubt is rooted. All of us have experienced feeling inferior. Men who are distant fathers have a history which includes a distant father.
Distancing
It doesn’t matter if the father was never there, left part way through development, died, or just lived distantly right under the same roof. The impact of father remoteness carries meaning into the life of children. Without a steady diet of dad’s loving attention, the experience of his detachment can become a threat to life itself. Daddy distance easily becomes a statement that “outsider children” have a tragic, permanent flaw. Bringing love to this experience is the only healing. Bringing love to his own children can feel impossible to a man experiencing core unworthiness.
And so distanced sons grow to be men unsure what healthy intimacy looks like. Feeling unimportant (because he was unloved) he unwittingly avoids the healing love of relationship with his own children. Going near loving acceptance within him brings his own pain nearer, and because he hasn’t dealt with it lovingly, it remains overpowering. Afraid, distant dad treats his wounded self and his children as peripheral rather than intimate and important.
The man living his experience as “I wasn’t good enough for dad to choose me” must either wake himself from the nightmare with healing self-love or continue to dream walk around his love-hungry children. Deeply wounded men — -isolated from father love, distanced from father-child intimacy — -are part of social confusion which can be healed by men courageous enough to look at their own fear and sadness with love.
Distant dads are an intergenerational habit and pattern of emotional atrophy. Children acting out in school, learning disabled, isolated, all need the balance and closeness of their fathers. | https://medium.com/a-parent-is-born/are-you-a-distant-dad-68258ccd56ef | ['The Good Men Project'] | 2020-12-12 21:57:06.090000+00:00 | ['Parenting', 'Advice', 'Self-awareness', 'Self Doubt', 'Fatherhood'] |
How To Ace Your Internship | Here’s some candid advice from the horse’s mouth on how to handle internship jitters like a pro.
Induction day
This is that one day where department heads get to talk about themselves to people who have to listen. Like most interns, I doodled and daydreamed those sessions away, but I’d advise you to at least note down the bullet points on each slide they show. It’ll help you get a basic idea of what each department does (and in most cases, a basic idea is enough). Plus, that way, when it’s time for them to throw out the dangerous “Are there any questions?” line, you can at least appear to have paid attention by asking “Could you please explain what (bullet point) is again?” Works almost every time.
Agreeing on the project aims and objectives
At some point after the induction, you’ll be assigned a project mentor and handed a sheet covered with detailed bullet points on what you’re supposed to be doing for the next several weeks. The best way to get down to the essentials is to ask your mentor — what is the main thing that they expect you to do? Schedule time for a detailed conversation during which you understand what’s expected of you and come to an agreement on how you will proceed. I cannot emphasise how important it is that both you and your mentor are on the same page about this — you do NOT want to realise in the last week of your internship that you’ve done one thing while your mentor was expecting something else.
Asking your mentor for help
This is where so many interns (including myself) trip up. We think that asking for help is a sign of incompetence. Believe me — it is not. Your mentor will appreciate you asking questions far, far more than you acting like you’re in control and then screwing up at the end. Ask questions whenever you feel stuck — and ideally, schedule an hour’s session each week to talk about how it’s been going and how the next week can go. If your mentor is even halfway committed to helping you do a great job, he/she will welcome requests for help from you.
Approaching others for help
You’ll often have to work with other colleagues, whether for answering a questionnaire, getting contacts, help with setting up a web page or anything else. In all likelihood, these people will view you as an interruption to their own busy days, so the best way to get their cooperation is to be cheerfully direct about who you are and what you want. Tell them clearly how they can help you, and ask them to block out some time for it. Follow up politely a couple of times, and thank them nicely once the work is done. Oh, and pro tip — for things like surveys, hand them out at least ten days before you need the results and tell them you need their response in five days. That, plus multiple follow-ups, is the only way you’ll get a substantial number of responses.
Making mistakes
The two fundamental truths of your internship can be laid down as follows. One — you will make mistakes, and lots of them. Two — the sun will keep rising and setting regardless. Trust me when I say that almost no mistake you make is irreparable. Even if you completely forget to send an agency an image set they need for tomorrow’s media launch (yep, I did that), it is NOT as big a deal as you think it is. Sure, your mentor will give you a talking-to, and you’ll feel as small as a baby ant for a few hours. But after that? Absorb the lesson you learnt, take extra care to not do it again, and move on. It’s as simple as that.
Preparing the final report
This is where you use as much corporate jargon and as many stylish PowerPoint templates as possible to show that you’ve done a marvellous job. It forms the major portion of your final evaluation, so leave no stone unturned to make it as impressive as possible. Get your mentor to spruce it up as much as possible, and download the fanciest templates you can find. Obviously, don’t compromise solid facts for fluff. And make sure you triple check each and every point you mention with your mentor so that you can handle any question your evaluators throw at you.
Presenting your report
Most internships will require you to talk about your project and your recommendations in front of an evaluation panel. While the presentation will lay out the skeleton of your talk, you have to substantiate it with enough verbal explanations, examples and data. Ensure that you practise at least a couple of times with your mentor beforehand so that you can be ready with answers to the questions likely to be asked. As for the talk itself, don’t worry too much about stammering or using fillers in your speech. While you should speak as calmly and confidently as you can, what matters more is that you explain your points clearly and answer questions to the panel members’ satisfaction. Do so, and they won’t mind the occasional pause or filler!
Asking for feedback
After your presentation is over, your panel should ideally provide you feedback on the spot. If not, ask them for it. While they won’t immediately reveal all the results of their evaluation, they should give you some general pointers on what you did well and where you could improve. In addition, have a one-on-one session with your mentor afterwards and get some more detailed feedback. Since he/she has been working extensively with you the past several weeks, he/she will have a good idea of what your strengths and weaknesses are, how you approach problems and handle tough situations and your general suitability for the role you were working in. Most importantly, make sure you stay in touch with your mentor even after the internship is over. He/she will be a great person to ask about career advice, help with professional projects or just general life advice. | https://medium.com/maice/how-to-ace-your-internship-e1de6b683e2c | ['Deya Bhattacharya'] | 2018-10-12 04:31:01.909000+00:00 | ['Productivity', 'Corporate Culture', 'Internships', 'Career Advice', 'Careers'] |
Recommending Trump’s Next Friend with Machine Learning | The How
We’ll demonstrate the concepts I outlined above with real life data. This time we’ll be working front-to-back.
Training Data
In my previous post I have shown how I constructed ‘social’ links by identifying pairs of entities which appeared in the same news article, on the 26th of March. I repeated the same exercise for the following 6 days, all the way up to the 1st of April. This netted me over 400,000 (non-unique) links, distributed across the week like so:
Total number of entity links by day
I deliberately didn’t visualise the 7th day, as we won’t be creating a graph from that, we’ll only check newly formed links compared to day 6.
We’ll do a walk through the exercise for the first day for demonstration’s sake.
Let’s build the training data set by importing the CSV with all the links, filtering on the first day and grouping the same links together, assigning weights to them based on the count. I’ll also be discarding any links which only occurred once to focus on the more prominent ones:
import pandas as pd
import numpy as np df_links = pd.read_csv('/content/drive/My Drive/News_Graph/links.csv') date = '2020-03-31' df_links_train = df_links[df_links['date']==date] df_links_train = df_links_train.groupby(['from', 'to']).size().reset_index() df_links_train.rename(columns={0: 'weight'}, inplace=True) df_links_train = df_links_train[df_links_train['weight'] > 1]
Next, we’ll build a graph structure from the resultant links, then find the largest connected subgraph — this is a requirement for calculating our features. We’ll find that over 95% of nodes are connected on all days, anyway.
import networkx as nx G = nx.Graph() for link in tqdm(df_links_train.index):
G.add_edge(df_links_train.iloc[link]['from'],
df_links_train.iloc[link]['to'],
weight=df_links_train.iloc[link]['weight']) subgraphs = [G.subgraph(c) for c in nx.connected_components(G)] subgraph_nodes = [sg.number_of_nodes() for sg in subgraphs] sg = subgraphs[np.argmax(np.array(subgraph_nodes))] df_links_sg = nx.to_pandas_edgelist(sg) G = nx.Graph() for link in tqdm(df_links_sg.index):
G.add_edge(df_links_sg.iloc[link]['source'],
df_links_sg.iloc[link]['target'],
weight = df_links_sg.iloc[link]['weight'])
The above steps yield us a graph, G, which contains a weighted and undirected network that encapsulates the largest connected sub-graph representing the ‘social network’ of the news on the 26th of March. Specifically, it contains 8,045 edges between 2,183 nodes.
Network plot of the news on the 26th of March
Beautiful, isn’t it? Not very useful, though. Let’s see if we can find out which of these pink guys will make friends the next day.
Similarly to before, we define a data set of links weighted by occurrence, but this time we also filter to links where both entities appeared in the network on the previous day. After that, we will remove all links from the network on the 27th which have already appeared on the 26th and therefore aren’t newly formed edges. Finally, we’ll label the remaining links with a 1, to represent a link that was newly formed:
date_target = '2020-03-27' df_links_tar = df_links[df_links['date']==date_target]
df_links_tar = df_links_tar.groupby(['from', 'to']).size().reset_index()
df_links_tar.rename(columns={0: 'weight'}, inplace=True)
df_links_tar = df_links_tar[df_links_tar['count'] > 1]
df_links_tar.reset_index(drop=True, inplace=True) # filter to only include nodes which exist in training data all_nodes = G.nodes() df_links_tar = df_links_tar[(df_links_tar['from'].isin(all_nodes)) & (df_links_tar['to'].isin(all_nodes))] # create edge columns and filter out those who also appear in training data df_links_tar['edge'] = df_links_tar.apply(lambda x: x['from'] + ' - ' + x['to'], axis=1) df_links_sg['edge'] = df_links_sg.apply(lambda x: x['source'] + ' - ' + x['target'], axis=1) # remove edges which exist in training data df_links_tar = df_links_tar[~df_links_tar['edge'].isin(df_links_sg['edge'])] # label remaining edges with 1 df_links_tar['label'] = 1
With the above steps we have produced a dataframe with 2,183 pairs of entities which have not yet formed a link on the 26th, but will form one on the 27th, and labelled them with a 1. What about those which won’t form a link on the 27th?
Well, theoretically any unconnected pair in our graph could form a link the next day, for all we know. So we need to find all possible combinations of two nodes in the graph from the 26th, which aren’t already included in the pairs we identified as forming a link on the 27th, and label them 0.
Let’s pause here for a second and consider something.
We have 2,382 entities in our graph and we know 2,183 pairs of them will form a new link. How many won’t then? There is an enormous number of possibilities: 2382-choose-2, to be exact. That number is nearly 3 million, which would mean that the ratio of positive to negative samples would be under 1:1000! That’s a very imbalanced data set to train a classifier on. We have a few options to deal with this, but for this exercise I opted for down-sampling my data, because I knew I was likely to have a fair number of rows by the time I finished the whole week.
Note: this may not be best practice — I primarily went with down-sampling for demonstrative purposes and to reduce feature engineering and model training times. Something like SMOTE might be more appropriate here.
We will find all possible combinations of nodes in the network, remove those which have a link on the 26th and those which will form a link on the 27th. We’ll then randomly take a sample of 9x2,183 (nine times the number of newly formed links) from these unconnected pairs, to give us a training ratio of 1:10 — much better.
from itertools import combinations combs = list(combinations(all_nodes, 2)) all_combinations = pd.DataFrame(data=combs, columns=['from', 'to']) all_combinations['edge'] = all_combinations.apply(lambda x: x['from'] + ' - ' + x['to'], axis=1) all_combinations = all_combinations[~all_combinations['edge'].isin(df_links_sg['edge'])] all_combinations = all_combinations[~all_combinations['edge'].isin(df_links_tar['edge'])] all_combinations.reset_index(inplace=True, drop=True) sample_size = 9*len(df_links_tar.index) all_combinations = all_combinations.iloc[np.random.choice(len(all_combinations.index),
size=sample_size,
replace=False)] all_combinations['label'] = 0
Finally, we concatenate the two dataframes:
edges_to_predict = all_combinations.append(df_links_tar[['from', 'to', 'edge', 'label']])
The dataframe edges_to_predict will be home to the list of all new links, labelled with a 1, and a sample of edge pairs which won’t form a link, labelled with a 0.
Feature Engineering
We’ll recall that to create features we need to calculate the number of common neighbours, the Jaccard Coefficient, the Adamic/Adar Index and the Preferential Attachment Score for each pair of our training data. The NetworkX library will be our best friend for this. All we need to do is create a bunch of node pairs, that is, a list of tuples to identify the source and target of each:
bunch = [(source, target) for source, target in zip(edges_to_predict['from'].values, edges_to_predict['to'].values)]
Feature calculation is then laughably easy:
# Preferential Attachment preferential_attachment = nx.preferential_attachment(G, bunch) pref_attach = [] for u, v, p in preferential_attachment:
pref_attach.append(p) edges_to_predict['preferential_attachment'] = pref_attach # Jaccard Coefficient jaccard_coefficient = nx.jaccard_coefficient(G, bunch) jaccard = [] for u, v, p in jaccard_coefficient:
jaccard.append(p) edges_to_predict['jaccard_coefficient'] = jaccard # Common Neighbours cn = [] for source, target in zip(edges_to_predict['from'].values, edges_to_predict['to'].values):
cn.append(len(list(nx.common_neighbors(G, source, target)))) edges_to_predict['common_neighbours'] = cn # Adamic Adar Index adamic_adar = nx.adamic_adar_index(G, bunch) aa = [] for u, v, p in adamic_adar:
aa.append(p) edges_to_predict['adamic_adar_index'] = aa
Finally, we have real features and real targets! The dataframe now looks like this:
Classification
We’re ready to train our machine learning model. Off-screen, we have repeated the above process for 5 subsequent days, with a slight modification for the 6th day, our training data: on the 31st of March, I did not down-sample the data, as it won’t be a part of our training set. We need to pretend like we don’t know what links will be formed on the 1st of April, therefore we need to use all the data available.
We’ve seen in our sample training data that our feature columns assume widely varying ranges of values. To enhance our learning model, we’ll scale them to values with a zero mean and a standard deviation of 1, using scikit-learn:
from sklearn.preprocessing import StandardScaler df_test = pd.read_csv('/content/drive/My Drive/News_Graph/edges_test.csv') df_train= pd.read_csv('/content/drive/My Drive/News_Graph/edges_train.csv') features = ['preferential_attachment',
'jaccard_coefficient',
'common_neighbours',
'adamic_adar_index'] X_train = df_train[features].to_numpy()
Y_train = df_train['label'].to_numpy() X_test = df_test[features].to_numpy()
Y_test = df_test['label'].to_numpy() scaler = StandardScaler() X_train = scaler.fit_transform(X_train)
X_test = scaler.fit_transform(X_test)
As a reminder, we are training a model on a slightly imbalanced data set and testing on a realistic, very imbalanced data set:
Ratio of positive samples in training set: 0.1.
Ratio of positive samples in test set: 0.00092.
We’re ready to initialise our model: logistic regression.
Yes, logistic regression. Not what you expected when I touted machine learning in my title, is it? I promise I tried — I really did. I tested a Random Forest model, a Support Vector Machine, I spent a week tuning Light Gradient Boosting and XGBoost models. I grid-searched a deep neural network. None of them could beat the ROC of logistic regression. So there you have it. I suppose it’s quite fitting for my old-school features.
from sklearn.linear_model import LogisticRegression clf = LogisticRegression(class_weight='balanced') clf.fit(X_train, Y_train) lr_prob = clf.predict_proba(X_test)[:, 1] lr_pred = clf.predict(X_test)
The class_weight parameter ensures that the error penalties in the model are proportional to the ratio of positive/negative training samples in our data, making the model more sensitive to getting a positive prediction wrong, and increasing performance overall.
Let’s see how we did on our test data.
ROC Curve of logistic regression model
We have a very respectable ROC AUC score of 0.9312. Let’s plot the confusion matrix.
Logistic regression confusion matrix
This allows us to calculate our precision and recall:
Precision: 0.0076
Recall: 0.8394
Not bad — not great, but not bad. We managed to identify almost 84% of future links. However, we identified so many potential links that less than 1% of our positive predictions actually came true the next day. | https://towardsdatascience.com/recommending-trumps-next-friend-with-machine-learning-6317cdc640c3 | ['Marcell Ferencz'] | 2020-05-10 00:07:44.941000+00:00 | ['Link Prediction', 'Data Science', 'Imbalanced Class', 'Python', 'Graph Theory'] |
A Pandemic and Neo-Conservative Capitalism are Killing the Working Class | Disasters cause loyalists. People look to our leaders for help, and — whether they receive it or not — are easily swayed into looking optimistically towards the future. Political figures give heartwarming speeches about how we will get through this as a nation, as we “always do”, and, among the heroic doctors and samaritans spending countless hours working to solve the crises, billionaires who chip into various charities are regarded as benevolent humanitarians. Rinse and repeat.
But for now, the subways remain crowded with working people, all of whom would rather be home. They’re stocking the aisles of Target with the necessities that those at home, with the luxury of safety, will be ordering online. Fast-food workers will still be in contact with dozens of customers. First responders will continue to work tirelessly, knowing that the next emergency could be the one that infects them. Everyone knows how you get the virus; only some have the security to keep away from it. Stimulus package efforts, supposedly aimed at alleviating the impact of the working people, have been underwhelmingly, unsurprisingly mediocre. The Federal Reserve’s corporate fund, however, will manage to provide around $4-$4.5 Trillion to the wealthiest corporations in the nation. As the Washington Post’s Helaine Olen puts it, “The just-passed stimulus bill is not only a missed opportunity to permanently give American workers the benefits enjoyed by those in other wealthy countries, but yet another successful cash grab by corporate interests and the wealthiest among us. The bill is facilitating a corporate coup.”
Research suggests that the poorer you are, the likelier you are to catch, and die from, COVID-19. In New York, nineteen of the twenty neighborhoods with the lowest percentage of positive COVID-19 tests have been in wealthy zip codes. Meanwhile, the highest concentration of cases is in neighborhoods with low average incomes, largely populated with immigrants and minorities. Since America does not provide a single-payer healthcare system, coronavirus patients only visit if they’re insured. Those who suffer from the virus are doomed to an ugly dilemma: either go to the hospital and try to survive the medical expenses, or stay home and try to survive the virus. Those who have been laid off are scraping together whatever they can to pay next month’s rent. Whoever may keep their jobs, in these neighborhoods, are reliant on buses, subways, or some other form of mass transit where the virus can spread with ease. Substandard housing, and the sharing of homes between families or roommates, also facilitates the spread of the virus. Minorities are, like in most nationwide disasters, most affected. In Milwaukee County, Wisconsin, black citizens make up for 80% of coronavirus-related deaths, while only making up 26% of the population. Illinois, Michigan, and Louisiana make up the three states where the rate of black coronavirus-related deaths triples their population. Nationwide, black Americans make up 18% of the population, but they have counted for 33% of COVID hospitalizations.
The pandemic’s impact on low-income communities and working-class citizens is inextinguishable, economically, than the recessions of years prior. However, America’s stock market has held strong. It’s easy to see why: the airline industry, for example, has received $58B in funds from the federal government in the stimulus package — with a reprieve from paying the fuel or cargo tax. The Federal Reserve has pumped $2T into lending markets to prop up the unstable stock market, almost like a scene from Weekend at Bernie’s. While Congressional Republicans and Democrats alike are quick to defend the needs of the wealthy, blue-collar workers are either being laid off or working in disastrous conditions. A record 17 million jobless claims have been filed in America in March, with those who remain in work fighting for their right to safety. Two weeks ago, bus drivers in Birmingham and Detroit went on strike over the lack of protective gear and benefits they received on the job. Despite the growing lethality of the working conditions in hospitals, grocery stores, or meatpacking factories, many companies have taken their workers for granted. Nonetheless, corporate America has left its workers in the dust, flexing its muscles with a bailout that will provide substantial funds at the taxpayer’s expense. Talk about adding insult to injury. America’s Randian idea that one’s worth is defined by their rank and productivity as an employee is grossly misleading, and contradicted by the criminal lack of caution conducted by billion-dollar organizations that remain open. They simply do not care about you.
Pseudophilanthropists such as Jeff Bezos and Mark Zuckerberg are chipping in a minute’s worth of wages to various charities and organizations. Don’t be fooled by their charity work; these tax write-offs are used for nothing more than your complacency. Some ask — “why not be grateful? They could have donated nothing at all.” — that’s exactly the kind of complacency they’re looking for. They will let you hold a few pennies, as long as you don’t ask for a dollar bill. The pandemic’s effects on lower-income communities — many of which work under Bezos and Zuckerberg, by the way — could be reduced to a fraction, if billionaires and their corporations provided livable wages and full benefits to their employees. During the crisis, they’ve flown themselves to remote homes for an early summer vacation.
Many people were inevitably going to die from this virus. The working class didn’t have to be first in line. | https://ethanhekker.medium.com/a-pandemic-and-neo-conservative-capitalism-are-killing-the-working-class-46d4aad82020 | ['Ethan Hekker'] | 2020-04-10 17:13:54.570000+00:00 | ['Opinion', 'Coronavirus', 'Pandemic', 'Covid-19'] |
Cuthbert’s Holy Island | Monastery Wall, Lindisfarne. Author photograph.
Like many monastic foundations in England, the church and monastery fell into disrepair and ruin after the Reformation took hold in Britain. The king suppressed the monasteries and expelled the monks. In the 1800s, the church tower collapsed, and today an isolated stone arch frames the sky, linking the weather-worn remains of two medieval columns.
Arch, Priory Church, Lindisfarne. Author photograph.
The abbot of Melrose Abbey had sent St Cuthbert to assume the post of Prior at Lindisfarne. Part of his responsibilities included reforming the life and customs of the monks in his charge, bringing their practices into conformity with the Roman practices that were displacing the indigenous Celtic ideas. Cuthbert’s reforms were not universally embraced; some of his new charges resisted Mediterranean innovations. Ultimately, Cuthbert found that the job of wrestling fractious monks was too much for him. He sought permission to step down from his position and he left the community to live as a hermit. His first hermitage was constructed on a small island that floats right off the Lindisfarne beach. Known as St Cuthbert’s Island, one can still wade out to this minute stone circle at low tide. Just as Lindisfarne is cut off from the mainland twice a day, so too would Cuthbert have been free of contact with the monks when the water raced across the marshes.
That isolation proved insufficient, and Cuthbert sailed east to one of the Farne Islands. This group of shattered stones, flung down in a group off the Northumbrian coast, ensured privacy. Cuthbert lived in splendid isolation on the rocks, surrounded by sea birds and demons, until King Egfrid appointed him the next Bishop of Lindisfarne in 685. Messages announcing the good news were sent to the island. Cuthbert failed to respond. Finally, the king himself was forced to take boat and employ his royal presence to pry Cuthbert, like a limpet, off his spray-soaked rocks.
Cuthbert reluctantly assumed the post of Bishop of Lindisfarne, and spent the final two years of his life in pastoral duties. Having received warning that his end was approaching, he boarded a boat for his beloved hermitage on Farne Island. According to Bede, one of the monks asked him when next they would see Cuthbert, and the aged bishop replied, “When you bring my body back here.” Two months later, that unhappy day arrived, and the monks returned Cuthbert’s corpse to Lindisfarne, where it was interred beside the altar in the church. As we saw in an earlier installment of this series, his body did not remain long on Lindisfarne. Today it resides in Durham Cathedral.
Lindisfarne Castle. Author photograph.
The other significant landmark on the island stands seaward of the monastery ruins. Lindisfarne Castle was built in the sixteenth century to serve as a defensive fortification against the Scots. Many of the stones that make up its walls were taken from the defunct monastery. In 1901, the castle was acquired by Edward Hudson, and he employed Sir Edwin Lutyens to refurbish it. In addition to rehabilitating the castle, Lutyens also had the idea to invert some old herring fishing boats, and convert them into sheds, which remain to this day.
Herring Boat Sheds, Lindisfarne. Author photograph.
St Cuthbert’s Way is a fine walk through dazzling country. I am not certain, however, that it ever really felt like more than an arduous hike for me. Other than Lindisfarne, and possibly St Cuthbert’s Cave, there is no real connection between the trail and St Cuthbert’s life. Even the point of origin, Melrose Abbey, was not in existence when Cuthbert began his march.
Most of what we know about Cuthbert has been filtered through the myth-making of the Venerable Bede, and then further obscured by the passage of centuries. Older pilgrimage routes, like the Camino de Santiago de Compostella, have a history that reinforces their significance. The Spanish path follows a well-worn route that has been sanctified by the footsteps of millions of pilgrims over more than a thousand years. Each town, every church and monastery, has a history, a place in an overarching story. I think that is what was missing on St Cuthbert’s Way. It is new, ahistorical, a route that has very little real connection with the life of Cuthbert.
On the other hand, it was a lovely hike, a fine way to spend four days in the British countryside. | https://medium.com/the-peripatetic-historian/cuthberts-holy-island-fd3d482c064a | ['Richard J. Goodrich'] | 2020-12-21 13:05:39.164000+00:00 | ['Nonfiction', 'History', 'Outdoors', 'Christianity', 'Travel'] |
I Don’t Want To Be Brutal To Myself, But The Other Alternative Seems Unacceptable | Guilt runs through my veins and shame cuts the deepest. I can’t let go or re-wire my mind in a way that frees me from them.
Some days, I don’t see the point of continuing. I see flickers of light here and there and they have been sufficient enough to keep me going.
But I relapse. Again and again. I don’t know if the madness will be over. In some ways, it’s grown even worse. Perhaps I should cut myself some slack and forgive myself for falling into this mental snare again.
I can’t give myself grace. I don’t feel like I deserve it.
I re-write my inner dialogue and read it and see the same damn script all over again: I’m lazy. I should be doing more. I’m not talented enough. I could be doing much better. I should be doing all I can to make up for the lack of output early on. Nobody will think I’m worth anything unless I do something impressive.
Is there an utter lack of hope?
I’m reminded of how much I matter. That I belong. And I’m so grateful for that. It’s saved my life so many times.
That’s what I hold onto, but then an external cue or trigger cuts me down again, reminding me of how much I lack, how lazy I seem compared to others, and how far behind I am when it comes to reaching bigger goals.
Using negativity to motivate myself isn’t working. But I don’t know how else to soldier on because I’m wired in such a way that shame and mental punishment are the only correct ways to proceed. To toughen up with brute mental force. To treat every emotion as an imposter. To apply willpower as if a gun was pointed to my head.
Trust me, when I am called “soft” or “fragile,” I just want to scream, “You clearly haven’t been in my head, look at how much I’ve been growing increasingly harsh towards myself, acknowledging the cruel and callous reality around me.”
But why does this bother me so much? Why does it matter how people judge my energy, productivity, and overall progress?
I hate being shamed for being too slow.
I hate being shamed for not reaching my goals faster.
And most of all, I hate when people assume that I’m lazy and making excuses, but as most clinically depressed people can concur, it’s not easy to “snap out of depression” and just bulldoze your way through the biggest obstacles.
I still push through writer’s block. I still dream, scheme, and seek to overpower my own weaknesses. I despise any weakness and my pride is still wounded from seeing how insufficient I truly am — my two choices are to either eradicate as much of my weaknesses as I can or to accept them. And no matter how much I want to believe that I am lovable (and can even succeed) in spite of those weaknesses, I just can’t. I hate seeing how lacking I am and nothing can change my brutal self-treatment unless I actually grow to become more than what I am.
Not much is motivating me except my dark side. I wish it weren’t so, but shame keeps cutting me to the core that ignoring it and pretending that everything is just “love, peace, joy, and light” won’t suffice because the ravenous dark side of me has only grown stronger the more I attempt to do and the more harshly I look at my past self and berate her for every wrongdoing, setback, excuse, delay, and insufficiency that I can’t stand even now.
Do I want inner peace? Yes.
Do I want total self-acceptance and not to externalize my worth or put it up for anyone to judge? Hell yes.
However, I can’t tolerate laziness. I can’t tolerate not being as good as I want to be. I can’t tolerate losing myself but at the same time I also can’t tolerate others judging my true self as insufficient, unworthy, and worse than average.
But again, it’s all on me. Rationally, nobody is expecting so much of me — at least not that I’m aware of. And even if someone were, why should I let a negative opinion of me cause me to self-destruct to atone for my sorry existence? I should not have to live this way.
It’s going to take longer than expected to get through the worst of the storm. 2020 has not been my year and I highly doubt that I will “crush it” in 2021. I’ve set smaller goals that are easier to reach, and the humiliation that comes from not doing as much as other people is something I need to get over because expecting too much of myself seems to have the opposite effect — making me less productive, more shameful, and more hesitant.
Am I ready to love myself as I am? I don’t know. This alternative seems unacceptable, as if it’s the easy way out. But then again, I hate how dependent I am upon external measures of greatness and how much I keep losing myself because of it. So perhaps, this alternative really is the only way out.
All I can do is take it day by day. It’s a trying process, but I’m tired of the brutal way I treat myself because all I’m doing is dying inside — I need to save myself before it’s too late. | https://medium.com/song-of-the-lark/i-dont-want-to-be-brutal-to-myself-but-the-other-alternative-seems-unacceptable-f8f742ba9c84 | ['Lark Morrigan'] | 2020-12-01 04:34:57.561000+00:00 | ['Self', 'Depression', 'Mental Health'] |
A Simple Guide On Using BERT for Binary Text Classification. | Update Notice II
Please consider using the Simple Transformers library as it is easy to use, feature-packed, and regularly updated. The article still stands as a reference to BERT models and is likely to be helpful with understanding how BERT works. However, Simple Transformers offers a lot more features, much more straightforward tuning options, all the while being quick and easy to use! The links below should help you get started quickly.
Update Notice I
In light of the update to the library used in this article (HuggingFace updated the pytorch-pretrained-bert library to pytorch-transformers ), I have written a new guide as well as a new repo. If you are starting out with Transformer models, I recommend using those as the code has been cleaned up both on my end and in the Pytorch-Transformers library, greatly streamlining the whole process. The new repo also supports XLNet, XLM, and RoBERTa models out of the box, in addition to BERT, as of September 2019.
1. Intro
Let’s talk about what we are going to (and not going to) do.
Before we begin, let me point you towards the github repo containing all the code used in this guide. All code in the repo is included in the guide here, and vice versa. Feel free to refer to it anytime, or clone the repo to follow along with the guide.
If your internet wanderings have led you here, I guess it’s safe to assume that you have heard of BERT, the powerful new language representation model, open-sourced by Google towards the end of 2018. If you haven’t, or if you’d like a refresher, I recommend giving their paper a read as I won’t be going into the technical details of how BERT works. If you are unfamiliar with the Transformer model (or if words like “attention”, “embeddings”, and “encoder-decoder” sound scary), check out this brilliant article by Jay Alammar. You don’t necessarily need to know everything about BERT (or Transformers) to follow the rest of this guide, but the above links should help if you wish to learn more about BERT and Transformers.
Now that we’ve gotten what we won’t do out of the way, let’s dig into what we will do, shall we?
Getting BERT downloaded and set up. We will be using the PyTorch version provided by the amazing folks at Hugging Face.
Converting a dataset in the . csv format to the . tsv format that BERT knows and loves.
format to the . format that BERT knows and loves. Loading the .tsv files into a notebook and converting the text representations to a feature representation (think numerical) that the BERT model can work with.
files into a notebook and converting the text representations to a feature representation (think numerical) that the BERT model can work with. Setting up a pretrained BERT model for fine-tuning.
Fine-tuning a BERT model.
Evaluating the performance of the BERT model.
One last thing before we dig in, I’ll be using three Jupyter Notebooks for data preparation, training, and evaluation. It’s not strictly necessary, but it felt cleaner to separate those three processes.
2. Getting set up
Time to get BERT up and running.
Create a virtual environment with the required packages. You can use any package/environment manager, but I’ll be using Conda.
conda create -n bert python pytorch pandas tqdm
conda install -c anaconda scikit-learn
(Note: If you run into any missing package error while following the guide, go ahead and install them using your package manager. A google search should tell you how to install a specific package.) Install the PyTorch version of BERT from Hugging Face.
pip install pytorch-pretrained-bert To do text classification, we’ll obviously need a text classification dataset. For this guide, I’ll be using the Yelp Reviews Polarity dataset which you can find here on fast.ai. (Direct download link for any lazy asses, I mean busy folks.)
Decompress the downloaded file and get the train.csv, and test.csv files. For reference, the path to my train.csv file is <starting_directory>/data/train.csv
3. Preparing data
Before we can cook the meal, we need to prepare the ingredients! (Or something like that. <Insert proper analogy here>)
Most datasets you find will typically come in the csv format and the Yelp Reviews dataset is no exception. Let’s load it in with pandas and take a look.
As you can see, the data is in the two csv files train.csv and test.csv . They contain no headers, and two columns for the label and the text. The labels used here feel a little weird to me, as they have used 1 and 2 instead of the typical 0 and 1. Here, a label of 1 means the review is bad, and a label of 2 means the review is good. I’m going to change this to the more familiar 0 and 1 labelling, where a label 0 indicates a bad review, and a label 1 indicates a good review.
Much better, am I right?
BERT, however, wants data to be in a tsv file with a specific format as given below (Four columns, and no header row).
Column 0: An ID for the row
Column 1: The label for the row (should be an int)
Column 2: A column of the same letter for all rows. BERT wants this so we’ll give it, but we don’t have a use for it.
Column 3: The text for the row
Let’s make things a little BERT-friendly.
For convenience, I’ve named the test data as dev data. The convenience stems from the fact that BERT comes with data loading classes that expects train and dev files in the above format. We can use the train data to train our model, and the dev data to evaluate its performance. BERT’s data loading classes can also use a test file but it expects the test file to be unlabelled. Therefore, I will be using the train and dev files instead.
Now that we have the data in the correct form, all we need to do is to save the train and dev data as .tsv files.
That’s the eggs beaten, the chicken thawed, and the veggies sliced. Let’s get cooking!
4. Data to Features
The final step before fine-tuning is to convert the data into features that BERT uses. Most of the remaining code was adapted from the HuggingFace example run_classifier.py, found here.
Now, we will see the reason for us rearranging the data into the .tsv format in the previous section. It enables us to easily reuse the example classes that come with BERT for our own binary classification task. Here’s how they look.
The first class, InputExample, is the format that a single example of our dataset should be in. We won’t be using the text_b attribute since that is not necessary for our binary classification task. The other attributes should be fairly self-explanatory.
The other two classes, DataProcessor and BinaryClassificationProcessor, are helper classes that can be used to read in .tsv files and prepare them to be converted into features that will ultimately be fed into the actual BERT model.
The BinaryClassificationProcessor class can read in the train.tsv and dev.tsv files and convert them into lists of InputExample objects.
So far, we have the capability to read in tsv datasets and convert them into InputExample objects. BERT, being a neural network, cannot directly deal with text as we have in InputExample objects. The next step is to convert them into InputFeatures.
BERT has a constraint on the maximum length of a sequence after tokenizing. For any BERT model, the maximum sequence length after tokenization is 512. But we can set any sequence length equal to or below this value. For faster training, I’ll be using 128 as the maximum sequence length. A bigger number may give better results if there are sequences longer than this value.
An InputFeature consists of purely numerical data (with the proper sequence lengths) that can then be fed into the BERT model. This is prepared by tokenizing the text of each example and truncating the longer sequence while padding the shorter sequences to the given maximum sequence length (128). I found the conversion of InputExample objects to InputFeature objects to be quite slow by default, so I modified the conversion code to utilize the multiprocessing library of Python to significantly speed up the process.
We will see how to use these methods in just a bit.
(Note: I’m switching to the training notebook.)
First, let’s import all the packages that we’ll need, and then get our paths straightened out.
In the first cell, we are importing the necessary packages. In the next cell, we are setting some paths for where files should be stored and where certain files can be found. We are also setting some configuration options for the BERT model. Finally, we will create the directories if they do not already exist.
Next, we will use our BinaryClassificationProcessor to load in the data, and get everything ready for the tokenization step.
Here, we are creating our BinaryClassificationProcessor and using it to load in the train examples. Then, we are setting some variables that we’ll use while training the model. Next, we are loading the pretrained tokenizer by BERT. In this case, we’ll be using the bert-base-cased model.
The convert_example_to_feature function expects a tuple containing an example, the label map, the maximum sequence length, a tokenizer, and the output mode. So lastly, we will create an examples list ready to be processed (tokenized, truncated/padded, and turned into InputFeatures) by the convert_example_to_feature function.
Now, we can use the multi-core goodness of modern CPU’s to process the examples (relatively) quickly. My Ryzen 7 2700x took about one and a half hours for this part.
Your notebook should show the progress of the processing rather than the ‘HBox’ thing I have here. It’s an issue with uploading the notebook to Gist.
(Note: If you have any issues getting the multiprocessing to work, just copy paste all the code up to, and including, the multiprocessing into a python script and run it from the command line or an IDE. Jupyter Notebooks can sometimes get a little iffy with multiprocessing. I’ve included an example script on github named converter.py )
Once all the examples are converted into features, we can pickle them to disk for safekeeping (I, for one, do not want to run the processing for another one and a half hours). Next time, you can just unpickle the file to get the list of features.
Well, that was a lot of data preparation. You deserve a coffee, I’ll see you for the training part in a bit. (Unless you already had your coffee while the processing was going on. In which case, kudos to efficiency!)
5. Fine-tuning BERT (finally!)
Had your coffee? Raring to go? Let’s show BERT how it’s done! (Fine tune. Show how it’s done. Get it? I might be bad at puns.)
Not much left now, let’s hope for smooth sailing. (Or smooth.. cooking? I forgot my analogy somewhere along the way. Anyway, we now have all the ingredients in the pot, and all we have to do is turn on the stove and let thermodynamics work its magic.)
HuggingFace’s pytorch implementation of BERT comes with a function that automatically downloads the BERT model for us (have I mentioned I love these dudes?). I stopped my download since I have terrible internet, but it shouldn’t take long. It’s only about 400 MB in total for the base models. Just wait for the download to complete and you are good to go.
Don’t panic if you see the following output once the model is downloaded, I know it looks panic inducing but this is actually the expected behavior. The not initialized things are not meant to be initialized. Intentionally.
INFO:pytorch_pretrained_bert.modeling:Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
INFO:pytorch_pretrained_bert.modeling:Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
(Tip: The model will be downloaded into a temporary folder. Find the folder by following the path printed on the output once the download completes and copy the downloaded file to the cache/ directory. The file should be a compressed file in .tar.gz format. Next time, you can just use this downloaded file without having to download it all over again. All you need to do is comment out the line that downloaded the model, and uncomment the line below it.)
We just need to do a tiny bit more configuration for the training. Here, I’m just using the default parameters.
Setting up our DataLoader for training..
Training time!
Now we’ve trained the BERT model for one epoch, we can evaluate the results. Of course, more training will likely yield better results but even one epoch should be sufficient for proof of concept (hopefully!).
In order to be able to easily load our fine-tuned model, we should save it in a specific way, i.e. the same way the default BERT models are saved. Here is how you can do that.
Go into the outputs/yelp directory where the fine tuned models will be saved. There, you should find 3 files; config.json , pytorch_model.bin , vocab.txt .
directory where the fine tuned models will be saved. There, you should find 3 files; Archive the two files (I use 7zip for archiving) config.json, and pytorch_model.bin into a .tar file.
and into a file. Compress the .tar file into gzip format. Now the file should be something like yelp.tar.gz
file into format. Now the file should be something like Copy the compressed file into the cache/ directory.
We will load this fine tuned model in the next step.
6. Evaluation
Time to see what our fine-tuned model can do. (We’ve cooked the meal, let’s see how it tastes.)
(Note: I’m switching to the evaluation notebook)
Most of the code for the evaluation is very similar to the training process, so I won’t go into too much detail but I’ll list some important points.
BERT_MODEL parameter should be the name of your fine-tuned model. For example, yelp.tar.gz .
The tokenizer should be loaded from the vocabulary file created in the training stage. In my case, that would outputs/yelp/vocab.txt (or the path can be set as OUTPUT_DIR + vocab.txt )
(or the path can be set as ) This time, we’ll be using the BinaryClassificationProcessor to load in the dev.tsv file by calling the get_dev_examples method.
to load in the file by calling the method. Double check to make sure you are loading the fine-tuned model and not the original BERT model. 😅
Here’s my notebook for the evaluation.
With just one single epoch of training, our BERT model achieves a 0.914 Matthews correlation coefficient (Good measure for evaluating unbalanced datasets. Sklearn doc here). With more training, and perhaps some hyperparameter tuning, we can almost certainly improve upon what is already an impressive score.
7. Conclusion
BERT is an incredibly powerful language representation model that shows great promise in a wide variety of NLP tasks. Here, I’ve tried to give a basic guide to how you might use it for binary text classification.
As the results show, BERT is a very effective tool for binary text classification, not to mention all the other tasks it has already been used for.
Reminder: Github repo with all the code can be found here. | https://medium.com/swlh/a-simple-guide-on-using-bert-for-text-classification-bbf041ac8d04 | ['Thilina Rajapakse'] | 2020-04-17 07:44:55.104000+00:00 | ['NLP', 'Bert', 'Data Science', 'Pytorch', 'Artificial Intelligence'] |
We Were 27 Minutes Into the Zoom Call and I had to Pee | Of course the call started late. All Zoom calls do, I’ve learned that since we became a completely remote workforce. I also learned Zoom is a video conferencing tool, not just a conference calling tool. Funny story actually.
It was the first call I ever had on Zoom, a one-on-one with my manager, who 7 minutes in politely asked: “You know I can see you, right?” I was flossing my teeth shirtless. I had been seen.
Anyway, on the morning of this dilemma, I had been drinking water all morning. And by morning I mean for the past 45 minutes since waking up at 9:15 am. It was a 10 am call. I was binge drinking water because I read on one of those health blogs that water boosts immunity, and I’ll take all the immunity-boosting I can get right now.
Okay, I actually just read the headline, but sure drinking more water doesn't sound like something that will hurt me so I’m all in.
In the 45 minutes since I woke up, I had finished two liter-sized carafe's of lemon water and was feeling great. Spritely even. I quickly brushed my teeth and logged on to join the call. My bladder hadn’t filled up.
Yet.
27 minutes into a call is far past an appropriate time to interrupt the momentum and pause the conversation for a bathroom break because “I need to go pee quick.”
To be honest, I’m not even sure there ever is an appropriate time to say “I need to go pee quick” on a Zoom call.
27 minutes into an hour-long call, also means 33 more minutes of the call, which is far too long of a time to hold a now bursting bladder.
This, my friends, is what we calling being in a predicament.
And this, my friends, is when I stopped hearing anything being said on the call.
I mean yes my body was still in the chair, but I was hypnotized by this sensation.
The teapot! No, you cannot use the teapot. Okay, you’re right.
*7 seconds later
The teapot! Dammit, you can’t use the teapot! You just brewed a fresh pot of green tea and you’ll ruin it. Then you’ll go make coffee and didn’t we promise each other you were going to take this time and invest in overcoming your coffee caffeine dependency? Fuck! Okay, you’re right.
This was the conversation in my head during minute 28 of the call. Also known as minute one of my absence from the call.
During minute 29, I pulled it together and started piecing together a plan.
To my immediate left was an empty ceramic bowl, stained orange from the remnants of my breakfast, Papaya, presenting as an option for me to relieve myself into.
Oh, don’t shake your head at me. You’ve been here before, and you’ve had these same thoughts, you’re just too cowardly to admit it. Coward!
How could I pull this off?
I’d go on mute, slide my pants down to my knees with my right hand, slide the bowl to my knees with my left, and be done in less than a minute. I’d then sit the bowl on the ground out of sight until the end of the call.
Shit, I drank both of those liters of lemon water, and if my math is correct, which it probably isn’t, this bowl isn’t going to be big enough. Then what’ll I do?
The math checked out. A fair point.
Next!
There’s a printer within reach. I’ll print a photo of my face, tape it to my chair, slide down from my chair to the floor, crawl to the bathroom, pee, flush, no actually not flush because they’ll hear that, crawl back to my chair, and replace the photo.
Hmm, sounds too complicated. Why not just bring the computer with me, keep it on my face, pee sitting down, and then walk back to my desk?
Yes, I actually considered bringing my laptop with me to pee while keeping my video and sound on for a team Zoom call. This is unchartered territory people.
I know, I have aluminum foil!
I used to do this all the time to hang up from phone calls. I’ll tear a piece off, crinkle it near my mic, and end the call. Then, I’ll message into our slack channel “Internet down, brb”, go pee, and then call back in.
Kinda sounding like a genius right now if I’m being honest with you all.
But no, that wouldn’t work. That would still totally interrupt the call.
I was running out of ideas.
Despite congratulating myself for my in-the-moment creativity, which was now sparking consideration for a career change, maybe a lateral move into idea generation, I was still bursting and only 3 minutes had passed.
Folks, I kid you not. I was not going to make it.
Then, it happened.
I saw the light.
Like a Dr. Phil guest who has been to heaven, I SAW THE LIGHT!
I found my peace.
At this minute of the call, minute 32, after exhausting all sly options, I, as a 28-year-old, came to terms with the fact that I was simply going to pee my pants on this Zoom call. Now, I hadn’t peed my pants in over 24 years, but I also hadn’t worn exclusively pajamas for 3 weeks straight in over 24 years, so my rationale was checking out.
And once I came to terms with this reality.
Bliss.
Complete, warm, bliss.
Richie. Human | https://medium.com/playback-memoirs/we-were-27-minutes-into-the-zoom-call-and-i-had-to-pee-e0f0d1b3a44c | ['Richie Crowley'] | 2020-04-19 19:30:56.941000+00:00 | ['Tech', 'Awkward', 'Comedy', 'Coronavirus', 'Humor'] |
What A Different Christmas This Year! | What A Different Christmas This Year!
2020 was like no other
Photo by Ben White on Unsplash
Most people had small family gatherings or none at all
With the coronavirus pandemic still raging across the world, Christmas 2020 was very different for most people. With travel restrictions and social distancing recommended, people were not able to spend time with family members and friends as they normally would during the Christmas holidays.
People were advised to spend time with only those within their own households. Large gatherings and parties which are normal during the holidays were discouraged. Some people were unwilling to comply with the recommendations and held large parties. Either they exposed a lot of people to the dreaded COVID-19 virus or their parties were a failure because people were unwilling to attend because of possible exposure to the virus.
It was a different Christmas in many ways for people as travel was restricted so they were not able to spend the time with friends and family. Others did not heed the advice and got on planes or in cars to visit loved ones. Many people did travel, but most seemed to have taken precautions such as social distancing and the wearing of masks.
For people who do not have family and friends with whom to spend the holidays, this Christmas may not have been any different than what is normal for them. They may have spent the time alone as they do every holiday. They were in isolation, but it was no different for them.
Life was different this year, and Christmas was not the same for people who normally have loved ones with whom to share special occasions and holidays. They were not able to spend the time with the people who mean the most to them.
It is sad that many people do not have anyone with whom to share the holidays. For them, the coronavirus pandemic caused problems, but the holidays were no different. Homeless people likely did not see much difference in how Christmas was spent.
We should feel thankful and blessed if we have family and friends who care about us whether we were able to see them or not. | https://medium.com/illumination/what-a-different-christmas-this-year-e9b0e6cefa33 | ['Floyd Mori'] | 2020-12-26 23:25:56.188000+00:00 | ['Pandemic', 'Holidays', 'Coronavirus', 'Family', 'Christmas'] |
Awakenings: The romantic science of Oliver Sacks | Photo by Matt Hardy on Unsplash
After one week of treatment (and on a dose of 2 gm. L-DOPA daily), Mrs B. started talking -quite audibly for the first time in many years, although her vocal force would decay after two or three short sentences, and her new-found voice was low-pitched, monotonous, and uninflected…With raising of the dose to 3 gm. L-DOPA daily, Mrs B…now showed considerable spontaneous activity…She was much more alert, and had ceased to show any drowsiness or ‘dullness’ in the course of the day. Her voice had acquired further strength, and the beginnings of intonation and inflection: thus one could now realize that this patient had a strong Viennese accent, where a few days previously her voice had been monotonous in timbre, and, as it were, anonymously Parkinsonian. – Oliver Sacks in his book ‘Awakenings,’ pp. 69–70
The above excerpt comes from one of several case studies Oliver Sacks conducted as a young physician in the spring of 1969 at Mount Carmel, an institution housing post-encephalitic patients who contracted encephalitis lethargica or sleeping-sickness, only to suffer from “post-encephalitic parkinsonism” for decades, lasting until the end of life. Sleeping-sickness was an epidemic that lasted for a brief time after World-War One and “disappeared” in 1926, however it left affected individuals in a perpetual state of a severe kind of quasi-immobility, only able to make the subtlest of movements with great and concentrated effort.
The story of Dr. Sacks’ exciting time in 1969, when after administering the newly discovered ‘miracle drug’ L-DOPA he witnessed patients who had been thought of as ‘extinct volcanoes’ suddenly come alive with vibrance and vitality was dramatized in the 1990 film “Awakenings” starring Robert DeNiro and Robin Williams.
Oliver Sacks and Robin Williams on the set of the 1989 movie ‘Awakenings.’ Credit: Oliver Sacks
Although the story itself is fascinating, something I found intriguing was Sacks’ falling out with the medical community and the difficulty he had in publishing the dramatic descriptions of his patients’ awakenings which were however followed by a complex aftermath of “sometimes bizarre, and unpredictable states,” too unpredictable and complex to be simply considered as “side-effects.” Sacks emphasized:
“These could not, I indicated (in a letter to the Journal of the American Medical Association published in 1970), be seen as ‘side-effects,’ but had to be seen as integral parts of an evolving whole. Ordinary considerations and policies, I stressed, sooner or later ceased to work. There was a need for a deeper, more radical understanding” (p. xxxi).
He received ample backlash from the medical community, some saying he was against the ‘miracle drug’ L-DOPA by mentioning its ‘side-effects’ while some claiming he was just making it all up. He wrote up his year’s worth of research findings on his patients at Mount Carmel into a properly formatted medical article, complete with “statistics and figures and tables and graphs” and then submitted it to various medical journals: all rejected his paper, sometimes with “vehemently censorious, even violent, rejections, as if there were something intolerable in what I had written…”
“This confirmed my feeling that a deep nerve had been struck, that I had somehow elicited not just a medical, but a sort of epistemological, anxiety — and rage.”
Sacks would later publish the full account of his time at Mount Carmel a few years later in his 1973 book Awakenings, with several descriptively rich case-studies of over a dozen patients. And although it was well-received by the general public, according to Sacks it elicited a heavy silence — or as he put it, “a mutism” on the topic– from his colleagues in the medical field which left Sacks feeling alienated from the medical community. | https://medium.com/swlh/awakenings-the-romantic-science-of-oliver-sacks-54f160c32ae2 | ['Gavin Lamb'] | 2020-02-09 22:00:00.610000+00:00 | ['Rejection', 'Philosophy', 'Knowledge', 'Science', 'Perseverance'] |
How To Make Your Potential Clients Like You Without Having To Answer The Phone | Any salesman will tell you: the key to making a sale is to make the customer happy. Well, when you are an attorney, potential new clients that are calling you need to feel a sense of either happiness or hopefulness that you will be able to help them. Here’s how you can make your potential clients like you before you even get the chance to speak one on one with them.
Always Have a Live Person Answer the Phone:
When potentially new clients are calling you, they aren’t calling with the intention of leaving a voicemail. In fact, recent studies have shown that voicemail retrieval is down, and the amount of voicemail being left is down as well. People do not trust that these recorded messages will be heard, and therefore will hang up on move on in their search for help. When you have a live receptionist inside your office or a virtual receptionist at your answering service answer the call, it exerts professionalism, and allows the caller to experience the intake, which will let him/her know you will be calling them back to discuss his/her case. Even better, answer your own phone when you can!
Make a Lasting Impression:
Having a live representative of your firm answer your phone creates the right first impression. This way, your callers know you are dedicated enough to your practice to make sure that there cry for help did not fall on deaf ears. If you simply could not answer the phone because you weren’t available to speak, but you can send the potential client an email or a text, do it as soon as possible. The quick follow up will let callers know that you are always reachable and that you handle matters swiftly.
Establish Your Web Presence:
Your website has to look new. Think of your web presence as your physical appearance. If a potential client came to hire you for your legal services and you were dressed in sweatpants and a stained t-shirt, they will sprint in the opposite direction. If your website is outdated, looks hard to navigate, IS hard to navigate, isn’t mobile friendly, or doesn’t provide helpful information, you will get the same result as the sweatpants example. Make sure your website is easy to navigate, mobile friendly, looks brand new, and has helpful information so that your potential clients are impressed when they visit your website!
Genuinely Do Some Good Work:
This part is up to you. You don’t have to give away millions of dollars, but taking one pro-bono case and getting it in the local news won’t break your bank. Help someone out and do a nice thing, and people will see you as the local hard-working and genuine person that you are! People saying nice things about you will probably make you feel all warm and fuzzy, too.
Happy Hunting!
answeringlegal.com
P: 631–686–9700
facebook.com/answeringlegalinc
twitter.com/answeringlegal
linkedin: Answering Legal, Inc.
instagram: @answeringlegal
Nick Werker
Nick is the Head Content Strategist, Social Media Manager, PPC Campaign Manager, and the Executive Head Writer at Ring Savvy/Answering Legal, Inc. Nick’s talent for story-telling enables him to write and recognize the type of content users want to read. | https://medium.com/answering-legal-inc/how-to-make-your-potential-clients-like-you-without-having-to-answer-the-phone-f7e2c9ae0a08 | ['Frank Cordeira Jr.'] | 2016-08-10 14:34:30.288000+00:00 | ['Call Center', 'Leads', 'Design', 'Virtual Assistant', 'Freelancing'] |
Predicting Returns with Fundamental Data and Machine Learning in Python | Feature Selection and Modeling
This section will be split into our two major modeling tasks: asset price modeling and modeling of returns. For each, we will perform feature engineering, then compare the efficacy of imputation techniques, train machine learning algorithms, and then interpret our modeling results.
Modeling Asset Price
The purpose of this task is to model the prices of the securities using the fundamental data related to the companies, in order to perform a pseudo-fundamental analysis of intrinsic value and develop a value investing-style trading strategy based on the residuals of our model. The residuals represent margin of safety, or in other words, how over- or undervalued a given security is compared to what the model estimates its value to be. The hypothesis is that the residuals will be correlated with the returns of the securities over the six-month period since the data were scraped: overvalued stocks would see their prices move down as the market adjusts toward their true value, and undervalued stocks would move up.
In financial analysis, intrinsic value is at least partially subjective, as different analysts will arrive at different estimates through construction of their own individual proprietary pricing models, meaning that in this experiment we are actually trying to model a hidden variable using the current trading price. The logic behind why this may work is that markets are at least partially efficient, meaning that the current prices of assets reliably reflect their value, with some error and noise present. So, while we are using market price as a target variable, the hope is that the model we build finds the way that the features contribute to prices across the market, and that our residuals will reflect deviations from actual (intrinsic) value.
Feature Selection:
As discussed above, using features which contain information about the current price of the securities will cause target leakage, and undermine our goal of estimating intrinsic value. For example, price to earnings ratio (P/E) is calculated as the price of a security divided by its earnings per share (EPS). Since we have EPS and P/E present in our features, the model could easily factor out the current price of the security, and too accurately reflect the current trading prices, rather than the intrinsic value. Therefore, we must be careful to remove or modify all features which would allow such target leakage.
The first step is to remove all features which are directly related to price, such as anything to do with periodic high/low, percent above/below these marks, and any open/close/last/bid/ask related features. The price ratios can have the prices factored out by dividing the current price of the securities by the ratio itself, leaving only the earnings/book/sales/cash flow behind, which represent fundamental information about the company. We will also remove any columns with highly incomplete data, and columns which are datetime, since none of these are useful in this task. Annual Dividend % and Annual Dividend Yield represent the same thing, and the latter is missing more values, so it will be dropped.
<class 'pandas.core.frame.DataFrame'>
Index: 501 entries, A to ZTS
Data columns (total 65 columns):
% Held by Institutions 495 non-null float64
5yr Avg Return 501 non-null float64
Annual Dividend % 394 non-null float64
Beta 488 non-null float64
Change in Debt/Total Capital Quarter over Qua... 473 non-null float64
Days to Cover 495 non-null float64
Dividend Change % 405 non-null float64
Dividend Growth 5yr 354 non-null float64
Dividend Growth Rate, 3 Years 392 non-null float64
EPS (TTM, GAAP) 490 non-null float64
EPS Growth (MRQ) 487 non-null float64
EPS Growth (TTM) 489 non-null float64
EPS Growth 5yr 438 non-null float64
FCF Growth 5yr 485 non-null float64
Float 495 non-null float64
Gross Profit Margin (TTM) 422 non-null float64
Growth 1yr Consensus Est 475 non-null float64
Growth 1yr High Est 475 non-null float64
Growth 1yr Low Est 475 non-null float64
Growth 2yr Consensus Est 497 non-null float64
Growth 2yr High Est 497 non-null float64
Growth 2yr Low Est 497 non-null float64
Growth 3yr Historic 497 non-null float64
Growth 5yr Actual/Est 497 non-null float64
Growth 5yr Consensus Est 497 non-null float64
Growth 5yr High Est 497 non-null float64
Growth 5yr Low Est 497 non-null float64
Growth Analysts 497 non-null float64
Historical Volatility 501 non-null float64
Institutions Holding Shares 495 non-null float64
Interest Coverage (MRQ) 389 non-null float64
Market Cap 501 non-null object
Market Edge Opinion: 485 non-null object
Net Profit Margin (TTM) 489 non-null float64
Operating Profit Margin (TTM) 489 non-null float64
P/E Ratio (TTM, GAAP) 451 non-null object
PEG Ratio (TTM, GAAP) 329 non-null float64
Price/Book (MRQ) 455 non-null float64
Price/Cash Flow (TTM) 468 non-null float64
Price/Earnings (TTM) 448 non-null float64
Price/Earnings (TTM, GAAP) 448 non-null float64
Price/Sales (TTM) 490 non-null float64
Quick Ratio (MRQ) 323 non-null float64
Return On Assets (TTM) 479 non-null float64
Return On Equity (TTM) 453 non-null float64
Return On Investment (TTM) 437 non-null float64
Revenue Growth (MRQ) 487 non-null float64
Revenue Growth (TTM) 489 non-null float64
Revenue Growth 5yr 490 non-null float64
Revenue Per Employee (TTM) 459 non-null float64
Shares Outstanding 495 non-null float64
Short Int Current Month 495 non-null float64
Short Int Pct of Float 495 non-null float64
Short Int Prev Month 495 non-null float64
Short Interest 495 non-null float64
Total Debt/Total Capital (MRQ) 460 non-null float64
Volume 10-day Avg 500 non-null float64
cfra 479 non-null float64
creditSuisse 337 non-null object
ford 493 non-null float64
marketEdge 484 non-null float64
marketEdge opinion 484 non-null object
newConstructs 494 non-null float64
researchTeam 495 non-null object
theStreet 496 non-null object
dtypes: float64(58), object(7)
memory usage: 258.3+ KB
Things are looking cleaner. The next step for this task is to get rid of the analyst ratings, since these are not really fundamental. Another issue to deal with is that we have duplicates of the P/E ratio. The column named ‘P/E Ratio (TTM, GAAP)’ is of object data type, and the same information appears below in ‘Price/Earnings (TTM, GAAP)’ with a numeric dtype. We also have ‘Price/Earnings (TTM)’, which incorporates what are called non-GAAP earnings; these purposefully leave out any large nonrecurring expenses the company has had recently that might obfuscate financial analysis, and they are considered preferable for purposes such as ours. Thus, we will drop the GAAP earnings columns and keep the non-GAAP earnings. The PEG ratio is the P/E ratio divided by the annual EPS growth rate, both of which we already have, so it can be dropped.
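The embedded gist for this step was not carried over here; a minimal sketch of the drops described above might look like the following (the working DataFrame name `features` is my assumption):

```python
# drop the analyst ratings and the redundant GAAP/PEG valuation columns
analyst_cols = ['cfra', 'creditSuisse', 'ford', 'marketEdge', 'marketEdge opinion',
                'Market Edge Opinion:', 'newConstructs', 'researchTeam', 'theStreet']
gaap_cols = ['P/E Ratio (TTM, GAAP)', 'Price/Earnings (TTM, GAAP)', 'PEG Ratio (TTM, GAAP)']

features = features.drop(columns=analyst_cols + gaap_cols)
```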
Things are getting cleaner, but we have more considerations to make. The Market Cap column is still encoded in string format, but it also contains information about the prices, and will cause target leakage. Market capitalization is calculated as the number of outstanding shares times the current price of the security, and since we have a Shares Outstanding column, we need to get rid of Market Cap altogether. Also, we have not checked our dataset for nan’s evil stepbrother: inf. Let’s drop Market Cap and perform this check.
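A small sketch of the Market Cap drop and the inf check, again assuming the frame is called `features`:

```python
import numpy as np

features = features.drop(columns=['Market Cap'])

# are any infinite values hiding among the numeric columns?
np.isinf(features.select_dtypes(include=[np.number])).values.any()
```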
True
We have indeed located some culprits. Not to worry, since we were already planning to deal with missing data, we will just re-encode these as nans to be dealt with later.
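Re-encoding the infs as nans is a one-liner:

```python
features = features.replace([np.inf, -np.inf], np.nan)
```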
And it is done. We need to check now to make sure that there are no companies in our feature set that aren’t present in our target data. We can perform a check with a list comprehension as follows:
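Something along these lines, assuming the six-month returns computed earlier live in a Series called `log_returns` indexed by ticker:

```python
# tickers in the feature set that have no corresponding return data
[ticker for ticker in features.index if ticker not in log_returns.index]
```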
['AGN', 'ETFC']
These are familiar faces, they are the companies which have been acquired by others over the period of study. We need to drop them from our feature set.
Finally, we need to remove the pricing information from the pricing ratios by dividing the current price by each ratio. This can be done like so:
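Here is a sketch of both steps, assuming the raw scraped frame `df` is still available and that its 'Closing Price' column serves as the current price (the original notebook may use a different price column):

```python
# drop the two acquired companies that have no return data
features = features.drop(index=['AGN', 'ETFC'])

# factor the price out of each ratio, e.g. Price / (Price/Earnings) = Earnings per share
price = df.loc[features.index, 'Closing Price']

ratio_map = {'Price/Book (MRQ)': 'Book (MRQ)',
             'Price/Cash Flow (TTM)': 'Cash Flow (TTM)',
             'Price/Earnings (TTM)': 'Earnings (TTM)',
             'Price/Sales (TTM)': 'Sales (TTM)'}

for ratio_col, new_col in ratio_map.items():
    features[new_col] = price / features[ratio_col]

features = features.drop(columns=list(ratio_map))
features.info()
```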
<class 'pandas.core.frame.DataFrame'>
Index: 499 entries, A to ZTS
Data columns (total 52 columns):
% Held by Institutions 493 non-null float64
5yr Avg Return 499 non-null float64
Annual Dividend % 392 non-null float64
Beta 486 non-null float64
Change in Debt/Total Capital Quarter over Qua... 471 non-null float64
Days to Cover 493 non-null float64
Dividend Change % 403 non-null float64
Dividend Growth 5yr 354 non-null float64
Dividend Growth Rate, 3 Years 392 non-null float64
EPS (TTM, GAAP) 488 non-null float64
EPS Growth (MRQ) 485 non-null float64
EPS Growth (TTM) 487 non-null float64
EPS Growth 5yr 437 non-null float64
FCF Growth 5yr 483 non-null float64
Float 493 non-null float64
Gross Profit Margin (TTM) 420 non-null float64
Growth 1yr Consensus Est 473 non-null float64
Growth 1yr High Est 473 non-null float64
Growth 1yr Low Est 473 non-null float64
Growth 2yr Consensus Est 494 non-null float64
Growth 2yr High Est 495 non-null float64
Growth 2yr Low Est 494 non-null float64
Growth 3yr Historic 495 non-null float64
Growth 5yr Actual/Est 495 non-null float64
Growth 5yr Consensus Est 494 non-null float64
Growth 5yr High Est 495 non-null float64
Growth 5yr Low Est 494 non-null float64
Growth Analysts 495 non-null float64
Historical Volatility 499 non-null float64
Institutions Holding Shares 493 non-null float64
Interest Coverage (MRQ) 388 non-null float64
Net Profit Margin (TTM) 487 non-null float64
Operating Profit Margin (TTM) 487 non-null float64
Quick Ratio (MRQ) 322 non-null float64
Return On Assets (TTM) 477 non-null float64
Return On Equity (TTM) 451 non-null float64
Return On Investment (TTM) 435 non-null float64
Revenue Growth (MRQ) 485 non-null float64
Revenue Growth (TTM) 487 non-null float64
Revenue Growth 5yr 488 non-null float64
Revenue Per Employee (TTM) 457 non-null float64
Shares Outstanding 493 non-null float64
Short Int Current Month 493 non-null float64
Short Int Pct of Float 493 non-null float64
Short Int Prev Month 493 non-null float64
Short Interest 493 non-null float64
Total Debt/Total Capital (MRQ) 458 non-null float64
Volume 10-day Avg 498 non-null float64
Book (MRQ) 453 non-null float64
Cash Flow (TTM) 466 non-null float64
Earnings (TTM) 447 non-null float64
Sales (TTM) 488 non-null float64
dtypes: float64(52)
memory usage: 210.5+ KB
There we have it, a clean feature set with all continuous numeric variables. It should be noted at this point that multicollinearity is present among the features here, but since we will not be concerning ourselves with feature importances in this task of modeling current price, we do not need to address it, since the accuracy of the model will not be negatively impacted. We will, however, be dealing with multicollinearity in the next task of modeling returns, where feature importances will be examined.
Imputing Missing Data:
Scikit-learn offers a variety of effective imputation methods through its SimpleImputer, KNNImputer, and IterativeImputer classes. The SimpleImputer can use several strategies to fill nans: the mean, median, mode, or a constant value. The KNNImputer uses K Nearest Neighbors modeling to estimate the missing values, using the other data columns as predictors. The IterativeImputer is experimental, and allows any estimator to be passed into it, which it uses to estimate missing values in a round-robin fashion. It is highly recommended that the reader check out the documentation for these classes in the previous links.
The best choice between these options depends on the data, task, and algorithm at hand, which is why it is generally best practice to train a model instance with each imputation method and compare the results. Below, I will establish some helper functions, modified from the code found in the scikit-learn documentation links above, that will help us compare these imputation methods in the context of our problem. Note that the IterativeImputer requires importing a special item called ‘enable_iterative_imputer’ in order to work. Let's import what we need and make our functions, which will generate cross validation scores for each of our imputation methods and give us results we can use to compare them.
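The original helper functions are embedded as a gist in the article; a condensed sketch in the same spirit might look like this (the function names, cv settings, and scoring choice here are my assumptions):

```python
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

N_SPLITS = 5

def get_scores_for_imputer(imputer, regressor, X, y):
    """Cross-validate an impute -> scale -> regress pipeline and return the R^2 scores."""
    pipeline = make_pipeline(imputer, StandardScaler(), regressor)
    return cross_val_score(pipeline, X, y, scoring='r2', cv=N_SPLITS)

def get_simple_scores(regressor, X, y, strategy='mean'):
    return get_scores_for_imputer(SimpleImputer(strategy=strategy), regressor, X, y)

def get_knn_scores(regressor, X, y):
    return get_scores_for_imputer(KNNImputer(), regressor, X, y)

def get_iterative_scores(regressor, X, y, estimator=None):
    imputer = IterativeImputer(estimator=estimator, max_iter=10, random_state=42)
    return get_scores_for_imputer(imputer, regressor, X, y)
```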
Notice that scaling is built into this process. It is now helpful to combine all of the above functions into a wrapper function that will make testing and graphing the scores of all the imputation methods for various regressors and tasks easy without having to repeat any code.
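A possible shape for that wrapper, continuing from the helpers sketched above (the exact strategies tested and the plotting details are assumptions on my part):

```python
import matplotlib.pyplot as plt

def compare_imputer_scores(regressor, X, y, estimators):
    """Score every imputation strategy for one regressor and plot the results."""
    labels, scores = [], []

    for strategy in ('constant', 'mean', 'median'):
        labels.append(f'SimpleImputer ({strategy})')
        scores.append(get_simple_scores(regressor, X, y, strategy=strategy))

    labels.append('KNNImputer')
    scores.append(get_knn_scores(regressor, X, y))

    labels.append('IterativeImputer (default)')
    scores.append(get_iterative_scores(regressor, X, y))

    # one IterativeImputer run per candidate estimator
    iter_labels = [type(est).__name__ for est in estimators]
    iter_scores = [get_iterative_scores(regressor, X, y, estimator=est)
                   for est in estimators]

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 10))
    ax1.barh(labels, [s.mean() for s in scores], xerr=[s.std() for s in scores])
    ax1.set_title('Cross-validated R^2 by imputation method')
    ax2.barh(iter_labels, [s.mean() for s in iter_scores],
             xerr=[s.std() for s in iter_scores])
    ax2.set_title('Cross-validated R^2 for IterativeImputer by estimator')
    return ax1, ax2
```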
Excellent, now we have a framework for testing imputation methods for any estimator that we choose, and for returning convenient axes objects to view the results. The notebooks in the repository compare many different regressors for the task of modeling current price, but to save space here, we will focus on the regressor which demonstrated the best performance for this task: scikit-learn's GradientBoostingRegressor. First, we instantiate an out-of-the-box regressor, and pass it into the wrapper function with our data to see how the imputation methods compare. Note that the compare_imputer_scores function takes in a list of estimators to be used with the IterativeImputer, which can have variable parameters, and we need to make this list before passing it into the function.
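In code, that setup might look something like this, where `features` is the cleaned feature frame, `current_price` is a target Series taken from the raw frame's 'Closing Price' column, and the estimator list is purely illustrative:

```python
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              ExtraTreesRegressor)
from sklearn.neighbors import KNeighborsRegressor

current_price = df.loc[features.index, 'Closing Price']

# candidate estimators for the IterativeImputer
estimators = [KNeighborsRegressor(n_neighbors=15),
              RandomForestRegressor(n_estimators=100, random_state=42),
              ExtraTreesRegressor(n_estimators=100, random_state=42)]

gbr = GradientBoostingRegressor(random_state=42)
ax1, ax2 = compare_imputer_scores(gbr, features, current_price, estimators)
```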
The top chart represents the scores from the SimpleImputer, KNNImputer, and out-of-the-box IterativeImputer. The bottom chart compares the performance of the IterativeImputer used with each of the estimators in the estimators list. We can see some solid R squared scores all around, but the best performance is happening with the KNNImputer. It is always relieving to see a less computationally intense imputer (ie not the IterativeImputer) with the highest score before performing a grid search, because the IterativeImputer takes much longer, and when fitting thousands of models this does make a noticeable difference.
Modeling:
Now that we have a selection for the best regressor/imputer combo, we can do a grid search to find the optimal hyperparameters for our model, and then move on with our investigation of the residuals.
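A sketch of that grid search, with pipeline step names chosen to match the parameter keys in the output below; the grid values shown are illustrative rather than the full grid searched in the notebook:

```python
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([('imputer', KNNImputer()),
                 ('scaler', StandardScaler()),
                 ('regressor', GradientBoostingRegressor(random_state=42))])

param_grid = {'imputer__n_neighbors': [3, 5, 10],
              'regressor__n_estimators': [500, 1000, 2000],
              'regressor__learning_rate': [0.05, 0.1, 0.2],
              'regressor__max_depth': [2, 3, 4],
              'regressor__subsample': [0.7, 0.85, 1.0]}

grid = GridSearchCV(pipe, param_grid, scoring='r2', cv=5, n_jobs=-1)
grid.fit(features, current_price)

print('Best Model:')
print('r-squared:', grid.best_score_)
print(grid.best_params_)
```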
Awesome, after a while we have our best tuned model. Let’s see the optimal parameters and the predictive accuracy they produced.
Best Model:
r-squared: 0.8738820298532683
{'imputer__n_neighbors': 5,
'regressor__learning_rate': 0.1,
'regressor__max_depth': 2,
'regressor__n_estimators': 1000,
'regressor__subsample': 0.7}
Nice, we can see that the model has a solid R squared score of .87, using 1000 weak learners with a max depth of only 2 levels per tree, subsampling 70% of the data with a learning rate of 0.1.
Now that we have an asset pricing model trained with our data, we can see how the actual current prices deviate from the model’s predictions in the residuals, then look to see if these residuals are correlated with the returns since the date of the scrape. As a reminder, the hope is that the higher above the model estimate an asset price is, the more it can be expected to move downwards, or the lower below the model estimate an asset price is, the more it would be expected to move upward toward the estimated value over time. Since residuals are calculated as actual minus predicted, the residuals should be negatively correlated with the returns, if our hypothesis is correct. Let’s first generate the residuals, and take a look at them.
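Generating the residuals from the tuned pipeline might look like this (only a sketch; the histogram that originally followed in the article is reproduced here as a plot call):

```python
import pandas as pd

best_model = grid.best_estimator_          # refit on all of the data by GridSearchCV
predicted = best_model.predict(features)

# residuals = actual minus predicted
residuals = pd.Series(current_price.values - predicted,
                      index=features.index, name='residual')

residuals.hist(bins=50)
plt.title('Residuals of the asset pricing model')
plt.show()
```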
We can see that the residuals have a thin, fairly symmetrical distribution around zero, with some big outliers. Let's check whether there is any linear relationship between the residuals and the returns. I will be removing outliers past three standard deviations and fitting a linear regression model to do this. To see this coded out, refer to the repository; it is just a simple linear regression of the log returns using the residuals as the independent variable.
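For orientation, a rough sketch of that regression might look like the following, assuming the six-month log returns are in a Series called `log_returns` (the repository version is the authoritative one):

```python
import statsmodels.api as sm

# keep residuals within three standard deviations of their mean
mask = (residuals - residuals.mean()).abs() <= 3 * residuals.std()

X = sm.add_constant(residuals[mask])
y = log_returns.loc[residuals[mask].index]

ols = sm.OLS(y, X).fit()
print(ols.summary())
```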
This linear regression, while showing a slight negative slope, offers an R squared of .004, and a p-value of .186 for the coefficient relating the residuals to the returns, so this is not indicating a powerful linear relationship. Thus, the hypothesis that the residuals of our asset pricing model would be correlated to the returns over the six month period since the scrape is not supported. Disappointing though this may be, this is the nature of science. Investigating this on longer time periods would likely be appropriate, since this hypothesis was derived from a value-investing perspective, and value investors typically plan to hold assets for much longer time periods, generally over a year at the very least. Incidentally, for shorter time periods such as the one used in this study, one could argue that overvalued or undervalued securities would be experiencing price momentum, attracting traders to trade with the current trend, thereby feeding it, and thus these securities may be likely to move more in the same direction in the short term before the market corrects itself toward their underlying value. Additionally, the residuals used above have a very tight distribution, because the regression was trained using all of the samples for which residuals were calculated. It may be worthwhile to repeat this process on a holdout set and compare the results.
Modeling Returns
Now we move on to our second task: modeling the returns over the six-month period since the date of the scrape. This can be done a number of ways; here we will do three. The first will be to regress the continuous values of the returns since the scrape using the data as predictors, the second will be to create a binary classifier to predict gainers/losers (returns above versus less than or equal to zero), and the third will be to classify stocks which over- or underperformed the market (returns above versus less than or equal to the average return of the index). The reason for the two different classification tasks, as mentioned above, is that a bull (rising) market will cause all stocks within an index to move upward on average, just as a bear (falling) market will cause them all to move downward on average. By subtracting the mean of returns from the returns for this particular time period, the model may generalize better to a different time period. Unfortunately, we do not have data for another time period, so we will not be able to test that theory, nor will we be able to evaluate how a model trained on one time period predicts the returns of another. We will be able to test the model on holdout sets of securities, albeit in the same time period as the training. Despite this unfortunate detail, a model using our features which can successfully predict winners and losers over any time period is an indication that the features have predictive power, and that further study on different time periods is merited. We will also be able to observe the feature importances of the models, which may provide insight into which of the features contribute the most in predicting returns.
Let’s keep our target variables in a convenient data frame:
Feature Selection:
Since we are not predicting the prices of securities this time, having features which contain information about the price is no longer an issue, which means we can include features that we left out last time. However, since we want to get an accurate look at feature importances, we are going to need to deal with multicollinearity among the features to make that possible. Sometimes a model can get more accuracy by leaving multicollinearity alone, so it is generally best to train a separate model each way and compare them, using the most accurate for predictions, and using the one trained with multicollinearity managed for analysis. In this case, it was found that the classification models were even more accurate after multicollinearity was managed, but that the (already disappointing) performance of the regression models suffered as a result of removing multicollinearity. In order to save space here, I will briefly touch on the results of regression, which were not very impressive, and then move on to removing multicollinearity and performing classification.
First thing to do for any of the upcoming tasks is to drop features we will definitely not be using, and see what next steps should be taken to clean the data. We are starting again from the totally raw data frame.
<class 'pandas.core.frame.DataFrame'>
Index: 501 entries, A to ZTS
Data columns (total 83 columns):
% Held by Institutions 495 non-null float64
52-Wk Range 501 non-null object
5yr Avg Return 501 non-null float64
5yr High 501 non-null float64
5yr Low 501 non-null float64
Annual Dividend % 394 non-null float64
Ask 494 non-null float64
Ask Size 492 non-null float64
B/A Ratio 492 non-null float64
B/A Size 501 non-null object
Beta 488 non-null float64
Bid 499 non-null float64
Bid Size 492 non-null float64
Change in Debt/Total Capital Quarter over Qua... 473 non-null float64
Closing Price 500 non-null float64
Day Change % 501 non-null float64
Day High 501 non-null float64
Day Low 501 non-null float64
Days to Cover 495 non-null float64
Dividend Change % 405 non-null float64
Dividend Growth 5yr 354 non-null float64
Dividend Growth Rate, 3 Years 392 non-null float64
EPS (TTM, GAAP) 490 non-null float64
EPS Growth (MRQ) 487 non-null float64
EPS Growth (TTM) 489 non-null float64
EPS Growth 5yr 438 non-null float64
FCF Growth 5yr 485 non-null float64
Float 495 non-null float64
Gross Profit Margin (TTM) 422 non-null float64
Growth 1yr Consensus Est 475 non-null float64
Growth 1yr High Est 475 non-null float64
Growth 1yr Low Est 475 non-null float64
Growth 2yr Consensus Est 497 non-null float64
Growth 2yr High Est 497 non-null float64
Growth 2yr Low Est 497 non-null float64
Growth 3yr Historic 497 non-null float64
Growth 5yr Actual/Est 497 non-null float64
Growth 5yr Consensus Est 497 non-null float64
Growth 5yr High Est 497 non-null float64
Growth 5yr Low Est 497 non-null float64
Growth Analysts 497 non-null float64
Historical Volatility 501 non-null float64
Institutions Holding Shares 495 non-null float64
Interest Coverage (MRQ) 389 non-null float64
Last (size) 501 non-null float64
Market Cap 501 non-null object
Market Edge Opinion: 485 non-null object
Net Profit Margin (TTM) 489 non-null float64
Operating Profit Margin (TTM) 489 non-null float64
P/E Ratio (TTM, GAAP) 451 non-null object
PEG Ratio (TTM, GAAP) 329 non-null float64
Prev Close 501 non-null float64
Price/Book (MRQ) 455 non-null float64
Price/Cash Flow (TTM) 468 non-null float64
Price/Earnings (TTM) 448 non-null float64
Price/Earnings (TTM, GAAP) 448 non-null float64
Price/Sales (TTM) 490 non-null float64
Quick Ratio (MRQ) 323 non-null float64
Return On Assets (TTM) 479 non-null float64
Return On Equity (TTM) 453 non-null float64
Return On Investment (TTM) 437 non-null float64
Revenue Growth (MRQ) 487 non-null float64
Revenue Growth (TTM) 489 non-null float64
Revenue Growth 5yr 490 non-null float64
Revenue Per Employee (TTM) 459 non-null float64
Shares Outstanding 495 non-null float64
Short Int Current Month 495 non-null float64
Short Int Pct of Float 495 non-null float64
Short Int Prev Month 495 non-null float64
Short Interest 495 non-null float64
Today's Open 501 non-null float64
Total Debt/Total Capital (MRQ) 460 non-null float64
Volume 500 non-null float64
Volume 10-day Avg 500 non-null float64
Volume Past Day 501 non-null object
cfra 479 non-null float64
creditSuisse 337 non-null object
ford 493 non-null float64
marketEdge 484 non-null float64
marketEdge opinion 484 non-null object
newConstructs 494 non-null float64
researchTeam 495 non-null object
theStreet 496 non-null object
dtypes: float64(73), object(10)
memory usage: 348.8+ KB
B/A Size is encoded as strings, but we have numeric features for Bid Size and Ask Size, so we do not need the B/A Size feature at all, and it can be dropped. There is also the duplicate P/E Ratio (TTM, GAAP) feature in string format that we found earlier that can be dropped again. There are two columns, Market Edge Opinion and marketEdge opinion which are the same, and since there is a numeric counterpart to this analyst rating in the marketEdge column, we can drop both of the market edge opinion columns. The Volume Past Day feature is a categorical variable in string format that tells us whether the previous trading day had light, below average, average, above average, or heavy volume compared to a typical trading day for the security. This feature could be one hot encoded, but since there is a Volume 10-day Avg feature which is numeric and more descriptive of recent volume trend, we will just drop Volume Past Day. That leaves us with five features with an object dtype to manage before we begin the imputation step: 52-Wk Range, Market Cap, creditSuisse, researchTeam, and theStreet. Let’s drop the unneeded columns, then take a closer look at these 5 columns.
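In code, assuming the 83-column frame shown above is called `features`:

```python
features = features.drop(columns=['B/A Size', 'P/E Ratio (TTM, GAAP)',
                                  'Market Edge Opinion:', 'marketEdge opinion',
                                  'Volume Past Day'])

# the five object-dtype columns that still need attention
features.select_dtypes(include='object').head()
```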
It looks as though we can modify the strings in 52-Wk Range to give us two numeric columns: 52-Wk Low and 52-Wk High. Then, we can move on to converting the Market Cap column, but we will need to know all of the unique letters that it contains, which represent some order of magnitude for the number they are associated with. Let’s treat 52-Wk Range, and check our result.
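A sketch of that split, assuming the range strings take the form '<low> - <high>' (hence the separator used here):

```python
# split '52-Wk Range' into numeric low and high columns
wk_range = features['52-Wk Range'].str.split(' - ', expand=True)
features['52-Wk Low'] = wk_range[0].str.replace(',', '').astype(float)
features['52-Wk High'] = wk_range[1].str.replace(',', '').astype(float)

features[['52-Wk Range', '52-Wk Low', '52-Wk High']].head()
```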
Alright, this has worked. Now we need to determine what letter suffixes the numbers in the Market Cap column have, so we can build a function to convert this column into a numeric dtype. We can also drop the old 52-Wk Range column.
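One way to do both:

```python
features = features.drop(columns=['52-Wk Range'])

# which order-of-magnitude letters does Market Cap use?
features['Market Cap'].str[-1].unique()
```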
array(['B', 'T', 'M'], dtype=object)
Here we can see that the column has B, T, and M suffixes expressing amounts in billions, trillions, and millions, respectively. Knowing this, we can make a function to treat this column.
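A small conversion function along these lines should do the trick (written here as an illustration, not taken from the notebook):

```python
def market_cap_to_float(cap_string):
    """Convert strings like '12.3B' into floats using M/B/T multipliers."""
    multipliers = {'M': 1e6, 'B': 1e9, 'T': 1e12}
    return float(cap_string[:-1].replace(',', '')) * multipliers[cap_string[-1]]

features['marketCap'] = features['Market Cap'].map(market_cap_to_float)
```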
Excellent, another success story. The old column can be dropped. Now we need to check if there are any companies in our feature set that aren’t present in the log_returns. Recalling from earlier, we should expect to find AGN and ETFC in this list.
['AGN', 'ETFC']
Just as suspected. We need to drop these from the index of our feature set. Since we also know from earlier that we have infs that need to be re-encoded as nans, we will do that in this step as well.
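All of these chores, plus dropping the old string Market Cap column flagged earlier, fit in a few lines:

```python
features = features.drop(index=['AGN', 'ETFC'])      # acquired companies with no return data
features = features.drop(columns=['Market Cap'])     # replaced by the numeric marketCap column
features = features.replace([np.inf, -np.inf], np.nan)
```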
Now we have just one more chore before we can move on to dealing with multicollinearity: we need to one hot encode the categorical analyst ratings that were encoded as strings. Since we have fixed all of the other columns, we can simply call the pandas get_dummies method, and make sure that we set drop_first to True, in order to avoid the dummy variable trap.
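Which might look like:

```python
features = pd.get_dummies(features, drop_first=True)
features.info()
```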
<class 'pandas.core.frame.DataFrame'>
Index: 499 entries, A to ZTS
Data columns (total 82 columns):
% Held by Institutions 493 non-null float64
5yr Avg Return 499 non-null float64
5yr High 499 non-null float64
5yr Low 499 non-null float64
Annual Dividend % 392 non-null float64
Ask 492 non-null float64
Ask Size 490 non-null float64
B/A Ratio 490 non-null float64
Beta 486 non-null float64
Bid 497 non-null float64
Bid Size 490 non-null float64
Change in Debt/Total Capital Quarter over Qua... 471 non-null float64
Closing Price 498 non-null float64
Day Change % 499 non-null float64
Day High 499 non-null float64
Day Low 499 non-null float64
Days to Cover 493 non-null float64
Dividend Change % 403 non-null float64
Dividend Growth 5yr 354 non-null float64
Dividend Growth Rate, 3 Years 392 non-null float64
EPS (TTM, GAAP) 488 non-null float64
EPS Growth (MRQ) 485 non-null float64
EPS Growth (TTM) 487 non-null float64
EPS Growth 5yr 437 non-null float64
FCF Growth 5yr 483 non-null float64
Float 493 non-null float64
Gross Profit Margin (TTM) 420 non-null float64
Growth 1yr Consensus Est 473 non-null float64
Growth 1yr High Est 473 non-null float64
Growth 1yr Low Est 473 non-null float64
Growth 2yr Consensus Est 495 non-null float64
Growth 2yr High Est 495 non-null float64
Growth 2yr Low Est 495 non-null float64
Growth 3yr Historic 495 non-null float64
Growth 5yr Actual/Est 495 non-null float64
Growth 5yr Consensus Est 495 non-null float64
Growth 5yr High Est 495 non-null float64
Growth 5yr Low Est 495 non-null float64
Growth Analysts 495 non-null float64
Historical Volatility 499 non-null float64
Institutions Holding Shares 493 non-null float64
Interest Coverage (MRQ) 388 non-null float64
Last (size) 499 non-null float64
Net Profit Margin (TTM) 487 non-null float64
Operating Profit Margin (TTM) 487 non-null float64
PEG Ratio (TTM, GAAP) 329 non-null float64
Prev Close 499 non-null float64
Price/Book (MRQ) 453 non-null float64
Price/Cash Flow (TTM) 466 non-null float64
Price/Earnings (TTM) 447 non-null float64
Price/Earnings (TTM, GAAP) 447 non-null float64
Price/Sales (TTM) 488 non-null float64
Quick Ratio (MRQ) 322 non-null float64
Return On Assets (TTM) 477 non-null float64
Return On Equity (TTM) 451 non-null float64
Return On Investment (TTM) 435 non-null float64
Revenue Growth (MRQ) 485 non-null float64
Revenue Growth (TTM) 487 non-null float64
Revenue Growth 5yr 488 non-null float64
Revenue Per Employee (TTM) 457 non-null float64
Shares Outstanding 493 non-null float64
Short Int Current Month 493 non-null float64
Short Int Pct of Float 493 non-null float64
Short Int Prev Month 493 non-null float64
Short Interest 493 non-null float64
Today's Open 499 non-null float64
Total Debt/Total Capital (MRQ) 458 non-null float64
Volume 498 non-null float64
Volume 10-day Avg 498 non-null float64
cfra 478 non-null float64
ford 491 non-null float64
marketEdge 482 non-null float64
newConstructs 492 non-null float64
52-Wk Low 499 non-null float64
52-Wk High 499 non-null float64
marketCap 499 non-null float64
creditSuisse_outperform 499 non-null uint8
creditSuisse_underperform 499 non-null uint8
researchTeam_hold 499 non-null uint8
researchTeam_reduce 499 non-null uint8
theStreet_hold 499 non-null uint8
theStreet_sell 499 non-null uint8
dtypes: float64(76), uint8(6)
memory usage: 303.1+ KB
Regression
Excellent, we have a clean data frame to perform regression with. The study found that the variety of regression algorithms from scikit-learn and xgboost that were tested showed consistently lackluster performance when regressing the returns since the date of the scrape on these features. The best performance was provided by the RandomForestRegressor from scikit-learn, with an average r-squared score of just below .21 after either zero or mean imputation, and a wide variance of scores. Below we can see these results.
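A rough sketch of the kind of evaluation behind those results (the variable names, imputation strategy and model settings here are illustrative rather than the study's exact code):

from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

reg_pipe = Pipeline([('impute', SimpleImputer(strategy='mean')),
                     ('reg', RandomForestRegressor(n_estimators=100, random_state=42))])
# log_returns holds each ticker's return since the date of the scrape
scores = cross_val_score(reg_pipe, features, log_returns, scoring='r2', cv=5)
print(round(scores.mean(), 3), round(scores.std(), 3))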
These scores are not particularly impressive, but they are high enough to indicate that the model is effective on some level. R-squared tells us the proportion of the variance of the target which is explained by the model, and even though a large proportion of this variance is not being explained, a significant portion of it is, meaning there is some predictive power being provided by our features. Although the results from regression are somewhat disappointing, the upcoming classification tasks were quite fruitful. As mentioned above, the performance of classification was actually aided by the removal of multicollinearity, so from here I will demonstrate that removal first, then move on to the classification model building.
Removing Multicollinearity:
Since we want to get meaningful insights from the feature importances of the models that we train, we need to now deal with multicollinearity among the features. To start this process, we need to generate a heatmap to visualize the correlation matrix of all of the features. We can do this with a combination of the pandas .corr method and seaborn’s heatmap.
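Something along these lines (figure size and color map are arbitrary choices):

import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(20, 16))
sns.heatmap(features.corr(), cmap='coolwarm', center=0)
plt.show()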
The first thing to notice is that Annual Dividend % has a negative correlation with many price based features, especially with 5yr Avg Return. We can look at a scatter plot of this relationship to get a more detailed understanding.
We can see a strong negative correlation here. It can also be noticed that some of these annual dividends are quite large, all the way up to 17.5% of the price per share. Whenever a dividend is paid out to shareholders, it is subtracted from the price of the share on the date of payment. Thus, if an annual dividend is 5%, then the stock will need to have gained 5% that year in order to break even after all the dividends are paid out. This explains the negative correlation seen above: the higher the dividend payouts, the more the share price is reduced as a result, and the smaller the gains in price per share will be for the year. We will remove the Annual Dividend % feature.
Next, we can see that all of the price related features are highly correlated, which is surprising to no one. Although features like 5yr High, 5yr Low, Ask, Bid, Closing Price, Day High, Day Low, etc. are all expressing distinct things about price, they are so closely correlated that there will be no way to determine their individual effects on the model, and since we will be looking at feature importances, we need to cull our price-related features. Additionally, the growth estimates have some strong multicollinearity, so we will drop some of these as well.
Another strong correlation exists between Float and Shares Outstanding. These are very similar features. Float represents the number of shares available for public trade, and Shares Outstanding is the total number of shares that a company has outstanding. These are bound to be highly related, but we can see just how related with another scatter plot.
Above we can see what verifies our understanding of these features: a company has a certain number of shares outstanding, but some of those shares may not be available for the public to trade. Thus we see that Float is never above Shares Outstanding, and the relationship is mostly perfectly linear along the line of slope 1 (identity function). We can drop the Float feature in favor of Shares Outstanding.
Return on Assets and Return on Investment are very strongly correlated. As explained in this article, cross-industry comparison of Return on Assets may not be meaningful, and it is better to use Return on Investment in these cases, including the one we find ourselves in. Thus, we will drop Return on Assets.
Net Profit Margin is Operating Profit Margin minus taxes and interest, and therefore the two are highly correlated. Since the former contains more information, and they have the same number of missing values, we can drop Operating Profit Margin.
Volume is highly correlated with Volume 10-day Avg, and since the latter contains more information, we can drop the former. Historical Volatility has a strong correlation with Beta, which is a similar metric expressing volatility relative to the market. We will drop Historical Volatility in favor of Beta.
Price/Earnings (TTM) and Price/Earnings (TTM, GAAP) are highly correlated. As mentioned above, the non-GAAP metric is considered more useful in quantitative financial analysis, since it leaves out large non-recurrent costs that may have appeared on recent financial statements, so we will drop the feature calculated with GAAP earnings.
Dividend Growth Rate, 3 Years, and Dividend Growth 5yr are highly correlated, since they contain a lot of similar historical information. The 3 year growth rate should be informative enough to predict returns over a six month period, so we will drop Dividend Growth 5yr.
We can also see some collinearity occurring among the Short Interest related features. Let's view these features together to see what they look like.
We can see that the Short Int Pct of Float and Short Interest columns are the same, but with more resolution in the former, so we can drop the latter. Short Int Current Month and Short Int Prev Month are highly correlated with one another, and it stands to reason that the Current Month feature is more valuable looking forward, so we can drop Short Int Prev Month.
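Collecting the columns explicitly singled out above into one drop might look like this (the additional price and growth-estimate columns that were culled are not reproduced here):

redundant = ['Annual Dividend %', 'Float', 'Return On Assets (TTM)',
             'Operating Profit Margin (TTM)', 'Volume', 'Historical Volatility',
             'Price/Earnings (TTM, GAAP)', 'Dividend Growth 5yr',
             'Short Interest', 'Short Int Prev Month']
features = features.drop(columns=redundant)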
This is looking much better. Although there is still some correlation present between variables, we have dealt with most of the redundant features and multicollinearity. This is a good place to move on to our imputation and modeling phases.
Classification — Class 1: Gainers/Losers
Now we move on to our classification tasks, both of which have binary categorical targets. The first class will be gainers/losers, and the second will be over/under performers relative to the market.
Imputing Missing Data (Class 1: Gainers/Losers):
This study showed that the XGBClassifier from the xgboost package had superior predictive performance on this target, so we will investigate it here. For a full performance comparison of the various classifiers, see the notebooks.
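A sketch of how that comparison could be set up, assuming X_train and y_train hold the training features and the binary gainer/loser labels (the imputer choices mirror the ones discussed below):

from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBClassifier

imputers = {'simple (mean)': SimpleImputer(strategy='mean'),
            'iterative (tree)': IterativeImputer(estimator=DecisionTreeRegressor()),
            'iterative (knn)': IterativeImputer(estimator=KNeighborsRegressor())}
for name, imputer in imputers.items():
    pipe = Pipeline([('impute', imputer), ('clf', XGBClassifier())])
    scores = cross_val_score(pipe, X_train, y_train, scoring='roc_auc', cv=5)
    print(name, round(scores.mean(), 3), round(scores.std(), 3))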
We can see that the SimpleImputer is giving us decent performance, though with a wide variance. The IterativeImputer with the DecisionTreeRegressor is doing the best, and with the KNeighborsRegressor it is doing nicely with a tighter variance, which sometimes can be preferable for robustness. One thing to always keep in mind when preparing to do a grid search using these imputers is that the IterativeImputer makes the process take a lot longer, for reasons implicit in the name. In this study, both were compared, and it turned out that the SimpleImputer was both faster and produced a better model, so we will demonstrate with this imputation method here.
Modeling (Class 1: Gainers/Losers):
We can now construct a pipeline and a grid search to build an optimal model for this task, which we can then analyze.
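For instance (the parameter grid is an assumption chosen simply to include the winning values reported just below):

from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

pipe = Pipeline([('impute', SimpleImputer(strategy='mean')),
                 ('clf', XGBClassifier())])
param_grid = {'clf__n_estimators': [100, 500, 1000],
              'clf__max_depth': [3, 5, 7],
              'clf__learning_rate': [0.001, 0.01, 0.1],
              'clf__subsample': [0.8, 1],
              'clf__colsample_bytree': [0.8, 1],
              'clf__colsample_bylevel': [0.8, 1],
              'clf__reg_lambda': [0.5, 0.75, 1]}
grid = GridSearchCV(pipe, param_grid, scoring='roc_auc', cv=5, n_jobs=-1)
grid.fit(X_train, y_train)
print('Best CV roc_auc score:', grid.best_score_, grid.best_params_)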
After a while, we have our completed grid search. Let’s look at the best cross validation score and parameters that it produced.
Best CV roc_auc score: 0.7014277550220106 {'clf__colsample_bylevel': 1,
'clf__colsample_bytree': 1,
'clf__learning_rate': 0.001,
'clf__max_depth': 5,
'clf__n_estimators': 1000,
'clf__reg_lambda': 0.5,
'clf__subsample': 1}
The best auc score from the cross validation was a .701, which is neither awesome nor terrible. Let’s see how the predicted probabilities for the test set relate to the log returns.
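One way to check this, for instance with statsmodels (X_test and log_returns_test are the assumed holdout features and their actual log returns):

import matplotlib.pyplot as plt
import statsmodels.api as sm

probs = grid.best_estimator_.predict_proba(X_test)[:, 1]
ols = sm.OLS(log_returns_test, sm.add_constant(probs)).fit()
print(round(ols.rsquared, 3), ols.pvalues)

plt.scatter(probs, log_returns_test)
plt.xlabel('Predicted probability of being a gainer')
plt.ylabel('Log return')
plt.show()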
The r-squared for this simple linear regression between the model's predicted probabilities and the log returns is very low at 0.059, but the coefficient for the probabilities is significant with a p-value of 0.015. We can see the imbalance of the classes in the scatter plot above: since the market was uptrending during the period of study, the majority of stocks in the index had gains, and are therefore members of the gainer class. Let's look now at what the probability of selecting a gainer at random would have been by dividing the number of securities in the gainer class by the total number of securities in the index.
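Which is a one-liner if the gainer class is defined as a positive log return:

(log_returns > 0).mean()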
0.7294589178356713
Here we can see that an investor would have had a 73% chance of randomly picking a gainer during the time period of study due to the uptrending market. We can now start to see why it does not make much sense to model gainers/losers if we intend to gain insights or make a predictive model which will be useful during different time periods: the proportion of gainers/losers (class membership) will change depending on the behavior of the market, and this model may be attributing to the predictive features gains that had less to do with those features and more to do with the movement of the market. In a moment, we will remove the average return of the market to adjust for this, and model over/under performers relative to the market, but first let's take a quick look at the accuracy and roc curve for this model.
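Roughly as follows (y_test being the holdout gainer/loser labels):

import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve

preds = grid.best_estimator_.predict(X_test)
print('Accuracy Score (test set):', round(accuracy_score(y_test, preds), 2))

fpr, tpr, thresholds = roc_curve(y_test, probs)
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
print('AUC (test set):', round(roc_auc_score(y_test, probs), 2))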
Accuracy Score (test set): 0.81
We can see that the model predicted the correct class 81% of the time, which sounds impressive, but knowing that the probability of randomly selecting a gainer was 73%, it isn’t quite so, and we know that this accuracy could vary widely in another time period where the market trended differently. The auc score on the test set is 0.68, which is not super impressive. Let’s move on to modeling our last target, now that we know that it will give us the most robust insight into the performance of stocks relative to each other.
Classification — Class 2: Over/Under Performers
As we can see from the modeling and analysis above, there is a significant disadvantage to modeling gainers/losers over a given time period, because the category that a stock falls into is highly dependent on the behavior of the overall market during that time period. This can lead the model to be biased in how it evaluates the contributions of features to stock performance, because some of that performance can be attributed to the market's behavior, and not to the underlying features of the stock itself. In order to adjust for this, we can subtract the average return of the market over the time period from the return of each security, thereby excluding the influence of the overall market movement from the target, and focusing solely on the differences in performance among the securities in the index. By modeling relative performance, an investor can construct a hedged portfolio using a long/short equity strategy based on the model predictions, which should then be robust to varying market conditions.
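The adjustment itself is tiny (log_returns again being the per-ticker returns over the period):

# Subtract the average market return, then label securities that beat the market
excess_returns = log_returns - log_returns.mean()
y_relative = (excess_returns > 0).astype(int)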
Imputing Missing Data (Class 2: Over/Under Performers):
The XGBRFClassifier from xgboost was found to be the most effective classifier for this task. Let’s take a look at how the imputation methods get along with this classifier.
We can see that the best performance is being achieved using the SimpleImputer, and although these results are not particularly exciting, some hyperparameter tuning of the classifier will help. Let’s move on to a grid search.
Modeling (Class 2: Over/Under Performers):
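A sketch of what that grid search cell might look like (again, the parameter grid is an assumption chosen to include the winning values reported just below; X_train and y_train now refer to the over/under performer split):

from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from xgboost import XGBRFClassifier

pipe = Pipeline([('impute', SimpleImputer(strategy='mean')),
                 ('clf', XGBRFClassifier())])
param_grid = {'clf__n_estimators': [100, 500],
              'clf__max_depth': [5, 7],
              'clf__learning_rate': [0.001, 0.01],
              'clf__subsample': [0.8, 1.0],
              'clf__colsample_bytree': [0.6, 0.8, 1.0],
              'clf__colsample_bylevel': [0.8, 1.0],
              'clf__colsample_bynode': [0.8, 1.0],
              'clf__reg_lambda': [0.75, 1.0]}
grid = GridSearchCV(pipe, param_grid, scoring='roc_auc', cv=5, n_jobs=-1)
grid.fit(X_train, y_train)
print('Best CV training score:', grid.best_score_, grid.best_params_)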
After running the cell above, we have a fitted grid search object that will have our optimal estimator fit to the training set. Let’s inspect some of the features of this estimator.
Best CV training score: 0.690298076923077 {'clf__colsample_bylevel': 0.8,
'clf__colsample_bynode': 1.0,
'clf__colsample_bytree': 0.6,
'clf__learning_rate': 0.001,
'clf__max_depth': 7,
'clf__n_estimators': 100,
'clf__reg_lambda': 0.75,
'clf__subsample': 0.8}
The best cross-validation score in the grid search on the training set was an roc_auc of 0.69, which is neither great nor terrible. We can now use the best estimator from the grid search to generate predicted probabilities for the holdout set, then see if there is a linear relationship between these probabilities and the actual log returns of the securities.
We can see that the numeric range of these predicted probabilities is extremely slim, but that the classifier is still effective. The r-squared for this simple linear regression of the log returns using the predicted probabilities is very low at .088, but the coefficient for the probability is significant with a p-value of .003, meaning that there is a linear relationship between the log returns and the predicted probabilities generated by the model. What this means for us is that rather than simply choosing securities above/below a threshold probability, one could create a more conservative portfolio by creating a window around a threshold probability and only selecting stocks to long or short that are outside of that window of probabilities. For simplicity, here we will look at just longing or shorting on either side of our chosen threshold, but first we need to figure out what the optimal threshold for this model is by looking at the roc curve.
We can see that the roc_auc score for the holdout set is .70, which isn’t too bad. We can also see that there appears to be a sweet spot right around a True Positive Rate of just under 0.8 and a False Positive Rate around 0.4. We can create a data frame of these rates with their thresholds using the roc_curve function, and use it to determine what the best threshold to use for our model is.
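For example (y_test now holding the over/under performer labels for the holdout set):

import pandas as pd
from sklearn.metrics import roc_curve

probs = grid.best_estimator_.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, probs)
roc_df = pd.DataFrame({'fpr': fpr, 'tpr': tpr, 'threshold': thresholds})
roc_df.head()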
Great, now we can slice into this data frame to find the probability threshold that corresponds with the sweet spot we saw on the roc curve above.
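For instance, filtering around that region (the cutoffs below are just illustrative):

roc_df[(roc_df['tpr'] > 0.75) & (roc_df['fpr'] < 0.45)]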
And there we have it, the sweet spot between tpr and fpr appears to be at index 21. We can get the full resolution of the threshold by indexing the column, and use it to manually generate predictions using this threshold. It will be useful to compare the predictive accuracy using our chosen threshold with the accuracy attained by using a standard probability threshold of 0.5.
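A sketch of that comparison:

from sklearn.metrics import accuracy_score

best_threshold = roc_df.loc[21, 'threshold']
standard_preds = (probs >= 0.5).astype(int)
tuned_preds = (probs >= best_threshold).astype(int)
print('Accuracy Score with Standard Threshold:', round(accuracy_score(y_test, standard_preds), 2))
print('Accuracy Score with Selected Threshold:', round(accuracy_score(y_test, tuned_preds), 2))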
Accuracy Score with Standard Threshold: 0.62
Accuracy Score with Selected Threshold: 0.69
As we can see, fine tuning our classification threshold has given us another 7% on our overall predictive accuracy, bringing us up to 69%. This is considerably better than a random guess, which for this target had a 53% chance of picking an over performer. We can see what the probability of correctly picking an overperforming stock at random would have been by dividing the number of overperforming stocks by the total number of stocks in the index.
0.5270541082164328
We can see that the model is indeed giving us a predictive advantage over randomly selecting an overperforming stock. Now that we have our predictions, we can set about constructing a portfolio using a long/short equity strategy, and see how the returns of this portfolio would compare to buying and holding the market index over the same time period.
Before we move on to portfolio construction, however, let us take a look at the information on feature importances provided to us by the model. There are two ways to look at this: the first is to look at the feature importances from the training process, and the second is to look at permutation importances on the holdout set. Comparing the two can provide interesting insights into how the model behaves with the two sets of data. First, let’s look at the feature importances of the training process.
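Something like the following, pulling the fitted classifier out of the pipeline:

import matplotlib.pyplot as plt
import pandas as pd

best_clf = grid.best_estimator_.named_steps['clf']
importances = pd.Series(best_clf.feature_importances_, index=X_train.columns)
importances.sort_values().plot(kind='barh', figsize=(8, 14))
plt.show()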
Above we can see the relative importances of the features in the model fit, which it uses to make predictions. Interestingly, the leading feature is the analyst rating from Ford Equity Research, perhaps they would be pleased to know. Beneath this we see the list of our fundamental features, all of which are shown to be playing an important role in estimation, apart from some of the dummy variables. This is very informative, but it only gives us a view into how the model built itself to best fit the training set. In order to see how these features contribute to the predictive accuracy of the holdout set, we need to use permutation importances. Permutation importances are generated by iteratively shuffling each feature (breaking the relationship to the target) and measuring the loss of predictive performance caused by doing so. If shuffling a certain feature leads to a major loss in performance, then it can be said to be an important feature in prediction. Let’s look at this below.
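A sketch using scikit-learn's permutation_importance, with 10 repeats to match the 10 shuffles described next:

import matplotlib.pyplot as plt
from sklearn.inspection import permutation_importance

perm = permutation_importance(grid.best_estimator_, X_test, y_test,
                              scoring='roc_auc', n_repeats=10, random_state=42)
sorted_idx = perm.importances_mean.argsort()
plt.figure(figsize=(8, 14))
plt.boxplot(perm.importances[sorted_idx].T, vert=False,
            labels=X_test.columns[sorted_idx])
plt.show()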
Above we can see the distributions of how much the model performance was affected by 10 shuffles of each feature. We can notice that the order of the importances has been shuffled around somewhat, although there are some consistencies. This is due to the fact that what is important in predicting the target for the securities of the training set may be different than what is important in predicting the target for securities found in the testing set. Features which are found toward the top of both lists can be assumed to be important overall, such as Growth 1yr High Est, Revenue Per Employee (TTM), ford, marketCap, and Beta. In the case of some of the permutation importances toward the bottom of the chart, random shuffling of the features actually improved the predictive accuracy on the holdout set!
Although we know that certain features are important to the model, their relationships to the target variable may not be totally clear to a human being, as we can see below by plotting the log returns over the (apparently highly valuable) ford ratings.
Ford is an important feature with an unclear relationship to the target.
Constructing a Portfolio
To demonstrate how our model can be useful to an investor, we will construct a basic long/short equity portfolio using the model’s predictions of over/under performers. The benefit of the long/short equity strategy is that the investor is hedged against the market by taking an equal amount of long and short positions. This way, the portfolio is not affected by overall upward or downward movement of the market, and instead is only affected by how well the investor has predicted the relative performance of securities within it. Our model from above is correctly identifying over and under performers with an accuracy of 69%, which isn’t perfect, but when this predictive power is combined with a long/short equity strategy, it can lead to a consistently profitable trading strategy by both diversifying the risk of the portfolio by taking many positions and also hedging against the market. If the market moves up, the investor will gain from their long positions while losing on their short positions, and conversely if the market moves down, they will gain on their short positions while losing on their long positions. By having predicted with better than random accuracy which securities were set to over or underperform the market, the gains from the winning side of the portfolio should average out to be bigger than the losses from the losing side, no matter which way the market moves. This is why it was so important to subtract the average return of the market from our target variable, because it led to developing a model which focused only on relative performances, ignoring the impact of the market.
We can simulate how such a portfolio would have performed using the securities of the test set by using the class predictions for each security to alter the sign of their respective log returns, and averaging all of these adjusted returns together. This would be the equivalent of the investor making equal dollar investments into each security in the test set, going short in any stock predicted by the model to underperform, and going long in any stock predicted to overperform. Since being short in a stock turns losses into gains and vice-versa, we will take all securities with predicted class 0 and reverse the signs.
Now we are ready to determine what the returns would have been for a portfolio that took equal dollar value positions in each stock of the holdout set, going short in those predicted to underperform and long in those predicted to overperform. We can do this as follows:
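A sketch of that calculation (log_returns_test holds the period log returns of the holdout tickers, and tuned_preds the class predictions from above):

# Being short turns losses into gains, so flip the sign for predicted underperformers
adjusted_returns = log_returns_test.copy()
adjusted_returns[tuned_preds == 0] = -adjusted_returns[tuned_preds == 0]

print('Portfolio Return (test set):', adjusted_returns.mean())
print('Market Return (test set):', log_returns_test.mean())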
Portfolio Return (test set): 0.10186008839757786
Market Return (test set): 0.14362567579115593
We can see that the portfolio has underperformed the market considerably, but this is the nature of a hedging strategy. This is due to the fact that the market itself did very well over the time period studied, so the short side of the portfolio had losses which subtracted from the potential gains of buying and holding the market index. The sacrifice in comparative performance to such a booming market is made in order to protect the portfolio in the event that the market behaves poorly. To see this in action, let's repeat this comparison, but with a simulated bear (falling) market, which we can create by subtracting the average market return twice from each security, thereby making the overall movement of the market the opposite of what actually happened. We can then compare how the portfolio would have fared against buying and holding the market index under these adverse circumstances.
Now we can repeat the process above to see how our portfolio performance would have compared to buying and holding this simulated bear market.
Portfolio Return (test set): 0.050753389653326986
Market Return (test set): -0.06931890230988941
Here we can see where the hedged portfolio design truly shines: where someone who had bought and held the market would have suffered losses, the hedged portfolio actually saw gains due to the short positions, thus demonstrating how the long/short equity portfolio strategy reduces market risk, and leads to more consistent profitability.
Visualizing the Portfolio Performance:
We can get a deeper understanding of what is happening by visualizing the performance of the portfolio vs the market. In this case, we are actually looking at the portion of the market which is within the holdout set. We can make a quick visual to see how the overall market differs from our test set, after we create a data frame representing the daily returns of all of the securities in the index by calling pandas .diff() method on the log_close data frame, which will calculate the daily change in log price, also known as log returns.
We can use the cumulative sum to represent the cumulative returns over time. Beneath we will look at how the test set compares to the entire index.
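Roughly as follows, assuming log_close has one row per date and one column per ticker:

import matplotlib.pyplot as plt

daily_returns = log_close.diff()

plt.figure(figsize=(12, 6))
daily_returns.mean(axis=1).cumsum().plot(label='S&P 500')
daily_returns[X_test.index].mean(axis=1).cumsum().plot(label='Test set')
plt.legend()
plt.show()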
We can see above that the portion of the S&P 500 that is in the test set noticeably outperformed the index, but that the overall shape is almost exactly the same. Recall from above that the average for the holdout set was .14, while the index had an overall return of .11. Now, we can use the predictions of under and over performers given to us by the model to split the securities in the test set into long and short positions. The portfolio will go long in all companies predicted to outperform, and short in all that were predicted to underperform. To construct this portfolio, we will split the log_returns_full frame into the long side and short side, reverse the sign of the short side, and recombine the two. Then, we will look at a plot comparing the returns of a buy & hold strategy of the test set vs the long/short portfolio we have made.
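A sketch of that construction (here reusing the daily_returns frame from the previous step in place of log_returns_full):

import matplotlib.pyplot as plt
import pandas as pd

test_daily = daily_returns[X_test.index]
longs = X_test.index[tuned_preds == 1]
shorts = X_test.index[tuned_preds == 0]
portfolio_daily = pd.concat([test_daily[longs], -test_daily[shorts]], axis=1)

plt.figure(figsize=(12, 6))
test_daily.mean(axis=1).cumsum().plot(label='Buy & hold (test set)')
portfolio_daily.mean(axis=1).cumsum().plot(label='Long/short portfolio')
plt.legend()
plt.show()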
Now we can really see much more detail of what is going on here. We see that the final return for the test set is at .14, and the total return for the portfolio is at .10, just as we calculated earlier, but now we can see the history leading up to that point. Notice that the portfolio is much less volatile than the buy & hold strategy, with the peaks and troughs much more subtle. This is due to the hedging. To get an even better idea of what is happening, we can look at the returns of the long side of the portfolio and the short side of the portfolio separately.
Here we see the mechanics behind our strategy most clearly. The green line corresponds to the collective returns from our long positions, and the red line shows that of our short positions. Notice how they almost look like reflections of each other, but that the green line goes further up than the red line goes down. This was the entire goal of our strategy in action! The mirrored peaks and troughs are what combine to create the smoother line of the portfolio vs the buy & hold strategy, since the market movements experienced by all of the stocks have been mostly canceled out by this hedging. The fact that the red line doesn’t lose as much as the green line gains is thanks to the help of our model, which successfully helped us pick stocks that were more likely to over or underperform the market. Since the short positions lose the investor money when the prices increase, the fact that these positions as a whole underperformed the market means that the losses will be minimized, while the positions that the investor is long in were likely to maximize gains.
To explain why we are willing to sacrifice profits for our hedging strategy, let’s create the simulated bear market, and see how the same portfolio would have behaved in this environment. To create the bear market, we can subtract the mean of each day’s log returns from each company’s log return that day twice, effectively reversing the flow of the market over this time period. Let’s do this, and create a visualization to verify that it has worked as expected.
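The construction itself is one line, plus a quick plot to verify:

import matplotlib.pyplot as plt

# Subtract each day's cross-sectional mean return twice to mirror the market
bear_daily = daily_returns.sub(2 * daily_returns.mean(axis=1), axis=0)

plt.figure(figsize=(12, 6))
daily_returns.mean(axis=1).cumsum().plot(label='Actual market')
bear_daily.mean(axis=1).cumsum().plot(label='Simulated bear market')
plt.legend()
plt.show()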
We can see that this math has mirrored the cumulative returns of the index, and given us a simulation of a market downturn. Let’s see how the same portfolio positions, as determined by the predictions of our model, would have performed vs the test set in these market conditions.
Here we can see the beauty of our strategy. In these adverse conditions, the profits generated from the short positions have outweighed the losses from the long positions, so that the portfolio has actually made gains while the buy & hold investor would have had losses, and without all of the dramatic swings to boot. Let's look again at the two halves of the portfolio separately.
We can see that the red line representing the short positions is driving the profits, and that the predictive power of the model has again led to an overall win in these conditions by allocating our portfolio based on the securities most likely to over or underperform the market. Notice that the green line ends up just about breaking even, whereas the test set went down to -.06, meaning that our long positions have indeed overperformed the market. And since the short side collectively provided a cumulative log return of .16, the underlying stocks actually lost .16, meaning that they drastically underperformed the market during these hard times, just as we planned.
Sexuality, stigma & discrimination, why do people even care? | Sexuality, stigma & discrimination, why do people even care?
My personal experience and some top tips
Pride Month
In the early hours of June 28th 1969 in Greenwich Village, New York City police raided a gay club named the Stonewall Inn. This raid started a riot among bar patrons and neighbourhood residents as police roughly hauled employees and patrons out of the bar, leading to six days of protests and violent clashes with law enforcement outside the bar on Christopher Street, in neighbouring streets and in nearby Christopher Park. The Stonewall Riots served as a catalyst for the gay rights movement in the United States and around the world.† To commemorate the Stonewall riots, we now hold Pride month every June to recognise the impact LGBT people have had in the world.
Annual LGBT Pride celebrations are a time where everyone shows up for inclusivity — including big companies. While companies such as Adidas, Calvin Klein, Under Armour, Spotify, Dr Martens and many more show their varying degrees of support in this time, it’s a real shame that many companies use this as a marketing opportunity and negate these values for the other 11 months of the year. Don’t be fooled, nine of the biggest, most LGBTQ-supportive corporations in America gave about $1 million or more each to anti-gay politicians in the last election cycle.*
There are many companies across the world hiring with complete inclusivity. This has endless benefits, including a collective experience that can only enhance companies' values and cultures. Check out some of the companies in the Stonewall Top 100 employers list. The list is compiled from the Workplace Equality Index — the UK's leading benchmarking tool for LGBT inclusion in the workplace:
The Inclusive Companies website also has its own list, which goes deeper into companies to show the UK's top 50 that promote inclusion throughout each level of employment within their organisation. This list shows companies that hire without discrimination on grounds of age, disability, gender, LGBT identity or race. The Inclusive Top 50 UK Employers 2019/20 List has many names you will recognise and maybe even use every day, including:
Sky
AutoTrader
NHS
E-on
Specsavers
Moneysupermarket
Bupa
As well as being a month long celebration, Pride is also a big opportunity to peacefully protest and raise political awareness of current issues facing the LGBTQI+ community. What most people know as ‘Pride’ is a parade in which there are street parties, community events, public speaking, festivals in the streets and educational events in most cities across the world.
My experience
I am not someone who gives out personal information like smiles, not everyone needs to know the ins and outs of my life, my thoughts and my feelings. I am however, an extremely open and honest person. These two things seem like they couldn’t coexist but it’s just about balance. You can be open and honest without giving information that is personalised to your experience. This is actually something I had to learn over many years due to inadvertently ‘lying’ through other people’s assumptions.
So a real-world example of this, for me, is sexuality. My personal feelings on this matter are generally that you shouldn’t have to ‘come out’ as gay, just as much as you shouldn’t have to ‘come out’ as straight. People have preset assumptions about you, and if you don’t meet those assumptions there’s a sense of unease — while, if you don’t correct the assumption you are seen as misleading. There’s also the idea that you have to fit in to a certain box, e.g. you are straight or gay — when in reality people come in all shapes, sizes, colours, orientations and that’s amazing.
I am not ‘straight’
There are so many words now for what I am… bisexual, pansexual? Whatever you call it, I have always been more interested in the things that people can choose and change than the things they can’t. If someone is an amazing, loving, considerate, genuine, caring person — these are things they can choose, and really, why wouldn’t you love someone like that? And while this is my life, it’s not information I feel the need to share when I meet people. The people I am close enough to, know this about me, and there will be a few people that are surprised by this post — and really, if you’re surprised you should be a little bit ashamed.
I am a young female UX designer in the financial sector, I am sure just from that statement you can imagine all of the stigmas I manage day to day. These stigmas definitely don’t come from my team — creative people are just built differently, they are so much more open to equality in the world. My career is probably one of the reasons I have grown the way I have in my personal life — for example, if I am asked about my past relationships I will openly say “ex-girlfriend” or share my experience, but otherwise it’s nobody’s business. If it’s not a conversation that would have been had without the knowledge, then it’s not a conversation I am starting.
Navigating your way through your feelings, whatever age that happens, is not easy in this world. Pride is a place where people can feel like they are different, they don’t need to explain themselves and they can just truly be who they are — whatever that entails. Even with the most amazing parents and friends, I was scared to tell them. The idea of having to even understand how to communicate something so deep and personal to people that would usually be there to help you is honestly terrifying. But, once you have that foundation it does get easier. And the way you end up telling the people you love is different for everyone and it may not be ‘perfect’ — I mean, I was a teenager, petrified and ended up sending my Mum a message to ‘let her know’, as if I was staying out for the night. And in hindsight, that was so true to the way I feel about it and I wouldn’t change that now.
Everyone is vulnerable, raw, real and weird — no matter how well hidden it is. When you approach people in any situation, I think that we could all take more caution in being respectful. You don’t know what people are going through and while I don’t usually share personal information like this, if this post could help even one person to feel accepted or educate someone as to why we celebrate pride and why it’s important — it’s worth it.
The reality
Around 40% of homeless youths are LGBT, mostly due to rejection from their friends and family. Along with this, gay and bisexual youth and other sexual minorities are
8 times more likely to have tried to commit suicide
6 times more likely to report high levels of depression
3 times more likely to use illegal drugs
Discrimination
When people decide to act on their prejudice, stigma turns into discrimination. Discrimination is when you treat people differently based on the groups, classes, or other categories to which they are seen to belong, and it is not okay.
Discrimination can take different forms, it’s not always something clear and aggressive, it can take the form of:
Obvious acts of prejudice and discrimination — e.g. someone who is open about being transgender or their sexual orientation and being refused employment or promotion due to this
More subtle forms of discrimination, but no less harmful, are reinforcement of negative stereotypes and feelings of isolation — e.g. use of the word ‘gay’ as a derogatory term or teaching your child that being different is a bad thing and to discriminate against people of a different size, race, culture or sexual orientation
The law for workplaces
Everyone has the right to be treated fairly at work and to be free of discrimination on grounds of age, race, gender, gender reassignment, disability, sexual orientation, religion or belief. For more information about discrimination in the workplace and what you can do about it, check out gov.uk.
Schools and Colleges
Schools have a responsibility to be diverse and empower young people to be open and inclusive. During School Diversity Week, primary and secondary schools as well as colleges across the UK celebrate lesbian, gay, bisexual and trans equality in education. In 2019, schools and colleges representing 1.4 million pupils signed up to take part and received their free toolkit from Just Like Us.
Educating young people
Just Like Us send LGBT+ young adult ambassadors to deliver talks and workshops championing LGBT+ equality. They speak honestly about who LGBT+ people are, and share their own stories growing up today to connect with all students in a powerful way. After their sessions, 86% of students understand why everyone should care about LGBT+ issues
Parents
A parent’s role and response in their child coming out is one of the most impactful interactions that will shape their thoughts and mental state. As a parent you need to be open, responsible, positive and know how to react. There’s so much information online that you can educate yourself with — and if you’re not a parent, you are a role model. You may have nephews, nieces, friends’ children, godchildren and even younger colleagues that may come out to you, and you should be equipped with tools to know how to deal with the situation in a positive way.
Support
There are many charities and support groups that you can contact if you don’t have anyone to open up to or if you are struggling to connect with people that understand — but just know, you are not alone! It feels like a big step to contact any of these companies but just opening yourself up a little bit can introduce you to a world of acceptance and new friendship.
MindOut is a mental health service run by and for lesbians, gay, bisexual, trans, and queer people. Their vision is a world where the mental health of LGBTQ communities is a priority, free from stigma, respected and recognised. They do this by:
Listening to and responding to the LGBTQ experience of mental health
Offering hope through positive relationships and professional expertise
Preventing isolation, crisis and suicidal distress in LGBTQ communities
Providing safe spaces for people to meet and support each other
Helping people protect their rights and get their voices heard
Campaigning and creating conversations about LGBTQ mental health throughout the world
MindOut is needed because LGBTQ people:
do not get the support they need for their mental health from mainstream services
often feel isolated from LGBTQ communities
face additional discrimination, exclusion and minority stress
deserve a space where their identities are recognised and understood
Top tips
Whether you have children or not, you should educate yourself on how to make coming out a positive experience.
Do not pass on your discrimination to the children in your life — if everyone is inclusive now, the next generation will turn out inclusive. It’s that simple!
If you wouldn’t ask about someones sex life before, don’t ask now.
If you want details — ask google. We are not an encyclopedia of sexual orientation and shouldn’t need to explain this to everyone we meet.
Sexual orientation does not mean that you find every person of that sex attractive.
Sexual orientation is not a fun fact to be pulled out in ‘get to know you’ games or after a few drinks.
If we’re not close, it’s not okay for you to joke about my sexuality.
And finally… It’s never your place to reveal someone’s sexual orientation to other people.
* Forbes
† History.com Stonewall Riots | https://medium.com/an-injustice/sexuality-stigma-discrimination-why-do-people-even-care-8051e025b295 | [] | 2020-11-28 22:06:46.511000+00:00 | ['LGBT', 'Mental Health', 'Discrimination', 'Pride'] |
GPT-3 101: a brief introduction | GPT-3 101: a brief introduction
It has been almost impossible to avoid the GPT-3 hype in recent weeks. This article offers a quick introduction to its architecture, the use cases already available, as well as some thoughts about its ethical and green IT implications.
Photo from https://unsplash.com/@franckinjapan
Introduction
Let’s start with the basics. GPT-3 stands for Generative Pretrained Transformer version 3, and it is a sequence transduction model. Simply put, sequence transduction is a technique that transforms an input sequence to an output sequence.
GPT-3 is a language model, which means that, using sequence transduction, it can predict the likelihood of an output sequence given an input sequence. This can be used, for instance to predict which word makes the most sense given a text sequence.
A very simple example of how these models work is shown below:
INPUT: It is a sunny and hot summer day, so I am planning to go to the…
PREDICTED OUTPUT: It is a sunny and hot summer day, so I am planning to go to the beach.
GPT-3 is based on a specific neural network architecture type called Transformer that, simply put, is more effective than other architectures like RNNs (Recurrent Neural Networks). This article nicely explains different architectures and how sequence transduction can highly benefit from the Transformer architecture GPT-3 uses.
Transformer architectures are not really new, as they became really popular 2 years ago because Google used them for another very well known language model, BERT. They were also used in previous versions of OpenAI’s GPT. So, what is new about GPT-3? Its size. It is a really big model. As OpenAI discloses in this paper, GPT-3 uses 175 billion parameters. Just as a reference, GPT-2 “only” used 1.5 billion parameters. If scale was the only requisite to achieve human-like intelligence (spoiler, it is not), then GPT-3 is only about 1000x too small.
Language Models are Few-Shot Learners, OpenAI paper.
Using this massive architecture, GPT-3 has been trained on equally huge datasets, including the Common Crawl dataset and the English-language Wikipedia (spanning some 6 million articles, and making up only 0.6 percent of its training data), matching state-of-the-art performance on “closed-book” question-answering tasks and setting a new record for the LAMBADA language modeling task.
Use cases
What really sets GPT-3 apart from previous language models like BERT is that, thanks to its architecture and massive training, it can excel in task-agnostic performance without fine tuning. And this is where the magic comes in. Since it was released, GPT-3 has been applied in a broad range of scenarios, and some developers have come up with really amazing use case applications. Some of them are even sharing the best ones on github or their own websites for everyone to try:
A non exhaustive list of applications based on GPT-3 are shown below:
Text summarizing
Regular Expressions
Natural language to SQL
Natural language to LaTeX equations
Creative writing
Interface design and coding
Text to DevOps
Automatic mail answering
Brainstorming companion
Dialog flows workbench
Guitar tablature generation
Time to panic?
The first question that comes to mind as someone working in the IT services market when seeing all these incredible GPT-3 based applications is clear: will software engineers run out of jobs due to AI improvements like these? The first thing that comes to my mind here is that software engineering is not the same as writing code. Software engineering is a much profound task that implies problem solving, creativity, and yes, writing the code that actually solves the problem. That being said, I do really think that this will have an impact on the way we solve problems through software, thanks to priming.
Just as humans need priming to recognize something we have never noticed before, GPT-3 does too. The concept of priming will be key to making this technology useful: providing the model with a partial block of code, a good question about the problem we want to solve, etc. Some authors are already writing about the concept of “prompt engineering” as a new way to approach problem solving through AI in the style of GPT-3. Again, an engineering process still requires much more than what is currently solved by GPT-3, but it will definitely change the way we approach coding as part of it.
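As a toy illustration of priming, here is a made-up few-shot prompt for the natural-language-to-SQL use case listed above; the API call follows the general pattern of the beta Python client at the time of writing, but treat the engine name and parameters as illustrative assumptions:

import openai  # requires a key from the beta program, e.g. openai.api_key = "..."

# Prime the model with a task description and a couple of examples, then the new input
prompt = """Translate English to SQL.
English: show all customers from Spain
SQL: SELECT * FROM customers WHERE country = 'Spain';
English: count the orders placed in 2020
SQL:"""

response = openai.Completion.create(engine="davinci", prompt=prompt,
                                    max_tokens=64, temperature=0)
print(response.choices[0].text)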
GPT-3 has not been available for long (and actually, access to its API is very restricted for the moment), but it is clearly amazing what developers’ creativity can achieve by using this model’s capabilities. Which brings us to the next question. Should GPT-3 be generally available? What if this is used for the wrong reasons?
Not so long ago, OpenAI wrote this when presenting its previous GPT-2 model:
“Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights.”
As of today, OpenAI still acknowledges these potential implications but is opening access to its GPT-3 model through a beta program. Their thoughts on this strategy can be found in this twitter thread:
It is good to see that they clearly understand that misuse of generative models like GPT-3 is a very complex problem that should be addressed by the whole industry:
Despite having shared API guidelines with the creators that are already using the API, and claiming that applications using GPT-3 are subject to review by OpenAI before going live, they acknowledge that this is a very complex issue that won’t be solved by technology alone. Even the Head of AI @ Facebook entered the conversation with a few examples of how, when prompted to write tweets from just one word (jews, black, women, etc.), GPT-3 can show harmful biases. This might have to do with the fact that GPT-3 has been trained on data filtered by Reddit and that “models built from this data produce text that is shockingly biased.”
And this is not the only threat. Advanced language models can be used to manipulate public opinion, and GPT-3 models and their future evolutions could imply huge risks for democracy in the future. Rachel Thomas shared an excellent talk on the topic that you can find here:
Data Bias is not the only problem with language models. As I mentioned in one of my previous articles, the political design of AI systems is key. In the case of GPT-3 this might have huge implications on the future of work and also on the lives of already marginalized groups.
As a funny (or maybe scary) note, even GPT-3 thinks GPT-3 should be banned!
How big is big enough?
Going back to the architecture of GPT-3, training a model of 175 billion parameters is not exactly cheap in terms of computational resources. GPT-3 alone is estimated to have a memory requirement exceeding 350GB and training costs exceeding $12 million.
It is clear that the results are amazing but, at which cost? Is the future of AI sustainable in terms of the compute power needed? Let me finish this article by using some sentences that I wrote for my “Is Deep Learning too big to fail?” article: | https://towardsdatascience.com/gpt-3-101-a-brief-introduction-5c9d773a2354 | ['David Pereira'] | 2020-08-24 18:01:42.747000+00:00 | ['OpenAI', 'Artificial Intelligence', 'NLP', 'Gpt 3'] |
Insane workflows with Slack dialogs and YellowAnt! | Insane workflows with Slack dialogs and YellowAnt!
Make Slack your de-facto command-center
Slack’s recent release of Dialogs is a complete game-changer — it has opened up a whole new world of possibilities and marks a turning point in enterprise ChatOps. Here at YellowAnt, we built a bot that lets you manage all your workplace apps through commands and buttons and create and execute powerful command and event triggered cross-application workflows. With our integration with Slack Dialogs, YellowAnt has not only become much easier to work with, but a lot more functional and sticky. You can now take actions across your apps with simple Slack Dialogs. It’s awesome — Check it out!
You can now send and reply to emails from Slack with YellowAnt
Here’s how…
Commands for GMail
Create a YellowAnt account and Integrate GMail from the YellowAnt marketplace
Your GMail account will be accessible through the command gmail
Type gmail
YellowAnt will show you a list of GMail commands
Under send, click on Run this command.
Send email dialog
This opens a dialog where you type out the message details
Click on Execute. Your mail will be on its way!
Insane! Right?
Replying to emails from Slack with YellowAnt
YellowAnt GMail integration brings all your emails into Slack with some action buttons like Star, Mark as Read or Important, and Reply
Click on Reply. This opens the Gmail reply dialog
Type out the body of the reply and hit Execute
Voila! Your reply is sent! | https://medium.com/startup-frontier/insane-workflows-with-slack-dialogs-and-yellowant-711d88123cd0 | ['Vishwa Krishnakumar'] | 2017-10-01 14:02:32.297000+00:00 | ['Chatbots', 'AI', 'Slack', 'DevOps', 'Software Development'] |
5 Things Confident People Do Differently | 5. They Don’t Sell Themselves Short
How often do you doubt yourself? How often do you think that you’re not equipped to handle a situation in a relationship, not qualified for a promotion at work, not competent at any given skill? How often do you second guess your every move, doubt every decision you make?
Unfortunately, the resounding answer for most people is “too often.”
In our modern-day and age, self-doubt and low self-esteem run rampant. Confidence is fickle, running in the veins of the rarest — difficult to catch, and even more difficult to keep; there one moment, gone the next.
When it comes to understanding their abilities, confident people set the bar high. They don’t undersell their capabilities, strength, or ability to succeed. They possess a firm realization of their own momentous potential and believe in the undertakings they are capable of achieving.
Most importantly, confident people don’t deter themselves from an opportunity by underestimating what they’re capable of. In short, they refuse to be outsmarted by the little voice in their heads — they listen instead to the beating of their heart, the one that says, “you are enough.”
In its purest form, confidence is the building block to experiencing more. It removes the metaphorical stopper, allowing you to trust in your overwhelming capacity to achieve and deserve every wide-open door to opportunity.
How Social Media Affects Our Mental Health | Don’t Facebook your problems. Face them.
The power and influence of social media today are so great that they have penetrated under our skin, and the development of new technologies has changed the world we live in — our habits, behavior, and communication. Young generations, especially teenagers, are a special target and have become prisoners of these new ways of communication and social networks. Social media has become the simplest way to communicate with each other and to exchange ideas, information, photos, and statuses. Since social media entered our private lives, we can freely say that it has a significant influence on our mental health.
So let’s see in which ways social media affects our mental health.
It can be addictive.
It’s obvious that social networks can have a negative impact on our mental health, and their use can easily become an addiction. According to Dr. Shannon M. Rauch at Benedictine University in Arizona, when your online posts are rewarded with comments and “likes” it serves as reinforcement, which can quickly develop into a habit that’s hard to break. Authors of a study from Nottingham Trent University conclude that there is a specific Facebook addiction disorder. It can cause neglect of personal life, mental preoccupation, escapism and mood-modifying experiences. Also, people who excessively use social networks tend to conceal the addictive behavior. The addiction to social media especially depends on the time we spend on it. The more we use social networks, the more we stimulate the pleasure centers and dopamine production in our brain.
It causes a bad mood.
Social networks affect our mood and trigger more sadness than well-being. For example, Facebook allows people instant connection. “Offline” communication powerfully enhances well-being, but interaction on Facebook may predict the opposite result for young people. It can cause social isolation, which is one of the worst things for our mental and physical health. In one study, a team looked at how much people used 11 social media sites: Facebook, Twitter, Google+, YouTube, LinkedIn, Instagram, Pinterest, Tumblr, Vine, Snapchat, and Reddit. In the end, it turned out that the more time people spent on these sites, the more they perceived themselves as socially isolated.
It enhances comparison factor and jealousy.
This includes comparing our lives to others’, which is mentally unhealthy. We often scroll through our feed and comment to ourselves on others’ statuses and photos, for example: „Look at what she got, it’s so cool!“ or „WOW, he’s in Portugal, I want to go there!“. In fact, we make judgments about how we measure up, and we become jealous of other people. One study looked at how we make comparisons to others’ posts — are we feeling better or worse than our friends? It turned out that even feeling that another person is better than you makes you feel bad. Also, when we feel jealous we tend to post more and more just to present our life as much better. It’s a circle with no end. Jealousy and envy can lead to feelings of depression.
We feel that social media helps us.
When we use social media we think that it is good for our mood, but we actually don’t feel very good. There is a disorder called FoMO (Fear of Missing Out). It’s a social anxiety characterized by a desire to stay continually connected with what others are doing, and we think that it is good to know all that stuff. It’s like an addiction to drugs or cigarettes. When people use drugs they think it will fix their problems, but things just get worse and worse. In one study, one group of people were using Facebook, and other groups were doing some other activities. When they were done, the group who were using Facebook felt much worse than the other groups. The key is that they thought they would feel better but, in the end, they didn’t.
We feel more social if we have more friends on social media.
This is totally incorrect. We can’t be more social just because we have more friends on social media, because these are virtual friends and virtual communication, which can’t be compared with real communication. If we often use social media for conversing with our friends, we can still feel lonely. Loneliness can’t be fixed with more friendships on social media, because virtual friend time doesn’t have the therapeutic effect that time with real friends does.
All of these facts show us that we live in an enchanted world. But that doesn’t mean that social media doesn’t have its benefits. It allows us to connect with people around the world or with friends we had lost touch with. Also, there are social networks where we can find a job or read about things in areas of our interest, but getting on social media when you are bored or you need an emotional lift is a very bad idea. We have to learn to control our usage of social networks. It’s a great idea to take a break from social media — turn off the notifications, just to see how long you can endure, and how that affects your mood. If you think you can do it, go on!
Wishful Thinking Is The Secret Solution To Life’s Trickiest Problems | This version of wishful thinking would help you find your career, let go of fear and live your dream life.
Photo:Kazi Mizan/Unsplash
Many life’s problems that affect us every day seem to defy answer. A common one many people grapple with is: what should be my career?
A lot has been written about finding your dream career or doing what you love or following your passion, yet many people are still lost about exactly what to do.
There’s a question that works, however. A question that gives you an inroad into an otherwise intractable problem. It’s one question that the smartest career guides and counsellors have learnt to ask. You’ll find this question now in the most helpful career guides.
Finding the right balance of using a finite amount of time, with a finite amount of resources (mostly, money) to craft a career and make a living from it has understandably had many of us tied up in knots.
It’s almost like we never get the right answer. Even when we do — based on the facts we have — we still doubt if we’re really doing what we should be doing with our lives.
This question, however, throws the limitation of time and financial resources away, and ask something a little different.
It goes thus:
If you had all the time and money in the world [you want], what would you do? What career would you pick up?
The magic of this question is not its possibility, in fact, it’s an unrealistic scenario, but that’s kind of the point. And that’s why it’s worked so well, for many people to get clear about what they want to do.
Take the example of Esther, a 20-year old, school-drop out, that finally got clear and figured out what she wanted to do with her life, after being asked this question and taking time to think about it.
Here’s another one that’s often asked when someone is sensing you know what to do, but you’re letting fear hold you back goes thus. It goes to the bottom of revealing what is often really holding us back from living out our dreams:
“What Would You Attempt to Do If You Knew You Could Not Fail?” What would you do if failure wasn’t an option?
Building Your Castle In The Air
In the first example, because many people think the biggest roadblock is time or money. So, if these constraints were relaxed or were to completely disappear, what would you do? The second example, gets to the bottom of answering the question, what would you do if nothing could stop you?
There is a pattern to this provocative but very useful questions that have the power to tease out what you seem muddled about. You may have noticed it already.
We take a problem with constraints, then kick out the constraint and see if we can answer the problem without the constraint. There’s a name for this in science. It’s called relaxation.
There are many times when we are up against a thorny, gnarly, impassable problem that just seems to defy our best effort. Our options are stark either/or choices with no middle ground.
The examples above show one of the most popular uses of relaxation, also called constraint relaxation, which simply removes some of the constraints on a problem and makes progress on a looser form of the problem before coming back to reality.
Think of it as building your castle in the air first, then coming back down to put a foundation beneath it.
Continuous Relaxation
Another form of relaxation is to imagine you already have a blend of the two options you want to choose from. They already both exist in your life.
Imagine this as deciding between iced tea and lemonade. First, imagine a 50–50 “Arnold Palmer” blend and then round it up or down.
Imagine you have to round it up to iced tea, and delete lemonade, or imagine reducing iced tea to zero leaving only the lemonade and see how both feel. This gives you a big clue as to which matters more.
Possibilities Into Penalties
The third, Lagrangian Relaxation, turns impossibilities into mere penalties, teaching the art of bending the rules (or breaking them and accepting the consequences).
A rock band deciding which songs to cram into a limited set, for instance, is up against what computer scientists call the knapsack problem — a puzzle that asks one to decide which of a set of items of different build and importance to pack into a confined volume.
In its strict formulation, a knapsack problem is famously intractable, but that needn’t discourage our relaxed rock stars.
As demonstrated in several celebrated examples, sometimes it’s better to simply play a bit past the city curfew and incur related fines that to limit the show to the available slot. In fact, even when you don’t commit the infraction, simply imagining it can be illuminating.
To Sum Up
As an optimization technique, relaxation is all about being consciously driven by wishful thinking.
So when you have a problem that seems intractable, an either-or choice that’s hard to make, suspend reality and use wishful thinking, solve the new freedom-fused problem wishful thinking gives you, and things get clearer when you return to reality.
In other words, wishful thinking can be useful. Make your castle in the air, then build foundations under them.
Unless we're willing to spend all our lives striving for unattainable perfection every time we encounter a hitch, hard problems demand that instead of spinning our wheels, we imagine easier versions and tackle those first.
When applied correctly, this is not just wishful thinking, not fantasy or idle daydreaming. It’s one of our best ways of making progress. | https://medium.com/datadriveninvestor/wishful-thinking-is-the-secret-solution-to-lifes-trickiest-problems-8b90364e7cc3 | ['Mordecai Ayuz'] | 2020-11-09 06:02:16.483000+00:00 | ['Life Lessons', 'Productivity', 'Leadership', 'Problem Solving', 'Careers'] |
Announcing Pangeo Earthcube Award | An earlier version of this announcement was first shared as an email to the Pangeo google group on Aug. 31, 2017.
Dear Friends and Colleagues
I am thrilled to share the news that NSF has decided to fund a proposal to support the Pangeo project!
Last December I reached out to you to enquire about interest in a collaborative proposal, and many of you expressed your enthusiasm. After lots of back and forth with managers of different programs, we decided to submit to the EarthCube program, which funds cyberinfrastructure projects related to Earth Science. The specific solicitation we responded to is here. Our project is technically called an “EarthCube integration.” This type of project requires a close link to “Geoscience Use Cases,” i.e. actual science applications. The need to closely intertwine the technical development and the scientific applications determined the structure of our proposal and the makeup of the team.
In the end, we ended up with the following team:
Columbia / Lamont:
National Corporation for Atmospheric Research / Unidata
Anaconda (Formerly Continuum Analytics):
Although many other people were interested in and supportive of this initiative, the proposal team includes people who were actually in a position to formally collaborate on an NSF proposal. This unfortunately excluded lone grad students and postdocs, as well as Stephan Hoyer (creator of xarray) himself. My only regret is that there was no easy way to directly involve the broader Pangeo community; we were constrained by the realities of NSF’s policies. On the upside, we have entrained several new people from Lamont into the project, including some prominent senior scientists who are not yet Xarray users but who recognize the importance of the tools we are building for scientific progress. Their involvement strengthened the “Use Case” aspect of the proposal, and they provide an ideal test case for observing and evaluating the transition of a research group from commercial to open-source scientific software. Please welcome them to our community.
The public details of the awards can be found here, on the NSF website (Columbia, NCAR; Continuum is funded via a subaward through Columbia.) In total, we will receive $1.2M over a three year period. Our goal is to leverage this funding to move forward the goals we have collectively identified through our workshop last year and other ongoing discussions. We plan to conduct all of this work openly and transparently, involving the broader community in every way possible.
I have published the proposal Project Description via figshare under a CC BY 4.0 license on Figshare:
The title of the proposal is “Pangeo: An Open Source Big Data Climate Science Platform.” I encourage anyone interested to browse the proposal and offer your feedback and suggestions. You may share and adapt this document as you wish, but please acknowledge the authors.
We plan to conduct all of our work via the pangeo-data GitHub organization. In particular, we have a new pangeo repo we are using just as an issue tracker and wiki:
Please join in these discussions freely!
I want to sincerely thank everyone who has been part of this initiative so far. Your ideas, enthusiasm, and energy have provided the motivation to get us to this point. I’m so excited about taking Pangeo to the next level! | https://medium.com/pangeo/announcing-pangeo-earthcube-award-fefbe54acbec | ['Joe Hamman'] | 2018-06-06 15:57:07.910000+00:00 | ['Science', 'Data Science', 'Community', 'Funding'] |
Using Jackson annotations with Jackson-jr | Using Jackson annotations with Jackson-jr
(to rename and remove properties)
Something that I briefly covered earlier (in Jackson 2.11 features) is the new jackson-jr extension called jackson-jr-annotation-support . But as I have not gotten much feedback since that release, maybe it is time to re-review this additional functionality.
(note: if you are not familiar with Jackson-jr library itself, you may want to read “Jackson-jr for ‘casual JSON’” first before continuing)
Introduction of general “extension” mechanism for jackson-jr — not unlike full Jackson’s “modules” — coincided with the addition of first such extension, “jackson-jr-annotation-support”, which offers optional support for some of basic Jackson annotations for basic detection (and exclusion) of properties; renaming, aliasing, reordering (for output).
Enabling Jackson-annotations Extension for Jackson-jr
To use this extensions you will need to add a dependency in your build file; with Maven you would add:
<dependency>
<groupId>com.fasterxml.jackson.jr</groupId>
<artifactId>jackson-jr-annotation-support</artifactId>
<version>2.12.0</version>
</dependency>
and then you will need to register it with the JSON instance you use:
JSON j = JSON.builder().register(JacksonAnnotationExtension.std)
// add other configuration, if any
.build();
after doing this, support would be enabled for following annotations:
@JsonProperty for basic inclusion, renaming
for basic inclusion, renaming @JsonPropertyOrder for defining specific order of properties when writing
for defining specific order of properties when writing @JsonIgnore / @JsonIgnoreProperties for ignoring specified visible properties
/ for ignoring specified visible properties @JsonAlias for specifying alternate names to accept when reading
for specifying alternate names to accept when reading @JsonAutoDetect for changing default visibility rules for methods (can ignore public getters/setters and force use of annotations) and fields (can make non-public fields visible)
So let’s have a look at some common usage patterns.
Renaming fields, adding aliases
Perhaps the most common use case for annotations is that of renaming properties after auto-detection. Consider, for example, following case:
public class NameSimple {
@JsonProperty("firstName")
@JsonAlias({ "fn" })
protected String _first;
@JsonProperty("lastName")
protected String _last;
protected NameSimple() { }
public NameSimple(String f, String l) {
_first = f;
_last = l;
}
}
in this case we would both indicate use of respective fields to access 2 logical properties (instead of having to add getters and setters) and also rename them, so that compatible JSON would be like
{ "firstName":"Bob", "lastName":"Burger" }
or, considering additional alias we also specified, possibly:
{ "fn":"Bob", "lastName":"Burger" }
(but always serialized with “firstName”: alias only considered during deserialization)
As with regular Jackson, renaming only needs to be done by annotating one of accessors, for example:
public class ValueHolder {
// not needed for setting or getting, name does not matter
private int v; @JsonProperty("value")
public int getVal() { return v; }
// no need to repeat, gets renamed as well
public void setVal(int v) { this.v = v; }
}
Ignoring accessors
Another common use for annotations is to ignore property otherwise implied by existence of a getter or a setter. For example, consider this class, where by default we would see metadata property, but that is not meant to be serialized:
public class Point
public int x;
public int y;
protected XY() { }
public XY(int x, int y) {
this.x = x;
this.y = y;
} // diagnostics for logging, not wanted for serialization
@JsonIgnore
public Metadata getMetadata() {
return calculateMetadata();
}
}
alternative we may sometimes want to use an alternative, @JsonIgnoreProperties , especially if ignoring properties from the superclass:
// parent class has "getMetadata()", let's ignore:
@JsonIgnoreProperties({ "metadata" })
public class Point extends ValueWithMetadata
{
public int x, y;
}
Changing serialization order of properties
By default, Jackson-jr serializes properties in alphabetic order, but sometimes you may want to use different ordering. If so,
@JsonPropertyOrder({ "x", "y", "width", "height" })
public class Rectangle {
public int x, y;
public int width, height;
}
(note: there is no way to try to force “declaration order” — JDK does not guarantee such an ordering is available via Reflection, even for a single class, and trying to combine ordering across fields and methods, super/subclasses would make this futile exercise even if it di)
Changing auto-detection settings
By default, Jackson-jr auto-detects properties based on finding:
public getters and setters
getters and setters public fields (if field-detection enabled)
fields (if field-detection enabled) public “is-getters” (like boolean isEnabled() — if is-getter detection enabled)
but sometimes you might want to either prevent auto-detection altogether for certain kinds of accessors, or alternatively auto-detect accessors with lower visibility. You can use Jackson class annotation @JsonAutoDetect for this purpose. Declaration like this, for example:
@JsonAutoDetect(
setterVisibility = JsonAutoDetect.Visibility.ANY,
getterVisibility = JsonAutoDetect.Visibility.PROTECTED_AND_PUBLIC,
fieldVisibility = JsonAutoDetect.Visibility.NONE
)
public class MyValue {
}
would:
Auto-detect setter methods of any visibility type (even ones declared private ) Auto-detect public , protected and “package protected” getter methors (not just public ) Not auto-detect any fields, no matter what visibility (not even public )
this would reduce the need for per-accessor annotations or having to change accessor visibility levels. Auto-detection may still be overridden by any explicit annotations like @JsonProperty (to include, regardless of visibility) or @JsonIgnore (ignore regardless of visibility).
In addition to per-class annotation, you may also override the default visibility used by Jackson-jr: this is done when building extension itself, before registering it.
For example:
// start with default visibility configuration
JsonAutoDetect.Visibility vis =
JacksonAnnotationExtension.DEFAULT_VISIBILITY;
// change field-visibility:
vis = vis.withFieldVisibility(
JsonAutoDetect.Visibility.PROTECTED_AND_PUBLIC)); // and then build the JSON instance with extension
final JSON aj = JSON.builder()
.register(JacksonAnnotationExtension.builder()
.withVisibility(vis)
.build())
.build();
would use defaults otherwise, but change field auto-detection to allow detection of protected (and “package protected”) fields in addition to public ones. These defaults may be overridden by per-class @JsonAutoDetect settings, which in turn may be overridden by per-accessor annotations.
Limitations
Aside from only supporting a small subset of all Jackson annotations, there are some limitations regarding annotations that are supported:
No mix-in annotation support
Class annotations: only super-class annotations are inherited, super-interface annotations are not
Accessor annotations: no “accessor inheritance” (fields, methods) — when overriding methods, annotations from accessor in base class will not be found
Some annotations are only supported for classes, but not on accessors ( @JsonIgnoreProperties )
) Only renaming aspect of @JsonProperty is supported
Possible future additions
Although there is no plan to support all or even most Jackson annotations (if you want that, consider using “full” Jackson), support may be gradually extended in some areas based on feedback.
Some possible areas of improvement are:
@JsonValue
Consider supporting mix-in annotations
Maybe a subset of @JsonCreator functionality (property-based, explicit) should be supported
functionality (property-based, explicit) should be supported Naming strategies? (existing @JsonNaming cannot be used, unfortunately, as it is part of jackson-databind , not jackson-annotations )
If you have specific extension ideas, wishes, requests, feel free to file an RFE at Jackson-jr Issue Tracker. | https://cowtowncoder.medium.com/using-jackson-annotations-with-jackson-jr-51087850b95e | [] | 2020-12-16 06:10:59.530000+00:00 | ['Json', 'Jackson', 'Java'] |
Kubernetes authentication via GitHub OAuth and Dex | My name is Amet Umerov and I’m a DevOps Engineer at Preply.
Introduction to Kubernetes auth
We use Kubernetes for creating dynamic environments for devs and QA. So we want to provide them access to Kubernetes via Dashboard and CLI. Kubernetes vanilla doesn’t support authentication for kubectl out of the box, unlike OpenShift.
In this configuration example, we use:
dex-k8s-authenticator — a web application for generating kubectl config
Dex — OpenID Connect provider
GitHub — because we use GitHub at our company
Unfortunately, Dex can’t handle groups with Google OIDC, so if you want to use groups, try another provider. Without groups, you can’t create group-based RBAC policies.
Here is a flow of how Kubernetes authorization works:
The Authorization process
The user initiates a login request in the dex-k8s-authenticator ( login.k8s.example.com ) dex-k8s-authenticator redirects the request to Dex ( dex.k8s.example.com ) Dex redirects to the GitHub authorization page GitHub encrypts the corresponding information and passes it back to Dex Dex forwards this information to dex-k8s-authenticator The user gets the OIDC token from GitHub dex-k8s-authenticator adds the token to kubeconfig kubectl passes the token to KubeAPIServer KubeAPIServer returns the result to kubectl The user gets the information from kubectl
Prerequisites
So, we have already installed Kubernetes cluster ( k8s.example.com ) and HELM. Also, we have GitHub with organization name ( super-org ).
If you don’t have HELM, you can easily install it.
Go to the GitHub organization Settings page, ( https://github.com/organizations/super-org/settings/applications ) and create a new Authorized OAuth App:
The GitHub settings page
Fill the fields with your values:
Homepage URL: https://dex.k8s.example.com
Authorization callback URL: https://dex.k8s.example.com/callback
Be careful with links, trailing slashes are important.
Save the Client ID and Client secret generated by GitHub in a safe place (we use Vault for storing our secrets):
Client ID: 1ab2c3d4e5f6g7h8
Client secret: 98z76y54x32w1
Prepare your DNS records for subdomains login.k8s.example.com and dex.k8s.example.com and SSL certificates for Ingress.
Create SSL certificates:
cat <<EOF | kubectl create -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: cert-auth-dex
namespace: kube-system
spec:
secretName: cert-auth-dex
dnsNames:
- dex.k8s.example.com
acme:
config:
- http01:
ingressClass: nginx
domains:
- dex.k8s.example.com
issuerRef:
name: le-clusterissuer
kind: ClusterIssuer
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: cert-auth-login
namespace: kube-system
spec:
secretName: cert-auth-login
dnsNames:
- login.k8s.example.com
acme:
config:
- http01:
ingressClass: nginx
domains:
- login.k8s.example.com
issuerRef:
name: le-clusterissuer
kind: ClusterIssuer
EOF kubectl describe certificates cert-auth-dex -n kube-system
kubectl describe certificates cert-auth-login -n kube-system
Your ClusterIssuer le-clusterissuer should already exist, if you haven’t done it you can easily create it via HELM:
helm install --namespace kube-system -n cert-manager stable/cert-manager
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: le-clusterissuer
namespace: kube-system
spec:
acme:
server:
email: [email protected]
privateKeySecretRef:
name: le-clusterissuer
http01: {}
EOF cat << EOF | kubectl create -f -apiVersion: certmanager.k8s.io/v1alpha1kind: ClusterIssuermetadata:name:namespace: kube-systemspec:acme:server: https://acme-v02.api.letsencrypt.org/directory email:privateKeySecretRef:name:http01: {}EOF
KubeAPIServer setup
You need to provide OIDC configuration for the kubeAPIServer as below and update cluster:
kops edit cluster
...
kubeAPIServer:
anonymousAuth: false
authorizationMode: RBAC
oidcClientID: dex-k8s-authenticator
oidcGroupsClaim: groups
oidcIssuerURL: https://dex.k8s.example.com/
oidcUsernameClaim: email kops update cluster --yes
kops rolling-update cluster --yes
In our case, we use kops for cluster provisioning but it works the same for other clusters.
Dex and dex-k8s-authenticator setup
For connecting Dex you should have a Kubernetes certificate and key. Let’s obtain from the master:
sudo cat /srv/kubernetes/ca.{crt,key}
-----BEGIN CERTIFICATE-----
AAAAAAAAAAABBBBBBBBBBCCCCCC
-----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY-----
DDDDDDDDDDDEEEEEEEEEEFFFFFF
-----END RSA PRIVATE KEY-----
Clone the dex-k8s-authenticator repo:
git clone [email protected]:mintel/dex-k8s-authenticator.git
cd dex-k8s-authenticator/
You can set up a values file very flexible, HELM charts are available on GitHub. Dex will not work with default variables.
Create the values file for Dex:
cat << \EOF > values-dex.yml
global:
deployEnv: prod tls:
certificate: |-
-----BEGIN CERTIFICATE-----
AAAAAAAAAAABBBBBBBBBBCCCCCC
-----END CERTIFICATE-----
key: |-
-----BEGIN RSA PRIVATE KEY-----
DDDDDDDDDDDEEEEEEEEEEFFFFFF
-----END RSA PRIVATE KEY----- ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
path: /
hosts:
- dex.k8s.example.com
tls:
- secretName: cert-auth-dex
hosts:
- dex.k8s.example.com serviceAccount:
create: true
name: dex-auth-sa
issuer: https://dex.k8s.example.com/
storage: #
type: sqlite3
config:
file: /var/dex.db
web:
http: 0.0.0.0:5556
frontend:
theme: "coreos"
issuer: "Example Co"
issuerUrl: "https://example.com"
logoUrl: https://example.com/images/logo-250x25.png
expiry:
signingKeys: "6h"
idTokens: "24h"
logger:
level: debug
format: json
oauth2:
responseTypes: ["code", "token", "id_token"]
skipApprovalScreen: true
connectors:
- type: github
id: github
name: GitHub
config:
clientID: $GITHUB_CLIENT_ID
clientSecret: $GITHUB_CLIENT_SECRET
redirectURI: https://dex.k8s.example.com/callback
orgs:
- name: super-org
teams:
- team-red config: |issuer:storage: # https://github.com/dexidp/dex/issues/798 type: sqlite3config:file: /var/dex.dbweb:http: 0.0.0.0:5556frontend:theme: "coreos"issuer: "Example Co"issuerUrl: "https://example.com"logoUrl: https://example.com/images/logo-250x25.pngexpiry:signingKeys: "6h"idTokens: "24h"logger:level: debugformat: jsonoauth2:responseTypes: ["code", "token", "id_token"]skipApprovalScreen: trueconnectors:- type: githubid: githubname: GitHubconfig:clientID: $GITHUB_CLIENT_IDclientSecret: $GITHUB_CLIENT_SECRETredirectURI:orgs:- name:teams: staticClients:
- id: dex-k8s-authenticator
name: dex-k8s-authenticator
secret: generatedLongRandomPhrase
redirectURIs:
- https://login.k8s.example.com/callback/ envSecrets:
GITHUB_CLIENT_ID: "1ab2c3d4e5f6g7h8"
GITHUB_CLIENT_SECRET: "98z76y54x32w1"
EOF
And for dex-k8s-authenticator:
cat << EOF > values-auth.yml
global:
deployEnv: prod dexK8sAuthenticator:
clusters:
- name: k8s.example.com
short_description: "k8s cluster"
description: "Kubernetes cluster"
issuer: https://dex.k8s.example.com/
k8s_master_uri: https://api.k8s.example.com
client_id: dex-k8s-authenticator
client_secret: generatedLongRandomPhrase
redirect_uri: https://login.k8s.example.com/callback/
k8s_ca_pem: |
-----BEGIN CERTIFICATE-----
AAAAAAAAAAABBBBBBBBBBCCCCCC
-----END CERTIFICATE----- ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
path: /
hosts:
- login.k8s.example.com
tls:
- secretName: cert-auth-login
hosts:
- login.k8s.example.com
EOF
Install Dex and dex-k8s-authenticator:
helm install -n dex --namespace kube-system --values values-dex.yml charts/dex
helm install -n dex-auth --namespace kube-system --values values-auth.yml charts/dex-k8s-authenticator
Check it (Dex should return code 400, and dex-k8s-authenticator — code 200):
curl -sI https://dex.k8s.example.com/callback | head -1
HTTP/2 400 curl -sI https://login.k8s.example.com/ | head -1
HTTP/2 200
RBAC configuration
Create ClusterRole for your group, in our case with read-only permissions:
cat << EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-read-all
rules:
-
apiGroups:
- ""
- apps
- autoscaling
- batch
- extensions
- policy
- rbac.authorization.k8s.io
- storage.k8s.io
resources:
- componentstatuses
- configmaps
- cronjobs
- daemonsets
- deployments
- events
- endpoints
- horizontalpodautoscalers
- ingress
- ingresses
- jobs
- limitranges
- namespaces
- nodes
- pods
- pods/log
- pods/exec
- persistentvolumes
- persistentvolumeclaims
- resourcequotas
- replicasets
- replicationcontrollers
- serviceaccounts
- services
- statefulsets
- storageclasses
- clusterroles
- roles
verbs:
- get
- watch
- list
- nonResourceURLs: ["*"]
verbs:
- get
- watch
- list
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
EOF
Create ClusterRoleBinding configuration:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: dex-cluster-auth
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-read-all
subjects:
kind: Group
name: "super-org:team-red"
EOF
Now you are ready to start testing.
Tests
Go to the login page ( https://login.k8s.example.com ) and sign in with your GitHub account:
Login page
Login page redirected to GitHub
Follow the instructions to create kubectl config
After copy-pasting commands from the login web page you can use kubectl with your cluster:
kubectl get po
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 3d kubectl delete po mypod
Error from server (Forbidden): pods "mypod" is forbidden: User "[email protected]" cannot delete pods in the namespace "default"
And it works! All users with GitHub account in our organization can read resources and exec into pods but have no write permissions.
Stay tuned and subscribe to our blog, we will publish new articles soon :) | https://medium.com/preply-engineering/k8s-auth-a81f59d4dff6 | ['Amet Umerov'] | 2019-06-14 16:23:26.083000+00:00 | ['Dex', 'Github', 'Authentication', 'Kubernetes', 'DevOps'] |
Get Off The Internet | Photo by Harry Knight on Unsplash
“Card games, now that’s a good way to pass the time.”
An older woman observes my group of fifteen high school students huddled in circles playing games. We’ve just spent a week in the Dominican Republic and are enjoying a layover in the Miami airport before our final flight home.
Throughout the week, card games have been the go-to evening pastime. The kids teach each other new games, joke, and converse over the cards. That same spirit of joy passes here in the airport.
I overhear the woman observing my students and return to the email demanding attention on my phone.
The day we left for the Dominican Republic, Pew Research released a new study showing 95% of US teenagers own a smartphone. The same study reports 89% of teenagers are connected to the internet multiple times per day or more.
Teenagers get a bad rap.
Read any study and you’ll believe they’re over-connected and addicted to their phones. While often unreported, adults are just as connected as teens. The difference? Teenagers have a better grip on their boundaries.
In my work with students it’s clear they understand healthy digital boundaries. Adults, not so much. Did I mention I was the one playing on my phone while all the teens were engaged with each other?
Teenagers are “digital natives”. Ubiquitous connection is their first language. Adults are “digital immigrants”. Instead of naturally understanding technology and healthy boundaries, we’ve had to force ourselves to learn both. The technology we’re good with. The boundaries? Not so much.
The alternatives to healthy boundaries are burnout and broken relationships. These are not good alternatives. We need to learn when to use our phones and when to put them down.
In the fall, Apple is releasing a new iOS feature called Screen Time. This will analyze and record app usage and allow for limits. Tell your phone you only want to use Facebook for ten minutes a day and when you hit the limit, Apple will cut you off.
Sure, having our phones remind us to put them down seems silly, but if it helps us set healthier boundaries then I’m all for it.
Don’t want to use Screen Time? Follow Dexter Thomas on Twitter. He routinely tweets to “get off the internet”. It’s a good reminder to put the phone down. | https://justincox.medium.com/get-off-the-internet-38497840083d | ['Justin Cox'] | 2018-07-08 19:37:02.318000+00:00 | ['Internet', 'Technology', 'Social Media', 'Apple', 'Life'] |
Java Tips — Managing arrays. An introduction for create, manipulate… | Java Tips — Managing arrays
An introduction for create, manipulate and use the arrays
Photo by Ilze Lucero on Unsplash
To store in memory more than a variable is possible to use the array; the arrays are a structure that may contain variables defined as primitive types or defined an object.
The syntax of an array declaration is characterized by the use of square brackets [].
Declaration and initialization of an array
To declare an array are required two information: name and type. Instead for the initialization is required the dimension. In details:
Name : the array is itself a variable and needs a name to be declared
: the array is itself a variable and needs a name to be declared Type : every array contains homogeneous variables and so is needed to define of the type of objects that intends to include into the array
: every array contains homogeneous variables and so is needed to define of the type of objects that intends to include into the array Dimension: the dimension of an array specify ahead the maximum number of variables that it can contain
Examples of array declaration
An array can be instantiated using the new command, specifying the number of object that can contain and that need to be positive, or specifying into the braces the list of values. In the first case the array will be initialized with the default values based on type, otherwise in the second case the array will have a dimension equal to several values contained in the list.
Examples of array initialization
Once an array object is created, its length never changes.
Access and scan an array
The positions in the array start with 0 until n-1 where n-1 is the maximum number of an object included in the array. To directly access a position into an array is needed specify the dimension-1 of the object that to get, for example:
Examples of direct access to an array
It is also possible to scan the entire or portion of an array using a for or while loop. During the loop it can be incremented or decremented the index that specifies the position of the variable into the array. In the next picture there are two example of scan (with for and while):
Examples of array scan (for and while loop)
In the next picture there is the console output of execution of the previous code:
Examples of array scan execution (for and while loop)
To avoid exceptions is important to maintain control to the index while scanning the array; the most common exception that can catch is ArrayIndexOutOfBoundsException that specify that the index used to access a position is less 0 or greater then dimension-1.
Array-of-array
The type of an array could be itself an array: in this way it is possible to create a multidimensional array that can be created, initialized and accessed like the array mono dimensional.
The declaration of an array-of-array expect that in the type definition is specified that the type is an array:
Examples of array-of-array declaration
To initialize the array now it is necessary to specify the dimension of the main array and the dimension of the array included into the main array; the dimension can be different and need to respect the rules of the mono dimensional array. The array included into the main array will have all the same dimension. Also in these case once an array object is created, its length never changes.
Examples of array-of-array initialization
To directly access a specified position of an array-of-array is needed to specify two coordinate: one coordinate get the position into the main array and the second coordinate get the position into the array got before.
Example of direct access to a specific position
The execution of this piece of code extracts the character “L” from the previous array declaration.
In the same way it is possible to scan entire or portion of an array-of-array by varying the two (or more) coordinate:
Examples of for and while loop used to scan the array-of-array
Conclusion
The array is the simplest data structure included in the Java SE: it is easy to declare, to initialize and to use.
In contexts where is possible to have the fixed dimension or is the direct access more frequent is recommended the use of array; otherwise in contexts dynamic where the dimension of the array is variable is recommended the Collection of the java.util.* library that implements many feature to simplify the use and the development.
Git: repository | https://medium.com/quick-code/java-tips-managing-arrays-1dbe836d5c | ['Marco Domenico Marino'] | 2019-10-25 08:35:58.751000+00:00 | ['Java', 'Programming Languages', 'Programming', 'Software Development'] |
4 Lessons I Learned From Sending 100 Christmas Cards That Will Make You Happier | 4 Lessons I Learned From Sending 100 Christmas Cards That Will Make You Happier James Ware Follow Dec 23 · 4 min read
“How do you address a Christmas card to the Queen?” Of all the things I’ve Googled in this most surreal of search years, this was perhaps the most unusual yet. Even the Queen will be lonely this Christmas, and for this reason she had made my Christmas card list, along with 99 others.
Normally, I don’t write a single Christmas card but if ever it felt fitting to send end-of-year tidings then it was in 2020. In a year that’s been so unclear, I decided to set myself a nice round target of 100 cards. Writing to whoever came to mind, whether because they had somehow improved my year or because I hadn’t connected with them in a while, the range of recipients is almost certainly the only ever list to contain the names of both my 95-year-old great aunt Auntie Win and David Guetta (unless she’s snuck to Ibiza for F*** Me I’m Famous without telling me). As I sat there writing each card, it became clear to me that there was some real value to this seasonal session, beyond just learning which is the best stretch to relieve hand cramp.
1. The Trouble With Tunnel Vision
Initially, this showed me quite how narrowing our terminal tunnel vision is. I would like to think of myself as a relatively considerate person who cares about others, but the reality is that I get so caught up day to day in my own perceived problems that I rarely think of those outside my immediate inner circle, even those I consider close friends. So, just setting myself up to call people to mind proved an excellent exercise, since it’s not something our minds are going to do of their own accord. And this year more than ever, it was abundantly clear that everyone’s been through a lot.
2. If You Want To Feel Less Lonely, Help Someone Else Feel Less Lonely
This also reinforced another of this year’s revelations: that isolation can be a self-reinforcing experience. When we don’t come into contact with anyone, we’re less likely to contact others. I certainly have experienced that recently when living and working at home on my own, that each day I don’t speak to anyone I become less inclined to reach out and start a conversation. But what the filter of loneliness blocks out is how happy to hear from us other people are. And all we have to do is to reach out to them. In this case, I thought that suddenly contacting some people I hadn’t spoken to all year out of the blue to get their addresses might get a mixed reception. But without exception everyone was delighted to be thought of. It’s certainly made me consider how I could incorporate this more into my regular routine. Sure, I probably won’t write to 100 people on a grey January day, but even sending one message a day to someone I haven’t spoken to recently feels like it would be enormously worthwhile.
3. The Power of Using Two Wonder Words: Thank You
The greatness of gratitude is something everyone has tried to harness in 2020. I’d read earlier in the year about how effective gratitude letters can be and was amazed how satisfying it is even just to write the words “THANK YOU” to someone you’re grateful to. From thanking one of my most supportive friends to the manager of my favourite Premier League soccer team, the very act of simply writing those words is more moving than you would expect.
4. To Feel Better, Do Something For Someone Who Has It Worse
83 cards in, I finally reached the stage where I began running out of ideas for recipients. Scanning my mental contacts, I suddenly thought of my college football groundsman, Barrie, who I hadn’t spoken to in years. There’s no way I would have just sat down to write to him, but completing his card I knew that he would be as happy to hear from me as all the other people combined. Feeling this, I began thinking of who else would most benefit just from receiving a simple card. Searching online, I found addresses for care homes appealing for cards for their residents. Writing to them, knowing that it would remind them that someone in the outside world was thinking of them felt richly rewarding. The video appealing for cards for 101-year-old John in Norfolk nearly moved me to tears. John had been married to his wife for 70 years before she died as a result of contracting Covid-19 earlier in the year. In the video, John admitted stirringly that “I do get lonely”. As I’m sure many messages he received told John, I felt to remind him that, “You are not alone”. This year has shown more than most that that’s all any of us really want to feel. And, as this experience showed me, paradoxically, one of the best ways of feeling that is reaching out to others.
Reach Out For Your Reward
So, if you’re looking to feel festive this crazy Christmas, when it’s rarely felt more difficult to do so, a rewarding recommendation is reaching out to even just one person. It may sound like a cliché, but it really will make your and their day. | https://medium.com/curious/the-4-lessons-i-learned-from-sending-100-christmas-cards-that-will-make-you-happier-ddaaf849302b | ['James Ware'] | 2020-12-23 23:11:05.719000+00:00 | ['Happiness', 'Self Improvement', 'Writing', 'Christmas', 'Mindfulness'] |
Python: The Fastest Way to Find an Item in a List | Let’s go back to our “while loop” vs. “for loop” comparison. Does it matter if the element we are looking for is at the beginning or at the end of the list?
This time, we are looking for number 9702, which is at the very end of our list. Let’s measure the performance:
There is almost no difference. “While loop” is around 22% slower this time (710/578≈1.223). I performed a few more tests (up to a number close to 100 000 000), and the difference was always similar (in the range of 20–30% slower).
So far, the collection of items we wanted to iterate over was limited to the first 10 000 numbers. But what if we don’t know the upper limit? In this case, we can use the count function from the itertools library.
count(start=0, step=1) will start counting numbers from the start parameter, adding the step in each iteration. In my case, I need to change the start parameter to 1, so it works the same as the previous examples.
count works almost the same as the "while loop" that we made at the beginning. How about the speed?
It’s almost the same as the “for loop” version. So count is a good replacement if you need an infinite counter.
A typical solution for iterating over a list of items is to use a list comprehension. But we want to exit the iteration as soon as we find our number, and that’s not easy to do with a list comprehension. It’s a great tool to go over the whole collection, but not in this case.
Let’s see how bad it is:
That’s really bad — it’s a few times slower than other solutions! It takes the same amount of time, no matter if we search for the first or last element. And we can’t use count here.
But using a list comprehension points us in the right direction — we need something that returns the first element it finds and then stops iterating. And that thing is a generator! We can use a generator expression to grab the first element matching our criteria.
The whole code looks very similar to a list comprehension, but we can actually use count . Generator expression will execute only enough code to return the next element. Each time you call next() , it will resume work in the same place where it stopped the last time, grab the next item, return it, and stop again.
It takes almost the same amount of time as the best solution we have found so far. And I find this syntax much easier to read — as long as we don’t put too many if s there!
Generators have the additional benefit of being able to “suspend” and “resume” counting. We can call next() multiple times, and each time we get the next element matching our criteria. If we want to get the first three numbers that can be divided by 42 and 43 - here is how easily we can do this with a generator expression:
Compare it with the “for loop” version:
Let’s benchmark both versions:
Performance-wise, both functions are almost identical. So when would you use one over the other? “For loop” lets you write more complex code. You can’t put nested “if” statements or multiline code with side effects inside a generator expression. But if you only do simple filtering, generators can be much easier to read.
Generator expression combined with next() is a great way to grab one or more elements based on specific criteria. It's memory-efficient, fast, and easy to read - as long as you keep it simple. When the number of "if statements" in the generator expression grows, it becomes much harder to read (and write). | https://medium.com/swlh/python-the-fastest-way-to-find-an-item-in-a-list-19fd950664ec | ['Sebastian Witowski'] | 2020-09-25 08:06:39.373000+00:00 | ['Best Practices', 'Python', 'Performance', 'Tips And Tricks', 'Writing Faster Python'] |
Node.js for eCommerce: Top 10 Advantages Node.js Provides to E-commerce Industry | Node.js for eCommerce: Top 10 Advantages Node.js Provides to E-commerce Industry
Why is Node.js popular in the eCommerce industry?
Do you know the eCommerce business is booming nowadays? The rise of the eCommerce industry has seen drastic growth in the last 4–5 years. This massive growth rate is seen in countries like the USA, UK, Australia, and India & in some Asia pacific region.
You are getting curious how eCommerce industry is related to the node.js, Right?
Let me first explain about Node.js.
What is Node.js?
Node.js gained popularity after it was the first release in 2009. Basically, Node.js event-driven open-source JavaScript framework uses to build scalable applications.
Why is Node.js popular in the eCommerce industry?
There are many JavaScript frameworks available that are used for building an eCommerce website. Have you ever wonder why Node.js is popular with the eCommerce industry? According to stack overflow, the most popular choice for the developing eCommerce website is Node.js. The top reason is the stability and scalability of Node.js that industries are moving towards it.
Some of the large industries like LinkedIn, Netflix, Walmart, Uber, eBay, Walmart, NASA, Paypal, and LinkedIn are used Node.js to build their successful website.
What are the advantages of Node.js in the eCommerce industry and Why you should choose Node.js in the eCommerce industry?
It provides easy scalability One can learn easily Offers high performance Node.js has Hugh support of community people It is open-source Node.js is cross-platform Node.js is Cost effective Secure and Fast performance Easy communication and management Uniformity in data streaming
Top advantages to use Node.js in eCommerce: It provides easy scalability
Scalability and speed these two are the curtail factor while you build an eCommerce platform. Also, the main motto of any eCommerce website is to engage more people. If you want to develop an app for the eCommerce industry, then you have to ensure the ability of application. Node.js offers excellent scalability and speed, even for simple applications.
Top advantages to use Node.js in eCommerce: One can learn easily
Node.js programming language is straightforward to learn. Developers find Node.js quick learning language.
Top advantages to use Node.js in eCommerce: Offers high performance
If you compare Node.js with languages like PHP, it offers high performance with top security. Node.js gets an application for supreme protection and the fastest speed. For the eCommerce industry, which contains lots of products and processes, Node.js developers can quickly build a website. It offers seamless coding and able to handle errors proficiently, which saves time.
Top advantages to use Node.js in eCommerce: Hugh support of community people
Every technology has its own community. But Node.js has Hugh and supportive community over others.According to the stack overflow survey, Node.js kept around 49.9% of respondents with the most popular framework. This large community of Node.js gives solutions for developing eCommerce websites.
Top advantages to use Node.js in eCommerce: open-source
One of the most prominent open-source platform in JavaScript is Node.js. It means one can easily add features without any cost. This leads to the cost-effectiveness of Node.js. Apart from costing, Node.js offers several benefits in eCommerce as one can use more functionality and features.
Top advantages to use Node.js in eCommerce: Cross-platform
Node.js programming language, which can be used for front end and back end development. If you want to develop a website, your developer has to require knowledge of other technology as well. But if you hire Node.js developer for your eCommerce development website, you do not need to hire other developers. I think that is the crucial reason Node.js becomes the best option to build any eCommerce project.
Top advantages to use Node.js in eCommerce: Cost-effective
There are mainly two reasons that Node.js is Cost-effective. First is, we all know that hiring a full-stack developer is cheaper compare to hire separate front end and back end, developer. As we discussed above, Node.js can be used in both as front end and back end, which gives Cost-effective solutions if you it with another language. Secondly, Node.js is an open-source framework that makes it available without involving any cost. These are the key reasons which make Node.js budget-friendly.
Top advantages to use Node.js in eCommerce: Secure and Fast performance
Node.js is known for its safe, secure, and fast performance. The reason for this quick speed is, it is developed on Google Chrome’s Version 8. This javascript structure deals with the security factor incredibly. This permits a burden-free expansion of payment gateway in your eCommerce site. A mainstream case of a first-class organization that is utilizing Nodejs to help their overwhelming web traffic is Paypal.
Top advantages to use Node.js in eCommerce: Easy communication and management
The furious challenge in the eCommerce business has gone a long way past the offering of the product. In contemporary occasions, client administrations hold a significant spot in fulfilling and keeping clients in this industry. Node.js doesn’t linger behind in assisting in this worry also. Because of its capacity to deal with both the customer side programming and server-side programming, it turns out to be very simple to interface with customers always with no issue.
Top advantages to use Node.js in eCommerce: Uniformity in data streaming
The transferring of video works at a profoundly quick pace in Node.js programming. Do you know how? The explanation for this is the way wherein its architecture has been created. It permits moving HTTP demands and comparing results through a data stream.
This makes the handling of records amazingly simple. In a layman’s language, Node.js codes will decrease your bounce rates as your clients will have the option to settle on a moment buy choice. This is one of those reasons that states Node.js as the best structure for eCommerce development.
Final Thoughts:
Without any doubt, if you are a new business startup or you are in the early stage of eCommerce website development, Node.js is the best platform. Node.js is a robust, fast, scalable, and secure solution for your eCommerce development with a budget-friendly option.
If you plan to develop an eCommerce website and have a project in your mind, don’t hesitate! Opt-out top Node.js developers and avail top-notch Node.js development services to build your project.
I explained the top 10 advantages to use Node.js in the eCommerce industry. Some of the large enterprises are also using Node.js for building their websites also Node.js is the best tool for eCommerce development. Hence, all the points justify that Node.js is the best language for the eCommerce industry.
Happy Reading & Keep Learning!! | https://medium.com/quick-code/top-10-advantages-node-js-provides-to-e-commerce-industry-best-tool-to-build-an-ecommerce-website-a34578615bea | ['Nisha Vaghela'] | 2020-07-17 18:11:51.948000+00:00 | ['Nodejs', 'JavaScript', 'Ecommerce', 'Node', 'Development'] |
State of the Paycheck Protection Program Loans — A Checkup | Were the largest of the PPP loans granted fairly?
Photo by Bermix Studio on Unsplash
The Paycheck Protection Program was initiated in April 2020. Administered by the Small Business Association (SBA), these forgivable loans were meant to relieve small businesses in America some of the potentially devastating strain imparted by the onset of the COVID-19 pandemic. Congress appropriated $649 billion for this program, and over 500 million loans were granted, at an average amount of $111,000.
As you can imagine, this program was enormously popular and there was an immediate feeding frenzy to snap up these loans as soon as the program opened. After all, businesses were suffering and desperate. And who wouldn’t want what will ostensibly become free money?
My physical therapy practice received one of these loans, so I am familiar with the application process and what are meant to be the rules for forgiveness of the loan. The amount of loan money that a business could apply for would be calculated in a couple of different ways, but generally it was to be based upon the average monthly payroll costs over the previous year. In order to qualify to have the loan forgiven, the business was supposed to spend at least 60% of the funds on payroll costs and the rest on rent and utilities, during the course of a certain coverage period of either 8 or 24 weeks after obtaining the loan. The rules for loan forgiveness are still being hashed out in congress, but roughly speaking, if after that coverage period the business can show documentation that the loan money was spend appropriately, that is, that it actually went toward protecting paychecks, the the loan will be forgiven.
In the rush to push out the money to businesses that needed it, a lot of concern arose around whether these loans were applied for legitimately and handed out fairly. Clearly they were not in every case, as described in stories like this. So when I found that the SBA and the U.S. Treasury released their data from the PPP loans that had been granted, I felt it was my duty to dig in and see whether any unsavory patterns emerged.
My analysis focused on the set of all 662,515 loans of $150,000 or more that were granted between April 3 and August 8, 2020. For each of these loans, the amount granted was lumped into one of 5 categories. I decided to examine whether we could reliably guess which companies received loans of $1 million or more, based upon the information collected here.
The features in the data that I decided to work with to make these predictions were the state where the business was located, the type of business, the number of jobs reported as being supported by the loan, the date the loan was approved, the lender (grouped into three categories according to how many loans each lender administered), and the NAICS code category. NAICS codes are 7-digit codes indicating the type of industry a business is in. Since there were thousands of these, I simplified these into their first digits. The digits 1 through 9 now represented broader categories that included many different subtypes of industry, kind of like the Dewey Decimal System. Okay, I may be dating myself with that mention.
I was a bit worried going in, that including the feature “Jobs Reported” would constitute leakage of information from features into the model’s prediction structure. After all, one of the main pieces of information that went onto the PPP loan application was the number of jobs, along with total monthly payroll calculated for these jobs. However, after examining the relationships between the features and the data labels, I decided that the relationship between Jobs Reported and the range of the loan funded was not as straightforward as we would suspect. See Figure 1.
Figure 1
Most companies receiving loans in the smallest category reported trying to support less than 50 jobs. However, even in this category, there were plenty of outliers reporting all the way up to 500 jobs (the maximum allowed to be considered a small business for the purpose of this type of loan). On the other hand, while most of the companies that received loans larger than $5 million reported over 300 jobs, there were some in this category (and all the others) that actually reported 0 jobs! I have not been able to discover what anomaly in the loan process allowed this to happen, or what it meant.
After the data wrangling was done, I split the large data frame into training, validation, and test sets. 87.6% of the companies in the data frame did not receive loans greater than $1 million, so this was established as the baseline accuracy level for our model’s predicting capabilities to beat.
I then fit a logistic regression model to the data, using one-hot encoding for the categorical data, simple imputing for missing values, and standard scaling for comparability of model coefficients. The model that emerged produced an accuracy score of 92.5% on both the training and validation data. The precision for discerning true positives was 83%, and sensitivity (or recall) for picking up on these extra large loans was only 49%.
In order to interpret this model, we’d like to look at the size and sign of the coefficients. We should keep in mind, however, that we should be careful about the meaning of “prediction” in this case. The PPP program is a short-lived phenomenon, so we’re not really that interested in being able to predict how much a company will squeeze out of the federal government in the future at the taxpayer’s expense. While this would be nice to know, these data are more likely to be able to help us appreciate the fairness of the program as it has unfolded thus far. As mentioned, these loans were applied for and dolled out according to the number of FTE’s — “full time equivalents” — a company had on payroll. We know therefore, that if the world is as it should be, jobs reported should be by far the most influential factor in how big a loan a company gets. We are looking for whether there were any other factors that affected the outcome of these loans in a way that we would not want to see. Figure 2 shows the coefficients from the logistic regression model that were larger (in magnitude) than 0.1.
Figure 2
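One way to pull a coefficient list like this out of the pipeline sketched above (the step names are make_pipeline's lowercase defaults, and get_feature_names_out requires a reasonably recent scikit-learn):

```python
import pandas as pd

ct = logreg.named_steps["columntransformer"]      # fitted ColumnTransformer
lr = logreg.named_steps["logisticregression"]     # fitted LogisticRegression

coefs = pd.Series(lr.coef_[0], index=ct.get_feature_names_out())
big = coefs[coefs.abs() > 0.1].sort_values()
# Positive values push toward "> $1 million"; negative values push away from it.
print(big)
```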
As we would expect and hope, the number of jobs reported had by far the greatest influence on how large a loan a company received, according to this logistic regression model.
Other than the number of jobs reported, a business was also slightly more likely to get a loan of > $1 million if it was in an industry that falls into the NAICS category starting with 2 (energy, mining, construction, contractors), 5 (media, insurance, finance and other services), 3 (manufacturing of all types), or 4 (wholesalers, retailers, transportation providers and warehousing/storage), or if it was a non-profit. Honorable mention went to any company hailing from the state of New York.
Conversely, the strongest feature other than jobs reported was one that seemed to hurt a company’s chances of getting a very large loan, and that was operating in the entertainment or food service industry. In addition, it seems that as dates marched on, very large loans were less likely to be handed out. Of course, this could have had to do with who was applying for loans at each point in time and the features of these companies.
While this model fit fairly well and was intuitive to interpret, it is possible that there were some non-linear relationships between the features and the size of the loans distributed. I also would have liked to see better operating characteristics: while I liked the interpretability of the logistic regression model, I didn't like its low recall. We want good recall so that we can detect the cases where loans of > $1 million were granted, and understand the factors that seemed to be related to that outcome.
On a quest for better sensitivity to extra large loans, I went on to fit a random forest model as well. This model produced an accuracy of 98.8% on the training data and 92.6% on the validation data. It had somewhat worse precision than the logistic regression model, at 73%, but noticeably better sensitivity, at 63%.
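A comparable random forest sketch, reusing the preprocessing and metric imports from the pipeline above (the hyperparameters are illustrative guesses, not the author's):

```python
from sklearn.ensemble import RandomForestClassifier

# Reusing `preprocess` refits it on the same training data, which is harmless here.
rf = make_pipeline(
    preprocess,
    RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42),
)
rf.fit(train[features], train[TARGET])

for name, frame in [("train", train), ("validation", val)]:
    pred = rf.predict(frame[features])
    print(name,
          "accuracy:", round(accuracy_score(frame[TARGET], pred), 3),
          "precision:", round(precision_score(frame[TARGET], pred), 3),
          "recall:", round(recall_score(frame[TARGET], pred), 3))
```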
Again, what I was most interested in, as far as interpretation goes, was making sure that Jobs Reported was the most influential feature on the outcome of loan size. Of course, we are also curious as to what other features seemed to influence the loan outcome, and in what way. Figure 3 shows the permutation importances of the features in this model, i.e. how much the model's performance drops when we randomly shuffle the values of a given variable, one variable at a time, and re-score the already-fitted model.
Figure 3
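A sketch of how such permutation importances can be computed on the held-out set with scikit-learn, continuing from the random forest above (note that the fitted model is re-scored after shuffling each column, not refit):

```python
import pandas as pd
from sklearn.inspection import permutation_importance

perm = permutation_importance(
    rf, val[features], val[TARGET],
    scoring="accuracy", n_repeats=10, random_state=42, n_jobs=-1,
)
# Larger values mean a bigger drop in accuracy when that column is shuffled.
importances = pd.Series(perm.importances_mean, index=features).sort_values(ascending=False)
print(importances)
```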
The good news is that, again, Jobs Reported had by far the most influence on the outcome of these loans. At the same time, the NAICS category, i.e. the type of industry a business was in, also seemed to have a non-negligible effect, as did the date that the loan was approved.
1 = Farming, hunting, fishing; 2 = Energy, mining, construction; 3 = Manufacturing; 4 = Wholesalers, retailers, transportation; 5 = Services, media, finance; 6 = Schools, healthcare, community groups; 7 = Sports, culture, restaurants; 8 = Miscellaneous services and repairs; 9 = Government.
The first thing to note is that across all NAICS categories, the probability of obtaining an extra large loan is not monotonically increasing with Jobs Reported. Rather, each category again has a certain number of companies that reported 0 jobs. This may be the most problematic finding of all, and I am apparently not the first person to notice, as you can see by reading this and this.
In addition, companies in certain industry categories obtained smaller loans for the number of jobs they reported than others did. This interaction effect is most dramatic in the "7" category - sports teams, cultural entities and restaurants - in which a company was unlikely to have received a loan of greater than $1 million unless it reported the maximum number of jobs, 500. Contrast that with the "2" category - energy, mining and contractors - in which businesses received these extra large loans at a rate of over 70% once they reported 97 jobs.
We know that we are living in economic times that are unprecedented for our generation and at least one or two before us. The role of the federal government is to step in quickly and dramatically to bolster the economy when disaster hits, and the PPP program was meant to do just that. At the same time, pumping so much money into small businesses so quickly is bound to come at a cost when it comes to the ability to oversee the proper distribution of that money.
The code for this analysis can be found here. | https://debbiecohen-22419.medium.com/state-of-the-paycheck-protection-program-loans-a-checkup-10289751ee25 | ['Debbie Cohen'] | 2020-10-23 16:19:21.193000+00:00 | ['Small Business Loans', 'Python', 'Classification Models', 'Payroll Protection'] |
Depression² | Depression²
How to fight a one-two punch right now
Photo by Kelly Sikkema on Unsplash
On a good day, I struggle with depression. Maybe a little less; perhaps it’s more annoying those days and less utterly disruptive. But it’s there, all the same. Now, add in a global pandemic and orders restricting my ability to go out and socialize with my lifeline friends and places, and we’ve got quite a problem on our hands. I’m not doing well some days, if I’m going to be completely honest.
I’m not alone in this. Citizens of the world — even healthy, neurotypical people — are reporting an increase in depression and anxiety symptoms. But what’s to be done? Not everyone has access to a psychiatrist or therapist in these difficult times — and those resources alone are not a panacea that will magically solve an international mental health crisis.
There are small things we can all do in tough, unprecedented times that act as pieces of the puzzle needed to feel a little better, a little more stable, a little more yourself.
Build a routine and stick to it
Set an alarm and get up at the same time every day (or at least every work day; I won’t begrudge anyone sleeping in on their days off). Get up, get dressed, eat breakfast. Try to find some semblance of the normal life you had before and imitate it. Work the same hours you always did (whether you still have a job or you’re re-tiling the guest bathroom in your spare time). Eat meals at around the same time every day. Make time for exercise. Set aside time to engage in the hobbies that you love. Schedule time to text or call or video chat with friends and family. Make a daily routine and schedule and stick to it.
Mind your sleep habits
Maintain a consistent bedtime. Try to get up at around the same time each day. Get those 7–9 hours of sleep if you can. Avoid the dangerous mire of taking long naps during the day — they can exacerbate existing sleep pattern problems. Sleep in a quiet, cool, dark room. Try not to use your phone in bed (that blue light can keep your brain awake late into the night and affect your sleep quality). Turn on Do Not Disturb and settle in for a good night's rest.
Exercise regularly
This is a great way to get a quick mood boost. Take walks while maintaining a safe distance, or try bodyweight exercises at home like squats or a yoga practice. There are plenty of free workout videos on YouTube or — if you have the money — consider fitness apps to help you plan a workout. Some gyms are even offering home workout routine videos for folks who are stuck at home.
Reach out to people
This is the time to strengthen your connections; you need them right now. Lacking face-to-face contact with our loved ones can do deep damage to people — especially extroverts. No, a FaceTime call isn’t the same as happy hour with your friends, but it’ll have to do for now. Better to have a good enough solution than no solution at all.
Get help if you need it
There’s no shame in therapy or medication and there’s nothing wrong with needing financial help (can a family member loan you some money for the short term?). Call your doctor. Look into telemedicine or a therapy app. Lean on your friends and family, but be sure to ask first if they have the energy to listen to you talk about what worries you — they may be going through things on their own end that they haven’t divulged to you and it’s not fair to lay your problems onto someone who can’t currently deal with their own first.
The new normal
Things right now are more than a little strange. And who knows when we’ll get back to our old lives — if we ever can revert to a pre-COVID world, that is. And honestly? We probably can’t. This crisis has laid bare the failings of the American healthcare system and to want to go back to “normal” means a return to a failed concept that regularly kills people from neglect.
For now, take it one day at a time. Prioritize what you need to do and be kind to yourself if you’re struggling to take care of you and your family right now. Most of the population alive today has never been through anything quite like this. We’re all a little uncertain how to adjust to such rapidly changing times.
And that’s okay. We’re not really sure what the right way to handle this collective trauma is. You simply have to do the best you can with what you have for now while holding out hope for a better (eventual) tomorrow. | https://medium.com/invisible-illness/depression%C2%B2-2c153415aa65 | ['Deidre Delpino Dykes'] | 2020-04-20 16:50:50.815000+00:00 | ['Wellness', 'Mental Illness', 'Covid 19', 'Self Care', 'Depression'] |
Having a Spiritual Practice Can Help You Deal with Coronavirus-Related Stress | Having a Spiritual Practice Can Help You Deal with Coronavirus-Related Stress
How turning to spiritual practices can become an ally in this trying time.
Photo by Andre Moura on Pexels
When I ask people what first words or phrases come to mind when someone mentions the word “spirituality,” most of them would answer God, Jesus, pray, and religion. This is not surprising from people coming from the only Christian nation in Asia — a whopping 90% of the Philippine population is Christian.
Spirituality has been making waves in subcultures in recent years, and the rise in popularity of the ancient Hindu practices of yoga and meditation has contributed to this. Yoga is the 4th fastest-growing industry in the US; around 40 million Americans practice it, along with about 400 million people worldwide. You can see this tangibly in the yoga and meditation communities. If a stranger wearing Japa Mala beads sees another sporting something of similar symbolism, chances are, they would smile at each other, maybe put their hands together in prayer to the chest, and give a slight bow. I've seen this happen countless times regardless of race, culture, gender, and religion.
But what does spirituality or being spiritual really mean?
Spirituality can be so many different things to people. Commonly, people would say that it is something experienced when participating in organized religion or ritual, or being in a place of power such as a church, temple, or meditation room. Praying or chanting as a group can bring about a spiritual, almost mystic, experience that closely connects to “being one with the world” and “having tremendous faith and trust.” This can be a means of solace, support, and peace especially in difficult times.
Many would share, too, that it can be felt through non-religious events such as getting in touch with the core of their being or the Divine Self through practices such as private prayer, yoga, meditation, art, or a walk in nature (Psychology Today, n.d.).
Discussions on spirituality date back in time, from philosophical and existential questions thousands of years ago. We all have big questions that science tries to answer, but for some reason, the answers aren’t enough: Where did we come from? Is there life after death? Who is the Observer? (You’ll get that last one if you are a fan of quantum physics.) Most people believe that there is something or someone far greater than us humans that is incomprehensible, but somehow, we are able to experience it.
Hungarian-American psychologist and author of the bestseller Flow: The Psychology of Optimal Experience Mihaly Csikszentmihalyi (don’t worry, I also cannot pronounce his last name) became famous for his definition of flow, i.e. “a highly focused mental state conducive to productivity, where every action, movement, and thought follows inevitably from the previous one, like playing jazz.” In essence, being in a state of flow is meditative, effortless, and could even be spiritual. This is common for people who experience inner peace and a sense of calm during or after doing something creative or engaging in an immersive activity without distractions.
Psychologists such as Carl Jung, Rollo May, and Viktor Frankl recognized spirituality as an essential part of psychological well-being. Experiencing an aha moment, for example, can be classified as a spiritual event in itself, when things suddenly click and fit together, like pieces of a puzzle. Finding clarity in the meaning and purpose of one's life is also considered a spiritual experience. Another example is feeling a deep sense of love, connection, and compassion toward people, animals, and all living beings. These are all said to heighten our awareness of being and existence — an enlightenment, an awakening.
Dr. Stephen Diamond, writer on Psychology Today says, “In psychology, spirituality is best characterized by psychological growth, creativity, consciousness, and emotional maturation, and entails the capacity to see life as it is — the good, the bad, and everything in between — and to still love life nonetheless.” Facing one’s shadow, embracing reality fully and without judgment, and/or learning to forgive are great examples of spiritual experiences.
So what role does spirituality play in supporting our mental health during the time of COVID-19?
It doesn't matter what your vehicle might be for arriving at your spiritual destination — what's important to note is that turning to one's faith has been shown empirically to help people deal with illnesses and tragedies such as losing a loved one, going through unexpected major life changes, surviving natural disasters, and, yes, in this case, having to deal with a global pandemic.
When something that is not within our control (or anyone’s for that matter) enters our lives, we turn to faith.
Whether that means meditating for hours, praying the rosary, hitting your Tibetan singing bowls, doing yoga first thing in the morning, saging your entire house, turning to your art or music, laying down all of your crystals under the full moon, or just listening to your breath, know that these are all valid spiritual resources and coping mechanisms. You are connecting to a source of power that is relevant and meaningful to you, and no one else.
Having a spiritual practice helps us adjust to what is uncontrollable and paves the way to these things:
Acceptance.
What we are going through globally is not normal. Our days are hazy, the news has nothing much to keep us going. Most of us are in limbo. We learn to accept this reality more quickly with spiritual grounding, letting us find our new normal. Then we sit, pause, and reflect.
Some good journal prompts or conversation starters during this time can revolve around admitting your current state. Someone asked me last week, “What about your life right now makes you feel miserable?” I had much to say, but as I was saying them, I also found myself practicing gratitude. This helped me process my emotions and let go of them almost immediately after.
Communicating your feelings honestly, whether written or otherwise, can help you face your truth with a clearer perspective. It can be messy at first, but after you sort out the noise, you’ll have 20/20 vision and it will be easier to shift your mindset.
Faith.
“Relax, nothing is under control.”
I love this quote, commonly used in Zen cartoons. To most of us who are control freaks, this can be alarming, but a dash of spiritual practice can make this philosophy delectable.
My meditation teacher, Eileen Tupaz, puts it nicely, “Bow down to the circumstances we cannot control and let them pass.”
Our brain is wired to look for patterns that may or may not exist. In this case, since we are all experiencing COVID-19 for the first time, we are only just beginning to form these new medical and situational patterns. Forcing a concrete definition of things based on non-empirical data (also known as ungrounded and uneducated speculation) can cause major stress.
When we let go of our control over the things we truly cannot have a hand in, we also release struggle, tension, anger, and disappointment, among many others. This is where faith comes in. Faith is the complete trust and confidence in someone or something much larger than us.
However, having faith does not mean we should also let go of things we can control, which leads me to the third one:
Motivation.
Once the distinction between things you can and cannot control becomes clear, the next thing is to set your focus on the things you can control.
While I understand the objective of the articles going around that talk about letting go of productivity goals, I also believe that listing and acting on things to do is a great coping mechanism. Awareness and acceptance of what we are experiencing can, on their own, send us into a downward spiral and become counterproductive, so finding the motivation to act and rise above depression, grief, and anxiety can help our mental health tremendously. | https://rachbonifacio.medium.com/having-a-spiritual-practice-can-help-you-deal-with-coronavirus-related-stress-f31ff1e1f522 | ['Rachel Bonifacio'] | 2020-04-01 10:55:50.801000+00:00 | ['Mental Health', 'Spirituality', 'Covid 19', 'Self', 'Spiritual Growth'] |
My Breakfast Lies To Me, And I Like It | I’m the King of Wishful Eating.
Somewhere around the beginning of the year I decided to stop eating pasta every day and give this low carb shit a try. I was committed to calling it out on its hooey and returning to the diet of my choice shortly thereafter. The issue: It’s actually lovely.
I feel like a normal person without constant stomach problems and the visual appearance of perhaps swallowing a whole honeydew melon when I keep my carb intake at a minimum. It’s been an adjustment, but it’s hard to argue with all this abundant energy and lack of discomfort.
It’s a cruel joke, especially for someone who enjoys breakfast as much as I do. If you’ll notice, many of the best breakfast foods (cereal, pancakes, bagels, english muffins, toast, waffles) are comprised entirely of carbs. I’ve gone the egg route in order to circumvent things but my cholesterol and I are starting to feel weird about our relationship. So what’s a breakfast-loving human person who feels better when she doesn’t eat carbs supposed to do when she gets hungry around 8am? For extra fun, I’m lactose intolerant, too.
I'll tell you how I've cracked it, how I've survived: lies. It's all a falsehood, an act of deception that everyone involved is willing to accept. I've learned the value of lies, they've kept me alive. That's not true, that's a line from The Three Musketeers with Kiefer Sutherland and Rebecca De Mornay, but it fits here.
Your kitchen is a movie studio of special effects and bullshit when you know how to use it correctly, and I’m happy to help others on their journey toward weaving absolute yarns with the food they eat all in the name of health and proper fiber intake. The highest quality lies are the lies we’ll believe, and my current favorite breakfast is the best lie I’ve ever told myself.
It begins with a Norwegian crisp bread substance I purchased at Trader Joe’s. It’s made of nuts, seeds, and the will to live. I top this with vegan cream cheese, also purchased at Trader Joe’s because there are better versions of it available but I don’t have a car. And lastly, the real deal-sealer, is topping this prison ration with Everything Bagel Seasoning. Then you consume it and remark on how the debris left behind while eating is actually pretty similar to the real deal. You don’t think this is going to be good, you don’t think it’s going to be satisfying, but it’s a magic trick on a rectangle and I love it.
I believe the lie. I believe I’m eating a bagel with its cream cheesy savory seasoning goodness all while consuming no more than ten net grams of carbs total. The breakfast item is so tasty sometimes I wonder if I’ve done something wrong, as if I’ve broken a law of physics in some way. Am I proud of the depths I’ve gone to in order to convince myself bagels are still a part of my life? Yes. Do I still want bagels every day? Also yes. I have found an edible tall tale that’s so good I am willing to dance this manipulative tango.
I hope in sharing this list of ingredients (I don’t think it counts as a recipe if you can remember it all in your head), I am able to bring breakfast joy to the lives of others similarly encumbered by whatever the actual hell gluten does to our innards. I hope I have brightened the day of those sick to death of a random vegetable scramble before the bell peppers go bad. I hope you can all enjoy this tasty, somewhat nutritious, and comically easy foray into carb reduction. This is the best breakfast lies can buy. Believe me. | https://shanisilver.medium.com/my-breakfast-lies-to-me-and-i-like-it-c73a2735010f | ['Shani Silver'] | 2019-07-28 14:37:57.849000+00:00 | ['Recipe', 'Writing', 'Food', 'Life Hacking', 'Humor'] |
Not Your Typical “Mom” | How an under-the-radar series evolved from a typical network sitcom into one of the boldest and most nuanced depictions of addiction and recovery in television history.
The cast of “Mom” at PaleyFest 2018 (from left to right: Jaime Pressly, Anna Faris, Allison Janney, Mimi Kennedy, and Beth Hall)
Like all people, my personal life has been affected by drug and alcohol abuse. I have beloved family members who have engaged in lifelong battles with it and friends that have come precipitously close to the edge of disaster. But I have a less common vantage point through my work as a clinical psychologist.
In my two years of training at the VA Medical Center, I worked with Veterans struggling with substances every day. In the pain management clinic, I worked with Veterans who became addicted to prescription opiates after years of struggling with chronic pain from service-related injuries. In the women’s clinic, I worked with female Veterans who abused alcohol to numb the pain associated with sexual trauma. In the HIV clinic, I worked with gay male Veterans who used crystal meth as an antidote to the shame of the identities they were forced to hide during their service. The paths to addiction and the drugs of choice frequently differed, but the resulting destruction was remarkably similar.
Countless films and television series have touched on addiction with differing success. Two of the best films that come to mind are Traffic and Requiem for a Dream, both of which use ensemble casts and intersecting plot lines to examine multiple aspects of the impact of addiction. Other highlights are harrowing character studies like Trainspotting, Thirteen, The Basketball Diaries, Flight, and Leaving Las Vegas. Portraits of addiction are common on the small screen as well. The pilot of Murphy Brown featured the main character returning from a stint at the Betty Ford Center. Don Draper’s alcoholism was a constant presence from start to finish on Mad Men. And recent streaming and premium cable hits like Orange is the New Black, Nurse Jackie, and Shameless delve deeply into the subject matter. But arguably no long-running series has ever put addiction and recovery front and center on an ongoing basis until Mom.
Promotional image for “Mom” (Copyright CBS)
The Evolution of “Mom”
Mom did not start out with addiction and recovery front and center. When it premiered on CBS in the fall of 2013, it was marketed as a family comedy from super producer Chuck Lorre (Two and a Half Men, The Big Bang Theory). The first season centered on Christy Plunkett (House Bunny and Scary Movie comedienne Anna Faris), a 35-year-old waitress at an upscale restaurant in Napa, California juggling her children Violet (Sadie Calvano) and Roscoe (Blake Garrett Rosenthal), her pot-smoking ex-husband Baxter (Matt L. Jones), her affair with her married boss Gabriel (Nate Corddry), and the return of her estranged mother Bonnie (7-time Emmy winner and reigning Best Supporting Actress Oscar winner Allison Janney). The fact that Christy and Bonnie were drug addicts in early recovery was mentioned frequently but not necessarily central, as most of the first season focused on Christy’s dating life, Violet’s unplanned pregnancy, and Bonnie’s ire at Christy’s desire to get to know her biological father.
Fast forward to its current season (its fifth) and things look very different. The show focuses equally on Christy and Bonnie and the supporting cast is comprised almost entirely of their female support network with whom they work the Alcoholics Anonymous program. These women are Marjorie (Dharma & Greg’s Mimi Kennedy), a woman with decades of sobriety who serves as a sponsor to many; Jill, a divorced socialite who is forced to confront harsh new realities as she struggles to maintain her sobriety; and Wendy (Beth Hall), a meek nurse who finally comes into her own when she gets clean. The plot arcs this season have focused on the relapse of Jill and Bonnie’s half-brother Ray (Leonard Roberts), the transfer of addictions from substances to junk food and caffeine, the futile search for quick fixes and miracle cures for the disease of addiction, and how the codependency fostered by their particular twelve-step program threatens the survival of Bonnie and Christy’s new romantic relationships, with a paraplegic former stunt man (Prison Break’s William Fichtner) and his estranged brother (Wings’ Stephen Weber).
The shift in the show occurred fairly gradually over the second and third seasons, as nearly everyone besides Christy and Bonnie exited the show. Christy's daughter Violet turned 18 and went off to make her own destructive decisions. Her son Roscoe went to live with her ex-husband Baxter, who had by now cleaned up his act by marrying an affluent, uptight woman (Less Than Perfect's Sara Rue). Christy ended her affair with her boss and the restaurant changed management. With these changes, the child-rearing element and the workplace antics were all but eliminated and the focus shifted to addiction and recovery.
What makes Mom unique isn’t merely that it’s a multi-camera network sitcom with female addicts front and center — although that in and of itself is quite an anomaly. What truly makes it worthy of greater attention and acclaim than it has received is how accurately it does it. Over the course of the five seasons, Bonnie and Jill relapsed and retained sobriety. Two key recurring characters abandoned recovery for good and returned to risky behaviors; Regina (Oscar winner Octavia Spencer) because she felt God was all she needed and Ray because he was just too deeply in denial about his affliction. Christy became a sponsor for the first time and had to cope with her sponsee returning to drug use and fatally overdosing. Christy became reunited with her biological father only to have him die of a heart attack. We learned about the rape that sent Christy spiraling into drug use and the childhood abandonment that turned Bonnie into an ice-cold monster. (In fact, the season three premiere in which we meet Bonnie’s mother, played by Oscar winner Ellen Burstyn, is a series highlight). Marjorie has struggled to reconnect with her estranged son, had her faith shaken by the relapse of her own sponsor, and had her life upended by her new husband’s massive stroke. Christy and Bonnie even became homeless for a time.
But the transition to bolder subject matter hasn’t robbed the show of its light. It remains genuinely funny and has celebrated profound growth in its characters. These women have made amends, put their lives back on track, and found love. Perhaps most importantly, they have found each other and given each other the supportive family system they were all lacking. The juxtaposition of tragedy and humor may seem jarring, but it is the only way the show can be authentic. This is how recovery works. It’s a messy, nonlinear process filled with enormous triumphs and tragic setbacks.
Mom’s willingness to tackle dark subject matter alone doesn’t necessarily set it apart in sitcom history. Shows like All in the Family, Maude, M*A*S*H, The Golden Girls, Family Ties, Roseanne, and others have been interrupting the laugh track for socially important drama for nearly a half century. But this trend has faded in recent years, as series have gravitated toward one of three categories — the half hour sitcom that is 100% comedy, the hourlong drama that is 100% serious, and the occasional “dramedy,” which is more often than not just an hourlong drama that has elements of humor. The half hour sitcom that frequently depicts tragedy is an anomaly in the current landscape.
Although it has never garnered the media attention of co-creator Chuck Lorre's other recent CBS hits like Two and a Half Men and The Big Bang Theory, it has nevertheless become a steady performer for the network. It holds its own in the ratings in a tough time slot (Thursdays at 9pm) where it airs opposite the Will & Grace revival and ABC's buzzy block of Shonda Rhimes dramas. It has also won some awards, although these are entirely confined to Janney's performance. The esteemed actress won two of her seven Emmys in the category of Outstanding Supporting Actress in a Comedy Series for the show and has since been promoted to the category of Outstanding Lead Actress. Janney is undeniably brilliant. It's hard to believe that this is her first time doing a multi-camera comedy. Although, after seven seasons on The West Wing, several stints on Broadway, and a slew of feature films, it would be foolish to assume there is any medium she can't excel at. But the attention heaped on Janney, however deserved, distracts from the fact that this truly is an ensemble show. Anna Faris turns in a truly impressive performance in each and every episode, while Mimi Kennedy and Jaime Pressly have done award-worthy work of their own. (And I have no doubt Beth Hall could as well if they gave her some meatier material.) Then there's the writing. Although the jokes don't always exactly hit the bullseye and some episodes — particularly those about the main characters' love life — can be a tad run-of-the-mill, the remarkably skillful balance of the comic and tragic is something truly special and worthy of admiration. In my opinion, the combination of the pitch-perfect acting and brave writing makes it one of the best and most important comedies currently on television.
My Afternoon with “Mom”
Allison Janney uses Anna Faris’s back to sign an autograph at PaleyFest 2018
A couple of weeks ago, I had the opportunity to spend an afternoon in the presence of the cast and co-creator of Mom at PaleyFest, the Paley Center for Media’s annual festival that fetes various television series. After screening an upcoming episode, the five primary cast members and co-creator came out for a Q&A that lasted over an hour. There were two primary topics of discussion. One, which I have covered at length here, is how the show evolved to focus on addiction and recovery. The other, which I barely touched on here, is how unique it is that the show is a woman-dominated affair. The beautiful sisterhood that exists among the ensemble was evident throughout and they shared a variety of moving and humorous anecdotes covering topics like their own struggles with anxiety, the struggle to break into Hollywood, what it’s like to be an aging actress in Hollywood, and the impact of one of their own having astronomical success (Janney’s recent Oscar win came in the midst of filming this season).
The talented and charming Anna Faris graciously joined me for a grainy selfie.
I had the chance to ask a question of the panel. After remarking on how important I found the series to be particularly as a mental health professional, I asked them if they received pushback from the network about moving their show into darker territory. Co-creator Gemma Baker stated emphatically that they received nothing but support from all involved. It certainly didn’t hurt that co-creator Chuck Lorre, one of the most successful and influential men in the entertainment industry, is in recovery himself. When the panel ended, the cast graciously took selfies, signed autographs, and had meaningful conversations with the fans. I got to take selfies with Anna Faris and Beth Hall, chat with Mimi Kennedy, and although I didn’t get to interact with Allison Janney, I had the pleasure of being a few feet away as she immersed herself in the crowd, taking selfies and cracking jokes.
It was clear at the panel that the show didn’t start off trying to be the voice of women in recovery. Nevertheless, a few dozen episodes in they realized that that’s what their show was destined to be and they embraced it wholeheartedly. Through a steadily evolving process of eliminating what doesn’t work and elaborating upon what is working. In that way, the evolution of Mom is kind of like recovery itself. But without the major setbacks. Mom has continued to grow and shine since it’s premiere nearly 5 years ago. | https://medium.com/rants-and-raves/not-your-typical-mom-87bf6b2d8829 | ['Richard Lebeau'] | 2018-04-09 21:54:38.260000+00:00 | ['Mental Health', 'Recovery', 'Television', 'Addiction', 'Feminism'] |