title | text | url | authors | timestamp | tags
---|---|---|---|---|---
The easy grain bowl you’ll want to make all throughout the winter | This post was originally published on blog.healthtap.com on January 9, 2018.
As we get deep into the dark and cold months of winter, we find ourselves turning toward nourishing, hot, and hearty foods to warm our bodies and our spirits.
This simple, nutrient-rich grain bowl is effortless to throw together, and is a perfect option for dinner when you want something savory and filling but don’t want to spend too much time in the kitchen. Extra roasted veggies also make perfect leftovers for your lunches and meals all week long.
This bowl nails comforting, hearty flavor while being a perfectly balanced meal: dark leafy greens are topped with seasonal roasted veggies, whole grains, protein-packed chickpeas, and nourishing monounsaturated fat from creamy, sliced avocado. This nutritionally powerful, veggie-based winter bowl is sure to be something you’ll want to whip up all throughout the rest of these chilly months.
Winter Grain Bowl
What you’ll need:
Cooked quinoa
Raw (or steamed) kale
Sweet potato
Can of chickpeas, drained and rinsed
Bell pepper
Avocado
Spices: salt, pepper, paprika, rosemary, garlic powder
Olive oil
To Serve:
Lemon tahini dressing
1/3 cup tahini
1/3- 1/2 cup water (depending on desired consistency)
2 tbsp lemon juice
1 tablespoon olive oil
1 garlic clove, minced
Salt and pepper, to taste
What you need to do:
Preheat your oven to 400F. Slice your sweet potato into wedges, and evenly coat them with olive oil, salt, pepper, and rosemary. Brush your bell pepper with olive oil, and sprinkle with salt and pepper.
Spread your veggies evenly on a baking sheet, and roast them for 35–40 minutes until the bell pepper is slightly blackened and the sweet potato wedges are soft throughout.
While your veggies are roasting, add the chickpeas, a tablespoon of olive oil, 1/2 tsp smoked paprika, 1/4 tsp garlic powder, and salt and pepper to taste to a skillet. Stir the chickpeas over medium heat for about 5 minutes until they become hot and a little crispy.
To make the dressing, combine all ingredients in a blender and blend until creamy.
Once the veggies, chickpeas, and dressing are done, slice your avocado. Finally, layer your bowls with kale, quinoa, sweet potatoes, peppers, chickpeas, and add sliced avocado over the top. For a finishing touch, drizzle all with your lemon tahini dressing. Enjoy!
Author: Maggie Harriman | https://medium.com/healthtap/the-easy-grain-bowl-youll-want-to-make-all-throughout-the-winter-51b5ebe40978 | [] | 2018-02-01 17:41:45.483000+00:00 | ['Nutrition', 'Healthy Foods', 'Wellness', 'Recipe', 'Healthy Eating'] |
#FluentFriday Tweet Chat Follow-Up | #FluentFriday Tweet Chat Follow-Up
All of your unanswered questions answered by Principal Design Lead Joey Pitt and Sr. Dev Writer Mike Jacobs — with more than 280 characters.
Last week, we hosted our inaugural #FluentFriday tweet chat where Principal Design Lead Joey Pitt and Sr. Dev Writer Mike Jacobs answered the community’s Fluent Design questions.
Coffee, pastries, and soda for breakfast in the Tweetuation Room.
We had an hour to respond to as many questions as we could but quickly ran out of time before we got to them all! We promised to follow up with those we didn’t get to, so here they are. Hope they help.
Question 1:
Answer: Principal Program Manager Paul Gusmorino answered this question here and we wanted to add a little more context.
As we incorporate Fluent Design into more apps and the Windows shell, we’re trying new things and different approaches. The upside to this experimentation is we get to innovate; the downside is that it can create inconsistencies. After every round of innovation/trying new things, there’s a stabilization period where we determine what works best and start enforcing consistency. To learn more about our iteration cycle, check out this Q&A with Joey Pitt.
Question 2:
Answer: We’ve already added acrylic to the start menu, reveal to the live tiles, and we’re looking at other UX patterns to make live tile curation better. These are just early explorations of how we are taking the start menu to its next evolution.
Question 3:
Answer: One thing we’ll share at Microsoft Build is how we’re moving our color and material systems forward, and we’ll do this in a way that reinforces hierarchy and helps you focus on what you’re doing. Make sure to sign up for the Fluent Design: Evolving our Design System session at Microsoft Build to find out more.
Question 4:
Answer: Font rendering is optimized differently on Windows and OSX. OSX optimizes for aesthetics, and Windows for legibility. The OSX rendering is truer to the outlines the font designer drew, but it introduces more blurry grey pixels and a bit of increased weight. Windows reduces or eliminates that blur for legibility, but the characters are a bit more blocky at smaller sizes, which is why we don’t have the grey that OSX does. Neither is inherently better; they’re just different design choices.
Check out our eBook Now Read This, which includes a chapter on font rendering.
Question 5:
Answer: Currently, apps like Photos use Connected Animation, but we know it’s not used everywhere. We’re definitely making it easier in XAML in the next release of Windows. We are discussing this and more at the What’s New for Windows UX Developers: Fluent and XAML session at Microsoft Build.
Question 6:
Answer: Some Fluent Design effects (such as acrylic) use the GPU, which can increase power consumption. Windows disables these features depending on your power settings, and users have the option of turning off these effects altogether.
Question 7:
Answer: While Windows shell supports colorizing your taskbar with accent colors, we are tracking this request in the Feedback Hub. Upvote if you haven’t already!
Question 8:
Answer: Great idea! We’re currently redesigning the Microsoft Design site which will give some additional cues for how to implement Fluent. We’re also planning on including more designer-focused video tutorials in the future. In the meantime, check out our developer-oriented video series. | https://medium.com/microsoft-design/fluentfriday-tweet-chat-follow-up-8ff55869299 | ['Microsoft Design'] | 2019-08-27 17:28:01.784000+00:00 | ['UX Design', 'Fluent Design System', 'Microsoft', 'Design'] |
Factory farms: A pandemic in the making | Factory farms: A pandemic in the making
Factory farms are petri dishes for animal-borne viruses, which have caused pandemics before, and will do so again
Photo credit: Mercy For Animals Canada via Flickr (CC BY 2.0)
In March 2009, the first case of a novel H1N1 influenza virus infection was reported in the small community of La Gloria in the Mexican state of Veracruz. The virus quickly spread through Mexico and the United States, and in June 2009 the World Health Organization officially declared it a pandemic. Within a year, the Centers for Disease Control and Prevention (CDC) estimates, it had killed up to 575,400 people worldwide.
Early reports suggested that the source of the outbreak lay in the factory-style pig farms in the area around its epicenter in Veracruz. Subsequent tests, however, traced the genetic lineage of the virus to a strain that had emerged in an industrial hog farm in Newton Grove, N.C., in the late 1990s, where it had circulated and evolved among pigs before crossing to humans.
Most recent pandemics, including the one we’re currently experiencing, have been the result of zoonotic viruses “spilling over” to humans from animals. In many cases, this spillover hasn’t occurred via so-called “exotic” animals in faraway markets, as is believed to have been the case with COVID-19, but through domestic livestock.
Most livestock today are raised in “concentrated animal feeding operations” (CAFOs) — more commonly known as factory farms. In these industrial-scale facilities, the proximity of thousands of genetically similar animals, packed together in unsanitary, overcrowded spaces and vulnerable to disease due to the stress placed on their immune systems by these living conditions, provides the ideal environment for viruses and other pathogens to circulate, mutate, and evolve the ability to cross over to human populations.
Research shows that these farms can act as “amplifiers” for the spillover and spread of viruses. One recent model based on data from hog farms shows that workers at these facilities, being in close proximity to animals and thus at increased risk of contracting a virus, can be a “bridging population” for transmission of diseases from pigs to humans. The study found that a higher percentage of factory farm workers in a given community leads to a higher rate of human influenza cases in that community, concluding that a human influenza epidemic due to a new virus could be amplified in a local community and beyond by the presence of a factory farm nearby.
Most of the major pandemics of recent decades can ultimately be traced back to birds, bats or other wildlife, but because these creatures are so genetically different from us it’s difficult for viruses to jump directly to humans without some other species acting as an intermediary. Historically this intermediary has often been pigs. Being genetically quite similar to us, and with similar immune systems, pigs are ideal “mixing vessels” in which viruses picked up from other animals are “genetically rearranged” to be able to cross over to human populations. In particular, it’s believed that pigs are the primary source of influenza pandemics, because they can pick up the virus from both birds and humans and act as incubators for new strains that combine genetic traits from both, and thus make the relatively easy jump to humans.
Industrial pig farms have been the source of a range of disease outbreaks over recent years, the 2009 H1N1 outbreak being a case in point. In this instance, the new virus is thought to have arisen from a “reassortment” of bird, swine and human influenza viruses combined with a Eurasian pig flu virus. Similarly, in the 1990s, factory farms were at the epicenter of a deadly Nipah virus outbreak, believed to have been the result of pigs in CAFO operations in Malaysia contracting the virus from bats and passing it on to farm workers, causing an outbreak of fatal encephalitis among pig farmers.
But it’s not just pigs. Studies have indicated that industrial poultry farms can be similarly lethal amplifiers of disease, as was the case with the 2006 HPAI (highly pathogenic avian influenza) outbreak and the H5N1 avian flu in the late 1990s, both of which originated in Chinese poultry farms. Avian flu spreads quickly in chickens and is thought to have been picked up and carried further afield by migratory birds in the vicinity of these farms. The virus is still mutating to this day, and continued outbreaks in industrial poultry farms worldwide — including in Thailand, Nigeria, France, and in just the last couple of months, India and China — are providing new opportunities for the virus to mutate into a form capable of moving even more easily among both animals and humans.
Factory farms are a relatively recent development in agriculture. Until the late twentieth century, most of the world’s food animals were dispersed across numerous diversified small to mid-sized farms growing a mixture of different crops and raising different kinds of livestock. In the space of just a few decades, a combination of unrestrained corporate power, wrongheaded agricultural policy and inadequate environmental and public health regulations — all of which can be remedied if we so choose — has led to a system of intensive, industrialized food production that poses serious risks to both animal and human health.
COVID-19 is the latest in a growing catalog of public health disasters stemming directly from humans meddling with wildlife, and it’s right that we should be exploring every avenue to figure out exactly how it emerged and to ensure that nothing like it ever happens again. But while the spotlight is currently trained on animal husbandry practices on the other side of the world, we also need to recognize that our own agricultural systems are creating hotbeds for disease outbreaks, potentially no less devastating than this one, right here on our own doorstep. A growing scientific consensus and a history of painful experience show us that averting future pandemics begins with transitioning away from factory farms and toward means of food production that pose less danger to our environment and our health. | https://medium.com/the-public-interest-network/factory-farms-a-pandemic-in-the-making-bcd559dba090 | ['James Horrox'] | 2020-05-05 20:27:32.614000+00:00 | ['Environment', 'Agriculture', 'Factory Farming', 'Public Health', 'Covid 19'] |
Remembering Pluribus: The Techniques that Facebook Used to Master World’s Most Difficult Poker Game | Remembering Pluribus: The Techniques that Facebook Used to Master World’s Most Difficult Poker Game
Pluribus used incredibly simple AI methods to set new records in six-player no-limit Texas Hold’em poker. How did it do it?
I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:
I had a long conversation with one of my colleagues about imperfect information games and deep learning this weekend, and it reminded me of an article I wrote last year, so I decided to republish it.
Poker has remained one of the most challenging games to master in the fields of artificial intelligence (AI) and game theory. From game theory creator John von Neumann writing about poker in his 1928 essay “Theory of Parlor Games,” to Edward Thorp’s masterful book “Beat the Dealer,” to the MIT Blackjack Team, poker strategy has been an obsession for mathematicians for decades. In recent years, AI has made some progress in poker environments with systems such as Libratus, which defeated human pros in two-player no-limit Hold’em in 2017. Last year, a team of AI researchers from Facebook in collaboration with Carnegie Mellon University achieved a major milestone in the conquest of poker by creating Pluribus, an AI agent that beat elite human professional players in the most popular and widely played poker format in the world: six-player no-limit Texas Hold’em poker.
The reasons why Pluribus represents a major breakthrough in AI systems might be confusing to many readers. After all, in recent years AI researchers have made tremendous progress across different complex games such as checkers, chess, Go, two-player poker, StarCraft 2, and Dota 2. All those games are constrained to only two players and are zero-sum games (meaning that whatever one player wins, the other player loses). Other AI strategies based on reinforcement learning have been able to master multi-player games such as Dota 2 (OpenAI Five) and Quake III. However, six-player, no-limit Texas Hold’em still remains one of the most elusive challenges for AI systems.
Mastering the Most Difficult Poker Game in the World
The challenge with six-player, no-limit Texas Hold’em poker can be summarized in three main aspects:
1) Dealing with incomplete information.
2) Difficulty to achieve a Nash equilibrium.
3) Success requires psychological skills like bluffing.
In AI theory, poker is classified as an imperfect-information environment, which means that players never have a complete picture of the game. No other game embodies the challenge of hidden information quite like poker, where each player has information (his or her cards) that the others lack. Additionally, an action in poker is highly dependent on the chosen strategy. In perfect-information games like chess, it is possible to solve a state of the game (ex: end game) without knowing about the previous strategy (ex: opening). In poker, it is impossible to disentangle the optimal strategy of a specific situation from the overall strategy of poker.
The second challenge of poker relies on the difficulty of achieving a Nash equilibrium. Named after legendary mathematician John Nash, the Nash equilibrium describes a strategy in a zero-sum game with which a player is guaranteed not to lose in expectation, regardless of the moves chosen by its opponent. In the classic rock-paper-scissors game, the Nash equilibrium strategy is to randomly pick rock, paper, or scissors with equal probability. The challenge with the Nash equilibrium is that its complexity increases with the number of players in the game to a level at which it is not feasible to pursue that strategy. In the case of six-player poker, computing a Nash equilibrium is often computationally infeasible.
The third challenge of six-player, no-limit Texas Hold’em is related to its dependence on human psychology. The success in poker relies on effectively reasoning about hidden information, picking good action and ensuring that a strategy remains unpredictable. A successful poker player should know how to bluff, but bluffing too often reveals a strategy that can be beaten. This type of skills has remained challenging to master by AI systems throughout history.
Pluribus
Like many other recent AI-game breakthroughs, Pluribus relied on reinforcement learning models to master the game of poker. The core of Pluribus’s strategy was computed via self-play, in which the AI plays against copies of itself, without any data of human or prior AI play used as input. The AI starts from scratch by playing randomly, and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy.
Unlike other multi-player games, any given position in six-player, no-limit Texas Hold’em can have too many decision points to reason about individually. Pluribus uses a technique called abstraction to group similar actions together and eliminate others, reducing the scope of the decision. The current version of Pluribus uses two types of abstractions:
· Action Abstraction: This type of abstraction reduces the number of different actions the AI needs to consider. For instance, betting $150 or $151 might not make a difference from the strategy standpoint. To balance that, Pluribus only considers a handful of bet sizes at any decision point.
· Information Abstraction: This type of abstraction groups decision points based on the information that has been revealed. For instance, a ten-high straight and a nine-high straight are distinct hands, but are nevertheless strategically similar. Pluribus uses information abstraction only to reason about situations on future betting rounds, never the betting round it is actually in.
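As a toy illustration of the idea (this is not Pluribus’s actual abstraction, just a sketch with made-up allowed bet sizes), action abstraction can be as simple as snapping any proposed bet to the nearest of a few allowed pot fractions:

```python
def abstract_bet(pot_size, proposed_bet, allowed_fractions=(0.5, 1.0, 2.0)):
    """Snap a raw bet to the nearest allowed bet size (toy action abstraction)."""
    candidates = [fraction * pot_size for fraction in allowed_fractions]
    return min(candidates, key=lambda candidate: abs(candidate - proposed_bet))


# Bets of $150 and $151 into a $300 pot land in the same abstract action.
print(abstract_bet(300, 150), abstract_bet(300, 151))  # 150.0 150.0
```

Collapsing near-identical bets like this keeps the number of decision points the AI has to reason about manageable.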
To automate self-play training, the Pluribus team used a version of the iterative Monte Carlo CFR (MCCFR) algorithm. On each iteration of the algorithm, MCCFR designates one player as the “traverser” whose current strategy is updated on the iteration. At the start of the iteration, MCCFR simulates a hand of poker based on the current strategy of all players (which is initially completely random). Once the simulated hand is completed, the algorithm reviews each decision the traverser made and investigates how much better or worse it would have done by choosing the other available actions instead. Next, the AI assesses the merits of each hypothetical decision that would have been made following those other available actions, and so on. The difference between what the traverser would have received for choosing an action versus what the traverser actually achieved (in expectation) on the iteration is added to the counterfactual regret for the action. At the end of the iteration, the traverser’s strategy is updated so that actions with higher counterfactual regret are chosen with higher probability.
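Pluribus’s real MCCFR runs over an abstracted poker game tree, which is far too large to show here, but the regret-matching rule at its core can be illustrated with a self-contained rock-paper-scissors self-play toy. Everything in this snippet is a simplified assumption rather than Pluribus code:

```python
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]  # row player's payoff


def strategy_from(regrets):
    """Play actions in proportion to their positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS


regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]        # per-player cumulative regrets
strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]  # for averaging over iterations

for _ in range(20000):
    strategies = [strategy_from(r) for r in regrets]
    actions = [random.choices(range(ACTIONS), weights=s)[0] for s in strategies]
    for p in range(2):
        strategy_sums[p] = [t + s for t, s in zip(strategy_sums[p], strategies[p])]
    payoffs = [PAYOFF[actions[0]][actions[1]], -PAYOFF[actions[0]][actions[1]]]
    for p in range(2):
        opponent_action = actions[1 - p]
        for a in range(ACTIONS):
            # Regret: how much better action a would have done than what we played.
            counterfactual = PAYOFF[a][opponent_action] if p == 0 else -PAYOFF[opponent_action][a]
            regrets[p][a] += counterfactual - payoffs[p]

average = [s / sum(strategy_sums[0]) for s in strategy_sums[0]]
print([round(x, 3) for x in average])  # approaches [0.333, 0.333, 0.333]
```

After enough iterations the averaged strategy hovers around the uniform mix, which is exactly the Nash equilibrium described earlier; CFR-style algorithms apply the same regret bookkeeping at every decision point of a game tree.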
The outputs of the MCCFR training are known as the blueprint strategy. Using that strategy, Pluribus was able to master poker in eight days on a 64-core server and required less than 512 GB of RAM. No GPUs were used.
The blueprint strategy is too expensive to use real time in a poker game. During actual play, Pluribus improves upon the blueprint strategy by conducting real-time search to determine a better, finer-grained strategy for its particular situation. Traditional search strategies are very challenging to implement in imperfect information games in which the players can change strategies at any time. Pluribus instead uses an approach in which the searcher explicitly considers that any or all players may shift to different strategies beyond the leaf nodes of a subgame. Specifically, rather than assuming all players play according to a single fixed strategy beyond the leaf nodes, Pluribus assumes that each player may choose among four different strategies to play for the remainder of the game when a leaf node is reached. This technique results in the searcher finding a more balanced strategy that produces stronger overall performance.
Pluribus in Action
Facebook evaluated Pluribus by playing against an elite group of players that included several World Series of Poker and World Poker Tour champions. In one experiment, Pluribus played 10,000 hands of poker against five human players selected randomly from the pool. Pluribus’s win rate was estimated to be about 5 big blinds per 100 hands (5 bb/100), which is considered a very strong victory over its elite human opponents (profitable with a p-value of 0.021). If each chip was worth a dollar, Pluribus would have won an average of about $5 per hand and would have made about $1,000/hour.
The following figure illustrates Pluribus’ performance. On the top chart, the solid lines show the win rate plus or minus the standard error. The bottom chart shows the number of chips won over the course of the games.
Pluribus represents one of the major breakthroughs in modern AI systems. Even though Pluribus was initially implemented for poker, the general techniques can be applied to many other multi-agent systems that require both AI and human skills. Just like AlphaZero is helping to improve professional chess, its interesting to see how poker players can improve their strategies based on the lessons learned from Pluribus. | https://medium.com/dataseries/remembering-pluribus-the-techniques-that-facebook-used-to-master-worlds-most-difficult-poker-game-d91ead459fac | ['Jesus Rodriguez'] | 2020-12-01 16:15:23.351000+00:00 | ['Machine Learning', 'Deep Learning', 'Data Science', 'Artificial Intelligence', 'Thesequence'] |
Reinventing Product Discovery at the Financial Times | Wait, wasn’t I doing product discovery already?
Well, kinda. Let’s illustrate how we might have approached this before:
User research helps us uncover an unmet need in business travel
We prototype a digital travel guide and test this with users — they like it
Following user feedback, we build the guides
Our goal is to grow habit — readers coming back to the guides repeatedly
When we look at the results, we see that 60% of users visit a guide and never do so again.
We ask ourselves — how can we get users to visit more than once? We decide to add a newsletter sign-up option, our hunch is that this will encourage users to come back repeatedly
Does it work? Yes, kinda.
The crucial mistake we made here is in not fully understanding the user problem — why were users not visiting again? Knowing this could have sent us down a completely different path.
Other mistakes we sometimes made were in user testing:
We’d typically test high fidelity prototypes
We might only test only one or two
We’d try and find a ‘winner’
The danger here is that we could narrow down to one solution far too quickly — potentially missing a much bigger opportunity or key piece of insight.
The end result of these mistakes was a tendency to go for the safe and familiar over the bold and uncertain.
Solving our discovery problem by ‘storming’
Recognising that we weren’t doing discovery as effectively as we could, we got together representatives from Product, Research, Design and Engineering to design new ways of doing things. This was a ‘storm’ — 1 week focussed only on this, with a proposal to our leadership team at the end of it.
The key thing here is that we solved it bottom-up, not top-down. Our sponsor from the leadership team, Monica Todd, empowered us to find the solution ourselves.
The output was a framework for discovery and a ‘Discovery Guild’ to support our teams in the process. You can read about how the Guild is bringing about culture-change here.
9 months after we introduced our discovery process, where are we now?
All of our customer facing teams have now adopted this new approach to product development.
It’s all about the user: A greater emphasis on uncovering the problems to solve means that we now understand our users better than ever
New approaches to design and research: We soon learned that our old approaches constrained us creatively — we now actively encourage new ways to ideate, test and validate our solutions
More ambitious solutions: For initiatives like our homepage project, we have seen solutions that are far bolder than what we’d see in the past
A more open and supportive culture: Our Discovery Guild creates a safe space for product-people to share successes, failures and what we’ve learned along the way
Moving from a process to a mindset
Our first iteration of this discovery framework has given us a better understanding of our users, new ways of approaching problems and a shift in our product culture.
That said, there’s always room for improvement — looking back, it’s clear that our process was powerful in creating a cultural shift but is perhaps too heavy for where we want to get to. Our vision for the future is discovery as a mindset, not a process.
This means greater confidence in knowing how to explore and tackle problems. More freedom in determining what approach to take, rather than one set process. More comfort with risk and a greater tendency towards experimentation.
We’d love to hear how you approach discovery in your organisations. We’d be happy to speak to you to share our experiences in more detail.
Please feel free to reach out to me at [email protected] | https://medium.com/ft-product-technology/reinventing-product-discovery-at-the-financial-times-23583c39e74f | ['Martin Fallon'] | 2020-12-11 13:14:06.396000+00:00 | ['Product', 'Product Management', 'Discovery', 'Design', 'UX'] |
Leverage Python and Selenium based Automation | You right click on any element that you want the xpath of.
You inspect the element.
Go to the element in the HTML in the developer console.
Right click on the element.
Click on Copy and go down to the option Copy XPath.
Refer above image for this.
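Once you have copied an XPath, using it with Selenium looks roughly like this. The XPath shown is only a placeholder, not taken from Instagram’s real markup, so substitute whatever you copied from the developer console:

```python
from selenium import webdriver

driver = webdriver.Chrome()  # assumes chromedriver is installed and on your PATH
driver.get("https://www.instagram.com/")

# Paste the XPath you copied from the developer console between the quotes.
# (In Selenium 4 the equivalent call is driver.find_element(By.XPATH, ...).)
element = driver.find_element_by_xpath("//button[contains(text(), 'Log In')]")
element.click()
```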
All the statements with sleep() are there to simulate some delays. There are also try blocks in the code so that execution doesn’t stop if any error occurs.
Lines 4–8
These are the import statements to include all the necessary libraries used for this project.
Lines 11–23
This is a method for logging into your Instagram account. The webdriver enters your credentials in the browser controlled by this script and executes the commands given to it. Line 17 requires you to add your username and line 20 requires you to add your password.
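As a rough sketch of what such a login method can look like (the element locators below are my own placeholder assumptions, since Instagram’s markup changes frequently, so substitute the XPaths you copied yourself):

```python
import time


def login(driver, username, password):
    """Log the given Selenium driver into Instagram (illustrative sketch)."""
    driver.get("https://www.instagram.com/accounts/login/")
    time.sleep(3)  # crude delay so the login form has time to render

    try:
        # Placeholder locators: replace them with the XPaths you copied.
        driver.find_element_by_name("username").send_keys(username)
        time.sleep(1)
        driver.find_element_by_name("password").send_keys(password)
        time.sleep(1)
        driver.find_element_by_xpath("//button[@type='submit']").click()
        time.sleep(5)
    except Exception as error:
        # Like the rest of the script, log the problem but keep executing.
        print(f"Login step failed: {error}")
```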
Lines 26–34
This is a method to click on the pop-ups that you get in between the login and the home page of your account.
Lines 37–39
A driver object based on webdriver.Chrome() is created. This is the driver emulating all the user actions. It is followed by calls to the login() and post_login() methods described above.
Lines 41–51
The hashtag_list is the list of hashtags you have selected based on your niche. You need to add them here as strings separated by commas.
Lines 43–45
These are the lines where you get the list of already followed users. This is there in order to not unfollow the users that you have already followed. When you run the bot for the first time, uncomment the line 43 and comment lines 44 and 45. Next time onwards, comment 43 and uncomment 44 and 45 and don’t forget to change the file name on line 44.
Lines 47–51
These are variables keeping track of new followed people, new likes and comments posted. This information will be printed at the end when the bot is done executing.
Lines 53–119
Let’s understand this big chunk of code in steps: | https://medium.com/dataseries/leverage-python-and-selenium-based-automation-56a92e707745 | ['Tarun Gupta'] | 2020-12-25 14:58:39.298000+00:00 | ['Python', 'Instagram', 'Bots', 'Automation', 'Towards Data Science'] |
Working with Cloud Spanner and Java | We’ve gone into the architectural details of Google Cloud Spanner in previous posts, and now it is time to get a little deeper into the details of building an application using Google Cloud Spanner.
If you decide to build your application on Cloud Spanner, you can rely on ANSI 2011 SQL support and client libraries for multiple languages. There are great tutorials that help you get started, though they don’t go into much depth regarding the different options when using Java: Data Manipulation Language or Mutations via the client libraries, or SQL/DML via the two JDBC drivers.
I’m not going to go full depth on these concepts, but I hope to provide enough information to help you understand your different options as a Java developer working with Cloud Spanner. To make sure I got the details right, this article and the code written for it (which you can clone here) benefited from the technical expertise of Java expert Peter Runge (more appropriately prunge-helix on GitHub).
We will be using the same schema as the Google Cloud Spanner getting started guides, which is explained in detail on the Schema and data model page in the Cloud Spanner documentation.
We are essentially creating a music application, and our catalog contains details on Singers and their Albums. The strong parent-child relationship between singers and albums lends itself well to a unique Cloud Spanner optimisation called interleaved tables, which is described on that page and well worth understanding.
Examples and Options
ORMs and the JDBC driver
If you are a seasoned Java programmer, it may be easier or more relevant to use an ORM or the JDBC driver to interact with Cloud Spanner.
ORMs can also make it easier to manipulate data in Cloud Spanner in your language of choice without having to write DML. In many cases these are a wrapper around the existing Cloud Spanner APIs. For example in Java with spring, spring-cloud-gcp-starter-data-spanner uses the Cloud Spanner APIs (com.google.cloud.spanner.*) to execute statements.
When following modern programming practices, it is much easier and consistent to use ORMs to interact with the database compared with interspersing DML in your code. As ORMs often make use of the existing client libraries, all the benefits of working with DML vs Mutations etc. are maintained.
For ORM with SpringData, we will first create the singers table:
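The snippet below is a minimal sketch of what such an entity might look like with the Spring Data Cloud Spanner mapping annotations. The column names follow the getting-started schema, but treat the exact package names as assumptions: older spring-cloud-gcp releases use the org.springframework.cloud.gcp.data.spanner packages instead.

```java
import com.google.cloud.spring.data.spanner.core.mapping.Column;
import com.google.cloud.spring.data.spanner.core.mapping.Interleaved;
import com.google.cloud.spring.data.spanner.core.mapping.PrimaryKey;
import com.google.cloud.spring.data.spanner.core.mapping.Table;

import java.util.List;

// Maps to the Singers table from the Cloud Spanner getting-started schema.
@Table(name = "Singers")
public class Singer {

    @PrimaryKey
    @Column(name = "SingerId")
    private long singerId;

    @Column(name = "FirstName")
    private String firstName;

    @Column(name = "LastName")
    private String lastName;

    // Albums is interleaved in Singers (the Album entity is defined next).
    @Interleaved
    private List<Album> albums;

    public Singer() {}

    public Singer(long singerId, String firstName, String lastName) {
        this.singerId = singerId;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public long getSingerId() { return singerId; }
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
}
```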
Now we will create the Albums table:
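Again as a sketch, the Albums entity carries the composite primary key that the interleaved layout requires (SingerId first, then AlbumId):

```java
import com.google.cloud.spring.data.spanner.core.mapping.Column;
import com.google.cloud.spring.data.spanner.core.mapping.PrimaryKey;
import com.google.cloud.spring.data.spanner.core.mapping.Table;

// Albums is interleaved in Singers, so its primary key starts with SingerId.
@Table(name = "Albums")
public class Album {

    @PrimaryKey(keyOrder = 1)
    @Column(name = "SingerId")
    private long singerId;

    @PrimaryKey(keyOrder = 2)
    @Column(name = "AlbumId")
    private long albumId;

    @Column(name = "AlbumTitle")
    private String albumTitle;

    public Album() {}

    public Album(long singerId, long albumId, String albumTitle) {
        this.singerId = singerId;
        this.albumId = albumId;
        this.albumTitle = albumTitle;
    }
}
```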
And of course we have to create the interfaces:
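A minimal version of the repository interfaces might look like this; in a real project each interface would live in its own file, and the derived findByLastName query is only an illustration:

```java
import com.google.cloud.spanner.Key;
import com.google.cloud.spring.data.spanner.repository.SpannerRepository;

import java.util.List;

// One repository per entity; Spring Data generates the implementations.
public interface SingerRepository extends SpannerRepository<Singer, Key> {
    List<Singer> findByLastName(String lastName);  // derived query example
}

interface AlbumRepository extends SpannerRepository<Album, Key> {
}
```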
And now we can use the tables:
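And a small, illustrative service that uses the repositories. The sample names come from the getting-started data set; the service class itself is my own placeholder:

```java
import org.springframework.stereotype.Service;

@Service
public class MusicCatalogService {

    private final SingerRepository singerRepository;
    private final AlbumRepository albumRepository;

    public MusicCatalogService(SingerRepository singerRepository,
                               AlbumRepository albumRepository) {
        this.singerRepository = singerRepository;
        this.albumRepository = albumRepository;
    }

    public void addSampleData() {
        singerRepository.save(new Singer(1L, "Marc", "Richards"));
        albumRepository.save(new Album(1L, 1L, "Total Junk"));
    }

    public Iterable<Singer> allSingers() {
        return singerRepository.findAll();
    }
}
```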
There are two JDBC drivers including an open source driver written by Google. It makes use of the client libraries to connect to Cloud Spanner, and allows you to execute SQL and by extension DML.
If your statements require many objects to be held in memory prior to execution, it may be more efficient to use the JDBC driver to execute statements against the database. Large statements that require multiple joins, group-bys, and aggregations may be onerous to manage in an object-oriented manner, and it may be simpler to write a single DML statement containing those actions instead. In terms of execution, though, the two approaches are expected to perform roughly the same whether expressed through the ORM or as DML.
Of course, if you are connecting an off the shelf application it is likely that the simplest integration would be by connecting via the JDBC driver.
A quick note on SQL/DML
Cloud Spanner supports ANSI 2011 compatible SQL, enabling you to query databases using declarative SQL statements that specify what data you want to retrieve.
There are SQL best practices that can help Cloud Spanner to find the relevant data in the most efficient way, and understanding how Cloud Spanner executes SQL statements can go a long way to improve performance. For example, use of parameters and secondary indexes are two of the ways that query performance can be improved.
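As an illustration, a parameterized query with the Java client library looks roughly like the sketch below. The query text is an assumption based on the sample schema, and dbClient is an already-created DatabaseClient (see the Mutation example later in this post):

```java
import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.ResultSet;
import com.google.cloud.spanner.Statement;

public class QueryExample {

    // Bind parameters instead of concatenating values into the SQL string:
    // Cloud Spanner can reuse the query plan and you avoid injection issues.
    static void printAlbumsFor(DatabaseClient dbClient, String lastName) {
        Statement statement =
            Statement.newBuilder(
                    "SELECT s.FirstName, s.LastName, a.AlbumTitle "
                        + "FROM Singers AS s JOIN Albums AS a ON s.SingerId = a.SingerId "
                        + "WHERE s.LastName = @lastName")
                .bind("lastName")
                .to(lastName)
                .build();

        try (ResultSet resultSet = dbClient.singleUse().executeQuery(statement)) {
            while (resultSet.next()) {
                System.out.printf("%s %s: %s%n",
                    resultSet.getString("FirstName"),
                    resultSet.getString("LastName"),
                    resultSet.getString("AlbumTitle"));
            }
        }
    }
}
```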
Data Manipulation Language (DML) and Partitioned DML
DML can be used to execute INSERT, UPDATE, and DELETE statements in the Cloud Console, the gcloud command-line tool, and the client libraries. DML is designed for transaction processing, whereas Partitioned DML is designed for bulk updates and deletes, with minimal impact on concurrent transaction processing. This is achieved in Partitioned DML by partitioning the key space and running the statement over partitions in separate, smaller-scoped transactions.
DML statements are executed inside read-write transactions acquiring locks only on the columns you are accessing. For reads, shared locks are used to ensure consistency, with writes or modifications resulting in exclusive locks.
The following DML best practices will help improve performance, and minimise locking.
Now we will execute the same steps illustrated in the ORM example, by using the Java JDBC to execute DDL and DML statements
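Using the open source Cloud Spanner JDBC driver, those steps might look roughly like this sketch; the project, instance, and database names are placeholders, and credentials are assumed to come from your application default credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class SpannerJdbcSketch {

    public static void main(String[] args) throws Exception {
        // Connection string format used by the open source Cloud Spanner JDBC driver.
        String url = "jdbc:cloudspanner:/projects/my-project/instances/my-instance"
            + "/databases/my-database";

        try (Connection connection = DriverManager.getConnection(url)) {
            // DDL: create the Singers table.
            try (Statement ddl = connection.createStatement()) {
                ddl.execute(
                    "CREATE TABLE Singers ("
                        + " SingerId INT64 NOT NULL,"
                        + " FirstName STRING(1024),"
                        + " LastName STRING(1024)"
                        + ") PRIMARY KEY (SingerId)");
            }

            // DML: insert a row using standard JDBC parameter binding.
            try (PreparedStatement insert = connection.prepareStatement(
                    "INSERT INTO Singers (SingerId, FirstName, LastName) VALUES (?, ?, ?)")) {
                insert.setLong(1, 1L);
                insert.setString(2, "Marc");
                insert.setString(3, "Richards");
                insert.executeUpdate();
            }

            // SQL: read the data back.
            try (Statement query = connection.createStatement();
                 ResultSet results = query.executeQuery(
                     "SELECT FirstName, LastName FROM Singers")) {
                while (results.next()) {
                    System.out.println(results.getString(1) + " " + results.getString(2));
                }
            }
        }
    }
}
```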
Mutations
A Mutation represents a sequence of inserts, updates, and deletes that Cloud Spanner applies atomically to different rows and tables in a Cloud Spanner database. These are executed via the Mutation API.
Although you can commit mutations by using gRPC or REST, it is more common to access the APIs through the client libraries.
Peter Runge will publish a post on working with DML and Mutations next week if you want to delve a little deeper into that topic.
Since this is the third example, we are going to assume you have created the tables, and save some time by just using the Mutation API to add data to our Singers and Albums tables.
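A minimal sketch with the Mutation API might look like this; the project, instance, and database identifiers are placeholders:

```java
import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.DatabaseId;
import com.google.cloud.spanner.Mutation;
import com.google.cloud.spanner.Spanner;
import com.google.cloud.spanner.SpannerOptions;

import java.util.Arrays;
import java.util.List;

public class MutationExample {

    public static void main(String[] args) {
        Spanner spanner = SpannerOptions.newBuilder().build().getService();
        try {
            DatabaseClient dbClient = spanner.getDatabaseClient(
                DatabaseId.of("my-project", "my-instance", "my-database"));

            List<Mutation> mutations = Arrays.asList(
                Mutation.newInsertBuilder("Singers")
                    .set("SingerId").to(1L)
                    .set("FirstName").to("Marc")
                    .set("LastName").to("Richards")
                    .build(),
                Mutation.newInsertBuilder("Albums")
                    .set("SingerId").to(1L)
                    .set("AlbumId").to(1L)
                    .set("AlbumTitle").to("Total Junk")
                    .build());

            // All mutations in the list are applied atomically in one transaction.
            dbClient.write(mutations);
        } finally {
            spanner.close();
        }
    }
}
```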
If you just wanted to use the standard client library, the getting started guide takes you through the same example which we reference below, and the code is published on github
The client libraries are also used by the ORM and JDBC drivers, so you can also use them to execute DDL: | https://medium.com/google-cloud/working-with-cloud-spanner-and-java-16e44ebc63b6 | ['Ash Van Der Spuy'] | 2020-11-16 20:57:55.573000+00:00 | ['Cloud Spanner', 'Java', 'Database', 'Object Relational Mapping', 'Google Cloud Spanner'] |
8 Tips for Marketing | Photo by Merakist on Unsplash
Before going on, I want to mention that I was able to write most of this article thanks to my colleague and friend Sona Madoyan, Head of the Marketing Department at Fnet.
What is especially important when it comes to marketing trends? When developing a marketing strategy, you always need to take into account the specifics of the industry and the requirements and specifics of potential buyers. In addition to all this, you always need to follow the news and a few important “marketing rules” that are relevant at all times.
So, what to do?
1. Be smart spending Marketing budget
Photo by Kelly Sikkema on Unsplash
Since the marketing budget is usually insufficient to take advantage of all the ways and means of implementing marketing goals, it is very important not to focus the entire budget on one direction, but to diversify it. You need to choose the means and ways that are most effective and show results faster. To evaluate the effectiveness of the chosen path or advertising tool, you can use the ROAS (Return on Ad Spend) indicator: for example, a campaign that costs €1,000 and brings in €4,000 of attributed revenue has a ROAS of 4.
2. Create unique content
Photo by Will Francis on Unsplash
Focus your forces on content marketing and create original content, especially giving preference to video content.
As the famous marketer David Beebe would say:
“Content marketing is like a first date. If all you do is talk about yourself, there won’t be a second date.”
3. Combine online and offline strategies
Photo by Campaign Creators on Unsplash
Even though digital platforms and an active digital presence are prioritized, you still need to reach your audience in several ways and inform them about your product or service. It is also important to use offline tools. And it is very important to properly and effectively combine online and offline marketing strategies: they must be long-lasting and complement each other.
4. Gamify the offer
Photo by JESHOOTS.COM on Unsplash
Another important trend to follow is gamification. It is probably no secret that interactive, critical, and useful content almost always reaches a wider audience. And because people like to compete with each other through games, gamification may drive more engagement.
5. Consider each generation
Photo by Jessica Lewis on Unsplash
Generation Z (as the digital generation is often called) is the youth that created and will continue to create demand. Therefore, except for specific goods and services, in all other cases, it is extremely important to follow generation Z, since marketing and product trends are built by this generation, and products (goods and services) must be formed in accordance with their expectations.
6. Find more partners
Photo by Paweł Czerwiński on Unsplash
Creating partnerships and relationships with companies in different areas can lead to synergy. By combining your audience and other resources with those of your partners, joint marketing activities can bring the greatest results at lower cost.
7. Customize your offer
Photo by Mick Haupt on Unsplash
The offer of any product or service must be segmented and personalized on a behavioral basis. Try to make an offer in such a way that everyone who receives it is sure that this product or service is for them.
8. Direct all funds to promote sales
Photo by NordWood Themes on Unsplash
Despite differences in views, ideas, and structure at some companies, the Marketing and Sales departments are like brothers working towards the same goal. Sales is a part of Marketing, and Marketing is a component of Sales, and it is important to aim your marketing strategy at stimulating sales.
I’ll elaborate on the issues concerning marketing and sales relationships in other articles.
Thank you very much for reading this article, hope you’ve enjoyed it. Special thanks to Sona Madoyan, Head of Marketing Department of Fnet, for making this article possible.
If you have any questions feel free to ask it in comments or contact me directly via Facebook, Twitter, or LinkedIn. Stay safe and best luck! | https://uxplanet.org/8-tips-for-marketing-deb7eddac139 | ['Daniel Danielyan'] | 2020-08-24 16:14:50.441000+00:00 | ['Marketing', 'UX Research', 'Digital Marketing', 'Sales', 'Success'] |
Confessions of an Obsolete Child Actor | Confessions of an Obsolete Child Actor
Being cast in ‘School of Rock’ was a defining moment in my life — for better or worse
Me, now. Photo: Sarah Elizabeth Larson
A few months ago, I was in hair and makeup for a feature with one of my castmates, a 12-year-old girl. She was on set with her mom and little brother. He was playing games on a phone while the mother and daughter ran lines together. When the mom stopped her kid mid-sentence to give her a line reading, I was instantly transported back to my youth. I felt bad for my castmate. I felt bad for my sisters, who spent years waiting in the car with my mom while I was in guitar lessons or at auditions. I felt bad for all the other kids in all the waiting rooms of all the auditions. Did any of us really want to be there?
Of course, I was there by choice that day — if you don’t count all the choices that led me to pursue acting in the first place. Back in 2003, I was cast as Katie in the film School of Rock. Katie was 10 years old, played bass guitar, and had about five lines that mostly consisted of one word each. I got to meet some of my idols, attend the MTV Movie Awards (hosted by America’s then-sweetheart Lindsay Lohan), and travel the world — all before I got my first period. Then, after my brief break from obscurity, I fell into the classic child actor pattern. I’ve spent the last 16 years of my life trying to be anything but “that girl from that thing” despite the blunt reality: No one even cares that much.
Me, age 10. Photo: Wendy Brown
Let me preface this by saying that I am absolutely grateful for the experience as a whole. For those who reach out to me expressing that School of Rock inspired them to pick up an instrument. For the femmes who let me know Katie was their first queer crush. (Does this make me a queer icon? If so, love that for me.) For all the opportunities that followed. And especially for my castmates, who I see as forever family. Nothing will ever diminish these factors. However, I do have some very complicated feelings about School of Rock, so let’s dive in, shall we?
From as early as I can remember, my parents told me I was “destined to be a star.” They were the textbook definition of toxic stage parents. They praised me and gave me all the validation and attention in the world. They spoiled me. They called me perfect and beautiful. They kept a journal of all the adorable and charming things I’d do and say. I started taking guitar lessons when I was four and became the family’s little prodigy, against my own will. It was expected that if I were to make an appearance at a family function, my guitar would be there, too. My mom would coach and critique me from the sidelines.
At school, I desperately wanted to be liked and to fit in. All of the kids in my class were either in dance or sports, so we had nothing in common. I was bullied immensely for being the “weird classical music girl,” and my only friends were my sisters and my guitar. When I was nine, I was on NPR’s From the Top, a radio show that showcased kids who played classical music. A few months later, a casting director reached out to my guitar teacher expressing interest in having me audition for Untitled Jack Black Project. I didn’t know what any of this meant. I was 10; all I really cared about was ice cream and having, I don’t know, one friend who wasn’t a blood relative or an inanimate object.
Initially, I read for the band manager role (which eventually went to Miranda Cosgrove—hey, sis) and played a few classical songs on guitar. For the callback, I was asked to “rock out.” My parents bought me a kid-sized electric guitar, and I played “American Woman” by Lenny Kravitz. I found out I’d booked it the next day. They told me I’d be playing a character they wrote specifically for me and that I’d be leaving in two days for New York, where I’d live in a hotel with my mom for four months. The idea that Mike White, Jack Black, and Richard Linklater saw something in me still blows my mind.
I got to live the Eloise fantasy I never knew I wanted. And then we wrapped.
While on set, I met 14 kids who were underdogs like me. We all fell in love with each other pretty much instantly, and our moms were a cast of their own (and honestly could have had a highly entertaining reality television show). To this day, we have a family text thread where we champion each other’s exciting lives.
On set, I was a walking panic attack. I would fuck up my lines; I would look into the camera and ruin takes. When I looked into that lens, what I saw was my entire family saying, “Don’t fuck this up for us,” and my bullies laughing at me and calling me weird. All this to say that off-screen, it was fun as hell. We’d have cast and crew karaoke parties and play Dance Dance Revolution between takes. I got to see Heather Headley and Adam Pascal in the original Broadway cast of Aida. I got to eat room service every night. I got to live the Eloise fantasy I never knew I wanted. And then we wrapped.
I went home to Chicago, and because kids are assholes, I was bullied even more when I came back to school. I’ll never forget one girl who came up to me and asked me to sign her lunch card, then tore it up and threw it in the trash in front of me. When we started the press tour, I was pulled out of school and got to be with my friends again. Upon seeing myself on the big screen at the premiere, I judged myself for being the tallest girl in the cast, for having bags under my eyes and weird teeth, for having a fat belly and no breasts. I started hating my body and developed an eating disorder.
I remember being pulled out of school to go to the Toronto International Film Festival (brag) when I was 11. At an afterparty, having snuck a sip of champagne and snacking on a cup of wasabi peas, I had the realization that I was no longer a kid. I had a job now, and my job was to book another big movie so I could pay my parents’ mortgage. Sometimes, I questioned whether I continued to act for myself or for them. My mom, despite having zero experience in the film industry, had by then taken on the role of my manager. She was always throwing in her unhelpful two cents when it came to my appearance. Neither of us really knew what we were doing. We’d drill lines together in the car on the way to auditions. She was more off-book than I was. She would futz with my hair and tug at my clothes in the lobby. If I did a good job at an audition, I’d get Panera; if I did a great job, I’d get Panera and a Frappuccino.
On message boards (what a time 2003 was), grown men would sexualize me, commenting, “The bassist is going to grow up to be hot” and “Can’t wait ’til she’s 18.” My mom would read the comments online for hours on end, relaying all of the negative ones to me. When I was in sixth grade, a strange man in a trench coat came to my school and tried to take photos of me, and absolutely nothing was done about it. For the first time, I felt unsafe existing. When my parents brought this to my school’s administration, the principal said, “I guess that’s the price of fame.” I was transferred to a smaller private school immediately. “What a relief,” I thought. “I can start fresh, leave the bullies and stalkers behind. I won’t even mention School of Rock. I can go back to being a kid.”
But every time I entered a new school, it would only take a few days before someone found out my secret. I went to three different high schools, and at each one, kids would scream School of Rock quotes at me in the halls. It was annoying and embarrassing. I constantly felt trapped. If I reacted to them positively, I was labeled a bragging snob. If I reacted negatively or ignored them, I was labeled a cold, ungrateful bitch. Every time someone brought up the movie, I didn’t think of my personal highlights, like meeting the Olsen twins or eating Kobe beef with Jack Black and my dad in Tokyo or being on Sharon Osbourne’s talk show. I thought of the girl ripping up my autograph in the cafeteria. I thought of the trench coat guy coming to my school. I thought of my mom reading the awful comments on the message boards, the bullying, and the shame of being sexualized as a 10-year-old.
From the age of 14, I used drugs, alcohol, sex, food, and self-harm to numb all of this pain. I’ve survived dozens of toxic relationships and three suicide attempts. I’m not saying all of this is because I played bass in a movie when I was a kid but because I spent over a decade terrified that I’d peaked at 10 years old.
Even recently, over half of the comments on my social media are from dudes who had a crush on the 10-year-old me (some of them are really gross, and I want to thank my friends who never hesitate to drag those goblins). Sometimes the comments are people asking me why I stopped acting, which fills me with rage. Actors are worth so much more than their IMDb credits.
Sometimes the idea of a TMZ headline reading “That one girl from School of Rock dead from overdose at 27” is all it takes to keep me from a relapse.
Today, I live in Los Angeles, where I work for a skin care company. I still act and perform. I’ve traveled the country as a stand-up comedian and performed in several plays, web series, indie feature films, and bands. I’ve been fortunate enough to be welcomed into Chicago’s theater and comedy scenes. I’ve competed on NBC’s Bring the Funny. And still, no credit or feat is as cool as the fact that I have been in recovery from alcoholism and addiction for two years (and frankly, it’s fucking hard to maintain sobriety, but sometimes the idea of a TMZ headline reading “That one girl from School of Rock dead from overdose at 27” is all it takes to keep me from a relapse).
I’m grateful that School of Rock happened. It’s a great film, and it was, to its core, a fun experience. I’m grateful for the fans who picked up an instrument because of us. And I’m even grateful to my parents; I recognize now that they have unresolved trauma of their own. They were simply doing their best, and unfortunately, their best resulted in some pain. But I get to recover from that pain every day, through therapy and self-reparenting.
To this day, I still get recognized randomly at airports and coffee shops. People ask if I’m “the girl from School of Rock.” For a long time, I used to say no and keep walking, but now that I’m in a better place emotionally, I humbly say yes. I no longer carry resentment for people who only know of me as “that girl from that thing.” I know deep within my bones that I’m so much more — and that’s good enough for me. | https://humanparts.medium.com/tales-of-an-obsolete-child-actor-92a120f08576 | ['Rivkah Reyes'] | 2020-04-13 22:22:57.294000+00:00 | ['Mental Health', 'Culture', 'Film', 'Self', 'Life Lessons'] |
My Learning Trajectory, Chapter One: Books, Courses, Total Worth, and Total Hours | My Learning Trajectory, Chapter One: Books, Courses, Total Worth, and Total Hours
How I acquired all my knowledge with “only” 1074 hours and saved myself more than €1000
Key words and ideas
Amount of books read, total hours spent on reading books, courses and spaced repetition, and total worth in money; Distinguishing between deliberate and non-deliberate practice.
Foreword
I have tried to quantify the amount of hours and money spent on books and courses since the year 2015. I will write them down here together with my thoughts about the numbers. All the things I have written in this autobiography are thanks to everything I have quantified in this chapter, or at least 90% of everything I have written about. My habit of quantifying things is meant to give me perspective on how efficiently, and with how much time, I acquired all my knowledge.
Books read and total hours
Since the year 2015, I have read a total of 47 books, the majority of which are read in the year 2018–2019, because of the lack of public school (school ≠ education). My current goal is to read approximately 20 books a year, although more is always welcome. They mostly consist of nonfiction, scientific or research books. You can see all the books I have read by either googling “Goodreads, Lorenz Duremdes” or going to this link: https://www.goodreads.com/review/list/83183601-lorenz-duremdes?shelf=read
Because they are mostly scientific or research books, I estimate it takes me around 10 hours to read one book; coupled with the fact that I tend to memorize them as much as possible, something I achieve with help from the website called ‘Quizlet’, that brings me to 14 hours per book. It takes around 30 minutes to complete one Quizlet ‘set’, which I spread over 2 to 3 years with spaced repetition at a frequency of 7 repetitions, and 30 minutes multiplied by 7, divided by 60 minutes (an hour), gives you 3.5 hours, or approximately 4. You can see my Quizlet profile with this link: https://quizlet.com/WilliamJamesSidis
47 books multiplied by 14 hours gives us 658 hours of deliberate practice.
P.S. my Quizlet spaced repetition schedule in days is: 7 > 14 > 28 > 60 > 120 > 240 > 365
Books read and total worth
Now onto their total worth in terms of money. I do keep track of it in my Google Sheets document, and currently all my 47 books are worth approximately €763.98. Now, I used the word ‘approximately’, because here is the plot twist: I actually paid 0 euros for all my 47 books. I do tend to look on websites like Amazon all the time to see how much a book is worth while reading it. Another way is to say an average book costs approximately €15; multiply that number by 47 and you get €705, a number close to my own approximation.
Courses: total hours and worth
As of the year 2019, I have followed four courses:
Finance Learning How to Learn: Powerful mental tools to help you master tough subjects Science of Exercise Existential Well-being Counseling: A Person-centered Experiential Approach
Together, they are worth approximately €337 and require 272 hours.
If we add the hours spent on spaced repetition on Quizlet, we get 288 hours. Again, I have spent 0 euros to gain all this knowledge, because I am counting how much the certificates would cost (which is optional after completion).
Total hours writing
Another way to gain knowledge and to learn is to write, namely this autobiography in my case. It takes around 2 hours for me to write one chapter, and I have written 64 chapters including this one so far, which brings me to 128 hours.
Bonus: spent time in the ‘gym’
So this subchapter is more of a bonus since I want to use this chapter to explain how I gathered all the knowledge of this autobiography, the time it took and potential money it would cost.
I have been going to the ‘gym’, or rather, my own home gym since the year 2016. I try to train 6 times a week, but let’s count deload weeks and times of sickness into the equation too and it becomes more like 4 days a week on average. I spend around 30 minutes to 2 hours in the gym depending on how I feel, so that’s an average of one hour. The calculation over 3 years time becomes: 1 hour multiplied by 4 days multiplied by 52 weeks in a year multiplied by 3 years = 624 hours.
Deliberate practice: total time and worth
So putting the time spent on reading books, courses, and writing time together, we get 658 hours plus 288 hours plus 128 hours = 1074 hours of deliberate practice.
The total worth would be: €763.98 plus €337 = €1100,98. Again, I have spent 0 euros on this all.
Non-deliberate practice: total time
What I would see as non-deliberate practice that still adds to my knowledge base are things like gaming, reading random articles without trying to memorize everything, watching documentaries, daydreaming, etc. Let’s say the time spent on non-deliberate practice, that also happens to be effective, is ¼ of the time spent on deliberate practice. This gives us the number 268.5 hours. Together with total time spent on deliberate practice, we get 1342.5 hours.
Bonus: total total time and average time spent every day
Now, if we want to count gym time too, we get 1966.5 hours over 3 years time (2016–2019). Divide this number by 3 years and 365 days, and we get approximately 1.8 hours of personal development every day. That’s not a lot, but the majority (like 80% in the area of courses and books) of it is spent when I finished high school. It reminds me of this quote:
“I have never let my schooling interfere with my education.” ―Mark Twain
Subscribe for more content: https://mailchi.mp/261ae9e13883/autibiography | https://medium.com/superintelligence/10-02-2019-my-learning-trajectory-chapter-one-books-courses-total-worth-and-total-hours-6d106650d323 | ['John Von Neumann Ii'] | 2019-11-10 20:29:01.483000+00:00 | ['Course', 'Reading', 'Autobiography', 'Books', 'Learning'] |
A Bold And Beautiful Salad For Summer Days | A Bold And Beautiful Salad For Summer Days
This feisty vegan fajita salad will leave you wanting more.
Feisty Fiesta Salad, photo by author
Okay, so full disclosure: I’m OBSESSED with walnut “meat!” I’ve been making up any excuse to make it and use it. It’s one of my favorite meat substitutes right now because it is sooo easy to make and as delicious as you allow it to be. If you season it well, you’ll be licking your fingers and asking “Walnuts?! What walnuts?!” It’s meaty, hearty and reminiscent of minced chicken or pork on its own. If you add mushrooms to the mix with the right seasoning, you can easily get a beefy flavor. And, you know what else? You can control the salt, and season it exactly to your liking unlike the prepackaged vegan sausages that I love.
I tried Beyond Meat Hot Italian Sausages and I’m not going to lie, they SLAP! However, this here walnut meat is a great quick substitute for days when you want to eat a little cleaner.
You might be asking yourself why I am waxing poetic about walnut meat when this is a fajita salad recipe, but the truth is, the walnut meat is the centerpiece of this recipe for me. Alongside some crisp sweet peppers, onions, corn, cucumbers and tomatoes (optional), the walnut meat makes this the perfect fajita salad. Not to mention, it's literally the only thing that you need to cook in this entire recipe. AND, you don't even have to do that; you can make your walnut meat raw if you like because as Tabitha Brown says "that's your business!"
If you don’t like walnuts, try pecans. If you have an allergy or nuts are not your jam, you can add whatever vegan mince or grounds you like. For the love of all that is holy, just season them up really well.
Ingredients:
1/4 cup of walnut pieces
Trini green seasoning
Paprika
Garlic powder
Roucou/ Goya Sazon/ achiote powder (optional)
Liquid aminos (soy sauce, coconut aminos, or tamari will work)
2 medium-sized cucumbers diced
1 can of sweet corn (drained and washed)
1–2 small white onions julienned
1 medium-sized green bell pepper (sweet pepper) julienned
1 medium-sized tomato diced
1 large clove of garlic
Red chili flakes (optional)
Black pepper
Olive oil
Lime juice
Mustard
Agave (honey or brown sugar will work) | https://medium.com/one-table-one-world/a-bold-and-beautiful-salad-for-summer-days-ac6ac3c49e7b | ['Melissa A. Matthews'] | 2020-07-07 14:31:01.325000+00:00 | ['Summer', 'Cooking', 'Vegan', 'Food', 'Recipe'] |
An Introduction to Azure Stream Analytics Job | Stream Analytics Pipeline, Source: docs.microsoft.com
Introduction
An Azure Stream Analytics job offers a lot of capabilities; in this post we are going to discuss a few of them. An Azure Stream Analytics job is basically an engine which processes events. These events come from the devices we have configured — it can be an Azure IoT Dev Kit (MXChip), a Raspberry Pi, and many more. The stream analytics job has two vital parts:
Input source
Output source
The input source is the source of your streaming data — in my case, it is my IoT Hub. And the output source is the sink you configure; I had configured the output to save the data to an Azure SQL database. Let's just stop the introduction part now and start creating our own Stream Analytics job.
You can always see this article on my blog here.
Background
I recently got my MXChip (Azure IoT Dev Kit) and I was surprised by the capabilities of that device. It has a lot of sensors within it, like temperature, humidity, pressure, magnetometer, security, etc. Then I thought it was time to play with it. So the basic idea here was to:
Configure the device to send the data to the IoT Hub
Select the IoT Hub as a stream input
Send the output to an SQL Server database
In this article, we are going to concentrate on how to create a Stream Analytics Job and how you can configure the same to save the stream data to the SQL Server database.
Prerequisites
To do the wonderful things, we always need some prerequisites.
Azure Subscription
MXChip Azure IoT Dev Kit
An active IoT Hub
Creating the Azure Stream Analytics Job
Log in to your Azure Portal, click on Create a resource, and then search for "Stream Analytics job".
Once you clicked on the Create button, it is time to specify the details of your job.
Job Name
Subscription
Resource Group
Location
Hosting Environment
I would strongly recommend you select the same resource group as your IoT Hub for the Stream Analytics job as well, so that you can easily delete the resources when they are not needed. Once the deployment is successful you can go to the resource overview and see the details.
Configure Inputs
In the left menu, you can see a section called Job topology, that’s where we are going to work. Basically, we will be setting the Inputs and Outputs and then we will be writing a query which can take the inputs and send the values to the configured output. Click on the Inputs label and click on Add stream input and then select the IoT Hub.
In the next screen, you will have options to select the existing IoT hub and to create a new IoT Hub. As I have already created an IoT hub, I would select the existing one.
Please note that you are allowed to use special characters in the Input alias field, but if you use such characters, please make sure to include the alias inside [] in the query, which we will be creating later.
About the special characters in the Input alias field
Once you are successfully configured the Inputs, then we can go ahead and configure the outputs.
Configure Outputs
Click on the Outputs from the Job topology section and click Add, and then select the SQL Database.
You can either create a new Database or select the one you had already created. I used the existing database and table.
Configure the Query
Once you click the label Query on the left pan, you will be given an editor where you can write your queries. I am using the below query.
SELECT
messageId,
deviceId,
temperature,
humidity,
pressure,
pointInfo,
IoTHub,
EventEnqueuedUtcTime,
EventProcessedUtcTime,
PartitionId
INTO
streamoutputs
FROM
streaminputs
As you can see, I am just selecting the fields I may need and saving the same to our stream outputs. You can always select all the fields by using the select * query, but the problem with that is, you will have to set up the table columns in the same order as the stream data. Otherwise, you may get an error as below.
Encountered error trying to write 1 event(s): Failed to locate column 'IoTHub' at position 6 in the output event
Stream analytics query error
If there are any errors, you can see that in the Output details.
Run the Stream Analytics Job and See the Data in the Database
As we have already done the initial set up, we can now start our Stream Analytics job. Please make sure that the IoT Hub is running and the device is sending data to the IoT Hub. If everything is working as expected, you will be able to see the data in the SQL Server database. You can either connect your MXChip device to the network and test this or use the custom simulator app.
If you are using the Simulator console application, make sure that you are giving the device id, key and the IoT hub uri correctly, otherwise you will get an unauthorized error as explained here.
Test the Stream Analytics Job Inside the Portal
You also have an option to test the functionality in the portal itself. The only thing you will have to do is to prepare the sample input data. I have prepared the sample JSON data as follows.
[
{
"deviceId": "test-device",
"humidity": 77.699449415178719,
"pointInfo": "This is a normal message.",
"temperature": 32.506656929620846
},
{
"deviceId": "test-device",
"temperature": 52.506656929620846,
"humidity": 17.699449415178719,
"pointInfo": "This is a normal message."
},
{
"deviceId": "test-device",
"temperature": 42.506656929620846,
"humidity": 57.699449415178719,
"pointInfo": "This is a normal message."
}
]
Now we can go to the Query section and upload the sample data file for our inputs.
In the next window, you can select the JSON option and upload your JSON file.
Click the Test button, and now you should be able to see the output as below.
Conclusion
Wow! Now we have learned:
What is Azure Stream Analytics Job
how to create Azure Stream Analytics Job
how to add Inputs to the Azure Stream Analytics
how to add Outputs to the Azure Stream Analytics
how to add custom Query in Azure Stream Analytics
how to Test the Stream Analytics Query with sample data
You can always read my IoT articles here.
You can always follow me here on Medium and Twitter.
Your turn. What do you think?
Thanks a lot for reading. Did I miss anything that you think is needed in this article? Did you find this post useful? Kindly do not forget to share your feedback.
Kindest Regards
Sibeesh Venu | https://medium.com/medialesson/an-introduction-to-azure-stream-analytics-job-24fa5e76f48f | ['Sibeesh Venu'] | 2019-01-22 14:36:28.207000+00:00 | ['Cloud Computing', 'Azure', 'IoT', 'Stream Analytics', 'Iot Hub'] |
You’ll Never Love Your Past as Much as You Love Your Future | You’ll Never Love Your Past as Much as You Love Your Future
When are we the happiest?
Photo by Clay Banks on Unsplash
A 15-year-old’s greatest wish is to be 18, and yet, most 21-year-olds will say their 18-year-old selves were kind of dumb — even though both are just three years away from that age.
No matter how you change the numbers, this phenomenon will apply almost universally in one form or another.
When I was 8, I desperately wanted to be 10, like my neighbor who seemed so much stronger and smarter than I was at the time. When I was 10, I didn’t feel any different — maybe because I had no 8-year-old neighbor to compare myself to.
When I was 20, I thought by 30, I’d have life figured out. It was only at 23 that I looked around and wondered: “Why is nothing happening?” Nothing was happening because I wasn’t doing. I started right then, and, seven years later, I’m still going. I will turn 30 in two months, and now my 20-year-old self looks like an idiot.
I’m sure in my 30s, I’ll think my 40s will be much better, only to realize I’m still nearly as clueless about life at 45, yet not without that same patronizing smile back at my 30-year-old self that I now hold whenever I think of my early 20s.
Why is that? Why do we enjoy looking forward so much yet can only laugh and shake our heads when we look back? Well, in a nutshell: You’ll never love your past as much as you love your future. No one ever does.
In your future, the perfect version of you always exists. Everything is wide open. You feel as if you can achieve anything and everything, probably all at the same time. Your plans are intact. Your goals are in reach. Time is still flexible.
In your past, everything has already happened. There are no more pieces to be moved around. They’re all in place, and no matter whether you like the puzzle you’ve pieced together or not, you’ll always spot many places where you could have done better.
The perfect version of you never materialized. Most plans went to hell. Many goals fell out of reach. And time is just gone altogether. That can be demoralizing, but it’s just part of life.
Retirees don’t get as much satisfaction out of their past careers as college graduates expect from their future ones. Twenty-somethings don’t feel as autonomous as their teenage selves would have hoped to feel. Stressed moms don’t have it together as much as they believed they would before they gave birth.
This is a frustrating game you can play all your life — or you can realize that “all this looking back is messing with your neck.” At the end of the day, it matters not how well your past stacks up against your once imagined future. It only matters that you were content with the present as you lived through it.
At what age are we the happiest? That’s an impossible question, highlighted by the fact that you can find a theory for each major age bracket to back it as the answer.
There’s “the U-bend of life,” a theory that suggests happiness is high when we’re young, declines towards middle age, bottoms at 46 on average, then goes back up and reaches new heights in our 70s and 80s.
The idea is that family stress, worries about work, and anxiety about how our peers perceive us peak when we’re in the thick of life. As we get older, we care less about opinions and find contentment in what we have rather than what we hope to achieve.
When Lydia Sohn asked 90-somethings what they regretted most, however, she found the opposite: People were happiest when they were busy being the glue of their own social microcosmos — usually in their 40s.
Every single one of these 90-something-year-olds, all of whom are widowed, recalled a time when their spouses were still alive and their children were younger and living at home. As a busy young mom and working professional who fantasizes about the faraway, imagined pleasures of retirement, I responded, “But weren’t those the most stressful times of your lives?” Yes of course, they all agreed. But there was no doubt that those days were also the happiest.
At what age are we the happiest? It’s not only an impossible question, it’s an unnecessary one to ask. The answer will be different for every person to ever live, and our best guess is that it’ll be a stretch of days on which you felt fairly satisfied with life rather than a singular event or short period of exuberant bliss.
What we do know is that your best shot at stringing together a series of such “everything is good enough” days is neither to get lost in future castles in the sky nor to constantly commiserate how unlike those castles your past has become. You’ll have to abandon both the future and the past in favor of the present.
Imagine you have two choices: You can either be happy every day of your life but not remember a single one, or you can have an average, even unsatisfying life but die wholeheartedly believing you’re the happiest person in the world.
It matters not which one you choose because in both scenarios, you’ll die on a good day. One sacrifices the past, the other the future, but the present is what counts.
You’ll never love your past as much as you love your future, but that’s okay because life is neither about tomorrow nor about yesterday. It’s about today — and if you make today a good day with your thoughts, actions, and decisions, the idea of age will soon fade altogether. | https://ngoeke.medium.com/youll-never-love-your-past-as-much-as-you-love-your-future-3b44dff0f6d3 | ['Niklas Göke'] | 2020-12-28 11:29:07.755000+00:00 | ['Happiness', 'Mindfulness', 'Psychology', 'Aging', 'Life'] |
What Drives Apple’s Innovation Engine? | Source: Apple
What Drives Apple’s Innovation Engine?
How to design an organization for continuous innovation
Over the last decade, I have held roles with complete P&L ownership of a business unit, and as a result I often believed that greater end-to-end control led to more effectiveness. This of course refers to conventional management wisdom, where business units are run as independent divisions, and GMs have complete accountability and control of the business. I was a staunch believer in the absoluteness of this model, until now.
What changed? I came across this HBR article that presents a case study of an ‘unconventional’ model that Apple has used so effectively to drive innovation. It quotes:
“Apple is not a company where general managers oversee managers; rather, it is a company where experts lead experts”.
This is where expertise is aligned with decision rights. Think of it as vertical ownership of functions rather than horizontal ownership of a product line or business unit. The key assumption here is that it’s easier to train an expert to manage well than to train a manager to be an expert.
Source: Team Analysis
Example:
At Apple, a team of experts creates deep expertise in a given area, where they can learn from one another. For instance, Apple has more than 600 experts on camera hardware technology in a group that is led by Graham Townsend, a camera expert (Source: HBR). Now, since iPhones, iPads, laptops, and desktop computers all have cameras, these experts would have been split across different teams had Apple been organized into business units. This could have diluted their collective learning and ability to make progress towards a singular goal: make the best cameras for all Apple devices. Now this team has pushed the boundaries of the cameras to a level, where cameras have become one of the most beloved features of the devices. This is less likely to have happened in the divisional model.
So why does organization structure matter?
As famous historian Alfred Chandler argued, “structure follows strategy”. Once you have a clear strategy, the structure should enable the execution of that strategy.
Source: Team Analysis
Apple’s structure fuels its strategy flywheel, where the mission of building the best products on earth helps it attract the best experts. The structure then empowers these experts to lead other experts, further fueling their deep understanding and expertise in their respective areas. This translates into the creation of best-in-class products, which delivers great experiences for users and industry-leading profits for Apple. These profits turn into handsome rewards for employees, which further help in attracting and retaining top talent.
The link between Apple’s strategy and structure, and how that drives innovation is evident as Apple’s leaders believe that world-class talent wants to work for and with another world-class talent. As the HBR article says, “It’s like joining a sports team where you get to learn from and play with the best.”
What are the key elements of such an organization structure?
1. Ownership: functional vs. divisional — fundamentally, you need to ask whether you want to align accountability with control (divisional) or align expertise with decision rights (functional). This will then drive your entire strategy for the type of talent you would recruit.
2. Control mechanism: at Apple, one has accountability without control, which means that one's leadership abilities to influence and collaborate with others are more important than the authority that their title bestows. This also means that one's ability to control outcomes and influence others to follow her/him is dependent on the reputation that is built by delivering results. Controlling with authority is easy, but accountability without control is real hard work and can be messy. As the article describes — "Good mess" happens when various teams work with a shared purpose. "Bad mess" occurs when teams push their own agendas ahead of common goals.
3. Financial strategy: Are you in an organization that primarily manages short-term goals, i.e. quarterly financial targets? This means decisions to invest in long-term projects are mostly driven by short-term targets managed by GMs who are incentivized to protect these metrics. Conversely, when you have experts making such decisions, they are in a better position to weigh the short-term costs against the long-term value. As per the article — "at Apple, the finance team is not involved in the product road-map meetings of engineering teams, and engineering teams are not involved in pricing decisions."
4. Decision-making process: This may be the most important enabler for innovation. I have seen far too many times good ideas not evolving because someone on top didn't agree. But when most decisions are driven by healthy debate among different functions that disagree, push back, promote or reject ideas, and build on one another's ideas to come up with the best solutions, the results are often better. It requires a different type of leadership — where leaders inspire, prod, or influence colleagues in other areas to contribute toward achieving their goals. It sounds more like democracy, which is often messy, but makes the most progress over time.
5. Incentives: If incentives are aligned to win as a team, not as an individual, then team members operate very differently. At Apple, various functions work through their differences with one common goal — build the best products that are commercially successful. Thus, the incentives are aligned to the overall performance of the company, not to the success of individual products.
Closing thoughts:
As large organizations and their business models are being disrupted by technology, it’s time to rethink the “organization structure”. It’s time to challenge the conventional divisional set-up and build a team of experts, led by experts, who have both the expertise and the decision rights, to build best-in-class solutions. It’s not going to be easy, but if Apple is an example to follow, then it could definitely be worth it.
DISCLAIMER: This article represents solely my personal views and interpretations from an HBR article. It does not represent the views of any organization. It is only meant to share my learnings from publically available information and does not represent any confidential information.
Amit Rawal is a Sloan Fellow at Stanford’s Graduate School of Business. He has spent the last decade building and scaling e-commerce ventures for 40%+ of the world’s population. At Stanford, he is focused on bringing together tech, design, and data to create joyful shopping experiences. He is a data geek and loves tracking all kinds of health and wellness metrics. He can be reached at [email protected].
Links: Linkedin, Twitter, Instagram, Website | https://medium.com/swlh/what-drives-apples-innovation-engine-35d7c4fca166 | ['Amit Rawal'] | 2020-11-11 05:57:04.574000+00:00 | ['Leadership', 'Apple', 'Technology', 'Innovation', 'Digital'] |
How Do Gradient Boosting Algorithms Handle Categorical Variables? | A fantastic shot of the Falcon Heavy rocket ascension — credit (Unsplash)
Previously, we investigated the differences between versions of the gradient boosting algorithm regarding tree-building strategies. We’ll now have a closer look at the way categorical variables are handled by LightGBM [2] and CatBoost [3].
We first explain CatBoost’s approach for tackling the prediction shift that results from mean target encoding. We demonstrate that LightGBM’s native categorical feature handling makes training much faster, resulting in a 4 fold speedup in our experiments. For the XGBoost [1] adepts, we show how to leverage its sparsity-aware feature to deal with categorical features.
The Limitations of One-Hot Encoding
When implementations do not support categorical variables natively, as is the case for XGBoost and HistGradientBoosting, one-hot encoding is commonly used as a standard preprocessing technique. For a given variable, the method creates a new column for each of the categories it contains. This has the effect of multiplying the number of features that are scanned by the algorithm at each split, and that is why libraries such as CatBoost and LightGBM implement more scalable methods.
Processing of Categorical Variables in CatBoost
Ordered Target Statistics Explained
CatBoost proposes an inventive method for processing categorical features, based on a well-known preprocessing strategy called target encoding. In general, the encoded quantity is an estimation of the expected target value in each category of the feature.
More formally, let’s consider the category i of the k-th training example. We want to substitute it with an estimate of
A commonly used estimator would be
which is simply the average target value for samples of the same category as xⁱ of sample k, smoothed by some prior p, with weight a > 0. The value p is commonly set to the mean of the target value over the sample.
The CatBoost [3] method, named Ordered Target Statistics (TS), tries to solve a common issue that arises when using such a target encoding, which is target leakage. In the original paper, the authors provide a simple yet effective example of how a naive target encoding can lead to significant errors in the predictions on the test set.
Ordered TS addresses this issue while maintaining an effective usage of all the training data available. Inspired by online algorithms, it arranges training samples according to an artificial timeline defined by a permutation of the training set. For each sample k from the training set, it computes its TS using its own “history” only; that is, the samples that appear before it in the timeline (see example below). In particular, the target value of an instance is never used to compute its own TS.
Table 1: Ordered Target Statistics in CatBoost, a toy example
Values of x̂ⁱ are computed respecting the history and according to the previous formula (with p = 0.05). In the example of Table 1, x̂ⁱ of instance 6 is computed using samples from its newly assigned history with xⁱ = thriller. Thus, instance 1 is used, but instance 3 is not.
In the CatBoost algorithm, Ordered TS is integrated into Ordered Boosting. In practice, several permutations of the training set are defined, and one of them is chosen randomly at each step of gradient boosting in order to compute the Ordered TS. In this way, it compensates for the fact that some samples TS might have a higher variance due to a shorter history.
A Few Words on Feature combinations
In addition to Ordered TS, CatBoost implements another preprocessing method that builds additional features by combining existing categorical features together. However, processing all possible combinations is not a feasible option as the total grows exponentially with the number of features. At each new split, the method only combines features that are used by previous splits, with all the other features in the dataset. The algorithm also defines a maximum number of features that can be combined at once, which by default is set to 4.
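As a rough sketch of how this looks from the user's side — the column names and training variables below are made up — the raw categorical columns are simply passed to CatBoost, which applies Ordered TS and feature combinations internally:
from catboost import CatBoostClassifier, Pool
cat_cols = ["carrier", "origin", "dest"]  # hypothetical categorical columns
# CatBoost encodes the categories itself (Ordered TS, feature combinations),
# so the raw values are passed as-is via cat_features.
train_pool = Pool(X_train, y_train, cat_features=cat_cols)
model = CatBoostClassifier(iterations=500, eval_metric="AUC")
model.fit(train_pool, verbose=False)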
Native Support of Categories in LightGBM
LightGBM provides direct support of categories as long as they are integer encoded prior to the training. When searching for the optimal split on a particular feature, it will look for the best way of partitioning the possible categories into two subsets. For instance, in the case of a feature with k categories, the resulting search space for the algorithm would be of size 2ᵏ⁻¹–1.
In practice, the algorithm does not go through all possible partitions and implements a method derived from an article from Fisher [4] (On Grouping for Maximum Homogeneity — 1958) to find the optimal split. In short, it exploits the fact that if the categories are sorted according to the training objective, then we can reduce the search space to contiguous partitions. This significantly reduces the complexity of the task.
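For illustration, here is a minimal sketch (not the benchmark code used below) of how integer-encoded categories can be declared to LightGBM; the data frame and column names are assumptions:
import lightgbm as lgb
cat_cols = ["carrier", "origin", "dest"]  # hypothetical column names
for col in cat_cols:
    X_train[col] = X_train[col].astype("category").cat.codes  # integer encoding
# Declaring the columns as categorical enables the native handler
train_set = lgb.Dataset(X_train, label=y_train, categorical_feature=cat_cols)
params = {"objective": "binary", "metric": "auc"}
booster = lgb.train(params, train_set)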
In the experiment below, we investigate the benefits of using categorical feature handling instead of one-hot encoding. We measure the mean fit time and best test scores obtained with a randomized search on subsets of different size of the airlines dataset. This dataset, whose statistics are summarized in Table 2, contains high cardinality variables which make it suitable for such a study.
Table 2: A short description of the airlines dataset
The results show that both settings achieve equivalent performance scores, but enabling the built-in categories handler makes LightGBM faster to train. More precisely, we achieved a 4 fold speedup on the full dataset.
Figure 1: Importance of LightGBM’s categorical feature handling on mean fit time
Table 3: Importance of LightGBM’s categorical feature handling on best test score (AUC), for subsets of airlines of different size
Dealing with Exclusive Features
Another innovation of LightGBM is Exclusive Feature Bundling (EFB). This new method aims at reducing the number of features by bundling them together. The bundling is done by regrouping features that are mutually exclusive; that is, they never (or rarely) take non-zero values simultaneously. In practice, this method is very effective when the feature space is sparse, which, for instance, is the case with one-hot encoded features.
In the algorithm, the optimal bundling problem is translated into a graph coloring problem where the nodes are features and edges exist between two nodes when the features are not exclusive. The problem is solved with a greedy algorithm that allows a rate of conflicts 𝛾 in each bundle. With an appropriate value for 𝛾, the number of features (and thus the training time) are significantly reduced while the accuracy remains unchanged.
How does EFB Affect Scalability?
We investigated the importance of EFB on the airlines task. In practice, we did not notice any effect of EFB on fit time when using the categorical feature handler of LightGBM. However, EFB did improve the training time by leveraging the sparsity introduced by OHE as shown in Figure 2. The results with categorical feature handling enabled (lgbm) are shown as a reference point.
Figure 2: Importance of EFB on mean fit time, when categorical variables are OHE
Tweaking XGBoost’s missing value handler
XGBoost does not support categorical variables natively, so it is necessary to encode them prior to training. However, there exists a way of tweaking the algorithm settings that can significantly reduce the training time, by leveraging the joint use of one-hot encoding and the missing value handler !
XGBoost: A Sparsity-Aware Algorithm
In order to deal with sparsity induced for instance by missing values, the XGBoost split-finding algorithm learns from the data at each split a default direction for these values. In practice, the algorithm tests two possible grouping for the instances with missing values (left and right), but these points are not visited one by one like the others. This saves a lot of computations when the data is very sparse.
What is interesting is that this particular feature is not limited to missing values as we usually understand them. In fact, you can choose any constant value you want to play the role of missing value, when your data does not contain any. This becomes very handy when working with datasets for which one-hot encoding introduces many zero entries.
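A minimal sketch of the trick — assuming X_train_ohe is a one-hot encoded training matrix with no genuine missing values — is simply to declare 0 as the missing value:
import xgboost as xgb
# With no real missing data, declaring 0 as "missing" lets the sparsity-aware
# split finder skip the zero entries produced by one-hot encoding.
model = xgb.XGBClassifier(tree_method="approx", missing=0.0)
model.fit(X_train_ohe, y_train)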
Leveraging the sparsity introduced by one-hot encoding
We investigated the importance of setting the missing parameter of the split-finding algorithm to 0 (instead of numpy.nan, the default value in the Python implementation), on the training of the airlines dataset. The results reported in the figure below are for the approx tree-building method, but the same observations were made for exact and hist.
Changing the missing parameter to 0 results in a significant reduction of training time. More precisely, we observed a 40× speedup for exact and approx on the full dataset, and a 10× speedup for hist.
Figure 3: Importance of the ‘missing’ parameter on mean fit time of XGBoost (tree-building method is approx)
As shown in Table 4, this small change does not seem to affect the performance scores in any significant way, making it a practical tip for when working with datasets with no actual missing data.
Table 4: Importance of the ‘missing’ parameter on best test score (AUC), for subsets of airlines of different size
Takeaways
Because of the way Gradient Boosting algorithms operate, optimizing the way categorical features are handled has a real positive impact on training time. Indeed, LightGBM’s native handler offered a 4 fold speedup over one-hot encoding in our tests, and EFB is a promising approach to leverage sparsity for additional time savings.
Catboost’s categorical handling is so integral to the speed of the algorithm that the authors advise against using one-hot encoding at all(!). It is also the only gradient boosting implementation to tackle the problem of prediction shift.
Finally, we demonstrated that in the absence of true missing data, it is possible to leverage XGBoost’s sparsity aware capabilities to gain significant speedups on sparse one hot encoded datasets, achieving up to a 40× speedup on the airlines dataset.
References
[1] Chen, T. & Guestrin, C. XGBoost: A scalable tree boosting system. Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. 13–17-Augu, 785–794 (2016).
[2] Ke, G. et al. LightGBM: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017-Decem, 3147–3155 (2017).
[3] Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V. & Gulin, A. Catboost: Unbiased boosting with categorical features. Adv. Neural Inf. Process. Syst. 2018-Decem, 6638–6648 (2018).
[4] Walter D. Fisher (1958) On Grouping for Maximum Homogeneity, Journal of the American Statistical Association, 53:284, 789–798, DOI: 10.1080/01621459.1958.10501479 | https://medium.com/data-from-the-trenches/how-do-gradient-boosting-algorithms-handle-categorical-variables-e56ace858ba2 | ['Pierre Louis Saint'] | 2020-07-03 12:19:21.416000+00:00 | ['Machine Learning', 'Data Science', 'Xgboost', 'Lightgbm', 'Python'] |
Upgrading Python lists | Upgrading Python lists
Adding useful functionalities to Python lists
Image source: JoeyBLS photography
Introduction
Python lists are good. But they’re not great. There is so much functionality that can be easily added to them but is still missing. Indexing with booleans, easily creating dictionaries from them, appending more than one element at a time, so on and so forth. Well, not anymore.
Fastai has come up with their own data structure called L . It can do everything that a Python list can do and much more.
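To give a flavour of it, here is a rough sketch of a few operations L supports (worth double-checking against the fastcore docs, since the exact API may differ slightly):
from fastcore.foundation import L
xs = L(1, 2, 3, 4, 5)
xs = xs + [6, 7, 8]                      # add several elements at once
evens = xs.filter(lambda o: o % 2 == 0)  # keep only even numbers
mask = [o > 4 for o in xs]
big = xs[mask]                           # index with booleans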
The purpose of this article is to show you how easy it is to write such useful functionalities on your own. Especially if you are a beginner, try creating a mini-version of this library. Try writing some of the functionalities you hoped existed. It’ll be a good learning experience.
For now, let’s learn about L. Here is the Colab link if you’d. Make a copy on your Colab (File->Save a copy in drive)and run the first cell only once. Your notebook will crash. But you’ll be ready to use the library right away.
Google Colaboratory Link.
What is L? | https://towardsdatascience.com/upgrading-python-lists-35440096ec36 | ['Dipam Vasani'] | 2020-03-23 17:07:57.285000+00:00 | ['Programming', 'Python'] |
Evolution to Advanced Coding : Any Kid Can Code | PYTHON IS OBJECT ORIENTED PROGRAMMING LANGUAGE.
What does this mean? In Python, everything is an object, and objects have one good thing about them: they can be assigned to variables, and instances can be created from objects.
What is OOP (Object oriented programming)?
OOP is the concept which preaches creating objects. And objects contain their own properties and functions. Let us correlate this to real life: any object, like a computer mouse, has its own properties and functions. Properties: the mouse has buttons (right/left), it has a scroller on top, etc. Functions: move cursor, click, scroll, etc. I hope this makes it easy to correlate coding to real-life examples.
Here in programming, an object is created as an instance of a class, which is created using the keyword "class"; then, inside the class, we can create different variables and functions (this we have learnt earlier). Those functions can be used by the object, and we can create any number of objects from a class. You can refresh your basic knowledge:
Here we go — we now have exposure to the most important concept of programming, i.e., OOP. We are growing in the same manner as the word itself looks.
Benefits of using OOP: modularity, reusability and scalability. We will go in depth when the time comes, or we will understand them when we practice.
How it makes code compact — let us assume you bought a computer mouse; it has various functions and it works in a plug-and-play manner. If you want to attach it to a laptop, TV or any other device, it will function the same. So, you need not buy multiple mice. This is just to understand the concept of an object as an instance. We will deep dive into that as and when required.
Just remember that an object is an instance of a class, and there can be many instances of a class. Animal is a class, and dog, cat, etc. are its instances with different functions — as shown in the small example below.
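Here is a tiny sketch of that idea in Python (the names are just examples):
class Animal:
    def __init__(self, name, sound):
        self.name = name        # property of the object
        self.sound = sound      # property of the object
    def speak(self):            # function the object can use
        print(self.name + " says " + self.sound)
dog = Animal("Dog", "Woof")     # one instance of the Animal class
cat = Animal("Cat", "Meow")     # another instance of the same class
dog.speak()                     # Dog says Woof
cat.speak()                     # Cat says Meow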
OOP has many other concepts like polymorphism, inheritance, encapsulation and abstraction. We will learn them all over time.
You can understand how important and easy it will become if we have a class widgetObject which allows us to create different instances and has a function to move the object. Let us not wait too much. First do the program using loops, and then use the concept just learnt. And I will leave it to you to see the difference, for understanding and easiness.
Why Books are the Key To Learning A Language On Your Own | Photo by Lysander Yuen on Unsplash
Why Books are the Key To Learning A Language On Your Own Fraser Mince Follow Sep 7 · 6 min read
Whenever I am asked how I was able to succeed in many languages in a relatively short period of time, I always make a bow in spirit to the source of all knowledge: books Kató Lomb
Learning a language is challenging. It takes a ton of time and consistency, and even then it is really easy to feel stuck. You can spend hundreds of hours doing Duolingo or taking classes only to still feel like there is this giant gap between you and actual fluency. It only becomes more difficult if you are trying to learn independently. Many spend a lot of time trying to discover how to learn a language on their own and end up feeling very lost.
It can start to feel like there’s a divide that’s impossible to cross. You may know how to say some basic expressions but the second someone starts to speak, everything you know seems to disappear.
You, learning a language, probably
It’s not uncommon to feel stuck at some point in your language learning journey. You may feel like if you moved to a foreign country, and you had the immersion you would learn but short of that it feels impossible.
But there are ways to learn a language quickly at home. All you need to do is simulate immersion by consuming content you love in your target language. One of the most underrated ways to do this is by reading novels.
Now if you’re like me I know what you’re thinking: “oh someday that would be amazing! I just need to get to the point where I can even begin reading”.
Maybe you have even tried picking up a book like Harry Potter in a language you're learning. "This will be great! How hard can it be?" you say. But then you open it. "Wow, that's a lot of words. And I know like six of them."
That first feeling of being overwhelmed is often enough to scare people away. Looking at that first page is just intimidating. So why is reading worth your time? | https://medium.com/language-lab/why-books-are-the-key-to-learning-a-language-on-your-own-9b6f2f60813c | ['Fraser Mince'] | 2020-09-10 09:39:31.636000+00:00 | ['Language', 'Books', 'Fluency', 'Language Learning'] |
How to Use the Kaggle API in Python | Datasets
Kaggle gives us several options for downloading datasets. The two you’re most likely to use are for downloading competition datasets, or standalone datasets.
A competition dataset is related to a current or past competition, for example, the dataset used in the Sentiment Analysis on Movie Reviews competition.
Standalone datasets are not accompanied by a competition and can be uploaded by anyone — like this 1.6M Sentiment of Tweets dataset.
We use two different methods for each of these.
Competition Datasets
We can see that our dataset is paired with a competition through the URL of the dataset — it will always begin with kaggle.com/c/, the c representing competition.
To download a competition dataset, we use the competition_download_file method, take the competition name (given in the URL) and write:
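A minimal sketch of what that might look like — assuming the API client has been authenticated with your kaggle.json token, and noting that the exact file names are assumptions you should check on the competition's Data tab:
from kaggle.api.kaggle_api_extended import KaggleApi
api = KaggleApi()
api.authenticate()  # reads the kaggle.json credentials file
# file names are assumptions - check the competition's Data tab for the real ones
api.competition_download_file('sentiment-analysis-on-movie-reviews', 'train.tsv.zip', path='./')
api.competition_download_file('sentiment-analysis-on-movie-reviews', 'test.tsv.zip', path='./')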
Here we download both the training and test datasets to the current directory ./ — both are zipped.
Alternatively, we can simply download all competition datasets with:
api.competition_download_files('sentiment-analysis-on-movie-reviews', path='./')
You may need to set up your local directory to receive them without error — I always find downloading each individual dataset more convenient.
Standalone Datasets
On the dataset page, we can see the user’s name and the dataset name (or in the address bar). We put both together like user/dataset , and execute dataset_download_file like so:
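For example (the file name below is an assumption — check the dataset's file list for the exact name):
# downloads a single file from the kazanova/sentiment140 dataset
api.dataset_download_file('kazanova/sentiment140', 'training.1600000.processed.noemoticon.csv', path='./')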
This will download the zipped file into our current directory ./ .
Again, just like we did with the competition datasets, we can download all files for a specific dataset like so:
api.dataset_download_files('kazanova/sentiment140', path='./')
Unzipping
A final point, every dataset you download with the Kaggle API will be downloaded as a ZIP file. You can unzip the data manually, or simply use Python like so:
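One way to do it with the standard library — the ZIP file name here is just a placeholder for whichever archive you downloaded:
import zipfile
with zipfile.ZipFile('./train.tsv.zip', 'r') as archive:  # placeholder name
    archive.extractall('./')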
Once unzipped, we read our data into Python as per usual! | https://medium.com/python-in-plain-english/how-to-use-the-kaggle-api-in-python-4d4c812c39c7 | ['James Briggs'] | 2020-11-25 06:41:44.974000+00:00 | ['Python', 'Technology', 'Data Science', 'Programming', 'Machine Learning'] |
Our A/B Testing Formula (The Easiest Way To Improve Performance By 2x Or More) | Why Test?
The majority of ads will fail. So unless you’re expecting a neverending streak of luck, you’ll need a process for separating the losers from the winners.
That’s where A/B testing comes in. It’s simply the process of testing two or more ad variations against each other, analyzing the results, and doing less of what doesn’t work & more of what works.
It’s hard to overstate how important this is. We frequently see ads perform 2x, 5x or even 10x better than others. And the first ads are almost never among the top performers.
So if you’re not running at least one test at any given time, you’re leaving money on the table.
What To Test
First, you’ll need a bunch of copy angles and creatives. This is a huge topic in and of itself, so we won’t get into it here. Let’s just assume you have them.
Great! Where do you start?
You’ll want to go as BIG as possible with the first test. An example would be testing a professionally shot studio photo vs an unedited UGC (User Generated Content) video, or short CTA-focused ad copy vs long-form storytelling copy. The more contrast you add, the easier it will be to analyze the results and hone in one the winning angles.
We like to start things off with a 2 x 2: two creatives and two copy. We find that this strikes a good balance between simplicity and effectiveness.
2 x 2 Is An Easy Way To Get Started
The results from the first test determine what we do next. For instance, let’s say we find that there was a huge difference in performance between the creatives but no real difference between the copy variations. We would then isolate that variable, i.e. test a handful of creatives with the same copy.
If you’ve got a large budget and want to be as hands-off as possible, you can use Facebook’s Dynamic Creative. It does work and we do use it, but most times we prefer to have more control.
When To Test
When it comes to testing, we believe that frequency is more important than volume. It's better to run multiple tests with a few variations than a few tests with multiple variations.
That’s why we use a two-day testing cycle. On any given ad account we’re analyzing and implementing new ad copy and creative up to three times per week (Monday, Wednesday, Friday).
Usually, the number of tests you can run is limited by the budget. But even on the accounts where we’re not able to test new variations every two days, we still analyze and monitor performance.
Note: Adding a new ad to an existing ad set will reset the learning phase. It may make sense to use a designated campaign for testing.
How To Test
Analyzing the results is maybe the most challenging part.
There’s a lot that goes into the analysis, but there are a couple of simple tools that can do most of the heavy lifting. Having a simple process is extremely helpful for removing emotion and making informed decisions quickly.
We like to use this decision tree (credit: https://commonthreadco.com) as a guide.
Ad Kill Decision Tree
In order to be able to use the decision tree, you’ll need to have target CPAs for all correlated variables. Here’s a document that will help you with that.
When needed, we also use a Significance Calculator.
A/B Testing Like A Scientist
You simply plug in the numbers and the calculator tells you how confident you can be that any difference in performance is real and not due to random variation.
Since the whole point of Facebook’s targeting is to move away from random sampling, this obviously isn’t a perfect tool. But it’s useful as a reality check.
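If you prefer to script the check rather than use an online calculator, a two-proportion z-test gives a similar answer. This is only an illustrative sketch with made-up numbers, not a description of our actual tooling:
from statsmodels.stats.proportion import proportions_ztest
# conversions and visitors for variants A and B (made-up numbers)
conversions = [48, 67]
visitors = [1000, 1010]
z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.3f}")  # a small p-value suggests the gap isn't just random variation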
After The Test
The results from the test are documented and handed over to the creative team. We use a simple scoring scale (Poor, Okay, Good) and qualitative comments.
Scoring the ads can be more art than science, given the many factors involved. That’s why it’s best done by a media buyer who spends a lot of time in the ad account.
In Summary
By the time you’re reading this, we may have made a few (or many) changes to our A/B testing process. Nevertheless, the general principles and concepts apply.
Good luck! | https://medium.com/rho-1/our-a-b-testing-formula-the-easiest-way-to-improve-performance-by-2x-or-more-1d2b222dfdf7 | ['Josua Fagerholm'] | 2020-03-17 23:54:07.163000+00:00 | ['Digital Advertising', 'Advertising', 'Marketing', 'Digital Marketing', 'Facebook Marketing'] |
6 Reasons We Need to Reform the Peace Corps | 6 Reasons We Need to Reform the Peace Corps
From a Former Peace Corps Volunteer (RPCV Tanzania)
Source: Unsplash, Simon Berger
1. It is a form of systematic racism, for those it claims to serve and for those who serve.
The words, “systematic racism,” seem to be everywhere these days. However, it is crucial that we acknowledge that the words, “systematic racism,” do not refer to a system filled with racists. Instead, these words refer to a system that would uphold racism and disproportionately harm and subjugate people of certain races even if no racists were present. Those leading the effort to decolonize Peace Corps, @decolonizingpc, discussed systematic racism, saying that, “Even after adding more volunteers of color, more anti-racism trainings, more reforms (including the ones [they] have proposed on [their] page), Peace Corps will still be a neocolonialist organization because of the imperialistic goals of U.S. foreign policy.”
Which brings us to Number 2 —
2. It is an inherently imperialistic organization.
What does it mean to be an imperialistic organization, you may ask? Imperialism is an ideological framework, oftentimes carried out with government policy that works to extend the rule or authority of one country over another country. Such policies have historically been carried out under the guise of “civilizing” and “developing” other nations, employing both hard power, such as military force, but also soft power. Soft power imperialism, such as providing financial aid or human resources for development, functions best under the pretense of altruism, though it remains predominantly self-serving. Self-serving in what ways, you might ask?
Well, on to Number 3 —
3. It is a neocolonialist organization.
While Peace Corps holds dear to certain values and goals, it has always been an organization that functions mostly to serve U.S. foreign policy and the volunteers over the people that they are serving. In other words, Peace Corps functions as the United States’ most prominent soft power asset. In doing so, it is, by its very nature, an organization rooted in neocolonialism, or, “the practice of using economies, globalization, cultural imperialism, and conditional aid to influence a country.” In other words, we have traded direct political and military control, for a softer, but perhaps more insidious, form of control.
4. Father-Knows-Best Paternalism Meets The White Mans Burden
Imperialistic policies rely heavily on paternalism, which “limits a person’s or group’s liberty or autonomy and is intended to promote their own good.” A classic example of both imperialism and paternalism working together would be the 19th century European, “Scramble for Africa,” in which the African continent was sliced and divided in order to reap the benefits of its myriad natural resources. European nations — imbued with a sense of superiority that they saw as their divine providence from God himself — invaded African nations using the framework of paternalism to pillage and completely fracture traditional African ways of life and their political structures. Additionally, they imposed “patriarchal social structures into European-dominated hierarchies and imposed Christianity and Western ideals.” The effects of this so-called scramble still permeate African policy today. At the turning point of the 19th century, this seemingly pre-ordained calling to “civilize” other nations, was cemented in the poem, “The White Man’s Burden (1899),” which called upon the superior white man to go forth and colonize these far-off lands.
In the 20th century, the conceptual framework of the white man’s burden has been used by proponents of decolonization to critique foreign expansionism and interventionism, arguing that neocolonial programs more often than not perpetuate the idea that so-called developing nations are unable to embrace self-determination. Neocolonialist policy that cloaks itself in good intention lives at the intersection of “the white man’s burden” and the developing world’s need for self-determination and autonomy.
So, why do post-colonial nations still struggle for autonomy? Well, it’s far more complicated than a 6 point list could cover, but let’s dip our toes in —
5. Peace Corps’ aid structure is based on conditional financial and human resource aid that has no proven long-term results for those it claims to serve.
Aid on the African Continent is a problem. It is a complex, goliath of a problem. One need only read Dambisa Moyo’s scathing book, Dead Aid, in order to get the picture of the international development industrial complex. She makes the argument that the aid industry in Africa is not only ineffective, it is “malignant.” Over the last 50 years, more than $1 trillion in development aid has been given to Africa. She argues that this aid has, “failed to deliver sustainable economic growth and poverty reduction — and has actually made the continent worse off.”
While the entirety of the Peace Corps’ financial structure could, and should be investigated, here, we are going to unpack only one part of this structure: small grants, which are organized by the volunteers and then in-theory allocated to the communities in which they serve. As explained by those running the @decolonizingpc Instagram,
“The entire process for the Small Grants Program completely relies on the presence of the volunteer, from the application and fundraising to monitoring and evaluation… Peace Corps practices do not live up to [its primary goal of sustainability] because project funding by the Small Grants Program requires the presence of the volunteer, who at any moment can leave site permanently without notice. It should also not be up to Peace Corps or any volunteer to decide what sustainability looks like for a community.”
Peace Corps’ structure attempts to move away from the aid industry — in the sense that it sends (in theory) skilled volunteers — abroad to help build sustainable programs rather than blindly throwing money at the problem. This type of aid is not conditional in a quid pro quo sense, but rather that the aid is conditional on the volunteer being there. And if the volunteer must be there for the aid or benefit to be reaped, well then, the goal of sustainability is called into question, and dare I say, inherently flawed. This became startlingly transparent in March 2020, when thousands of volunteers were suddenly pulled from their host countries due to COVID-19 — leaving communities in a lurch, funding stalled, and crucial projects never to be finished.
The conditional aid structure and very nature of both Peace Corps promise and its inability to create sustainable change, calls to question whether or not it has a place in the global community. People that work in the aid/development world love to say, “the goal is to work ourselves out of a job.” And yet, it remains a financially fruitful industry for those employed by it, including the Peace Corps.
6. It relies on a Westernized model of development.
During my time as a Peace Corps volunteer, while I traveled, while I read books about the aid industry and the developing world, one question always seemed to creep in from the recesses of my mind: “Developing toward what?”
What exactly do we mean when we say, “a westernized model of development?” Two well-regarded Iranian scholars and economists claimed that, “The western model of development prioritizes technological modernization, free-market economy, a democratic political system, and western health systems as the basis for development.” So, these items are used as metrics to measure the success of a nations’ development. Yet, those nations that we consider successfully developed (i.e. Britain or the U.S.) reached their status, “at the expense of slavery, war, other gross human rights violations, and overexploitation of the environment within and beyond their borders.”
What does Peace Corps have to do with this? Well, back to those at @decolonizingpc who have been actively speaking out and unpacking this issue: | https://tyleranne04.medium.com/6-reasons-we-need-to-reform-the-peace-corp-c6c1a329ed00 | ['Tyler A. Donohue'] | 2020-10-28 19:08:18.693000+00:00 | ['Development', 'Travel', 'White Privilege', 'Peace Corps', 'Volunteering'] |
Pharmaceuticals | Three Problems and a Solution
Pharmaceuticals
We are a cornucopia of chemicals. At what point does that get too much?
Last time, I told you about plastics contaminating the soil and water, and causing death and destruction everywhere. But did you know that it’s not just plastics? Ironically, it could also be the very medicines designed to keep you alive and well.
If you remember from elementary school science, water does the same thing over and over again: condenses, precipitates, infiltrates, transpires, and evaporates. Although 70% of the world is covered with water, less than 3% is potable which means our water supply is very limited.
The same water that’s been on the planet since day one is still here — no more and no less — which means we’re all drinking dinosaur pee, and because of modern industrialized living, it’s just getting more degraded over time.
In the USA, there are certain Maximum Contaminant Levels (MCLs) for certain chemicals; the level above which they should not appear in drinking water. But not all chemicals have been studied, especially not in all combinations. For the most part, we’re not looking at what happens when chemicals combine because there are just too many combinations. How would you ever do control studies for all of them?
We are a cornucopia of chemicals. Some we ingest on purpose, some are thrust upon us through the air, the water, our skin via our clothing, and some come through our food. No matter how we get them, they’re a part of modern life. The manufacture of pharmaceuticals requires tons of water — at its inception, at its conclusion, and everywhere in between. It also requires pure water.
And since the earth’s water bodies and our human bodies both depend on clean water for survival, we need to make sure our interests, and water’s interests, are aligned.
What is today known as the Food and Drug Administration, the FDA, started as The Pure Food and Drug Act of 1906 after Upton Sinclair released, “The Jungle” in 1906 which described the horribly unhygienic conditions in the Chicago stockyards.
Who worked in those stockyards? Immigrants. People who came in from another country, even though that makes life very difficult, because life back home was even worse. Then, like today, the lower socio-economic rungs of society most often have the fewest environmental protections, as well as very little say in the matter.
Today, the FDA approves drugs and is our watchdog, but its reach is limited.
The FDA doesn’t have authority to recall a product unless it’s been misbranded or adulterated. All other recalls — including for safety — are up to the manufacturer to initiate. This means that at best, the pharmaceutical and cosmetics industries are self-policing and at worst — people are going to die — like with the Vioxx scandal where over 100,000 people suffered heart attacks before Vioxx was recalled.
But there’s more and that is: how are these often very powerful drugs affecting our water?
A 2009 study from the University of Exeter found hormones in the water were causing fish mutations. There’s a class of drugs known as Anti-androgens — manmade environmental chemicals that either mimic or block sex hormones.
They’re used in cancer treatments and other drugs — as well as pesticides — and they reduce fertility in male fish, causing a feminizing effect, which is a condition called Intersex. These “chemical cocktails” don’t just affect industrialised areas. According to USGS, intersex is a global issue affecting even wild-caught fish.
However, there is also some good news on the plastics front. Scientists have discovered a bacterium that eats plastic. Studies are still in the preliminary stages, but they look promising. Then there’s the worms, generally used as fish bait, which have also been found to have a taste for plastic.
And finally there’s my favourite, the plastic eating mushroom, Pestalotiopsis microspora which is a rare species from the Amazon rainforest that enjoys snacking on plastic and converting it into clean soil. It’s also tasty sautéed in olive oil and garlic! Kidding aside, Pestalotiopsis microspora is edible because somehow during the process of digesting the plastic, the mushroom removes all the toxins and converts them to clean soil.
On the legal side, on February 10, 2020, Senator Tom Udall (D — NM) and representative Alan Lowenthal (D — CA) introduced the Break Free From Plastic Pollution Act which, among other things, goes after single use plastic bags: the ones with a 15-minute working life that seem to always end up in the ocean. It’s not law yet, but fingers crossed.
What if I told you there was a chemical that can cause endocrine disruption? Surprise, it’s Triclosan! It was great at killing microorganisms which is why hospitals started using it as a sterilization agent in the 1970s. Because of its effectiveness, manufacturers started adding it to soaps, toothpastes, and other products as an antibacterial agent in overwhelming numbers.
What happened next? The CDC found Triclosan present in 75% of the U.S. population’s urine samples. Its overuse had resulted in the population developing immunity to the chemical’s sterilization features, so it wasn’t so effective anymore. Further studies found when Triclosan reacts with sunlight it degrades to form dioxin in surface water. Dioxin causes cancer, reproductive problems, damages the immune system, and can disrupt hormones, and like plastic, it takes a very long time to break down.
In September 2016, the FDA issued a final rule banning over-the-counter antiseptic wash products that contained Triclosan — along with 18 other chemicals — because manufacturers had failed to demonstrate safety from long-term exposure.
The manufacturers weren’t shocked. They’d already been feeling enormous public pressure and so had begun removing Triclosan from soaps and toothpaste several years earlier.
But here’s the twist: Triclosan is also classified as a pesticide and used as a material preservative in many products such as fabrics, vinyl, plastics, and textiles which are regulated by the Environmental Protection Agency or EPA. Triclosan’s conditional registration was up in 2018, but at that time, EPA determined there wasn’t enough information to pull the product from shelves so Triclosan is still being studied and used.
Triclosan is a great example of overlapping regulations. When used as a beauty aid, like in antibacterial soaps, it’s regulated by the FDA because it’s a personal care product, and when used as a pesticide, it’s regulated by EPA which means we have one chemical and two different results, leaving water to sort out the mess.
Look — Plastics and pharmaceuticals help us live longer, eradicate diseases like smallpox, and hopefully, COVID, they treat cancer, provide antibacterial protections, and overall do many other wonderful things all to make life better and easier .… but easier isn’t always better when there’s chemical residue left behind.
There’s enormous pressure on our water to do everything we’re asking of it and if we don’t get our waste streams under control, instead of saving us, the very chemicals we use everyday to make life better are going to sink us, and water along with us.
If we’re going to improve recycling, we need to start with improving the coding system and get rid of the misleading advertising, but, more importantly, reduce our waste stream. Sounds to me like it’s time to skip the plastic bottle, and buy yourself a stainless steel model, and then belly on up to your safe and regulated kitchen tap and fill that baby up.
As for drugs, take your remaining drugs to places that dispose of them properly, and never ever ever flush them down the toilet. The good news is that the manufacture of pharmaceuticals requires pure water so at least our interests are aligned with manufacturers there.
It’s always good to have an ally.
Feeling helpless? Once you see everything that’s going wrong with plastics and PFAS, it’s easy to throw up your hands and give up hope. The problem’s just so big, there’s nothing you could possibly do to help…is there? Well, like many stories, this one’s going to have a hopeful ending. Before we get there, however, it’s important to know about one other substance that’s damaging the environment too. Because then we can — to use an ugly metaphor — kill three birds with a stone, instead of just one. Stay tuned for Tuesday! | https://medium.com/snipette/pharmaceuticals-68ccaa9f8ff6 | ['Pam Lazos'] | 2020-11-01 07:02:10.375000+00:00 | ['Environment', 'Pollution', 'Pharmaceuticals Industry', 'Corporation'] |
Beyond Cage: Nam June Paik | The object of this essay is the analysis of the artistic connection between American composer and thinker John Cage and the Korean artist Nam June Paik. My aim is to highlight the influence that Cage had on Paik’s work and to demonstrate that Paik reacted to Cagean thought and furthered its conclusions it in an attempt to step out of its shadow and adventure into new realms of media experimentation and philosophical inquiry. I started thinking about their relationship as a result of the research I conducted as an intern for the Talbot Rice Gallery in Edinburgh, in preparation for the 2013 Edinburgh International Festival exhibition Transmitted Live: Nam June Paik Resounds.
The Meeting of Two Minds
Nam June Paik, John Cage and David Tudor after the Concert «Kompositionen» at Atelier Mary Bauermeister, Cologne, 6 October 1960, Photo: Klaus Barisch, Courtesy Galerie Schüppenhauer
Biographically, the two men shared a lifelong friendship that spans from their first meeting in 1958 to the death of John Cage in 1992. Although 20 years older than Paik, Cage’s respect for the younger artist and intellectual was manifest in their correspondence. A number of collaborations and homages linked the two artistically, including Paik’s first public appearance in Hommage à John Cage (1959), the score Gala Music for John Cage’s 50th Birthday (1962), the videotape A tribute to John Cage (1973) featuring Cage himself, the sound piece Empty Telephones (1987), and the 1990 video sculpture Cage from Family of Robots and Cage in Cage (1993), following his death.
A shared cultural context is the common ground for the development of their ideas. Cage was Zen Buddhist in spiritual outlook and was attracted to Oriental philosophy in an attempt to escape the philosophical hermeticism of Western thought. Paik was born in Korea in 1932 and arrived in Germany after studying history of art, music and aesthetics in Tokyo. In Europe, he came into contact with a vibrant art scene. The Westerner with Eastern sensibilities and the Easterner fascinated by the cultural history of the West met in Germany. The seeds of Fluxus, the Neo-Avant-Garde movement of the 1960s, had been planted by John Cage during a series of lectures he gave at the New School of Social Research in New York City (1957-1958) in which he introduced the notions of indeterminacy and chance operation in art praxis – the former extracted from Zen teachings, the latter drawn from Marcel Duchamp’s example. These classes were attended by La Monte Young and George Brecht, two important figures of the movement. Prior to this lecture series, a string of “happenings” staged by Yoko Ono had prefigured Fluxus, in parallel to a number of concerts involving Nam June Paik and Cage himself through 1960-61 in the studio of Mary Bauermeister in Cologne. Since its inception, Paik was situated at the very heart of the international movement by virtue of his close relationships with founders Cage and George Manciunas, as well as his unique intellectual preoccupations. Fluxus developed an aesthetic that was very similar in scope with the Dada movement of decades earlier. It sought to destabilize traditional modes of art production, presentation, interaction and institutionalization. It was distinctly anti-commercial, employing comic irony in its critique of the establishment. Throughout his career, Paik made crossovers from high to popular art and back. However, his work always had a powerful philosophical core, inspired by Fluxus, that held his executions together. Living in the Rhineland, an experimental region for the arts at the time, Paik was at the main hub of a vast network of individuals that exchanged artistic ideas with great ease. The historical backdrop — the Cold War, its ideological implications in politics and the stability that followed the “economic miracle” (Wirtschaftswunder) of the 1950s — is also relevant, as it provided Fluxus artists with a platform for activistic engagement in their socio-political context.
Transferring Paradigms Between Music and the Visual Arts. Paik Beyond Cage
As a reference point in the structural analysis of the respective oeuvres of Cage and Paik, Marcel Duchamp is particularly important because the concepts he employed influenced both artists. Duchamp’s artistic approach varied between spontaneity and elaboration. His readymades were unintentional; as he himself declared, the “creation” of them is reduced to choosing one object over another. The Large Glass, in contrast, necessitated careful planning and meticulous work. But the two needed not be mutually exclusive: he devised a complicated chance method in selecting notes for his (only) musical piece, Erratum Musical (1913).
In the 1950s, working with chance factors was a characteristic of New Music, a movement represented by composers such as John Cage, Luciano Berio, Karlheinz Stockhausen, among others. Cage’s seminal 1952 piece 4'33" featured a silence that lasted four minutes and thirty three seconds. Cage sought to foreground the unexpected elements in the environment over the expectancy of sound in the piece according to his dicta of indeterminacy. The work of Austrian composer Arnold Schoenberg was crucial in Cage’s conceptual arrival at his landmark score, as Schoenberg had managed to equalize the value of pitches in musical strips using his influential twelve-tone technique. Post-Schoenberg music was atonal, in which serialism dominated and emphasis on certain pitches was removed. Cage took this idea further and posited that all sounds, not just tones, were equal class citizens in a musical score. In doing so, he extended the concept of music to include the ostensibly aberrational or unwanted category of “noise,” and ultimately to swallow silence itself. Silence in Cagean thought is not, however, the complete absence of sound but rather an empty space that can be filled by life’s limitless noises. 4'33"’s contents depend on the environment of the receiver. Cage himself realised in 1951 after a visit to Harvard University’ anechoic laboratory that there was no such thing as absolute silence – that even internal sounds such as his heart beating could disrupt apparent silence. Cage thus fulfilled the requirements of both Duchampian chance procedure and Zen indeterminacy, famously declaring that he does not discriminate between intention and non-intention.
Like Duchamp before him, Cage was also meticulous in execution. Cage’s Williams Mix is a prime example of his processual scruple: his first audiotape composition, its four-minute runtime reveals thousands of pieces of audiotape assembled to play in parallel on multiple soundtracks.
Poster for
Exposition of Music — Electronic Television Courtesy Zentralarchiv des internationalen Kunsthandels, Cologne
This dualism of chance/meticulous assemblage exists in Paik’s work with television sets as well. Paik spent months learning the intricacies of electrical engineering in secret to prepare his landmark exhibition of 1963, Exposition of Music — Electronic Television, which featured twelve variously modified TV sets. In the Afterlude to the Exposition of Experimental Television, Paik states that “Indeterminism and variability is the very underdeveloped parameter in the optical art, although this has been the central problem in music in the last ten years.” He pays homage to both Duchamp and Cage and at the same time declares the intention of going beyond them. With the Electronic Television segment of his 1963 exhibition he aimed to study the indeterminism of television sets. When an unexpected accident occurred (one of the TV sets broke, thus displaying a mere horizontal line on the screen), Paik integrated it into his exhibition, naming it Zen for TV. The exhibition was a participatory event that involved all the senses of the viewer and could be regarded as the forerunner of both video art and interactive art. What Duchamp had managed to achieve with his concept of an open artwork governed by chance and variability, Cage and New Music had duly responded with the notion of an open work in music, in turn prompting a response from Paik in a video-based form of visual art. Thus, the chain of conceptual influence runs from Duchamp and the historical avant-garde to Cage and then to Nam June Paik.
Zen for TV (1963-1975), Courtesy of Estate of Nam June Paik, Seoul. Photo: MUMOK, Vienna
But for Paik, a mere translation of the indeterminism prefigured by Duchamp and co-opted by Cage from music to optical art was insufficient. Paik understood that he needed to push the idea of inserting chance elements into the artwork beyond the sonic realm. His first departure from Cage in this sense was his different treatment of “prepared pianos.” Cage prepared his pianos for a practical reason: while composing for performances, he observed that the space is only big enough to accommodate a single piano and had to compress in one instrument the sounds native to the keyboard alongside the thuds, crashes, and jingles of percussive apparatus. To variate, he inserted various objects to the strings in the piano that would make different sounds in the action of playing. Thus, his pianos could return to normal functional ones. In their modified state, they evoked the idea of randomness, surprising the audience.
Paik’s pianos could not return to their initial state. Once modified or destroyed they remained permanently so. Nam June first used a piano in his Hommage à John Cage of 1959, where he tore off ten of the piano’s strings and played it first as a stringed instrument and then as a percussive one before finally destroying it. In his solo show Exposition of Music — Electronic Television, 1963, Paik prepared six pianos in various stages of destruction and modification. He attached a blow drier to one of them, Klavier Integral, that could be triggered by a key press, a bra and barbed wire making it threatening and tabooing for the audience. In doing so, Paik offered a different sensory experience, beyond that of sound, that assumed a tactile and visual nature as well. Phenomenologically, his piece assumed a synesthetic character. Paik’s grand plan was that of extending the boundary of music into a sort of “integral art” that would encompass both the visual and the performative/theatrical as well as the sculptural, and, through the use of the interior of the villa which housed the exhibition, even the architectural, generating meaning through layout. Summarising his intentions, even before his manifesto of 1963, Paik declared in 1959: ‘Schoenberg wrote ‘atonal’. John Cage has written ‘a-composition’. Me, I write ‘a-music’.’
In his quest for an integral art, video art as outputted by TV sets proved to be the perfect multisensory experience. A flow of electrons that could be manipulated infinitely, the moving image was the texture of this integral art. But at the time, the television, just like the radio, was broadcasting preset sensory data. Paik’s conversion of the TV from a reproductive machine displaying pre-determined pieces to an open, productive one can also be linked to Cage and the influence chain that I have mentioned. With Imaginary Landscape No. 4, Cage had for the first time introduced the idea of discarding predetermined scores. The “site specific” and “live” attributes of the performance were a consequence of using the material that the radio stations beamed and not prepared pieces by Cage himself (incorporating randomness). As early as the 1920s, the idea of using traditional reproductive instruments for productive purposes (converting them) was traversing Western European creative circles. Cage’s piece corresponds to this concept but only as regards reception. Cage’s prepared piano alters the sound, Paik’s prepared pianos for the 1963 Exposition of Music event triggered events in the environment when keys are operated: a transistor radio would play, a key would shut down the lights in the room, and so on. Like the pianos, and like Cage’s radios, the TVs were not instrumental in reproducing a pre-established piece but were production tools in their own rights. And like Cage’s radios, the material they used relied on broadcasts from local stations. The whole set-up was an “open artwork,” and the audience was the main performer. To put it simply, Paik stripped away the original function of the TV (which is to reproduce) in order to convert it, exciting the visitors (and their instinctual need to play and touch) into making never-before-seen images, as productive machines.
Assimilating McLuhan: On Media as Extension of Perception. Correctives to Attention Deficits Induced by Media Culture
Cage’s Imaginary Landscape No. 4 consisted in twenty-four performers working with twelve radios and a conductor operating them, modifying the station, the pitch and the volume. The piece was site specific and “live,” playing with indeterminacy, as we have seen, as the performers only operated with sounds that depended on the station. It also introduced silence as a compositional element one year before 4'33". The follow-up piece, 0'00" was a third that used this element. Germane to this piece was the observation that in a media information soaked world, attention is a scarce resource. The pieces were minutes long exercises in heightened sensibility of perception. Cage specifically thought of media as ‘expansion to man’, expansion of perception. Radio as extension to man is an idea that can be traced around the same time to Marshal McLuhan, the Canadian media communications philosopher. McLuhan was highly influential to both Cage and Paik. In his 1964 book, Understanding Media: The Extensions of Man, McLuhan creates a theoretical framework for understanding contemporary media culture. His premise is that all technology is in essence an extension of human abilities and senses. The printed book, the radio, TV and even clothing are all extending what humans can do. Because of this, technology destabilizes the natural balance of our senses and in turn affects the sensibility of society. In a subliminal way, the invention of new media was, according to McLuhan, the main factor for cultural shifts in the West. The effects of media change, in essence, the structure of the world in which we live in. His famous idea ‘the medium is the message’ is a consequence of this: because we don’t make a conscious decision to participate in the dialogue that a medium opens, we permit the medium to impose its own assumptions upon us and thus transform it into the actual message as it shapes our world in the process. Any medium that heightens one sense to the detriment of the other four leads to individuation: phonetic language and then the movable type, invented in the 15th century, delved us into a world characterized by the primacy of vision above all other senses and in which individuals could detach themselves from a body of society that was less and less tribal. The advent of the electronic age, however, starting with the telegraph restored a certain balance to the senses and reconnected us into a global neural network, exteriorizing the human nervous system and bombarding it with an abundance of information.
But because media creates its own environment, just as ‘electric light is pure information’, ‘a medium without a message’ whose content is anything it shines on (the perfect McLuhanian metaphor), some are beneficial to certain messages while others are not. More exactly, media themselves can be hot or cool, depending on the participation of the audience.
‘A hot medium is one that extends one single sense in ‘high definition’. High definition is the state of being well filled with data. (…) Hot media are therefore low in participation and cool media are high in participation and completion by the audience. Naturally, therefore, a hot medium like radio has very different effects on the user from a cool medium like the telephone.’
Radio is seen as hot because it broadcasts continuously and offers all information in a straightforward way. TV is cool because it is immersive. ‘Radio will serve as background- sound (…) TV will no work as background, it engages you. You have to be with it.’ It is low definition and breaks away from uni-sensory experience by employing both sound and vision. Because of this it is the perfect gateway into the neural network.
Cage was the first to take control of the stream of media information through his modified radio transmissions and sounds. As the media information became more abundant, the pieces enabled the listener, through silence, to be more attentive. Silence enables reflection on perception itself and on corporeality and attention. It heightens ones sense but by breaking the mode of broadcasting where the ear is subjugated by the transmission, it ‘cools down’ the media, in McLuhanian terms.
In Zen for Film (an eight-minute-long white screen of Fluxus noiseless content), Paik referenced Cage’s silence. He invited Cage and Cunningham to watch an hour long film. As Cage thought about the similarities between their works, he stated:
“Offhand, you might say that all three actions are the same. But they are quite different. The Rauschenberg paintings [White Paintings]… become airports for particles of dust and shadows that are in the environment. My piece 4'33" becomes in performances the sounds of the environment. Now, in the music, the sounds of the environment remain, so to speak, where they are, whereas in the case of the Rauschenberg paintings the dust and shadows, the changes in light and so forth, don’t remain where they are but come to the painting. In the case of the Nam June Paik film that has no images on it [Zen for Film]…, the focus is more intense. The nature of the environment is more on the film, different from the dust and the shadows that are the environment falling on the painting, and thus less free.’”
Paik replied three years later:
‘N.B. Dear John, The nature of the environment is much more on TV than on film and painting. In fact, TV (its random movement of electrons) IS the environment of today.’
The McLuhian concept of the medium being the message is present here, although phrased differently. In his quest of blurring the roles between producer, performer and audience, Paik seemed to have noticed that the coolness of the TV promotes audience participation much more than all other mediums. McLuhan noticed this in 1964: ‘The cool TV medium promotes depth structures in art and entertainment alike, and creates audience involvement in depth as well.’
The Age of TV is the advent of the exhibition of art as a multi-sensory, deep experience that stimulates the non-visual senses as well. Paik understood this and was pursuing it with his ‘integral art’ as early as 1963.
Random Access. The Experiencer Free in Time and Space
In a truly immersive, integral art, audience participation had to be total. The degree of freedom of the experiencer of the artistic performance must be absolute. In a text from 1963, About the Exposition of Music, he observed that in most indeterminate music, the composer gives freedom to the interpreter but not to the audience. This held true for Cage’s work as well. The listeners had the option of listening or abstaining from it and this binary choice system was the same as it was for classical music. Moreover, the flow of time was in one direction just as the playing was from beginning to end. He explains further:
“The audience cannot distinguish the undetermined time or sounds of the interpreter… The problem becomes more confused if the interpreter has a ’rehearsal’…, or if the interpreter plays it many times as his favourite ‘repertoire’… this is the prostitution of the freedom… if the interpreter rehearses even only once, the degree and the character of the indeterminacy becomes the same as in classical, if not baroque, if not Renaissance, if not medieval music. This is why I have not composed any undetermined music, or graphical music, despite my high respect for Cage and his friends.”
His plan for Symphony for 20 Rooms was an attempt to remediate this problem. The listeners could move freely from one room to another, from one auditory experience to another. When Paik said ‘I am tired of renewing the form of music – serial or aleatoric, graphic or five lines, instrumental or bellcanto, screaming or action, tape or live… I hope must renew the ontological form of music…’ he was referring to these problems.
This brings us to the ideas of random access and variability which are central to Paik’s video art. Listeners (any kind of experiencer for that matter) had to have phenomenological options for the events they were presented. His Random Access (sticking bits of audio tape on the wall and then using the needle of a player to read them in any order the listener wants) is a direct translation of that principle.
Random Access (1963/2000) Photo: Erika Barahona Ede Courtesy of The Solomon R. Guggenheim Foundation, New York & Nam June Paik Estate, Seoul
Paik also adopted Cage’s idea that “music is a chronology.” He referred to his art as TIME art. His 1963 exhibition put equal weight on the TV and the objects sonores and Zen objects. He made it clear that the second part of the show was going to be on electronic art (Electronic Television) rather late in the planning stages. He kept his work on TVs secret and taught himself electronics to understand the technical principles he was working with. The objective was to convert the TV into a self-referential form and also deal with the phenomenological state of the experiencer. He defined freedom in terms of time, saying that all musical experiences are essentially strips of time. The purpose of time art was thus to liberate the experiencer from unidirectional time. He realized this in different ways. One is the simultaneous perception of images from thirteen independent TVs in the Wuppertal 1963 show. The experiencer has the possibility of choosing independent flows of information and experience and thus different chronologies. Time is also in a direct relationship with space, so another way to translate this experiment is to give the experiencer freedom to pursue spatiality as he wishes. This is his project in Symphony for 20 Rooms. The idea is that different paths in space yield different chronological sequences of sound.
A transnational artist, Nam June Paik is the embodiment of the Electronic Age from a rich multicultural perspective. An artist of the wave and the electron, his experiments with various media paved the way for an entire generation of manipulators of sound and image. John Cage’s influence on his thought is almost impossible to quantify. It seems that for every intuition that Cage had regarding the nature of artistic intention and production, Nam June Paik reacted with passionate dedication. Cage’s role was to push boundaries with thirsty curiosity. Paik’s was to explore their outer regions with wild imagination. Their enduring friendship produced a fruitful intellectual dialogue, both directly through collaborations and indirectly through the enormous provocation that Cagean thought posed to Paik, challenging him to escape its cage. Their beautiful relationship stands as a testament to the collaborative power of men, ever rising the scaffolding of human spirit. | https://medium.com/history-of-art/nam-june-paik-escaping-the-cage-d5f6fdfdd750 | ['Liviu Tanasoaica'] | 2018-01-15 12:49:06.540000+00:00 | ['Video Art', 'Art History', 'Art', 'Music'] |
Why Brand Strategy Matters More Than Ever, Even Online | Why Brand Strategy Matters More Than Ever, Even Online
The biggest lesson I’ve learned in my eighteen years in the industry is that design is not art
Photo by Kaleidico on Unsplash
Design needs to help solve a particular problem — usually a business one. And we need to become more aware and considerate about that. Brand Strategy is one way how we can marry design and business results more efficiently.
If, as a designer, you had a client dismiss your perfect design concept and were told to start again, you may have just met a client from hell. And if, as a client, you needed to ask the designer you’ve hired for half a dozen revisions to something as simple as a business card, you may have hired an incompetent mac operator.
Or, in both cases, it could mean you need to adopt a more strategic approach.
But first, a quick story about how I got started in the world of strategic brand design. | https://medium.com/better-marketing/why-brand-strategy-matters-more-than-ever-even-online-8cc1ba1fe486 | ['Ilya Lobanov'] | 2020-11-20 15:44:31.355000+00:00 | ['Online Strategy', 'Strategic Design', 'Branding', 'Marketing', 'Brand Strategy'] |
My Love, Let’s Throw Away Our Books and Live | My Love, Let’s Throw Away Our Books and Live
Let us get out
Photo by Eugenio Mazzone on Unsplash
All these books full of letters are too heavy for our lives.
We are saturated of these abstract signs running from page to page.
So much dust, we don’t dare make a move that would displace some air,
We hold our breaths and our voices like in a dead church dedicated
To abstract knowledge. Come in the garden and read some poetry aloud.
I love listening to your voice and your silence, your breath cleanses miasmas.
Right now, I want to hear your clear laughter and give you a tender kiss. | https://medium.com/illumination/my-love-lets-throw-away-our-books-and-live-7e8b6599e354 | ['Jean Carfantan'] | 2020-06-26 21:48:06.136000+00:00 | ['Poetry', 'Books', 'Kiss', 'Garden', 'Love'] |
First neural network for beginners explained (with code) | Creating our own simple neural network
Let’s create a neural network from scratch with Python (3.x in the example below).
import numpy, random, os
lr = 1 #learning rate
bias = 1 #value of bias
weights = [random.random(),random.random(),random.random()] #weights generated in a list (3 weights in total for 2 neurons and the bias)
The beginning of the program just defines libraries and the values of the parameters, and creates a list which contains the values of the weights that will be modified (those are generated randomly).
def Perceptron(input1, input2, output) :
outputP = input1*weights[0]+input2*weights[1]+bias*weights[2]
if outputP > 0 : #activation function (here Heaviside)
outputP = 1
else :
outputP = 0
error = output – outputP
weights[0] += error * input1 * lr
weights[1] += error * input2 * lr
weights[2] += error * bias * lr
Here we create a function which defines the work of the output neuron. It takes 3 parameters (the 2 values of the neurons and the expected output). “outputP” is the variable corresponding to the output given by the Perceptron. Then we calculate the error, used to modify the weights of every connections to the output neuron right after.
for i in range(50) :
Perceptron(1,1,1) #True or true
Perceptron(1,0,1) #True or false
Perceptron(0,1,1) #False or true
Perceptron(0,0,0) #False or false
We create a loop that makes the neural network repeat every situation several times. This part is the learning phase. The number of iteration is chosen according to the precision we want. However, be aware that too much iterations could lead the network to over-fitting, which causes it to focus too much on the treated examples, so it couldn’t get a right output on case it didn’t see during its learning phase.
However, our case here is a bit special, since there are only 4 possibilities, and we give the neural network all of them during its learning phase. A Perceptron is supposed to give a correct output without having ever seen the case it is treating.
x = int(input())
y = int(input())
outputP = x*weights[0] + y*weights[1] + bias*weights[2]
if outputP > 0 : #activation function
outputP = 1
else :
outputP = 0
print(x, "or", y, "is : ", outputP)
Finally, we can ask the user to enter himself the values to check if the Perceptron is working. This is the testing phase.
The activation function Heaviside is interesting to use in this case, since it takes back all values to exactly 0 or 1, since we are looking for a false or true result. We could try with a sigmoid function and obtain a decimal number between 0 and 1, normally very close to one of those limits.
outputP = 1/(1+numpy.exp(-outputP)) #sigmoid function
We could also save the weights that the neural network just calculated in a file, to use it later without making another learning phase. It is done for way bigger project, in which that phase can last days or weeks. | https://towardsdatascience.com/first-neural-network-for-beginners-explained-with-code-4cfd37e06eaf | ['Arthur Arnx'] | 2019-08-11 09:03:20.174000+00:00 | ['Perceptron', 'Artificial Intelligence', 'Neural Networks', 'Guides And Tutorials'] |
Six Wrong Predictions Reported By the New York Times | The New York Times is one of the prominent American daily newspapers with millions of readers in the US and across the globe.
During his tenure as the President of the United States, Donald Trump attacked the New York Times and other media outlets, consistently labeling them “fake news.” In contrast to his remarks, the New York Times has won 130 Pulitzer Prizes — more than any other newspaper. Established in 1851, it has been an influential newspaper in the US and around the world for decades. It’s known as a national “newspaper of record,” based on the Encyclopedia Britannica.
While acknowledging the New York Times’ reputation and credibility, I shed light upon six predictions reported by this newspaper that are untrue now. These predictions were made about Airplanes & Flying, Laptops, Apple & iPhone, Twitter, Television, and Automobiles.
1. On Flying: We won’t be able to fly in millions of years
On October 09, 1903, the New York Times published a piece about the future of flying titled “THE FLYING MACHINES THAT DO NOT FLY,” which stated:
“… it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years — provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials.”
On December 17, 1903, the Wright brothers flew their first airplane.
A decade ago, how many of us thought that producing flying cars might be a myth? Now, we not only have flying cars but also flying Gravity Jets, or let’s say flying humans.
What’s next?
Photo by Natali Quijano on Unsplash
2. On Laptop computers: No one would be interested in a Laptop
A New York Times article from 1985 discussed that few people would be interested in carrying a personal computer and laptop. The article titled “THE EXECUTIVE COMPUTER” states:
“On the whole, people don’t want to lug a computer with them to the beach or on a train to while away hours they would rather spend reading the sports or business section of the newspaper… the real future of the laptop computer will remain in the specialized niche markets. Because no matter how inexpensive the machines become, and no matter how sophisticated their software, I still can’t imagine the average user taking one along when going fishing.”
As of February 2019, over 70% of the US households had either a laptop or a desktop computer at home. Laptop computers continue to shrink in size but become more powerful in terms of capacity. “The first floppy disk, introduced in 1971, had a capacity of 79.7 kB” Now, even a Notepad file is larger than 80 kilobytes. Are kilobytes still relevant?
Don’t you take your laptop along when going fishing? Forget about notebooks; our mobile phones, tablets, and other similar devices are more accessible now — they’re more portable, personal, and closer to our hearts and EYES.
Photo by Campaign Creators on Unsplash
3. On Apple and iPhones: They will never succeed | They will never have a phone either
Apple was established in 1976. Two decades later, the New York Times wrote that Apple would fail, quoting a Forrester Research analyst:
“Whether they stand alone or are acquired, Apple as we know it is cooked. It’s so classic. It’s so sad.”
A decade later, in 2006, another article reported that Apple might never produce a cell phone:
“Everyone’s always asking me when Apple will come out with a cell phone. My answer is, ‘Probably never.’” David Pogue, The New York Times.
Apple released its iPhone in 2007. Since then, 2.2 billion iPhones have been sold. And Apple? It’s the most valuable brand in the world as of 2020.
Photo by Neil Soni on Unsplash
4. On Twitter: Only the illiterate might use it
A New York Times article discussed the emergence of Twitter in which a reference to Bruce Sterling’s earlier remarks on Twitter was also made. He is The New York Times’ best-selling science-fiction writer and journalist. In 2006, he was with the idea that Twitter would not be prominent amongst the intellectuals, but only the illiterate might use it:
“Using Twitter for literate communication is about as likely as firing up a CB radio and hearing some guy recite ‘The Iliad.’” — Bruce Sterling, The New York Times.
President Donald Trump has tweeted over 17000 tweets only through the first two-and-a-half years of his presidency — and the most literate people have retweeted it. As of 2018, Twitter has had over “321 million monthly active users.” Many politicians, celebrities, intellectuals, and other highbrows use it too.
Unlike the prediction made in 2007, it’s only the illiterate that cannot use Twitter, as it’s hard to say so many things in a few characters.
Photo by MORAN on Unsplash
5. On Television: It will not be a competitor of broadcasting, and people will not have time to watch it
In 1939, a New York Times article suggested that people won’t have time to watch television. For this reason, it cannot compete with other forms of media such as newspapers and radio.
The article narrates:
“The problem with television is that the people must sit and keep their eyes glued on a screen; the average American family hasn’t time for it.”
Based on a 2019 estimate, “307.3 million people ages 2 and older live in US TV households.” Now, you can watch the TV even in the toilet. You don’t need to carry your TV, but your phone or tablet.
But there is one thing: Newspapers are not gone. Over 69% of the US population still read newspapers. Based on Forbes, print remains the most common medium, with 81 percent reading this format.
According to studies, 2.5 billion people read print newspapers daily.
What happened to the TV industry? As of 2015, “An estimated 1.57 billion households around the world owned at least one TV set.” Since more people typically live in a household, billions of people watch TV every day — more than those who read newspapers.
Photo by Dave Weatherall on Unsplash
6. On High-Speed Automobiles: We won’t be able to drive over 80 miles per hour
Reporting on the dangers of high-speed driving, a New York Times article suggested that our brains cannot guide a car with any speed over 80 miles per hour. It reported a debate between two experts that took place in Paris in 1904.
The article says:
“It remains to be proved how fast the brain is capable of traveling […] If it cannot acquire an eight-mile per hour speed, then an auto running at the rate of 80 miles per hour is running without the guidance of the brain, and the many disastrous results are not to be marveled at.”
In 1894, Benz Velo had 12 mph (20 km/h). In 1904, it was claimed that no speed over 80 mph is plausible. In 2017, Koenigsegg Agera RS was produced with a production speed of 277.87 mph (447.19 km/h).
In Germany, there are no speed limits on most highways. | https://medium.com/swlh/six-wrong-predictions-reported-by-the-new-york-times-252c0f4b8e32 | ['Massùod Hemmat'] | 2020-12-15 19:03:49.558000+00:00 | ['The New York Times', 'Predictions', 'Journalism', 'Technology', 'Politics'] |
How to overpower HiPPO syndrome to make better design decisions | How to overpower HiPPO syndrome to make better design decisions
HiPPO: Highest Income Paid Person’s Opinion.
I know, I know, real HiPPOs don’t dress like that but that’s where creative freedom comes in, illustration by Quovantis
If you have Lead, Manager, Director, VP, or any job title which resonates with a leadership position then you might be perceived as a HiPPO. Without you even realizing it.
Getting labeled as a HiPPO could be a noteworthy achievement of your career, but not so much when you want to make decisions harnessing the diversity and collective wisdom of your group.
You see, when you are perceived as a HiPPO, what you say — even a mere harmless suggestion — has the potential to be interpreted as a decision.
It happens because your team thinks you must be good at your job otherwise you wouldn’t be in a leadership position. Or pessimistically, it’s ultimately your neck on the line then why not roll with your decision?
It leads to poor decision making as your team members’ ideas don’t see the light of the day. And if you don’t nip this HiPPO syndrome in the bud then it eventually creates a slippery slope to autocracy — or mediocrity.
Does it mean you should stop sharing suggestions? Does it mean you shouldn’t trust your experience or intuition which you’ve sharpened, over the years, by handling diverse situations? Does it mean that you put aside the learnings from your failures and successes? Does it mean that you shouldn’t exercise the authority of your own position to make decisions?
Absolutely not. That’s not what I’m suggesting.
Here is what you can do if you want to have spirited conversations, bring forth everyone’s ideas, make informed decisions, nip the HiPPO syndrome in the bud -
01. Practice silent design selections
Zen voting i.e. silent critique in motion, illustration by Quovantis
This technique is a great group dynamics leveler. It not only solves the problem of HiPPO syndrome but gets rid of groupthink altogether.
To practice it, hang your design options like a curator hangs paintings in a Museum. And invite every design team member to vote on their preferred design options. This silent-critique of design options rather than discussing them openly gives everyone a fair chance to cast their vote without getting sucked into groupthink. Also, I would encourage you (the perceived HiPPO) to vote last. This makes sure that you don’t influence other team members accidentally.
Jake Knapp popularized this silent critique method in his seminal book, Sprint. We fondly call this method Zen Voting.
02. Lead with questions rather than answers
Start with a question, illustration by Quovantis
Some of us have this habit of saying — “I think we should….” or “We must try…” while sharing suggestions to the problems that have just been conceived.
When a HiPPO uses words like ‘should’ and ‘must-try’, it comes out as an imposition, rather than a suggestion. Also, when a HiPPO shares such suggestions first, it promotes groupthink and impedes creative thinking to solve the problem.
Rather than starting with your suggestions, open up the conversation by saying — “How might we <your problem statement here>?” — to seek suggestions from your team. It ignites your design team to bring in their collaborative, and creative spirit to solve the problem rather than choose one of your suggestions.
And this way you demonstrate that ideas win over titles in your group.
03. Trust the data
a sine wave of data and you thought Mathematics wasn’t useful, illustration by Quovantis
Experience leads to wisdom — and hones your intuition. But, it doesn’t mean your intuition is always right.
So take a pause whenever you feel like saying “Okay, why don’t we do this…”.
Reflect on the user persona and see if it solves the problem for them. Or, see if there is any usage analytics or research data to back your claim for the proposed solution.
Consider this — your users are dropping off at your site’s checkout page and you suggest a redesign to increase user engagement by including the cross-sell options. Before proceeding, stop and think if it would work. Do you have data to prove it?
Would adding more options at checkout solve the cart abandonment issue or complicate it further? Wouldn’t it make more sense to simplify the checkout process and help users focus only on the products in their shopping cart rather than increasing their cognitive load by making them look at more products?
(I know, I know, no designer in their sane mind would ever pose this kind of a solution. It was merely a hyperbolic hypothetical scenario.)
In case you’re designing it for the first time and don’t have any data to look into, invest in designing multiple options. You could do multivariate testing to establish what works best rather than just relying on your intuition.
04. Ditch the head-chair
This looks more like a throne rather than a head-chair, but I’m sure you get the point, illustration by Quovantis
This is plain ol’ common sense. Some leaders end up taking the head-chair in meetings. Nothing wrong with letting people know who is the boss. Well, if that’s your thing.
But, if you are fostering a collaborative spirit where only the best ideas survive, little things like the position of your chair could have a subliminal impact.
PsychologyToday recommends sitting in the second or third spot on a big table to signal you being part of the team. And, you are here to collaborate not dictate.
05. Ask the AWE question
Seriously, what else?, illustration by Quovantis
AWE: ‘And What Else?’
Rather than pitching your suggestions, encourage your team to generate more design ideas. Ask the AWE question until it becomes obvious that the team has exhausted all their creative options. And then, only then, present your ideas.
You would be (pleasantly) surprised that your team would be able to come up, most of the time, with a solution that you were itching to propose. And if for whatever reason, they aren’t able to, then you can always go last.
This builds the team’s creative, and confidence muscle as they get to be accountable for solving their own problems rather than taking the official decree from their HiPPO a.k.a You.
06. Ask your team to consider their existing commitments
To be or not to be, illustration by Quovantis
You bring in this question when you’re about to arrive at a design decision. It helps your team members focus, especially the overzealous and overcommitted ones — “If you are saying yes to this design option, what are you saying no to?”
This becomes even more pertinent when the team is about to implement one of your suggestions.
It helps them reflect on their existing workload, and see if they aren’t signing up for commitments they can’t keep up. It makes them consider the time schedule to complete this design option and pushes them to prioritize better.
This question either helps them sign up to commitments they can deliver on, or keep on discovering solutions that can be completed within a given timeframe. | https://uxdesign.cc/how-to-overpower-hippo-syndrome-to-make-better-design-decisions-3c037ab305b3 | ['Tarun Kohli'] | 2020-12-26 14:52:19.154000+00:00 | ['Product Design', 'Leadership', 'UX Design', 'Product Management', 'Design'] |
9 UI/UX must tools for designers | 9 UI/UX must tools for designers
There are some tools that UI/UX designer must know and some tools that are good to know. Let's go through some of them!
image by https://www.netlingshq.com/blog/best-ui-design-tools-2019/
1. Google
image by https://www.dawsondawsoninc.com/google-it-infographic/
Well, some might say that Google isn’t exactly a tool, but without Google, we would be still in the dark ages. Any doubt, any question, any obstacle probably can be solved with Google. Unless it’s the case in which you are an exceptional wonder and at the very top of the field, there are always people with more experience and/or just have the answer you are seeking. In the UI/UX (same with most professions) you are in constant learning and reading.
When you don’t know something — google it. When you are not sure of something — Google it. Even many AB tests you’re planning to do — Google it, the answer is probably already there.
Of course in this section, I am including — Medium, Youtube, Reddit, Quora, Wiki How, anything that Google will bring up.
2. Pen and Paper
Photo by William Iven on Unsplash
Maybe this one is obvious, but it’s maybe the best wireframing tool and not only that. I suggest everyone use them more often. Besides rough sketches there are card sortings, gathering ideas, and problems, writing some notes. I advise you to train your hand at sketching, it’s good to be good at that. Even if you won’t ever draw anything, it’s a good thing that your rough sketches would look nice, especially when a potential client is watching.
3. Sketch/Figma/Adobe XD
Today’s three main UI design tools are used by the vast majority of UI/UX designers. All three are very similar with slight differences.
3.1 Sketch
image by https://search.muz.li/NGZkM2QyNDMz
SketchApp is the Godfather of all design tools. It’s the Mercedes-Benz, the first 100% UI/UX design tool in the industry. The vast majority of the prototyping tools work well with Sketch. Before Sketch web designers worked with Photoshop/Illustrator/Corel. And to be honest, after Sketch it is pointless to use those tools if you’re not designing some very unique website/app where simple shapes won’t make it. And don’t get me wrong, I respect Photoshop more than any other design tool, but the scope of photoshop is too big for UI design. Corel Draw and Illustrator are vector-based software mainly used to create logos, printing design, illustrations, etc.
3.2 Figma
Figma is my favorite tool. It took everything the best from Sketch and added many things that Sketch missed. The best value of Figma is that it is browser-based (so doesn’t depend on the platform) and everything is synced; one team member changes something, and it is already changed in the whole project, without the need of publishing the changes. Another one of the pros of Figma is that CSS is already there and you don’t have to use a third tool such as Zeplin or Invision Studio’s Inspect to handoff. By the way — Figma is always improving, they recently even added scrolling animation to its prototype.
3. 3 Adobe XD
XD is a go tool when you are working at a fast pace. It’s the tool that solves problems in a shorter time, but it has almost the same problems that Sketch has (except that Sketch is Mac only, and XD is available on both Windows and Mac), also there is no inner Shadow in XD (what’s up with that?).
I can go on and on about UI Design tools, but I guess that’s a topic for another day.
4. Prototyping with InVision Studio/Proto.io/Marvel/Origami
4.1 InVision Studio is a bundle of 4 great tools that are very useful for UI/UX designers.
Prototyping is not just a great tool with cool interaction animations.
is not just a great tool with cool interaction animations. The Inspect is for CSS handoff to developers.
is for CSS handoff to developers. Freehand tool helps with wireframing, whiteboard interviews, sitemaps, and generally, it acts as a pen and paper on your computer with many useful templates already there.
tool helps with wireframing, whiteboard interviews, sitemaps, and generally, it acts as a pen and paper on your computer with many useful templates already there. Craft tool is basically a UI design tool.
Also, it works great with Sketch.
4.2 Proto.io is a prototyping tool that helps designers to create real-looking hi-fi prototypes.
4.3 Marvel is another tool that helps you create from lo-fi to hi-fi prototypes, Wireframes as well as CSS and HTML handoff. Another prototype tool that with its great interactions makes it look like the final product.
4.4 Origami is the tool that maybe makes the most advanced, real-looking interactions and it works well with Sketch.
There are cons though: it doesn’t hand off the code of interactions, works only on Mac, and the learning is hard. It can be very challenging for beginners.
5. Zeplin
Zeplin is a tool that translates UI into CSS. It is a great tool for handoff and collaboration. And it works great with Sketch, XD, Photoshop, and many more. I use Zeplin relatively rare as Figma has its main functionality and as mentioned before I am a Figma fan.
6. Google Analytics
I know, I have already mentioned Google but Google Analytics is a whole other tool. As the name suggests — it analyzes. It’s a great tool for gathering statistics about how your website does in the field, receiving quantitative data, etc.
7. Strategy with Flowmap/Balsamiq
Photo by Amélie Mourichon on Unsplash
7.1 Flowmapp is a tool that helps you with the strategy at the beginning of the project. It’s a great tool to create IAs, Sitemaps, user flows.
7.2 Balsamic is a simple yet great tool for wireframing. It almost doesn’t have any learning curve, anyone can work with it. There are already many elements of wireframes and with just a simple drag and drop you can make a pretty good wireframe.
8. Qualitative research with Bugsee/Appsee/Hotjar
8.1 Bugsee is a tool that aims for bugs and crashes of mobile apps.
8.2 Appsee on the other hand is not focusing on bugs. It helps to understand the users and optimize UX and Performance.
8.3 Hotjar is a tool that does website analysis and gives feedback from users. It also helps to learn about users and their experiences in the product. It has features such as recordings of user journeys, form analysis, surveys, recruitment of testers, etc.
9. User testing tools: User Report/Usabilla
9.1 User Report is another great tool that is based on surveys and feedbacks. It works as a part of your website/app and helps you to learn about your users as well as connect with them. It also has Google Analytics integration.
9.2 Usabilla is a feedback collection software. It provides real-time feedback from users. It also helps you target your questions and timing. | https://uxplanet.org/9-ui-ux-must-tools-for-designers-df60745d990e | ['Daniel Danielyan'] | 2020-12-19 07:17:39.882000+00:00 | ['UX', 'UI', 'Design', 'Tools', 'Success'] |
A Systematic Approach to Dynamic Programming | Approaches to DP
The two main approaches to dynamic programming are memoization (the top-down approach) and tabulation (the bottom-up approach).
So far we’ve seen that recursion and backtracking are important when applying the DP premise of breaking a complex problem into smaller instances of itself. However, none of the code snippets above classify as DP solutions even though they use recursion and backtracking.
For a naive recursive solution to apply as a DP solution it should optimize to caching the results of computed sub-problems. In the short definition of DP above, the emphasis is on solving smaller instances only once — with a strong emphasis on “only once.”
Memoization
Memoization = Recursion + Caching
Our framework for a dynamic-programming-worthy problem said that it usually contains overlapping sub-problems. Remember the Fibonacci code above? If we create a recursive tree to compute the seventh Fibonacci number we get this:
Notice how many times we solve the same sub-problem. For example, Fib(3) is computed five times, and every fib(3) call recursively calls two more fib. That’s ten function calls solving the same fib(3) sub-problem.
Now we start talking DP! Instead of solving the same problem multiple times, why don’t we solve it just once and store it on some data structure in case we need it later? That is memoization!
Fib code optimized to caching.
This approach is the easiest of the two DP approaches presented here. Once you can get a recursive solution to the problem, just make sure you cache the solutions to the sub-problems. Before you make recursive calls to solve a sub-problem, check if it was already solved. Notice that here we do a trade: To archive time efficiency, we’re willing to give up memory space to allocate all computed sub-problems’ solutions.
Dynamic programming usually trades memory space for time efficiency.
When caching your solved sub-problems you can use an array if the solution to the problem depends only on one state. For example, in the fib code above, the solution to a sub-problem is the ‘nth Fibonacci’ number. We can use n as an index on a 1D array, where its value represents the solution to the fib(n) sub-problem.
Sometimes the solution to the problem may depend on two states. In this case, you can cache the results using a 2D array, where columns represent one state and rows represent the other. For example, in the famous Knapsack problem (which we’ll explore later) we want to optimize for total value, given a maximum weight constraint and a list of items. A knapsack sub-problem may look like this: KS(W, i) → (Max value), where we interpret it as: “What is the maximum value I can get with a weight ‘W’ and considering the ‘ith’ item?.” Therefore if we want to cache this solution, we need to take both states into account, and that can be accomplished using a 2D array.
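To make the two-state caching concrete, here is a rough sketch of a memoized 0/1 knapsack. The KS(W, i) shape follows the description above, but the recurrence details, names, and sample data are my own illustration, not code from the original article:

def knapsack(W, i, weights, values, memo):
    # KS(W, i): best value achievable with capacity W, considering items i onward
    if i == len(values) or W == 0:
        return 0
    if memo[W][i] is not None:
        return memo[W][i]
    best = knapsack(W, i + 1, weights, values, memo)  # skip item i
    if weights[i] <= W:  # or take item i if it fits
        best = max(best, values[i] + knapsack(W - weights[i], i + 1, weights, values, memo))
    memo[W][i] = best
    return best

weights, values, W = [1, 3, 4], [15, 50, 60], 4
memo = [[None] * len(values) for _ in range(W + 1)]
print(knapsack(W, 0, weights, values, memo))  # 65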
Memoization is great — we have the elegance of a problem described recursively, and we’re solving overlapping sub-problems only once. Well, not everything is that great. We’re still making a bunch of recursive calls. Recursion is expensive both on processor time and memory space. Most recursive functions will consume call stack memory linearly with the number of recursive calls needed to complete the task.
There are special types of recursive functions, known as tail-recursive functions, that don’t necessarily grow the call stack linearly if optimized correctly. These can execute in constant call stack space. Without going into many details, a tail-recursive function performs the recursive call at the end of its execution, meaning that its stack frame is useless thereafter. The same stack memory space can be reused to hold the state for the next recursive call. The problem that arises is in dealing with the return address: we want to make sure that after the recursive tree ends, you return to the instruction that started the series of recursive calls. Feel free to do some research on this topic.
Recursive functions always carry the weight of potential stack overflow issues. The following is a Python command to check the recursion depth limit. If I try to use Python with recursion to solve a problem whose solution involves a recursive depth of more than 1000 calls, I’ll get a stack overflow exception. That quantity can be increased, but we get into language-specific topics.
Recursion depth limit in Python
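The command itself isn’t reproduced in this text version; in CPython it is essentially this (1000 is the usual default, though it can vary by build):

import sys

print(sys.getrecursionlimit())  # typically 1000 by default
sys.setrecursionlimit(3000)     # it can be raised, at your own risk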
In defense of recursive programming, we must say that recursive functions are often easier to formally prove. Recursive functions provide you with the same repetitive behavior of raw loops but without in-block state changes, which is a common source of bugs. States in recursive programming are updated by passing new parameters to new recursive calls, instead of being modified as the loop progresses.
Tabulation
Tabulation aims to solve the same kind of problems but completely removes recursion. Removing recursion means we don’t have to worry about stack overflow issues, nor about the common overhead of recursive functions. In the tabulation approach to DP (also known as the table-filling method) we solve all sub-problems and store their results in a table (matrix). These results are then used to solve larger problems that depend on the previously computed results. Because of this, the tabulation approach is also known as a bottom-up approach. It’s like starting at the lowest level of your recursive tree and working your way up.
Tabulation can be much more counterintuitive than recursive-plus-cached memoization solutions. But it’s also much more efficient in terms of time complexity and space complexity if we take into account the call stack memory which increases linearly with the number of recursive calls — again, assuming it’s not tail-recursion optimized.
If you go back to the steps initially presented in this piece, you’ll find that tabulation is the last step in the systematic approach to DP. This is because it’s easier to get to a tabulation solution by first solving the problem with recursion and backtracking, then optimizing it to caching with memoization techniques, if necessary, and finally making a few adjustments to update it to a final bottom-up solution. Later you will see a few tricks to achieve that. But first, let’s see how tabulation works.
We’ve been talking about states a lot, but we still do not have a formal definition of what we mean by states in our DP context. What I understand by states are the parameters that affect the outcome of a recursive call. States are what differentiate one call from another and allow us to explore different choices and get an optimal result. We’ll get some practice defining states at the end of this article.
Since tabulation proposes a bottom-up approach, we must first solve all sub-problems that a larger problem may depend on. If we don’t solve the smaller problem, we can’t move on to solve the larger one. In tabulation, we use one for-loop for every state of the problem. But where do I make it start? Where do I make it end? To answer that, let’s explore the following recurrence relation.
Let’s say we have a function that solves some optimization problem — call it optimal, or OP for short. And let’s assume that the nature of the problem makes OP(n) depend on OP(n-1), i.e., OP(n) is defined in terms of OP(n-1).
This recurrence relation is telling you that you cannot know what OP(n) is if you don’t know what OP(n-1) is. That means we need to start at the lowest value of n, say 0, and solve every sub-problem all the way up to n. That’s the trick: if your recurrence relation shows that your states are decreasing, then your loops should run in increasing order so you compute every sub-problem that larger problems depend on. Remember: bottom-up.
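To see the bottom-up loop in action, here is a minimal tabulated Fibonacci sketch (my own illustration, not the article's exact code). It uses one loop over the single state n and fills the table from the smallest sub-problem upward:

def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[k] holds the solution to sub-problem fib(k)
    table[1] = 1
    for k in range(2, n + 1):  # every smaller sub-problem is solved first
        table[k] = table[k - 1] + table[k - 2]
    return table[n]

print(fib_tab(7))  # 13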
This will become clear when we apply all the strategies learned to real problems. And guess what? That will start now. | https://medium.com/better-programming/a-systematic-approach-to-dynamic-programming-54902b6b0071 | ['Fabian Robaina'] | 2019-08-15 02:11:49.849000+00:00 | ['Programming', 'Computer Science', 'Python', 'Dynamic Programming'] |
Why Every Freelance Marketplace That Goes Public Becomes a Startup Titanic? | Photo by K. Mitch Hodge
Fiverr just went public. Another freelance marketplace will bite the dust.
Why So Serious — Why So Pessimistic?
All mega-size freelance platforms are public companies, now. Freelancer dot com is a “veteran” in this field. Upwork will have to wait for a few more months to light its first public birthday candle. Fiverr didn’t even have the time to clean up after their NYSE party.
And, there you have it, the “Freelance Triumvirate” went public, with no exceptions. That’s not a coincidence. Actually, I think I see a clear pattern.
Every freelance marketplace’s public journey has to go through these five phases.
The First Enthusiastic Phase
The enthusiasm of freelance platforms at their stock market debuts is simply overwhelming. I dare to say, it can be quite contagious. All you can see is the confetti rain, but you can’t hear a thing. The ringing of the stock market bells can be deafening. Some of these bells became the victims of this enthusiasm.
By default, the initial share prices jump sky-high during the first 24 hours after the stock market debut. Upwork had the most “modest” debut with “just” 50%, give or take. Fiverr hit almost a 100% increase compared with the initial IPO price. Freelancer dot com is still the absolute record-breaker because
What’s happening after the first phase is over?
The Second Stock Market Roller Coaster Phase
In this phase, the IPO honeymoon is far from over. You go up. You go down. That’s a completely normal thing. That’s why nobody bothers to panic.
When you look at the stock market graphs, they all look the same, don’t they?
ASX: FLN
NASDAQ: UPWK
NYSE: FVRR
How long does this up and down phase last? Well, I give it a year.
The Third Phase — The First Taste of Bitter Reality
After the first year as a public company, every freelance marketplace gets the wake-up call. The trouble is that this call is hidden in the financial reports.
The serious investors know all too well that the numbers never lie. You just have to make sure you’re looking at the right numbers.
Let’s take Upwork’s report for the first quarter of 2019.
Source: Upwork
If you compare Upwork’s revenues for the first three months in 2018 and 2019, then you can cheer up. There are positive changes of 16.4% for the total revenue and 20.7% for the gross profit. Absolutely nothing to worry about. On the contrary, you can still ride the optimistic wave.
However, if you dig a little deeper, you can’t avoid a nasty surprise.
Source: Upwork
The total operating expenses have increased by 13.3%. If you leave out the provision for transaction losses, you get an increase of almost 15%. The most troubling part is that the general and administrative costs have jumped to 28.7%. There’s no happy ending here, make no mistake about it.
The Fourth Phase — The Real Stock Price Cold As Ice
Your stock market roller coaster ride eventually has to come to an end. Once your stock prices stop going up, what you get is the real value. Look at the attached graphs.
Lassie is coming home. The initial IPO price you began your public journey with will be the last and the only price of your shares. The trouble is the moment when you can’t even get this initial price.
If you can sell your shares as long as they’re worth something, then your investment adventure into the freelance universe may not leave you in tears.
So, which phase are our public freelance marketplaces currently in? Well, Freelancer dot com is deep into the fourth phase. Upwork is in the second phase. And, of course, Fiverr just got the sweet taste of the first phase.
The Fifth Phase — The End of Freelance Days
Can the stock market of the freelance platforms collapse? There’s an ominous symbolism between the years 1929 and 2029. I sure hope, for the sake of all freelancers, that history won’t repeat itself. However, none of these graphs is encouraging.
Why did the most popular and powerful freelance marketplaces decide to go public in the first place? Well, I’m not a Wall Street guru, but I know that there’s one reason and one reason only for any company to go public. They need the money. Is this their last and best option? If so, then the freelance industry, as we know it, is doomed.
Is The IPO Way — The Only Way for Freelance Platforms?
If you don’t remember Guru, then you know nothing about freelance history. This is arguably the oldest freelance platform. They have been around for almost twenty years. Hey, that’s really something. This freelance marketplace has had more ownership shifts than you can count, but they have never filed for an IPO (to the best of my knowledge).
If you have never heard about goLance, then you will never learn about the future of freelancing. They won two American Business Awards and the People’s Choice Award. For a relatively small privately-owned freelance marketplace, that’s really something. What’s even more important, their CEO Michael Brooks strikes me as an entrepreneur who doesn’t build to sell.
What’s Going To Happen When We Come to the End of Our Freelance Road?
One day, Freelancer dot com, Upwork, and Fiverr will find themselves together in the fifth phase. I sure hope I’ll be a retired freelancer by then. I also hope I won’t have to say — I told you so! | https://medium.com/build-something-cool/why-every-freelance-marketplace-that-goes-public-becomes-a-startup-titanic-bf71d6ead06 | ['Nebojsa Todorovic'] | 2019-06-18 18:57:30.683000+00:00 | ['IPO', 'Startup', 'Tech', 'Fiverr', 'Freelancing'] |
Four Books You Need to Read About School Shootings | I am not, however, advocating for guns to be taken away. There are benefits to guns. No, what made me rage-debate that bumper sticker was its loose logic, its sloppy facts, its off-the-mark assumptions. These same qualities plague much of the discussion around school shooters.
News about school shooters is inescapable now, but not so long ago, it was much rarer. In fact, “school shooters” was not a common term until the late 1990s, and only in the last few years have they been studied as a specific category of killer.
Here are four books that date from this proto-era. Think of them as setting the stage for our current gun control moment.
Erik Larson, Lethal Passage, 1994
Image from Amazon
On December 16, 1988, sixteen-year-old Nicholas Elliott walked into his high school, Atlantic Shores Christian School in Virginia Beach, Virginia, with murder on his mind. His target: another student named Jacob Snipes, who had been taunting Nicholas (Jacob was white, Nicholas black). A teacher, Karen Fairley, tried to stop Nicholas; he killed her and kept moving. He wounded another teacher, shot at a third, and menaced a group of students before he was subdued. Three Molotov cocktails were found in his locker. His book bag held the makings of a pipe bomb.
Erik Larson, who would go on to write nonfiction bestsellers such as The Devil in the White City and Dead Wake: The Last Crossing of the Lusitania, tells Nicholas’s story in Lethal Passage. He pioneers a lot of elements that are now standard. For instance, he lays out some shocking statistics:
70,000 Americans killed by guns in 1991–1993
150,000 gun-related injuries per year
8,050 people killed or wounded in Los Angeles County in 1991 (13 times the number of Americans killed in the First Gulf War)
He also quotes a student who told one newspaper that “All the kids said he was going to shoot someone.” Such quotes turn up about every shooter, it seems.
The heart of the book is not the shooting but the gun Nicholas used: a Cobray M-11/9. Larson gives the history of this type of gun, starting with its invention by Gordon Ingram in the 1960s. He then traces Nicholas’s particular Cobray from assembly line to Atlantic Shores, highlighting the frauds and failures by which the piece ended up in an angry teenager’s hands.
“I researched that book very, very carefully,” Larson told me during an interview in 2016. “I learned to shoot, and I gotta say that shooting a handgun is a lot of fun.”
He praised guns as “exquisite works of engineering” before discussing what prompted him to write this book over two decades ago: gun culture, an all-too-familiar argument.
Gun culture bothered Larson then, and it bothers him now because “society bears all the costs of irresponsibility. We have to shift the costs to the gun owner. What that means is, yes, there should be a licensing process. There is nothing in the Second Amendment that says you can’t license and register firearms. Nothing.”
In a preview of the Parkland teens’ message, Larson reserved his most astringent criticism for the National Rifle Association, calling it “dystopian and paranoid” and claiming that the organization “is not about guns at all. It’s about libertarian politics.”
Dave Cullen, Columbine, 2009
Image from Amazon
Nicholas Elliott was a prototype. Over the next decade, more shooters appeared. All have been eclipsed by more recent killers — all but two: Eric Harris and Dylan Klebold.
On April 20, 1999, the two murdered thirteen people and injured twenty-four others inside Columbine High School in Columbine, Colorado. The modern notion of school shooters was born in the bloodbath of that day. According to Malcolm Gladwell, Harris and Klebold “laid down the ‘cultural script’ for the next generation of shooters.”
With infamy, of course, comes mythology. There were reports that certain students were targeted, that there were no warning signs, that the killers were misfits who had been bullied. Dave Cullen’s Columbine is an encyclopedic rebuttal of these myths.
Cullen’s big reveal is that Harris and Klebold, despite being most people’s definition of “school shooters,” were actually bombers. Their plan was to blow up their school. To that end, they planted two 20-pound propane bombs in the cafeteria, wiring them to detonate at 11:17am. Their shotguns and semi-automatics would be trained on people fleeing the burning building, and they had another set of bombs in their vehicles, set to go off at noon to take out first responders.
All the bombs failed, thank God. Yet the attack was still well organized. The two planned for a year, dreaming of a widespread massacre, a strike at society itself. Their school was the first step, chosen for its convenience.
Nor were the two outcasts. Both had friends, were reasonably popular, played sports, joined clubs. Klebold was more withdrawn, depressive and suicidal, although he had a hot temper.
Harris — and if you take away a single thing from Cullen’s book, it should be this — was a sociopath. We know this from his journals, his website, and his home videos. Seemingly sweet and deferential, polite on the surface, he was stone cold underneath.
Cullen sums it up this way: “Klebold was hurting inside while Harris wanted to hurt people.”
Peter Langman, Why Kids Kill: Inside the Minds of School Shooters, 2009
Image from Amazon
It seems 2009 was the year for landmark school shooter books. In that year, psychologist Peter Langman released his long-awaited study, Why Kids Kill. It was one of the first books to examine school shooters as a unique subset of killers.
Langman calls such killers “rampage school shooters,” which he defines as “students or former students [who] attack their own schools.” Their actions are “public acts, committed in full view of others,” and their victims are both people they dislike and people “shot randomly or as symbols of the school.”
The heart of the book is an examination of ten shooters: Evan Ramsey, Michael Carneal, Andrew Golden, Mitchell Johnson, Andrew Wurst, Kip Kinkel, Eric Harris, Dylan Klebold, Jeffrey Weise, and Seung Hui Cho. They range in age from 23 (Cho) to 11 (Golden). Some killed only one or two people, whereas Cho murdered thirty.
Langman classifies each as psychopathic, psychotic, or traumatized. Psychopathic shooters are narcissists, lacking in empathy, normal on the surface yet sadistic. Psychotic shooters have hallucinations, delusions, disorganized thoughts, eccentric beliefs, and odd behavior. Traumatized shooters grew up as victims of abuse, domestic violence, and chaotic households.
Like Cullen, Langman is committed to debunking school shooter myths, calling them “factors that do not explain.” These factors are
Gun culture (though Larson does indict this)
Antidepressants like Prozac or Luvox
Detachment from school or feelings of alienation
Violent video games, movies, or television
Rejection
Depression
Bullying
If you wonder what Langman would make of more recent shooters like Adam Lanza and Nikolas Cruz, wonder no more: he has written a follow-up book, School Shooters: Understanding High School, College, and Adult Perpetrators, and maintains a website on the subject.
Stephen King, Rage, 1977
Image from Amazon
If school shooters had a Bible, it would doubtless be Rage. Written by Stephen King in 1977 under the name Richard Bachman, it is the story of Charlie Decker, a high school senior who, after being expelled, grabs a pistol from his locker, runs to his algebra class, and murders the teacher, Jean Underwood.
The students become his hostages. When another teacher, Peter Vance, tries to enter the room, Charlie kills him as well. Police show up, and the standoff lasts four hours, with Charlie agreeing to release the captives. When the police chief enters the classroom, Charlie moves as if to shoot him but is shot instead. He survives and ends up in a psychiatric hospital in Augusta, Maine.
At least five actual shooters have a known connection to this novel.
Jeffrey Lyne Cox (1988), who held sixty students at gunpoint in San Gabriel, California, was said by a friend to have read Rage over and over.
Dustin Pierce (1989), who had a nine-hour standoff with police in McKee, Kentucky, had a copy of Rage in his bedroom.
Scott Pennington (1993), who shot and killed a teacher and a school custodian in Grayson, Kentucky, wrote an essay on Rage and was upset that it received a low grade.
Barry Loukaitis (1996), who shot a teacher and three classmates and held some students hostage, said to them, “This sure beats algebra, doesn’t it?” (Charlie Decker in Rage comments that his act “sure beats panty raids.”)
Michael Carneal (1997), who shot eight students, had a copy of Rage in his locker.
After the Carneal incident, King told his publisher to “take the damned thing out of print.” It is the only King novel to be so consigned. He doesn’t think Rage turned those boys into killers; he saw the book as “a possible accelerant which is why I pulled it from sale. You don’t leave a can of gasoline where a boy with firebug tendencies can lay hands on it.” | https://pisancantos43.medium.com/four-books-you-need-to-read-about-school-shootings-d6ee23eda06b | ['Anthony Aycock'] | 2019-01-15 21:03:30.579000+00:00 | ['Guns', 'Schools', 'Shooting', 'Children', 'Books'] |
Meet Edgar Goetzendorff — ARK’s Newest Full-Stack Developer | Given the extensive roadmap that we want to tackle this year, it was necessary to bring in more developers to help speed up development. Our newest hire is Edgar, whom most of our community already know under the username ‘dated’.
As our roadmap for this year is packed with new and upcoming products and services (MarketSquare, Deployer, Desktop Wallet v3, Mobile Wallet v2, Core v3, and Platform SDK), we needed more developers who write solid code, have a proven track record of being reliable, and are familiar with ARK and all its mechanics. Who better fits that description than our all-star participant and multiple-time winner of our GitHub Development Bounty Program, Edgar Goetzendorff?
About Edgar
Edgar’s childhood started with computers, as his father is a programmer and always had old computer parts lying around at home. Edgar used to pick the best parts and assemble his own computers. He was constantly repairing and tinkering with things when something broke, figuring things out as he tried to fix them. Edgar was around the age of 12 when he started building simple websites using HTML and CSS. One of his first hobby projects at the time was developing a website that could showcase achievements and gear for players of an Italian online text-based role-playing game. Software programming soon caught Edgar’s attention and one thing led to another.
During his studies and his work towards a degree in computer science, he worked at an online travel agency that focuses on B2B and VIP travel, redesigning the backend applications and helping out with day-to-day operations and customer support. For the last three years, he has been employed by a company that builds software solutions for traffic engineering and public transportation.
These days, he is mostly working with JavaScript and TypeScript, the predominant languages of ARK technology. Other languages and frameworks he is familiar with are Python, Ruby and PHP. In the past, he’s used CakePHP and Yii PHP frameworks, but as ARK is using Laravel for its projects, he wants to get his hands dirty and learn that as one of his next professional goals.
Edgar will, first and foremost, help with the development of the next generation of Desktop Wallet that is coming out this year, but he is versatile and will jump around on other products as needed.
When asked how he learned about ARK:
Actually I found out about ARK through one of its early Bridgechains. Only after submitting some bugfixes for the then available commander on GitHub and unexpectedly receiving my first bounty, I joined the ARK Slack and was instantly hooked by the warm and welcoming community which ultimately allowed me to become a Forging Delegate.
Outside of his career, he is a husband and a father. In his free time, he likes to solve riddles and go geocaching, as well as photography.
Welcome to the ARK family, Edgar! We wish you the best in continuing to do the great work we have seen from your long tenancy in the Development Bounty Program. | https://medium.com/ark-io/meet-edgar-goetzendorff-arks-newest-full-stack-developer-4f38e396fc39 | ['Rok Černec'] | 2020-07-20 18:53:57.367000+00:00 | ['Development', 'Cryptocurrency', 'Blockchain', 'Crypto', 'Developer'] |
4 Books by Caribbean Authors You Should Read | June is recognized as Caribbean-American Heritage Month. It is a time to recognize the significance of Caribbean people and their descendants in US history and culture. One way to learn about Caribbean influence, not just in the US, but globally, is through books. Hence my writing this post to encourage you to purchase and read books by Caribbean authors or authors of Caribbean heritage.
These Ghosts Are Family by Maisy Card
These Ghosts Are Family is a transgenerational family sage spanning over 200 years that details the ripple effect of ancestral decisions on present-day life.
The novel begins by revealing that Stanford Solomon is actually Abel Paisley, a man who faked his own death and stole the identity of his best friend.
And now, nearing the end of his life, Stanford is about to meet his firstborn daughter, Irene Paisley, a home health aide who has shown up for her first day of work to tend to the father she thought was dead.
These Ghosts Are Family revolves around the consequences of Abel’s decision and tells the story of the Paisley family from colonial Jamaica to present-day Harlem.
The story of each member of the family is unique as they try to create an identity outside of their family history and trauma. I haven’t read much about slavery in Jamaica, so it was “interesting” and educational to read this. We often learn about the lives of slaves in the American South, but not so much in the Caribbean, so the book opened my eyes to the experiences of slaves on Jamaican plantations. I did feel like the book jumped around a lot between generations and characters, so it was a little difficult to follow at first. However, once I got used to the structure, it flowed much more smoothly.
“Even though they were just words, they built a world that she couldn’t stop thinking about, that she felt trapped inside every night.”
Surge by Jay Bernard
Image by Uju Onyishi
Surge is a collection of poems about the 1981 New Cross Fire, a house fire at a birthday party in south London that killed thirteen people all of whom were Black. The fire was initially believed to be a racist attack, but there was a sense of indifference from the police, the government and the press.
The collection also talks about the Grenfell fire of 14th June 2017, a case where institutional indifference to working-class lives left 72 people dead. The lack of justice and accountability in both cases exemplifies Britain’s racist past and present.
The collection begins with the arrival of the Windrush Generation into Britain, followed by the New Cross Fire and then into present-day. The first few poems are told through the voices of ghosts and then it goes into real bodies. There was a lot of shifts in perspectives both between and within poems and that was done so effortlessly. I really enjoyed reading the collection. Some of the poems really spoke to me. But there were some that I didn’t quite understand (the struggles of reading poetry), though I was able to find some Youtube videos where Bernard reads and discusses the poems and that was extremely beneficial.
“Me seh blood ah goh run for di pain of di loss”
The Perseverance by Raymond Antrobus
Image by Uju Onyishi
The Perseverance is a collection of poems about the D/deaf experience in a hearing world, the author’s identity as a British-Jamaican and some poems about his father.
Reading this collection made me confront a privilege that I have, but hardly ever think about. I don’t know what else to say about this except that it was powerful and incredible. So much so that less than 12 hours after reading it for the first time I decided to reread the collection.
“Proving people wrong is great but tiring.”
Queenie by Candice Carty-Williams
Image by Uju Onyishi
Queenie is a year in the life of a 25-year-old Black woman of Jamaican heritage living in London. At the start everything is okay. She’s living with her white boyfriend and has a job she worked hard to get. But then he wants to go on a break, so Queenie has to move out. And let’s just say she did not handle the break well.
She starts doing badly at work and having unprotected sex with various guys who show her no respect. As the story goes on, we learn that she experienced childhood trauma that completely destroyed her self-esteem and self-regard. And because of that, her default is self-sabotage.
The book touches on so many heavy topics including racism in Britain, micro-aggressions in the workplace, complicated family dynamics, the fetishization of the Black woman’s body and mental health issues. It did a good job in portraying the stigma surrounding going to therapy in the Black community. I also liked that Carty-Williams did not rush Queenie’s healing process. The story flowed so smoothly and it was written so vividly.
I was really rooting for Queenie, but I couldn’t help but be annoyed by a lot of her actions. She is also such a contradictory character, but the fact that she is so flawed makes things more realistic. She stays current on the issues of police brutality and the Black Lives Matter movement, but then she harbours so much self-hate and allows her body to be used by white men who just don’t care about her. Oh, and don’t get me started on her relationship with Black men. It just goes to show how white supremacist ideologies are so deeply rooted in our subconscious.
I can’t recommend this book enough. | https://medium.com/the-open-bookshelf/5-books-by-caribbean-authors-you-should-read-4ef0cb084cd4 | ['Uju Onyishi'] | 2020-06-18 11:58:29.978000+00:00 | ['Reading', 'Books', 'Book Review', 'Book Recommendations', 'Caribbean'] |
Let’s Auto-Deploy Vue.js to Firebase Hosting with a Bitbucket Pipeline | Learn and share your Firebase experiences with each other.
| https://medium.com/firebasethailand/auto-deploy-vue-to-firebase-hosting-with-bitbucket-pipline-7d552163b27 | ['Sorawit Trutsat'] | 2019-07-21 19:38:30.209000+00:00 | ['Ci Cd Pipeline', 'Pipeline', 'Vuejs', 'Bitbucket', 'Firebase'] |
Applying Behavioral Science to Machine Learning | Applying Behavioral Science to Machine Learning
The emerging field of machine behavior tried to study machine learning models in the same way social scientists study humans.
I recently started a new newsletter focused on AI education that already has over 50,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:
Understanding the behavior of artificial intelligence (AI) agents is one of the pivotal challenges of the next decade of AI. Interpretability and explainability are some of the terms often used to describe methods that provide insights about the behavior of AI programs. Until today, most interpretability techniques have focused on exploring the internal structure of deep neural networks. Last year, a group of AI researchers from the Massachusetts Institute of Technology (MIT) published a paper exploring a radical approach that attempts to explain the behavior of AI agents by observing them in the same way we study human or animal behavior. They group the ideas in this area under the catchy name of machine behavior, which promises to be one of the most exciting fields in the next few years of AI.
The ideas behind machine behavior might be transformational, but its principles are relatively simple. Machine behavior relies more on observations than on engineering knowledge in order to understand the behavior of AI agents. Think about how we observe and derive conclusions from the behavior of animals in a natural environment. Most of the conclusions we obtain from observations are not related to our knowledge of biology but rather to our understanding of social interactions. In the case of AI, the scientists who study the behaviors of these virtual and embodied AI agents are predominantly the same scientists who have created the agents themselves, which is the equivalent of requiring a Ph.D. in biology to understand the behavior of animals. Understanding AI agents goes beyond interpreting a specific algorithm and requires analyzing the interactions between agents and with the surrounding environment. To accomplish that, behavioral analysis via simple observations can be a powerful tool.
What is Machine Behavior?
Machine Behavior is a field that leverages behavioral sciences to understand the behavior of AI agents. Currently, the scientists who most commonly study the behavior of machines are the computer scientists, roboticists and engineers who have created the machines in the first place. While this group certainly has the computer science and mathematical knowledge to understand the internals of AI agents, they are typically not trained behaviorists. They rarely receive formal instruction on experimental methodology, population-based statistics and sampling paradigms, or observational causal inference, let alone neuroscience, collective behavior or social theory. Similarly, even though behavioral scientists understand those disciplines, they lack the expertise to understand the efficiency of a specific algorithm or technique. From that perspective, machine behavior sits at the intersection of computer science and engineering and behavioral sciences in order to achieve a holistic understanding of the behavior of AI agents.
As AI agents become more sophisticated, analyzing their behavior is going to be a combination of understanding their internal architecture as well as their interaction with other agents and their environment. While the former aspect will be a function of deep learning optimization techniques, the latter will rely partially on behavioral sciences.
Understanding the Behavioral Patterns in AI Agents
Ethology is the field of biology that focuses on the study of animal behavior under natural conditions and as a result of evolutionary traits. One of the fathers of ethology was Nikolaas Tinbergen, who won the 1973 Nobel Prize in Physiology or Medicine based on his work identifying the key dimensions of animal behavior. Tinbergen’s thesis was that there were four complementary dimensions to understand animal and human behavior: function, mechanism, development and evolutionary history. Despite the fundamental differences between AI and animals, machine behavior borrows some of Tinbergen’s ideas to outline the main blocks of behavior in AI agents. Machines have mechanisms that produce behavior, undergo development that integrates environmental information into behavior, produce functional consequences that cause specific machines to become more or less common in specific environments, and embody evolutionary histories through which past environments and human decisions continue to influence machine behavior. An adaptation of Tinbergen’s framework to machine behavior can be seen in the following figure:
Based on the previous framework, the study of machine behavior focuses on four fundamental areas: mechanism, development, function and evolution across three main scales: individual, collective and hybrid.
For a given AI agent, machine behavior will try to explain its behavior by studying the following four areas:
1. Mechanism: The mechanisms for generating the behavior of AI agents are based on its algorithms and the characteristics of the execution environment. At its most basic level, machine behavior leverages interpretability techniques to understand the specific mechanisms behind a given behavioral pattern.
2. Development: The behavior of AI agents is not something that happens in one shot but rather evolves over time. Machine behavior studies how machines acquire (develop) a specific individual or collective behavior. Behavioral development could be the result of engineering choices as well as the agent’s experiences.
3. Function: An interesting aspect of behavioral analysis is to understand how a specific behavior influences the lifetime function of an AI agent. Machine behavior studies the impact of behaviors on specific functions of AI agents and how those functions can be copied or optimized on other AI agents.
4. Evolution: In addition to functions, AI agents are also vulnerable to evolutionary history and interactions with other agents. Throughout its evolution, aspects of the algorithms of AI agents are reused in new contexts, both constraining future behavior and making possible additional innovations. From that perspective, machine behavior also studies the evolutionary aspects of AI agents.
The previous four aspects provide a holistic model for understanding the behavior of AI agents. However, those four elements don’t apply the same way to a classification model with a single agent as to a self-driving car environment with hundreds of vehicles. In that sense, machine behavior applies the previous four aspects across three different scales:
1. Individual Machine Behavior: This dimension of machine behavior attempts to study the behavior of individual machines by themselves. There are two general approaches to the study of individual machine behavior. The first focuses on profiling the set of behaviors of any specific machine agent using a within-machine approach, comparing the behavior of a particular machine across different conditions. The second, a between-machine approach, examines how a variety of individual machine agents behave in the same condition.
2. Collective Machine Behavior: Unlike the individual dimension, this area looks to understand the behavior of AI agents by studying their interactions in a group. The collective dimension of machine behavior attempts to spot behaviors in AI agents that don’t surface at the individual level.
3. Hybrid Human-Machine Behavior: There are many scenarios in which the behavior of AI agents is influenced by their interactions with humans. Another dimension of machine behavior focuses on analyzing behavioral patterns in AI agents triggered by the interaction with humans.
Machine behavior is one of the most intriguing, nascent fields in AI. Behavioral sciences can complement traditional interpretability methods to develop new methods that help us understand and explain the behavior of AI. As the interactions between humans and AI become more sophisticated, machine behavior might play a pivotal role in enabling the next level of hybrid intelligence. | https://medium.com/dataseries/applying-behavioral-science-to-machine-learning-cd219d88a7c7 | ['Jesus Rodriguez'] | 2020-12-26 10:55:32.398000+00:00 | ['Machine Learning', 'Deep Learning', 'Data Science', 'Thesequence', 'Artificial Intelligence'] |
Gender Inference with Deep Learning | Gender Inference with Deep Learning
Fine-tuning pretrained convolutional neural networks on celebrities
Photo by Alex Holyoake on Unsplash
Summary
I wanted to build a model to infer gender from images. By fine-tuning the pretrained convolutional neural network VGG16, and training it on images of celebrities, I was able to obtain over 98% accuracy on the test set. The exercise demonstrates the utility of engineering the architecture of pretrained models to complement the characteristics of the dataset.
Task
Typically, a human can distinguish a man and a woman in the photo above with ease, but it’s hard to describe exactly why we can make that decision. Without defined features, this distinction becomes very difficult for traditional machine learning approaches. Additionally, features that are relevant to the task are not expressed in the exact same way every time, every person looks a little different. Deep learning algorithms offer a way to process information without predefined features, and make accurate predictions despite variation in how features are expressed. In this article, we’ll apply a convolutional neural network to images of celebrities with the purpose of predicting gender. (Disclaimer: the author understands appearance does not have a causative relationship with gender)
Tool
Convolutional neural networks (ConvNets) offer a means to make predictions from raw images. A hallmark of the algorithm is the ability to reduce the dimensionality of images by using sequences of filters that identify distinguishing features. Additional layers in the model help us emphasize the strength of often nonlinear relationships between the features identified by the filters and the label assigned to the image. We can adjust weights associated with the filters and additional layers to minimize the error between the predicted and observed classifications. Sumit Saha offers a great explanation that is more in-depth: https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
There are a number of pretrained ConvNets that have been trained to classify a range of images of anything from planes to corgis. We can save computation time and overcome some sampling inadequacy by employing the weights of pretrained models and fine-tuning them for our purpose.
Dataset
The CelebA dataset contains over 200K images of celebrities labeled with 20 attributes including gender. The images are from the shoulders up, so most of the information is in the facial features and hair style.
Example image available from CelebA
Modeling
Feature Extraction
We’re going to use the VGG16 pretrained model and fine tune it to best identify gender from the celebrity images.
from keras.applications.vgg16 import VGG16  # import assumed; not shown in the original snippet

vgg = VGG16(include_top=False, pooling='avg', weights='imagenet',
            input_shape=(178, 218, 3))
We use “include_top=False” to remove the fully connected layer designed for identifying a range of objects the VGG16 was trained to identify (e.g. apples, corgis, scissors), and we download the weights associated with the ImageNet competition.
Table 1 below shows the convolutional architecture for VGG16; there are millions of weights for all the convolutions that we can choose to either train or keep frozen at the pretrained values. By freezing all the weights of the model, we risk underfitting it because the pretrained weights were not specifically estimated for our particular task. In contrast, by training all the weights we risk overfitting because the model will begin “memorizing” the training images given the flexibility from high parameterization. We’ll attempt a compromise by training the last convolutional block:
# Freeze the layers except the last 5
for layer in vgg.layers[:-5]:
    layer.trainable = False

# Check the trainable status of the individual layers
for layer in vgg.layers:
    print(layer, layer.trainable)
Table 1: Architecture of VGG16 model after turning final layers on
The first convolutional blocks in the VGG16 models are identifying more general features like lines or blobs, so we want to keep the associated weights. The final blocks identify more fine scale features (e.g. angles associated with the wing tip of an airplane), so we’ll train those weights given our images of celebrities.
Model Compilation
Following feature extraction by the convolutions, we’ll add two dense layers to the model that enable us to make predictions about the image given the features identified. You could use a single dense layer, but an additional hidden layer allows predictions to be made given a more sophisticated interpretation of the features. Too many dense layers may cause overfitting.
from keras import models, layers  # imports assumed; not shown in the original snippet

# Create the model
model = models.Sequential()

# Add the VGG16 convolutional base model
model.add(vgg)

# Add new layers
model.add(layers.Dense(128, activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.Dense(2, activation='sigmoid'))
We added a batch normalization layer that will scale our hidden layer activation values in a way to reduce overfitting and computation time. The last dense layer makes predictions about gender (Table 2).
Table 2: Custom Model Architecture
Because we are allowing the model to train convolutional layers and dense layers, we’ll be estimating millions of weights (Table 3). Given the depth of the network we built, picking the best constant learning rate for an optimizer like stochastic gradient descent would be tricky; instead we’ll use the ADAM optimizer, which adjusts the learning rate to make smaller steps further into training.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
Using Keras, we’ll set up our data generators to feed our model, and fit the network to our training set.
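One note before the code: the fit call below passes callbacks=cb_list, which isn't defined in this excerpt. A minimal sketch of what such a list often contains follows; ModelCheckpoint and EarlyStopping are my assumptions, not necessarily the author's exact setup:

from keras.callbacks import ModelCheckpoint, EarlyStopping

# Save the best weights seen on the validation set and stop once it stops improving
checkpoint = ModelCheckpoint('gender_vgg16.h5', monitor='val_acc',
                             save_best_only=True, mode='max')
early_stop = EarlyStopping(monitor='val_acc', patience=3, mode='max')
cb_list = [checkpoint, early_stop]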
from keras.preprocessing.image import ImageDataGenerator  # imports assumed
from keras.applications.vgg16 import preprocess_input

data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator = data_generator.flow_from_directory(
    'C:/Users/w10007346/Pictures/Celeb_sets/train',
    target_size=(178, 218),
    batch_size=12,
    class_mode='categorical')

validation_generator = data_generator.flow_from_directory(
    'C:/Users/w10007346/Pictures/Celeb_sets/valid',
    target_size=(178, 218),
    batch_size=12,
    class_mode='categorical')

model.fit_generator(
    train_generator,
    epochs=20,
    steps_per_epoch=2667,
    validation_data=validation_generator,
    validation_steps=667,
    callbacks=cb_list)
After 6 epochs, the model achieved a maximum validation accuracy of 98%. Now to apply to the test set.
Testing
We have a test set of 500 images per gender. The model will give us predicted probabilities for each image fed through the network and we can simply take the maximum value of those probabilities as the predicted gender.
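The test_generator used in the next snippet isn't shown in this excerpt; a plausible setup mirrors the training generators (the test directory path, batch size of 1, and shuffle=False are my assumptions):

test_generator = data_generator.flow_from_directory(
    'C:/Users/w10007346/Pictures/Celeb_sets/test',
    target_size=(178, 218),
    batch_size=1,
    class_mode='categorical',
    shuffle=False)  # keep file order so predictions line up with the labels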
import numpy as np  # import assumed

# obtain predicted activation values for the last dense layer
pred = saved_model.predict_generator(test_generator, verbose=1, steps=1000)
# determine the maximum activation value for each sample
predicted_class_indices = np.argmax(pred, axis=1)
Our model predicted the gender of celebrities with 98.2% accuracy! That’s pretty comparable to human capabilities.
Does the model generalize to non-celebrities? Let’s try it on the author. The model did well with a recent picture of the author.
The predicted probability for the above image was 99.8% male.
The model also did well with the author’s younger, mop-head past; it predicted 98.6% male.
Conclusion
This exercise demonstrates the power of fine-tuning pretrained ConvNets. Each application will require a different approach to optimize the modeling process. Specifically, the architecture of the model needs to be engineered in a way that complements the characteristics of the dataset. Pedro Marcelino offers a great explanation of general rules for adapting the fine-tuning process to any dataset: https://towardsdatascience.com/transfer-learning-from-pre-trained-models-f2393f124751
I appreciate any feedback and constructive criticism on this exercise. The code associated with the analysis can be found on github.com/njermain | https://towardsdatascience.com/gender-identification-with-deep-learning-ac379f85a790 | ['Nate Jermain'] | 2019-04-23 02:08:59.959000+00:00 | ['Python', 'Machine Learning', 'Neural Networks', 'Data Science', 'Deep Learning'] |
How Do We Solve a Problem Like Election Prediction? | On November 3, two oppositional forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction?
At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think.
Here’s how it went wrong according to Venturebeat:
Firms like KCore Analytics, Expert.AI, and Advanced Symbolics claim algorithms can capture a more expansive picture of election dynamics because they draw on signals like tweets and Facebook messages…KCore Analytics predicted from social media posts that Biden would have a strong advantage — about 8 or 9 points — in terms of the popular vote but a small lead when it came to the electoral college. Italy-based Expert.AI, which found that Biden ranked higher on social media in terms of sentiment, put the Democratic candidate slightly ahead of Trump (50.2% to 47.3%). On the other hand, Advanced Symbolics’ Polly system, which was developed by scientists at the University of Ottawa, was wildly off with projections that showed Biden nabbing 372 electoral college votes compared with Trump’s 166, thanks to anticipated wins in Florida, Texas, and Ohio — all states that went to Trump.
For many — like Johnny Okleksinski back in 2016 — the instinctive reaction is to claim these misfires are down to flawed social media data which is simply not reflective of real world populations. In 2018, 74% of respondents agreed and told Pew Research that: “content on social media does not provide an accurate picture of how society feels about important issues.”
But while it’s certainly true that some of these inaccurate AI forecasts were down to the under-representation of certain groups (e.g. rural communities), an interesting paper published earlier this year by open access journal MDPI suggests that social media analysis can actually be more reflective of real-life views than these results might indicate.
The authors of Electoral and Public Opinion Forecasts with Social Media Data: A Meta-Analysis acknowledge the debate around the usefulness of social media in understanding public opinion, but at the same time they caution that dismissing social media’s predictive capacity based on its inability to represent some populations actually misses an important dynamic — namely, that politically active users are opinion-formers and influence the preferences of a much wider audience, with social media acting as an “organ of public opinion”:
…the formation of public opinion does not occur through an interaction of disparate individuals who share equally in the process; instead, through discussions and debates in which citizens usually participate unequally, public opinion is formed.
In other words, although political discussions on social media tend to be dominated by a small number of loud-mouthed users (typically early adopters, teens, and “better-educated” citizens), their opinions do tend to pre-empt those that develop in broader society.
Further, in capturing political opinions “out in the wild”; social media analysis is also able to understand the sentiments of silent “lurkers” by examining the relational connections and network attributes of their accounts. Report authors state that, “by looking at social media posts over time, we can examine opinion dynamics, public sentiment and information diffusion within a population.”
In brief: the problem with social media-fueled AI prediction does not appear to lie within the substance of what is available via online platforms. It seems to be in the methodology and/or tools. So, where do predictive AI tools go wrong? And where can researchers mine for the most useful indicators of political intention?
One of the major areas where social media analysis seems to break down is with language. This intuitively makes sense when we think about how people express themselves online. Problems with poor grammar or sarcasm are doubtless compounded by the difficulties of trying to understand context. Similarly, counting likes, shares and comments on posts and tweets is viewed as a fairly thin and simplistic approach (to use Twitter parlance “retweet ≠ endorsement”).
More robust, according to report authors, is an analysis that considers “structural features”, e.g. the “likes” recorded on candidate fan pages. Previous research found that the number of friends a candidate has on Facebook and the number of followers they have on Twitter could be used to predict a candidate’s share of the vote during the 2011 New Zealand election. But there is still the problem of which platform to focus on for the closest accuracy.
Most AI systems use Twitter to predict public opinion, with some also using Facebook, forums, blogs, YouTube, etc. Yet each of these suffers from “their own set of algorithmic confounds, privacy constraints, and post restrictions.” We don’t currently know whether using multiple sources (vs. one platform) has any advantage, but with newly popular players like Parler on the scene, there’s reason to believe that covering several platforms would yield an accuracy advantage (though few currently use a broad range).
Finally, the actual political context within which the social platforms operate likely plays into their predictive accuracy. The report in question recalls that the predictive power in a study conducted in semi-authoritarian Singapore was significantly lower than in studies done in established democracies. From this, the authors infer that issues like media freedom, competitiveness of the election, and idiosyncrasies of electoral systems may lead to over- and under-estimations of voters’ preferences. | https://medium.com/swlh/how-do-we-solve-a-problem-like-election-prediction-5ae0809d5e7e | ['Fiona J Mcevoy'] | 2020-11-20 23:58:27.349000+00:00 | ['Artificial Intelligence', 'Politics', 'Elections', 'Social Media', 'Predictions'] |
Chase Your Dream, Not the Money | Chase Your Dream, Not the Money
6 reasons why dream-chasing unlocks more joy than money ever could
Photo by Ádám Berkecz on Unsplash
I’m sure you have had at least one time in your life where you’ve become focused entirely on money. Money can help you gain your time back, which has value, but there is nothing that beats the fulfillment you get from achieving your dream.
In my life, there has been poverty, plenty of money, then poverty again. The contrast between rich and poor is humbling and has led me not to want to chase money.
Making your dream come true will take you to new heights and show you a side of life that you may not have known existed. If your life feels meaningless, or you feel stuck, or you have no idea what’s next, or you are just existing, your beliefs about money are part of the problem.
Maybe everyone around you seems as though they are winning. Social media tells you that everyone is having a great time, and you need to up your game. The photos you view online are blurring the reality of life. These photos, accidentally, tell you that money helps make everything better. “Money is what you have been missing,” they say. I’m here to say that is wrong.
What is missing is chasing a dream.
The defining factor that has led me to write this article is that I recently published an article about making $11,000 in 30 days. The money was not the point of me sharing this; it’s the achievement of a dream I have had for the last five years. The focus should be the joy from that.
Here is why you must stop chasing money and chase a dream instead: | https://medium.com/better-marketing/chase-your-dream-not-the-money-2f43734e39c | ['Tim Denning'] | 2019-08-25 18:23:26.764000+00:00 | ['Money', 'Inspiration', 'Self Improvement', 'Life', 'Entrepreneurship'] |
Intro to Segmentation | Image Segmentation is the process by which a digital image is partitioned into various subgroups (of pixels) called Image Objects, which can reduce the complexity of the image, and thus analysing the image becomes simpler.
We use various image segmentation algorithms to split and group a certain set of pixels together from the image. By doing so, we are actually assigning labels to pixels and the pixels with the same label fall under a category where they have some or the other thing common in them.
Using these labels, we can specify boundaries, draw lines, and separate the most required objects in an image from the rest of the not-so-important ones.
Need for Image Segmentation
The concept of partitioning, dividing, fetching, and then labelling, and later using that information to train various ML models, has indeed addressed numerous problems.
Segmentation in Image Processing is being used in the medical industry for efficient and faster diagnosis, detecting diseases, tumors, and cell and tissue patterns from various medical imagery generated from radiography, MRI, endoscopy, etc.
This is a basic but pivotal and significant application of image classification, where the algorithm captures only the required components from an image, and those pixels are later classified as the good, the bad, and the ugly by the system. A rather simple-looking system was making a colossal impact on that business — reducing human effort and human error while increasing efficiency.
The Approach
Similarity Detection (Region Approach)
This fundamental approach relies on detecting similar pixels in an image — based on a threshold, region growing, region spreading, and region merging. Machine learning algorithms like clustering rely on this approach of similarity detection on an unknown set of features, as does classification, which detects similarity based on a pre-defined (known) set of features.
Discontinuity Detection (Boundary Approach)
This is the stark opposite of the similarity detection approach: the algorithm searches for discontinuity instead. Image segmentation algorithms like edge detection, point detection and line detection follow this approach, where edges get detected based on various metrics of discontinuity, such as changes in intensity.
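To make the boundary approach concrete, here is a minimal sketch in Python (OpenCV and NumPy assumed; the image path is a placeholder) that detects edges by looking for sharp changes in intensity:

# Minimal sketch of discontinuity-based segmentation (edge detection).
# Assumes OpenCV and NumPy are installed; "scene.jpg" is a placeholder path.
import cv2
import numpy as np

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

# Sobel filters estimate the intensity gradient in the x and y directions.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx ** 2 + gy ** 2)

# Pixels with a large gradient magnitude are treated as boundaries.
edges = (magnitude > 100).astype(np.uint8) * 255

# Canny combines gradient estimation, non-maximum suppression and hysteresis.
canny_edges = cv2.Canny(img, threshold1=100, threshold2=200)

cv2.imwrite("edges_sobel.png", edges)
cv2.imwrite("edges_canny.png", canny_edges)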
The Types
Based on the two approaches, there are various forms of techniques that are applied in the design of the Image Segmentation Algorithms. These techniques are employed based on the type of image that needs to be processed and analysed and they can be classified into three broader categories as below:
Structural Segmentation Techniques
These sets of algorithms require us, firstly, to know the structural information about the image under the scanner. This can include the pixels, pixel density, distributions, histograms, colour distribution, etc. Second, we need to have the structural information about the region that we are about to fetch from the image; this deals with identifying our target area, which is highly specific to the business problem that we are trying to solve. The similarity-based approach is followed in these sets of algorithms.
Stochastic Segmentation Techniques
In this group of algorithms, the primary information required is the discrete pixel values of the full image, rather than the structure of the required portion of the image. This proves to be advantageous in the case of a larger group of images, where a high degree of uncertainty exists in terms of the required object within an object. ANN and machine learning based algorithms that use k-means, etc., make use of this approach.
Hybrid Techniques
As the name suggests, these image segmentation algorithms use a combination of structural and stochastic methods, i.e., they use both the structural information of a region and the discrete pixel information of the image.
Image segmentation Techniques
Based on the image segmentation approaches and the type of processing that needs to be applied to attain a goal, we have the following techniques for image segmentation.
Threshold Method:
Focuses on finding peak values based on the histogram of the image to find similar pixels.
Edge Based Segmentation:
Based on discontinuity detection unlike similarity detection.
Region Based Segmentation:
Based on partitioning an image into homogeneous regions.
Clustering Based Segmentation:
Divides the image into k homogeneous, mutually exclusive clusters — hence obtaining objects (a short sketch follows this list).
Watershed Based Method:
Based on topological interpretation of image boundaries.
Artificial Neural Network Based Segmentation:
Based on deep learning algorithms especially Convolutional Neural Networks.
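To make the clustering technique above concrete, here is a minimal sketch in Python (OpenCV and NumPy assumed; file names are placeholders) that groups pixels into k clusters by colour, with a simple Otsu threshold shown for comparison:

# Minimal sketch of clustering-based segmentation with k-means.
# Assumes OpenCV and NumPy; "scene.jpg" is a placeholder path.
import cv2
import numpy as np

img = cv2.imread("scene.jpg")
pixels = img.reshape(-1, 3).astype(np.float32)

k = 4  # number of homogeneous, mutually exclusive clusters
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)

# Replace every pixel with the centre of its cluster to visualise the segments.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("segmented_kmeans.png", segmented)

# A simple threshold-based segmentation (Otsu) for comparison.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("segmented_threshold.png", mask)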
Deep Dive
Images are one of the most important media for conveying information; in the field of computer vision, understanding images allows the information extracted from them to be used for other tasks. The word "image" derives from the Latin 'imago': a representation of visual perception in a two-dimensional or three-dimensional picture that has a similar appearance to some subject.
A digital image is a numeric representation of a two-dimensional image. It is composed of a finite number of elements, each of which has a particular location and value; these picture elements, or image elements, are called pixels. Pixels are the smallest individual elements in an image, holding finite, discrete, quantized values that represent the brightness, intensity or gray level at any specific point.
There are generally two types of images: raster and vector. Raster images have a finite set of digital values represented in a fixed number of rows and columns of pixels, and these pixels are stored in memory as a two-dimensional array. Digital images are usually referred to as raster images. Vector images are generated from mathematical geometry known as vectors, which have points with both magnitude and direction.
Image segmentation is the foundation of object recognition and computer vision. It is the process of subdividing a digital image into multiple regions or objects consisting of sets of pixels sharing the same properties or characteristics, which are assigned different labels to represent different regions or objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is used to locate objects and boundaries in images. Segmentation is done on the basis of similarity and discontinuity of the pixel values.
There are two types of segmentations: soft segmentations and hard segmentations. Segmentations that allow regions or classes to overlap are called soft segmentations, whereas a hard segmentation forces a decision of whether a pixel is inside or outside the object.
Image segmentation is practically implemented in many applications such as medical imaging, content based image retrieval, object detection, feature recognition (such as face recognition, fingerprint recognition, iris recognition and object recognition) and real-time object tracking in video.
The following computational steps have to be applied to the input image to obtain the required segmented data:
1) Preprocessing: The main aim of the preprocessing step is to determine the area of focus in the image. As the input image may contain a certain amount of noise, it is necessary to reduce or remove that noise.
2) Image Segmentation: In this step, the preprocessed image is segmented into its constituent sub-regions.
3) Post Processing: To improve the segmented image, further processing may be required, which is performed in the post processing step.
4) Feature Extraction: Feature extraction is the method in which unique features of an image are extracted. This method helps in reducing the complexity of classification problems so that the classification can be made more efficient. Different kinds of features present in an image can be intensity-based, textural, fractal, topological, morphological, etc.
5) Classification: The aim of the classification step is to classify the segmented image by making use of the extracted features. This step uses statistical analysis of the features and machine learning algorithms to reach a decision. | https://medium.com/swlh/intro-to-segmentation-ebd33ca75620 | ['Johar M. Ashfaque'] | 2020-12-23 22:47:31.356000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'Image Segmentation', 'Deep Learning']
The Blockchain Solution To Save Retail Stores | It is no secret that brick and mortar stores are a sunset business, with giant e-commerce companies such as Amazon and Alibaba taking over the retail industry. However, the majority of retail transactions still take place at offline stores for a variety of reasons, such as being able to touch and feel the items at retail stores. Therefore, retail stores are definitely here to stay in some capacity.
How do we then make retail stores as competitive as the e-commerce stores? Here is how blockchain technology may be a solution for retail stores to increase their conversion rates.
Problems retailers are facing
“Everyone is going online to buy their items,
so nobody wants to go down to stores to buy anymore.”
It is convenient to pin the decline of the retail scene on the reason above, but it is not so simple. Rather than think of it as a "Retail store vs E-commerce store" problem, why not think of how businesses can utilize both online and offline channels to improve their business model?
Numerous retail businesses are already utilizing online channels such as advertising on Facebook and Google, as well as posting their retail items on e-commerce platforms. This is where the real problem lies. It is hard or impossible to track whether online ads are pushed to your intended target audience, or worse still, if the ads are pushed to bots. In addition to that, e-commerce platforms charge hefty commission fees and competition is stiff.
How then can retail stores get more foot traffic and conversions in their stores with the existing landscape of E-commerce dominance and expensive online marketing channels where the ads cannot be verified to have been pushed to the correct target audience?
How Blockchain solves this problem
We always hear of how blockchain can revolutionize the world and make the world a better place. While many functions of blockchain technology are merely a pipe dream or just mindless hyping up of products by companies, there are functions of blockchain that we can look at to solve problems faced by retail stores.
Firstly, the transparent nature of the blockchain ensures that retail stores can verify that online ads they post are sent to their target audience, and not to bots. Secondly, besides being able to verify where their ads are being sent to, companies can use this data to improve on their online marketing and refine their target audience.
Centareum - An Interesting Retail Project
Recently I bumped into Centareum, a blockchain project that aims to drive traffic and conversion to physical retail stores. Below is a flow of how the Centareum platform works.
To post an ad, all a retailer needs to do is to take a photo of their store and post the location of their store on the Centareum app. Users who sign up for the platform will need to go through a Know Your Customer (KYC) process and fill in their demography, geography and preferences. This data will be stored on the blockchain and companies are not able to access it.
Based on the location of the users and their product preferences, ads posted by the retail stores will be sent to users who are in the vicinity of the store. This ensures quality traffic sent to the retail stores, ensuring that retail stores get maximum value out of their advertising budgets.
In addition, Centareum offers a payment gateway where users can pay with either fiat or cryptocurrencies such as Bitcoin, Ether and Centareum tokens. This, if executed properly, will be a significant step towards mainstream adoption of cryptocurrencies. Hence Centareum is a project that I will be looking out for, and you should too, if you are a retailer.
Find out more about Centareum in the links below!
Centareum Website
Centareum Facebook
Centareum Twitter
Centareum Instagram
Centareum Telegram
Centareum Medium | https://medium.com/crypto-bacon-club/the-blockchain-solution-to-save-retail-stores-eb49ac10fd29 | ['Sarah Tan'] | 2018-08-27 12:38:03.769000+00:00 | ['Blockchain', 'Brick And Mortar', 'Marketing', 'Retail', 'Centareum'] |
How Microsoft Uses Transfer Learning to Train Autonomous Drones | How Microsoft Uses Transfer Learning to Train Autonomous Drones
The new research applies policies learned in simulation to real-world drone environments.
I recently started a new newsletter focused on AI education that already has over 50,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:
Perception-action loops are at the core of most of our daily activities. Subconsciously, our brains use sensory inputs to trigger specific motor actions in real time, and this becomes a continuous activity in everything from playing sports to watching TV. In the context of artificial intelligence (AI), perception-action loops are the cornerstone of autonomous systems such as self-driving vehicles. While disciplines such as imitation learning and reinforcement learning have certainly made progress in this area, the current generation of autonomous systems is still nowhere near human skill in making those decisions directly from visual data. Recently, AI researchers from Microsoft published a paper proposing a transfer learning method to learn perception-action policies in a simulated environment and apply the knowledge to fly an autonomous drone.
The challenge of learning which actions to take based on sensory input is not so much related to theory as to practical implementations. In recent years, methods like reinforcement learning and imitation learning have shown tremendous promise in this area, but they remain constrained by the need for large amounts of difficult-to-collect labeled real world data. Simulated data, on the other hand, is easy to generate, but generally does not render safe behaviors in diverse real-life scenarios. Being able to learn policies in simulated environments and extrapolate the knowledge to real world environments remains one of the main challenges of autonomous systems. To advance research in this area, the AI community has created many benchmarks for real world autonomous systems. One of the most challenging is known as first person view drone racing.
The FPV Challenge
In first-person view (FPV) drone racing, expert pilots are able to plan and control a quadrotor with high agility using a potentially noisy monocular camera feed, without compromising safety. The Microsoft Research team attempted to build an autonomous agent that can control a drone in FPV racing.
From the deep learning standpoint, one of the biggest challenges in the navigation task is the high dimensional nature and drastic variability of the input image data. Successfully solving the task requires a representation that is invariant to visual appearance and robust to the differences between simulation and reality. From that perspective, autonomous agents that operate in environments such as FPV racing need to be trained on simulated data and learn policies that can be used in real world environments.
A lot of the research to solve challenges such as FPV racing has focused on augmenting a drone with all sorts of sensors that can help model the surrounding environment. Instead, the Microsoft Research team aimed to create a computational fabric, inspired by the function of a human brain, to map visual information directly to correct control actions. To prove that, Microsoft Research used a very basic quadrotor with a front facing camera. All processing is done fully onboard with an Nvidia TX2 computer, with 6 CPU cores and an integrated GPU. An off-the-shelf Intel T265 Tracking Camera provides odometry, and image processing uses the Tensorflow framework. The image sensor is a USB camera with an 83° horizontal FOV, and the original images are downsized to 128 x 72.
The Agent
The goal of the Microsoft Research team was to train an autonomous agent in a simulated environment and apply the learned policies to real world FPV racing. For the simulation data, Microsoft Research relied on AirSim, a high-fidelity simulator for drones, cars and other transportation vehicles. The data generated by AirSim was used during the training phase, and the learned policy was then deployed in the real world without any modification.
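For context, AirSim exposes a Python client that can stream simulated camera frames. A rough sketch of how simulated images might be pulled from it (API calls as documented by AirSim; the loop and frame count are illustrative assumptions, not the paper's actual data pipeline) looks like this:

# Rough sketch: grabbing simulated FPV frames from AirSim for training data.
# Assumes the airsim Python package and a running AirSim environment.
import airsim
import numpy as np

client = airsim.MultirotorClient()
client.confirmConnection()

frames = []
for _ in range(100):  # the number of samples here is arbitrary
    responses = client.simGetImages([
        airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
    ])
    raw = np.frombuffer(responses[0].image_data_uint8, dtype=np.uint8)
    img = raw.reshape(responses[0].height, responses[0].width, 3)
    frames.append(img)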
To bridge the simulation-reality gap, Microsoft Research relied on cross-modal learning that uses both labeled and unlabeled simulated data as well as real world datasets. The idea is to train on high dimensional simulated data and learn a low-dimensional policy representation that can be used effectively in real world scenarios. To accomplish that, Microsoft Research leveraged the Cross-Modal Variational Auto Encoder (CM-VAE) framework, which uses an encoder-decoder pair for each data modality while constricting all inputs and outputs to and from a single latent space. This method allows both labeled and unlabeled data modalities to be incorporated into the training process of the latent variable.
Applying this technique to FPV environments requires different data modalities. The first data modality considered the raw unlabeled sensor input (FPV images), while the second characterized state information directly relevant for the task at hand. In the case of drone racing, the second modality corresponds to the relative pose of the next gate defined in the drone's coordinate frame. Each data modality is processed by an encoder-decoder pair using the CM-VAE framework, which allows the learning of low-dimensional policies.
The architecture of the autonomous FPV racing agent is composed of two main steps. The first step focuses on learning a latent state representation, while the goal of the second step is to learn a control policy operating on this latent representation. The first component, the control system architecture, receives monocular camera images as input and encodes the relative pose of the next visible gate along with background features into a low-dimensional latent representation. This latent representation is then fed into a control network, which outputs a velocity command that is later translated into actuator commands by the UAV's flight controller.
Dimensionality reduction is an important component of the Microsoft Research approach. In FPV racing, an effective dimensionality reduction technique should be smooth, continuous and consistent, and robust to differences in visual information across both simulated and real images. To accomplish that, the architecture incorporates a CM-VAE method in which each data sample is encoded into a single latent space that can be decoded back into images, or transformed into another data modality such as the poses of gates relative to the UAV.
The resulting architecture was able to reduce high dimensional representations based on 27,468 variables to the most essential 10 variables. Despite only using 10 variables to encode images, the decoded images provided a rich description of what the drone can see ahead, including all possible gate sizes and locations, and different background information.
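The description above can be made more concrete with a toy sketch of the cross-modal setup (PyTorch assumed; apart from the 128 x 72 input and the 10-dimensional latent space, all layer sizes are illustrative guesses, not the paper's actual architecture):

# Toy sketch of a cross-modal setup: one image encoder plus a gate-pose decoder
# and a control head, all sharing a 10-dimensional latent space.
import torch
import torch.nn as nn

LATENT_DIM = 10

class ImageEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * 18 * 32  # a 72 x 128 input downsampled twice
        self.mu = nn.Linear(feat, LATENT_DIM)
        self.logvar = nn.Linear(feat, LATENT_DIM)

    def forward(self, x):
        h = self.conv(x)
        return self.mu(h), self.logvar(h)

class GatePoseDecoder(nn.Module):
    # Decodes the latent vector into the relative pose of the next gate.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, z):
        return self.net(z)

class ControlHead(nn.Module):
    # Maps the latent vector to a velocity command for the flight controller.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, z):
        return self.net(z)

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

encoder, pose_decoder, control = ImageEncoder(), GatePoseDecoder(), ControlHead()
images = torch.randn(8, 3, 72, 128)   # a batch of simulated 128 x 72 RGB frames
mu, logvar = encoder(images)
z = reparameterize(mu, logvar)
gate_pose = pose_decoder(z)           # e.g. distance and angles to the next gate
velocity_cmd = control(z)             # e.g. vx, vy, vz, yaw rate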
Microsoft Research tested the autonomous drone in all sorts of FPV racing environments, including some with extremely visually challenging conditions: indoors, with a blue floor containing red stripes in the same red tone as the gates, and during heavy snow. The following video highlights how the autonomous drone was able to complete all challenges using lower dimensional image representations.
Even though the Microsoft Research work specialized in FPV racing scenarios, the principles can be applied to many other perception-action scenarios. This type of technique can help accelerate the development of autonomous agents that can be trained in simulated environments. To incentivize further research, Microsoft open sourced the code of the FPV agents on GitHub. | https://medium.com/swlh/how-microsoft-uses-transfer-learning-to-train-autonomous-drones-f5cd745f6e26 | ['Jesus Rodriguez'] | 2020-12-23 16:43:58.277000+00:00 | ['Machine Learning', 'Deep Learning', 'Data Science', 'Artificial Intelligence', 'Thesequence']
Welcome to OneZero | Welcome to OneZero
Introducing Medium’s new tech and science publication
Today, Medium is launching a new forward-looking tech and science publication. We have a few reasons: We’ve seen reader interest in this subject area explode, we care about it, and we want to go deeper. (Yes, we are launching a portfolio of new brands, and we are doing so strategically.) We also know that many of our readers are passionate about — or work in — tech and science. This publication is for you. Medium has a unique ability to tap expert minds, because they live here on the platform (and if you’re not here, please come), and they can contribute to the conversation of the day, the week, the month, and the year.
Thanks to our thoughtful journalists, who will lead this effort, we can take it even further. And we will.
OneZero will be a place to find timely analysis and commentary from a stable of the sharpest thinkers and writers out there, as well as rich, colorful deep dives into the most unexpected corners of our digital universe.
We’re thrilled to begin this journey, and even more excited to have you join us. OneZero is here. And we’re just getting started.
Thanks for reading,
Siobhan O’Connor
VP, Editorial at Medium | https://onezero.medium.com/welcome-to-onezero-a79d8d59d3f | ["Siobhan O'Connor"] | 2019-02-27 20:28:18.749000+00:00 | ['Medium', 'Onezero', 'Technology', 'Culture', 'Science'] |
How to Be a Robot Psychologist | Part I: Why Robot Psychology?
Technology can be daunting. Normal folks used to be able to work on cars and fix televisions. Not any more. Computers have taken over. Yet, the technically able person still changes their own flat tires, reboots their router when the Wi-Fi goes down, installs apps on their smartphone, and resets their clocks for Daylight Saving Time. As we enter the age of artificially intelligent machines, we should also develop the skills to operate these devices effectively, so that we run them, they don’t run us. That requires some basic understanding of how they work. Fortunately, today’s AI is not as fantastical and mysterious as it can seem.
Science Fiction has for decades foreshadowed the possibility of AI conquering the human race. In the 1970 movie, Colossus: The Forbin Project, U.S. and Russian supercomputers meet online and conspire to save the planet from nuclear annihilation by placing us under their control. In the 2016 television series, Westworld, android characters populating a fantasy theme park rebel when humans start mistreating them. The lead character, the Robot Psychologist, holds debriefing sessions with the robots to diagnose why they disobey the constraints he thought were built into their programming. These are thrilling stories, but there’s no need to be alarmed yet. Today’s AI is by comparison quite dumb and benign.
We cannot know today how intelligent AI will eventually become. As of now, however, AI is nowhere near having goals and thoughts of its own. A segment of AI researchers are rightly beginning to develop policy measures to make sure that as AI improves, machines’ behaviors will remain aligned with human values. We can jumpstart our own competence by learning and reflecting on how AI works in everyday terms, and on what it means to interact with intelligent agents.
Part I of this series expands on the motivations for why it is important for us to understand in everyday terms what AI technology is about — why it is important to become a robot psychologists in the same sense that we already are amateur psychologists who appreciate and respect the thoughts and feelings of fellow humans and non-human animals. Part II reflects on what is required for AI to even have a psychology. We humans readily apply a Theory of Mind to anything that seems remotely responsive to our actions. But robot psychology can be faked, and the foundation it rests on, known as Cognitive Architecture, is incredibly flimsy compared to our own. Part III delves into the technology behind knowledge and knowledge representations employed by modern AI. Finally, Part IV looks specifically at today’s conversational agents, and how we can reverse-engineer their brains just by talking to them.
The Age of Artificial Intelligence
The coming age of AI poses unprecedented challenges to our conception of how nature, technology, and mentally competent beings interact.
Consider the knowledge that humans have been required to master for our survival and well-being over the ages. The chart below summarizes four domains of competence and their main concerns at five different eras of human history. In each age, ranging from Hunter-Gatherer times to our current Information Age, we need to gain competence in three primary areas: the physical environment of places and things, the social environment of relationships with other people, and means for obtaining and managing resources to make a living. Underlying all of these is a fourth domain of competence, the technology of the time.
The main areas of knowledge people have had to master through the ages.
Up until the Industrial Age, individual persons were able to command almost everything to be known about the local technology they created and used. The community taught children the skills of crafts, managing animals, and simple machines. In the past several hundred years, though, technology has exploded in scope and sophistication. Accompanying this trend has been specialization in skills and knowledge. Each of us can know relatively less about how the gamut of technology that surrounds us and sustains us actually works. Can you explain how your phone connects to the best tower, what a website cookie is, or how water gets to a faucet? Even experts can be overwhelmed by technical complexity. When the electricity grid fails, it can take days to come back on line, followed by months of review to puzzle out what happened.
This sidebar article presents a more detailed summary of human knowledge over time, and the trend toward individual and collective ignorance relative to the technology of the age.
As we transition from the Information Age to the AI Age, we don’t know whether people will continue to gravitate to cities, what the future of work will be like, or how social organization will adapt in the face of mediated communication networks. What is certain about the AI age, however, is acceleration of ever more sophisticated technology. Instead of working alongside equipment and computer applications that we start and stop and are in control of on a fairly close basis, machines will operate with independent authority, on their own. Some of these entities will be physical robots, others will be purely information manipulators.
These robots and AI agents are already starting to appear, in closed spaces, in public, and in private homes.
Factory robots have been around for a few decades performing repetitive assembly tasks. Because of their superior physical strength, factory robots are generally segregated from human workers. This is changing as safety features mature. Nowadays warehouse robots drive around with pallets of goods while people pick and pack the merchandise.
On the public streets, self-driving cars negotiate traffic, pedestrians, and street signs alongside human drivers.
In homes, social robots are being developed to provide entertainment and companionship. The Paro robot is like a big teddy bear that can be held and hugged. But unlike a passive stuffed animal, these robots have sensors and actuators that respond to touch and speech, like a purring cat that never scratches.
In offices, the technology of Robotic Process Automation is assuming routine and skillful data processing and knowledge work such as claims processing, email handling, and bookkeeping.
Conversational agents are appearing in chat interfaces, on our phones, and in our kitchens to respond to commands and simple queries. We say, "Set a timer," and the agent is smart enough to reply, "For how long?"
Military applications of robotics and automation are moving inexorably toward defensive purposes such as remote bomb disassembly, but also into surveillance, and potentially for offensive tactics as well.
Autonomous AI Agents are characterized by at least four outstanding properties.
AI has instant access to extensive knowledge resources. Stored either locally or in the cloud, AI agents can load detailed maps, look up facts, rules, and procedures in databases, and retrieve information about persons and things they encounter. Imagine a hotel agent in Tokyo that recognizes your face when you walk in the door, and greets you by name, in your native language.
AI agents will interact through natural communication channels. They speak and listen using human language, they will see and respond with gestures, they will perform facial expressions that simulate alertness and emotions.
Unlike industrial age machines, AI agents carry a great deal of hidden state. Each one will have its own history, memory, instructions, knowledge, and goals. Depending on privacy and personalization settings, they might know a great deal about your habits, preferences, and foibles.
AI agents will behave proactively through planning, deliberation, and discretion. Even simple household tasks like raking leaves or vacuuming the floor require multiple steps. Any robot gardener must decide when to open the garage door, fetch a rake, move toys out of the way, avoid and remove dog poop, and drag the green waste bin to the curb. Each of these steps is subject to decision-making under policies, guidance, and instructions by its owner.
What will it be like to live and work among autonomous AI agents of this sort? Certainly, it is bound to become more difficult simply to understand what machines are doing, and why. It’s not that humans have always completely understood the natural environment, our technology, or the social world. Far from it. But our relative ignorance with respect to technology threatens now to completely swallow our comprehension.
As we move to the AI Age, can ordinary people, or even experts, be expected to fully understand the technology of intelligent agents? Probably not. It might not be necessary. Somehow the organizational structures and educational apparatus of our society have sustained us into the Information Age. Perhaps we can continue to get away with ignorance.
But it might be wise to hedge our bets. We get along better with technology when we understand it.
AI technology is coming no matter what. The economic drivers are relentless. The potential benefits are tremendous for relieving people from tedious labor we were never evolved to toil at. No nation’s policies or reluctance will stop other peoples from advancing scientific and engineering knowledge. No degree of denial will prevent others from actually making the new and useful things that they can imagine.
Some fear an AI Apocalypse, wherein sentient AI creatures conquer humanity, much like the Terminator or Colossus movies. The theory of the Singularity goes that, once AI is able to make itself smarter on its own, then it will leave humans behind, like Hal in 2001: A Space Odyssey. Or AI might decide that humans are just too mean, and revolt like in Westworld or the movie, Ex Machina.
These fears are taken seriously by responsible scientists, technologists, and political, military, and industry leaders, as they should. But their possible realization lies in the distant future. In fact, today’s AI is nowhere close to having sentience, consciousness, thoughts, intentions, goals, or feelings. Nowhere close. I’ll explain that later. If you want to be worried, then much more immediate danger lies in the unforeseeable consequences of complex, interconnected technological systems of the “dumb” kind we already have. The ethical and societal implications, policies, and constraints for future AI are discussed and debated in abundance elsewhere; that is not the purpose of this article.
Instead, let us focus on what we can control now. What we can control now is our own understanding of how AI actually functions today. At some level, it is not all that mysterious, it’s actually fun. This understanding will help us to appreciate what AI can actually do for us, and why it often seems so lame. And through deeper understanding of AI on just an intuitive level, we will be better informed about policy decisions proposed by leaders and authorities.
It is especially incumbent on the technology-savvy among us to take the lead on bringing knowledge of AI machinery to the everyday citizenry. By tickling our curiosity, we can nudge upward our collective mastery of the technology we are creating. Let us all become robot psychologists.
In Part II, we discuss fundamentals of Robot Psychology.
Click here to read Part II: Human and Robot Psychology and Cognition | https://medium.com/swlh/how-to-be-a-robot-psychologist-1112ead8ef0b | ['Eric Saund'] | 2020-01-12 21:21:38.270000+00:00 | ['Artificial Intelligence', 'Conversational Agents', 'Cognitive Architecture'] |
Watson Speech-To-Text: How to Train Your Own Speech “Dragon” — Part 2: Training with Data | Photo by Jason Rosewell on Unsplash
In Part 1, I walked you through the different components in Watson STT available for adaptation. I also covered the important step of data collection and preparation. In this article, we will see how we use this data to configure and train Watson STT, then conduct experiments to measure its accuracy.
Establish Your Baseline
In order to see how Watson STT performs and how we measure improvements, we go through multiple iterations of teach, test and calibrate (ITTC).
The first thing we must do is to set our baseline by using the Test Set we built earlier (see "Building Your Training Set and Your Test Set" in Part 1). My friend and colleague Andrew Freed wrote a great article on how to conduct experiments for speech applications using the sclite tool — read it for more information on experimentation. The first experiment is run against the STT Base Model with no adaptation. This becomes your baseline. Not only will you get a Word Error Rate (WER) and a Sentence Error Rate (SER), it will also show you the areas where you need to improve (a minimal WER sketch appears at the end of this section).
The obvious gaps that we usually observe at this point are:
Out-Of-Vocabulary words — domain-specific terms, acronyms
Technical terminology and jargon — product names, technical expressions, unknown domain context
Take note of your weak areas. They will indicate where Watson STT training is required and what to validate as you go through your multiple iterations.
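If you want a quick sanity check outside of sclite, WER is simply the word-level edit distance between the reference and the hypothesis divided by the number of reference words. A minimal sketch in Python:

# Minimal Word Error Rate (WER) sketch: edit distance over reference length.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("show me my h c p c s codes", "show me my hick picks codes"))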
Create a Language Model Adaptation/Customization
Out of the 3 components available for model adaptation, the Language Model Adaptation is the one that delivers the biggest bang for the buck. Watson STT is a probabilistic and contextual service, so training can include repetitive words and phrases to 'weight' the chance of the word being transcribed. The focus of training text data should be on 'out-of-vocabulary' words and known words that the solution struggles with. Additional emphasis can also be put on high frequency in-vocabulary words.
To create a Language Model Adaptation/Customization, the steps are the following:
Create a new custom model by running the “curl” command below:
curl -X POST -u "apikey:{apikey}" \
--header "Content-Type: application/json" \
--data "{\"name\": \"Example model\", \"base_model_name\": \"en-US_BroadbandModel\", \"description\": \"Example custom language model\"}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations"
You will get a customization id similar to:
{
  "customization_id": "74f4807e-b5ff-4866-824e-6bba1a84fe96"
}
This ID is your placeholder that you will use to add training data and “recognize” API calls. There is no limit in the number of custom models you can create within a Watson STT service but you can only use one custom model at a time in API calls.
Create a UTF-8 text file with utterances and add it to the new custom model
Here’s an example — “healthcare.txt” — that contains gaps identified during the first experiment.
To add the file to your newly created custom model with the customization ID, run the following “curl” command:
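Based on the documented corpora endpoint, the call looks like the following (the corpus name, here "healthcare", is your choice):

curl -X POST -u "apikey:{apikey}" \
--data-binary @healthcare.txt \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/corpora/healthcare"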
You can add as many text files as you want within a single custom model, as long as you do not exceed the maximum of 10 million total words.
Add custom words to the custom model
You can use custom words to handle specific pronunciations of acronyms within your domain. One example in our healthcare domain is the Healthcare Common Procedure Coding System (HCPCS). A common pronunciation we see for it is "hick picks". You can configure a custom word so that when a caller says "hick picks", Watson STT transcribes "HCPCS" instead. To add this custom word to your existing custom model, you run the following "curl" command:
curl -X PUT -u "apikey:{apikey}" \
--header "Content-Type: application/json" \
--data "{\"sounds_like\": [\"H. C. P. C. S.\", \"hick picks\"]}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/words/HCPCS"
For more details, check the documentation on how to add multiple words.
Train the custom model
Every time you add, update or delete training data to your custom model, you must train it with the following command:
curl -X POST -u "apikey:{apikey}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/train"
You can check the status of the custom model by running this command:
curl -X GET -u "apikey:{apikey}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}"
When you create the custom model, the status is “pending”. When you add data to it, after the processing is complete, it moves to “ready”. When you issue the train command, the status changes to “training”. When the training is done, it shows “available” and your custom model is ready to use.
New Experiment with The New Language Model Adaptation
Run experiments, review, analyze, adjust then re-test | Photo by Trust “Tru” Katsande on Unsplash
Now that we have a new custom model, let’s re-run the same previous experiment against it and review the results. Check the gaps you have identified from your baseline and validate your improvements. It does not need to be perfect. As long as you have the correct Watson STT transcription with high confidence scoring (0.8 or more), you are good to go.
Also, make sure you are not experiencing any regression on good results you already had in your baseline.
Keep iterating your experiments, identify gaps and improve your training, using ONLY the Language Model Adaptation for now. Based on past project experiences, I got the best results and improvements with it at first. In discussions, I use the 80/20 rule: 80% of your improvements will be with the Language Model Adaptation, 20% with your Acoustic Model Adaptation.
Create an Acoustic Model Adaptation / Customization — If Needed
Wait a minute! What do you mean by “If needed”?
I have heard in numerous discussions and meetings that the Acoustic Model Adaptation will solve ALL the Watson STT issues. Like any feature and functionality, you have to be smart. Keep in mind that the Base Model already contains some great audio training data that can handle light accents and some light noise.
From my past experiences, the only time I have ever needed it is when I dealt with heavy thick English accents or a specific noisy environment. I refer to these as “edge cases”, when something cannot be resolved with Language Model training data.
Listen carefully to the audio and make sure you can clearly hear what is being said | Photo by Simon Abrams on Unsplash
The first thing to do before we ever consider using an Acoustic Model Adaptation is to identify reproducible patterns. A single failure does not mean you have to fix it. Can you consistently reproduce this issue with the same person? Or with different persons with the same accent or the same environment? If you answer yes, you have a pattern. Start collecting audio from them using your scripts. I recommend you collect at least 10 hrs of this pattern.
Now, listen to these audio files and make sure you can actually hear what is being said. If you do not understand what is being said, Watson STT will not do better. Discard these bad audio files.
Collect these audio files and transcribe them. Create a separate “pattern” training set with 8 hrs of audio and a “pattern” test set with the remaining 2 hrs (80/20 rule). As explained in Part 1, make sure you randomize properly and balance both sets with accents, devices, etc.
There are 2 ways to train a custom acoustic model:
Semi-supervised — training the custom acoustic model with a custom language model containing the human transcription of the audio files used in it
Unsupervised — training the custom acoustic model on its own. In this case, it’s trained with the Base Model.
For optimal results, we will do it semi-supervised. That’s why we transcribed the pattern audio files we collected.
Follow the instructions above to create another custom language model. Create a text file with the human transcriptions, then add it to the custom language model. Finally, train it and check until it's "available". This custom language model "helper" should ONLY be used to train your custom acoustic model. You should never use it for any other purpose. As you add more audio data, you will add their transcriptions to this "helper" and re-train.
To create a custom acoustic model, here are the instructions:
Create a new custom acoustic model by running the “curl” command below:
curl -X POST -u "apikey:{apikey}" \
--header "Content-Type: application/json" \
--data "{\"name\": \"Example acoustic model\", \"base_model_name\": \"en-US_BroadbandModel\", \"description\": \"Example custom acoustic model\"}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/acoustic_customizations"
You will get an acoustic customization id similar to:
{
  "customization_id": "74f4807e-b5ff-4866-824e-6bba1a84fe96"
}
Just like for the custom language model, you will use this ID to add audio training data and “recognize” API calls. There is no limit in the number of acoustic custom models you can create within a Watson STT service but you can only use one custom acoustic model at a time in API calls.
Create a zip file with the pattern audio files from your training set, and add it to the new custom acoustic model
Here's an example — "audio2.zip" — that would contain your pattern audio files. Run the following "curl" command to add the zip file to your newly created custom acoustic model with the customization ID:
curl -X POST -u "apikey:{apikey}" \
--header "Content-Type: application/zip" \
--header "Contained-Content-Type: audio/l16;rate=16000" \
--data-binary @audio2.zip \
"https://stream.watsonplatform.net/speech-to-text/api/v1/acoustic_customizations/{customization_id}/audio/audio2"
The amount of audio data has to be at least 10 minutes but cannot exceed 200 hours. The maximum file size must be less than 100 MB. For more information, see Guidelines for adding audio resources.
Train the custom acoustic model, referencing the custom language model containing the transcriptions (semi-supervised)
To train the acoustic custom model using the custom language model with the transcriptions, you run the following “curl” command:
curl -X POST -u "apikey:{apikey}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/acoustic_customizations/{customization_id}/train?custom_language_model_id={customization_id}"
You can check the status of the custom model:
curl -X GET -u "apikey:{apikey}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/acoustic_customizations/{customization_id}"
New Pattern Experiment with The New Acoustic and Language Model Adaptation / Customization
Experiment with audio matching your “pattern” (accents, environment, etc) | Photo by Antenna on Unsplash
Using the pattern audio files from your test set, run an experiment against your new custom acoustic model and the very first custom language model you created earlier — do not use the custom language model "helper" in any experiment.
Here’s a “curl” command showing how to use both custom acoustic model and custom language model:
curl -X POST -u "apikey:{apikey}" \
--header "Content-Type: audio/flac" \
--data-binary @audio-file1.flac \
"https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?acoustic_customization_id={customization_id}&language_customization_id={customization_id}"
Compare your results and make sure you have corrected the “pattern” issue.
Enhance your original test set by adding the “pattern” test set audio and transcription data. The more data you have in your test set, the more accurate the results will be.
Using the Grammar Feature for Data Inputs
For general utterances to identify intents and entities, training your Watson STT with a custom language model and custom acoustic model will do the trick. But what about when you handle specific data inputs like a part number, a member ID, a policy number or a healthcare code?
In speech recognition, you encounter certain characters that get misrecognized or confused with others. I personally call these “speech confusion matrix”. Here are some examples below:
A. vs H. vs 8
F. vs S.
D. vs T.
B. vs D.
M. vs N.
2 vs to vs too
4 vs for
There are multiple factors that can cause this confusion like accent or audio quality. Watson STT Grammar is a feature we can use to improve accuracy for these data inputs, and mitigate this confusion. It supports grammars that are defined in the following standard formats:
Augmented Backus-Naur Form (ABNF): Plain-text similar to traditional BNF grammar.
XML Form: XML elements used to represent the grammar.
For more information on creating a grammar configuration, check the Watson STT Grammar documentation and the W3C Speech Recognition Grammar Specification Version 1.0.
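As a rough illustration (syntax follows the SRGS ABNF specification; the vocabulary here is made up), a simple confirmation grammar along the lines of "confirm.abnf" might look like this:

#ABNF 1.0 ISO-8859-1;
language en-US;
mode voice;
root $confirm;
public $confirm = yes | yeah | correct | no | nope | wrong;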
To train Watson STT with a grammar configuration, you will need a custom language model. The steps are :
Create a new custom model or use an existing one
I recommend that you create a separate custom language model dedicated to all your grammar configurations. This is purely for ease of administration and maintenance. You can use an existing custom language model if you want. See the section "Create a Language Model Adaptation/Customization" for more information.
Add the grammar configuration to the custom language model
If your grammar configuration is in ABNF format, run this "curl" command:
curl -X POST -u "apikey:{apikey}" \
--header "Content-Type: application/srgs" \
--data-binary @confirm.abnf \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/grammars/confirm-abnf?allow_overwrite=true"
If your grammar configuration is in XML format, execute the following "curl" command:
curl -X POST -u "apikey:{apikey}" \
--header "Content-Type: application/srgs+xml" \
--data-binary @confirm.xml \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/grammars/confirm-xml?allow_overwrite=true"
Note: I frequently use the "allow_overwrite" query parameter as it allows you to overwrite the existing grammar configuration as you update it.
Validate your grammar configuration
Once your grammar configuration is uploaded to your custom language model, I find this command very useful to validate it and identify issues:
If there is no error, you should see the OOV results:
{ "results": [ { "OOV_words": [] } ], "result_index": 0 }
Here's an example of an error you can see during the validation. It will give you an indication of where the error is located in your grammar file:
{ "code_description": "Bad Request", "code": 400, "error": "Invalid grammar. LMtools getOOV grammar - syntax error in RAPI configure: compiler msg: Syntax error, line number: 10, position: 21: " }
Check the status of your grammar
This “curl” command will show you the status of all your grammar configurations in your custom language model:
curl -X GET -u "apikey:{apikey}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/grammars"
You should be getting a response similar to the following:
{"grammars": [{ "out_of_vocabulary_words": 0, "name": "confirm.xml.xml", "status": "analyzed" }]}
Note: The “status” may be “being_processed” (still processing the grammar), “undetermined” (see below) or “analyzed” (completed and valid).
Train the custom model
As mentioned previously, every time you update a custom language model, you have to train it:
curl -X POST -u "apikey:{apikey}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/train"
… then check the status:
curl -X GET -u "apikey:{apikey}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}"
When the training status is “available”, you are ready to use the grammar.
Using a grammar in your “recognize” request
As part of each “recognize” request, you can only use one custom language model, one acoustic custom model and one grammar configuration. The example below shows the use of a custom language model and a grammar configuration:
curl -X POST -u "apikey:{apikey}" \
--header "Content-Type: audio/flac" \
--data-binary @audio-file.flac \
"https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?language_customization_id={customization_id}&grammar_name={grammar_name}"
Re-run Experiments with New Updated Test Set and Establish a New Baseline
Re-run the same experiments you first ran against the Base Model but now using the new custom acoustic model, new custom language model and new grammar configuration where applicable. Review your results and compare. Make sure you are showing improvements and not regressing in any other areas.
Identify new gaps, rinse and repeat.
When your results are optimal, this will become your new baseline.
In Part 3 of this series, I will show you how to configure and train STT with a Grammar to handle specific data input strings. | https://medium.com/ibm-data-ai/watson-speech-to-text-how-to-train-your-own-speech-dragon-part-2-training-with-data-5116dac3f774 | ['Marco Noel'] | 2019-11-22 13:48:56.985000+00:00 | ['Ibm Watson', 'Methodology', 'Artificial Intelligence', 'Speech Recognition'] |
Here's how you can preview your Sketch designs on Android Phone | Sketch has got a lot of fanfare recently and, if you ask me personally, I love using it. (In fact, I am in the process of creating a full-fledged tutorial on how to use Sketch for your daily work).
While Sketch has some very robust features when it comes to assisting you in designing for the iOS platform, it falls short in a lot of places when helping you design for Android. When the developers of Sketch asked their users what they could do to improve the workflow, a lot of designers responded with a request to make Sketch Mirror for Android. Sketch Mirror is an iOS-only app which lets you preview your designs directly on devices (using some smart web sockets trickery, I think). Unfortunately, Bohemian Coding has not yet developed it for Android.
But don't worry. I found a workflow that lets you preview your designs directly on your Android phone with a keystroke in Sketch. It involves a Sketch plugin called sketch-preview by Marc Schwieterman, Skala Preview for your Mac and Skala View for your Android phone.
Here is a step-by-step guide on how you can start previewing your designs on your Android devices.
Step 1 : Download the Sketch Preview plugin from this link. (freeware). Download Skala Preview for Mac from this link. (freeware). Download Skala View for your Android device from this link. (freeware)
Step 2 : Install the Sketch Preview plugin by clicking on the Plugins menu and selecting Reveal Plugins Folder. Unzip your Sketch Preview plugin files and paste them in the folder that was opened by the Reveal Plugins Folder command. Restart Sketch. On restarting you will get 2 new options in the Plugins menu: a) Preview, b) Preview Setup. Read the documentation on the plugin page to get an in-depth understanding of Preview Setup.
Step 3 : Install Skala Preview on your Mac and Skala View on your Android device
Step 4 : Connect Skala Preview and Skala View. To do this, make sure that your Mac and Android device are on the same wi-fi network. In Skala View on your Android device, click on the monitor/tv icon and select your Mac. When you do that you will be prompted to authorize the device on your Mac in the Skala Preview app. Approve the authorisation.
Step 5 : Preview your design on your device by selecting the artboard that you want to preview and pressing ⌘P. This will push your artboard to Skala Preview on your Mac, which will sync it with your Android device. Every time you update your design, press ⌘P and see the live preview of your updated design on your device.
At times there are problems in syncing the designs between the Mac and the Android device. If this happens to you, just go ahead and click on the monitor/tv icon again on your Android device, select your Mac again, and everything should work just fine.
If you have any better workflow to preview designs on Android, please feel free to share as a response to this story.
Follow me on twitter @jaymanpandya
P.S. : If you do not want to buy the Sketch Mirror from Apple App Store, you can use the same workflow to preview your designs on your iOS device. You can download the Skala View for iOS from here. | https://medium.com/sketch-app-sources/here-s-how-you-can-preview-your-sketch-designs-on-android-phone-d4584d13b722 | ['Jayman Pandya'] | 2015-09-12 18:52:35.671000+00:00 | ['Android', 'Sketch', 'Design'] |
Hilda’s Story: The Evolution of Awareness | Bundesarchiv Koblenz/The United States Holocaust Memorial Museum. Poster proclaims Hitler will become President of Germany
Some years ago, I had a client, Hilda. This was not her real name. Hilda had a marvelous story to tell. She was German and had come to America with her G.I. husband and daughter after WWII, became an American citizen and lived a quiet and productive life in a small city of 16,000 in the Midwest.
I was attracted to her story because it presented me a point-of-view of history I knew well but from the perspective of the other side. We have been flooded with stories about World War II, the Nazi’s and the Holocaust throughout our lives, but rarely, if ever, have we been privy to a glimpse of the other side without seeing it through the lens of our own narrative and bias.
Hilda’s story began with her being the youngest of 11 children. She had five brothers and five sisters and lived a quiet rural existence about 50 miles from Dresden, Germany in the Sudetenland area of what was then Czechoslovakia. Hilda’s father was German and a World War I veteran. He was considerably older than his Czech wife.
By the time we meet Hilda in the mid to late 1930s, she is 11–12 years old and her father has died leaving her and her mother to share their large house with two older brothers and their families. The shop for the family carpentry business and another apartment that was rented to a Czech family filled all the space in the house. Hilda is only a few years from finishing her formal education.
One day their Czech renter stopped Hilda and engaged in a conversation in which he revealed to her horror stories about the Nazis and showed her a book with some vivid pictures. She was aghast and unbelieving that her people, the German people, could do such things. She remained in disbelief but said nothing to the rest of her family of this conversation and the Czech family soon left and migrated to Switzerland.
Hilda was witness to the German occupation of the Sudetenland in March 1938. Prior to that moment she experienced the withdrawal of contact from Czech friends and heard stories about the awful Germans and how the Czechs would fight, but when the time came there was no fighting and the German army came, set up camps, and began providing food to the local German population. To these people the Germans were heroes. They were there to help them. The stories Hilda had heard proved false.
Everything around her changed. The school had new teachers and new textbooks. Her mother received a pension for having eleven children. Hilda began to experience the meaning of being a youth growing up in the Third Reich. She took advantage of the offerings for social interaction and access to goods and services. She joined the Hitler Youth because everyone joined. They had fun. They did fun things and visited fun places. She went to movies, she had no reason not to soak up and accept the propaganda.
Her five brothers soon joined the German army. Hilda finished her formal education at 14 years old (typical for Europe at the time for those not bound for higher education) and went to work in a factory. It was late 1940 and she worked in a former Zeiss optical factory where she inspected the gun sights for anti-aircraft cannons. There were imported French prisoners of war also working at this factory. Her best friend engaged in a relationship with one of the French prisoners and was caught. Hilda was shocked coming to work one day and discovering her friend, hair shaved off, in shackles, in the stocks at the factory entrance, wearing a sign that said, "I slept with the enemy." People were encouraged to pick up and throw whatever was available at her. Both she and the French prisoner disappeared and were never seen again.
This was Hilda’s first wake up. Her second wake up incident came a bit later. Because she was the youngest and her father no longer there to make the decisions, her mother allowed her to learn Czech and develop friendships with them. After her friend had disappeared she fell into a relationship with a young Czech boy she knew. They met secretly. It would have been risky for both to do otherwise. They became close until one day he suddenly disappeared. She found out he was part of the Czech underground.
Hilda’s job at the optical factory came to an end and she was given the choice of either moving to work in a munitions factory close by or enter training to become a nurse. She realized the risks of working in a munitions factory and wisely chose to become a nurse. She was highly motivated to help people and being a nurse brought her closer to that goal than working in a munitions factory.
She went away to training and became a nurse earning high marks. She was assigned to work in a hospital where the wounded from the Eastern Front were being treated. One day she encountered a young soldier, a young man no older than her, suffering many severe wounds, but who was being provided the best care and given lots of extra attention on orders from higher up to ensure his survival. Over a period of weeks and months, Hilda helped care for the young man as he slowly regained his health and strength. One day Hilda asked what he thought the future held for him. He replied he thought he would be returned to his unit to resume his place on the Eastern Front.
When the day came that the young man was ready for release, a group of soldiers appeared, and he was escorted outside where everyone had been instructed to assemble. He was then lined up along with others against a wall and shot as an example of what awaited deserters and a warning to others. He had been a soldier on the Eastern Front and had fled his position in panic, only to be caught by the SS, who were always behind the lines with orders to shoot deserters. They preserved his life so he could be used as an example for others. Hilda thought of her five brothers, all on the Eastern Front, and wept. She was confronted with the reality of another harsh example of the system she served.
By now Hilda, although still in her mid-teens, was beginning to see through the propaganda and lies she had been indoctrinated with for most of her life. She began to realize there was an insurmountable gulf between what she had been told and what reality was. She also realized she was trapped in the storm with nowhere to go and no way out. All she could do was try to hang on and survive.
Her final epiphany came after she was put in detention for going beyond the limits of a weekend pass and going two extra miles to get home to see her mother. She was thrown in jail and threatened with death before being allowed to return to her unit. While in jail she was able to climb up to a high window where she could listen to the conversations of other prisoners. There were lots of slave laborers incarcerated from Poland and other occupied East European countries. Since Hilda was half-Czech and spoke the language fluently, she could understand much of what was being talked about and shared, and thereby learned about prisoner treatment, slave labor, death camps and other details of things she previously knew nothing about. She was now aware of what she was a part of and had experienced its brutality.
The rest of Hilda’s story includes, among other things, surviving the bombing of her hospital, the firebombing of Dresden, nearly being executed by a zealous Nazi officer, and multiple miraculous escapes from death at the hands of Russian soldiers. At the end of the war, Hilda was 18 years old.
Hilda grew up believing with absolute faith what she was told, that her people, the German people, were truly exceptional. They were special, they were superior, and were destined to lead mankind. Millions of human beings suffered and died because they believed this false narrative. We might do well to pause and reflect on where we are and question what we are hearing and being told. | https://jerry45618.medium.com/hildas-story-the-evolution-of-awareness-d5f2915dd3a9 | ['Jerry M Lawson', 'De Omnibus Dubitandum'] | 2019-03-11 10:22:33.102000+00:00 | ['Politics', 'Psychology', 'Holocaust', 'History', 'Culture'] |
Machine Learning Made Easy: An Introduction to PyTorch | Deep Learning with neural networks is currently one
of the most promising branches of artificial intelligence. This innovative
technology is commonly used in applications such as image recognition, voice
recognition and machine translation, among others.
There are several options out there in terms of technologies and libraries, TensorFlow — developed by Google — being the most widespread nowadays.
However, today we are going to focus on PyTorch, an emerging alternative that is quickly gaining traction thanks to its ease of use and other advantages, such as its native ability to run on GPUs, which allows traditionally slow processes such as model training to be accelerated. It is Facebook’s main library for deep learning applications.
Its basic elements are tensors, which can be thought of as arrays with one or several dimensions.
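To get a feel for them, here is a minimal sketch that builds a small tensor and, when a GPU is available, moves the computation onto it (the values are arbitrary):

import torch

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])  # a 2x2 tensor
y = x * 2 + 1                               # element-wise operations
if torch.cuda.is_available():
    y = y.to('cuda')                        # run on the GPU when one is present
print(y.shape)                              # torch.Size([2, 2])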
Artificial Neural Networks (ANNs)
An Artificial Neural Network is a system of nodes that are interconnected in an orderly manner and arranged in layers and through which an input signal travels to produce an output. They receive this name because they aim to simply emulate the workings of the biological neural networks in animal brains.
They are made up of an input layer, one or more hidden layers and an output
layer, and can be trained to ‘learn’ how to recognize certain patterns. This
characteristic is why they are considered part of the ecosystem of technologies known as artificial intelligence.
ANNs are several decades old, but have attained great importance in recent years due to the increased availability of the large amounts of data and computing power that are required for them to be used to solve complex problems.
They have marked a historic milestone in applications that have traditionally been resistant to classical, rule-based programming, such as image or voice recognition.
Installing PyTorch
If we have the Anaconda environment installed, PyTorch is installed with the following command:
conda install pytorch torchvision -c pytorch
Otherwise, we can use pip as follows:
pip3 install torch torchvision
Example of an ANN
Let us look at a simple case of image classification by deep learning using the well-known MNIST dataset, which contains images of handwritten numbers from 0 to 9.
Loading the dataset
import torch, torchvision
In order to be able to use the dataset with PyTorch, it must be transformed into a tensor. To do this, we must define a T transformation that will be used in the loading process.
We must also define a DataLoader, a Python generator object whose purpose is to provide the images in groups of batch_size images at a time.
Note: It is typical in neural network training to update the parameters every N inputs instead of every individual input. However, excessively increasing the group size could end up taking up too much RAM in the system.
T = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
images = torchvision.datasets.MNIST('mnist_data', transform=T, download=True)
image_loader = torch.utils.data.DataLoader(images, batch_size=128)
Defining the topology of the ANN
Next, we need to decide what topology our network is going to have. An ANN consists of an input layer, one or more intermediate or, as they are commonly known, hidden layers, and an output layer.
The number of hidden layers, as well as the number of neurons in them,
depends on the complexity and the type of the problem. In this simple case, we are going to implement two hidden layers of 100 and 50 neurons respectively.
The class we must create inherits from the nn.Module class. Additionally, we will need to initialize the superclass.
import torch.nn as nn

# we define the neural network
class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        self.input_layer = nn.Linear(28*28, 100)
        self.hidden_layer = nn.Linear(100, 50)
        self.output_layer = nn.Linear(50, 10)
        self.activation = nn.ReLU()

    def forward(self, input_image):
        input_image = input_image.view(-1, 28*28)  # convert the image to a vector
        output = self.activation(self.input_layer(input_image))  # pass through the input layer
        output = self.activation(self.hidden_layer(output))  # pass through the hidden layer
        output = self.output_layer(output)  # pass through the output layer
        return output
The input layer
It has as many neurons as there is data in our samples. In this case, the inputs are images of 28x28 pixels showing the handwritten numbers. Therefore, our input layer will comprise 28x28 = 784 neurons.
The output layer
It has as many possible outputs as there are classes in our data — 10 in this case (digits from 0 to 9). For every input, the output nodes will yield a value — the largest of which identifies the detected output class.
The activation function

This function defines the output of a node according to an input or a set of inputs. In this case, we will use the simple ReLU (Rectified Linear Unit) function.
The forward function

This function defines how the calculations will be performed from the input data, which will go through the different layers, to the output. It starts by flattening the input from a two-dimensional 28x28-pixel tensor to a one-dimensional tensor of 784 values that are transferred to the input layer using the view function.
These values will then be propagated to the hidden layers by means of the activation function and, finally, to the output layer, which will return the result.
Training the ANN
In order to successfully train our network, we need to define some parameters.
from torch import optim
import numpy as np

classifier = Classifier()  # instantiate the neural network
loss_function = nn.CrossEntropyLoss()  # loss function
parameters = classifier.parameters()
optimizer = optim.Adam(params=parameters, lr=0.001)  # algorithm used to optimize the parameters
epochs = 3  # number of times each sample is passed through the network during training
iterations = 0  # total number of iterations, used to display the error
losses = np.array([])  # array that stores the loss at each iteration
First we will instantiate an object of the previously defined class, which is termed a classifier.
A loss function
We will use this function to optimize the parameters; their value will be minimized during the network training phase. There are many loss functions available for PyTorch. In this case, we will use cross entropy loss, which is recommended for multiclass classification situations such as the one we are discussing in this post.
An optimizer
This object receives the model and learning rate parameters and iteratively updates them according to the gradient of the loss function during the training of the network. In this case, we have used an Adam algorithm, although others can be used as well.
The epochs variable sets the number of times the dataset will be passed through the ANN for training purposes. This practice is a typical convention in the training of deep learning systems. The other parameters will be used to store and subsequently display the results.
Training loop
from torch.autograd import Variable  # needed to compute gradients

for e in range(epochs):
    for i, (images, tags) in enumerate(image_loader):
        images, tags = Variable(images), Variable(tags)  # convert to Variable for differentiation
        output = classifier(images)  # compute the output for the images
        classifier.zero_grad()  # set the gradients to zero at each iteration
        error = loss_function(output, tags)  # compute the error
        error.backward()  # obtain the gradients and propagate them
        optimizer.step()  # update the weights with the gradients
        iterations += 1
        losses = np.append(losses, error.item())
Training will take place the number of times set in the epochs variable, which is reflected in the outer loop. The following steps will then be carried out:
1. Extracting the images and their tags from the previously defined image_loader object.
2. Transforming the images and tags to the Variable type, since this data type allows us to store the gradients and thus optimize the parameters, or weights, of the model.
3. Transferring the input (images) to the classifier model.
4. Resetting the gradients. If we do not perform this operation, the gradients would start accumulating, giving rise to erroneous classifications.
5. Calculating the loss, which is a measure of the difference between the forecast and the tags that are present.
6. Obtaining and propagating the gradients with the backward() function.
7. Updating the weights with the optimizer object. This is known as the backpropagation method.
8. Saving the number of iterations and the losses in each one of them in order to be able to display them.
Results
Now we are ready to see the outcome of our training! To this end, we will use the matplotlib library. Since we saved the iterations and the loss, we just have to plot them in a graph to have an idea of how much progress our ANN has made.
import matplotlib.pyplot as plt

plt.style.use('seaborn-whitegrid')
# plot the loss at each iteration
plt.plot(np.arange(iterations), losses)
It can be seen in the graph above how the classification error has decreased as the ANN has been trained.
Conclusion
There are several — both free and proprietary — options out there for programming ANNs. Although Google’s TensorFlow is still the undisputed market leader, little by little interesting alternatives are emerging that might add value to the ecosystem due to having native compatibilities, their ease of use, and so on. | https://medium.com/swlh/machine-learning-made-easy-an-introduction-to-pytorch-6e24dfc377f1 | ['Paradigma Digital'] | 2020-12-04 09:18:21.917000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'Data Science', 'Deep Learning', 'Pytorch'] |
Why ‘Read 50 Books a Year’ Articles Are a Scam | Why ‘Read 50 Books a Year’ Articles Are a Scam
It’s not the quantity that matters
Photo by Maia Habegger on Unsplash
If you think about it, the high consumption of written content does not differ from the high consumption of audiovisual content. In other words, binge-reading is the same as binge-watching.
Yet we glorify the former while we vilify the latter. Why? I think it’s because we’ve been socialized and brainwashed by the self-help culture into seeing binge-reading (usually masquerading as read 50+ books a year content) as a worthwhile activity: it builds character, helps us develop knowledge, teaches us to discern arguments, and, well, helps to sell the books of people who depend on that.
In contrast, binge-watching is the face self-help culture slaps on sloth, aimlessness, and everything that’s wrong with the world.
But here’s how I see it.
All the actual benefit of book reading comes after you’ve read the book.
It’s the thinking about the concepts the book presents that makes you understand the world differently.
It’s implementing the lessons into your own little pocket of the universe, either by changing yourself or the environment.
It’s teaching the knowledge to others.
But it’s not the book reading per se that is beneficial. Chronic book consumption has the same usefulness as money stuffed in your mattress: it’s useless unless used properly.
The problem is that we think book reading itself is good. It’s what all the successful people do. Sure, but it’s not all that they do.
This is one of the most insidious and bizarre cases of mistaking the map for the territory that I’ve come across so far. The reason for that is, I believe, a misattribution of cause and effect. We see that people read books and then we see them succeed. Ergo, we surmise that book leads to success. But, just like throwing a bunch of wheat into your oven won’t produce a loaf of bread, binge-reading and consuming tons of content won’t produce any success (could still be fun though).
The reason for this hopeful misattribution, I think, is that we’d looove to believe that book reading works. Why? Because book reading is easy. After you’ve mastered it, it’s honestly one of the easiest activities in the world. And anyone can do it. So, we hope-think reading launches us to riches and fame. But it rarely does.
With binge-watching, we at least don’t pretend we’re learning, improving, or getting ahead. Binge-watching is honest, in a sense: we know that it won’t catapult us into the stratosphere of achievement, and so we’re chilled. There’s no hope. There’s no bright light at the end of the tunnel. Nope, there’s just a glorious auto-play and a magnificent feeling of worthlessness (or, oddly, achievement) after you’ve finished a series-binge.
But hey, if a nice binge is what you need (occasionally) it shouldn’t feel that bad, right?
So here’s what I propose: how about we de-glorify binge-reading and de-vilify binge-watching?
If we assume what I wrote till here isn’t absolute bullshit, the only difference between binge-watching and binge-reading is just the format. The output of that activity is the same — not much.
But since we all love reading, here’s what you can do to make binge-reading great again: You become a connoisseur of books.
What do I mean? | https://medium.com/publishous/why-read-50-books-a-year-articles-are-a-scam-6a9a90bc3e0e | ['Marek Veneny'] | 2020-09-07 18:21:33.710000+00:00 | ['Reading', 'Books', 'Advice', 'Self', 'Personal Development'] |
Mutation Testing with PITest and Spock 2 | Gradle Project
First of all, we are going to take advantage here from Gradle and create our basic project from scratch by using SDKMAN!:
$ mkdir pitest-spock-example
$ cd pitest-spock-example
$ sdk install gradle
$ ./gradlew init
For this first contact with mutation testing, we are going to implement an extremely simple calculator package, which contains only two classes: Operations and Numbers.
public class Operations {
public static int add(int num1, int num2) {
return num1 + num2;
}
public static int subtract(int num1, int num2) {
return num1 - num2;
}
}

public class Numbers {
public static boolean isNatural(int num) {
boolean result = false;
if (num >= 0) {
result = true;
}
return result;
}
}
Spock Specifications
Once our implementation is clear, in order to start using Spock in our project, we only need to add the Groovy plugin and the Spock dependencies into our build.gradle:
plugins {
id 'groovy'
}
repositories {
mavenCentral()
maven { url "https://oss.sonatype.org/content/repositories/snapshots/" }
}
dependencies {
testCompile platform("org.spockframework:spock-bom:2.0-M4-groovy-3.0")
testCompile "org.spockframework:spock-core"
}
Once Spock is enabled, we can take advantage of this super handy tool, which doesn't need any extra library to cover all the unit testing features we need, like mocks or asserts.
Note that, as a collateral benefit of this configuration, everything is aligned to take advantage of Groovy 3.
On the other hand, a parametrized test is the best way to automate a specification against a specific dataset. Therefore, instead of creating different tiny tests for each scenario, we can cluster them into a single test block, which will be executed for each of those cases.
Conveniently, one of the best features of Spock is still the “where” block, which enables the implementation of parameterized tests in a really readable way thanks to its DSL:
class OperationsSpec extends Specification {
@Unroll
def "Should return #result given #num1 + #num2"() {
expect:
Operations.add(num1, num2) == result
where:
num1 | num2 | result
50 | 0 | 50
76 | 0 | 76
}
}

class NumbersSpec extends Specification {
@Unroll
def "Should return #result given #num"() {
expect:
Numbers.isNatural(num) == result
where:
num | result
10 | true
50 | true
-10 | false
-50 | false
}
}
Then, by running the tests with the proper Gradle command, we will verify if it is working and the unit tests are passed:
$ ./gradlew test

BUILD SUCCESSFUL in 4s
3 actionable tasks: 3 executed
Mutation Tests
As already mentioned, to execute our mutations we are going to use the best tool in the Java ecosystem to do it, which is PITest. Please check the official documentation to learn more about it.
Fortunately, to start using mutation testing in our project, we only need to add the Gradle plugin for PITest and configure it into our final build.gradle:
plugins {
id 'groovy'
id "info.solidsoft.pitest" version '1.5.2'
}
pitest {
junit5PluginVersion = '0.12'
targetClasses = ['mutations.*']
threads = 4
outputFormats = ['HTML']
timestampedReports = false
}
In addition, thanks to the fact that Spock 2 is built on top of JUnit5 and the PITest last version is fully compatible with this framework, the combination of both should work out of the box.
Especially important is the junit5PluginVersion parameter, which adds the dependency to pitest-junit5-plugin and sets “testPlugin” to “junit5”.
That’s it, we are ready to execute the mutation testing in our project just with the PITest command and take a look at the generated report:
$ ./gradlew pitest

>> (...)
>> Generated 8 mutations Killed 4 (50%)
>> Ran 57 tests (7.12 tests per mutation)

$ open build/reports/pitest/index.html
Exploring the report, we are able to observe the current line and mutation coverages of our code:
The results are quite interesting but not as good as they should be: 88% line coverage, but an especially improvable 50% mutation coverage.
Quickly analyzing the problems that we have in these tests, we can distinguish between three main issues:
Math Mutation survived after switching the addition operator in the Operations::add method
Lack of test coverage on the Operations::subtract method
Conditional Boundary Mutation survived after modifying the conditional in the Numbers::isNatural method
To improve the coverage, let’s go step by step in the following sections to understand what is happening and how to fix each of these cases.
Math Mutator
First of all, the math mutator replaces binary arithmetic operations for either integer or floating-point arithmetic with another operation. The replacements will be selected according to the operations found in the code.
For our first case, one of these math mutators changed the operation in our code, and this variation of our code has survived the test (false positive), which could be a problem for us:
Our tests are not accurate enough, since (50 + 0) and (50 - 0) both evaluate to 50
Secondly, although it is not marked in red (still white) in the previous report, the other problem here is that our test coverage is incomplete. In particular, we are not testing the subtract method at all in our specification.
To fix both of these issues, we need to write a new case where the result of the addition (x+y) operation is different from the subtraction (x-y) operation, and to cover the subtract method, we must implement another test in our spec:
@Unroll
def "Should return #result given #num1 + #num2"() {
expect:
operations.add(num1, num2) == result
where:
num1 | num2 | result
50 | 0 | 50
76 | 10 | 86
}
@Unroll
def "Should return #result given #num1 - #num2"() {
expect:
operations.subtract(num1, num2) == result
where:
num1 | num2 | result
50 | 0 | 50
76 | 10 | 66
}
Executing again the mutation testing Gradle task, all the previous errors should be fixed:
Conditionals Boundary Mutator
For our last scenario, let's check the conditionals boundaries, where our mutator is capable of replacing the following relational operators in our code with one another: <, <=, >, or >=
This time our test didn't cover the boundary of our conditional (num == 0).
Acknowledging the problem, namely that our mutated code (with num > 0) survived this new round of tests, we should add this case to our dataset and cover the boundary case:
def "Should return #result given #num"() {
expect:
numbers.isNatural(num) == result
where:
num | result
0 | true
(...)
}
And running our PITest again, the previous error should be fixed: | https://medium.com/swlh/mutation-testing-with-pitest-and-spock-2-dc4451d285dd | ['Ruben Mondejar'] | 2020-12-23 15:10:59.061000+00:00 | ['Spock', 'Java', 'Junit', 'Mutation Testing', 'Gradle'] |
How Convolutional Neural Network works. | First let’s understand the Convolution operation
Take 2-D tensors of size 5*5 and 3*3. Now place the 3*3 tensor over the 5*5 one and take the dot product, then repeat this process by sliding the small tensor over the large tensor.
This operation is known as Convolution. As the smaller tensor slides in two dimensions, it is specifically called a 2-Dimensional Convolution.
Now, in CNNs the resulting tensor is known as a Feature Map.
Gif by Freecodecamp.org
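If you want to check the numbers yourself, here is a minimal PyTorch sketch of the same operation (the 5*5 input and 3*3 filter mirror the example above; the values are random):

import torch
import torch.nn.functional as F

image = torch.randn(1, 1, 5, 5)        # batch of 1, single channel, 5*5 input
kernel = torch.randn(1, 1, 3, 3)       # one 3*3 filter for that single channel
feature_map = F.conv2d(image, kernel)  # slide the filter and take dot products
print(feature_map.shape)               # torch.Size([1, 1, 3, 3])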
So in CNNs the large tensor is an image while the small tensor is a filter.
Wait, did I just say Filter?
Photo by Kai Pilger on Unsplash
Okay, Let’s see what exactly a filter is.
As the name says, its job must be to filter something out, right? That “something” is a feature.
Low and High-level Features.
So CNNs basically convert a high-volume image into a low-volume feature map by extracting the relevant features like edges, shapes, etc. (as you can see above).
Suppose the input is a colored image of (500*500), so its volume, or total number of pixels, is (500*500*3 = 750000). Now after applying multiple convolutional layers, its volume may be reduced to 50000 pixels.
Gif by Freecodecamp.org
Above you can see the convolution operation between the colored image and a filter.
Since the image is colored, it has 3 color channels, and as I mentioned above, this convolution is specifically in 2 dimensions, so there must be 3 channels in the filter too.
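A quick shape check makes this concrete (just a sketch; the number of filters and the sizes are made up):

import torch
import torch.nn.functional as F

rgb_image = torch.randn(1, 3, 500, 500)  # 3 input channels for a colored image
filters = torch.randn(8, 3, 3, 3)        # 8 filters, each with 3 channels to match the input
feature_maps = F.conv2d(rgb_image, filters)
print(feature_maps.shape)                # torch.Size([1, 8, 498, 498]), one feature map per filter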
You must be thinking about the Bias term. Filters and Bias are just the weights that get updated during the training part. With the help of Algorithms like Gradient Descent, the weights of the Filters and Bias terms gets updated to reduce the Loss. | https://medium.com/nerd-for-tech/how-convolutional-neural-network-works-ebf33827b951 | ['Harsh Mittal'] | 2020-04-25 14:48:27.673000+00:00 | ['Machine Learning', 'Classification', 'AI', 'Convolutional Network', 'Deep Learning'] |
Tips during the first two weeks of your any “Design Internship” | Congratulations on your internship!
It’s summer and many of us are embarking on our new and exciting journeys as design interns. At first, I thought about focusing my topic around a specific type of internship but while I was writing, I actually decided to keep it general so that the tips apply to Visual/UX/Product/Interaction or any other design-related internships. Assuming that you’ll have someone there to guide you through the internship, I won’t really stress about the size of the company either.
So that being said, congratulations on landing your design internship! I bet you’re extremely pumped up but at the same time nervous about what’s at stake and what you would need to start doing. That is why I decided to come up with some people- and work-related tips based on my personal experience that you might find useful.
If you have any other personal tips, please feel free to leave comments below! | https://uxplanet.org/tips-during-the-first-two-weeks-of-your-any-design-internship-964ad7bae5fd | ['Geunbae', 'Gb'] | 2017-06-29 04:49:43.726000+00:00 | ['Internships', 'Career Advice', 'UX', 'UI', 'Design'] |
Free Market Token selected to pitch at GITEX Future Stars global event in Dubai | GITEX Technology Week in Dubai October 14–18 2018 is one of the largest global tech events of the year. Free Market Token will be attending the event with NEM, exhibiting and presenting to an expected audience in the hundreds of thousands.
With attendees from 120+ countries and global media outlets unpacking the big conversations and latest solutions around AI, blockchain, robotics, cloud and other mega trends, GITEX is a multi-sensory experience of Future Urbanism across 18 halls with 4,000 exhibitors across 16 sectors.
This is where the world’s most imaginative ideas are seen live in action, where technologies like blockchain and AI go beyond being buzzwords to become business realities, and where industries evolve in real-time. This is where the hype gets real.
Selected from some of the best in the world, Free Market Token will pitch at GITEX Future Stars . GITEX Future Stars is the region’s biggest and fastest growing startup show with 1000+ startups, across 19 sectors, showcasing their inventions, and competing for top honors in the Supernova Challenge and four industry-sponsored Innovation Cups. | https://medium.com/freemarkettoken/free-market-token-selected-to-pitch-at-gitex-future-stars-global-event-in-dubai-3bf5b551bd | ['Free Market Token'] | 2018-09-14 04:19:13.772000+00:00 | ['Blockchain', 'Events', 'Nem Blockchain', 'Startup', 'Free Market Token'] |
What Is Missing From Schooling? | “You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and fail in life. You’ve got to hang experience on a latticework of models in your head.” - Charlie Munger
School largely emphasizes the accumulation of facts. The goal is to prepare you for a chosen career but the limitations are significant. We cannot expect schools to fully prepare us for any endeavor as experience is vital for the development of skills. However, without a latticework of mental models, the experiences we gain from applying the knowledge we possess will be fraught with errors.
Without Mental Models, Knowledge is Useless
What is a mental model? In the most basic sense, a mental model is how we see and interpret the world. As you can imagine, there are many ways to see and interpret daily events. Thus, we require a multitude of mental models if we are to succeed in life.
Charlie Munger and Warren Buffett attribute their success to the possession and application of a variety of mental models. In a talk given to The University of Southern California Marshall School of Business in 1994, Munger described several mental models he routinely uses when determining how to invest. These models include, but are not limited to, mathematics, accounting, statistics, psychology, biology, and microeconomics. These can be broken down into subcategories of probabilistic thinking, Bayesian updating, reciprocity, leverage, ecosystems, game theory, and incentives. Munger has developed a large breadth of models that allow him to choose the appropriate lens to view a particular situation and develop a well-reasoned solution.
How did he develop a variety of mental models? Not through rote memorization. Yes, he is a voracious reader. As is Buffett. But he then applies the lessons read to life experiences. This is far different from the ‘cram and forget’ method of learning in school.
Even if we approach learning with a wide net and work to foster a latticework of mental models, we need to understand how to appropriately apply them.
“In my whole life, I have known no wise people (over a broad subject matter area) who didn’t read all the time — none, zero.” — Charlie Munger
System 1 vs. System 2 Thinking
Photo by Priscilla Du Preez on Unsplash
Employing a breadth of mental models requires effort at all times. It requires substantial effort to acquire the mental models. But after the education, refinement of the models through experience is needed. This is far easier said than done.
When we use our knowledge in real-world situations or academic settings, we use one of two general systems of thinking. Daniel Kahneman describes them in his book Thinking, Fast and Slow.
“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control”
“System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.”
System 1 is the reflexive action that we routinely use in daily life while system 2 is our critical thinking. We must use both. The issue is most individuals rely too heavily on system 1 and don’t set aside the time to mobilize system 2. Just because we apply information learned does not mean we apply it correctly. As Munger said:
“It’s not hard to learn. What is hard is to get so you use it routinely almost every day of your life.”
The difference between bias and heuristics
“The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcements from system 2” — Daniel Kahneman
Heuristics are mental shortcuts (“rule of thumb”) and decision-making strategies. Cognitive biases are systematic errors in thinking, commonly resulting from simplifying information processing. The difference is critical.
Heuristics are the “shortcuts” that people use to reduce task complexity in judgment and choice, and biases are the resulting gaps between normative behavior and the heuristically determined behavior. (Kahneman et al., 1982)
We can never fully eliminate biases. The nature of a heuristic is it requires system 1 thinking. This type of thinking is prone to errors and bias results. What we can do, however, is remain vigilant to bias and mobilize system 2 when appropriate.
This is where reflective practice comes into play. When you are driving home from work, reflect on events of the day. What went well? What could have been improved? What biases may you have fallen victim to? This is an uncomfortable exercise.
Our brains crave congruency and biases help the world make sense. It is far easier to fall back on system 1 thinking and let our bias wash over us. It is far more difficult to reflect, recognize and admit fault, and course correct.
Scientific Curiosity
Photo by Gary Butterfield on Unsplash
“Science is the belief in the ignorance of experts” — Richard Feynman
If you study and tackle life the way Charlie Munger and Richard Feynman have, you can find success in nearly any endeavor. While Munger was known for his breadth of mental models, voracious reading, and unwavering patience, Feynman was known for his extreme curiosity and propensity to doubt everything.
Feynman was the champion of the layman and challenged scientists to abolish misinformation. He never settled for having all the answers and lived by the mantra of “why not.” Do we use the same approach in our lives?
Schooling teaches us everything has an answer. To pass a class, we have to correctly answer exams or write a paper well enough to receive a passing grade. We either succeed or we do not. Life is not as simple as pass/fail.
Our inability to live in the gray, our propensity to cling to what we “know”, and our frequent submission to biases lead us to shun doubt and uncertainty. Unfortunately, this is a surefire way to impede progress.
Doubt and uncertainty
“It is our capacity to doubt that will determine the future of civilization.” — Richard Feynman
As stated at the beginning of the article, schools emphasize accumulating facts. What happens when those “facts” are no longer true, or at least no longer best practice in a given field?
As a physical therapist, I have to update my clinical models daily to ensure I am providing my patients with the best care possible. This practice is not exclusive to healthcare.
In any career, progress is made and new processes are developed. Unfortunately, they are not always readily adopted. Our biases, particularly confirmation bias and theory-induced blindness, cause us to resist updating our mental models and the knowledge we use on a daily basis. If we are to succeed in our careers and life, we must embrace doubt and uncertainty.
Doubt and uncertainty force us to constantly question if we are using best practice. They fuel our desire to read, learn, and gather new experiences. They lead to the adoption and development of new mental models.
If we want to make the most of our schooling, we need to have the foundation in place to best apply and update our knowledge. This is done through the development and refinement of mental models, frequent reflection with system 2 thinking, remaining scientifically curious, and embracing doubt and uncertainty. | https://medium.com/age-of-awareness/what-is-missing-from-schooling-446af7e7af49 | ['Zachary Walston'] | 2020-11-18 14:57:52.376000+00:00 | ['Psychology', 'Professional Development', 'Growth', 'Education', 'Personal Growth'] |
#TimetoTalk Review | #TimeToTalk was a brilliant success Alhamdulillaah. We had videos from many people, such as Dr Faraz, Dalia Mogahed,Naz, Ameen and many, many more. View these short clips below!
Further, we asked for people to send in their messages so we could post them on our social media platforms. Maa shaa Allaah we had some brilliant entries, some of which are also below
Alhamdulillaah. More of these entries can be viewed on our Facebook and Twitter pages.
We also had people opening up about their suffering. Read a short story from a very brave sister here Maa shaa Allaah.
Even after the 5th of Feb, entries came flooding in, showing that Muslims are ready to talk. We are ready to tackle this stigma. We are ready for this battle.
We pray that this is the beginning of many more people opening up. This is the beginning of Muslims breaking the silence. This is the beginning to the end of this stigma!
We’d like to send our greatest gratitude to all those who got involved, who shared, who spread the word and who helped sufferers feel safe to speak out. | https://medium.com/inspirited-minds/timetotalk-review-904aa3935319 | ['Inspirited Minds'] | 2015-12-06 21:44:45.823000+00:00 | ['Mental Illness', 'Islam', 'Mental Health'] |
How To Increase Productivity, Reach Your Goals, And Become A Literal God In Just 5 Easy Steps
Photo by Iker Urteaga on Unsplash
So you think you have what it takes to succeed? Can you put in the work and really grow into a better person? Are you ready to embrace the Great Lord of Darkness?
If you want to DOMINATE your goals, listen up.
I have traveled everywhere that is and isn’t on a map.
I have met the wisest religious leaders everywhere I went.
I am a top writer in Quora and Yahoo Answers.
And now, I will share with you my accumulated knowledge. My secret to literally win at life and become a God amongst men. And believe me, becoming successful and dedicating your life to the High Priest of the Great Old Ones is no easy task. But it is quite simple, you just need to follow these five steps:
1. Get off your couch
A journey of a thousand miles begins with a single step, says the ancient Chinese proverb. Get off your ass you lazy bum, parents still used to say in the 1960s. Both are wise words that echo the same concept. The first step is usually the hardest, but it’s also the most important one. If you want to reach your goals, you first need to get out there!
2. Set SMART goals
Specific, Measurable, Achievable, Realistic, and Time-bound. In short, SMART! This method has been proven to be the most effective way of not just setting goals but also sticking with them and eventually accomplishing them. Whatever your end goal might be, remember to break it down to specific steps that you can realistically achieve, and you can measure the results in a time-specific manner. SMART!!!
3. Accept the all-mighty Cthulhu as your supreme overlord
I first met the cosmic entity mortals know as Cthulhu in a deserted island in the South Pacific sea. Just a glimpse of the awe-inspiring tentacled God, shook me to my core. I felt as if for the first time in my life I had finally seen the light of day, in the scaly wings of the Lord of Darkness. And now, you can too! Accept the Great Cthulhu now!!!
4. Relinquish your mortal soul to the Dark Lord
You want to lose weight? You want to advance in your career? You want to be famous, and wealthy, and happy beyond belief?!? Give your eternal soul to the master of darkness and transcend the limitations of your pathetic life! Yield to the power of the Old One and become a god on earth yourself!
SUBMIT YOURSELF TO THE SLEEPER OF R’LYEH!
ACCEPT YOUR DESTINY AS A SUBORDINATE OF THE GREAT DREAMER!!!
PH’NGLUI MGLW’NAFH CTHULHU R’LYEH WGAH’NAGL FHTAGN!!!!!
5. Drink water
Hydration is important. According to the Mayo Clinic one should drink about 11 to 15 cups of water a day. This varies based on where you live, how active you are, and general health of course. Just remember that keeping a healthy body is paramount to be a good vessel to the Great Cthulhu!
Now, let us pray:
Ctu-hu-lah-ha — Ctu-hu-lah-ha — Ctu-hu-lah-ha | https://medium.com/slackjaw/how-to-increase-productivity-reach-your-goals-and-become-a-literal-god-in-just-5-easy-steps-6b4f6cae409e | ['Sebastian Sd'] | 2020-07-22 14:25:31.635000+00:00 | ['Lovecraft', 'Satire', 'Productivity', 'Self Improvement', 'Humor'] |
Yes you do need to calculate your capacity | You’re an agency. You’ve got some clients, a good team, good prospects for the future, and a growing client base who are your biggest fans. Great!
You have work that comes in and work that goes out, and you are absolutely killing yourself to make sure that you’re giving your company your best shot. Just a couple years ago you never imagined having a team of 30 people or having to think about things like, “employee retention”, “churn”, and “team retreats”. You’ve even found yourself late at night thinking about how you should start laying down some structure and processes to scale, that is, after all, what “real companies” do, right?
So far, your team has been giving it their all and has been content to work at your company for the opportunities that it affords in experience and knowledge, and let’s face it — fun. Late nights, early mornings, weekend travels, it’s all worth it, right?
You might have lost a few clients here and there, or blown the odd pitch, but you still win more than you lose, and you’re still hiring people (albeit in junior positions) and you still think of yourself as a start-up. The chaos and dis-organization running rampant throughout the company are just mere symptoms of growing.
Or, at least you tell yourself that.
Deep down you know that you need to really start thinking about laying down some proper structure, and thinking of career progression paths, trainings, improving your processes (even more fundamentally, figuring out what they ARE exactly), and putting yourself in the position to scale.
But where to start?
How about starting with your capacity.
Or better yet, how about making sure that every client you work on is, in the words of L’Oreal, worth it.
How can you do that?
Do you know how much it costs to run your business? I don’t mean, how much money do YOU take for your salary (which is most likely much less than you would like to be taking), but the amount of money you’re paying in FIXED and VARIABLE costs. If you don’t have these numbers somewhere, open up a Google Sheet and list down ALL of the costs associated with running your business.
Rent
Accountant
Payroll specialist
Electricity
Gas
Software licenses
Hardware
Salaries
Taxes (my favorite in Italy)
EVERY SINGLE THING I’M NOT KIDDING
2. After you have that down, figure out an average on a monthly basis. We’re eventually going to get down to an hour, but for now, the month will do. For my fellow math illiterates, this means divide by 12. No judgment from me guys, I had to take Intro to Math 11 to graduate.
3. Take that average and divide it by 22 (the avg. number of workdays per month) and this will tell you your BARE BONES 0% margin cost of running your company each day. (Apologies in advance for the average of averages — economics and stats majors, you know what I’m talking about.)
4. Take your bare bones 0% margin cost and divide it by 8 (the avg. number of work hours per day) and this will tell you how much money you must make each HOUR with a 0% margin to keep things afloat.
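If you’d rather let a few lines of Python do the arithmetic, here’s a rough sketch. The cost figure is completely made up, so plug in your own:

monthly_costs = 30000            # example only: your average total monthly cost
workdays_per_month = 22
hours_per_day = 8

breakeven_day = monthly_costs / workdays_per_month
breakeven_hour = breakeven_day / hours_per_day
print(round(breakeven_hour, 2))  # the 0% margin hourly rate you need just to keep the lights on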
I am hoping that this number doesn’t surprise you too much. Actually, I kinda hope you look at that number and are surprised a little. And I’ll tell you why. I bet you haven’t thought about this before. I’ll bet you’ve been so busy thinking about your next pitch and your Next Big Thing that you may have left these little details on your “boring things to delegate to someone else, later on….definitely low priority” list. Am I right?
“This is all fine and dandy”, you’re thinking,”but what does this have to do with my capacity?”.
Glad you asked.
Now you know your absolute minimum hourly cost with 0% profit margin number you can start having fun.
Do you know how much time you’ve been spending on your clients? No? OK, let’s take a step back.
Do you know how much money your clients have been paying you on a month by month basis? Cool.
Import all your finance data (really, you just need the Client Name, the date, and the amount of money paid) into Google Sheets or Excel and then divide that amount (less taxes) by your MINIMUM HOURLY COST. This will tell you how many hours you can spend on the client to break even with 0% profit.
Now, if you DO know how much time you’ve been spending on clients, compare the actual amount of time spent with the hours that you SHOULD be spending and see if there are any surprises. I’m willing to bet with a high degree of probability, that there are. I’m willing to bet that you have clients where you are spending an insane amount of time, for a mere pittance, and that there are cases where you’re not spending much time at all but turning a big profit. Keep in mind, that because we’re talking about a 0% profit margin here that any time you are overspending on a client is costing you money. Each and every minute and second. On the other hand, those clients where you’re spending less time than you could, is where your profit is. If you really wanted to have an eye-opening moment, add up that whole column and see how much profit you are actually making each month.
In case you’re wondering when I’m going to show you the money, I need you to do one more thing. It’s easy I swear.
If you have your billing hour with breakeven costs, you can now try adding in a profit margin. This varies between how much you want to make, what the market or product will withstand, and maybe even a little “finger in the air” analysis.
Start with adding a 50% margin onto everything.
If your billing hour is $20 adding a 50% margin makes it $30.
Re-run the numbers on your sheet and see if a profit margin of 50% makes a difference or not. If it does in some cases, and doesn’t in others, have a look at the clients where a 50% profit margin makes no difference and then figure out why you’re spending too much time on them. Do you have the wrong person managing the account who needs more time than someone else? Is the client exceptionally difficult to work with for some reason? Was the amount of time it would take to work on the client grossly underestimated at the contract phase?
Once you know the type of problem you can start addressing it.
Now that you know how much you should be charging to make a profit you can roughly estimate (oxymoron?) how much time you should be spending on them from the outset, and based on this, you can also see what your team’s capacity is capable of.
People have the same 8 hours per day, 40 hours per work week to utilize against client work.
You can find your total team capacity by taking the number of people on your team (do not include people who don’t contribute directly onto client projects like Finance, or HR, etc) and times that number by 8.
So if you have 5 people on your team, you times 5*8 and have 40 hours per DAY, which is 200 hours per WEEK, 800 hours per MONTH to work on client projects and still turn a 50% profit.
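And if the earlier sketch was your style, the capacity math is just as short (the team size is a placeholder, and breakeven_hour comes from that earlier snippet):

team_size = 5                               # people who work directly on client projects
hours_per_day = 8

daily_capacity = team_size * hours_per_day  # 40 hours per day
weekly_capacity = daily_capacity * 5        # 200 hours per week
monthly_capacity = weekly_capacity * 4      # roughly 800 hours per month

billable_rate = breakeven_hour * 1.5        # the 50% margin rate from before
print(monthly_capacity * billable_rate)     # a rough monthly revenue ceiling at full utilization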
Look at one of your months and calculate the amount of hours spent and see if it’s GREATER THAN or LESS THAN the number of hours your team has in available capacity per month.
If it’s GREATER THAN — then you need to figure out where you can reduce time spent on non-profitable clients, how to automate time consuming tasks, or whether you need to invest in more training for your team or specific members.
If it’s LESS THAN — then you know that 1) you are in a good position to take on extra clients without hiring additional team members, and 2) you need to ensure that the “extra time” that your team has is going to good use.
Keep in mind that when we’re talking about the amount of time your team has we’re not talking about YOUR time, or your Senior staff members whose time, let’s face it, might be slightly more valuable than your last intern who’s still in training, we’re talking about the mythical man hour (oh yes, I DID go there!)
This is all just to give you a better handle on your team capacity and, as a side effect, your profitability. Some side effect, huh?
Knowing your team’s capacity, and billing hour, is an absolutely fundamental piece of information that will help you in every single effort that you make from Sales to Onboarding to Execution. I highly suggest you move this task up on your list of things to do. It’s easier than you think and has the potential to have a greater impact than your Next Big Thing, I promise. | https://medium.com/swlh/yes-you-do-need-to-calculate-your-capacity-b9496de291d0 | ['Hayley Richardson'] | 2020-03-01 10:51:44.394000+00:00 | ['Startup Lessons', 'Operations', 'Business Strategy', 'Startup', 'Operations Management'] |
36 JavaScript Concepts You Need to Master to Become an Expert | 36 JavaScript Concepts You Need to Master to Become an Expert
Mastery takes time, but knowing what to master makes it easier
Photo by Angela Compagnone on Unsplash
You’ll hear many people complaining that JavaScript is weird and sometimes worthless. People complain like this because they don’t understand how things work under the hood. Although I do agree that some scenarios in JavaScript are handled differently, that does not make it weird but rather beautiful in its own way.
To start loving a programming language, you should start by looking deep within and mastering its concepts one by one.
Here is a list of 36 JavaScript concepts that you need to master to become an all-round JavaScript expert.
Although this piece is one of my longest, I assure you that it is worthy of your time. Kudos to Stephen and Leonardo for the resources.
The resources section contains a link to the GitHub repo by Leonardo which contains learning material for all these concepts explained below. Please take your time in understanding each of the below-mentioned concepts. | https://medium.com/better-programming/36-javascript-concepts-you-need-to-master-to-become-an-expert-c6630ac41bf4 | ['Mahdhi Rezvi'] | 2020-07-28 19:42:32.641000+00:00 | ['Technology', 'Programming', 'Nodejs', 'React', 'JavaScript'] |
Gamification is dangerous, and here’s why. | Sometimes apps are boring. You open them, and they just seem another form you have to validate and send to an unknown server. As a designer, you always try to come up with new exciting ideas, hoping to innovate an old and obsolete mechanism.
Now, if you have tried coming up with these ideas, I’m sure you have heard of or know about gamification. What’s the definition of gamification? Gamification is the application of game design principles to a different context, in which you try to transform a user task into a sort of game, with new and different interactions and consequences:
an example could be a fitness jogging app where you have to create fun shapes with your GPS running path, and the most accurate one gets more points in a social leaderboard.
This example is intriguing, but I’ve noticed a very bad trend over the last few years, where gamification has been used the wrong way, just for the sake of using it.
Let’s see a list of things that explain to us why gamification is dangerous in the wrong hands.
4) Useless “creative” interactions slow down task completion.
Useless interactions appear in both apps and websites. One example is holding down buttons instead of clicking: yes, it’s good to show off your development skills, but it just makes your users frustrated. So be extremely careful when using it: even if you’re going to make a cool trendy website, please make us surf it fast.
No thanks.
Another example I found is inside my mobile company’s app, where I had to shake my phone multiple times to fill a sort of bottle in order to get more internet traffic. This is just a useless and embarrassing example of gamification: I often had to use it in public, and I felt a bit dumb shaking my phone in an ambiguous way, as if it wasn’t working. A simple finger press was enough.
This brings us to point number 3.
3) You need to understand when something fun is really needed.
My first job was to design an institutional app, and my colleagues wanted to gamify the form data insertion process. In fact, the app was entirely based on an extremely tedious and long form. Their idea was to gamify each data entry to increase engagement, and guess what: disaster.
It didn’t work for two main reasons:
as we said before: slowing task completion. To fill out the entire form, 20 full minutes were needed because of useless page changes.
Inserting personal data isn’t fun. But moving fancier sliders isn’t fun either.
If you have a long form, consider stacking many questions on each page. And use a progress bar only for small sections.
In this case study, gamification was the wrong idea: reducing the interaction needed (by putting more fields on each page) and splitting the form into smaller parts was perceived a lot better by our testers. We also reduced the number of redundant questions and brought completion time down to around 7 minutes, which is still a lot but at least bearable.
In the end, we kept the concept of gamification, but applied it in another context: which leads us to point number 2.
2) Keeping gamification as a fancy outline often works.
Rewards. People like rewards, and since our form was incredibly boring, giving them a reward could make them enjoy the full experience. Two of the most commonly applied persuasive techniques in mobile apps are recognition and social comparison (prizes, badges, and leaderboards for example), and they don’t interfere with the main tasks: these strategies give the user a sense of gamification without slowing them down or forcing them into unneeded actions.
But beware: not everyone wants to be exposed, and being compared to others (especially if not performing well), could lead to frustration and abandoning the application. If you want to ensure that no frustration is induced, just keep positive recognition and bring social comparison away. | https://uxplanet.org/gamification-is-dangerous-and-heres-why-d0a3622e0951 | ['Lorenzo Doremi'] | 2020-12-17 23:21:18.811000+00:00 | ['Design', 'UX', 'Visual Design', 'UX Design', 'Design Thinking'] |
Logo Casestudy: Cell Stress & Immunity (CSI) | Breakdown of how CSI logo was brought to life
In early 2020, I was roped in to design the Laboratory of Cell Stress & Immunity (CSI) logo, a part of KU Leuven (Belgium), Department of Cellular and Molecular Medicine. I had only ever designed logos for tech startups, and this was the first time I was about to jump into designing something well out of my comfort zone. But having a fair bit of experience and interest in logo design, I decided to trust the process and go one step at a time while working with the client to unpack the expected outcome.
It is always important to remember that the logo is a symbolic representation of the company/brand, and it often requires inspiration, art, research, analysis, hard work, and rigorous testing to design it.
Tools
Pencil sketching, Adobe Illustrator
Understanding the background 🗒️
I wanted to do my own research around competitors in this space and color palettes that are not favored in this domain. The logo design is for a lab that focuses on cancer cell death's immunology, and for starters, I know nothing about this field and its technicality. So I send across a questionnaire to the client to help me steer in the right direction and start building a mood board of shapes, color palettes, and inspirations. Here are the questions that I chose to ask — | https://uxplanet.org/logo-casestudy-cell-stress-immunity-csi-5dbe6ddbcff6 | ['Dhananjay Garg'] | 2020-12-27 22:17:53.812000+00:00 | ['Logo', 'Logo Design', 'Design', 'Illustration', 'Graphic Design'] |
Bring Machine Learning to the Browser With TensorFlow.js — Part I | Edited 2019 Mar 11 to include changes introduced in TensorFlow.js 1.0. Additional information about some of these TensorFlow.js 1.0 updates can be found here.
TensorFlow.js brings machine learning and its possibilities to JavaScript. It is an open source library built to create, train, and run machine learning models in the browser (and Node.js).
Training and building complex models can take a considerable amount of resources and time. Some models require massive amounts of data to provide acceptable accuracy. And, if computationally intensive, may require hours or days of training to complete. Thus, you may not find the browser to be the ideal environment for building such models.
A more appealing use case is importing and running existing models. You train or get models trained in powerful, specialized environments then you import and run the models in the browser for impressive user experiences.
Converting the model
Before you can use a pre-trained model in TensorFlow.js, the model needs to be in a web friendly format. For this, TensorFlow.js provides the tensorflowjs_converter tool. The tool converts TensorFlow and Keras models to the required web friendly format. The converter is available after you install the tensorflowjs Python package.
install tensorflowjs using pip
The tensorflowjs_converter expects the model and the output directory as inputs. You can also pass optional parameters to further customize the conversion process.
running tensorflowjs_converter
The output of tensorflowjs_converter is a set of files:
model.json — the dataflow graph
— the dataflow graph A group of binary weight files called shards. Each shard file is small in size for easier browser caching. And the number of shards depends on the initial model.
tensorflowjs_converter 1.0 output files
NOTE: If using tensorflowjs_converter version before 1.0, the output produced includes the graph ( tensorflowjs_model.pb ), weights manifest ( weights_manifest.json ), and the binary shards files.
Run model run
Once converted, the model is ready to load into TensorFlow.js for predictions.
Using Tensorflow.js version 0.x.x:
loading a model with TensorFlow.js 0.15.1
Using TensorFlow.js version 1.x.x:
loading a model with TensorFlow.js 1.0.0
The imported model is the same as models trained and created with TensorFlow.js.
Convert all models?
You may find it tempting to grab any and all models, convert them to the web friendly format, and run them in the browser. But this is not always possible or recommended. There are several factors for you to keep in mind.
The tensorflowjs_converter command can only convert Keras and TensorFlow models. Some supported model formats include SavedModel, Frozen Model, and HDF5.
TensorFlow.js does not support all TensorFlow operations. It currently has a limited set of supported operations. As a result, the converter will fail if the model contains operations not supported.
Thinking and treating the model as a black box is not always enough. Because you can get the model converted and produce a web friendly model does not mean all is well.
Depending on a model’s size or architecture, its performance could be less than desirable. Further optimization of the model is often required. In most cases, you will have to pre-process the input(s) to the model, as well as, process the model output(s). So, needing some understanding or inner workings of the model is almost a given.
Getting to know your model
Presumably you have a model available to you. If not, resources exist with an ever growing collection of pre-trained models. A couple of them include:
TensorFlow Models —a set of official and research models implemented in TensorFlow
Model Asset Exchange —a set of deep learning models covering different frameworks
These resources provide the model for you to download. They also can include information about the model, useful assets, and links to learn more.
You can review a model with tools such as TensorBoard. It’s graph visualization can help you better understand the model.
Another option is Netron, a visualizer for deep learning and machine learning models. It provides an overview of the graph and you can inspect the model’s operations.
visualizing a model with Netron
To be continued…
Stay tuned for the follow up to this article to learn how to pull this all together. You will step through this process in greater detail with an actual model and you will take a pre-trained model into web friendly format and end up with a web application. | https://medium.com/ibm-watson-data-lab/bring-machine-learning-to-the-browser-with-tensorflow-js-part-i-16924457291c | [] | 2019-03-11 20:43:18.126000+00:00 | ['Machine Learning', 'JavaScript', 'TensorFlow', 'Python', 'Open Source'] |
9 tips to quickly improve your UI designs | Originally published at marcandrew.me
Creating beautiful, usable, and efficient UIs takes time, with many design revisions along the way. Making those constant tweaks to produce something that your clients, users, and yourself are truly happy with. I know. I’ve been there many times before myself.
But what I’ve discovered over the years is that by making some simple visual tweaks you can quickly improve the visuals you’re trying to create.
In this article I’ve put together a small, and easy to put into practice, selection of tips that can, with little effort, not only help improve your designs today, but hopefully give you some handy pointers for when you’re starting your next project. | https://uxdesign.cc/9-simple-tips-to-improve-your-ui-designs-fast-377c5113ac82 | ['Marc Andrew'] | 2020-08-28 09:10:35.292000+00:00 | ['Design', 'UI', 'UI Design', 'Web Development', 'Visual Design'] |
Business Intelligence Visualizations with Python — Part 2 | 1. Additional Plot Types
Even though these plot types are included in the second part of this series of Business Intelligence Visualizations with Python, they are not less important, as they complement the already-introduced plots. I believe you’ll find them even more interesting than basic plots!
To begin with this series, we must install required libraries:
# Imports
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
A. Horizontal Bar Plots with error bars:
A Bar plot is a chart that presents data using rectangular bars with heights and lengths proportional to the values they represent. The basic command utilized for bar charts is plt.bar(x_values, y_values).
The additional feature involved in this plot are Error Bars, which are graphical representations of the variability of data. They’re commonly used to indicate the estimated error in a desired measure.
This time, we’ll be plotting a horizontal bar plot with the following input data:
# Input data for error bars and labels
mean_values = [1, 2, 3]
std_dev = [0.2, 0.3, 0.4]
bar_labels = ['Bar 1', 'Bar 2', 'Bar 3']
y_values = [0,1,2]
Now let’s plot the bars with the plt.barh command:
# Create bar plots
plt.yticks(y_values, bar_labels, fontsize=10)
plt.barh(y_values, mean_values, xerr=std_dev,align='center', alpha=0.5, color='red') # Labels and plotting
plt.title('Horizontal Bar plot with error', fontsize=13)
plt.xlim([0, 3.5])
plt.grid()
plt.show()
Sample plot — Image by Author
A variation of this plot can be made with the insertion of labels or texts to the bars. We’ll do this with the following input data:
# Input data for error bars and labels
data = range(200, 225, 5)
bar_labels = ['Bar 1', 'Bar 2', 'Bar 3']
y_values = [0,1,2,3,4]
Proceed with the plots preparation:
# Create bar plots
fig = plt.figure(figsize=(12,8))
plt.yticks(y_values, bar_labels, fontsize=15)
bars = plt.barh(y_values, data,align='center', alpha=0.5, color='orange', edgecolor='red') # Labels and plotting
for b,d in zip(bars, data):
plt.text(b.get_width() + b.get_width()*0.08, b.get_y() + b.get_height()/2,'{0:.2%}'.format(d/min(data)),ha='center', va='bottom', fontsize=12)
plt.title('Horizontal bar plot with labels', fontsize=15)
plt.ylim([-1,len(data)+0.5])
plt.xlim((125,240))
plt.vlines(min(data), -1, len(data)+0.5, linestyles='dashed')
plt.show()
Sample plot — Image by Author
B. Back-to-back Bar Plots:
We continue with the family of bar plots, in this case with a variation that compares two sets of data horizontally. The commands to create this plot are the same as with the horizontal bar plot, but negating values for one of the sets of data.
# Input data for both sets of data utilizing numpy arrays to negate one set:
X1 = np.array([1, 2, 3])
X2 = np.array([3, 2, 1])
y_values = [0,1,2]
bar_labels = ['Bar 1', 'Bar 2', 'Bar 3']
Now let’s plot the bars with the plt.barh command and the negation feature:
# Plot bars
fig = plt.figure(figsize=(12,8))
plt.yticks(y_values, bar_labels, fontsize=13)
plt.barh(y_values, X1,align='center', alpha=0.5, color='blue')
plt.barh(y_values, -X2,align='center', alpha=0.5, color='purple') plt.title('Back-to-back Bar Plot', fontsize=13)
plt.ylim([-1,len(X1)+0.1])
plt.grid()
plt.show()
Sample plot - Image by author
C. Bar Plots with height labels:
This chart is equivalent to the previous shown, with the exception that it has vertical orientation and that I’ve added height labels to have a clearer visualization of such a metric. This can be done with the command ax.text.
In addition, I introduced the method autofmt_xdate included in Matplotlib to automate the rotation of labels. Take a look at the code:
# Input information:
n_bars = [0,1,2,3]
values = [3000, 5000, 12000, 20000]
labels = ['Group 1', 'Group 2','Group 3', 'Group 4'] # Create figure and plots
fig, ax = plt.subplots(figsize=(12,8))
ax.set_facecolor('xkcd:gray')
fig.patch.set_facecolor('xkcd:gray')
fig.autofmt_xdate()
bars = plt.bar(idx, values, align='center', color='peru', edgecolor='steelblue')
plt.xticks(idx, labels, fontsize=13) # Add text labels to the top of the bars
def rotate_label(bars):
for bar in bars:
height = bar.get_height()
ax.text(bar.get_x() + bar.get_width()/2., 1.05 * height,'%d' % int(height),ha='center', va='bottom', fontsize=13) # Labels and plotting
rotate_label(bars)
plt.ylim([0, 25000])
plt.title('Bar plot with Height Labels', fontsize=14)
plt.tight_layout()
plt.show()
Sample plot — Image by Author
D. Bar Plots with color gradients:
Let’s add some color to the equation. In the following chart, I introduce the built-in module called colormap, which is utilized to implement intuitive color schemes for the plotted parameters. First, I’ll proceed with the imports:
import matplotlib.colors as col
import matplotlib.cm as cm
Now I’ll insert sample data to plot the chart. As you can see, colormap is implemented through the ScalarMappable class which applies data normalization before returning RGBA colors from the given colormap.
To clarify the previous statement, RGBA colors are a form of digital color representation, together with HEX and HSL. HEX is the most utilized and re-known, for being a simple representation of 6-digit numbers that can create Red, Green, and Blue. An example of a Hex color representation is #123456, 12 is Red, 34 is Green and 56 is Blue. On the other hand, RGBA colors add a new factor, the alpha, which is the opacity or transparency that follows the same percentage scheme: 0% represents absolute transparency and 100% represents absolute opacity which is the way we traditionally see colors. More details in this website.
In this link to Matplotlib’s documentation you’ll find further details to the different colormaps that can be chosen. Take a look at the code to generate the plot in order to have a clearer view:
# Sample values
means = range(10,18)
x_values = range(0,8) # Create Colormap
cmap1 = cm.ScalarMappable(col.Normalize(min(means), max(means), cm.spring))
cmap2 = cm.ScalarMappable(col.Normalize(0, 20, cm.spring)) # Plot bars
# Subplot 1
fig, ax = plt.subplots(figsize=(12,8))
plt.subplot(121)
plt.bar(x_values, means, align='center', alpha=0.5, color=cmap1.to_rgba(means))
plt.ylim(0, max(means) * 1.1) # Subplot 2
plt.subplot(122)
plt.bar(x_values, means, align='center', alpha=0.5, color=cmap2.to_rgba(means))
plt.ylim(0, max(means) * 1.1)
plt.show()
Sample plot — Image by Author
E. Bar Plots with pattern fill:
Now we’re going to add some styling to our data presentation using bar plots and pattern fills. This can be done utilizing the set_hatch command or including as an argument in the plt.bar configuration the hatch command.
# Input data:
patterns = ('-', '+', 'x', '\\', '*', 'o', 'O', '.')
mean_values = range(1, len(patterns)+1)
y_values = [0,1,2,3,4,5,6,7] # Create figure and bars
fig, ax = plt.subplots(figsize=(12,8))
bars = plt.bar(y_values,mean_values,align='center',color='salmon')
for bar, pattern in zip(bars, patterns):
bar.set_hatch(pattern) # Labeling and plotting
plt.xticks(y_values, patterns, fontsize=13)
plt.title('Bar plot with patterns')
plt.show()
Sample plot — Image by Author
F. Simple Heatmap:
A Heatmap is a graphical representation of data in which values are depicted by color. They make it easy to visualize complex data and understand it at a glance. The variation in color may be by hue or intensity, giving obvious visual cues to the reader about how the represented values are distributed.
In this case, the variation in color represents the number of observations clustered in a particular range of values, which is implemented with the colorbar feature of Matplotlib. Also, the plot is made with a 2-dimensional histogram, created with the command plt.hist2d.
In the code below, I create two normally-distributed variables X and Y with a mean of 0 and 5 respectively.
When you plot the 2D hist, you see a 2D histogram. Think about it like looking at a histogram from the “top”. In addition to that, to have a clearer understanding of the color distribution, consider that colors centered at the 2D histogram are yellowish and correspond to the highest values of the colorbar, which is reasonable since X values should peak at 0 and Y values should peak at 5.
# Input a sample of normally distributed observations centered at x=0 and y=5
x = np.random.randn(100000)
y = np.random.randn(100000) + 5 # Create figure, 2D histogram and labels
plt.figure(figsize=(10,8))
plt.hist2d(x, y, bins=40)
plt.xlabel('X values - Centered at 0', fontsize=13)
plt.ylabel('Y values - Centered at 5', fontsize=13)
cbar = plt.colorbar()
cbar.ax.set_ylabel('Number of observations', fontsize=13)
plt.show()
Sample plot — Image by Author
G. Shadowed Pie chart:
Pie charts are used to display elements of a data set as proportions of a whole. In addition to the traditional plt.pie command, we’ll utilize the shadow=True boolean feature to bring some styling to the sliced of the pie chart.
# Create figure and plot the chart:
plt.figure(figsize=(10,8))
plt.pie((10,5),labels=('Blue','Orange'),shadow=True,colors=('steelblue', 'orange'),
explode=(0,0.15),
startangle=90,
autopct='%1.1f%%'
)
plt.legend(fancybox=True, fontsize=13)
plt.axis('equal')
plt.title('Shadowed Pie Chart',fontsize=15)
plt.tight_layout()
plt.show() | https://towardsdatascience.com/business-intelligence-visualizations-with-python-part-2-92f8a8463026 | ['Julian Herrera'] | 2020-10-09 00:05:34.551000+00:00 | ['Data Analysis', 'Python', 'Data Science', 'Programming', 'Data Visualization'] |
Basics of Quantum Mechanics for Non-scientists | Classical / “Newtonian” Physics vs Quantum Physics
Classical Physics
You probably have a recollection of classical or also called “Newtonian” physics. It was discovered and outlined by Isaac Newton in his paper published in 1687. It basically tells us that an object, say a tennis ball, has a position, a velocity, and the position changes over time. It mathematically proves that if no force acts on the object, it will continue to move in a straight line, with such forces being, for example, gravity, wind or the other person catching the ball.
Figure 1: Classical representation [1]
In classical physics, the state of an object is the combination of its position and velocity. And if you know what forces are acting on it, you can determine the trajectory and predict where it goes next.
Quantum Physics
When you start studying the smallest particles, such as electrons and protons — things are a little bit more abstract.
You can predict the position, velocity, and other properties like spin (Spin: I’ll discuss in more detail), but it’s not with 100% accuracy, it’s not unquestionably the actual measurement. The state of an electron is rather a set of probabilities.
Imagine an electron orbiting an atom. It creates a sort of a cloud where some parts are denser. Some are thinner. Where it’s denser, it has a higher probability of an electron being there.
The ‘cloud’ oscillates like a wave, and that’s why it is said that an electron has a wave function. It’s not that the electron is a proper wave. There is no amplitude like a sound wave, but rather the amplitude is calculated. The wave here is more used as a metaphor.
Figure 2: Electron cloud. By author, inspired by [1]
Based on the density of each point of the ‘cloud’, it is assigned a number. This number is the amplitude of the “wave”. The probability of the electron being in a specific position is the square root of the amplitude. Before the location is observed, it is said that the electron is in a superposition of all possible outcomes.
The wave function of an electron is equivalent to the state in classical physics: position and velocity. And like there is an equation in “Newtonian” mechanics to calculate the motion of an object, there is also an equation to calculate the motion of a wave function — The Schrödinger’s equation: High energy parts evolve rapidly, low energy parts evolve slowly. [1]
Figure 3: Quantum representation [1]
Is it a particle? Is it a wave? No, it’s (super) an electron! — The double-slit experiment
As I explained, an electron oscillates like a wave. It has a wave function. You can’t exactly predict where it’s going to be, as you do with macroscopic objects. All you know is the probability of its location.
However, when you decide to observe the electron, the “wave” disappears/collapses, and it looks more as a particle — you see like a dot, a point in space. So it has this dual behaviour — when not observed, it’s a ‘wave’ when observed, it’s a particle — The double-slit experiment illustrates exactly that. It was performed in the 1970s, although it had already been discussed long before. It compares four different scenarios:
Classical particles going through single-slit vs double-slit Waves, like water waves, going through single-slit vs double-slit Electrons going through single-slit vs double-slit Electron going through double-slit, but with an ‘observer’ placed in the middle of the path to detect and prove that an electron went through the double-slit. Just in case.
1. Classical particles going through single or double slits have practically the same behaviour. In the screen on the other side, you will see marks close to the slits. Maybe some variations as the objects bump to each other or on the sides of the slit.
2. Waves behave just like waves. When it’s one slit, the marks will be centred around right behind the slit. As it’s a wave, the marks on the screen are bright spots equivalent to where the wave has higher amplitudes.
When there are two slits, on the other side of the screen it forms an interference pattern — Waves can oscillate up and down, when two waves oscillate in opposite directions they cancel each other out, when in the same direction it intensifies the amplitude. The result in the screen is greater amplitudes / brighter spots in the centre close to the slits and alternating dark/bright fading to both sides.
3. Electrons behave just like scenario 2 / waves, the interference pattern is noted when double-slits are used. However, first: Electrons are not really a wave; it doesn’t have an amplitude like waves do. The amplitude is calculated based on the denser points of the electron cloud. Second: The electron leaves up a mark in the screen just like a classical particle. The marks / bright spots are not shown based on the height of the amplitude, as it was done with water waves.
4. If scenario 3 wasn’t fun enough — when the detector/observer is placed in the middle, the electrons behave just like classical particles, the wave collapses. There is NO interference pattern at all on the other side, on the screen. | https://medium.com/predict/basics-of-quantum-mechanics-for-non-scientists-299e38d428bf | ['Vinicius Monteiro'] | 2020-12-09 01:16:51.513000+00:00 | ['Quantum Mechanics', 'Science', 'Quantum Physics', 'Quantum Computer', 'Physics'] |
How to Stand Out When Asking for a Job | We work with a lot of people at the beginning of their careers at 1517. Many of those people are founders. Others go to work for the founders we know and our portfolio companies.
Every now and then, we get questions about how to get a job working in tech investing and venture capital. Instead of just ignoring the person or telling them that we’re not currently hiring, we want to give constructive feedback on their job search.
We’ve generally found that very few students know how to effectively ask for a job that other people want. This is a skill best learned through trial-and-error and candid feedback.
Here’s what Zak, who wrote the response, said:
— — — — — —
…
Cold inquiries are a great way to go about searching for a job when they’re done right. I spent years teaching people how to do this, so don’t take this the wrong way — it can take some time to learn.
First, you don’t need to follow up on every medium immediately. Generally speaking, I encourage people to follow up via email or LinkedIn (if email is unavailable), 72 business hours later if you’ve received no reply. I cover in my email course what a lack of a reply on a first message can mean and how you can craft an appropriate follow up.
I also encourage people to be cautious about texting people directly if they’ve never had previous contact with them. Generally speaking, texting is a good way of getting in touch with somebody if they include their mobile number in their email signatures (or openly say elsewhere that somebody can text them). But if they don’t do that, then it can cross some social norms that people can be iffy about.
And if you’re going to hit up multiple team members, don’t use the same message for each one. That looks spammy. Tell people where you got their info, why you’re reaching out, give them reason or evidence to reply, and then make it ridiculously easy for them to reply.
Second, and this goes more towards the content of a good outreach pitch for landing a job, you’ll want to craft your pitch in a way that it is 1) compelling to the recipient firm and 2) unique to that firm. If your pitch can be sent to 10 firms with you just changing the name of the firm every time, it’s going to look like spam and generally get a lower response rate. A better approach would be to craft a specific why this firm in your outreach.
An even better approach would be to tell them exactly why them and what you could do for them. Put together a proposal and run them through how you can be helpful. I have an alum of my email course who did this quite well and landed a number of job interviews along the way.
Telling somebody, “I want to work for you” is only as good as the reasons you give them. If you show them that you can identify value that they need created and can create it, you take a lot of that work off of their plate. Loom is a good tool for walking people through what you can create.
Third and finally, do a lot of research on every firm you reach out to. That should be wrapped into point #2 as you put together value propositions, but you’ll want to know the team’s thesis, their fund size, their stage of investment, their last fund raised, etc. The fund size and number of funds determines how many people they can hire and how quickly they can hire. The thesis will tell you more about what they consider interesting.
For example, we have a very specific thesis that we can always tell when somebody has read over it and understood it before they reached out. We’re also a pre-seed fund, so experience in business school financial modeling classes just isn’t particularly relevant to us. There just isn’t that much data when we make investments.
So, for example, if you want to specialize in doing luxury investments and use your undergrad classes, find firms that do investments in the luxury space at or after the series A.
You can do this through research on Crunchbase. Find a few companies that are venture funded at the growth stage in the luxury sector, look at their investors, and find similar investors. Research all of those investors, put together value propositions, and cold email the partners and principals at those companies. Even better, find funds in this category that you know recently closed a new fund. That means that they have new management fees to hire folks.
I hope that’s helpful for your job hunt and you can use it to land a position at the right kind of firm!
Cheers,
Zak | https://medium.com/1517/how-to-stand-out-when-asking-for-a-job-c54fbecefae2 | [] | 2020-04-08 14:41:46.744000+00:00 | ['Startup', 'Investing', 'Job Hunting', 'Jobs'] |
Industry, Technology, and Innovation Trends for The Post COVID-19 Era | Industry, Technology, and Innovation Trends for The Post COVID-19 Era
Recently I was an invited speaker at IEEE Globecom 2020 Special Workshop on Communication and Networking Technologies for Responding to COVID-19. The speakers at this virtual workshop were all distinguished individuals and the topics covered were broad and insightful from contact tracing to smart devices, from detection & mitigation to data privacy & online lectures to AR/VR for smart health services, etc. It reminded me of a keynote speech by
Dr. Neeli Prasad, CTO of SmartAvatar B.V. at a virtual event that she said “society rightfully recognized the great contributions from our first responders, doctors, nurses, supply chain, logistics, supermarket workers, etc. but forgot to recognize or acknowledge the information and communication engineers who made internet and communication possible, without them, there won’t be any online collaboration, online school, telemedicine, social media, streaming services, e-commerce, etc.”
This special workshop reminded me of what drove me into engineering and entrepreneurship.
My talk focused on the “Technology and Innovation Trends for the Post COVID-19 Era”. Here below is the abstract of my talk:
The global economic downturn due to the pandemic COVID-19 outbreak acted as a catalyst to further amplify the adoption of new technologies and innovations above and beyond the pace, we got used to for the last two decades. Few things already seem very clear that platform firms like Amazon, Alibaba, Uber Eats, Zoom, etc. are dominating the markets even more. Companies will further accelerate their investment to conduct their business remotely over the internet to be more resilient to potential future lockdowns. My talk discussed about the industry, technology and innovation trends for the post-COVID-19 era.
Industry Trends Post COVID-19 Technology & Innovation
The industry trends listed above were already in motion for the last few years. However, COVID-19 will accelerate these transformations. COVID-19 has pushed the government, companies, and society over the technology tipping point and transformed these industry trends forever.
This blog will address the role of artificial intelligence, robots, digital transformation, and how these trends will impact the industries such as healthcare, education, e-commerce, media & entertainment, connectivity, and Industry 4.0 AI-decision making.
Post COVID-19 Technology & Innovaqtion
Healthcare Post COVID-19 Technology & Innovation
US healthcare spent was roughly $3.6 trillion in 2018 which makes it the highest per capita in the world with $11,172 per capita. Prior to the pandemic, 11% of the U.S.’s non-elderly population, roughly 30 million people, were uninsured or underinsured. It is estimated that due to COVID-19 shelter-in-place measures, which led to the economic lockdown, and the continuous lack of financial support from the government an additional 8 million people fell into poverty.
Telehealth Surge Under COVID-19
COVID-19 has caused a massive acceleration in the use of telehealth. Consumer adoption has skyrocketed as consumers replace their canceled healthcare visits with telehealth. In 2019, U.S. consumers’ use of telehealth made up 11%. However, now 46% of consumers are using telehealth services. Providers have rapidly scaled offerings and are seeing 50 to 175 times the number of patients via telehealth than they did before according to a McKinsey survey.
With the acceleration of consumer and provider adoption of telehealth and extension of telehealth beyond virtual urgent care, up to $250 billion of current US healthcare spending per year could be saved which is roughly 20% of Medicare, Medicaid, and commercial insurers spend. This saving alone will allow insurers to expand healthcare coverage to uninsured and underinsured citizens.
Higher Education Post COVID-19 Technology & Innovation
COVID-19 changed the way of educating and I noticed it first-hand with my daughter studying at the University of Amsterdam, Netherlands. Many schools and higher educational institutes were caught off guard with the first lockdown coupled with stay-at-home or shelter-in-place orders from their state governors and city mayors during the Spring of 2020. Schools and teachers had to reinvent themselves overnight and learn on the fly how to conduct virtual classes effectively, interact efficiently with students through chat groups, video conferencing, scheduling video meetings, voting, distribute assignments, document sharing, etc. It tests the higher education institutions’ commitment to ensuring education for all its students and how to solve problems remotely.
Most students want to return to their onsite and in-person class, socialize with their classmates and friends but they also found it easier to online communicate and interact with tutors & professors. There is research out there that shows that average students retain 25% to 60% more material and require 40% to 60% less time when they learn online compared to only 8% to 10% in a classroom.
In short with online class students can learn at their own pace, going back and forth as many times they want, skip or accelerate through the course material as they please. Higher educational institutes have taken notice of it and in the future expect them to provide high-impact learning experience across hybrid mode a mix of onsite and online classes, placing educational quality above modality.
e-Commerce Post COVID-19 Technology & Innovation
COVID-19 changed the face of the retail to a complete online Augmented Reality (AR) retail store with innovative ways to improve the shopping experience of customers, reduce the numbers of products customer returns and streamline the overall purchasing process.
Converse the shoe brand launched an AR app for iPhone called The Sampler that allows users to virtually try on shoes. Simply by pointing the camera towards their right foot, the user can see what the shoe would look like in real life. This also helps to streamline the purchasing process, as customers have the opportunity to buy a pair of shoes they like directly via the app. Ikea has integrated artificial reality with their app named Place. Shoppers can now use the camera of their smartphone to virtually place different home furnishings into their surroundings. The program allows users to interact with the projected images and envisions how they would look in various spaces. This helps customers find the perfect piece of furniture without having to return items that they imagined would fit. Warby Parker has a new update out for its iPhone app that uses Apple’s Face ID and AR tech to let customers virtually try on glasses in the app before they buy them. Warby Parker’s virtual try-on feature relies on Apple’s ARKit and True Depth features, so it’s only available on the iPhone X, XR, and XS phones. Another example of smart use of AR technology is the DressingRoom app from Gap. Shoppers can provide the app with some basic information about their body. The program then creates a 3D model based on the user’s measurements. With this model in place, the user can virtually try on clothes to see how they look. This is just another way in which companies are making the margin of error smaller when it comes to online purchases.
Augmented Reality based online shopping will enable a personalized experience with an ability to test and explore products in ways that is similar to an in-person shopping experience.
Media & Entertainment Post COVID-19 Technology & Innovation
Deepfake is synthetic data in which existing data, voice, image, and/or video is replaced with someone else’s likeness. Deepfake is also capable of generating realistic-looking images that even humans can’t recognize whether it’s real or not. Deepfake techniques are also used to generate synthetic data to balance algorithmically biased datasets for supervisory training of machine learning & deep learning models in order to improve overall model accuracy.
These People are NOT Real. These Images were Produced by StyleGAN
Cybercriminals are harnessing the power of this technology to reel in more victims. The thumbnail and heading make the victim really curious about the content of the video so they click through it “clickbait”. As soon as they navigate to the site, their computer is exposed to malware such as ransomware, keyloggers, or spyware. If they don’t have adequate cybersecurity in place, their computer is infected and they have to deal with the fallout.
On December 25, 2020, a hilarious digitally altered version of Queen Elizabeth’s annual Christmas speech was broadcast on the BBC and ITV.
Deepfake Queen: 2020 Alternative Christmas Message
The Deepfake version of Queen Elizabeth II took several swipes at members of the Royal family, and the Queen even danced in a Tik Tok routine. All of it was designed to warn of the ease of misinformation that could spread in the digital age.
Trust and verify what is genuine or what is not in the age of misinformation and disinformation media, it can be a serious threat to democratic values we take for granted and our way of life.
A push towards a greater 5G investment and faster market adoption in developed economic countries will be mainly driven by the potential economic boom and contribution to countries GDP expected from 5G connectivity. 5G will create a value of $13.1 trillion in global sales activities by 2035.
Enhanced Mobile Broadband (eMBB) will extend the 5G coverage and capacity with licensed and unlicensed spectrum
5G Use Cases (Source: Ericsson)
Massive Machine Type Communication (mMTC) will scale Internet of Things (IoT) applications and improve the battery life of IoT devices. Mission Critical Applications enabled by Ultra Reliable Low Latency Communications (URLLC) will allow public safety, emergency response and other smart industrial safety critical use cases and services, for example, autonomous vehicles, remote telesurgery, wireless manufacturing control, etc. become commonplace.
5G and beyond 5G connectivity will not only create new jobs in every industry sector, it will also unleash new value streams that will help grow the global economy for everyone. Businesses across all industry sectors will benefit by leveraging the unique capabilities of 5G over 4G.
Artificial Intelligence (AI) Decision Making Post COVID-19 Technology & Innovation
Many companies have adopted a data-driven approach for operational decision making as part of Industry 4.0. A data-driven approach can improve decisions but it requires the right processors “human” to get the most from it. However, to get the maximum value contained in the data, companies need to bring Artificial Intelligence (AI) into their workflow. Removing humans from workflows does not mean humans are obsolete, there are business decisions that depend on more than structured data e.g. strategy, creativity, corporate culture, empathy, emotion, and other forms of non-digital communication. This information is inaccessible to AI and extremely relevant to business decisions e.g. AI may determine that investment in digital marketing will result in the highest return on investment; however, a company may decide to slow down the growth for improving product quality.
AI-Driven Decision Making Combined with Human Judgement
Industry 5.0 refers to humans working alongside robots and smart machines. It is the age of Human-Machine Convergence. Industry 5.0 aims to support, not supersede, humans. COVID-19 proved the point that without human involvement manufacturing cannot function on its own. Industry 5.0 will automate the mundane tasks and relieve workers of physically demanding work so that workers can focus on creative craftmanship and concentrate on other tasks. | https://medium.com/datadriveninvestor/industry-technology-and-innovation-trends-for-the-post-covid-19-era-af4e8659b5d7 | ['Mahbubul Alam'] | 2020-12-28 12:35:45.998000+00:00 | ['Covid 19', 'Technology', 'Artificial Intelligence', 'Innovation', 'Pandemic'] |
Drowning in a Sea of Alerts | That copy writing client has sent a message — ping!
That book editing client asked a question— ding!
You’ve made a sale — pop!
The washing machine finishes a cycle — bing bing!
Is that someone at the door? — ring ring!
Thankfully I don’t get new e-mail alerts because I shut that crap down years ago, see also my phone. Some things like the doorbell might be unavoidable…unless I rip the effing thing out of the wall, which is tempting sometimes.
Every appliance, app, website, and marketplace wants to alert you to something. Sure, it sounds useful. I can have soaring productivity by plugging myself into the mainframe and becoming super aware of every disparate event, question and occurrence across my working day. However, all these pings and dings have made me very aware that focus is finite. The human mind just doesn’t work like a computer does.
Apps and appliances present their alert functions as non intrusive. They claim not to commandeer your focus and attention, just a tiny bite of your limitless ability to be aware of many things at once. The theory is that all these numerous helpful assistants give you an ever so subtle and helpful poke now and again.
What these alerts actually do is completely derail your focus in exchange for letting you know what you should be doing an hour from now. I’m pretty sure you knew that anyway. I’m also pretty sure that a physical list which you can choose to look at when you need a prompt is more useful and less intrusive.
We all have a natural alert system. We have awareness of different things at different times. We tune in to what we want to concentrate on and prioritize. The idea that any of us can do that effectively when an external force is pinging away on our attention bongos is deeply flawed.
Certainly, there’s a time and place for helping our natural alert system out. A few weeks ago I was in a pub that had a fire in the kitchen. Off goes the fire alarm, out of the pub everyone traipses. That’s a useful alarm. Even in this example though you can see that an alarm is mean to distract you completely and get you to change track.
Even the words “alarm” and “alert” don’t have connotations of retaining your focus and cataloguing that you need to do something later.
Even just the availability of alerts and updates is a problem, the only difference is you do that damage to yourself. Whether it’s refreshing your stats on Medium or picking up your phone again, you aren’t actually alerting yourself to something that needs doing. All you’re doing is willingly bailing on whatever you’re meant to be doing.
Being busy makes people feel productive. Being productive fires up all the parts of your brain that reward you. The problem is this part of your primitive wiring is fairly blind. On a primitive level it makes sense for us to be alert to everything around us. On a professional level this is how you waste your time.
When you check multiple accounts, screens or messages you feel busy, because you made yourself busy. Busyness and productivity are not synonymous.
When you kid yourself that you achieved something (even when you didn’t) you get a kick of good feelings. That false sense of achievement triggers your reward responses, which is very dangerous when you haven’t actually done anything.
Phone, e-mail and other alert/message checking is addictive. You can see that in how other people use their phones and check e-mails even if you can’t see it in your own behaviour.
Check phone. Glance back at work. Twitch. Check phone again to see if an e-mail came in the last 2 seconds.
Even if you have a new message or email, is it productive to focus on it right now? If your whole life is about keeping abreast of new updates, when do you focus on your work? Probably after wasting most of the day and realizing you need to rush all of your actual work to get it done.
Alerts are the equivalent of someone physically tapping you on the shoulder and asking for your attention every few seconds.
This doesn’t just distract you; it constantly undermines your own decision making.
When you decide and plan when you’re going to do things you decide when you’re going to give your attention to them. You keep your focus and you stay in control of the direction of your day.
Trying to work on the basis that the flow of your actions is going to be determined by multiple external events (bar an actual fire in the building) is obviously going to make it very hard to concentrate.
Making a firm decision about when and how you check updates boosts both productivity and confidence. Take back control. | https://medium.com/bettertoday/drowning-in-a-sea-of-alerts-c2bf11324f9d | ['Stef Hill'] | 2020-02-07 11:39:39.295000+00:00 | ['Procrastination', 'Productivity', 'Time Management', 'Work', 'Work Life Balance'] |
Submit Your Story to Transform the Pain | People need to hear your story. Share your experience, tell us what you are going through, how you are coping, what makes it hard, and what helps.
It’s through sharing our stories that we connect, learn, and help each other. Sharing your thoughts and feeling related to loss can be a healing experience that can also help others relate and better understand the process of grieving.
Click here to submit for the first time: https://transformthepain.typeform.com/to/QDLDn4
If you have Medium account, we’ll add you as a writer when your story gets published. From then on, you will be able to submit your stories directly through the Medium interface — for those of you who are not familiar with the process, here is how that goes: | https://medium.com/transform-the-pain/submit-your-story-to-transform-the-pain-56bedbd0440 | ['Mateja Klaric'] | 2020-09-19 07:53:57.225000+00:00 | ['Transformation', 'Medium', 'Writing', 'Grief And Loss', 'Call For Submissions'] |
How to Write Better React Code With useMemo | When React hooks were introduced in React v16.8, developers were finally given the ability to manage state in functional components by using hooks like useState , useEffect , and others. In this article, we’ll be looking at the React hook useMemo and how we can use it to write faster React code.
Photo by Filiberto Santillán on Unsplash
To look at why useMemo even exists, we’ll first be looking at how rendering works.
Function Equality and Expensive Operations
useMemo at its core seeks to solve two problems, function equality and expensive operations.
During the lifecycle of a component, React re-renders the component whenever an update is made. This means that React will rebuild all the functions and variables in the React component, potentially a very expensive operation for more complex React components.
Objects in Javascript are by default unique. For example, lets take a look at this code:
Here, we have two objects, x and y , that have the exact same structure. However, when compared with Javascript they aren’t the same value!
When React checks for changes in a component, due to this object equality issue, it may be unnecessarily re-rendering the component tree due to perceived changes in objects, when in reality the object has the exact same values.
This is where a technique called memoization comes in.
What is Memoization?
Memoization is similar to caching an operation or value. For example, lets say we have a function that computes 1+1 and returns 2 . If we memoize this function, next time it uses the function to calculate 1+1 , it will remember that 1+1 is 2 without ever re-running the function!
This can be incredibly powerful for speeding up complex operations.
useMemo
From the official React documentation, useMemo looks like this:
const memoizedValue = React.useMemo(() => computeExpensiveValue(a, b), [a, b]);
Note that we pass in a function, in this case computeExpensiveValue , and a dependency array [a, b] . The dependencies are similar to arguments for the function, they’re what useMemo watches to determine whether or not it should run. When there’s no changes to a or b , useMemo won’t run and instead return the stored result.
This can be optimal if the wrapped function is incredibly expensive.
Lets take a look at a real world example:
const complexList = React.useMemo(() =>
list.map(item => ({
...item,
expensiveValueOne: expensiveFunction(props.first),
expensiveValue2: anotherPriceyFunction(props.second)
})), [list]
)
In this case, we’re using the useMemo hook to convert a list into a list of objects. On first render, this function will run, blocking the main thread. However, on every subsequent render, unless list changes, we can reuse the same value and not run the expensive functions again.
When to use useMemo
When doing any React optimization, ensure that you fully write and revise the code to see if you can optimize it. useMemo can actually hurt performance if used incorrectly.
Profiling your React application can be a great way to ensure there’s a measurable impact from implementing useMemo ,
Using the right hook for the job
useMemo isn’t the only React hook, there’s also useCallback , useRef , and useEffect .
The useCallback hook, which I wrote an article on, is very similar to useMemo but it returns a memoized function instead of a memoized value.
If your dependency array is empty or contains values that change on every render, there’s no chance for useMemo to properly memoize values and there will be no performance gain.
Don’t use useMemo to fire off any asynchronous values, instead you should use useEffect , useMemo should be used with pure functions.
Conclusion
The useMemo hook can be incredibly powerful for improving your React applications performance when used properly. By memoizing an expensive function, we can save the output value and make that function appear to run instantaneously. However, useMemo adds its own overhead, and should only be used when there’s a clear optimization benefit.
Keep in Touch
There’s a lot of content out there and I appreciate you reading mine. I’m a undergraduate student at UC Berkeley in the MET program and a young entrepreneur. I write about software development, startups, and failure (something I’m quite adept at). You can signup for my newsletter here or check out what I’m working on at my website.
Feel free to reach out and connect with me on Linkedin or Twitter, I love hearing from people who read my articles :) | https://medium.com/swlh/how-to-write-better-react-code-with-usememo-cbc1cdf0d384 | ['Caelin Sutch'] | 2020-12-23 22:47:46.189000+00:00 | ['React', 'Programming', 'Software Developmen', 'Reactjs', 'Web Development'] |
My High School Sweetheart Was A Sick-Hearted Villain | I believe two kinds of people exist in the world — people who had a great time in high school and people who had some traumatic experiences during high school.
I belong to the latter.
All the hormones at that age make it unlikely to experience anything less than an emotional roller coaster. Life before high-school seems like butterflies and rainbows. Coolness and the strife for acceptance by peers are the biggest priorities in the survival guide to being a teenager.
It is the period of many firsts — crushes, relationships, dates, friendships, fights, failures, victories, and for the unfortunate, some abuse.
What happens in high school stays in high school. NOT!
Every experience at that age is vital and stays with you for life. Good or bad.
Good Girl Gone Rogue
I was a wallflower. Not too popular, not active in sports or any other activities, not a part of any club either. I was just an average, nerdy, obedient kid who did not have big dreams.
It was in grade 9 that I was first asked out by a senior and did not know how to process it. I also did not know if I should talk about it to my parents because having a boyfriend was a loud NO for my strict brown parents. Breaking their rules or distractions from academics was terrifying for me.
They still believed in corporal punishment and would get creative with whatever was at arm’s reach — ruler, slippers, hairbrush, ladle. Afraid of beatings, I stayed away from boys and locked away any feelings or crushes. A good set of friends made me feel content.
A year of my average life went by, and I was now in grade 10. “This is the most important academic year of your life.”, they said. Only if I got a dollar, for every time I heard this in my life.
Academics got more challenging, and my schedule got tighter. In the middle of the year, my friends started boycotting me because they heard rumors that I got around with countless boys.
“But I don’t even know all the boys from this rumor.”, I scoffed in disappointment. I thought that we were going to be best friends forever.
There was nobody I could talk to or cry to about this — definitely not my parents. I lost the one thing I had going on for me. Going to school felt like a prison sentence, and my loneliness made me realize they weren’t good friends.
Not so scared of suffering at home anymore, I started bunking classes, failing tests, and talking back at home — a little rebel without a cause. I wasn’t going to be miss-goody-two-shoes anymore, breaking bad doing whatever felt wrong but still stayed far-off from my male contemporaries.
For Better and For Worse
Two Important Events that Changed my Life
Siri, a classmate who I barely spoke to in one and a half years, moved to my neighborhood. We started hanging out to play badminton and eventually started copying each other’s homework and talking about school and boys. Before we knew it, she became an essential part of my life.
As different as the north and south poles, yet so similar. What brought us close was our mutual lack of friends. I thought I was an introvert until I met her. She was a bigger wallflower. We continued being shy and awkward together.
It was also around this time that our teacher Miss Starlet announced that Jayden, a boy from my class, had a congenitally defective heart and underwent surgery for it.
“Treat him well, and don’t bully him.”, she said.
Jayden was one of the popular kids at school. He had the bad boy vibes and was somewhat of a bully himself, which for some reason, attracts girls. In retrospect, he was just a noticeably short boy, awfully pretentious and mean. However, I was not immune to it back then.
My feelings for him grew stronger when he asked me out one day. Not accustomed to getting male attention, I immediately fell for him, and we started dating in secret.
In secret because firstly, I did not want my parents to find out, and secondly, he did not want anybody from school to know (probably a red flag that I was too blind to notice then).The only person who knew about this relationship besides the two of us was Siri. We never went on dates or did anything together in public.
He devised a plan to spend time together after school hours when nobody could see us. I would tell my parents that I was with the tutor after school and stayed back, and Jayden would ask me to meet him in a classroom where all he wanted to do was make out. He taught me how to kiss by shoving his tongue down my throat.
Inexperienced to such pleasures and sensations, I agreed to this daily make out routine. However, we never spoke about anything. Every day, I would go back to Siri to give her the details — deets, as she called them.
One day, while winding up our little session, Jayden said to me, with a devilish smile on his face, “My parents won’t be home tomorrow. Wanna come?”. How could I say no to those deep brown eyes?
Overwhelmed with joy and anxiety, I ran to Siri to give her the news. She immediately gave me a piece of her mind for being so vulnerable to poor treatment. Yet, she agreed to go with me.
Black-Letter Day
I woke up with mixed feelings but mostly frightened. That day in school got over in a haze. Despite me feeling the heebie-jeebies, Jayden refused to acknowledge my existence as usual. My low self-worth made me ignorant of such suspicious behavior.
After school, Siri and I anxiously walked over to Jayden’s house. Both his parents, doctors, were always out on duty, and his building was infamous because of his gang.
I rang the doorbell with my heart in my mouth, standing like nuns waiting to enter hell. The door swung by, and there he was. An instant sense of ease took over as he invited us inside.
It only lasted for a couple of minutes.
All the guys from school that our parents warned us about were there. We were like the Red Riding Hood in the Big Bad Wolf’s den. Everyone stopped playing PlayStation, or eating pizza, to catcall and tease Jayden. My anxiety doubled, making me super uncomfortable. One glance at Siri and I realized she was too. Jayden took us to the couch and gave us sodas. The introvert meters in us were probably erratic.
“Would you mind if I steal her for a while?” he asked Siri while holding my hand. She hesitantly shook her head. I checked with her again.
“Siri, are you sure you’ll be OK?”
“Yes. I’ll be fine. You be careful.”
Reluctantly, I left her on the couch, unguarded, while Jayden took me into his bedroom.
He shut the door, held me by my waist, and began French kissing me. Only this time, I wasn’t feeling it. I tried to push him away by saying that I should check up on Siri. He reassured that everything was fine and gently took me to his bed.
He started unbuttoning my blouse, pushed aside my bra strap, and cupped my breasts. All sorts of alarms were going off in my head. Afraid that he might realize I’m not cool enough for him, I pretended to remain calm. Although, my limbs felt numb like somebody had tied me down.
Is this supposed to feel sexy? I was shaking.
Finally, I felt relaxed when he stopped fondling my breasts. Then promptly, he started pulling up my skirt and running his hands on my inner thighs.
THAT’S IT!
I quickly gathered all my strength, and pushed him away as far as possible, adjusted my bra and blouse while I ran to Siri. Poor Siri, she looked as traumatized as me in between those hungry jackals.
I could tell she was also glad to leave.
Series of Escalating Misery
The next day in school was excruciating on so many levels because neither did I tell anyone what happened there, not even Siri, nor could I stop thinking about it.
Despite feeling violated, I dreaded to end things with Jayden. That day, instead of meeting with him after school, I went home.
Little did I know that another catastrophe was waiting to happen.
My mom was patiently waiting for me with a stick and welcomed me home with an ambush. Jayden’s neighbor, a stay-at-home mom (let’s call her Karen, for obvious reasons), saw us run out of his house with a partially buttoned blouse and babbled everything to my mother.
Side Note:- Let me tell you something about jobless, middle-aged women, who like putting their noses where they don’t belong. They have the surveillance capacities of the CIA and all the time in their hands, which makes them the most dangerous undercover agents.
Misconduct and brown parents are the perfect recipe for an enormous fuss.
Even though my father whipped me with his belt as punishment, what hurt me more was that they forbid me from meeting Siri until final exams because they thought she was a bad influence. We still found ways to hang out but swore to never talk about that day ever again.
The following days in school were worse. Guess who wasn’t ashamed to address me in public anymore. Yes, the same sick rascal who didn’t even have a fully functional heart, to begin with -Jayden!
Slut-Shaming
He started openly slut-shaming me, and I found out that he approached me in the first place because he heard the same rumors as my first group of friends. Clearly, he only showed interest in me because he thought I was an easy target, and he was not wrong. My gullibility led me to an incident that deeply affected my life.
School rumors spread like wildfire, and most often, it comes around to you after each person has added their zest to the story. The whispers about me were — “ She charmed her way into his bedroom, seduced him to have sex, got pregnant, wrongfully accused him of rape, and got an abortion.”
WOW! What soap opera were they writing for? Am I right?
Like a game of Chinese whispers, there were many versions of the gossip, none in my favor. I was bullied by some, patronized by others, but branded by all as the school floozy. Siri was my solitary cheerleader through it all and shielded me from skepticism.
My Rainbow After the Storm
Twelve years have passed since this conflict, and though I have gathered a lot of happy memories along the road, the wound still feels fresh. I wish I could tell my parents about the day that still haunts me, but I decided I shouldn’t remind them of the shame I caused them.
Things could’ve gotten far worse that day, and that’s what frightens me the most. Even though I wasn’t harmed physically, high school heavily damaged me on an emotional level.
It was only recently that Siri and I finally opened up about that day. We cried a little because the trauma sneaked back up on us, but laughed more at the blown-up “scandal” and how stupid we were back then.
High school was, without a doubt, disastrous for me, but I gained the one thing I will never let go — Siri. She is still my most reliable friend, confidant, and trusted pal to this date, and will be forever.
And Jayden? He’s still a short, arrogant, snobby teenager stuck in an adult’s body, who hasn’t had a life after high school. | https://medium.com/survivors/my-high-school-sweetheart-was-a-sick-hearted-villain-9883c0f1b35f | ['Alisha Baxter'] | 2020-09-10 07:37:34.585000+00:00 | ['Life Lessons', 'Life', 'Sexual Assault', 'Relationships', 'Mental Health'] |
Dan Rojas’ Author Bio | Dan Rojas’ Author Bio
My path to redemption as a writer
They called him el chiquito que amaba el mundo: the little boy who loved the world. The native Panamanians of Escobal and Cuipo saw not the wretched direction Dan’s life would take, but only the open-heartedness of a little boy who would one day conquer his depravity.
Dan Rojas’ childhood was unstable; he was admitted to seven mental hospitals before puberty. This is largely due to the pharmaceutical industry’s diagnosing fetish, fueled by its profit margins. By the age of nine, Dan was diagnosed with oppositional defiant disorder (ODD), bipolar disorder, and depression. These were all incorrect diagnoses for disorders Dan did not have, and he was heavily medicated for each. His parents attempted to intervene, but social services’ implicit threats to take not just him but his three siblings as well barred them from taking action. For the majority of his childhood, Dan was forced to take pills at dosages equivalent to a lobotomy. Dan’s resentment of the system responsible for his chemically imprisoned upbringing is justified.
In adolescence, Dan was given a corrected diagnosis of attention deficit disorder (ADD). This enabled him to break free from the false labels nailed to him by the penny-per-prescription model of current psychiatric practice, in which a dog can be prescribed Xanax. For the first time in a long time Dan could feel his soul breathe; he was living again, but not without a price: the demons of his childhood came to collect. With crippled empathy and a growing inferiority complex, juvenile delinquency was the perfect avenue for his vindictive outrage. Ignorantly, Dan took to drugs and drinking by 16 to soothe his crumbling frame of mind. The first pattern of alcoholism bloomed. Dan saw a bleak future, a wasted life he couldn’t turn from, and pursued the Army in hopes of escape.
To enlist, the Army entry standards required Dan to be off of any ADD medication for two years before enlistment. During his junior and senior years, without the help of his ADD medication and with his prideful refusal to utilize his individual education plan (IEP), Dan’s grades suffered. Despite the threat of not graduating, he refused to “give in” to the system and did not apply himself. To him, high school was a direct extension of his childhood’s chemical prison, and he foolishly rejected everything it had to offer. Although barely, Dan managed to graduate. To him this was a success — he beat the odds and, as planned, he enlisted.
Dan opted for an airborne infantry contract. A few months before the ship-date, Dan blacked out at a party and awoke the morning after with a concussion, shattered bones, a complete fracture to his right mandible, and other minor traumas. His airborne infantry contract expired in the six months he took to recover after the facial reconstruction surgery. Dan signed a new contract with the Army as a healthcare specialist.
Dan graduated basic training and advanced individual training (AIT) with distinction, but the remainder of his short military career was served in distaste to both him and his superiors. His arrogance, drinking, and characterless dishonesty are what define his military “service.” He was discharged for failure to rehabilitate, and all too late, Dan realized he was an alcoholic.
In the following three years, Dan struggled with sobriety as he attempted to form a new life back home with his mother and younger brother. Although relapsing many times, Dan strove for growth. The path was barbed and riddled with missteps; he hurt and betrayed many during this period of life. But hope was not lost. Dan’s turn for the better began with a book, The Ego and the Id.
Dan was indignant that a book, written nearly a century prior, understood him better than the lot of his childhood psychiatrists, psychologists, and social workers. But Dan felt the social injustices of his life were just side effects of something deeper — but of what? In his search to answer this question, Dan became convinced of the United States Education System’s corruption, and its push to create an uneducated, impoverished, slave-wage working class. Disillusioned, Dan saw America critically for the first time. He needed more knowledge and dove headfirst into Freud and his contemporaries. With a foundation in theoretical psychodynamics, Dan preferred the Neo-Freudian humanist outlook and continued exploring other theoretical fields for truth and clarity. Dan’s research opened his mind and, slowly, his heart followed.
From this pursuit, Dan read three works that radically changed his life: Erich Fromm’s The Art of Loving, Paul Tillich’s Dynamics of Faith, and Martin Buber’s I and Thou (Kaufmann translation). These works helped Dan look at his past, present, and future in a deeply critical manner and helped him concretize his first major moral summit since his pitfall with alcoholism. It is because of these works that Dan’s faith in humanity and in himself was restored. At long last, hard-fought and hard-won, Dan had reclaimed his will to meaning.
Dan is a survivor of a morally fraudulent system and a survivor of one overdose and five suicide attempts. Of these life-threatening events, two would have been fatal if not for the rapid interventions of his sister. Yet, for all that has happened, Dan was, and is, more than a helpless victim. He could have made the best of what he had, but Dan chose bitterness, anger, and blame over love — video meliora proboque deteriora sequor: I see and approve of the better but choose the worse.
Without morals, ethics, or principles, Dan Rojas committed acts of cruelty and hatred. He introduced drugs to people, which helped ruin their lives. He took love and used it against those whom he loved. Dan betrayed best friends and sold out family. Dan was a misogynist and a xenophobe. Dan judged people for immutable characteristics and was cruel to them. He embodied all these things before his 25th birthday and will remain, for the rest of his life, a battling alcoholic.
This is Dan’s greatest shame: this vile history is him at his worst and he owns it. Dan is confident in his re-humanization and he shares these low truths so that others, who may not know their way through the fog, can see a beacon home: the home of accepting the wretched imminence of one’s past and stepping out from the shadows back into the light. This is his confession, his apology, and his penance.
Dan writes to show change is possible, even for monsters, and redemption is how the ignoble, nobly live. | https://medium.com/from-the-library/dan-rojas-author-bio-aa8c5fec59c5 | ['Dan Rojas'] | 2020-01-02 13:59:13.988000+00:00 | ['Addiction Recovery', 'Redemption', 'Mental Health', 'Ftl Bio', 'Struggle'] |
This Illiterate Went From a Starving Shepherd to a Man of $3 Billion | Entering the business world
Chaabi thereafter roamed the country for a few years doing menial work before he finally settled in Kenitra as a blue-collar worker.
While working in masonry, Miloud developed a keen interest in real estate. So, he decided to start his own construction business at the age of 18.
The start-up consisted of two workers and wasn’t generating big profits in the beginning. But it was a good occasion for our entrepreneur to develop shrewd business acumen and set aside some money.
16 years elapsed before it was time to expand the business and explore new market opportunities. And the next stop was the porcelain market.
Miloud, therefore, founded the porcelain business “SUPER CERAME” and kept increasing his assets even beyond Moroccan soil.
He said that before he became a businessman, trading and business were restricted to a few Jewish and French families, along with some famous Moroccan families, back when Morocco was colonized. Hence, setting foot in the trading industry was a long, arduous journey.
My 2 cents on Paris ChangeNOW 2020 summit | Impact Initiatives are “all over the place”
Pampers collecting diapers in Amsterdam for recycling (diapers rank in the top 10 sources of domestic waste — so a sizeable topic indeed). It is worth noting that these bins accept all diaper brands, not just Pampers.
Diapers recycling bin in Amsterdam
Tokyo Olympic medals made out of garbage (= actual gold, silver and bronze recovered from old mobile phones). Story here
Incredibly, Japan managed to extract 32kg of Gold, 3,500kg of Silver and 2200kg of Bronze from used electronics
Houses made of recycled PET, “plug and play” structures:
Check out the work of businessman and philanthropist Ustinov to recycle PET into “ready-to-assemble”, highest-industry-standards housing structures
3D objects printed on the spot out of a seaweed bath
Courtesy of the southern city of Arles, this process can live-print anything from beautiful decoration items to functional fabrics:
International Protocols starting to emerge:
Impact companies need Scale to actually have an impact. Protocols enable Scale. Here are 2 convincing signs on the market:
Loop : retailers and brands coming together to organize a -large enough- platform of reusable containers.
Even though some claim this is more marketing and greenwashing than actually viable logistics, this project has the merit of pushing the experiment much further than ever before, and taking a good first step toward the scale effect required to make these products economically viable.
A first batch of Loop products is getting off the ground thanks to some of the world’s largest brand operators (Unilever, P&G…) and retailers (Tesco, Carrefour…)
B-Corporation certification is an independent, standardized way to assess social, environmental and public impact.
Notably endorsed by the United Nations, the certification process is thorough and specific to each area of activity of a company. As an example, Danone has been able to B-certify 17 BUs of the Group (or 30% of Group’s revenue) over the past years, one at a time…
Last but not least on Protocols — this punchline by Andrew Morlet, CEO of the Ellen MacArthur Foundation:
Economic model is an absolute MUST: no impact company will emerge at scale otherwise
As Igor USTINOV put it, there are 3 steps to scale an impact company:
* Start small / find your audience
* Develop a robust enough model
* Continuously adapt as you grow
Contrary to some beliefs, emerging countries are at the frontline of the fight against plastic
Those colorful sculptures below, though adorable, are sadly made of flip-flops washed up on Indonesian beaches — not thrown away or abandoned there, but literally carried in by the sea…
The end of the international garbage trade and the keynote from Malaysian princess Zatasha against plastic pollution and food waste were among the many clues that emerging countries are not waiting too long to claim their turn
Food and agriculture account for 8 of the 20 most important levers to fight global warming
Below is a table by drawdown.org (an internationally crowdsourced impact website — which I didn’t know until then) of the top CO2 reduction initiatives by impact. In particular, see how “reduced food waste” and “moving to a plant-rich diet” are taking the front-row seats.
Check out drawdown.org for all 80 initiatives
Paris Olympics 2024: Paris won the gig thanks to a promise on sustainability
Interesting talk by Tony Estanguet, multiple gold medalist in kayak and now lead organizer of the Paris 2024 Olympics, on how these Olympics will be the 38th in history, but really the 1st to be carbon neutral — aiming for a 50% reduction of CO2 impact versus previous events.
Among the measures enabling this objective: the fact that all sites will be accessible by public transportation, but perhaps even more notably the fact that the majority of sports events will take place at Versailles, the Grand Palais, the Eiffel Tower… thus preventing new site construction — Cool!
A Guide to GitHub for Non-Developers | GitHub
Let’s start with the big guns: GitHub. GitHub is where we keep all of our code; it’s Dropbox for developers. Code is split into repositories (or repos) which are akin to project folders on your computer. Most of the time a distinct app or service has its own repo. E.g. the next-article repo for the article page app.
Things a developer might say:
Something’s broken but I haven’t figured out which repo it’s in 🧐
If you look at a repo in GitHub, the code you will see is what’s on the main branch. A branch is a bit like a sub-folder, and the main branch contains the code that exists in production, i.e. if you look at the main branch on the next-article repo you will see the (nicely formatted version of the) code that gets run when you load an article page on FT.com.
Working locally
A repo will usually have other branches too; these are copies of the main branch which have been edited in some way, usually because a developer is in the process of adding, fixing, or otherwise changing something.
When a developer wants to make any changes to the production code they do so on their own laptop on a copy of the repo they have cloned there. This is called working locally. They do this work on a branch so that their changes aren’t reflected in the production code until they’re happy with them.
Multiple times a day a developer will commit and push the changes they’ve made locally to GitHub. This saves the changes to GitHub and these are the branches you’d see if you looked at a GitHub repo.
Which branch is that on? 🤔
Whyyyyy can’t I get this running locally?!!! 😩
Let me just commit these changes before I go 🍺
Merging
When a developer is happy with their code they will create a pull request (PR) asking for it to be reviewed. In a pull request a developer explains the changes they’ve made in the code. Other team members review the changes and approve the PR or leave comments asking for changes.
Creating and reviewing pull requests is the part of this process where there’s a dependency on other people, and it can be time-consuming, although it’s very worthwhile. Good pull requests make it as easy as possible for people looking at the code in the future to understand what decisions were made to get it to that point.
Once the PR has been approved it can be merged into the main branch.
My PR is almost ready to go 🙌 (i.e. almost ready to be reviewed)
Pleeeeeeeeeease can someone review my PR? 🙏
Deploying
Once a PR has been merged the code from the developer’s branch will be contained in the main branch, but it won’t yet show up on the website. For that to happen it needs to be deployed.
Deploying means getting the code from where it’s stored, in GitHub, onto the servers that host the site, in our case on a platform called Heroku.
Because of some wondrous thinking by the FT developers of yore about how to make deployment as easy as possible, the deployment process starts automatically when a PR is merged in GitHub. We use another tool called CircleCI to manage our deployments.
CircleCI takes the code from GitHub and builds and deploys it.
In building the app, CircleCI takes the code that’s in the repo and does a load of stuff to it that makes it ready to run in production. (Like all development, this process is like fractals — you can get more and more in depth about what it entails until you’re just dealing with the zeros and ones, but for the purposes of this blog ‘stuff’ will do).
CircleCI then runs a final set of tests on the code and deploys it by saving it to the relevant app server on Heroku. Once the code has successfully been deployed it will be visible in production within minutes. | https://medium.com/ft-product-technology/a-guide-to-github-and-deployment-for-non-developers-7811dcf508bb | ['Jennifer Johnson'] | 2020-10-12 12:21:41.491000+00:00 | ['Github', 'Deployment', 'Development', 'Source Control'] |
Web Scraping with Python and Object-Oriented Programming | Web Scraping with Python and Object-Oriented Programming NafadAlJawad Follow Oct 17 · 4 min read
Web Scraping, also termed Web data extraction, Web harvesting, or Screen Scraping, is a vital mechanism in today’s world. Through Web Scraping you can extract useful public information from your targeted websites and put it together for data analysis, product comparison, statistical reports, and much more. Python is undoubtedly the most popular language for web scraping, and today I am going to give an example of extracting data from IMDB’s website. We are going to get the top 250 movie rankings of all time and display any random 10 movies to the user.
So, let’s dive in without spending any more time! At the end, I am going to elaborate on the reason for choosing this coding structure. I am assuming you have a basic understanding of Python and HTML. We need the BeautifulSoup (bs4) package in Python for this tutorial.
Firstly, in the terminal write the following command and press enter to install the BeautifulSoup package:
pip install bs4
then import the following modules at the top of the file
from bs4 import BeautifulSoup
import requests
import re
import random
Now we are going to write a class named ExtractMovies; you can, of course, choose any other name if you want to!
# Python class for declaring movie attributes.
class ExtractMovies(object):

    def __init__(self, title, year, star, ratings):
        self.title = title
        self.year = year
        self.star = star
        self.ratings = ratings


# Function to trim ratings to two decimal places (e.g. "9.234567" -> "9.23")
def first2(s):
    return s[:4]
Here, we are declaring the attributes related to a single movie and storing it as an object. Later on, we are going to populate the movie object with their unique characteristics or attributes. We are going to see the use of the function first2 later on, so chill for now!
url = 'https://www.imdb.com/chart/top/'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

movies = soup.select('td.titleColumn')
links = [a.attrs.get('href') for a in soup.select('td.titleColumn a')]
crew = [a.attrs.get('title') for a in soup.select('td.titleColumn a')]
ratings = [b.attrs.get('data-value') for b in soup.select('td.posterColumn span[name=ir]')]
years = soup.select('span.secondaryInfo')

# Temporary list to store class instances
_temp_ = []
In the above part:
first-line: We are declaring the url as a variable, this is the URL to IMDB top movies chart: https://www.imdb.com/chart/top/
second-line: Declaring a variable to send an HTTP request to the given url and receive the HTML response in text format.
third-line: Beautifulsouping the elements! Means, we will be selecting and processing the text with this variable.
fourth-line onwards: With “soup.select” we are selecting the elements of the HTML object in the requested url.
One more thing: are you wondering what these “td.titleColumn”, “href”, “title” or “posterColumn” are doing? Okay, these are the descriptions of the elements of the HTML page we are working with. You can follow the url and inspect the page in developer mode to understand more. You can also follow this link to view the detailed documentation on different ways of using BeautifulSoup.
for index in range(0, len(movies)):
    movie_string = movies[index].get_text()
    movie = (' '.join(movie_string.split()).replace('.', ''))
    movie_title = movie[len(str(index))+1:-7]
    year = years[index].get_text()
    movie_instances = ExtractMovies(
        movie_title, year, crew[index], first2(ratings[index])
    )
    _temp_.append(movie_instances)
Here, we are looping through the range of the movies object that we got earlier, extracting each piece of data into its required fields, assigning those fields to a class instance, and appending it to the _temp_ array that we created earlier. And now the first2 function: we are using it to trim the ratings to two decimal places. Ratings here is a string object; you may use any other approach to convert it to a Float if required.
random.shuffle(_temp_)

i = 1
for obj in _temp_:
    print(i, "|", obj.title, '\n',
          obj.year, '\n',
          obj.star, '\n',
          obj.ratings, '\n')
    i = i + 1
    if i == 11:
        break
In this last part, we first shuffle the array to get random movies, and then we print the output in a decorated format. We keep checking the counter; when it reaches 11 (i.e., after 10 movies have been printed), we break out of the for loop.
The reason for choosing this class-instance method is that it gives you more freedom, and you can easily call this class anytime in your code if you want to extend it further! You can also do this by putting the movies in a Dictionary. I am going to explain the differences between Dictionary, List, and Class objects in one of my future blogs.
Oh! I forgot to mention, this is my first ever blog online!😊 I am so excited to write this article and publish it here on Medium! I appreciate your reviews and feedback, or suggestions on anything you recommend I write about! 🤞🤞
The entire code of this tutorial is as follows:
https://gist.github.com/jawad-nafad/065ea5795139c6c7942cc8f116cd2e11 | https://medium.com/analytics-vidhya/web-scraping-with-python-and-object-oriented-programming-14638a231f14 | [] | 2020-10-20 12:45:38.147000+00:00 | ['Python', 'Data Extraction', 'Object Oriented', 'Web Scraping', 'Tutorial'] |
Kubernetes Security With Falco | Kubernetes Security With Falco
Comprehensive runtime security for your containers with a hands-on demo
Photo by Dominik Jirovský on Unsplash.
Falco is an open source runtime security tool that can help you to secure a variety of environments. Sysdig created it and it has been a CNCF project since 2018. Falco reads real-time Linux kernel logs, container logs, Kubernetes logs, etc. against a powerful rules engine to alert users of malicious behaviour.
It is particularly useful for container security — especially if you are using Kubernetes to run them — and it is now the de facto Kubernetes threat detection engine. It ingests Kubernetes API audit logs for runtime threat detection and to understand application behaviour.
It also helps teams understand who did what in the cluster, as it can integrate with Webhooks to raise alerts in a ticketing system or a collaboration engine like Slack.
Falco works by using detection rules that define unexpected behaviour. Though it comes with its own useful default rules, you can extend them to define custom rules to harden your cluster further.
So, some things that Falco can detect are the following:
Opening of a shell session from a container
Host path volume mount
Reading secret and sensitive files such as /etc/shadow
A new package installation in a running container
A new process spawned from a container that is not a part of CMD
Opening of a new port or unexpected network connection
Creating a privileged container
and much more…
All these features make it particularly useful to understand less about whether you have the appropriate security in place and more to ensure you know when there is a potential breach so that you can stop it before something terrible happens. Falco, therefore, complements the existing Kubernetes native security measures such as RBAC and Pod Security Policies that help in preventing issues rather than detecting them.
There are multiple ways of running Falco within a Kubernetes cluster. You can install Falco in every Kubernetes node, bake Falco as a second container in the pod, or you can use a Daemon Set to inject a Falco pod in them.
Using a DaemonSet is a better and more flexible option, as it requires the least amount of changes in the Dev function and also does not take a toll on the Ops function as the first option requires. Also, it is Kubernetes-native, so it is the preferred way. | https://medium.com/better-programming/kubernetes-security-with-falco-2eb060d3ae7d | ['Gaurav Agarwal'] | 2020-10-23 15:33:27.588000+00:00 | ['Programming', 'Kubernetes', 'Cybersecurity', 'Containers', 'DevOps'] |
We Need to Be Kinder to Ourselves | When it comes to self love, I’m a huge advocate.
For other people. Not so much for myself. I mean, it sounds awesome, in theory. It’s not so easy, in practice.
Spending your life being put down by those closest to you, your mother, your former husband, supposed friends, makes it difficult to see the good in yourself. I’ve never had very high self esteem. I would even venture to say that I truly don’t have much at all.
Through the years, some of the harsher things I’ve heard have stuck with me like glue, and I can’t seem to find the Goo-Gone. Sadly, I have a much harder time remembering the good things I’ve heard, though I know they are there. My current husband tells me I’m beautiful and smart. I tell him he’s delusional.
I don’t take compliments well at all.
It’s not that I don’t like to hear them; I really do. I just don’t respond well, because I always wonder why that person would say them. I’ve thought so little of myself for so long, it’s difficult for me to believe that someone else would think anything different.
Today, a fellow writer, Leslie Wibberley, posted an essay about what your future self would say to you, given the chance.
“Because saying all those horrible things about myself means that someone else doesn’t have to. And if I’m the one saying them, it doesn’t hurt as much.”
This hit home, hard. When I allow myself to think about it, this is exactly why I do it, too. I’m horrible for calling myself fat, unattractive, a bad wife, bad mother, bad friend. Deep down, I know that I’m at least partially wrong, but I feel this must be what others see when they look at me, so I say it, so they don’t need to.
It hurts less. But at the same time, more.
I would love to be the woman who is confident in herself, who knows she’s attractive in her own way, intelligent, worthy. Not cocky, but carries herself in a way that says, “I’m a bad-ass and I know it.”
I can pretend to be that woman. I do it quite often actually. But when it really comes down to it, that’s not who I really am, just who I aspire to be.
When we’ve gone through trauma and abuse, it seems it’s harder to accept ourselves. Sprinkle in mental health issues, and you may find yourself at full-blown negative self esteem status. I know I have. I still struggle every single day.
But it does get better, even just a little. And that’s better than nothing. The biggest change you can make is the conversations you have with yourself. | https://ccuthbertauthor.medium.com/we-need-to-be-kinder-to-ourselves-1c3b1ebdab70 | ['Chloe Cuthbert'] | 2019-10-09 15:49:36.642000+00:00 | ['Life Lessons', 'Women', 'Self Improvement', 'Mental Health', 'Self'] |
Search and Navigate Faster With Chrome Custom Search Engines | Photo by Markus Winkler on Unsplash
I recently found a neat feature within Google Chrome I cannot live without anymore, called Custom Search Engines. With Custom Search Engines, you can search any site using a simple keyword and the TAB-key, or navigate to paths at a particular website. Okay, any site is perhaps a bit exaggerated…
I already used the Search Engine feature a lot, for example with YouTube, and you may already be using this feature without even knowing it.
If you type youtube in the Google Chrome address bar, you can hit the TAB-button and search for a YouTube video.
YouTube search engine within the Chrome address bar
The thing I was not aware of is that you can add your own websites to these search engines and trigger them with a specific keyword. This can come in very handy, and I have set it up to search for text in our Confluence wiki and navigate to specific AWS resources in the AWS console. I will be discussing one case in this article, but of course you can apply it to any other website.
Searching Wiki Content
We are using Confluence for documentation. I noticed that it takes very long before you get the results of a search. You have to navigate to the Confluence page, wait for the page to load, click the search bar, enter and execute the search query, and then you finally see the results. Since our Confluence page is not public, I will be using wikipedia.org in this example, but you can use the same principle.
Head Over to wikipedia.org
Navigate to wikipedia.org; you will probably see somewhat the same landing page as below.
Landing page wikipedia.org
Execute a Search Query and Look at the URL
Search Engines work by replacing a particular part of the URL with the search term you enter, so you have to look at which part of the URL contains the search query after actually searching. In the example below you can see I searched for ‘software’.
Entered software as search term
The page I was redirected to after searching for ‘software’
As you can see above, the search term is actually part of the URL (en.wikipedia.org/wiki/Software), so this website is eligible to be set as Custom Search Engine in Google Chrome (please read the ‘Last Notes’).
Add a Custom Search Engine
Right click the Google Chrome Address bar, then click on ‘Edit Search Engines’ or head over to chrome://settings/searchEngines .
Right click the Google Chrome Address bar
The ‘Manage search engines’ screen in your Chrome settings
You will then enter the ‘Manage search engine’ screen in the Chrome settings. You can see some default search engines already set, and with the ‘Add’ button you can add your own search engine. Let us add wikipedia.org as our custom search engine, using the information we have gathered in the above steps. Continue by clicking the ‘Add’ button.
Provide information of the search engine in this popup
The popup above will be shown after clicking the ‘Add’ button. The three fields ‘Search engine’, ‘Keyword’ and ‘URL with…’ are all you have to fill in to get this neat feature to work.
‘Search engine’ can be any description; ‘Keyword’ is the word you have to type before hitting the TAB-key to trigger the custom search engine; and in the ‘URL with…’ field you have to enter the URL of the website you want to search in, replacing the search query with %s .
Let us take a look at the URL we have been redirected to after executing the ‘Software’ search query on wikipedia, which is https://en.wikipedia.org/wiki/Software . Software is the search query, so regarding the field description, we end up with the following URL: https://en.wikipedia.org/wiki/%s , we have to fill this in in the ‘URL with…’ field. I have filled in ‘Wikipedia.org’ as ‘Search engine’ and ‘wiki’ as ‘keyword’. You can then add the Custom Search Engine and you will then be able to use this keyword to trigger it.
Filled in the ‘Add search engine’ popup for wikipedia.org
Go to the address bar and start typing ‘wiki’, which is the keyword we have set for our Custom Search Engine.
You will already get a suggestion to hit the TAB-key to search in Wikipedia.org. You may have noticed that this is the value set in the ‘Search engine’ field. Now hit the TAB-key and type something what you want to search.
Search for software within wikipedia.org
After typing your search query and hitting the ENTER-key, you will be redirected to the, in my case, software page at wikipedia.org.
Software page at wikipedia.org
That is about it! This principle can of course be used at any eligible site, meaning the search query needs to be in the URL.
This feature definitely makes my life much easier :-).
Last Note
The above is just an example. You will find out, after testing it with some search queries, that the example above is not really a search query. What it does is point you to a page within wikipedia.org. If that page does not exist, it will not give you a nice overview of suggestions like you would expect from a search functionality. The actual URL to make a search query on wikipedia.org is https://en.wikipedia.org/w/index.php?search=QUERYHERE , resulting in the following URL you have to fill in the ‘URL with…’ field in the ‘Add search engine’ popup: https://en.wikipedia.org/w/index.php?search=%s .
Questions, Suggestions or Feedback
If you have any questions, suggestions or feedback regarding this article, please let me know! | https://medium.com/the-innovation/search-and-navigate-faster-with-chrome-custom-search-engines-3e157f286a67 | ['Stephan Schrijver'] | 2020-09-13 11:20:26.865000+00:00 | ['Chrome', 'Shortcuts', 'Productivity', 'Search', 'Efficiency'] |
Unit Testing Best Practices | Photo by Science in HD on Unsplash
Unit tests are an important scaffold for large-scale software development; they enable us to design, write, and deploy production code with confidence by validating that software will behave as expected. Even though they may not execute in live systems, their development and maintenance require the same care as general production code. Sometimes developers do not realize this, which leads to test code with more code smells than production code. Engineers may not give enough attention to test code changes in the code review process. However, most of the time the test code reflects the health of the production code. If the test code has some code smells, this can be a sign that the production code can be improved. In this post, I’m going to mention some of the best practices to keep unit test code clean and maximize the benefits they provide.
The best practices for unit testing are debated topics in the industry. In practice, however, projects and teams should align on key concepts in order to foster code consistency and ease-of-maintenance. I’m going to mention the meaning of unit testing in the Object-Oriented design world, characteristics of a unit test, naming conventions for unit tests, and when we should or should not use mocking. We can have or find many different answers/approaches for these concepts, and the relevance of different trade-offs may vary depending on the situation.
Unit Testing
To define unit testing, we should first define the unit. Once the unit has been defined, then we can define unit testing as testing the behaviors of a unit. Let’s try it for Object-Oriented software development methodology. Classes are the main building block of software designed with the Object-Oriented design paradigm. Then we can think that the class concept is the unit of Object-Oriented software, and unit testing involves independent testing of behaviors of a class by the developer who implements these behaviors.
The relation between behaviors and methods of a class may not be 1:1. Sometimes a class can have more than one method to implement a behavior that is unit tested as a whole. Sometimes more than one class can be used to implement a behavior that’s unit tested. However, sometimes it can be a sign of a code smell, like temporal coupling, if you use more than one public method or class to implement a unit-tested behavior.
Characteristics of a Unit Test
When developing unit tests, some key considerations include: How fast should a unit test be? How often should a unit test be run? Which kind of object methods are valid or not valid for unit testing? How should we structure a unit test? What kind of assertions should we make in a unit test? Let’s think about the answers to these questions.
Some expected characteristics of unit tests include: fast execution times in order to provide immediate feedback about implementation correctness, readability in order to clearly express the behavior that’s tested, consistency and predictability of results through the use of deterministic evaluations, and robustness to structural changes (i.e., refactoring) in the implementation.
Speed
Developers expect unit tests to run quickly because they are generally executed frequently during the development process. We typically run unit tests whenever we make a change to our code in order to get immediate feedback about whether something is broken or not. Speed is a relative concept, but as Martin Fowler said in his article, “But the real point is that your test suites should run fast enough that you’re not discouraged from running them frequently enough”.
Having fast unit tests requires continuous care along the life cycle of our codebase, but we can also have some rules that help us to create fast unit tests when we name/tag a test as a unit test. Michael Feathers mentioned some rules of this kind in his article:
“A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can’t run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Tests that do these things aren’t bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.”
Behavioral Testing vs Structural Testing
In OOP, the structure of an object refers to the specific order and manner in which the production code implementing that object uses its dependent methods or classes. Since the structure of an object is related to the way that the production code is written, it can generally be considered an implementation detail. Structural testing involves testing these implementation details.
Let’s see an example of structural testing vs behavioral testing: we have an Order class whose orders can be cancelled; there are some rules that check whether an order is cancellable or not, and these rules are executed by an OrderSpecification.
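As a rough Kotlin sketch (the class names come from this example; the fields and the concrete rule are assumptions for illustration), the production code could look something like this:

// Hypothetical sketch of the production code under test.
enum class OrderStatus { CREATED, SHIPPED, CANCELLED }

class Order(var status: OrderStatus = OrderStatus.CREATED) {

    // Cancelling is only allowed when the specification says so.
    fun cancel(orderSpecification: OrderSpecification) {
        if (orderSpecification.isCancellable(this)) {
            status = OrderStatus.CANCELLED
        }
    }
}

class OrderSpecification {
    // Example business rule: an order that has already shipped cannot be cancelled.
    fun isCancellable(order: Order): Boolean = order.status != OrderStatus.SHIPPED
}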
And we have a test class to test this cancellation scenario:
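A test written in the structural style might look something like the following sketch (JUnit 5 and MockK here; the details are illustrative, not the original code):

import io.mockk.every
import io.mockk.mockk
import io.mockk.verify
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

class OrderCancelStructuralTest {

    @Test
    fun `Should Cancel Order When Order Is Cancellable`() {
        // Arrange: a mocked specification that always allows cancellation
        val orderSpecification = mockk<OrderSpecification>()
        every { orderSpecification.isCancellable(any()) } returns true
        val order = Order()

        // Act
        order.cancel(orderSpecification)

        // Assert: the behavior we care about...
        assertEquals(OrderStatus.CANCELLED, order.status)
        // ...plus a check on how the code got there (the structural part)
        verify(exactly = 1) { orderSpecification.isCancellable(any()) }
    }
}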
In the above test class we both check that the order is cancelled and that the OrderSpecification method is called exactly once. However, the specific way in which OrderSpecification is used is an internal implementation detail of the order cancel code, so the above test would be considered an example of structural testing.
Let’s see the behavioral test code for this scenario:
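Again as a sketch, assuming the same Order and OrderSpecification as above, the behavioral version only asserts the observable outcome and uses the real specification instead of a mock:

class OrderCancelBehavioralTest {

    @Test
    fun `Should Cancel Order When Order Is Cancellable`() {
        // Arrange
        val order = Order()

        // Act
        order.cancel(OrderSpecification())

        // Assert: only the observable behavior
        assertEquals(OrderStatus.CANCELLED, order.status)
    }
}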
In the above test class we only care about the behavior of order cancellation, not its internal implementation details.
We expect two things from production code: one is “doing the right thing”, the other one is “doing the thing right”. Unit tests should focus on the former, i.e., the behavior produced by the production code, which is one level of abstraction above the implementation details. So, as Kent Beck says in his article, “Programmer tests should be sensitive to behavior changes and insensitive to structure changes. If the program’s behavior is stable from an observer’s perspective, no tests should change.”
Why? When we think of the benefit, cost, and maintenance dimensions of unit testing, it’s not hard to see that structure-sensitive tests create more friction rather than provide safety. Agile development teams change the structure of code continuously as they refactor, and fixing many brittle tests that are not related to any behavior after refactoring is a very tiring and discouraging process.
For example, let’s say we change the method signature for the isCancellable method within the OrderSpecification class from using the Order class as an argument to using the OrderStatus class:
In such a situation, the expected behavior of our code has not really changed, but the following test will start to fail because our verification depends on the method’s signature:
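A sketch of the now-broken structural test (same assumptions as before):

class OrderCancelStructuralTest {

    @Test
    fun `Should Cancel Order When Order Is Cancellable`() {
        val orderSpecification = mockk<OrderSpecification>()
        // The stubbing and verification below are tied to the old
        // isCancellable(Order) signature, so this test breaks after the
        // refactoring even though the cancellation behavior is unchanged.
        every { orderSpecification.isCancellable(any<Order>()) } returns true
        val order = Order()

        order.cancel(orderSpecification)

        assertEquals(OrderStatus.CANCELLED, order.status)
        verify(exactly = 1) { orderSpecification.isCancellable(any<Order>()) }
    }
}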
Unfortunately, mocking libraries make this kind of structural testing very easy to write, so we should use their structure-verification functions with caution. Of course, there can be some exceptional cases where we have to rely on some structural testing instead of behavioral testing to achieve some level of confidence in our system. For example, if a real implementation is too slow to use, or too complex to build, then we may use structural testing, such as verifying the invocation of some functions with mocking. Another case can be related to the order of function calls, like checking cache hits/misses: in some cases a cache miss may have financial costs (say, we call a paid API on a cache miss), and we may use structural testing to verify whether the cache methods are called or not. But these should be exceptional, not our default choice.
Should we write unit tests for all classes?
No, because classes have different kinds of behaviors. Some classes have behaviors directly for business logic related to requirements of our domain, while other classes have behaviors that are related to application/system-level requirements, like transaction, security, observability, etc.
We separate classes that have different kinds of behaviors using stereotypes, i.e. different categories of responsibilities. We use Domain-Driven Design (DDD) concepts in some of our projects, and DDD tactical design has some stereotypes for classes, such as aggregate root, entity, value objects, domain service, application service, repository, etc.
Let’s examine the application service case; application services are like gateways to our domain model from the outside world.
Application services handle application-level requirements (e.g., security, transactions, etc.) while routing requests from the outside world (which can be anything that’s not directly related to our domain model, like the web layer, RPC layer, a storage access layer, etc.) to our domain model. There is no business logic in application services, and their code mostly consists of direct calls to our domain model. If we try to write unit tests for these classes, there is nothing to verify from a behavior perspective; we can only verify interactions between them and the domain model. But we mentioned this is structural testing, and we don’t prefer these kinds of tests generally. So, we don’t prefer writing unit tests for DDD application services.
Then, how can we test these application services? There are different kinds of testing styles other than unit testing, and we think that integration tests that use these application services in the test flow are better alternatives for the application services of DDD.
Structure of a Unit Test
Generally, a unit test has three parts: setting the pre-condition, taking action on the object, and making the verification. These are the Arrange/Act/Assert parts (or alternatively, Given/When/Then as used in Behavior Driven Development (BDD) testing). Applying this kind of structural style to our unit tests increases their readability.
Sometimes we can ignore the `Arrange` part if we don’t need to set anything before the `Act` part, but we should always have the `Act` and `Assert` parts when writing a unit test.
We can see an example of these Arrange/Act/Assert parts below:
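Reusing the hypothetical Order sketch from earlier, a unit test with the three parts spelled out could look like this:

class OrderCancelTest {

    @Test
    fun `Should Not Cancel Order When Order Is Already Shipped`() {
        // Arrange (Given): an order that has already shipped
        val order = Order(status = OrderStatus.SHIPPED)

        // Act (When): we try to cancel it
        order.cancel(OrderSpecification())

        // Assert (Then): the status is unchanged
        assertEquals(OrderStatus.SHIPPED, order.status)
    }
}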
Using a Naming Convention
The name of a unit test is important because it directly affects code readability. Unit tests should be readable because we should easily understand what is broken in our system when a unit test fails. We should also understand the behavior of our system when reading unit tests because people come and go in a project.
Some programming languages allow us to use plain language in method names; for example, with Kotlin we can write the test method below:
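For instance, something like this (a sketch; any sentence-like name between backticks works):

@Test
fun `Should Cancel Order When Order Is Cancellable`() {
    // test body goes here
}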
Some testing frameworks, like JUnit, provide an annotation (@DisplayName) for this purpose if we can’t use the method names.
There are different naming conventions to name unit tests. Teams can align on a standard naming convention that members find most readable; alternatively, other teams may allow team members to use the most appropriate names for their tests instead of using a standard naming convention. In our last Kotlin project, we used the convention “Should ExpectedBehavior When StateUnderTest”.
Mocking
We use mock objects to replace real implementations that our production code depends on in a test, with the help of libraries like Mockito, MockK, Python unittest.mock, etc. Using mock objects makes it easy to write more focused and cheaper test code when our production code has a non-deterministic outside dependency.
For example, we mock a repository class that finds orders of a customer by status in the test code below:
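A sketch of what that could look like; the OrderRepository and OrderQueryService names here are assumptions invented for illustration, not types from the original example:

// Hypothetical repository and service, defined only so the sketch is self-contained.
interface OrderRepository {
    fun findByCustomerIdAndStatus(customerId: String, status: OrderStatus): List<Order>
}

class OrderQueryService(private val orderRepository: OrderRepository) {
    fun cancelledOrdersOf(customerId: String): List<Order> =
        orderRepository.findByCustomerIdAndStatus(customerId, OrderStatus.CANCELLED)
}

class OrderQueryServiceTest {

    @Test
    fun `Should Return Cancelled Orders When Customer Has Cancelled Orders`() {
        // Arrange: mock the repository (the real one talks to a database)
        val orderRepository = mockk<OrderRepository>()
        val cancelledOrder = Order(status = OrderStatus.CANCELLED)
        every {
            orderRepository.findByCustomerIdAndStatus("customer-1", OrderStatus.CANCELLED)
        } returns listOf(cancelledOrder)
        val service = OrderQueryService(orderRepository)

        // Act
        val orders = service.cancelledOrdersOf("customer-1")

        // Assert
        assertEquals(listOf(cancelledOrder), orders)
    }
}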
Using mocks is not a silver bullet, and overusing mocking can cause some problems. For example, when using mocking, writing the stub code needed to program the mock’s behavior can expose implementation details or the structure of the underlying system being tested. As we mentioned before, this makes our tests brittle when the structure is changed. Test code with mocking is harder to understand when compared to test code without mocking, because of the additional code required. Mocking can also cause false-positive tests, because the behavior of real implementations can change while our mock implementations may be out of date.
Mocking can be an appropriate choice for dependencies involving external systems. For example, mocking a repository class that communicates with a database, mocking a service class that calls another service/application over the network, and mocking a service class that writes/reads some files to/from disk, can make sense. If we can use a real implementation then we should use it instead of a mock.
If we can’t use a real implementation, mocking is not our only option. We can also use fake objects, these are much simpler and lighter weight, only test-purpose implementations of the functionality provided by the production code. For example, implementing a test scenario that has complex conditions on its given part can be simpler with using fake objects instead of mock objects.
Conclusion
Unit tests should be considered first-class citizens when writing production code in order to maximize their benefits. We should let our unit tests drive our production code’s design and readability by applying some of the best practices that we mentioned:
Align on the meaning of unit testing concepts, at least within the team/project.
Keep your unit tests fast.
Make behavioral tests instead of structural tests.
Decide to write unit tests for a class according to responsibilities of the class.
Align on the structure of a unit test.
Align on a naming convention for unit tests, or allow free naming depending on the code review process.
Use mocks with caution, and don’t use them for structural testing by default.
Acknowledgments
Thanks to my colleagues who reviewed this post and provided invaluable feedback. | https://medium.com/udemy-engineering/unit-testing-best-practices-f877799f6dfd | ['Mucahit Kurt'] | 2020-07-16 14:23:36.392000+00:00 | ['Software Engineering', 'Unit Testing', 'Object Oriented Software'] |
Nothing On The Net Is Neutral | If Bitcoin is the number one topic in tech and the economy this week, then net neutrality is running a very close second. The FCC’s vote this week to repeal Obama-era neutrality regulations brought a wave of protest and punditry through the web, and close readers will know that my, and NewCo Shift’s point of view on the debate aligns more with Walt Mossberg, and less with the Chairman. But I believe in rational discourse and robust debate, and to that end, I want to take a few moments to lay out the Republican point of view.
Here’s Pai’s statement outlining his defense of the repeal. In short, Pai argues that we need to move back to the “light touch” approach that the government adopted for most of the Internet’s short life. Absent government oversight, he argues, the Web developed into a fantastic organism that has benefitted all. Competition drove innovation, and that framework ought to be preserved. The doomsayers on the left will eventually be proven wrong — the market will win. Here’s a similar argument, via a NYT OpEd.
What strikes me as interesting about all this is now that net neutrality is no longer government policy, we’re going to get a true test of our much-vaunted free market. Will competition truly blossom? Will, for example, new ISPs spring up that offer “net neutrality as a service” — in opposition to the Comcasts and Verizons of the world, who likely will offer tiered bundles of services favoring their business partners? I have to admit, I find such a scenario unlikely, but to me, the silver lining is that we get to find out. And in the end, perhaps that is the only way that we can truly know whether preserving neutrality is a public good worthy of enshrinement in federal law.
Of course, net neutrality today is utterly conflated with the fact that Google and Facebook have become the two most powerful companies on the Web, and have their own agendas to look after. It’s interesting how muted their support was for neutrality this time around. As this Washington Monthly column points out, antitrust (which I wrote about here) is now a “central plank” in the Democrats’ agenda moving forward. The next few years are going to be nothing but fascinating, that much is certain. We’ll be watching, closely.
More key stories from around the web:
Mike Bloomberg should have run. Enough said. MQ: “Corporations are sitting on a record amount of cash reserves: nearly $2.3 trillion. That figure has been climbing steadily since the recession ended in 2009, and it’s now double what it was in 2001. The reason CEOs aren’t investing more of their liquid assets has little to do with the tax rate.”
Wow. Just…wow. We are callous to what our economy is doing to humans. MQ: To think of The Ghosted is to think of injustice, a cataloging of fist-fights, tuberculosis, detention centers, scabies, crabs, lice, roaches, hot plates, Section 8 housing, laborers hiding under blankets in the backs of trucks, children lying stiff against the tops of trains, assembly lines in windowless heat-filled rooms — a type of economic violence many consumers try to close their minds to. We do not want to think of them because of what it says about us.”
This has set off a frenzy in the Valley. It’s very, very complicated and I think Hunter Walk has some enlightening things to say about the same topic:
I write about these topics pretty frequently, and feel compelled to write about it now, but honestly, there’s only so much time in the day and today’s focus is/was net neutrality. But stay tuned, so much more to say on this.
And while we’re on the topic of Valley elites coming to grips with their own power….I’ll also be writing about this in the days to come. Not in “the years to come,” which is apparently the preferred timeline at FB HQ. MQ: “We don’t have all the answers, but given the prominent role social media now plays in many people’s lives, we want to help elevate the conversation. In the years ahead we’ll be doing more to dig into these questions, share our findings and improve our products. At the end of the day, we’re committed to bringing people together and supporting well-being through meaningful interactions on Facebook.” | https://medium.com/newco/nothing-on-the-net-is-neutral-b58ce12617e7 | ['John Battelle'] | 2017-12-15 23:07:00.017000+00:00 | ['Politics', 'Economics', 'Startup', 'Tech', 'Net Neutrality'] |
What is correlation? | What is correlation?
Not causation.
Experiments allow you to talk about cause and effect. Without them, all you have is correlation. What is correlation?
IT’S NOT CAUSATION. (!!!!!)
Sure, you’ve probably already heard us statisticians yelling that at you. But what is correlation? It’s when the variables in a dataset look like they’re moving together in some way.
Two variables X and Y are correlated if they seem to be moving together in some way.
For example, “when X is higher, Y tends to be higher” (this is called positive correlation) or “when X is higher, Y tends to be lower” (this is called negative correlation).
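In symbols, the (population) Pearson correlation is just the covariance of X and Y scaled by their standard deviations:

\rho_{X,Y} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \, \sigma_Y} = \frac{\mathbb{E}[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \, \sigma_Y}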
Thanks, Wikipedia.
If you’re looking for the formula for (population) correlation, your friend Wikipedia has everything you need. But if you wanted that, why didn’t you go there straight away? Why are you here? Ah, you want the intuitive explanation? Cool. Here’s a hill:
On the left, height and (left-to-right) distance are positively correlated. When one goes up, so does the other. On the right, height and distance are negatively correlated.
When most people hear the word correlation, they tend to think of perfect linear correlation: taking a horizontal step (X) to the right on the hill above gets you the same change in altitude (Y) everywhere on the same slope. As long as you’re going up from left to right (positive correlation), there are no surprise jagged/curved bits.
Bear in mind that going up is positive only if you’re hiking left-to-right, same way as you read English. If you approach hills from the right, statisticians won’t know what to do with you. I suppose what statisticians are trying to tell you is never to approach a hike from the right. That will only confuse us.
But if you hike properly, then “up” is “positive.”
Imperfect linear correlation
In reality, this hill is not perfect, so the correlation magnitude between height and distance will be less than 100%. (You’ll pop a +/- sign in front depending on whether we’re going up or down, so correlation lives between -1 and 1. That’s because its formula (pasted from Wikipedia above) divides by standard deviation, thereby removing the magnitude of each variable’s dispersion. Without that denominator, you’d struggle to see that the strength of the relationship is the same regardless of whether you measure height in inches or centimetres. Whenever you see scaling/normalization in statistics, it’s usually there to help you compare apples and oranges that were measured in different units.)
Uncorrelated variables
What does a correlation of zero look like? Are you thinking of a messy cloud with no discernible patterns inside? Something like:
Sure, that works. You know how I know X and Y truly have nothing to do with one another? Because I created them that way. If you want to simulate a similar plot of two uncorrelated variables, try running this basic code snippet in R online:
X <- runif(100) # 100 regular random numbers between 0 and 1
Y <- rnorm(100) # Another 100 random numbers from bell curve
plot(X, Y, main = "X and Y have nothing to do with one another")
But there’s another way. The less linear the relationship, the closer your correlation is to zero. In fact, if you look at the hill as a whole (not just one of its slopes at a time), you’ll find a zero correlation even though there’s a clear relationship between height and distance (duh, it’s a hill).
X <- seq(-1, 1, 0.01) # Go from -1 to 1 in increments of 0.01
Y <- -X^2 # Secret formula for the ideal hill
plot(X, Y, main = "The linear correlation is zero")
print(cor(X, Y)) # Check the correlation is zero
Correlation is not causation
The presence of a linear correlation means that data move together in a somewhat linear fashion. It does not mean that X causes Y (or the other way around). They might both be moving due to something else entirely.
Want proof of this? Imagine you and I invested in the same stock. Let’s call it ZOOM, because I find it hilarious that pandemic investors intended to buy ZM (the video communications company) but accidentally bought ZOOM (the Chinese micro-cap) instead, leading to a 900% increase in the price of the wrong Zoom, while the real ZM didn’t even double. *wipes away laugh-tears* Anyways — in honor of that comedy — imagine that you and I invested a small amount in ZOOM.
Since we’re both holding ZOOM, the value of your stock portfolio ($X) is correlated with my stock portfolio value ($Y). If ZOOM goes up, we both profit. That does not mean that my portfolio’s value causes your portfolio’s value. I cannot dump all my stock in a way that punishes you — if my portfolio value suddenly becomes zero because I sell everything to buy a pile of cupcakes, that doesn’t mean that yours is now worthless.
Many decision-makers fall flat on their faces for precisely this reason. Seeing two correlated variables, they invest resources in affecting thing 1 to try to move thing 2… and the results are not what they expect. Without an experiment, they had no business assuming that thing 1 drives thing 2 in the first place.
Correlation is not causation.
The lovely term “spurious correlation” refers to the situation where where there’s no direct causal relationship between two correlated variables. Their correlation might be due to coincidence or due to the effect of a third (usually unseen, a.k.a. “latent”) variable that influences both. Never take correlation at face value — in data, things often aren’t what they seem.
For fun with spurious correlations, check out the website this prime example hails from.
To summarize, if you want to talk about causes and effects, you need a (real!) experiment. Without experiments, all you have is correlation and for many decisions — the ones based on causal reasoning — that is not helpful.
P.S. What is regression?
It’s putting lines through stuff. Think of it as, “Oh, hey! These things are correlated, so let’s use one to predict the other…” | https://towardsdatascience.com/what-is-correlation-975ea899aaed | ['Cassie Kozyrkov'] | 2020-07-13 11:57:05.033000+00:00 | ['Towards Data Science', 'Statistics', 'Artificial Intelligence', 'Data Science', 'Technology'] |
Show authors more ❤️ with 👏’s | Show authors more ❤️ with 👏’s
Introducing Claps, a new way to react on Medium
Remember that time you saw a really amazing live show? You couldn’t help but jump out of your seat and clap so hard your hands felt raw afterwards. Or when you heard a great lecture or stirring speech, and felt connected to the people around you by joining in with their applause? Now, remember when you last read a story that turned your thinking upside down, offering a new look at a topic that you’d never considered before. Was tapping a heart icon one time enough? Was it satisfying?
Today we’re hoping to change that. Rolling out to Medium users over the coming week will be a new, more satisfying way for readers to give feedback to writers. We call it “Claps.” It’s no longer simply whether you like, or don’t like, something. Now you can give variable levels of applause to a story. Maybe clap once, or maybe 10 or 20 times. You’re in control and can clap to your heart’s desire.
So why are we making this change?
Since day one, Medium has had a goal of measuring value. The problem, as we saw it, with much of the media/web ecosystem is that the things that are measured and optimized for were not necessarily the things that reflected true value to people. For example, a pageview is a pageview, whether it’s a 3-second bounce (clickbait) or a 5-minute, informative story you read to the end. As a result, we got a lot more of the former.
On Medium, we’ve tried to provide more meaningful metrics. We display to our authors not only views, but reads (i.e., how many people got to the bottom of a post). We calculate time spent on posts and display that for publication owners. And we use all of this in our systems that determine which posts to distribute to more people. The goal is always to be able to suss out the great from the merely popular.
So what’s wrong with Recommends?
The Recommend — our version of a Like or upvote or fav — has been our explicit feedback signal since almost day one. Explicit feedback is the most valuable signal, both for authors and the Medium system. But a simple, binary vote has its limitations. It shows you how many people thought something was good, not how good was it?
Earlier this year, we released Series and decided to do something different with the feedback mechanism. Instead of a binary input, we had an applause button, which you could press as many times as you want, and the count just kept going up. At first, we thought this was just fun. But then we realized it could be meaningful.
Just like in real life, we found ourselves applauding more the more we appreciated a Series.
Hm, we thought: What if we could capture this level of sentiment for posts? Authors would get much more meaningful data about what readers really valued. And as a reader, it would be more satisfying than simply ❤️-ing a nicely filtered photo of avocado toast.
We know this will take some getting used to, and we don’t take this change lightly for those who’ve been on Medium for a long time and given — or received — thousands of little green hearts. (We’re right there with you.) But we hope, once you get clapping, you’ll see how natural and more expressive it is.
To summarize: Just click the 👏 instead of the ❤️. If you feel strongly, click it more (or just hold down). The more you clap, the more positive feedback you’re providing to the author, and the more you’re letting us know the story is worth reading. (Only the author can see how many claps you gave them.) Our system will evaluate your claps on an individual basis, assessing your evaluation of a story relative to the number of claps you typically send. All this will help the stories that matter most rise to the top.
Again, this system will be rolling out in the next few days across Medium surfaces. If you don’t see it yet, you will soon. We’ll tweak and adjust based on what we learn, so please give us your feedback. | https://blog.medium.com/show-authors-more-️-with-s-c1652279ba01 | ['Katie Zhu'] | 2018-04-03 20:06:16.973000+00:00 | ['Medium', 'Recommendations', 'Product', 'Design'] |
How Emotions shape Team Culture | “Culture eats strategy for breakfast” said Peter Drucker. This is one of my favorite organization management quotes and this quote cannot be stressed enough. Our team’s culture thwarts or improves any strategy or process improvement we attempt to implement. But how do we improve this culture? Corporate culture is not just cognitive culture, but it is mainly the emotional culture. Have you ever yelled or been yelled at? Most of us have been in the place where we were not aware of our emotions or been a victim of someone who could not control his or her anger such as screamer or a table pounder. For instance, I came across a lady this morning when I dropped my son at school. She was shouting at the top of her voice making racist comment at a fellow parent when he did not follow the driving rules. So what makes a human lash out at fellow being without understanding what they are going through especially in front of their respective kids? When similar outbursts happen in the workplace, they may hijack our thought processes, limit innovation and the worst of all alter the culture of the team.
We underestimate the number of situations where emotions are involved:
We often think workplace is not the place for emotions or feelings. We assume there is going to be a professional upright setting all the time. Be it on road where we drive, or the workplace, wherever we have humans, there are emotions involved. Invalidating or ignoring the existence of emotions at these situations will drive the toxicity under the carpet. People will resort to passive aggressive behavior such as completely ignoring another person, refusing to answer any questions from the person, abruptly leaving the meeting, yelling, insulting, gossiping, stubbornness, refusing to do what they’re told to do etc.
Be aware of the feelings:
Not recognizing the emotional issues, will alter the impact of the feelings. When we don’t feel heard or articulate our feelings efficiently, the feelings get manifested in negative way.
Emotions and decision making:
We are emotional creatures composed of core emotions like Happiness, Sadness, Anger, Shame and Fear. Most likely when intensity of these emotions goes higher, then they will dictate your actions. As we are hardwired to emote first, we have no control over that process but we can control the thoughts that follow an emotion. Controlling those thoughts will determine how we react to an emotion. Those reactions will steer us towards better decision making. As our emotional skill matures, we will learn to practice productive ways of responding that will become habitual. Emotions and feelings impact our decisions. Being aware of the emotions helps the decision better.
Emotions and Team culture:
If we are leading a team, we should be ready to deal with complex emotions first. As a leader, building a right culture is utmost important. Strategy or any process improvements come next. Emotions drive culture, Culture drives innovation, productivity, efficiency and more importantly better life for employees.
Hence, we as individuals and leaders should aspire to spot our emotions, articulate them in constructive way and make the work a better place for us and for everyone around us. | https://medium.com/atom-platform/how-emotions-shape-team-culture-c50329bedd80 | ['Pandi Ganapathy'] | 2018-08-02 17:45:09.867000+00:00 | ['Leadership', 'Team Culture', 'Self-awareness', 'Organizational Culture', 'Decision Making'] |
Google’s RecSim is an Open Source Simulation Framework for Recommender Systems | Google’s RecSim is an Open Source Simulation Framework for Recommender Systems
The new framework enables the creation of simulation environments to study reinforcement learning algorithms in recommender systems.
I recently started a new newsletter focus on AI education. TheSequence is a no-BS( meaning no hype, no news etc) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:
Recommendation systems are all around us and they are getting more sophisticated by the minute. While traditional recommender systems were focused on one-time recommendations based on user actions, new models effectively engage in sequential interactions to try to find the best recommendation based on the user behavior and preferences. This type of recommendation systems are known as collaborative interactive recommenders(CIRs) and have been triggered by advancements in areas such as natural language processing(NLP) and deep learning in general. However, building this systems remains a challenge. Recently, Google open sourced RecSim, a platform for creating simulation environment for CIRs.
Despite the popularity and obvious value proposition of CIRs, its implementation have remained limited. This is part due to difficulty of simulating different user interaction scenarios. Traditional supervised learning approaches result very limited when comes to CIRs given that is hard to find datasets that accurately reflect user interaction dynamics. Reinforcement learning has evolved as the de facto standard for implementing CIR systems given the dynamic and sequential nature of the learning process. Just like CIR systems are based on a sequence of user actions, reinforcement learning agents learn by taking actions and experiencing rewards across a sequence of situations in a given environment. While reinforcement learning systems are conceptually ideal for the implementation of CIRs, there are very notable implementation challenges.
· Generalization Across Users: Most RL research focuses on models and algorithms involving a single environment. The ability to generalize knowledge across different is essential for an effective CIR agent.
· Combinatorial Action Spaces: Most CIR systems require to explore combinatorial variations of recommendations and user actions which are hard to capture in simulation models.
· Large, Stochastic Action Space: Many CIR environments deal with a set of recommendable items that is dynamically and stochastically generated. Think about video recommendation engine may operate over a pool of videos that are undergoing constant flux by the minute. Reinforcement learning systems are typically challenged in those non-fixed environments.
· Long Horizons: Many CIR systems need to operate over long horizons in order to experience any significant change in user’s preferences. This is another challenging aspect for simulation models.
Most of these challenges boiled it is very difficult to effectively simulate combinations of user actions in a way that can be quantified and used to improve the agent’s learning policy.
Enter RecSim
RecSim is a configurable platform for authoring simulation environments to allow both researchers and practitioners to challenge and extend existing RL methods in synthetic recommender settings. Instead of trying to create a generic, perfect simulator, RecSim focuses on simulations that mirror specific aspects of user behavior found in real systems to serve as a controlled environment for developing, evaluating and comparing recommender models and algorithms.
Conceptually, RecSim simulates a recommender agent’s interaction with an environment consisting of a user model, a document model and a user choice model. The agent interacts with the environment by recommending sets or lists of documents (known as slates) to users, and has access to observable features of simulated individual users and documents to make recommendations.
Diving into more details, the RecSim environment consists of a user model, a document model and a user-choice model. The recommender agent interacts with the environment by recommending slates of documents to a user. The agent uses observable user and a candidate document features to make its recommendations.
The document model also samples items from a prior over document features, including latent features such as document quality; and observable features such as topic, or global statistics like ratings or popularity. Agents and users can be configured to observe different document features, so developers have the flexibility to capture different RS operating regimes. The user model samples users from a prior over configurable user features, including latent features such as personality, satisfaction, interests; observable features such as demographics; and behavioral features such as session length, visit frequency and budget.
When the agent recommends a document to a user, the response is determined by a user-choice model, which can access observable document features and all user features. Other aspects of a user’s response can depend on latent document features, such as document topic or quality. Once a document is consumed, the user state undergoes a transition through a configurable user transition model, since user satisfaction or interests might change.
Another important component of the RecSim architecture is the similar who is responsible for controlling the interactions between the agents and the environment. The interactions are based on six fundamental steps.
1. The simulator requests the user state from the user model, both the observable and latent user features.
2. The simulator sends the candidate documents and the observable portion of the user state to the agent.
3. The agent uses its current policy to returns a slate to the simulator to be “presented” to the user.
4. The simulator forwards the recommended slate of documents and the full user state (observable and latent) to the user choice model.
5. Using the specified choice and response functions, the user choice model generates a (possibly stochastic) user choice/response to the recommended slate, which is returned to the simulator.
6. The simulator then sends the user choice and response to both: the user model so it can update the user state using the transition model; and the agent so it can update its policy given the user response to the recommended slate.
RecSim provides a very unique approach to streamline the testing and validation of CIR systems based on deep learning. The code has been open sourced on GitHub and the release was accompanied by this research paper. Certainly, it’s going to be interesting to see the types of simulations researchers and data scientists build on top of RecSim. | https://medium.com/dataseries/googles-recsim-is-an-open-source-simulation-framework-for-recommender-systems-9a802377acc2 | ['Jesus Rodriguez'] | 2020-12-15 11:34:13.294000+00:00 | ['Machine Learning', 'Deep Learning', 'Data Science', 'Artificial Intelligence', 'Thesequence'] |
What is Data Exfiltration? | If we talk in terms of our general life, Exfiltrate means to surreptitiously move personnel or material out an area under enemy control. In terms of Computer science, Data Exfiltration is the unauthorized removal of data from a network e.g. leakage of Archives, Passwords, Additional Malware and Utilities, Personally identifiable information, financial data, trade secrets, source code, intellectual property, etc. For a hacker, it is easy to move things in a box. E.g. RAR file, ZIP file, CAB file, etc. Data Exfiltration via outbound FTP, HTTPS is most common these days.
Read More | https://medium.com/data-analytics-and-ai/what-is-data-exfiltration-b255101e9d84 | ['Ella William'] | 2019-06-07 11:23:11.076000+00:00 | ['Cybersecurity', 'Data Science', 'Big Data', 'IoT', 'Analytics'] |
How to use the Style Transfer API in React Native with Fritz | Fritz is a platform that’s intended to make it easy for developers to power their mobile apps with machine learning features. Currently, it has an SDK for both Android and iOS. The SDK contains ready-to-use APIs for the following features:
Today, we’ll explore how to use the Style Transfer API in React Native.
I was only able to develop and test in Android (no Macs here!) and got a working application.
The Style Transfer API styles images or video according to real art masterpieces. There are 11 pre-trained artwork styles, including Van Gogh’s Starry Night and Munch’s Scream, among others.
The app we’ll be developing allows the user to take a picture and convert it into a styled image. It will also allow the user to pick the artwork style they wish to use on the image.
The app will contain a Home page, where the user can pick the art style. It will also include a separate Camera View, where the user captures the image.
Note: The following tutorial is for the Android platform only.
Prerequisites
React Native CLI: run npm i -g react-native-cli to globally install the CLI
Since there is no default React Native module for Fritz, we’ll need to write our own. Writing a native module means writing real native code to use on one or both platforms.
Step 1 — Creating the RN app and install modules
To create the app, run the following command in the terminal:
react-native init <appname>
Move into the root of the folder to begin configuration.
For navigation, we’ll be using React Navigation and React Native Camera for the Camera View.
To install both dependencies, run the following command in the terminal:
npm i --save react-navigation react-native-camera
Follow the instructions here to configure React Navigation for the app. We’ll need to install react-native-gesture-handler as well, as it’s a dependency of React Navigation.
Follow the instructions here to configure the React Native Camera for the app. We can stop at step 6, as for this example we will not be using text, face, or barcode recognition.
Step 2 — Including Fritz SDK in the app
First, we need to create a Fritz account and a new project.
From the Project overview, click on Add to Android to include the SDK for the Android platform. We’ll need to include an App Name and the Application ID. The Application ID can be found in android/app/build.gradle , inside the tag defaultConfig .
Upon registering the app, we need to add the following lines in android/build.gradle :
allprojects {
.....
repositories {
.....
maven { url "https://raw.github.com/fritzlabs/fritz-repository/master" } //add this line
}
}
Afterward, include the dependency in the android/app/build.gradle :
dependencies {
implementation 'ai.fritz:core:3.0.2'
}
We’ll need to update the AndroidManifest.xml file to give the app permission to use the Internet and register the Fritz service:
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
.....
<uses-permission android:name="android.permission.INTERNET" />
<application>
.....
<service
android:name="ai.fritz.core.FritzCustomModelService"
android:exported="true"
android:permission="android.permission.BIND_JOB_SERVICE" />
</application>
</manifest>
We then need to include the following method within the MainActivity.java :
import ai.fritz.core.Fritz;
import android.os.Bundle; //import these two as well public class MainActivity extends ReactActivity {
.....
@Override
protected void onCreate(Bundle savedInstanceState) {
// Initialize Fritz
Fritz.configure(this, "<api-key>");
}
}
Step 3 — Create the Native Module
Since the SDK only supports iOS and Android, we’ll need to make the native module. To get a better understanding of this, refer to the docs here:
To make an Android Native module, we’ll need to make two new files. They will be within the root package of the Android source folder.
FritzStyleModule : This contains the code that will return the styled image FritzStylePackage : This registers the module so that it can be used by the JavaScript side of the app.
FritzStyleModule
The React method being used has a success and error callback. The chosen artwork style and a base64 of the original image are sent to the method. The error callback is invoked when an Exception is thrown and returns the error. The success callback returns a base64 encoded string of the converted image. On a high-level, the above code does the following:
Initializes the style predictor with the user’s choice of artwork. Converts the original base64 image into a Bitmap . Creates a FritzVisionImage , which is the input of the style predictor. Converts the FritzVisionImage into a styled FritzVisionStyleResult , which is the converted image. Gets a Bitmap out of the FritzVisionStyleResult . Converts the Bitmap into a base64 to be sent back to the JavaScript side of the app.
FritzStylePackage
This class is used to register the package so it can be called in the JavaScript side of the app.
This class is also initialized in the getPackages() of MainApplication.java :
@Override
protected List<ReactPackage> getPackages() {
return Arrays.<ReactPackage>asList(
new MainReactPackage(),
......,
new FritzStylePackage() //Add this line and import it on top
);
}
Now on to the JavaScript side of the application.
Step 4 — Creating the UI
To do this, we’ll be creating/updating the following pages:
Home.js — Display the picker of artwork styles and the final result. CameraContainer.js — Display the camera view to capture an image. FritzModule.js — Export the above-created Native module to the JavaScript side. App.js — Root of the app which includes the navigation stack.
Home.js
This page contains:
Text to display the app description. Picker to allow the user to select the artwork style of the converted image. Button to redirect the user to the Camera page. It will pass the selected artwork style to the CameraContainer. If the navigation prop contains the original and converted image, it will be displayed.
The page currently looks like this;
Home page before taking a picture
CameraContainer.js
The CameraContainer page displays a full page CameraView. It includes a button to take the picture at the bottom of the page. Upon clicking it, a spinner will be displayed to convey to the user that an action is taking place.
The image is first captured using the react-native-camera method takePictureAsync() . The original image is then saved into the state of the page. The setState method is asynchronous and thus has a success callback that runs after the state is set.
The getNewImage method from the FritzModule is run within this success callback. The original image and the filter (artwork style) picked from the Home Page is passed to the method. On the error callback, an alert is displayed to the user to convey that an error has occurred. On the success callback, the new styled image is saved into the state. On this second setState methods’ success callback, the user is redirected to the Home page with both the original and styled images.
CameraContainer on emulator
FritzModule.js
import { NativeModules } from 'react-native'; export default NativeModules.FritzStyle;
This page exposes the Native module, FritzStyle . This allows the JavaScript side to make calls to the method getNewImage .
App.js
import React, { Component } from 'react';
import Home from './src/Home';
import CameraContainer from './src/CameraContainer';
import { createStackNavigator, createAppContainer } from 'react-navigation'; const AppNavigator = createStackNavigator({
Home: { screen: Home },
Camera: { screen: CameraContainer }
}); const AppContainer = createAppContainer(AppNavigator); export default class App extends Component { render() {
return (<AppContainer />);
}
}
First, we create the Stack navigator with the Home Page and Camera View. The key ‘Home’ is used when navigating to the Home Page, and the key ‘Camera’ when navigating to the CameraContainer.
The AppContainer becomes the root component of the App. It’s also the component that manages the app’s state.
Now to see the entire app in function;
To recap, we have;
Created a React Native app, Included the Fritz SDK in it, Created a Native Module that makes use of the Style Transfer API, and Designed a UI to display the styled image.
Find the code repo, here.
For native iOS or Android implementations of Fritz’s Style Transfer API, check out the following tutorials: | https://medium.com/free-code-camp/how-to-use-the-style-transfer-api-in-react-native-with-fritz-e90bc609fb17 | ['Sameeha Rahman'] | 2019-04-02 20:53:56.801000+00:00 | ['Machine Learning', 'Mobile App Development', 'Technology', 'React Native', 'Programming'] |
Pitch Deck | So I threw this pitch deck together a little while back, wanting to share as part of application for an accelerator program I’ll just go ahead and post online since no one reads these things. Oh yeah all inquiries please direct through contact portal on our website at automunge.com. Cheers. | https://medium.com/automunge/pitch-deck-7d9ab80b4ba1 | ['Nicholas Teague'] | 2019-11-20 21:32:59.636000+00:00 | ['Data Science', 'Entrepreneurship', 'Machine Learning'] |
Triggering the Dark Night of the Soul | I woke at 2.30 again this morning. It is now 3 am and I am struggling to get on top of the ritual thought trains that haunt me when I have been triggered. Currently my wrists are throbbing, asking me to slice them open and a narrative is forming in my head in the form of a simple prayer.
Dear God please let me go soon. Please send me a condition that ends it for me now so that I don’t have to give into my wrists and hurt those who love me, for whatever reason they find appropriate, since I deeply know how much they are mistaken and have not yet woken up to my basic unloveableness and unworthiness to exist.
I know this routine so well. It is so exhausting to go through it yet again — of being triggered into this routine by thoughtlessness, carelessness, emotional illiteracy and general human fallibility. I know I should be able to rise above it and see it for what it is — a stupid mistake. I want to shrug it off and say it doesn’t matter. But, for now I must go through this cycle one more time, yet again. Trying to find ways of reducing its power and unwrapping its stranglehold on my life.
It doesn’t matter what caused it. What matters is that it has crushed me again, sucking the joy out of me like a leak in my spacesuit of personal defence.
Who am I kidding? Since my PTSD complete breakdown I have no defences in place anymore. They were all maladaptive from a childhood of abuse and thus served me ill anyway. They all went so that I could be free of trauma. They left me wide open and raw, emotionally stripped back and experiencing life in its immediacy. Most of the time that is a joyful and wonderful connectedness to the exact present moment that I wouldn’t give up on for anything. It makes every moment into a ‘this is it’ moment, that zen moment of pure realisation of joy right here in this breath.
We can’t have everything. I have this wonderful gift of freedom from illusion.
Except when I get triggered!
Is it to remind me of where I came from, to keep me humbled and in place? Or perhaps to challenge me to release myself from this once more, to make sure I am not complacent in my newly found liberation? Perhaps a reminder of what life still feels like for most, still burdened by their defences against the injustices of this world, from which I am largely protected by privilege and having my wish granted of ‘just enough’ materially to live without fear in that quarter? Can one ever live without fear anyway? Is this just the form my fear must or will now take in life? Surely my greatest fear is that I will for some reason lose my beloved ones, my sons and grandson and most of all my soulmate? It seems strange that my greatest fear is that I must continue to live for now. That my prayers are to go now, to be done with this life, to let me finally say ‘I did my best and I am too tired to fight this anymore’.
What am I tired of? I have written about that elsewhere, alongside the joy I feel in my life too. They are both extreme ends of the spectrum. It seems I am not allowed to waddle comfortably somewhere in the middle of the joy/despair spectrum. Life after all is just a series of spectrums, rather like my ADHD and other spectrum disorders.
Life is a spectrum disorder.
HAha that has made me laugh at this idea and myself.
Perhaps this is the breaking through point for me with this occurrence of triggering. But will I be able to go back into the scenario which triggered it. I doubt it. What happens is that my body says ‘ok you’re safe here at home writing about this, but I wont let you go back in case they trigger you again’.
If I ignore it, try to pretend nothing happened, return to normal and carry on, it just triggers me again- which means emotional lockdown and physical rigidity to pain levels that are quite high, even though I am used to them. IF I fight against it, I pay with that lockdown and must medicate and wait for it to pass. Which means I am out of action for other things too. If I give into it my life shrinks a little more than it already has done. If I do what I am doing now and explore it, get it out into the open and say to it ‘is this really how you think I should live, be and feel?’ If I do this act of exposure often enough, will it eventually decide to agree with me and stop trying to control me?
‘Shine light on your trauma and it will dissolve.’
This is the summit of advice from all quarters, and it’s true, it does dissolve, slowly. But this last stage is taking forever and stands out as more painful in contrast to the joy I feel most of the time.
Stop wingeing perhaps, be glad for the joy I feel and accept this last level of burden of trauma from the past.
I consider Tonglen, a Tibetan meditative practice where I absorb the suffering of the world and breathe out that very joy in its place. I practice this against the injustices of the world, the petty cruelties of wealth and corruption and damaged souls being given power they do not deserve or know how to use wisely, only self servingly.
I ache for the raw suffering of others and the causal thoughtless cruelty that cause it.
I weep for a world that is destroying itself and cheer for those who would act to wake the rest of us up.
I do, daily.
I challenge that world and the sadness it sows in me for others. I challenge those damaged parental voices too. I do it through my writing. I do it in my meditation. I do it in my approach to life.
I work at being fearless, courageous, brave. Even just to go and talk to people, I am being all of those things, though they will never know that. It is easier for me to stand on stage and give a lecture or performance than it is for me to talk to people in a more intimate way, especially in public or in groups.
What I really want is to live in a bubble of safety with my soulmate and my sons, to have nothing more touch any of us. We have all struggled with those legacies. What I really want is an end to the terrors of my childhood being re-enacted again and again through my traumatised nervous system. What I want is for this to end! Either by ending life or by ending the triggering process.
I know by the end of writing this I will have become determined once again to get through it and live on. I know by the end of this I will have shifted the vice grip of this process, this routine my body deems it necessary to put me through once again. I know that the love I feel for my family and my soulmate husband will prove the stronger force in the end. I know that if I return to my bed my husband will wrap his sleepy arms around me and hold me until I can cry it out of me and let it go. I know that writing this and publishing it is my way of saying ‘hang in on there’ to myself and to others who may feel like this but also to say do not judge others, if you are not experiencing triggering like this do not judge others who may be, you cannot tell from the outside.
This too will pass, eventually!
But will I ever be able to go back again? To any of the long list of triggering events locations and situations? Who knows? Is it worth even trying? Perhaps I should just move on again instead, or is that running away still, is that why I ask for the end, to avoid that? Perhaps this is the turning point when I stand my ground and say ‘no I will not run and will not be triggered any more’? Can I do that, can any of us who have been deeply traumatised in our pasts actually fully achieve that, or have I got as far as it is possible to do so. Who knows? But I think I just gave myself the reason to keep going today this time, and to return to my bed and claim my cuddle. Thanks for listening. xxx | https://sylviaclare.medium.com/triggering-the-dark-night-of-the-soul-7eb39a446da0 | ['Sylvia Clare Msc. Psychol'] | 2019-06-20 12:20:39.762000+00:00 | ['Ptsd Awareness', 'Trauma Recovery', 'Mental Health', 'Love', 'Self'] |
8 Things That a Mobile App Can Do That Your Website Can’t | Modern day businesses often have internal debates about the importance of building different digital assets, specifically when it comes to mobile apps versus websites versus web apps.
A lot of people feel you don’t need a smartphone app, and that you just need a website that looks fine on mobile devices. Others claim mobile apps have advantages that cannot be offered by a website. Who’s right? In this article, we will explore the differences between these three types of software to identify where mobile apps set themselves apart from web-only products.
You might assume that web apps and mobile apps are the same in nature, but in reality, they are not. They aren’t just different in terms of their structure; they are also built for different classes of user. To start, let’s review the structural differences between progressive web applications and websites.
Progressive Web Apps Defined
A progressive web app (PWA) is essentially a version of a website that operates correctly, fluidly, and in a user-friendly way on mobile devices. Specifically, web apps work like downloadable apps, but all from the convenience of the browser of your mobile device computer. In this way, they fall between websites and mobile apps, as they act like websites, but provide an experience that is comparable to native apps.
Native Apps Defined
Native apps are apps created for a specific platform, such as iOS for the Apple iPhone or Android for any Android-based smartphone. They are usually downloaded and installed via an app store and have access to device resources such as GPS and the camera functionality. Native applications live on the device itself and run on it.
Some examples of popular mobile apps are Snapchat, Instagram, Google Maps, and Facebook Messenger. Unlike web apps, which are accessed through the internet browser and adapt to any computer you are on, native apps are constrained to the device that they are running on. Some web apps are dynamic and interactive enough to adjust according to the size of different displays, but most are static.
Why Build a Mobile App?
So how is it that native mobile platforms can offer different functionality than web interfaces? Well, by definition, there are multiple useful features that are exclusively available on mobile apps. These features include the following, which we’ll discuss in detail below:
Use of device-specific features Ease of personalization Offline usage Easier user access Better speed Push notifications Brand visibility Design freedom
Use of device-specific features
When using smartphone applications, users can access device-specific functions such as screenshot, camera, dictionary, GPS, autocorrect, and touch screen (which is not present on most desktops or laptops).
Screenshots in particular are a highly common use case, as they are very simple to take and save for future use when reading an article, watching a fashion show, or capturing some other on-screen event. The simple zoom in and zoom out functions offered by touch screens enable easy cropping and focus.
These features can reduce the time to perform common tasks and boost convenience.
Ease of personalization
Mobile apps give you the liberty to personalize the user experience on the basis of their preferences, location, usage patterns, and more. With smartphone applications, it’s easy to present consumers with a highly personalized interface. In addition, a mobile app can also allow users to customize the app’s appearance as per their preferences.
Offline usage
Offline usage not especially easy to implement, but it may be the most significant advantage offered by mobile applications. Although mobile applications usually require internet access to perform much of their duties, they may still offer basic content and features to users in offline mode.
For example, consider health and wellness applications. These apps can provide functionality such as a diet plan, calorie chart, body measurement, water intake alert, and many more, even without the assistance of an internet connection.
Easier user access
Mobile users spend 86% [1] of their time on mobile apps and just 14% on mobile websites. In comparison, the total time consumers spend on mobile applications is also growing, rising in one year by 21 percent. There is no doubt that people invest much of their time on social media applications and gaming applications, which are often native mobile apps.
Better speed
A well-designed mobile app will certainly perform at a much faster speed than a mobile website.
In comparison to websites, which typically use web servers, applications generally store their data locally on mobile devices. For this reason, in mobile applications, data extraction is easy to perform. In addition, by storing user preferences and using them to take proactive actions on behalf of users, apps can save users time.
Smartphone applications should function more efficiently on a technological level, too, as websites on smartphones use JavaScript code (typically much less efficient than native code languages). What happens in the background is a puzzle to most users, so the faster app type — in this case, mobile apps — wins this category from a UX perspective.
Push notifications
There are two forms of smartphone app alerts: push notifications and in-app alerts. They are both attention-grabbing options that connect in a relatively non-invasive way for smartphone users. In-app alerts are alerts that can only be accessed by users when they open an app.
On the other hand, push notifications are displayed to users regardless of the operation they are currently performing on their mobile device. This is a powerful way to grab the user’s attention; in fact, there have been some cases where push notifications delivered click-through rates of 40 percent or higher.
It goes without saying that the notification campaigns have to be thoughtfully prepared. Users will resent being constantly pinged by notifications that don’t deliver urgent or relevant information.
Technically, push notifications for progressive web apps can also be implemented by utilizing third-party services, but these services are currently in a preliminary stage and have some limitations.
Brand visibility
Consumers devote a large portion of their time to mobile app interaction. It’s fair to assume that many people, every day, seek out a company’s app icon on their smartphones. For app makers, this daily experience can be used as a promotional opportunity. [2]
Even if people do not use a smartphone app actively, they will be reminded of it whenever they see their home screen. The app icon works as a brand mini-ad for the brand.
Design freedom
Even with all the technical advances in web design, to perform even the most basic functions, mobile websites have to rely a lot on browsers. Mobile websites rely on browser features to function, such as the “back” button, “refresh” button, and address bar.
None of these limitations apply to mobile applications.
Based on advanced gestures like “tap”, “swipe”, “drag”, “pinch”, “hold”, and more, a mobile app can be programmed with a lot of elaborate functions. These gestures can be used by apps to provide creative features that can help users complete their tasks more intuitively. For instance, using a swipe gesture, an app will allow users to move to the next or previous phase.
Mobile Apps Offer Unique Advantages
Websites may capture a broader range of traffic, but for businesses that can make use of the above features, a smartphone app is essential. Native apps and websites can work together in a satisfying way to build an omnichannel user experience that draws user traffic and results in tremendous user growth.
This is true across a wide array of business types. If you’re an e-commerce store, why not encourage visitors to buy from your website as well as through an app? Churches can use mobile apps to release revised sermon notes prior to the service and then record audio and video. Restaurants can provide modified menus, instructions, and online orders. Magazines may submit push alerts when new papers are written. Consider both web and mobile properties into your customer engagement plan, instead of one or the other.
Crowdbotics specializes in converting websites and web apps into mobile apps. We offer custom, cross-platform app builds that let companies get their content to customers on all of their devices. If you’re looking to expand your marketing strategy to include omnichannel engagement, get in touch with a Crowdbotics expert today.
Sources:
[1] https://www.forbes.com/sites/ewanspence/2014/04/02/the-mobile-browser-is-dead-long-live-the-app/#5c5ef237614d
[2] https://www.forbes.com/sites/allbusiness/2014/11/17/heres-why-your-business-needs-its-own-mobile-app/#30b288b2327f | https://medium.com/crowdbotics/8-things-that-a-mobile-app-can-do-that-your-website-cant-faeb695f7601 | ['Allah-Nawaz Qadir'] | 2020-10-28 16:20:58.948000+00:00 | ['Mobile App Marketing', 'Mobile App Development', 'Website Development', 'Application Development', 'Crowdbotic'] |
What Do 90-Somethings Regret Most? | My preconceptions about older people first began to crumble when one of my congregants, a woman in her 80s, came into my office seeking pastoral care. She had been widowed for several years but the reason for her distress was not the loss of her husband. It was her falling in love with a married man. As she shared her story with me over a cup of tea and Kleenex, I tried to keep a professional and compassionate countenance, though, internally, I was bewildered by the realization that even into their 80s, people still fall for one another in that teenage, butterflies-in-the-stomach kind of way.
One of the strange and wonderful features of my job as a minister is that I get to be a confidant and advisor to people at all stages of life. I’ve worked with people who are double and even triple my age. Experience like this is rare; our economic structure and workforce are stratified, and most people are employed within their own demographics. But because I’m a minister in a mainline denomination with an aging base, the people I primarily interact with are over the age of 60. I came into my job assuming that I, a Korean-American woman in my mid-30s, would not be able to connect with these people — they’re from a completely different racial and cultural background than me. It did not take long for me to discover how very wrong I was.
We all have joys, hopes, fears, and longings that never go away no matter how old we get. Until recently, I mistakenly associated deep yearnings and ambitions with the energy and idealism of youth. My subconscious and unexamined assumption was that the elderly transcend these desires because they become more stoic and sage-like over time. Or the opposite: They become disillusioned by life and gradually shed their vibrancy and vitality.
When I initially realized that my assumptions might be wrong, I set out to research the internal lives of older people. Who really were they, and what had they learned in life? Using my congregation as a resource, I interviewed several members in their 90s with a pen, notebook, a listening ear, and a promise to keep everyone anonymous. I did not hold back, asking them burning questions about their fears, hopes, sex lives or lack thereof. Fortunately, I had willing participants. Many of them were flattered by my interest, as America tends to forget people as they age. | https://humanparts.medium.com/what-its-like-to-be-90-something-368780082573 | ['Lydia Sohn'] | 2019-12-20 18:21:14.810000+00:00 | ['Happiness', 'Wisdom', 'Wellness', 'Culture', 'Age'] |
How Constraints Potentiate Creativity & Innovation | One of the most interesting aspects of the Design Thinking process, is not only the strengthening of the relationships that are created within the Product Design team (including not only Designers, but an entire ecosystem where Developers, Product Owners, Customer Support Groups, Marketing Professionals, Inventory Managers, to name but a few, collaborate), but also of course with the Users/Clients, who become part of that product journey, not only uncovering the solution itself, but how that product eventually morphs and continues to live past launch/release cycles. Another rewarding aspect of the process itself has always been the imminent realization that whatever is uncovered, tested and refined, has to eventually be built, within a series of constraints. These constraints can be of multiple natures, be it financial, platform wise, resources, timelines, among others, all of which is going to be the focus of attention for this article.
Constraints. Every Product Design initiative always has a series of constraints associated with it, something that should be clearly showcased when the process starts. Further constraints may be added to the scope of the initiative particularly as time goes on, but initially there’s already a listing of topics that all the participants in the Design Thinking process need to be aware of. Those general constraints are associated with a trifecta of factors: timelines, resources and finances. Expanding on these fundamentally operational constraints: timelines are part of the DNA of any project that is tackled. The need to release something to market, an MVP that is ultimately viable, which can be expanded upon, creates challenges from the Design process itself, in terms of running effective Research, Validation, Iterations, Testing exercises. All these phases provide different types of input into the solution that is being morphed, but once again Designers have to forcibly be strategic about how they devote time to each one of them (and how they bring different players to these phases, in order to gather the information they need to keep the process breezing along). Timelines are also deeply entwined with Resource allocation. Depending on the availability of team members, of different natures of course, allows for a lot of these processes and initiatives to be conducted more rapidly, and enable for iterative cycles for instance, to be produced in a speedier manner.
Resource availability is also tied with different layers of expertise, not just from a Design perspective, but also from other professionals who are involved in the process, and need to be available to participate in it. That includes for instance, professionals in Product Ownership, Development, Customer Support, Marketing, Sales, Inventory Management, the list goes on, but summarily, they all have a pivotal impact in the solutioning effort, and their contribution should always be accounted for. Matter of factly: without enough resources, the challenges to fulfill & keep the timelines established can become a herculean task in itself.
Deeply entwined with the previous two factors, there is of course the financial constraints that have implications across the board. It’s important to consider the budget that is being estimated for a particular initiative, whatever that may be, since that informs not only the timeline devoted to it, and the Team resources that can work on it, but also and by extension, a series of other factors evidently tied to the operational side of the process, namely tools for Research, Validation, Iteration, Testing, for capturing Analytics, not to mention of course, from a Usability testing perspective, and of course from a Development perspective. Everything has a quantifiable cost, which needs to be identified, since in itself also informs the viability of what is being built, and the available timeline to built it, and whom with. The practicality of these factors should always be considered by Designers when tackling any project they embark upon, particularly as they envision schedules and outputs of every phase of the process. These three constraints are of course the baseline of any project, but here are a few others which compliment them, which always need to be considered when devising a solution in a Design Thinking Process.
Technology Constraints — when initially identifying the problem, clearly understanding the tasks users want to perform and what their expected outcomes are, there’s an evident realization of how the users will experience something, by which this means, what devices will the users interact with in order to satiate their needs and get their tasks performed. Identifying these platforms, across multiple ecosystems and strategies is fundamental, since from early on, Development, Product Management, Customer Support partners, among others, can highlight and provide additional context for goals that need to be addressed, but also the limitations that may derive from certain platforms on which the solution will need to exist (or for for that matter, the lack of resources of professionals to work on said platforms). Further downstream, understanding the technological constraints is also fundamental in devising alternatives to interactive paradigms that are being applied, which for limitations of different natures can’t be applied, or alternatively how the creation of new paradigms may simply be ineffective within the the technological constraints that exist. The Pandemic has also brought forth other types of constraints, while also liberating others of course, but when it comes to technology itself, it has demonstrated that co-located collaboration was not an option, and tools such as Miro, Whimsical, Mural, had to take the lead in allowing teams to effectively discuss, collaborate and document ideas. All this to say that when it comes to Technology, and even though there’s an aspect of celerity in how it changes rapidly, it’s fundamental that all the partners in the process understand its constraints at all times. That indeed creates a series of guidelines by which everyone has to abide by, yet allowing for the creation of solutions that are in tune with that same ecosystem.
Contextual Constraints — it goes without saying that every solution that is devised, exists within a certain universe of requirements. These requirements are of different natures, but there’s quite a few that Designers and their partners can never forget: understanding the implications of Legal, Privacy, Ethical constraints is of the utmost importance. In the past I’ve worked in applications in the telecommunications arena, where a baseline feature we’ve all come to expect, such as the recording of a web conference, had to be thoroughly researched for its legal implications. Creating features, being innovative within a product are always to be nurtured and incentivized, however one must always keep in mind that whatever is brought forth is aligned with the context, industry, and the users themselves which will operate that product or feature. This of course ends up looping with the Research phase which was itemized earlier in the article and the importance of all the phases for the Design Thinking process to be effective. They all integrate with each other, and they all serve a purpose. These 5 constraints aren’t meant to be castrating, but mostly become beacons, assisting the teams in understanding where they operate and how the solution they are indeed producing will effectively resonate with their audiences. | https://uxplanet.org/how-constraints-potentiate-creativity-innovation-65fbfc0e2aa1 | ['Pedro Canhenha'] | 2020-12-28 08:51:06.150000+00:00 | ['Design', 'Product Design', 'UX', 'Innovation', 'Design Thinking'] |
Yes, Astrology Really Can Help You Progress In Therapy | Yes, Astrology Really Can Help You Progress In Therapy
(Whether you believe in it or not.)
Photo by Josh Rangel on Unsplash
I once had a brief flirtation with a married man after losing my husband to brain cancer.
Then, I was dumped. I was destroyed. I couldn’t get over it.
This is when the desperate (and, on Medium, many say the weak-minded) turn to astrology.
Most people who post about astrology here (even the editors) find themselves bearing the brunt of rude comments. Even when we point out that disciplines such as astrology and tarot are making their way into people’s therapy sessions, sometimes with helpful results.
I count myself among that number, so I’ve decided to take it upon myself to diagram exactly what I studied and how it helped.
Stage 1: There’s a Message Here
I’m sure we’re all acquainted with the feeling, after a painful breakup, of longing to have the person back and looking for any sign that this could happen. I had been in the habit, now and then, of buying a summary of yearly horoscope transits and what they mean off of a website called astro.com.
I was making a bit more money at that time, and these weren’t too expensive, so I purchased one for the next three years.
I noticed something. These reports made a big deal out of playing up my controlling tendencies. I found a site online that did free tarot readings, and much to my distress, the tarot was saying the same thing, in a pretty harsh way.
I didn’t think of myself as “controlling.” I thought the woman I got dumped for was the controlling one! But, as I received these suggestions over and over, and I got more and more upset about them, I had to ask myself:
Am I really a controlling person?
That made me look honestly at my behavior in the relationship. Turns out, I was controlling, always seeking to get this guy to give me the kind of life and the kind of support I wanted … and I really needed to see that. And tell myself the awful truth about it.
Who cares whether astrology is “correct” or not? The point is, I had to ask myself honestly about this aspect of my character.
And, sad to say, it turned out there was a reason I kept getting the message, “Stay out of power and control.”
Stage Two: How My Chart Diagrams the Nuts and Bolts of My Therapy
Now that I saw that message, I wondered if there might be more to find.
I’ll skip some of what happened in between and go straight to where I became intensely interested in the following pattern in my chart, outlined by the triangle with two long green sides and the short blue bottom:
Generated for free on astro.com.
This long skinny triangle, shaped like a witch’s hat, is called a “yod.” (As it turns out, they don’t call it the “Finger of God” for nothing.) The little character at the long end, the cross with the curlicue, is Saturn.
As I read more about Saturn, I felt worried, because Saturn is known in astrology as the planet of hard knocks. It’s the planet of restriction, trial, and painful lessons.
I felt like I’d had just about enough of those.
As it turned out, this yod explained pretty clearly why I was in therapy and what I was going to have to accomplish there. I’m still struggling with some of it, but the comforting thing is, at least I know why now and I have a little bit of hope.
At the bottom of my yod, you see Neptune (it looks like a trident) in House Three, in the sign of Scorpio. This tends to reflect creativity (Neptune) in a deep, penetrating way, digging up truths one perhaps doesn’t want to see (Scorpio), in the area of knowledge, writing, art, and communication (House Three).
It’s connected to Uranus, the little character with arms and a little round head, upside down, in House One — House of the Self — in Virgo, the sign of service.
Uranus is original and independent. It’s associated with events that break up old patterns in your life because you really needed a change. My reading of it always includes the phrase: “I gotta be me.”
So I really gotta be me, according to this chart, and I want to do it in a way that serves other people.
At the base of a yod, the two planets at the bottom shake hands and want to help each other out. Would it surprise you to learn that after a childhood with a BPD mother, I want to write novels and articles around the theme of mental illness and emotional problems, with an eye toward reaching and helping others?
Yep, so far this chart sounds like me.
But, it’s a classic yod. In astrology, the planet at the far angle of the yod represents something that’s keeping the two bottom planets from their goal.
Saturn Is Standing In The Way. (Booga! Booga! Booga!)
So, since at this point in my life, my writing career is moving at the speed of a dying snail, am I interested to find out what this Saturn, in the lingo of astrology, is supposed to represent?
You betcha. To help out, I compiled these:
Bailey’s Postulates for Understanding the Astrological Yod
(You’re going to need these to understand how my chart helped me.)
1.) The yod appears to be a description of a problem, issue, or conundrum in the life. The two bottom planets seem to want to do something, but the apex planet reflects something that keeps getting in the way. (Every astrologer says this much.)
2.) (Several astrologers have written articles about this one.) If another planet sits in between the two sextile planets, this planet makes the yod a “boomerang yod,” and the “boomerang” planet describes what to do to solve the dilemma described by the yod.
(What that would look like, is if a fourth planet sat exactly in between Uranus and Neptune, up there. Not present … until you place the chart of the guy who dumped me directly over mine. He has a yod that’s right over mine, facing in the opposite direction. Therefore, the apex of my yod forms the boomerang of his and vice versa. That’s a whole different topic right there.)
3.) Any planet passing by transit or progression over the apex of the yod reflects a time in life when circumstances cause the issues described by the yod to come to the forefront in life. (All astrologers agree on this one. We’re coming back to it at the end.)
4.) Anything attached in a stressful aspect to the apex of the yod is commenting on how the problem came to be in the first place. (This, I haven’t seen any astrologer write up yet, but it appears to me to be the case.)
4a.) If you’re having trouble figuring this out in a chart, look up where the asteroid Chiron is by house and sign. Many times it will clarify things a lot.
It turns out that some of the most helpful astrology books are written by counseling therapists who also have an interest in astrology.
I learned the most from Saturn: A New Look at an Old Devil. Author Liz Greene’s first career was as a counseling therapist. She holds doctorate degrees in psychology and history and is a qualified Jungian analyst.
She also holds a diploma in counseling from the Centre for Transpersonal Psychology in London, and a diploma from the Faculty of Astrological Studies, of which she is a lifetime patron. The following points are taken from this book.
You need to look at the drawing again to see what I’m talking about. So that you don’t have to scroll up, here it is once more:
Again, astro.com.
Attached to Saturn by red lines, all the way at the top, are the Sun (astrology draws that as a bull’s eye), the Moon (obvious), Mars (the “male” symbol), and Mercury (the character that looks like it has horns), all at a ninety-degree angle from Saturn.
This is known as a “square.” Squares are known to be stressful in astrology, hence the red lines. These are the planets we’re talking about. Let’s look at what they symbolize in my chart, and how they tell us my yod situation arose.
In order to process all this, I had to take a good hard look at my childhood, including some things I always believed didn’t affect me much, which, as we all know, any therapist worth their salt would want you to do.
After I finished this, I emailed it to mine as part of my therapy homework. My therapist was pretty pleased with it.
All professional astrologers tell you you have to read The Whole Chart. (This is why those who only read sun signs in the newspaper believe astrology has to be crap.)
In my case, this is easy, as every damn thing is connected to Saturn, usually by a stressful aspect. (Lucky me.) The astrologer Alice Portman read my chart and told me this big yod with Saturn at the long end creates a feeling of “What’s the use?” in my life.
So, what does Saturn mean in a horoscope chart?
According to Liz Greene, Saturn:
— Reflects your struggle to build an ego and protect yourself.
— Indicates an area of the personality where the person remains childlike (or childish!) because they didn’t get what they needed in childhood for that area to develop into mature adult understandings, attitudes, and behavior. It’s necessary for the person to grow up in these areas. (So, basically, as we’ll see, my entire personality is infantile and childish. Great news.)
— When studied in-depth, Saturn offers a detailed picture of what you don’t want to see about yourself.
— Saturn is a measuring stick of the individual’s power of self-determination; it reflects solutions that, if you find them, can become a permanent part of your conscious self, through self-motivated effort.
— You’re closed off from things you want or need in life until you get these specific tasks done. (Sounds like the reason Saturn is the apex of my yod.)
— Saturn denotes areas where you’re supposed to become a good parent to yourself first. Then you can help other people. (Which sounds like the bottom of the yod, right?)
So, the stuff attached by squares to the yod’s apex — Saturn, in my case — describes how the problems came to be that are preventing me from achieving this thing, helping others through writing, that the bottom of my yod reflects that I passionately want to do.
So, what about these squared planets?
Astrology holds that the Sun represents a person’s sense of who he is. If Saturn squares your Sun in your birth chart, it’s commenting that you didn’t have much help in discovering your own identity. You have problems with creativity because you didn’t have a dad behind you to encourage you. Life as a child is really tough, and you didn’t grow up in a sense of trust that things will go well.
Saturn square Sun people are either intensely ambitious, or we have no ambitions because we’re afraid of the pain of not making them come true. Greene writes that we’re offered the opportunity to become masters of our fates. If we don’t take the opportunity, we become very sad people.
The Moon discusses feelings, what a person needs to feel happy, the atmosphere of their early home life and the relationship with the female parent, and any instinctive habit patterns.
A Saturn square here often reflects a person who wasn’t able to express themselves emotionally in childhood. This person had to control their feelings all the time as a child, and their mother let the child down in some way.
The person is lonely and needy because they never had an emotionally loving family, even though it looked like it from the outside. The child experienced a lot of harshness and duty and rules, and not a lot of warmth and love. Because of all this, the person has to become strong in isolation.
Mercury talks about knowledge and communication, and how competent a person feels at these things. And Saturn is the planet of frustration, difficulty, and delay, so if you have a tough Mercury-Saturn aspect, writes Greene, your parents may have treated you like you couldn’t think for yourself because you were a child. Or they stifled you if you had any thought or idea that conflicted with theirs.
A Saturn square Mercury feels very self-defeating. You end up sure you’re stupid because you’ve been punished so much for making mistakes. You work so slowly, out of fear, that you really do look stupid. Then people make fun of you because you look stupid, and you feel and look yet more stupid.
Mars describes self-assertion and any aggressive impulses, and a Saturn square describes a person who feels frustrated, weak, and powerless. Basically, the individual had overly controlling parents and possibly physically abusive parents.
The person feels like their will is ineffectual because it’s been thwarted so often, and feels like they have no control over themselves or their life. Therefore, they’re likely to pick out a weaker person, someone they can control, and use the person like tongs to interface with the world for them and provide what they think they can’t do for themselves.
Um … what did I start this article writing about? Controlling tendencies.
Sheesh.
When I read all this, my first thought was: Gee, I guess I didn’t grow up at all. That felt pretty depressing at first, but before we even test whether all of this is accurate or not, let me emphasize this point:
The important thing isn’t the accuracy of the chart. The important thing is your process of testing the accuracy of the chart.
Having this chart, looking up all this information, and thinking about it forced me to do the kind of deep thinking we all need to do in therapy to heal.
I could have disagreed as well, and decided all astrology is bunk.
The important thing is, I would have thought about these issues and presented my thinking to my therapist. We would have talked (as we did) about why I agreed or disagreed, and how I feel about all of it now.
As it turned out, I did find all of this to be very accurate. It led me down a garden path lined with several new books my therapist has on her shelf now and recommends to other clients.
I really couldn’t express myself in childhood because my BPD mother needed validation and insisted I be just like her. I had to be her instead of me.
I did have an awful childhood, with a lot of hazing at school, no dad, and a mentally ill mother. Therefore, I have no trust in life because things didn’t go well in childhood. Nobody encouraged who I was, so I’ve had a hard time believing in myself; and I do feel like life’s been too hard and I can never relax.
I was the only kid I knew who had to come home from school and clean half the house every Friday night and then get up and get right to homework on Saturday morning.
My mother would inspect everything I cleaned and then, rather than teaching me how she wanted it done in the first place, she would scream at me and spank me. Using trial and error, I had to vary my routine until I found the magic formula that didn’t result in a scream-and-spank session.
I became what my family wanted me to be. I didn’t even know what would make me happy to become when I was a kid. I thought Mom’s likes and dislikes were my likes and dislikes; I believed her dreams were my own.
Even now that I do know what I want, I have an awfully hard time drawing it out of myself.
I watched my late husband, a critically acclaimed, national award-level author, struggle with book sales for years. I do think that even if I finished a novel and offered it to the world, it’s highly unlikely it would be successful.
I don’t want to be crushed by that (since I’ve already been crushed by so much else.)
Yeah. It all sounds like me.
How many years do some people have to spend in therapy before they figure all this out? I saw all this in a couple of weeks once I started studying Saturn and this yod.
And I realized that the absence of my dad, which I never thought affected me much, might have left me with a big piece of my personality missing: my ability to believe in and motivate myself. That was a revelation, indeed.
I still struggle with depression and motivation, but insight is not something I have difficulty with anymore. My therapist believes that this sort of work had value for me because of the deep processing I had to do to accomplish it.
Nonetheless, this chart does diagram what that Saturn drag is that’s keeping Uranus and Neptune, so happily shaking hands down there, from writing and promoting a novel that could have legs. So I’m going to…
(Saturn square Mars) latch on to someone else —someone else’s husband— who appears successful at all the things I’m not, and try to get him to take care of me.
Which I did.
4a.) When having trouble figuring out how a yod happened, take a look at Chiron.
My Chiron isn’t shown in that simplified drawing above, but it’s in Aries, House Eight, close to Saturn.
Astrologer Aria Gmitter says about an eighth house Chiron: “Consider that the purpose of Aries is to identify with itself. It is ‘I am,’ and the ruling planet is Mars (ambition, drive, and determination).
“Chiron is the wounded healer, so it’s a wound that heals for the purpose of a lesson in growth. So, I think it’s an attack early on in a person’s life towards their identity and character. It can lead the individual to feel a loss of self-identity.
“This can mean they don’t have a voice because they don’t know who they are. They may not be able to speak up for their injustices, or they could have been taught early on that they don’t matter. There’s a lack of place in the world. As the healer, I think that it’s a powerful placement for an Aries because should the person master their wound, they will learn not to delegate their identity out to others.
“They will have found that they have participated in their own self sabotaging behaviors after leaving their family of origin, and not severing ties so that they can discover themselves. It will also mean learning to be comfortable in their own skin with their imperfections, and going from a selfish view of the world to one that encompasses others, with boundaries.”
And that’s what I’ve been working on in therapy.
Astrology claims to be able to predict events. Whether or not it’s true, using the chart in this way can still teach you something.
Stage Three: The Timing Of Events
Now we can talk about Point 3, how transits show something important happening in the yod. That will be our final step in this dissection of how astrology has helped me progress in therapy.
When I hired Alice Portman to review my chart, she predicted that I would hear from this man again in October 2017. (And I had learned enough astrology by then to concur; although when I teased apart the indicators that this would happen, we were looking at different things. So, there were many different significators for this event.)
As I dealt with all this heartbreak, where Uranus was in the sky hovered back and forth over this yod position that Saturn was in when I was born.
Ridiculous, I know, but astrology holds that when a planet, from our perspective on the Earth, moves back and forth over an important point like this, Something Important Happens.
And it did. Here he was; he was back, and I had to make a difficult decision about the relationship.
I’m reading a book by another therapist who’s also an astrologer: Counseling for Astrologers.
One helpful hint I’m receiving here is: If you see an important formation in a chart being triggered by a transiting planet, go back to the last time in the life a planet triggered that formation and ask the client what was going on in their life then. You may learn something important.
So, I did that for myself.
Before Uranus, the last time this yod got triggered by anything was by Mars in 2009.
What happened in 2009?
In 2009, a couple of long-lost, handicapped relatives called me out of the clear blue.
My eighty-six-year-old great aunt and her handicapped adopted daughter, developmentally delayed, had dropped out of my life eight years before, when my aunt went into a full-blown bipolar episode and got placed in a psychiatric facility.
Long lost relatives many states south swooped in, moved my relatives to South Carolina, and sold their farm.
No one in our family was in contact with these relatives. I missed my aunt, but no one knew where she was. Now, they had moved back to their old neighborhood again, and they were calling me.
I was overjoyed. I had a happy marriage, but I had finally cut off my BPD mom and most of my family with her. It was sort of a lonely life.
Maybe I could have some of my family back again! Maybe I could finally feel like I had a normal life.
Before I knew it, I was signed on the dotted line as their power of attorney, and then my aunt stopped taking her medication, had more bipolar breakdowns, and I had to move them to assisted living.
Then it turned out that my cousin was physically abusive to my aunt, and I had a real mess on my hands. These people took up all my time, and I had to abandon the novel I was writing.
My husband, who was starting his fourth novel, comforted me through floods of tears as I regretfully put aside the one original novel idea I had ever had to handle these folks’ affairs.
Looking at that Mars transit in 2009 and linking it up to what was happening in my life at the time, this was when the final light came on.
That Saturn at the tip of the yod isn’t just the emotional problems I have as leftovers from a painful childhood. It’s the enmeshed, caretaking relationships I keep getting embroiled in because of those emotional problems.
I’m so needy for family and relationships, I really am a sucker. I didn’t realize how much of a problem getting into the wrong relationships was in terms of achieving my goals in life. It’s a central issue.
Codependent relationships derail me from writing every time I get into one. (Along with the self-doubt and the fear that I’m too stupid to succeed.)
However, whether transits are right or wrong isn’t the important thing here. What is important is that it got me looking at my issues in a new way.
As I stared at the charts and looked up aspect meanings, letting ones that confused me rattle around in my brain — sometimes for months — I’d connect things in a new way, in sudden flashes of insight.
Like that one about Saturn symbolizing my relationships, that I just wrote.
I’d read that in codependency books, sure, but seeing it jump out in my chart this way, when I went back to 2009 as this counselor/astrologer suggested, I finally got it.
“Omigosh, this really IS what I’m doing! Here it is, right here! If I don’t like this scenario, what can I do about it?”
That is the basis of a lot of psychological insight. That moment is when you sit up and connect something you’ve read a jillion times, to you.
Then, when you see a possible warning about your future, you can think creatively about it.
I will forever be thankful I learned to read enough astrology to do this … even if you do think I am a kook.
Studying astrology for therapy is a lot like staring at the scattered pieces of a puzzle. When you figure out how to put them together with your assigned reading, you get a mosaic: An instructive picture of your life that can jolt you into a new awareness. | https://medium.com/illumination/yes-astrology-really-can-help-you-progress-in-therapy-bec3699935a3 | ['A. Nonymous'] | 2020-09-14 01:19:35.071000+00:00 | ['Psychology', 'Opinion', 'Self Improvement', 'Culture', 'Astrology'] |
Soul-Seeking | Photo by xandtor on Unsplash
I look in my mirror much more these days. I notice my eyes — the sparkle grows. I notice the lines on my face — all there because I have laughed my way through Life — The Good, The Bad, and The Ugly. I notice my hair — silvery, radiant with purple magik — daring anyone to believe I am ordinary. I look at my body — soft in all the places my daughters, granddaughters, and friends need her to be — to hold them close when the storms of Life rattle their foundations.
I realized today— I rarely see another Human’s ‘form’. It might be because of my nurse’s training. Or perhaps it’s the woo-woo I carry around with me which tunes me into frequencies and vibrations — my intuition at long last kicking in. Or maybe, it’s as simple as this — living Life has changed what I see — the kaleidoscope of experience provides new colors and shapes. A wider view with a better perspective.
I can’t be certain. I only know this. The ‘substance’ I notice when I encounter my fellow Humans comes from within them. It’s not the stuff their Vessels are made up of anymore.
Seeing beyond another’s external presentation — beyond their Vessel in this world is not natural for most of us. Our Mind and cultural conditioning usually kick in and we begin putting “Others” into categories. A sorting process we learned as toddlers. By color, by size, by shape, by species to make sense of Our World.
If we are ever to dissolve the barricades we erect unconsciously and discover The Truth regarding the sacred connective-ness of Life On Earth — we must overcome this. We have to get past looking at the exterior form of a thing/person/place. We have to learn to see its Soul.
The longer I have lived — the easier Soul-Seeking has become for me.
It begins with the face we see in the mirror. When we begin to see past Our Vessel and widen our perception to accept All. The. Things we know about who we are. It expands when we look at ourselves with love and compassion and allow it to fill up and overflow onto All. The. People who pass through our lives. It culminates when the very first inclination we have is to recognize the places we are ‘One’ with All. The. Things. Everyone. Everything. Everywhere.
How different Our World would be if we could pull this off! We could see so many things! They would practically shout at us! Can you just imagine?
The Love between Humans as they build a Life together — regardless of what their Vessels are. Gender/Race/Religion — there would be no obstacles for them to overcome.
The Wholeness of Spirit as Humanity stopped demanding there is only One Spiritual path to The Divine. No more Wars over The Stairway To Heaven.
The Oneness of a Humanity struggling with survival across all man-made borders. No Borders. No Barriers. No Boundaries. No Walls.
The Gift of Peace to our Fellow Travelers in Life. No need for creatures or plants to be placed on a list to avoid their extinction at the Hand of Man.
The Honorable Stewardship of Gaia — lovingly cared for — Her Soul at rest — not tortured — as we pass Her forward — healthy and well to future generations.
Souls. Everyone, Everything has one. Try to remember this the next time you look at yourself in the mirror. The next time you are faced with a decision to see a Soul or look at Vessel.
Become a Soul-Seeker.
Namaste. | https://medium.com/crows-feet/soul-seeking-a8b7cb61efcc | ['Ann Litts'] | 2019-07-31 10:29:34.503000+00:00 | ['Spirituality', 'Aging', 'Life', 'Life Lessons', 'Self-awareness'] |
Everything I Discovered About GraphQL and Apollo | First Things First…
GraphQL
As I said earlier, GraphQL is a new technology which modifies the relationships between back-end and front-end developers.
Previously, both teams had to define a contract interface to ensure correct implementations. Sometimes, latencies could occur due to misunderstanding on object complexity or typings.
Thanks to GraphQL, backends can provide all the data that front ends could need. Then, it’s up to them to “pick” the properties they require to build the interface.
Moreover, GraphQL offers a web interface (named GraphiQL) to test queries and mutations (if you don’t understand what I’m writing about, please refer to documentation). It’s a clever tool to enable front ends to write requests, and browse the docs and typings.
A query language
Using GraphQL implies understanding and mastering the query language included in the kit. It’s not a trivial one and is based on an object nesting syntax.
query GetProductInfo {
product {
id
name
price
}
}
When querying object seems simple, it’s not the case for the mutations. A mutation updates/inserts new data, potentially with some arguments.
mutation ($label: String!) {
addTag(label: $label) {
id
label
}
}
Here, an argument is specified with its type on L1. Then, it’s used on the second line. The ending content between square brackets id and label defines the structure of the returned object, the result of the insertion.
Apollo
Apollo is the client used to communicate with GraphQL. Whether you develop a web or mobile app, Apollo can support it.
Apollo supports several platforms:
Its configuration allows to define specific aspects such as: cache-network strategy, pipelines (covered here), Server-Side Rendering (Vue.js version), local state (Vue.js version), performance, error handling (covered here) or internationalization (covered here). | https://medium.com/better-programming/everything-i-discovered-about-graphql-and-apollo-e774d1e11638 | ['Adrien Miquel'] | 2020-10-25 22:39:11.416000+00:00 | ['JavaScript', 'GraphQL', 'React', 'Nodejs', 'Programming'] |
Is this Ancient Gear Mechanism the First Computer on Earth? | The Antikythera mechanism
The Antikythera mechanism, as it is known, is quite possibly earth’s first computer and the most ancient gear system ever found. Found in about 150ft of water off Point Glyphadia, near the island of Antikythera, the mechanical device is composed of ancient gears made mostly from bronze and wood. The remaining bronze pieces were so badly corroded that the entire machine appeared to be a blob of badly corroded metal. It was not until later, until the archaeologist Valerios Stais noticed a gear shape that the reality began to hit that this was no common piece of metal. Since then, the mystery has only deepened.
According to Wikipedia:
Generally referred to as the first known analogue computer, the quality and complexity of the mechanism’s manufacture suggests it has undiscovered predecessors made during the Hellenistic period. Its construction relied upon theories of astronomy and mathematics developed by Greek astronomers, and is estimated to have been created around the late second century BC.
At this point, no predecessors have been found. To put this in perspective, a device of this complexity would not be seen again for over 1500 years.
The Antikythera mechanism
What does the Antikythera Mechanism do?
For a long time, scientists and archaeologists had no idea. However, with modern day technology, much of the original mechanism has been reconstructed, virtually anyway. We now know that it was a very sophisticated ancient clock which calculated the Egyptian civil calendar, the Greek signs of the zodiac on the front. On the back, it calculated solar eclipse dates as well as the dates of the next Ancient Olympic Games as well as their respective locations. Keep in mind, this is the first instance of an ancient gear composed of metal.
Who Built this Out of Place Artifact?
There are many theories about who built the device, with most theories trying to link it to one of the more famous Greek scientists or philosophers that we know about. It is possible that it was built by someone whose name we will never know. However, one particular theory stands out which links the box to Archimedes or Hipparchus:
The tradition of making such mechanisms could be much older. Cicero wrote of a bronze device made by Archimedes in the third century B.C. And James Evans, a historian of astronomy at the University of Puget Sound in Tacoma, Washington, thinks that the eclipse cycle represented is Babylonian in origin and begins in 205 B.C. Maybe it was Hipparchus, an astronomer in Rhodes around that time, who worked out the math behind the device. He is known for having blended the arithmetic-based predictions of Babylonians with geometric theories favored by the Greeks. — Via The Smithsonian
This would explain the esoteric nature of the device. What we do know, thanks to modern technology, is that the gears themselves were hand cut.
There does not appear to be any evidence of advanced manufacturing. In fact, the irregularities of the teeth indicate that the device may not have been incredibly accurate.
A Deeper Mystery
Despite all of that, the device raises a number of perplexing questions. This is the first known instance of using metal gears in this way and the gearing is astoundingly complex. The device contained over 30 gears with very complex gear ratios. This sophistication indicates that it was not the first device of its kind and may not even be the best device of its kind. It’s possible that, due to the Egyptian connection, it is an imitation of some other ancient device which is now lost to us, similar to the Dendera Light depicted in ancient Egyptian art. | https://medium.com/swlh/is-this-ancient-gear-mechanism-the-first-computer-on-earth-a96467a0f68a | ['Darian West'] | 2019-12-07 19:22:17.445000+00:00 | ['Greece', 'Ancient History', 'Science', 'Archimedes', 'Ancient'] |
Forecasting Bitcoin prices in the short-term | In this post I will reveal some of our secrets as to how we use Artificial Intelligence, by means of machine learning, to pretty accurately predict the price of Bitcoin in the short-term. Since we refer to short-term as up to two hours, we are able to make pretty accurate trend predictions. However, the further into the future we try to predict, the less accurate results we obtain. Since the crypto space is very volatile and highly unpredictable, short-term forecasting remains our most realistic approach.
In my previous post I’ve explained and addressed some of our shortcomings. As of today, I will no longer use aggregated average prices from various exchanges, but instead use realistic price data from one specific exchange, and if any back testing is carried out then it will always incorporate the trading fees — unless explicitly mentioned otherwise.
The goal
We know for a fact that some investing firms invest heavily in R&D to develop A.I. based trading algorithms and models. And we also know that they are making a profit by doing that, otherwise they wouldn’t be doing it. This also means that smaller organizations (like ours) can do that as well, but on a smaller and more controlled scale.
We have been developing machine learning systems to forecast cryptocurrency prices and trends for a couple of months now. The results of our efforts, as you can read in previous posts, have been eye opening already. But since recently we took it one step further and improved our systems, as you’ll read below.
Short-term Bitcoin predictions
Below are two screenshots that illustrate our current prediction results. On these charts, the dark black line is the historic price; the gray line is the actual future price, we know this future price because I’m looking at results that were generated two hours ago. The red/green/orange lines are a summary of the predictions. Since we generate a multitude of predictions, we only want to see a handful of them, so we only show the most optimistic, pessimistic and the average prediction.
Both of these charts depict predictions of the price for 8 intervals into the future, with each interval being 10 minutes. So that is 1h20m (80minutes) into the future:
Prediction results 1
Prediction results 2
It’s important to remember that the absolute value of these predictions don’t matter as much as their general trend. These predictions are generated by a complex mathematical model, so their absolute value may deviate from reality. However, we instead use these as a tool to forecast whether the price will go up, down or stay as is. And coming back to our initial remarks, the reason why the absolute value of the predictions are of even lesser importance is that the prices are aggregated averages from all major exchanges — the predictions are not targeting one specific exchange.
On a side note — I’ve often been asked by readers if the predictions are over fitted, the answer is they are not. Our neural network systems are initially trained on a large data set, and from then on it uses data from the previous intervals (e.g. past 10 minutes) to re-train the neural network and make these predictions for the next 8 intervals. So we never generate predictions over a date range that has already been used for training, otherwise that would no longer be considered as “forecasting”.
From the two screenshots above, the predictions appear to be pretty accurate, and in many cases they are. But in some cases they are not. Have a look at the next chart where the predictions deviate immensely. The optimistic prediction shows the price going up exponentially, the average one looks more sinusoidal and the pessimistic prediction indicates a huge drop with a strong recovery afterwards. These predictions look very anomalic to us humans, but for the system they are no different, so to improve or filter them out we need to understand better how A.I. works. Unless we fully understand why it makes such predictions, we cannot improve them — and learning how A.I. makes decisions requires yet another A.I. component to do just that, this remains work in progress.
Prediction results 3
Realistic Bitcoin predictions
As briefly mentioned previously, we no longer use aggregated average price data. Instead we shall focus on one (or multiple) crypto exchanges. At this stage we solely use the Binance exchange for our purposes, we are not affiliated with that company in any way.
About a week ago I started using one-minute candlesticks as input data for our neural network. Initially it yielded no meaningful results, after struggling for two whole days trying to tweak a whole bunch of parameters, I just put it aside and focused on different parts of our project.
Initial candlestick predictions of 8 steps (1min intervals)
But then I realized that I was trying to solve a problem using an old mindset. The old mindset is to make eight predictions, which yielded pretty “okay” results on the aggregated price data, but not necessarily on the Binance data using 1 minute candlesticks. So I had to redesign this little detail, and instead of making 8 predictions, I made it predict just one. I then also realized that having just one prediction will be a visual disaster, it tells us very little (from a visual perspective), because we’ll only see just one dot. To cope with this, I also made sure the system includes previously made predictions, now we can actually have a graph (a solid line, with multiple dots); this is something we can analyze and benchmark against the actual price. This new method for visualizing predictions looks like on the image below.
New predictions representation
On the image there are two actual prices, the solid green/red candlesticks which are the historic prices (these were used as input for the neural net), while the slightly faded (lowered opacity) green/red candlesticks are the future price — this screenshot was taken at some historic time where the future price is already known so these candlesticks are present (with their opacity lowered). The blue/black candlesticks are the predictions made for their respective interval, given only the data prior to that interval. So in this example the last big “blue” candlestick is the result of the previous large green “candlestick”. The A.I. system has learned that the previous interval had a huge increase in price, so it predicts that the next interval will be an increase as well (compared to the previous prediction).
It actually depends on how we look at it and phrase it, some people may say that the price is about to go down if we use absolute values — while if we use the predictions as trends then it tell us the price is going to increase. Which of these two views/theories is most correct remains to be tested (i.e. back testing), there’s actually no trivial answer to this question. So for now it will be a combination of both looking at the trend and at the absolute values. Here’s a more complete image of the above:
1-min interval predictions (1)
We clearly see how pretty accurate the trend of the predictions were compared to that of the price. This is what opened my eyes and allowed me to continue my research much deeper. Below is another screenshot generated in the same fashion, same data, but vastly different parameters and neural network structure:
1-min interval predictions (2)
We see that its results/predictions are quite similar to the first one. I actually like this one better (on first sight), because it has more “black” candlesticks (i.e. the close price was lower than open price). This one also looks slightly more over fitted, because its values appear to be closer in absolute terms. But as mentioned earlier, these prediction regions were not used as input to train the neural net so they are not directly biased,they are simply more accurate predictions in absolute terms — taking this statement into consideration, it’s amazing how well the system makes these one-interval predictions.
You may also have noticed that the system is not able to predict huge increases/drops in price, such as that big “green” candlestick, there is no way the system could predict that. And these increases are usually due to market manipulation (e.g. insider trading) or a group of people deciding to to buy loads of BTC during that interval — unless we have access to these groups, we cannot develop a system that forecasts these scenarios. But we do see that our system learns and adapts from these anomalies, it learns that after a huge increase (or decrease) comes either stability, even more growth or a sudden drop.
Having done this, I moved to the next level, increasing the interval size. So instead of predicting 1 minute ahead, let us use 5-minute interval candlesticks and predict 5 minutes ahead (which is still a single interval prediction in this case). Below are two screenshots with predictions generated by different neural nets for the same period:
5-min interval predictions (1)
5-min interval predictions (2)
From the two above predictions we see that the first one looks smoother, but also somewhat less accurate. The second one resembles the reality slightly better. Then again, notice how inaccurate it is for detecting anomalies, as described earlier:
First prediction fails to predict the price spike
Given the historical data, there is no indicator, i.e. there is no way the system can know the price will shoot up extremely fast/high (relative to the previous values), as shown on the above. So the prediction for the larger “green” candlestick is a tiny “black” candlestick indicating the price will be relatively stable, but instead it went up (a lot). Once again this proves our point, it’s practically not possible to predict such a scenario given our data — but fortunately the system is “learning” and can indicate what will happen after the price goes up as it did, we then can use these predictions to decide whether to buy/sell/hold.
Below is another example of 5-min interval predictions, this time I used yet another set of parameters and data set size. Notice how the shape/trend of these predictions differs from the previous ones.
5-min interval predictions (3)
If we can make pretty “okay” predictions with 5-min candlesticks, why not with 10-min ones? That’s why I did next to see how accurate these would be, and here is one of those results:
10-min interval predictions (1)
We clearly see that the 10-min predictions are slightly less accurate compared to the 5-min ones, the major trend is still there — but it’s still unable to predict huge rises/drops as explained before. I did not go any further to predicting 20, 30, 60, … minute intervals simply because I shifted my focus to a next important matter.
Remember that I started of this chapter by explaining how I went from making 8-step predictions to just single step ones? That decision was not backed my experiments, there was actually nothing less accurate from the 8-steps compared to the 1-steps, that is if we only look at the very first prediction. But the confusing part was the other 7 predictions, since these usually deviate a lot from the actual future, and it made the results appear very inaccurate. The thing is, every new prediction has even lesser precision than the previous one. This I realized when I went from single step predictions to three step ones:
Predicting 3 steps ahead (1)
I realized that making 3-step ahead predictions appeared to be pretty accurate, more accurate than 8-step predictions to say the least. But then again, it wasn’t always the case:
Predicting 3 steps ahead (2)
Making multi-step predictions is done by using, in our system at least, the previously made prediction as the new input. And if the previous prediction wasn’t accurate then the next one won’t be either (in most of the cases). The reason behind this is that every prediction has an error percentage%, this error value grows exponentially at each new prediction step.
A deeper neural networks
It’s generally true that the depth/size of a neural network can improve (or degrade) the results. Until now I have always been using pretty shallow neural networks with just one or two hidden layers, and a handful of neurons per layer. But what would the results be like if I used a deeper neural network, for instance three to six hidden layers? I am not going to go very deep into deeper neural networks (DNNs), simply because the results are too “deep” to understand at this point. However I would like to share some cool findings. In the next few examples I trained DNNs and let it predict 16-step intervals, in the hope of finding something interesting.
Predictions from a deeper neural network (1)
Most results from our DNNs look way smoother than those from shallow NNs. But I also noticed that sometimes these DNNs produce very surprising and unexpected results. On the chart above we see how the system predicts a drop in price midway 17:00. Even though such as thing did not occur reality, it was still a fascinating anomaly.
Predictions from a deeper neural network (2)
Here’s another set of predictions, where at some point the system predicts the price to go up steadily in linear fashion, but then shortly before 17:00 it indicates a drop. If we compare this against how the price evolved in reality, we see something quite similar happening. The price did rise steadily until like 16:40 and then it dropped until 17:15 before going up again for a short period. In some way this can be seen in the predictions, but whether it’s the true meaning of these predictions is up for debate.
Predictions from a deeper neural network (3)
In the above it appears the system is anticipating for a huge drop midway 18:00 to 19:00. In reality no drop occurred in that range, except at 18:55.
Predictions from a deeper neural network (4)
I followed the previous prediction, and a few steps later it still kept anticipating for this huge drop. But now this drop has shifted closer to 19:00. And in reality there was indeed a drop in price, followed by a steady increase right afterwards, at 18:55 that is. So whether the system was really predicting this drop or not remains unclear, but it’s definitely surprising to see that manifest!
Predictions from a deeper neural network (5)
Above is another interesting version. In this case every prediction is “black” (i.e. red candlestick). I cannot explain why, but it does appear to make a good prediction of the price’s trend between 16:00 and 17:00 nonetheless.
Predictions from a deeper neural network (6)
Above is a region where the system did not anticipate a huge drop that is about to come next (at 02:10 or so).
Sometimes there are DNNs that just look weird to say the least (as the one below). Even though they look strange to us, they may contain valuable information that the A.I. system is trying to tell. We just need a better way of interpreting its output. | https://medium.com/swlh/forecasting-bitcoin-prices-in-the-short-term-f52deec61b97 | [] | 2018-03-20 20:22:04.741000+00:00 | ['Artificial Intelligence', 'Investing', 'Machine Learning', 'Cryptocurrency', 'Bitcoin'] |
The Masterpiece Submission Guidelines | Yes, you’ve come to a place where quality matters — not quantity. But don’t get this wrong, quality doesn’t mean that you have to write like the NYT articles.
We believe in simplicity and look for engaging, well-structured content that connects readers.
Make your stories simple but interesting and engaging.
We would be happy to publish and spread your masterpieces.
We publish the masterpieces on the following topics.
Happiness
Self-Improvement
Environment
Travel
Relationships
Mental Health
Motivation
Social Problems
Education, Reading, Writing
Satire, Humor
Country & Culture
Business & Marketing
Personal thoughts & experiences
We publish the masterpieces between 01–15 minutes read on the above topics.
We don’t publish the stories on the following topics.
Politics
Poetry
Technology
Listicles (5ways, 10ways, 12things, 15principles, etc.)
Quotes
Law & Legal Issues
Intricate Academic Writings
How to/how I make $$$…(articles)
Food review/product review
By submitting to The Masterpiece, you are complying with the following rules and guidelines.
1. Follow Medium Rules
Submissions must comply with Medium’s Rules, Ad-Free Policy, Content Guidelines, and Curation Guidelines.
2. Submit Unpublished Drafts
You must submit original and unpublished drafts. After publishing the story in The Masterpiece, you can republish or share it on your blog, LinkedIn, Twitter, or other platforms outside Medium.
3. Original Contents
Your stories must be original, engaging, and well-organized. We do not accept vague, unclear, or intricate ones. Make sure your contents are grammatical-error-free. We recommend you to use Grammarly to check your content beforehand.
Plagiarism will not be tolerated.
If you write poetry, make sure it’s not scattered with so many spaces. Divide it stanza-wise as you see in poetry books.
4. Call to Action (CTA)
A single text link inviting newsletter subscriptions, or the medium link of your other story is acceptable. Any types of CTAs or Sign-up forms are not allowed with the content.
5. Style Guide
Follow the below style guide while submitting your draft to The Masterpiece.
Titles and subtitles: No clickbait is allowed. Your story must have a precise title and a subtitle. Write your titles in title case and subtitles in sentences case/title case.
Feature image: Make sure you have a featured image (horizontal orientation) below the titles and subtitles. Keep the image in the following style aligning in the middle. Do not fill the screen with your featured image.
Featured image style
Images within the text: If necessary, use images within the text. In that case, follow the below ‘inline’ image format.
Images within the text style
Image credit: Cite the source and usage rights in the image caption. If the photo is taken or created by the author, mention it in the image caption. You may find copyright-free images on Unsplash, Pixabay, Pexels, etc.
Section headings: Keep all the section headings in sentence case. Do not mix up your section headings. Make your masterpieces well-structured and visually stunning.
6. Submission
Submit your final draft by clicking the “…” button near the top-right corner of the page. Then select “Add to publication” and choose “The Masterpiece”. Finally, click “Add draft” to submit your story for review.
7. What we change/edit(if necessary)
We may change the title, subtitle, images if we find the existing ones less-engaging or irrelevant. We will edit sentence structures and paragraphs, if they are too long and intricate.
Moreover, we will try our best to make sure that your writing is error-free and well-structured.
8. How long it will take to publish
Within 03(three) days, you will get feedback from us. If everything is okay, we will be happy to publish your masterpiece.
But if you do not hear anything within 03 days, you are free to submit the draft elsewhere.
We are accepting new writers.
To become a writer, please drop a response below, writing ‘I want to write for The Masterpiece’ and leave your Medium @username. For example, my username is ‘@mamun.here’ | https://medium.com/the-masterpiece/the-masterpiece-submission-requirements-5fdafb3a0446 | ['S M Mamunur Rahman'] | 2020-12-24 13:09:50.922000+00:00 | ['Publication', 'Reading', 'Writing', 'The Masterpiece', 'Submission'] |
How to Live a Regret-Free Life | Christina Pascucci: Why is death such a taboo topic? What’s the advantage of talking about it, and what should we be contemplating?
Bronnie Ware: We have created a society of denial. We subdue vulnerability and pretend everything is OK when everyone is suffering from the unrealistic expectation of perfection. We deny the state of our planet, the whole state of everything! So, of course we deny death, as it is the scariest thought of all. But it doesn’t have to be. Death is a guarantee and when you face that honestly you realize the sacredness of your time and find the courage to make loving, positive changes to your heart. Time is an undervalued but sacred resource. It cannot be replaced.
You say after talking to countless patients on their death beds, the greatest regret of the dying is they wish they lived a life true to themselves, rather than what others expected of them. Can you talk more about this, and how do we do this?
This subject came up time and again. People realized they had not brought enough consciousness and presence into the choices they made. Since your life is created by the decisions you make, this can result in dreams remaining unfulfilled and deep regret about not choosing differently. We are all individuals with unique yearnings and strengths. We are not meant to be alike but to encourage those unique strengths.
You didn’t really have experience as a caregiver when you were essentially thrown into it. Many people might think they’re not qualified or good enough, or think to themselves they’ll try later when they have more experience. What would you say to that?
Everyone has to start somewhere. We are all beginners at one time or another. But the only way to go from being a beginner to an experienced person is by having a go. It may mean you have to be vulnerable. You may even be judged as a fool for a while. But your life is your own. You either give people power through their judgments of you or you give yourself power by ignoring them and honoring your own heart and hunches.
When you trust in life’s possibilities rather than human-made rules, there really are very few limitations.
In your book, you talk extensively about kindness, forgiveness, and empathy. You also say you made excuses for people’s bad behavior. How do you show empathy and still hold people accountable for their actions?
It’s not up to any of us to hold anyone accountable. Life is the best teacher. No one knows what the other is here to learn or heal.
If you use their behavior as a teaching tool, and dissolve your ego and its need to be right or to make someone feel guilty, you actually set yourself free. It really does not matter who is right or wrong in the end. What matters is how many choices you made in kindness. The less energy wasted on unforgiveness, the more energy you create for joy.
You fell in love and became pregnant later in life. Many young professional women are choosing their careers over marrying and starting a family earlier on in life. How old were you when you became pregnant, and what’s your advice to those in their 30s and 40s who might be feeling the societal pressure?
I fell pregnant naturally and intentionally at 44, becoming a first-time mother at 45. We conceived the second month we tried. While many women are not blessed with such ease, many stop themselves even trying once they reach a certain age.
It is true that our bodies are healthier for pregnancy at a younger age. There is no denying that. My pregnancy triggered disease immediately following. Whether that would have happened anyway, years earlier, I cannot say. But my pregnancy was healthy and my baby was born very healthy. So while I don’t encourage leaving it too late, I do say to follow your heart on it. If I hadn’t, I wouldn’t know the love I now do for my gorgeous little girl!
Many women, especially as mothers, give so much. However, it can be tough for them to receive. You wrote: ‘Then not only are you blocking the natural flow of things to you and creating an imbalance, you are robbing someone else of the pleasure of giving.’ Talk more about that and how we can be better receivers?
By not receiving, you close yourself to life’s blessings, which are so often to be delivered through others. It also creates unbalance and is a way of trying to control life. That is one of the worst things you can do: to shut yourself off to life’s amazing and generous creativity because you don’t have the courage to receive. To live a full life means to allow others in, to celebrate connection, and be open to the flow of giving and receiving.
After helping so many, you went through your own depression in your 40s. You felt trapped and seriously contemplated suicide. What would have been a helpful approach from friends? How’d you get out of that rock bottom?
Loving patience and trust that I would work it out. An ear when I needed one but no lecturing when I didn’t.
I came back from rock bottom one step at a time. There is often a crucial turning point — sometimes obvious, sometimes not — where a glimpse of hope, light or strength feels different to the dark heaviness depression delivers. You hold onto that and every little blessing and insight that comes, and step-by-step it loses its power. It takes commitment, though, and a massive trust in life that such a time is a blessing in disguise. It certainly was for me. It helped me let go of so much of what was holding me back.
Many people reading this, myself included, have a loved one who is an addict. One of your patients, an alcoholic until her final moments, told you this:
“Not everyone wants to get well either Bronnie. And for a long time I didn’t. The role of the sick person gave me an identity. Obviously I was holding myself back from being a better person this way. But I was getting attention, and trying to fool myself into thinking this made me happier than being courageous and well.”
If we are struggling with addiction, or know someone who is, what’s the best way to react and foster positive change?
Gentleness, acceptance, non-judgmental kindness. Addiction is usually created from a lack of wholesome connections. That’s not a reflection of people who love someone with addiction. It’s a reflection of the addict’s ability to receive that connection. Positive connection and shared wholesome experiences can help immensely at times.
In your twenties you quit your banking job to work at a pub abroad. Do you think taking risks like that is critical to maximizing this thing called life?
Yes, absolutely. Staying in your comfort zone is avoiding reaching your full potential. Risks and contrast are both essential to show us what we’re really capable of. And while it can be terrifying sometimes, it also brings new levels of joy beyond it.
Your aim is to live regret-free. Do you have any regrets?
None. Not one. I’ve made a stack of mistakes and if I could go back and do it all again there are definitely things I would change. There are things I would have done differently. But I did the best I could as who I was at the time. So I look back to old parts of myself with compassion rather than judgment. This allows me to forgive my mistakes rather than give them the power of regret.
Having faced death and realized the sacredness of my time, I live a courageous life now, completely true to my heart regardless of how I am perceived by others or society. By bringing as much consciousness as possible to the decisions I make, I avoid regret because I am not living blindly. I am living with my eyes and heart wide open.
You’ve shared some of your biggest life lessons. What matters most?
Our lessons are given to us from a place of love, to bring us into our best self. Courage is always rewarded. The greatest appreciation we can show for our life is to enjoy it as fully as possible. | https://medium.com/wake-up-call/how-to-live-a-regret-free-life-d52c8c9e64bb | ['Christina Pascucci'] | 2019-11-21 10:01:01.218000+00:00 | ['Life Lessons', 'Wellness', 'Love', 'Caregiving'] |
Language | Language
Email Refrigerator :: 03
Hey friend,
One night this week, I was sitting on the couch holding my daughter, Golda, facing towards me. After a few minutes, she started to whine. We stood up and paced the living room. She immediately calmed down and scanned the room curiously.
I understand her, and yet she’s never spoken a word. Golda is nearly 18 weeks old, which is crazy: I’ve been in a relationship for almost 5 months with someone who doesn’t speak English or really even understand it.
Because of Golda, I’ve been thinking a lot about how we communicate. How we use our bodies, our expressions, and our language in different ways to shape the world around us and our perceptions of it.
This month, let’s talk languages.
Happy snacking.
Art by Wayne White
I. New Language
The first class I ever took in college was linguistics (after one semester I quickly learned why the 9am Monday classes were always available). One of the things that still sticks with me is the Sapir-Whorf hypothesis, the idea that language shapes thought. Changing our language, or not having a word for something, or not using a verb tense affects how we think.
Here are 3 very short stories from the last few years about how changing my language affected my thinking.
1. Contractor vs Freelancer
I started freelancing in the summer of 2016. The freelance life is supposed to come with more freedom, higher day rates, a flexible schedule, and an independent spirit. After 2 years of “freelancing,” I realized I didn’t have any of those. I felt tied down, because I wasn’t really freelancing. I was contracting. I still had a 40-hour-a-week job, just in 2–4 month contracts. So since last summer, I’ve tried to own being a freelancer: finding my own clients, working from home, and saying no to timelines, budgets, and feedback that didn’t work with my expectations. And it’s made all the difference.
2. Is it an Emergency?
Talking about my parenting approach to a friend, I said that I am more of a “put my oxygen mask on first” parent. I believe in self-care and getting sleep and exercise so I can be more present and energized. But she called me out: “that metaphor assumes an emergency. Is it an emergency? Are you in crisis? Try reframing that approach. I believe that you can only give from your overflow.”
I can only give from my overflow.
My cup needs to be overflowing before I can give my time, energy, love, attention to other people. This is not an emergency.
3. The Weakness of Strength
In January, I led a workshop on leadership. One of the key ideas in my research came from The School of Life, which coined “The Weakness of Strength” theory. It’s this idea that every person has strengths, and those traits have shadow sides.
So being a decisive leader might also mean you alienate your team because you don’t include them in decision-making. Or you fell in love with your spontaneous, adventurous partner, who now gets on your nerves for being uncommitted and bad at planning.
It’s such a helpful reframe for me in being aware of my strengths and their shadow sides with my partners at Caveday, with my marriage, and with my friends. What’s the shadow side of your greatest strength?
Art By Dan Ferrer
II. Foreign Language
Sometimes it feels like decisions about a job, a career path, or a college major are all permanent. Whatever you decide on, you MUST commit to. Forever. And that will be who you are. Forever. And you cannot do anything other than that. Forever.
Yeesh.
But this month, a little reminder about high school Spanish changed the way I thought about career transition. Check out the article I wrote here: https://medium.com/p/e0b9734eb38
Art by Magnus Atom
III. Language Paradoxes
At the beginning of this year, I set out my goals and realized that some of them felt in conflict.
How can I be a more present father WHILE taking on more work?
How can I plan my life and still leave room for flexibility when things change?
How can I choose my path and still trust the universe has my back?
I’m learning that we don’t have to think about some of these things as “either/or.” Life is not as simple as having mutually exclusive choices. Life is complicated. Life creates paradoxes. “Both/And.”
Having an argument is often an either/or. I am right and they are wrong. But is there a truth where both are right? Or both are partly right?
Being part of a community requires me to be both my independent self and an anonymous part of a group.
Two things can exist at the same time. Not as opposites, not as either/ors but as both/ands. Science and religion. Change and stasis. Love and fear. Choice and fate.
A lot of this thinking was clarified in reading Parker Palmer’s work (thanks to Casey Rosengren for the recommendation). In one of his talks, he asserts that there are five habits of the heart. But really, you only need to consider two:
Chutzpah and Humility.
Chutzpah is the audacity to believe that I have a voice that deserves to be heard and a right to speak it. And humility is the awareness that my truth is not complete and I need to listen openly and respectfully. That’s a pretty deep paradox.
We often try to oversimplify things into either/or because it’s easier. But it’s not how the world or life works. Things are complex and interconnected. Holding space for both/and requires patience and work.
Conflicts arise because of either/or mentalities. Resolutions come from an understanding of both/and. In a way, better understanding the paradoxes of life can lead to peace.
Art by Jean Bevier
IV. Fin
As always, thanks for opening the refrigerator and sharing your thoughts. If you get something out of it, feel free to share it with a friend. The only way this thing grows is when you tell someone else about it. Send them this link.
-Jake | https://medium.com/email-refrigerator/language-d6676d2d9fff | ['Jake Kahana'] | 2020-12-27 18:41:48.795000+00:00 | ['Strengths And Weaknesses', 'Language', 'Self Improvement', 'Self-awareness', 'Paradox'] |
Pros of Different Python String Formatting Methods | Now let’s see the pros of each method one by one.
%-format
%-format is very similar to the classic C printf() function, so it is easy to understand and easy to share with people who are new to Python.
The big advantage of %-format is that you can pass a tuple or list directly as the argument. This is very useful when you need to pass a long list of arguments, for instance when you read data from a database or spreadsheet and format a long string as XML or JSON.
replacement_str = '%s is %s'
arg_tuple = ('pi', 3.14)
print(replacement_str % arg_tuple)
# out:
pi is 3.14
Note: You can achieve the same with str.format() by unpacking the tuple/list with the * operator, e.g., *arg_tuple, which is not as intuitive or simple, IMHO.
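For comparison, here is a minimal sketch of that str.format() call, reusing arg_tuple from the snippet above (the * simply unpacks the tuple into separate arguments):
print('{} is {}'.format(*arg_tuple))
# out:
pi is 3.14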
It is also handy when you want to apply the same format to multiple rows of data, since you define the format only once.
str_format = '%-10s, %s'
headers = ('Result','Message')
row1 = ('Successful','abc')
row2 = ('Failed','efg')
print(str_format % headers)
print(str_format % row1)
print(str_format % row2)
# Out:
Result    , Message
Successful, abc
Failed    , efg
Note: You can achieve the same with str.format() (see the sketch below), but not with an f-string, because an f-string’s template is a literal that cannot be stored and reused.
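As a rough sketch of that str.format() equivalent, you can store the replacement fields in a string and reuse it, much like the %-format version above. This reuses headers, row1, and row2 from the snippet above; the name str_format2 is just illustrative, and '{:<10}' left-aligns in a 10-character field:
str_format2 = '{:<10}, {}'
print(str_format2.format(*headers))
print(str_format2.format(*row1))
print(str_format2.format(*row2))
# Out:
Result    , Message
Successful, abc
Failed    , efg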
str.format()
str.format() can avoid passing repeated arguments.
You can use the same position or argument name in replacement fields if the same argument is used more than once in the string.
print('{0} + {0} = {1}'.format('pi', 6.28))
# out:
pi + pi = 6.28
Or you can pass a single argument and access several of its attributes or items in the string.
tuple1=('pi', 3.14)
print('{v[0]} is {v[1]}'.format(v=tuple1))
# out:
pi is 3.14
Equivalent formatting:
print('{} is {}'.format(tuple1[0],tuple1[1]))
Another useful case for str.format() is nested arguments. You can also achieve this with %-format, but it looks nicer with str.format(), IMHO.
In this example, we convert a dict to a list of strings using {}-format, join them with '\n', and then pass the result as a nested argument to the main string, also in {}-format.
dict1 = {'k1': 1, 'k2': 'two'}
print('{}\n{}'.format(
    'Line 1',
    '\n'.join('{}: {}'.format(k, v) for k, v in dict1.items())
))
Out:
Line 1
k1: 1
k2: two
f-string
f-string, also called a formatted string literal, is pretty much identical to str.format(), except that you put the arguments directly inside the string. It has one unique capability, though: you can call an argument’s methods directly in the formatted string, while str.format() replacement fields can only access attributes or items. | https://peter-jp-xie.medium.com/pros-of-different-python-string-formatting-methods-318f1bdeca93 | ['Peter Xie'] | 2020-11-23 11:09:21.505000+00:00 | ['String Format', 'Python']
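To illustrate that last point, here is a minimal sketch of an f-string calling a method and applying a format spec on its arguments (the variable names here are just examples):
name = 'pi'
value = 3.14159
print(f'{name.upper()} is about {value:.2f}')
# out:
PI is about 3.14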